<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
We are looking for a highly skilled Data Engineer to join our team on a contract basis in Atlanta, Georgia. This role focuses on optimizing data processes and infrastructure, ensuring efficient data management and performance. The ideal candidate will possess expertise in modern data engineering tools and technologies.<br><br>Responsibilities:<br>• Optimize data indexing and address fragmentation issues to enhance system performance.<br>• Develop and maintain data pipelines using ETL processes to ensure accurate data transformation and integration.<br>• Utilize Apache Spark for scalable data processing and analytics.<br>• Implement and manage big data solutions with Apache Hadoop.<br>• Design and deploy real-time data streaming frameworks using Apache Kafka.<br>• Collaborate with cross-functional teams to identify and resolve data-related challenges.<br>• Monitor and improve system performance by analyzing data usage and storage trends.<br>• Write efficient code in Python to support data engineering tasks.<br>• Document processes and maintain clear records of data workflows and optimizations.<br>• Ensure data security and compliance with organizational standards.
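The index-and-fragmentation work named in this posting can be illustrated with a minimal, self-contained sketch. SQLite stands in for the production engine here, and the table, column, and index names are invented for illustration:

```python
import sqlite3

# In-memory SQLite database stands in for a production system.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)"
)
conn.executemany(
    "INSERT INTO events (user_id, payload) VALUES (?, ?)",
    [(i % 100, f"event-{i}") for i in range(1000)],
)

# Without an index, a filter on user_id forces a full-table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 7"
).fetchone()[3]

# Adding an index lets the planner seek straight to the matching rows.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 7"
).fetchone()[3]
```

On a production engine such as SQL Server or PostgreSQL the equivalent diagnosis uses that engine's own plan output, and periodically rebuilding a fragmented index restores the seek behavior shown here.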
<p>We are seeking a Data Engineer responsible for designing, building, and maintaining the architecture and infrastructure needed to collect, store, process, and analyze large datasets efficiently. This is a critical role supporting analytics and business intelligence teams by ensuring data is accessible, reliable, and ready for downstream use.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable data pipelines and architectures.</li><li>Develop processes for data collection, transformation, and storage to support analytics, reporting, and business intelligence.</li><li>Clean, organize, and validate large volumes of data from various sources.</li><li>Optimize data workflows for performance and reliability.</li><li>Collaborate with data scientists, analysts, and software engineers to meet business data needs.</li><li>Implement security, data governance, and compliance controls.</li><li>Troubleshoot data issues and ensure data consistency and quality.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Ann Arbor, Michigan, and contribute to the development of a modern, scalable data platform. In this role, you will focus on building efficient data pipelines, ensuring data quality, and enabling seamless integration across systems to support business analytics and decision-making. This position offers an exciting opportunity to work with cutting-edge technologies and play a key role in the transformation of our data environment.<br><br>Responsibilities:<br>• Design and implement robust data pipelines on Azure using tools such as Databricks, Spark, Delta Lake, and Airflow.<br>• Develop workflows to ingest and integrate data from diverse sources into Azure Data Lake.<br>• Build and maintain data transformation layers following the medallion architecture principles.<br>• Apply data quality checks, validation processes, and deduplication techniques to ensure accuracy and reliability.<br>• Create reusable and parameterized notebooks to streamline batch and streaming data processes.<br>• Optimize merge and update logic in Delta Lake by leveraging efficient partitioning strategies.<br>• Collaborate with business and application teams to understand and fulfill data integration requirements.<br>• Enable downstream integrations with APIs, Power BI dashboards, and reporting systems.<br>• Establish monitoring, logging, and data lineage tracking using tools like Unity Catalog and Azure Monitor.<br>• Participate in code reviews, agile development practices, and team design discussions.
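The Delta Lake merge-and-deduplication work described above follows a last-write-wins pattern that can be sketched in plain Python. The real implementation would be a Spark `MERGE INTO` against a Delta table; the record shape and key names here are invented for illustration:

```python
from datetime import datetime

def merge_latest(target, updates, key="order_id", ts="updated_at"):
    """Upsert `updates` into `target`, keeping the newest record per key --
    the last-write-wins semantics a Delta MERGE with a deduplicated
    source would give."""
    merged = {row[key]: row for row in target}
    for row in updates:
        current = merged.get(row[key])
        if current is None or row[ts] > current[ts]:
            merged[row[key]] = row
    return sorted(merged.values(), key=lambda r: r[key])

target = [
    {"order_id": 1, "status": "placed",  "updated_at": datetime(2024, 1, 1)},
    {"order_id": 2, "status": "shipped", "updated_at": datetime(2024, 1, 2)},
]
updates = [
    {"order_id": 2, "status": "delivered", "updated_at": datetime(2024, 1, 5)},
    {"order_id": 3, "status": "placed",    "updated_at": datetime(2024, 1, 4)},
]
result = merge_latest(target, updates)
```

In Delta Lake proper, partitioning the target on a column that appears in the merge condition lets the engine rewrite only the affected partitions, which is the optimization the responsibilities above allude to.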
<p>Robert Half is currently partnering with a well-established company in San Diego that is looking for a Senior Data Engineer experienced in BigQuery, DBT (Data Build Tool), and GCP. This is a full-time, permanent position that is 100% onsite in San Diego. We are looking for a Senior Data Engineer who is passionate about optimizing systems with advanced techniques in partitioning, indexing, and Google Sequences for efficient data processing. Must have experience in DBT!</p><p>Responsibilities:</p><ul><li>Design and implement scalable, high-performance data solutions on GCP.</li><li>Develop data pipelines, data warehouses, and data lakes using GCP services (BigQuery, DBT, etc.).</li><li>Build and maintain ETL/ELT pipelines to ingest, transform, and load data from various sources.</li><li>Ensure data quality, integrity, and security throughout the data lifecycle.</li><li>Design, develop, and implement a new version of a big data tool tailored to client requirements.</li><li>Leverage advanced expertise in DBT (Data Build Tool) and Google BigQuery to model and transform data pipelines.</li><li>Optimize systems with advanced techniques in partitioning, indexing, and Google Sequences for efficient data processing.</li><li>Collaborate cross-functionally with product and technical teams to align project deliverables with client goals.</li><li>Monitor, debug, and refine the performance of the big data tool throughout the development lifecycle.</li></ul><p><strong>Minimum Qualifications:</strong></p><ul><li>5+ years of experience in a data engineering role in GCP.</li><li>Proven experience in designing, building, and deploying data solutions on GCP.</li><li>Strong expertise in SQL, data warehouse design, and data pipeline development.</li><li>Understanding of cloud architecture principles and best practices.</li><li>Proven experience with DBT, BigQuery, and other big data tools.</li><li>Advanced knowledge of partitioning, indexing, and Google Sequences
strategies.</li><li>Strong problem-solving skills with the ability to manage and troubleshoot complex systems.</li><li>Excellent written and verbal communication skills, including the ability to explain technical concepts to non-technical stakeholders.</li><li>Experience with Looker or other data visualization tools.</li></ul>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this long-term contract role, you will design and implement data solutions, ensuring efficient data processing and management. The ideal candidate will have expertise in handling large-scale data systems and a passion for optimizing workflows.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using modern tools and frameworks.<br>• Implement data transformation processes to ensure efficient storage and retrieval.<br>• Collaborate with cross-functional teams to design and optimize data architecture.<br>• Utilize Apache Spark and Python to process and analyze large datasets.<br>• Manage and monitor data workflows, ensuring high performance and reliability.<br>• Integrate and maintain ETL processes to streamline data operations.<br>• Work with Apache Kafka and Hadoop to enhance system capabilities.<br>• Troubleshoot and resolve issues related to data systems and workflows.<br>• Ensure data security and compliance with industry standards.<br>• Document processes and provide technical support to stakeholders.
<p>We are on the lookout for a Data Engineer in Basking Ridge, New Jersey. (1-2 days a week on-site*) In this role, you will be required to develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also be expected to take a hands-on approach to application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS Technologies.</p><p>• Work extensively with the Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize Cloud Technologies and Data Visualization tools to enhance business intelligence.</p><p>• Work with various types of data including Clinical Trials Data, Genomics and Biomarker Data, Real World Data, and Discovery Data.</p><p>• Maintain familiarity with key industry best practices in a regulated “GXP” environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
<p>We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. The ideal candidate will play a key role in designing, implementing, and maintaining data applications while ensuring alignment with organizational data standards. This position requires expertise in handling large-scale data processing and a collaborative approach to problem-solving.</p><p><br></p><p>Responsibilities:</p><p>• Collaborate with teams to design and implement applications utilizing both established and emerging technology platforms.</p><p>• Ensure all applications adhere to organizational data management standards.</p><p>• Develop and optimize queries, stored procedures, and reports using SQL Server to address user requests.</p><p>• Work closely with team members to monitor application performance and ensure quality.</p><p>• Communicate effectively with users and management to resolve issues and provide updates.</p><p>• Create and maintain technical documentation and application procedures.</p><p>• Ensure compliance with change management and security protocols.</p>
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
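A toy version of the schema, trigger, and integrity work this role covers, assuming SQLite purely for portability (the table and trigger names are illustrative, not from the posting):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE accounts (
    id      INTEGER PRIMARY KEY,
    balance REAL NOT NULL CHECK (balance >= 0)  -- integrity enforced in-schema
);
CREATE TABLE audit_log (
    account_id  INTEGER,
    old_balance REAL,
    new_balance REAL
);
-- A trigger keeps an audit trail in step with every update, one common
-- piece of the stored-procedure/trigger work a database engineer owns.
CREATE TRIGGER trg_accounts_audit
AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
END;
""")
conn.execute("INSERT INTO accounts (id, balance) VALUES (1, 100.0)")
conn.execute("UPDATE accounts SET balance = 75.0 WHERE id = 1")
rows = conn.execute("SELECT * FROM audit_log").fetchall()
```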
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Cleveland, Ohio. This long-term contract position offers the opportunity to contribute to the development and optimization of data platforms, with a primary focus on Snowflake and Apache Airflow technologies. You will play a key role in ensuring efficient data management and processing to support critical business needs.<br><br>Responsibilities:<br>• Design, develop, and maintain data pipelines using Snowflake and Apache Airflow.<br>• Collaborate with cross-functional teams to implement scalable data solutions.<br>• Optimize data processing workflows to ensure high performance and reliability.<br>• Monitor and troubleshoot issues within the Snowflake data platform.<br>• Develop ETL processes to support data integration and transformation.<br>• Work with tools such as Apache Spark, Hadoop, and Kafka to manage large-scale data operations.<br>• Implement robust data warehousing strategies to support business intelligence initiatives.<br>• Analyze and resolve data-related technical challenges promptly.<br>• Provide support and guidance during Snowflake deployments across subsidiaries.<br>• Document processes and ensure best practices for data engineering are followed.
<p><strong>Senior Data Engineer</strong></p><p><strong>Location:</strong> Calabasas, CA (Fully Remote if outside 50 miles)</p><p><strong>Compensation:</strong> $140K–$160K</p><p><strong>Reports to:</strong> Director of Data Engineering</p><p>Our entertainment client is seeking a <strong>Senior Data Engineer</strong> to design, build, and optimize enterprise data pipelines and cloud infrastructure. This hands-on role focuses on implementing scalable data architectures, developing automation, and driving modern data engineering best practices across the company.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and maintain ELT/ETL pipelines in Snowflake, Databricks, and AWS.</li><li>Build and orchestrate workflows using Python, SQL, Airflow, and dbt.</li><li>Implement medallion/lakehouse architectures and event-driven pipelines.</li><li>Manage AWS services (Lambda, EC2, S3, Glue) and infrastructure-as-code (Terraform).</li><li>Optimize data performance, quality, and governance across systems.</li></ul><p>For immediate consideration, direct message Reid Gormly on LinkedIn and apply now!</p>
<p><strong><u>Data Engineer</u></strong></p><p><strong>Onsite 4x week in El Segundo</strong></p><p><strong>$130K - $160K + benefits</strong></p><p>We are looking for an experienced Data Engineer to join our dynamic team in El Segundo, California. In this role, you will play a key part in designing, developing, and optimizing data pipelines and architectures to support business operations and analytics. This position offers the opportunity to work on cutting-edge technologies, including AI and machine learning applications.</p><p><br></p><p>Responsibilities:</p><p>• Develop, test, and maintain scalable data pipelines and architectures to support business intelligence and analytics needs.</p><p>• Collaborate with cross-functional teams to integrate data from diverse sources, including D365 Commerce and Adobe Experience Platform.</p><p>• Utilize Python, PySpark, and Azure data services to transform and orchestrate datasets.</p><p>• Implement and manage Kafka-based systems for real-time data streaming.</p><p>• Ensure compliance with data governance, security, and privacy standards.</p><p>• Optimize data storage solutions, leveraging medallion architecture and modern data modeling practices.</p><p>• Prepare datasets for AI/ML applications and advanced analytical models.</p><p>• Monitor, troubleshoot, and improve the performance of data systems.</p><p>• Design semantic models and dashboards using Power BI to support decision-making.</p><p>• Stay updated on emerging technologies and best practices in data engineering.</p>
We are looking for a highly skilled Senior Data Engineer to join our team on a long-term contract basis. In this role, you will design and implement robust data pipelines and architectures to support data-driven decision-making across the organization. You will work closely with cross-functional teams to deliver scalable, secure, and high-performance data solutions using cutting-edge tools and technologies. This position is based in Dallas, Texas.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using tools like Apache Airflow, NiFi, and Databricks to streamline data ingestion and transformation.<br>• Implement and manage real-time data streaming solutions utilizing Apache Kafka and Flink.<br>• Optimize and oversee data storage systems with technologies such as Hadoop and Amazon S3 to ensure efficiency and scalability.<br>• Establish and enforce data governance, quality, and security protocols through best practices and monitoring systems.<br>• Manage complex workflows and processes across hybrid and multi-cloud environments.<br>• Work with diverse data formats, including Parquet and Avro, to enhance data accessibility and integration.<br>• Troubleshoot and fine-tune distributed data systems to maximize performance and reliability.<br>• Mentor and guide engineers at the beginning of their careers to promote a culture of collaboration and technical excellence.
<p>We're seeking a Data Engineer to design, build, and optimize data pipelines supporting advanced analytics, AI, and machine learning initiatives for a client in Watertown, MA. The role combines technical expertise with collaboration across IT and business teams to deliver scalable, secure, and impactful data solutions, and is onsite 4 days a week in Watertown.</p><p><strong>Responsibilities</strong></p><ul><li>Develop and manage data pipelines using Python, SQL Server, Snowflake, and Azure Data Factory.</li><li>Optimize T-SQL code and database structures for performance and reliability.</li><li>Integrate analytics and machine learning outputs into business workflows.</li><li>Ensure compliance with data governance and security standards.</li><li>Collaborate with stakeholders to refine requirements and deliver solutions.</li><li>Support BI tools such as Power BI, Tableau, and SQL Server BI stack.</li></ul><p><br></p>
We are looking for an experienced Data Engineer to join our team in Raleigh, North Carolina. In this position, you will play a pivotal role in designing, developing, and managing data solutions that support organizational decision-making. The ideal candidate will have a strong background in data engineering and a desire to grow into Data Architecture responsibilities.<br><br>Responsibilities:<br>• Design and implement data pipelines to extract sales data and integrate it into Snowflake using Snowpipe.<br>• Collaborate with executive leadership and vendors to ensure data solutions align with business objectives.<br>• Develop and maintain APIs to connect various systems and streamline data flow.<br>• Optimize data security and access controls within Snowflake, ensuring compliance with organizational standards.<br>• Participate in data modeling updates, focusing on improving principles, solutions, and methodologies.<br>• Work closely with a small team of data analysts, engineers, and managers to address diverse engineering challenges.<br>• Monitor and enhance data pipeline performance through testing, governance, and continuous delivery.<br>• Provide input on enterprise data architecture strategies, including logical, conceptual, and physical data models.
<p>We are looking for a highly skilled Data Engineering and Software Engineering professional to design, build, and optimize our Data Lake and Data Processing platform on AWS. This role requires deep expertise in data architecture, cloud computing, and software development, as well as the ability to define and implement strategies for deployment, testing, and production workflows.</p><p><br></p><p>Key Responsibilities:</p><ul><li>Design and develop a scalable Data Lake and data processing platform from the ground up on AWS.</li><li>Lead decision-making and provide guidance on code deployment, testing strategies, and production environment workflows.</li><li>Define the roadmap for Data Lake development, ensuring efficient data storage and processing.</li><li>Oversee S3 data storage, Delta.io for change data capture, and AWS data processing services.</li><li>Work with Python and PySpark to process large-scale data efficiently.</li><li>Implement and manage Lambda, Glue, Kafka, and Firehose for seamless data integration and processing.</li><li>Collaborate with stakeholders to align technical strategies with business objectives, while maintaining a hands-on engineering focus.</li><li>Drive innovation and cost optimization in data architecture and cloud infrastructure.</li><li>Provide expertise in data warehousing and transitioning into modern AWS-based data processing practices.</li></ul>
We are looking for a skilled Data Engineer to design, develop, and maintain data systems that support critical business operations and analytics. This role requires collaboration with various teams to ensure data integrity, reliability, and scalability. You will play a pivotal role in optimizing data workflows and implementing robust solutions to meet organizational needs.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines to integrate information from various sources.<br>• Create and manage data models, tables, and warehouses using platforms such as Redshift, BigQuery, or Snowflake.<br>• Implement automated workflows and data quality checks to guarantee accurate and consistent datasets.<br>• Collaborate with cross-functional teams to understand and address their data requirements.<br>• Enhance query efficiency by applying techniques such as indexing, partitioning, and dataset restructuring.<br>• Ensure adherence to best practices in data governance, security, and comprehensive documentation.<br>• Troubleshoot and resolve issues related to data pipelines and infrastructure.<br>• Incorporate tools like Airflow, dbt, or Kafka to streamline data processing and management.<br>• Support real-time data processing and streaming initiatives.<br>• Facilitate the integration of DevOps tools like Docker, Terraform, or CI/CD pipelines into data systems.
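Partitioning, one of the query-efficiency techniques listed above, can be sketched in plain Python: rows are bucketed by a partition key at write time so a filtered read scans only one bucket instead of the whole table. Warehouses such as Redshift, BigQuery, and Snowflake implement this natively; the record layout here is invented for illustration:

```python
from collections import defaultdict
from datetime import date

# Write path: bucket rows by the partition key (event date), the way a
# date-partitioned table stores them.
partitions = defaultdict(list)
rows = [
    {"day": date(2024, 3, 1), "amount": 10},
    {"day": date(2024, 3, 1), "amount": 25},
    {"day": date(2024, 3, 2), "amount": 40},
    {"day": date(2024, 3, 3), "amount": 5},
]
for row in rows:
    partitions[row["day"]].append(row)

def total_for_day(day):
    """Read path: partition pruning -- only the one matching bucket is
    scanned, not the whole table."""
    return sum(r["amount"] for r in partitions.get(day, []))

march_first = total_for_day(date(2024, 3, 1))
```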
We are looking for a skilled Data Engineer to join our team in Johnson City, Texas. In this role, you will design and optimize data solutions to enable seamless data transfer and management in Snowflake. You will work collaboratively with cross-functional teams to enhance data accessibility and support data-driven decision-making across the organization.<br><br>Responsibilities:<br>• Design, develop, and implement ETL solutions to facilitate data transfer between diverse sources and Snowflake.<br>• Optimize the performance of Snowflake databases by constructing efficient data structures and utilizing indexes.<br>• Develop and maintain automated, scalable data pipelines within the Snowflake environment.<br>• Deploy and configure monitoring tools to ensure optimal performance of the Snowflake platform.<br>• Collaborate with product managers and agile teams to refine requirements and deliver solutions.<br>• Create integrations to accommodate growing data volume and complexity.<br>• Enhance data models to improve accessibility for business intelligence tools.<br>• Implement systems to ensure data quality and availability for stakeholders.<br>• Write unit and integration tests while documenting technical work.<br>• Automate testing and deployment processes in Snowflake within Azure.
<p>Robert Half is hiring a highly skilled and innovative Intelligent Automation Engineer to design, develop, and deploy advanced automation solutions using Microsoft Power Automate, Python, and AI technologies. This role is ideal for a hands-on technologist passionate about streamlining business processes, integrating systems, and applying cutting-edge AI to drive intelligent decision-making. This role is a hybrid position based in Philadelphia. For consideration, please apply directly. </p><p><br></p><p>Key Responsibilities</p><ul><li>Design and implement end-to-end automation workflows using Microsoft Power Automate (Cloud & Desktop).</li><li>Develop Python scripts and APIs to support automation, system integration, and data pipeline management.</li><li>Integrate Power Automate with Azure services (Logic Apps, Functions, AI Services, App Insights) and enterprise platforms such as SharePoint, Dynamics 365, and Microsoft Teams.</li><li>Apply Generative AI, LLMs, and Conversational AI to enhance automation with intelligent, context-aware interactions.</li><li>Leverage Agentic AI frameworks (LangChain, AutoGen, CrewAI, OpenAI Function Calling) to build dynamic, adaptive automation solutions.</li></ul>
We are looking for an experienced Data Engineer to take a leading role in optimizing and transforming our data architecture. This position will focus on enhancing performance, scalability, and analytical capabilities within our systems. The ideal candidate will have a strong technical background and the ability to mentor teams while delivering innovative solutions.<br><br>Responsibilities:<br>• Redesign the existing PostgreSQL database architecture to support modern analytical models and improve overall system performance.<br>• Optimize schemas by balancing normalization with denormalization techniques to achieve faster analytical reads.<br>• Implement advanced strategies such as indexing, partitioning, caching, and replication to ensure high-throughput and low-latency data delivery.<br>• Develop and maintain scalable data pipelines to guarantee accurate and timely data availability across distributed systems.<br>• Collaborate with engineering teams to provide guidance on data modeling, query optimization, and architectural best practices.<br>• Monitor and troubleshoot database performance issues, ensuring solutions are implemented effectively.<br>• Lead efforts to enhance the reliability and scalability of data infrastructure to support future growth.<br>• Serve as a technical mentor to team members, sharing expertise in database architecture and performance tuning.<br>• Partner with stakeholders to understand data requirements and deliver solutions that meet business needs.
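The normalization-versus-denormalization balance mentioned above, in miniature (SQLite stands in for PostgreSQL, and the schema is invented): the normalized form requires a join at read time, while a denormalized copy trades duplicated storage for a single-table analytical read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Normalized: the customer name lives in one place; reads need a join.
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
INSERT INTO customers VALUES (1, 'Acme');
INSERT INTO orders VALUES (10, 1, 99.5), (11, 1, 12.0);

-- Denormalized copy for analytical reads: the name is duplicated onto
-- each order row so aggregations skip the join.
CREATE TABLE orders_wide AS
SELECT o.id, c.name AS customer_name, o.total
FROM orders o JOIN customers c ON c.id = o.customer_id;
""")
joined = conn.execute("""
    SELECT c.name, SUM(o.total) FROM orders o
    JOIN customers c ON c.id = o.customer_id GROUP BY c.name
""").fetchone()
wide = conn.execute(
    "SELECT customer_name, SUM(total) FROM orders_wide GROUP BY customer_name"
).fetchone()
```

Both queries return the same answer; the denormalized table simply pays for its faster read with extra storage and with write-side logic to keep the copy in sync.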
<p><strong>Position: Databricks Data Engineer</strong></p><p><strong>Location:</strong> Remote (U.S. based) — Preference for candidates in or willing to relocate to <strong>Washington, DC</strong> or <strong>Indianapolis, IN</strong> for periodic on-site support</p><p><strong>Citizenship Requirement:</strong> U.S. Citizen</p><p><br></p><p><strong>Role Summary:</strong></p><p>Seeking a Databricks Data Engineer to develop and support data pipelines and analytics environments within an Azure cloud-based data lake. This role translates business requirements into scalable data engineering solutions and supports ongoing ETL operations with a focus on data quality and management.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and optimize scalable data solutions using <strong>Databricks</strong> and <strong>Medallion Architecture</strong>.</li><li>Develop ingestion routines for multi-terabyte datasets across multiple projects and Databricks workspaces.</li><li>Integrate structured and unstructured data sources to enable high-quality business insights.</li><li>Apply data analysis techniques to extract insights from large datasets.</li><li>Implement data management strategies to ensure data integrity, availability, and accessibility.</li><li>Identify and execute cost optimization strategies in data storage, processing, and analytics.</li><li>Monitor and respond to user requests, addressing performance issues, cluster stability, Spark optimization, and configuration management.</li><li>Collaborate with cross-functional teams to support AI-driven analytics and data science workflows.</li><li>Integrate with Azure services including Azure Functions, Storage Services, Data Factory, Log Analytics, and User Management.</li><li>Provision and manage infrastructure using <strong>Infrastructure-as-Code (IaC)</strong>.</li><li>Apply best practices for <strong>data security</strong>, <strong>governance</strong>, and 
<strong>compliance</strong>, supporting federal regulations and public trust standards.</li><li>Work closely with technical and non-technical teams to gather requirements and translate business needs into data solutions.</li></ul><p><strong>Preferred Experience:</strong></p><ul><li>Hands-on experience with the above Azure services.</li><li>Strong foundation in <strong>advanced AI technologies</strong>.</li><li>Experience with <strong>Databricks</strong>, <strong>Spark</strong>, and <strong>Python</strong>.</li><li>Familiarity with <strong>.NET</strong> is a plus.</li></ul>
<p><strong>Position: Data Engineer</strong></p><p><strong>Location: Des Moines, IA - HYBRID</strong></p><p><strong>Salary: up to $130K, permanent position, plus exceptional benefits</strong></p><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***</strong></p><p> </p><p>Our client is one of the best employers in town. Come join this successful organization with smart, talented, results-oriented team members. You will find that passion in your career again, working together with some of the best in the business. </p><p> </p><p>Are you an experienced Senior Data Engineer seeking a new adventure that entails enhancing data reliability and quality for an industry leader? Look no further! Our client has a robust data and reporting team and needs you to bolster their data warehouse and data solutions and facilitate data extraction, transformation, and reporting.</p><p> </p><p>Key Responsibilities:</p><ul><li>Create and maintain data architecture and data models for efficient information storage and retrieval.</li><li>Ensure rigorous data collection from various sources and storage in a centralized location, such as a data warehouse.</li><li>Design and implement data pipelines for ETL using tools like SSIS and Azure Data Factory.</li><li>Monitor data performance and troubleshoot any issues in the data pipeline.</li><li>Collaborate with development teams to track work progress and ensure timely completion of tasks.</li><li>Implement data validation and cleansing processes to ensure data quality and accuracy.</li><li>Optimize performance to ensure efficient execution of data queries and reports.</li><li>Uphold data security by storing data securely and restricting access to sensitive data to authorized users only.</li></ul><p>Qualifications:</p><ul><li>A 4-year degree related to computer science or equivalent work 
experience.</li><li>At least 5 years of professional experience.</li><li>Strong SQL Server and relational database experience.</li><li>Proficiency in SSIS and SSRS.</li><li>.Net experience is a plus.</li></ul><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or apply with one click on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. *** </strong></p><p> </p>
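<p>A core step in the ETL work described above is merging incoming source rows into warehouse tables — in practice done with an SSIS merge component or a T-SQL <code>MERGE</code> statement. Purely as an illustrative sketch (the <code>upsert</code> helper and its field names are hypothetical, not a real API), the update-or-insert logic looks like this:</p>

```python
# Illustrative sketch of an ETL upsert (merge) step: rows whose key already
# exists are updated in place, new keys are inserted. Names are hypothetical.

def upsert(target, updates, key="id"):
    """Merge `updates` into `target`: update matching keys, insert new ones."""
    merged = {row[key]: dict(row) for row in target}
    for row in updates:
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return list(merged.values())

current = [{"id": 1, "name": "Acme", "city": "Des Moines"}]
incoming = [{"id": 1, "city": "Ames"},                      # update existing row
            {"id": 2, "name": "Globex", "city": "Ankeny"}]  # insert new row
print(upsert(current, incoming))
```

<p>Validation and cleansing rules would typically run on <code>incoming</code> before the merge, so rejected rows never reach the warehouse.</p>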
We are looking for a skilled Data Engineer to join our team in San Antonio, Texas. This role offers an opportunity to design, develop, and optimize data solutions that support business operations and strategic decision-making. The ideal candidate will possess a strong technical background, excellent problem-solving skills, and the ability to collaborate effectively across departments.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines using Azure Synapse Analytics, Microsoft Fabric, and Azure Data Factory.<br>• Implement advanced data modeling techniques and design scalable BI solutions that align with business objectives.<br>• Create and maintain dashboards and reports using Power BI, ensuring data accuracy and usability.<br>• Integrate data from various sources, including APIs and Dataverse, into Azure Data Lake Storage Gen2.<br>• Utilize tools like Delta Lake and Parquet to manage and structure data within a lakehouse architecture.<br>• Define and implement BI governance frameworks to ensure consistent data standards and practices.<br>• Collaborate with cross-functional teams such as Operations, Sales, Engineering, and Accounting to gather requirements and deliver actionable insights.<br>• Troubleshoot, document, and resolve data issues independently while driving continuous improvement initiatives.<br>• Lead or contribute to Agile/Scrum-based projects to deliver high-quality data solutions within deadlines.<br>• Stay updated on emerging technologies and trends to enhance data engineering practices.
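<p>The lakehouse work mentioned above (Delta Lake and Parquet in Azure Data Lake Storage Gen2) usually organizes files with Hive-style date partitions so queries can prune by date. As a small sketch of that layout convention — the <code>lake_path</code> helper and dataset name are illustrative, not part of any Azure API:</p>

```python
from datetime import date

def lake_path(dataset, event_date):
    """Build a Hive-style partition path (year=/month=/day=) for a lake dataset."""
    return (f"{dataset}/year={event_date.year}"
            f"/month={event_date.month:02d}/day={event_date.day:02d}")

print(lake_path("sales_orders", date(2024, 3, 7)))
# sales_orders/year=2024/month=03/day=07
```

<p>Engines such as Spark and Synapse recognize these <code>key=value</code> folder names as partition columns, which is what makes date-filtered reads cheap.</p>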