<p><strong>Data Engineer (Contract) – St. Louis, MO</strong></p><p><strong>Overview:</strong></p><p>Our company is seeking an experienced Data Engineer to join our team for a contract engagement in St. Louis, MO. As a Data Engineer, you will play a critical role in designing, building, and maintaining robust data pipelines and architectures to support advanced analytics and business intelligence initiatives.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Develop, construct, test, and maintain scalable data architectures and data pipelines.</li><li>Integrate data from diverse sources (structured and unstructured) for analytics and reporting.</li><li>Optimize database performance and ensure data quality.</li><li>Implement data security, privacy standards, and compliance protocols.</li><li>Collaborate with data scientists, analysts, and business stakeholders to gather requirements and deliver effective data solutions.</li><li>Troubleshoot and resolve data-related issues and bottlenecks.</li><li>Automate data ingestion and transformation processes.</li><li>Support ongoing data management, documentation, and best practices.</li></ul>
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
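<p>To illustrate the MLflow-based model serving named above, here is a minimal sketch of logging a model and reloading it for scoring. The experiment path, features, and data are hypothetical illustrations, not details of this role.</p><pre><code># Minimal MLflow sketch: log a scikit-learn model, then reload it for scoring.
# Experiment path, features, and data are illustrative assumptions.
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.linear_model import LogisticRegression

mlflow.set_experiment("/Shared/crm-churn-demo")  # hypothetical experiment path

train = pd.DataFrame({
    "touches": [1, 5, 2, 8, 3, 9],
    "tenure_months": [2, 24, 6, 36, 12, 48],
    "churned": [1, 0, 1, 0, 1, 0],
})

with mlflow.start_run() as run:
    model = LogisticRegression().fit(train[["touches", "tenure_months"]], train["churned"])
    mlflow.sklearn.log_model(model, artifact_path="model")

# Reload the logged model by run URI and score a new CRM record.
loaded = mlflow.sklearn.load_model(f"runs:/{run.info.run_id}/model")
print(loaded.predict(pd.DataFrame({"touches": [4], "tenure_months": [18]})))
</code></pre>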
We are looking for a skilled Data Engineer to support our organization's data initiatives in Savannah, Georgia. This contract-to-permanent role focuses on managing, optimizing, and securing data systems to drive strategic decision-making and improve overall performance. The ideal candidate will work closely with technology teams, analytics departments, and business stakeholders to ensure seamless data integration, accuracy, and scalability.<br><br>Responsibilities:<br>• Design and implement robust data lake and warehouse architectures to support organizational needs.<br>• Develop efficient ETL pipelines to process and integrate data from multiple sources.<br>• Collaborate with analytics teams to create and refine data models for reporting and visualization.<br>• Monitor and maintain data systems to ensure quality, security, and availability.<br>• Troubleshoot data-related issues and perform in-depth analyses to identify solutions.<br>• Define and manage organizational data assets, including SaaS tools and platforms.<br>• Partner with IT and security teams to meet compliance and governance standards.<br>• Document workflows, pipelines, and architecture for knowledge sharing and long-term use.<br>• Translate business requirements into technical solutions that meet reporting and analytics needs.<br>• Provide guidance and mentorship to team members on data usage and best practices.
<p>Robert Half is seeking a <strong>Contract Data Engineer</strong> to support our client’s data and analytics initiatives. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that enable efficient data ingestion, transformation, and delivery. The ideal candidate has strong experience working with modern data platforms, cloud environments, and large-scale datasets.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li><strong>Data Pipeline Development:</strong> Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and deliver data from multiple sources.</li><li><strong>Data Architecture:</strong> Develop and optimize data models, schemas, and warehouse structures to support analytics, reporting, and business intelligence needs.</li><li><strong>Cloud Data Platforms:</strong> Work within cloud environments such as <strong>AWS, Azure, or GCP</strong> to deploy and manage data solutions.</li><li><strong>Data Warehousing:</strong> Design and support enterprise data warehouses using platforms such as <strong>Snowflake, Redshift, BigQuery, or Azure Synapse</strong>.</li><li><strong>Big Data Processing:</strong> Develop solutions using big data technologies such as <strong>Spark, Databricks, Kafka, and Hadoop</strong> when required.</li><li><strong>Performance Optimization:</strong> Tune queries, pipelines, and storage solutions for performance, scalability, and cost efficiency.</li><li><strong>Data Quality & Reliability:</strong> Implement monitoring, validation, and alerting processes to ensure data accuracy, integrity, and availability.</li><li><strong>Collaboration:</strong> Work closely with Data Analysts, Data Scientists, Software Engineers, and business stakeholders to understand requirements and deliver data solutions.</li><li><strong>Documentation:</strong> Maintain detailed documentation for pipelines, data flows, and system architecture.</li></ul><p><br></p>
<p>We are looking for a skilled Data Engineer to join a dynamic healthcare organization in the DFW area. In this role, you will play a pivotal part in transforming data operations by building foundational systems for predictive analytics and strategic insights. This position offers a unique opportunity to design and implement data solutions that directly impact clinical and financial decision-making processes.</p><p><br></p><p>Responsibilities:</p><p>• Develop and manage data pipelines and storage systems to ensure seamless reporting and data accessibility.</p><p>• Design forecasting models and predictive analytics to transform retrospective data into actionable insights.</p><p>• Create and maintain dashboards and visualizations that present clinical and financial performance metrics clearly and effectively.</p><p>• Collaborate with leadership to identify strategic metrics and deliver data-driven insights that influence organizational decisions.</p><p>• Act as the subject matter expert for data architecture and analytics, providing guidance and best practices.</p><p>• Implement and optimize ETL processes to streamline data integration and transformation.</p><p>• Utilize tools such as Apache Spark, Python, and Apache Hadoop to develop robust data solutions.</p><p>• Ensure data integrity and accuracy across all reporting and analytics platforms.</p><p>• Leverage BI platforms, with a preference for Power BI, to enhance data visualization and reporting.</p><p>• Monitor and troubleshoot data systems to maintain efficiency and performance.</p>
<p>We are looking for an experienced Data Engineer to join our team. This role involves working on a high-priority cloud migration project within a dynamic business unit. If you have a strong background in data engineering and are ready to contribute to an impactful initiative, we encourage you to apply.</p><p><br></p><p>Responsibilities:</p><p>• Develop and optimize data pipelines to support seamless migration to a cloud-based platform.</p><p>• Collaborate closely with the data analytics leader to align project objectives and deliverables.</p><p>• Utilize Python and Google Cloud Platform to create efficient and scalable data solutions.</p><p>• Integrate data from various SaaS applications into the cloud environment.</p><p>• Address challenges and uncertainties by designing innovative solutions during the early stages of data modernization.</p><p>• Ensure the accuracy and reliability of data ingestion processes across multiple groups.</p><p>• Monitor and maintain the performance of data pipelines to meet business needs.</p><p>• Provide regular updates on project progress and identify areas for improvement.</p><p>• Work independently while adhering to project timelines and requirements.</p>
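<p>For the GCP ingestion work described above, a minimal sketch might look like the following, assuming BigQuery as the target warehouse; the project, dataset, table, and bucket names are hypothetical.</p><pre><code># Hypothetical sketch: load a CSV export from a SaaS app (staged in GCS) into BigQuery.
# Project, dataset, table, and bucket names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # uses application-default credentials

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,                      # infer the schema from the file
    write_disposition="WRITE_TRUNCATE",   # replace the table on each run
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/exports/crm_accounts.csv",
    "example-project.raw_saas.crm_accounts",
    job_config=job_config,
)
load_job.result()  # block until the load completes
print(f"Loaded {client.get_table('example-project.raw_saas.crm_accounts').num_rows} rows")
</code></pre>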
We are looking for an experienced Data Engineer to join our team in Jacksonville, Florida. In this role, you will take the lead in designing and building a cutting-edge Azure lakehouse platform that enables business leaders to access analytics through natural language queries. This position combines hands-on technical expertise with leadership responsibilities, offering an opportunity to mentor a team of skilled engineers while driving innovation.<br><br>Responsibilities:<br>• Architect and develop a robust Azure lakehouse platform, utilizing Azure Data Lake Gen2, Delta Lake, and PySpark to create efficient data pipelines.<br>• Implement a semantic layer and metric store to ensure consistent data translation and definitions across the organization.<br>• Design and maintain real-time and batch data pipelines, incorporating medallion architecture, schema evolution, and data contracts.<br>• Build retrieval systems for large language models (LLMs) using Azure OpenAI and vectorized Delta tables to support chat-based analytics.<br>• Ensure data quality, lineage, and observability through tools like Great Expectations and Unity Catalog, while optimizing costs through partitioning and compaction.<br>• Develop automated systems for anomaly detection and alerting using Azure ML pipelines and Event Grid.<br>• Collaborate with product and operations teams to translate complex business questions into actionable data models and queries.<br>• Lead and mentor a team of data and Python engineers, establishing best practices in CI/CD, code reviews, and documentation.<br>• Ensure compliance with security, privacy, and governance standards by designing and implementing robust data handling protocols.
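<p>To make the medallion-style Delta work described above concrete, here is a minimal PySpark sketch of a bronze-to-silver promotion; the storage paths and column names are hypothetical, not details of this platform.</p><pre><code># Minimal medallion sketch: promote raw (bronze) events to a cleaned (silver) Delta table.
# Paths and columns are illustrative assumptions; assumes a Spark session with Delta support.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("abfss://lake@account.dfs.core.windows.net/bronze/events")

silver = (
    bronze
    .dropDuplicates(["event_id"])                     # idempotent re-runs
    .filter(F.col("event_ts").isNotNull())            # basic data contract: timestamp required
    .withColumn("event_date", F.to_date("event_ts"))  # partition column for pruning
)

(silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")                        # partitioning for scan and cost control
    .save("abfss://lake@account.dfs.core.windows.net/silver/events"))
</code></pre>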
<p>We are looking for an experienced Senior Data Engineer to join our team in Denver, Colorado. In this role, you will design and implement data solutions that drive business insights and operational efficiency. You will collaborate with cross-functional teams to manage data pipelines, optimize workflows, and ensure the integrity and security of data systems.</p><p><br></p><p>Responsibilities:</p><p>• Develop and maintain robust data pipelines to process and transform large datasets effectively.</p><p>• Advise on tools and technologies to implement.</p><p>• Collaborate with stakeholders to understand data requirements and translate them into technical solutions.</p><p>• Design and implement ETL processes to facilitate seamless data integration.</p><p>• Optimize data workflows and ensure system performance meets organizational needs.</p><p>• Work with Apache Spark, Hadoop, and Kafka to build scalable data systems.</p><p>• Create and maintain SQL queries for data extraction and analysis.</p><p>• Ensure data security and integrity by adhering to best practices.</p><p>• Troubleshoot and resolve issues in data systems to minimize downtime.</p><p>• Provide technical guidance and mentorship to less experienced team members.</p><p>• Stay updated on emerging technologies to enhance data engineering practices.</p>
<p>We are looking for an experienced Senior Data Engineer to join our team. This role involves designing and implementing scalable data solutions, optimizing data workflows, and driving innovation in data architecture. The ideal candidate will possess strong leadership qualities and a passion for problem-solving in a fast-paced, cutting-edge environment.</p><p><br></p><p>Responsibilities:</p><p>• Develop high-performance data systems, including databases, APIs, and data integration pipelines, to support scalable solutions.</p><p>• Design and implement metadata-driven architectures and automate deployment processes using infrastructure-as-code principles.</p><p>• Promote best practices in software engineering, such as code reviews, testing, and continuous integration/delivery (CI/CD).</p><p>• Establish and maintain a robust data governance framework to ensure compliance and data integrity.</p><p>• Monitor processes and implement improvements, including query optimization, code refactoring, and efficiency enhancements.</p><p>• Leverage cloud platforms, particularly Azure and Databricks, to improve system architecture and scalability.</p><p>• Conduct data quality checks and build procedures to address and resolve data issues effectively.</p><p>• Create and maintain documentation for data architecture, standards, and best practices.</p><p>• Provide technical leadership to the team, guiding design discussions and fostering innovation in data infrastructure.</p><p>• Identify and implement opportunities for process optimization and automation to improve operational efficiency.</p>
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
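<p>As an illustration of the indexing and query-tuning work described above, the following self-contained sketch uses SQLite (so it runs anywhere) to show how an index changes a query plan; the table and index are hypothetical, not this team's actual schema.</p><pre><code># Self-contained sketch of the indexing idea above, using SQLite for portability.
# The table and index are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 500, i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Without an index, the plan is a full table scan.
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())

# A covering index on (customer_id, total) lets the same query use an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id, total)")
print(conn.execute(f"EXPLAIN QUERY PLAN {query}", (42,)).fetchall())
</code></pre>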
<p>Position Overview</p><p>We are seeking a Data Engineer to support and enhance a Databricks-based data platform during its development phase. This role is focused on building reliable, scalable data solutions early in the lifecycle, not production firefighting.</p><p>The ideal candidate brings hands-on experience with Databricks, PySpark, Python, and a working understanding of Azure cloud services. You will partner closely with Data Engineering teams to ensure pipelines, notebooks, and workflows are designed for long-term scalability and production readiness.</p><p><br></p><p>Key Responsibilities</p><ul><li>Develop and enhance Databricks notebooks, jobs, and workflows</li><li>Write and optimize PySpark and Python code for distributed data processing</li><li>Assist in designing scalable and reliable data pipelines</li><li>Apply Spark performance best practices: partitioning, caching, joins, file sizing</li><li>Work with Delta Lake tables, schemas, and data models</li><li>Perform data validation and quality checks during development cycles</li><li>Support cluster configuration, sizing, and tuning for development workloads</li><li>Identify performance bottlenecks early and recommend improvements</li><li>Partner with Data Engineers to prepare solutions for future production rollout</li><li>Document development standards, patterns, and best practices</li></ul>
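<p>A minimal sketch of the Spark performance practices listed above (broadcast joins, selective caching, and controlling output file sizing via repartition); the data and paths are hypothetical.</p><pre><code># Sketch of the Spark tuning practices above: broadcast joins, caching, file sizing.
# Data and paths are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

facts = spark.range(1_000_000).withColumn("dim_id", (F.col("id") % 100).cast("int"))
dims = spark.createDataFrame([(i, f"name_{i}") for i in range(100)], ["dim_id", "name"])

# Broadcast the small dimension table to avoid shuffling the large side of the join.
joined = facts.join(broadcast(dims), "dim_id")

# Cache only when the result is reused by multiple downstream actions.
joined.cache()
print(joined.count())

# Repartition before writing to control file count and size in the target table.
joined.repartition(8).write.mode("overwrite").format("delta").save("/tmp/demo/joined")
</code></pre>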
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
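<p>To illustrate the Kafka-based real-time pipelines named above, here is a minimal Structured Streaming sketch; the broker address, topic, and checkpoint path are hypothetical assumptions.</p><pre><code># Minimal Structured Streaming sketch for a Kafka-to-Delta pipeline.
# Broker, topic, and paths are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "crm-events")                 # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload before transforming it.
parsed = events.select(F.col("value").cast("string").alias("payload"),
                       F.col("timestamp"))

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/crm-events")  # required for recovery
    .outputMode("append")
    .start("/tmp/tables/crm_events")
)
query.awaitTermination()
</code></pre>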
<p>**** For a faster response on this position, please send a message to Jimmy Escobar on LinkedIn or email Jimmy.Escobar@roberthalf(.com) with your resume. You can also call my office at 424-270-9193 ****</p><p><br></p><p>We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Los Angeles, California. In this role, you will design, build, and maintain robust data infrastructure to support business operations and analytics. This position offers an opportunity to work with cutting-edge technologies and contribute to impactful projects. This is a hybrid position: three days a week on-site and two days remote.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement scalable data pipelines using Apache Spark, Hadoop, and other big data technologies.</p><p>• Collaborate with cross-functional teams to understand and translate business requirements into technical solutions.</p><p>• Create and maintain ETL processes to ensure data integrity and accessibility.</p><p>• Manage and monitor large-scale data processing systems, ensuring seamless operation.</p><p>• Design and deploy solutions for real-time data streaming using Apache Kafka.</p><p>• Perform advanced data analytics to support business decision-making.</p><p>• Troubleshoot and resolve issues related to data infrastructure and applications.</p><p><br></p>
<p>The Data Engineer role focuses on designing, building, and optimizing scalable data solutions that support diverse business needs. This position requires the ability to work independently while collaborating effectively in a fast-paced, agile environment. The individual in this role partners with cross-functional teams to gather data requirements, recommend enhancements to existing data pipelines and architectures, and ensure the reliability, performance, and efficiency of data processes.</p><p>Responsibilities</p><ul><li>Support the team’s adoption and continued evolution of the Databricks platform, leveraging features such as Delta Live Tables, workflows, and related tooling</li><li>Design, develop, and maintain data pipelines that extract data from relational sources, load it into a data lake, transform it as needed, and publish it to a Databricks-based lakehouse environment</li><li>Optimize data pipelines and processing workflows to improve performance, scalability, and overall efficiency</li><li>Implement data quality checks and validation logic to ensure data accuracy, consistency, and completeness</li><li>Create and maintain documentation including data mappings, data definitions, architectural diagrams, and data flow diagrams</li><li>Develop proof-of-concepts to evaluate and validate new technologies, tools, or data processes</li><li>Deploy, manage, and support code across non-production and production environments</li><li>Investigate, troubleshoot, and resolve data-related issues, including identifying root causes and implementing fixes</li><li>Identify performance bottlenecks and recommend optimization strategies, including database tuning and query performance improvements</li></ul>
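<p>A minimal sketch of the extract-load-publish flow described above: read from a relational source over JDBC, apply a simple quality gate, and publish as a Delta table. Connection details, table names, and the validation rule are hypothetical.</p><pre><code># Sketch of a JDBC-to-lakehouse pipeline with a simple data quality check.
# Connection details and table names are illustrative assumptions; in practice,
# credentials would come from a secret scope, not literals.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("jdbc-to-lakehouse").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db.example.com:5432/sales")  # hypothetical source
    .option("dbtable", "public.orders")
    .option("user", "reader")
    .option("password", "...")  # placeholder; use a secrets manager
    .load()
)

# Simple data quality gate: fail the run if required keys are missing.
null_keys = orders.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows missing order_id; aborting publish")

orders.write.format("delta").mode("overwrite").saveAsTable("lakehouse.bronze_orders")
</code></pre>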
<p>We’re seeking a Data Engineer to build and maintain scalable data pipelines that power analytics, reporting, and machine learning across the organization. You’ll turn raw data into clean, reliable, and accessible datasets that drive business decisions.</p><p>What You’ll Do</p><ul><li>Design and maintain data warehouses and data lakes</li><li>Build ETL/ELT pipelines integrating data from multiple systems</li><li>Optimize performance for large-scale datasets</li><li>Ensure data quality, security, and governance</li><li>Collaborate with analysts and ML teams to create analytics-ready datasets</li><li>Automate workflows and monitoring</li></ul><p><br></p>
<p>Job Title: Data Engineer</p><p><strong>Location:</strong> Washington, DC (Hybrid – Downtown DC Office)</p><p><strong>Company:</strong> Robert Half</p><p><strong>Employment Type:</strong> Contract-to-Hire</p><p>Role Overview</p><p>As a Data Engineer at Robert Half, you will be the backbone of our data-driven decision-making process. You aren't just "moving data"; you are architecting the flow of information that powers our localized market analytics and global recruitment engines. In the DC market, this often involves handling high-compliance data environments and integrating cutting-edge AI frameworks into traditional ETL workflows.</p><p><br></p>
<p><strong>Data Pipeline Development</strong></p><ul><li>Design, build, and optimize scalable ETL/ELT pipelines to support analytics and operational workflows.</li><li>Ingest structured, semi-structured, and unstructured data from multiple internal and external sources.</li><li>Automate and orchestrate data workflows using tools like Airflow, Azure Data Factory, AWS Glue, or similar.</li></ul><p><strong>Data Architecture & Modeling</strong></p><ul><li>Develop and maintain data models, data marts, and data warehouses (relational, dimensional, and/or cloud-native).</li><li>Implement best practices for data partitioning, performance optimization, and storage management.</li><li>Work with BI developers, data scientists, and analysts to ensure datasets are structured to meet business needs.</li></ul><p><strong>Cloud Engineering & Storage</strong></p><ul><li>Build and maintain cloud data environments (Azure, AWS, GCP), including storage, compute, and security components.</li><li>Deploy and manage scalable data systems such as Snowflake, Databricks, BigQuery, Redshift, or Synapse.</li><li>Optimize cloud data cost, performance, and governance.</li></ul><p><strong>Data Quality & Reliability</strong></p><ul><li>Implement data validation, error handling, and monitoring to ensure accuracy, completeness, and reliability.</li><li>Troubleshoot pipeline failures, performance issues, and data discrepancies.</li><li>Maintain documentation and data lineage for transparency and auditability.</li></ul><p><strong>Collaboration & Cross‑Functional Support</strong></p><ul><li>Partner with product, engineering, and analytics teams to translate business requirements into technical solutions.</li><li>Support self-service analytics initiatives by preparing high-quality datasets and data products.</li><li>Provide technical guidance on data best practices and engineering standards.</li></ul><p><br></p><p><br></p>
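<p>To illustrate the workflow orchestration described above, here is a minimal Airflow sketch; the DAG id, schedule, and task bodies are hypothetical placeholders (the `schedule` argument assumes Airflow 2.4+).</p><pre><code># Minimal Airflow sketch of a two-step ETL orchestration.
# DAG id, schedule, and task logic are illustrative assumptions.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")     # placeholder for a real extract step

def load():
    print("loading into warehouse")  # placeholder for a real load step

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
</code></pre>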
<p>We are seeking a Data Governance & Data Quality Platform Engineer responsible for the technical administration, integration, and optimization of enterprise data governance and data quality tools such as Atlan and DQ Labs / Monte Carlo. This role ensures these platforms are scalable, secure, integrated across enterprise data ecosystems, and maintained for high availability and performance. The position also supports automation, data quality monitoring, and compliance reporting initiatives across the organization.</p><p><br></p><p>Key Responsibilities</p><p><br></p><p>1. Platform Engineering & Administration</p><ul><li>Configure and maintain Atlan for metadata management, lineage tracking, and governance workflows</li><li>Configure DQ Labs / data quality tools for data profiling, rule creation, and monitoring dashboards</li><li>Manage user roles, authentication, SSO, RBAC, and security settings across governance platforms</li></ul><p>2. Integration & Automation</p><ul><li>Develop and maintain integrations between data sources, databases, data lakes, and BI tools</li><li>Automate metadata ingestion and data quality checks using APIs, Python scripts, or ETL frameworks</li><li>Configure connectors for enterprise data and analytics platforms</li></ul><p>3. Performance, Scalability & Reliability</p><ul><li>Monitor system health and optimize performance across governance and data quality environments</li><li>Apply patches, updates, and troubleshoot technical issues as needed</li><li>Implement logging, alerting, and proactive monitoring across the platform ecosystem</li></ul><p>4. Technical Support & Issue Resolution</p><ul><li>Provide Tier 3 support for platform‑related issues</li><li>Debug integration failures and resolve configuration conflicts</li><li>Collaborate with vendors for advanced troubleshooting, feature requests, and roadmap alignment</li></ul><p>5. Security & Compliance</p><ul><li>Ensure platforms comply with security and data privacy standards (e.g., GDPR, CCPA)</li><li>Implement encryption, access controls, and audit logging</li><li>Support compliance reporting and risk assessments using platform governance features and data quality metrics</li></ul>
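<p>As a hedged illustration of the metadata-ingestion automation mentioned above, the following sketch registers an asset with a governance platform over REST. The endpoint, payload shape, and token are entirely hypothetical; a real integration would follow the vendor's documented API (e.g., Atlan's).</p><pre><code># Hedged sketch of automating a metadata push to a governance platform over REST.
# The endpoint, payload shape, and token are hypothetical; consult the vendor's
# actual API documentation for real integrations.
import requests

GOVERNANCE_API = "https://governance.example.com/api/v1/assets"  # hypothetical endpoint
TOKEN = "..."  # placeholder; load from a secrets manager in practice

asset = {
    "name": "warehouse.analytics.orders",
    "type": "table",
    "owner": "data-platform@example.com",
    "description": "Curated orders table published by the nightly ELT job",
}

resp = requests.post(
    GOVERNANCE_API,
    json=asset,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()  # surface failures to the scheduler for alerting
print(f"Registered asset: {resp.status_code}")
</code></pre>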
<p>Essential Duties and Responsibilities:</p><p>• Knowledge of database coding and tables, as well as general database management</p><p>• Understanding of client management, support, and communicating progress and timelines accordingly</p><p>• Organizes and/or leads Informatics projects in the implementation and use of new data warehouse tools and systems</p><p>• Ability to train new hires, as well as lead training of new client staff members</p><p>• Understanding of data schema and the analysis of database performance and accuracy</p><p>• Understanding of ETL tools, OLAP design, and data quality processes</p><p>• Knowledge of the Business Intelligence life cycle: planning, design, development, validation, deployment, documentation, and ongoing support</p><p>• Working knowledge of electronic medical records software (eCW, NextGen, etc.) and the backend storage of that data</p><p>• Ability to generate effective probability modeling and statistics as they pertain to healthcare outcomes and financial risks</p><p>• Ability to manage sometimes lengthy and complicated projects throughout the life cycle and meet the associated deadlines</p><p>• Development, maintenance, and technical support of various reports and dashboards</p><p>• Knowledge of Microsoft® SQL, including coding language, creation of tables, stored procedures, and query design</p><p>• Fundamental understanding of outpatient healthcare workflows</p><p>• Knowledge of relational database concepts and flat/formatted file processing</p><p>• Strong commitment to data validation processes to ensure accuracy of reporting (internal quality control)</p><p>• Firm grasp of patient confidentiality and system security practices to prevent HIPAA and other security violations</p><p>• Knowledge of IBM Cognos® or other database reporting software such as SAS, SPSS, and Crystal Reports</p><p>• Ability to meet the needs of other members of the Informatics department to maximize efficiency and minimize complexity of end-user products</p><p><br></p><p>Requirements:</p><p>• Education: Bachelor's Degree</p><p>• Proven experience as a dbt Developer or in a similar Data Engineer role</p><p>• Expert-level SQL skills, capable of writing, tuning, and debugging complex queries across large datasets</p><p>• Strong experience with Snowflake or comparable data warehouse technologies (BigQuery, Redshift, etc.)</p><p>• Proficiency in Python for scripting, automation, or data manipulation</p><p>• Solid understanding of data warehousing concepts, modeling, and ELT workflows</p><p>• Familiarity with Git or other version control systems</p><p>• Experience working with cloud-based platforms such as AWS, GCP, or Azure</p><p><br></p>
<p>Design and manage data pipelines, ensuring optimized performance for analytics and reporting. Support BI tools to provide actionable insights for decision-making.</p>
<p>We are looking for a skilled Data Engineer to design and enhance scalable data solutions that meet diverse business objectives. This role involves collaborating with cross-functional teams to identify data requirements, improve existing pipelines, and ensure efficient data processing. The ideal candidate will bring expertise in server-side development, database management, and software deployment, working in a dynamic and fast-paced environment.</p><p><br></p><p>Responsibilities</p><ul><li>Enhance and optimize existing data storage platforms, including relational and NoSQL databases, to improve data accessibility, performance, and persistence</li><li>Apply advanced database techniques such as tuning, indexing, views, and stored procedures to support efficient and reliable data management</li><li>Develop server-side Python services utilizing concurrency patterns such as asynchronous programming and multi-threading, and leveraging libraries such as NumPy and Pandas</li><li>Design, build, and maintain APIs using modern frameworks, with experience across communication protocols including gRPC and socket-based implementations</li><li>Create, manage, and maintain CI/CD pipelines using DevOps and artifact management tools to enable efficient and reliable software delivery</li><li>Design and deploy applications in enterprise Linux environments, ensuring stability, performance, and scalability</li><li>Partner with cross-functional teams to gather requirements and deliver technical solutions aligned with business objectives</li><li>Follow software development lifecycle best practices to ensure high-quality, maintainable, and secure solutions</li><li>Work effectively in iterative, fast-paced development environments while consistently delivering high-quality outcomes on schedule</li></ul><p><br></p>
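<p>The server-side concurrency pattern described above might look like the following minimal sketch: an asyncio handler that offloads blocking pandas work to a thread pool so the event loop stays responsive. The function names and data shapes are hypothetical (requires Python 3.9+).</p><pre><code># Sketch of an async service offloading pandas work to a thread pool.
# Function names and data shapes are illustrative assumptions.
import asyncio
import pandas as pd

def summarize(records: list[dict]) -> dict:
    # Blocking pandas work, kept out of the event loop.
    df = pd.DataFrame(records)
    return df.groupby("region")["amount"].sum().to_dict()

async def handle_request(records: list[dict]) -> dict:
    # asyncio.to_thread runs the blocking call without stalling other requests.
    return await asyncio.to_thread(summarize, records)

async def main():
    batch = [{"region": "east", "amount": 10}, {"region": "west", "amount": 5},
             {"region": "east", "amount": 7}]
    results = await asyncio.gather(handle_request(batch), handle_request(batch))
    print(results)

asyncio.run(main())
</code></pre>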
<p>Robert Half is hiring a highly skilled and innovative Intelligent Automation Engineer to design, develop, and deploy advanced automation solutions using Microsoft Power Automate, Python, and AI technologies. This role is ideal for a hands-on technologist passionate about streamlining business processes, integrating systems, and applying cutting-edge AI to drive intelligent decision-making. This role is a hybrid position based in Philadelphia. For consideration, please apply directly. </p><p><br></p><p>Key Responsibilities</p><ul><li>Design and implement end-to-end automation workflows using Microsoft Power Automate (Cloud & Desktop).</li><li>Develop Python scripts and APIs to support automation, system integration, and data pipeline management.</li><li>Integrate Power Automate with Azure services (Logic Apps, Functions, AI Services, App Insights) and enterprise platforms such as SharePoint, Dynamics 365, and Microsoft Teams.</li><li>Apply Generative AI, LLMs, and Conversational AI to enhance automation with intelligent, context-aware interactions.</li><li>Leverage Agentic AI frameworks (LangChain, AutoGen, CrewAI, OpenAI Function Calling) to build dynamic, adaptive automation solutions.</li></ul>
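<p>To illustrate the Python-to-Power Automate integration described above, here is a hedged sketch of invoking a cloud flow through its "When an HTTP request is received" trigger. The flow URL and payload are hypothetical; the real URL is generated when the trigger is saved in the flow.</p><pre><code># Hedged sketch: trigger a Power Automate cloud flow from Python over HTTP.
# The flow URL and payload are hypothetical placeholders.
import requests

FLOW_URL = "https://prod-00.westus.logic.azure.com/workflows/.../triggers/manual/paths/invoke"  # hypothetical

payload = {
    "requestType": "invoice_review",
    "documentId": "DOC-12345",
    "summary": "Model-generated summary goes here",
}

resp = requests.post(FLOW_URL, json=payload, timeout=30)
resp.raise_for_status()  # non-2xx responses raise, so failures surface to the caller
print(f"Flow accepted request: {resp.status_code}")
</code></pre>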
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This role is based in West Des Moines, Iowa, and offers the opportunity to work on advanced data solutions that support organizational decision-making and efficiency. The ideal candidate will have expertise in relational databases, data cleansing, and modern data warehousing technologies.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support business operations and analytics.<br>• Perform data extraction, transformation, and cleansing to ensure accuracy and reliability.<br>• Collaborate with teams to design and implement data warehouses and data lakes.<br>• Utilize Microsoft SQL Server to build and manage relational database structures.<br>• Analyze data sources and provide recommendations for improving data quality and accessibility.<br>• Create and maintain documentation for data processes, pipelines, and system architecture.<br>• Implement best practices for data storage and retrieval to maximize efficiency.<br>• Troubleshoot and resolve issues related to data processing and integration.<br>• Stay updated on industry trends and emerging technologies to enhance data engineering solutions.