<p>Robert Half is seeking a <strong>Contract Data Engineer</strong> to support our client’s data and analytics initiatives. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that enable efficient data ingestion, transformation, and delivery. The ideal candidate has strong experience working with modern data platforms, cloud environments, and large-scale datasets.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li><strong>Data Pipeline Development:</strong> Design, build, and maintain scalable ETL/ELT pipelines to ingest, transform, and deliver data from multiple sources.</li><li><strong>Data Architecture:</strong> Develop and optimize data models, schemas, and warehouse structures to support analytics, reporting, and business intelligence needs.</li><li><strong>Cloud Data Platforms:</strong> Work within cloud environments such as <strong>AWS, Azure, or GCP</strong> to deploy and manage data solutions.</li><li><strong>Data Warehousing:</strong> Design and support enterprise data warehouses using platforms such as <strong>Snowflake, Redshift, BigQuery, or Azure Synapse</strong>.</li><li><strong>Big Data Processing:</strong> Develop solutions using big data technologies such as <strong>Spark, Databricks, Kafka, and Hadoop</strong> when required.</li><li><strong>Performance Optimization:</strong> Tune queries, pipelines, and storage solutions for performance, scalability, and cost efficiency.</li><li><strong>Data Quality & Reliability:</strong> Implement monitoring, validation, and alerting processes to ensure data accuracy, integrity, and availability.</li><li><strong>Collaboration:</strong> Work closely with Data Analysts, Data Scientists, Software Engineers, and business stakeholders to understand requirements and deliver data solutions.</li><li><strong>Documentation:</strong> Maintain detailed documentation for pipelines, data flows, and system architecture.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, develop, and maintain data pipelines and systems that support critical business operations within the manufacturing industry. Your expertise in data engineering technologies and frameworks will be key to ensuring efficient data processing and integration.<br><br>Responsibilities:<br>• Develop, optimize, and maintain scalable data pipelines to process large datasets efficiently.<br>• Implement ETL processes to extract, transform, and load data from various sources into centralized systems.<br>• Leverage Apache Spark, Hadoop, and Kafka to design solutions for real-time and batch data processing.<br>• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Document data workflows and processes to ensure clarity and maintainability.<br>• Conduct testing and validation of data systems to ensure accuracy and quality.<br>• Apply Python programming to automate data tasks and streamline workflows.<br>• Stay updated on industry trends and emerging technologies to propose innovative solutions.<br>• Ensure compliance with data security and privacy standards in all engineering efforts.
We are looking for an experienced Data Engineer to join our team in Newtown Square, Pennsylvania. In this long-term contract position, you will play a pivotal role in designing and implementing robust data solutions to support organizational goals. This is an exciting opportunity to lead the development of modern data architectures and collaborate with diverse teams to drive impactful results.<br><br>Responsibilities:<br>• Lead the implementation of an enterprise Snowflake data lake, ensuring timely delivery and optimal performance.<br>• Oversee the integration of multiple data sources, including Oracle Financials, PostgreSQL, and Salesforce, into a unified data platform.<br>• Collaborate with finance teams to facilitate a transition to a 12-month accounting calendar and support accelerated financial close processes.<br>• Develop and maintain multi-source analytics dashboards to enhance operational insights and decision-making.<br>• Manage day-to-day operations of the Snowflake platform, focusing on performance tuning and cost optimization.<br>• Ensure data quality and reliability, providing business users with a trustworthy platform.<br>• Document architectural designs, data workflows, and operational procedures to support sustainable data management.<br>• Coordinate with external vendors to meet project deadlines and ensure successful implementations.
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
<p>We are looking for an experienced Data Engineer to join our team on a contract basis in Columbus, Ohio. In this role, you will take on a leadership position, driving the development and optimization of data pipelines that support enterprise-wide analytics and decision-making. You will also play a key role in mentoring team members, fostering collaboration, and ensuring the integrity and quality of data across various business functions.</p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and maintain efficient data pipelines to support enterprise analytics and reporting.</p><p>• Collaborate with business analysts and data science teams to refine data requirements and ensure alignment with organizational goals.</p><p>• Enhance and automate data integration and management processes to improve operational efficiency.</p><p>• Lead efforts to ensure data quality by testing for accuracy, consistency, and conformity to business rules.</p><p>• Provide training and guidance to team members and other stakeholders on data pipelining and preparation techniques.</p><p>• Partner with data governance teams to promote vetted content into the curated data catalog for reuse.</p><p>• Stay updated on emerging technologies and assess their impact on current systems and processes.</p><p>• Offer leadership, coaching, and mentorship to team members, supporting their growth and development.</p><p>• Work closely with stakeholders to understand business needs and ensure solutions meet those requirements.</p><p>• Perform additional duties as assigned to support organizational objectives.</p>
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania, on a contract-to-permanent basis. This role offers an exciting opportunity to design, implement, and optimize data pipelines while integrating applications with various digital marketplaces. The ideal candidate will bring strong technical expertise and a collaborative mindset to support business insights and analytics effectively.<br><br>Responsibilities:<br>• Develop and maintain data pipelines and ensure seamless application connectivity with digital marketplaces such as TikTok Shop, Shopify, and Amazon.<br>• Collaborate closely with business teams to understand requirements and provide actionable analytics.<br>• Lead the creation of scalable and efficient data solutions tailored to business needs.<br>• Apply expertise in Python, Snowflake, and other relevant technologies to deliver high-quality results.<br>• Facilitate and support integrations with e-commerce platforms, leveraging previous experience where applicable.<br>• Build robust APIs and ensure their effective implementation.<br>• Utilize Microsoft SQL for database management and optimization.<br>• Provide technical guidance and mentorship to ensure project success.<br>• Troubleshoot and resolve issues related to data workflows and integrations.<br>• Continuously evaluate and improve processes to enhance efficiency and performance.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This long-term contract position offers an exciting opportunity to work in the manufacturing industry, leveraging your expertise in data processing and engineering. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using tools such as Apache Spark and Python.<br>• Design efficient ETL processes to extract, transform, and load data from various sources.<br>• Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.<br>• Implement and manage big data solutions using Apache Hadoop and Apache Kafka.<br>• Monitor and optimize the performance of data systems to ensure reliability and scalability.<br>• Ensure data quality and integrity through rigorous testing and validation processes.<br>• Troubleshoot and resolve issues related to data pipelines and infrastructure.<br>• Maintain documentation for data workflows and processes to ensure clarity and consistency.<br>• Stay updated on emerging technologies and best practices in data engineering to continuously improve systems.
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This contract position offers an exciting opportunity to leverage your expertise in data processing and analytics within the dynamic energy and natural resources industry. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using Apache Spark, Python, and ETL processes.<br>• Design and implement data storage solutions utilizing Apache Hadoop for efficient data management.<br>• Build real-time data streaming architectures with Apache Kafka to support operational needs.<br>• Optimize data workflows to ensure high performance and reliability across systems.<br>• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.<br>• Perform data quality checks and validation to ensure accuracy and consistency of datasets.<br>• Troubleshoot and resolve technical issues related to data processing and integration.<br>• Document processes and workflows to ensure knowledge sharing and operational transparency.<br>• Monitor and improve system performance, ensuring the infrastructure meets business demands.
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, build, and manage data pipelines and systems to support business operations and decision-making processes. This position offers an exciting opportunity to work with cutting-edge technologies within the energy and natural resources sector.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines to efficiently process large volumes of data.<br>• Collaborate with cross-functional teams to gather requirements and design data solutions that meet business needs.<br>• Implement and optimize ETL processes to ensure the accuracy and reliability of data flows.<br>• Utilize technologies such as Apache Spark, Hadoop, and Kafka to manage and process data streams.<br>• Monitor and troubleshoot data systems to ensure optimal performance and reliability.<br>• Perform data integration from multiple sources to create unified datasets for analysis.<br>• Ensure data security and compliance with organizational and industry standards.<br>• Continuously evaluate and adopt new tools and technologies to enhance data engineering practices.<br>• Provide technical guidance and mentorship to entry-level team members as needed.
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This contract-to-permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
<p>We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio. In this role, you will design, implement, and optimize data solutions that support business intelligence and analytics needs. If you have a passion for working with cutting-edge technologies and thrive in a fast-paced environment, this opportunity is for you.</p><p><br></p><p>Responsibilities:</p><p>• Develop and refine data models to ensure optimal performance and scalability.</p><p>• Design and implement data warehouse solutions for managing structured and unstructured data.</p><p>• Create and maintain data integration processes to support analytics and data-driven applications.</p><p>• Establish robust data quality and validation protocols to guarantee accuracy and consistency.</p><p>• Collaborate with business intelligence teams and stakeholders to gather requirements and deliver tailored solutions.</p><p>• Monitor and address issues within data pipelines, including performance bottlenecks and system errors.</p><p>• Research and adopt emerging technologies and best practices to enhance data engineering capabilities.</p>
<p>We are looking for an experienced Senior Data Engineer to join our team on a contract basis in Columbus, Ohio. In this role, you will take the lead in designing, building, and optimizing data pipelines to support enterprise-wide data initiatives. You will collaborate with cross-functional teams, ensuring that data solutions are aligned with business needs while maintaining high standards of data quality and consistency. This position offers an excellent opportunity to mentor team members and contribute as a technical leader while driving innovation in data engineering.</p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and maintain scalable data pipelines to support data-driven decision-making across the organization.</p><p>• Collaborate with data scientists and business analysts to refine data requirements and ensure seamless integration for analytics initiatives.</p><p>• Implement automation in data integration processes to enhance efficiency and scalability.</p><p>• Train team members and other stakeholders in data preparation techniques to improve data accessibility and usability.</p><p>• Ensure data quality by testing for accuracy, consistency, and compliance with business rules.</p><p>• Partner with data governance teams to promote curated data content for reuse and standardization.</p><p>• Provide leadership and mentorship to team members, fostering growth and collaboration within the team.</p><p>• Analyze emerging technologies and assess their potential impact on data engineering processes.</p><p>• Work closely with stakeholders to understand business needs and deliver tailored data solutions.</p><p>• Build strong working relationships across departments, maintaining careful attention to detail.</p>
<p>Robert Half is seeking a <strong>Data Center Technician </strong>to support a financial client based in Tukwila, WA. This individual will serve as the sole local technician, responsible for maintaining data center infrastructure, coordinating installations and decommissions, and ensuring optimal performance of critical systems. Apply today!</p><p> </p><p><strong>Duration:</strong> 6 months with potential to extend long term</p><p><strong>Location: </strong>100% onsite in Tukwila, with occasional travel (reimbursed)</p><p><strong>Schedule: </strong>Tuesday to Saturday, 5:00pm to 1:00am PST</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Serve as the primary on-site Data Center Technician for Washington-based facilities</li><li>Oversee and execute hardware installations, decommissions, and upgrades</li><li>Rack, stack, and cable servers, routers, and network equipment</li><li>Troubleshoot issues across servers, switches, and networking components</li><li>Perform structured cable management across high-density infrastructure</li><li>Update and maintain accurate records within internal databases and Excel spreadsheets</li><li>Work within a ticketing system to track, prioritize, and resolve technical requests</li><li>Support hardware environments spanning a variety of network brands and equipment types</li></ul>
We are looking for a highly skilled Systems Administrator to oversee and enhance critical project cost systems for large-scale export initiatives. This position involves close collaboration with the Project Controls team to ensure optimal configuration, administration, and performance of EcoSys and associated applications. The ideal candidate will play a key role in maintaining system integrity, supporting end-users, and driving efficient project cost management.<br><br>Responsibilities:<br>• Configure and administer EcoSys modules, workflows, and settings within production and non-production environments.<br>• Manage user provisioning and permissions based on least-privilege principles, ensuring secure access aligned with contracts and funding structures.<br>• Convert business requirements into effective technical solutions and functional system enhancements.<br>• Monitor system performance, troubleshoot issues, and collaborate with vendors to resolve technical challenges.<br>• Implement system upgrades while adhering to established IT change management protocols.<br>• Develop and maintain custom reports, dashboards, and data visualizations to support business operations.<br>• Integrate EcoSys with other organizational platforms, maintaining smooth data flow and functionality.<br>• Support customized configurations within EcoSys to meet specific contract module requirements.<br>• Conduct advanced data analysis to assist project cost control teams and optimize system usage.<br>• Manage and update master data, including organizational structures, contract lines, and security groups.
<p>We are looking for a skilled Data Warehouse Analyst to join our team in New Jersey. In this role, you will transform logistics challenges into actionable insights through advanced data analysis and reporting. By collaborating with cross-functional teams, you will play a pivotal role in enhancing operational efficiency and driving key business decisions.</p><p><br></p><p>Responsibilities:</p><p>• Collaborate with Operations, Transportation, and Finance teams to establish and refine KPIs that drive logistics and fulfillment performance.</p><p>• Develop and optimize labor planning and forecasting models for warehouse and delivery operations, partnering closely with recruitment teams.</p><p>• Analyze distribution and fulfillment data to uncover performance trends and identify cost-saving opportunities.</p><p>• Design and maintain dashboards and reports to provide real-time insights into logistics metrics, including delivery times, warehouse productivity, and route optimization.</p><p>• Automate reporting processes to improve accuracy and timeliness of operational data.</p><p>• Continuously enhance data integrity and streamline workflows to optimize logistics operations.</p><p>• Work on data modeling and warehousing projects to support scalable analytics and reporting solutions.</p><p>• Partner with stakeholders to deliver clear and actionable insights to improve decision-making processes.</p><p>• Investigate and implement tools and techniques to improve overall business intelligence capabilities.</p>
<p>We are looking for an experienced Oracle APEX Developer to join our team on a long-term contract basis in Bloomfield Township, MI. The ideal candidate will possess strong technical expertise in Oracle E-Business Suite Financials applications and the Oracle technology stack, along with hands-on experience in APEX development. This position offers an opportunity to work on complex projects and contribute to the development of innovative solutions.</p><p><br></p><p>Responsibilities:</p><p>• Develop and maintain applications using Oracle APEX, ensuring high performance and scalability.</p><p>• Utilize Oracle E-Business Suite Financials applications to design and implement effective solutions.</p><p>• Create and manage workflows using Oracle Loader, BI Publisher, and Unix shell scripting.</p><p>• Collaborate with stakeholders to understand business requirements and translate them into technical solutions.</p><p>• Integrate Oracle systems with external applications, such as Salesforce, to streamline processes.</p><p>• Develop and maintain REST APIs and event-driven architectures for seamless data exchange.</p><p>• Use JDeveloper for advanced programming tasks and application enhancements.</p><p>• Troubleshoot and resolve technical issues, ensuring system reliability and user satisfaction.</p><p>• Employ SQL and PL/SQL programming to optimize database operations.</p><p>• Explore and implement solutions using Google Cloud technologies where applicable.</p>
We are looking for an experienced D365 CE Developer to join our team on a long-term contract basis. In this role, you will play a critical part in advancing and maintaining integrations within the Dynamics 365 Customer Engagement platform. This position offers the opportunity to collaborate on architecture and design initiatives while contributing to impactful enterprise solutions.<br><br>Responsibilities:<br>• Enhance and maintain integrations within the Dynamics 365 Customer Engagement platform.<br>• Develop custom solutions to meet business needs using Microsoft Dynamics CRM functionality.<br>• Collaborate with stakeholders to design and implement scalable system architectures.<br>• Provide technical expertise and guidance for Dynamics CE-related projects.<br>• Troubleshoot and resolve system issues to ensure optimal performance.<br>• Work closely with cross-functional teams to align technical solutions with business goals.<br>• Participate in code reviews and ensure adherence to best practices.<br>• Document development processes and solutions for future reference.<br>• Stay updated on emerging Dynamics technologies and incorporate them into workflows as appropriate.<br>• Support testing efforts to validate system functionality and integration.
<p><strong>Position Summary:</strong></p><ul><li>We are looking for a Data Operations Engineer to support and oversee the automated data-pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks.</li><li>Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.</li><li>The role requires strong proficiency in SQL and Python, practical experience with AWS services, and the ability to communicate effectively with external customers when data issues arise.</li></ul><p><strong>Responsibilities:</strong></p><p><strong>Data Pipeline Monitoring & Operations:</strong></p><ul><li>Monitor automated batch and streaming data pipelines in AWS</li><li>Identify, troubleshoot, and resolve data processing failures</li><li>Investigate file-level errors, schema mismatches, and transformation issues</li><li>Perform root-cause analysis and document resolutions</li><li>Ensure data integrity, completeness, and timeliness across environments</li><li>Escalate architectural or systemic issues to the Data Engineering team</li></ul><p><strong>Customer Data Onboarding & Implementation:</strong></p><ul><li>Collaborate directly with customers to understand their file formats and data structures</li><li>Create and maintain mapping templates to align customer data to a normalized data model</li><li>Validate sample files and run tests on ingestion workflows</li><li>Configure ingestion parameters within predefined frameworks</li><li>Support customer go-live processes and initial data processing cycles</li></ul><p><strong>Data Quality & Continuous Improvement:</strong></p><ul><li>Write SQL queries to validate data accuracy and research anomalies</li><li>Develop lightweight Python scripts for validation, transformation checks, or automation tasks</li><li>Improve monitoring processes, internal documentation, and operational playbooks</li><li>Work with engineering teams to strengthen platform reliability and observability</li></ul><p><strong>Customer & Cross-Functional Collaboration:</strong></p><ul><li>Communicate clearly with customers regarding file issues or data discrepancies</li><li>Partner with internal teams including Data Engineering, Product, and Support</li><li>Provide feedback to enhance scalability, resilience, and overall platform performance</li></ul>