Job Title: Data Engineer<br>Location: Hybrid, Chicago, IL<br><br>About the Role:<br>Our company is seeking a Data Engineer with 5–7 years of experience in data engineering. This role is designed for individuals who excel at building robust, scalable data solutions in AWS cloud environments. As part of our team, you’ll engineer and optimize data pipelines critical to our analytics, reporting, and data-driven strategy, while collaborating cross-functionally in a hybrid Chicago-based setting.<br><br>Key Responsibilities:<br>• Design, build, and maintain scalable data pipelines and architectures using AWS cloud services.<br>• Develop, manage, and optimize ETL/ELT workflows to acquire, clean, and transform data from diverse sources.<br>• Collaborate with business stakeholders, analysts, and data scientists to understand data requirements and deliver solutions.<br>• Ensure data quality, integrity, and security throughout all stages of the data lifecycle.<br>• Monitor pipeline performance and troubleshoot issues to maximize data reliability and efficiency.<br>• Apply data governance best practices and maintain technical documentation.<br><br>Must-Have Technologies:<br>• Languages: Python, SQL<br>• Cloud: AWS (S3, Redshift, Glue, Lambda, RDS, Data Pipeline)<br>• Big Data: Apache Spark, Hadoop, Kafka<br>• ETL Tools: Airflow, AWS Data Pipeline<br>• Databases: PostgreSQL, MySQL, Redshift, MongoDB<br><br>Work Arrangement:<br>Hybrid schedule with regular in-office collaboration in Chicago, IL.<br><br>Preferred Certifications (optional):<br>• AWS Certified Data Analytics – Specialty<br>• AWS Certified Solutions Architect<br>• Certified Data Professional (CDP)
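Editor's note: as an illustration of the AWS ETL work this posting describes, here is a minimal PySpark sketch in the Glue/S3 style. All bucket names, columns, and quality rules are hypothetical stand-ins, not this employer's actual pipeline.

```python
# Illustrative only: a minimal PySpark ETL step of the kind this role describes.
# Bucket names and paths are hypothetical; a real AWS Glue job would supply its
# own GlueContext/SparkSession and IAM-based S3 access.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.csv("s3://example-raw-bucket/orders/", header=True, inferSchema=True)

cleaned = (
    raw.dropDuplicates(["order_id"])                        # basic dedup
       .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize types
       .filter(F.col("order_total") >= 0)                   # simple quality rule
)

# Land curated Parquet for Redshift Spectrum or a COPY job to pick up downstream.
cleaned.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")
```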
We are looking for a highly skilled Senior Data Engineer to join our team on a long-term contract basis. In this role, you will design and implement robust data pipelines and architectures to support data-driven decision-making across the organization. You will work closely with cross-functional teams to deliver scalable, secure, and high-performance data solutions using cutting-edge tools and technologies. This position is based in Dallas, Texas.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using tools like Apache Airflow, NiFi, and Databricks to streamline data ingestion and transformation.<br>• Implement and manage real-time data streaming solutions utilizing Apache Kafka and Flink.<br>• Optimize and oversee data storage systems with technologies such as Hadoop and Amazon S3 to ensure efficiency and scalability.<br>• Establish and enforce data governance, quality, and security protocols through best practices and monitoring systems.<br>• Manage complex workflows and processes across hybrid and multi-cloud environments.<br>• Work with diverse data formats, including Parquet and Avro, to enhance data accessibility and integration.<br>• Troubleshoot and fine-tune distributed data systems to maximize performance and reliability.<br>• Mentor and guide engineers at the beginning of their careers to promote a culture of collaboration and technical excellence.
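Editor's note: a minimal Airflow DAG sketch of the orchestration pattern this posting describes. Task bodies, IDs, and the schedule are hypothetical.

```python
# Illustrative Airflow 2.x DAG; on older 2.x releases use schedule_interval
# instead of schedule. Task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # pull from a source system

def transform():
    ...  # clean and conform records

def load():
    ...  # write to the warehouse

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```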
<p><strong><u>Data Engineer</u></strong></p><p><strong>Onsite 4x week in El Segundo</strong></p><p><strong>$130K - $160K + benefits</strong></p><p>We are looking for an experienced Data Engineer to join our dynamic team in El Segundo, California. In this role, you will play a key part in designing, developing, and optimizing data pipelines and architectures to support business operations and analytics. This position offers the opportunity to work on cutting-edge technologies, including AI and machine learning applications.</p><p><br></p><p>Responsibilities:</p><p>• Develop, test, and maintain scalable data pipelines and architectures to support business intelligence and analytics needs.</p><p>• Collaborate with cross-functional teams to integrate data from diverse sources, including D365 Commerce and Adobe Experience Platform.</p><p>• Utilize Python, PySpark, and Azure data services to transform and orchestrate datasets.</p><p>• Implement and manage Kafka-based systems for real-time data streaming.</p><p>• Ensure compliance with data governance, security, and privacy standards.</p><p>• Optimize data storage solutions, leveraging medallion architecture and modern data modeling practices.</p><p>• Prepare datasets for AI/ML applications and advanced analytical models.</p><p>• Monitor, troubleshoot, and improve the performance of data systems.</p><p>• Design semantic models and dashboards using Power BI to support decision-making.</p><p>• Stay updated on emerging technologies and best practices in data engineering.</p>
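Editor's note: one way to implement the Kafka-based real-time ingestion this posting mentions is Spark Structured Streaming; the sketch below is purely illustrative, with hypothetical brokers, topic, and lake paths, and it requires the Spark Kafka connector package on the classpath.

```python
# Illustrative sketch: consume a Kafka topic and land raw payloads in a bronze
# zone; parsing/enrichment would happen in later medallion layers.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
    .option("subscribe", "commerce-events")              # hypothetical topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/lake/bronze/commerce_events")
    .option("checkpointLocation", "/lake/_checkpoints/commerce_events")
    .outputMode("append")
    .start()
)
```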
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
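Editor's note: a small, self-contained Spark SQL example of the analytics-style transforms this posting calls for. The dataset and column names are invented stand-ins.

```python
# Illustrative Spark SQL windowed aggregation; in a real pipeline the temp view
# would be a lake or warehouse table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

spark.createDataFrame(
    [("u1", "2024-06-01", 10.0), ("u1", "2024-06-02", 5.0), ("u2", "2024-06-01", 7.5)],
    ["user_id", "event_date", "revenue"],
).createOrReplaceTempView("user_events")

spark.sql("""
    SELECT user_id, event_date,
           SUM(revenue) OVER (
               PARTITION BY user_id ORDER BY event_date
               ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
           ) AS revenue_7d
    FROM user_events
""").show()
```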
We are looking for a skilled Data Engineer to join our team in Ann Arbor, Michigan, and contribute to the development of a modern, scalable data platform. In this role, you will focus on building efficient data pipelines, ensuring data quality, and enabling seamless integration across systems to support business analytics and decision-making. This position offers an exciting opportunity to work with cutting-edge technologies and play a key role in the transformation of our data environment.<br><br>Responsibilities:<br>• Design and implement robust data pipelines on Azure using tools such as Databricks, Spark, Delta Lake, and Airflow.<br>• Develop workflows to ingest and integrate data from diverse sources into Azure Data Lake.<br>• Build and maintain data transformation layers following the medallion architecture principles.<br>• Apply data quality checks, validation processes, and deduplication techniques to ensure accuracy and reliability.<br>• Create reusable and parameterized notebooks to streamline batch and streaming data processes.<br>• Optimize merge and update logic in Delta Lake by leveraging efficient partitioning strategies.<br>• Collaborate with business and application teams to understand and fulfill data integration requirements.<br>• Enable downstream integrations with APIs, Power BI dashboards, and reporting systems.<br>• Establish monitoring, logging, and data lineage tracking using tools like Unity Catalog and Azure Monitor.<br>• Participate in code reviews, agile development practices, and team design discussions.
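Editor's note: the "merge and update logic in Delta Lake" bullet above maps to Delta's MERGE API; below is a minimal sketch assuming a Delta-enabled SparkSession (delta-spark installed) and hypothetical paths and keys.

```python
# Illustrative Delta Lake upsert with partition pruning.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-merge").getOrCreate()

updates = spark.read.parquet("/lake/bronze/customers_incremental")
target = DeltaTable.forPath(spark, "/lake/silver/customers")

(
    target.alias("t")
    .merge(
        updates.alias("s"),
        # Putting the partition column in the join predicate lets Delta prune
        # partitions instead of scanning the whole table.
        "t.region = s.region AND t.customer_id = s.customer_id",
    )
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```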
<p><strong>Senior Data Engineer</strong></p><p><strong>Location:</strong> Calabasas, CA (Fully Remote if outside 50 miles)</p><p><strong>Compensation:</strong> $140K–$160K</p><p><strong>Reports to:</strong> Director of Data Engineering</p><p>Our entertainment client is seeking a <strong>Senior Data Engineer</strong> to design, build, and optimize enterprise data pipelines and cloud infrastructure. This hands-on role focuses on implementing scalable data architectures, developing automation, and driving modern data engineering best practices across the company.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and maintain ELT/ETL pipelines in Snowflake, Databricks, and AWS.</li><li>Build and orchestrate workflows using Python, SQL, Airflow, and dbt.</li><li>Implement medallion/lakehouse architectures and event-driven pipelines.</li><li>Manage AWS services (Lambda, EC2, S3, Glue) and infrastructure-as-code (Terraform).</li><li>Optimize data performance, quality, and governance across systems.</li></ul><p>For immediate consideration, direct message Reid Gormly on LinkedIn and Apply Now!</p>
We are looking for a skilled Data Engineer to join our team in Cleveland, Ohio. This long-term contract position offers the opportunity to contribute to the development and optimization of data platforms, with a primary focus on Snowflake and Apache Airflow technologies. You will play a key role in ensuring efficient data management and processing to support critical business needs.<br><br>Responsibilities:<br>• Design, develop, and maintain data pipelines using Snowflake and Apache Airflow.<br>• Collaborate with cross-functional teams to implement scalable data solutions.<br>• Optimize data processing workflows to ensure high performance and reliability.<br>• Monitor and troubleshoot issues within the Snowflake data platform.<br>• Develop ETL processes to support data integration and transformation.<br>• Work with tools such as Apache Spark, Hadoop, and Kafka to manage large-scale data operations.<br>• Implement robust data warehousing strategies to support business intelligence initiatives.<br>• Analyze and resolve data-related technical challenges promptly.<br>• Provide support and guidance during Snowflake deployments across subsidiaries.<br>• Document processes and ensure best practices for data engineering are followed.
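Editor's note: a minimal sketch of the Snowflake side of such a pipeline using the official Python connector; account, credentials, stage, and table names are placeholders, and in practice credentials would come from a secrets store or an Airflow connection.

```python
# Illustrative Snowflake bulk load, the kind of step an Airflow task might run.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="ETL_SVC",
    password="...",            # placeholder; never hard-code in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()
try:
    cur.execute("""
        COPY INTO staging.orders
        FROM @orders_stage
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """)
finally:
    cur.close()
    conn.close()
```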
We are looking for an experienced Data Engineer to take a leading role in optimizing and transforming our data architecture. This position will focus on enhancing performance, scalability, and analytical capabilities within our systems. The ideal candidate will have a strong technical background and the ability to mentor teams while delivering innovative solutions.<br><br>Responsibilities:<br>• Redesign the existing PostgreSQL database architecture to support modern analytical models and improve overall system performance.<br>• Optimize schemas by balancing normalization with denormalization techniques to achieve faster analytical reads.<br>• Implement advanced strategies such as indexing, partitioning, caching, and replication to ensure high-throughput and low-latency data delivery.<br>• Develop and maintain scalable data pipelines to guarantee accurate and timely data availability across distributed systems.<br>• Collaborate with engineering teams to provide guidance on data modeling, query optimization, and architectural best practices.<br>• Monitor and troubleshoot database performance issues, ensuring solutions are implemented effectively.<br>• Lead efforts to enhance the reliability and scalability of data infrastructure to support future growth.<br>• Serve as a technical mentor to team members, sharing expertise in database architecture and performance tuning.<br>• Partner with stakeholders to understand data requirements and deliver solutions that meet business needs.
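Editor's note: the partitioning and indexing strategies listed above might look like the following, issued through psycopg2. The DSN, tables, and columns are hypothetical, and the sketch assumes `events` is already a declaratively partitioned table (PostgreSQL 11+ for the INCLUDE clause).

```python
# Illustrative PostgreSQL partition + covering-index maintenance.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=dba")  # illustrative DSN
conn.autocommit = True
cur = conn.cursor()

# Monthly range partition so analytical reads scan less data.
cur.execute("""
    CREATE TABLE IF NOT EXISTS events_2024_06
    PARTITION OF events
    FOR VALUES FROM ('2024-06-01') TO ('2024-07-01')
""")

# Covering index for a hot analytical query path.
cur.execute("""
    CREATE INDEX IF NOT EXISTS idx_events_user_ts
    ON events_2024_06 (user_id, event_ts) INCLUDE (revenue)
""")

cur.close()
conn.close()
```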
<p>On behalf of our well-established client in the financial industry, Robert Half Talent Solutions, Technology Division is seeking a <strong>Senior Data Engineer</strong> with exceptional Python skills to lead and support a high-performing data team. This role is ideal for someone who thrives in a collaborative environment, enjoys mentoring others, and is passionate about building scalable data solutions.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li><strong>Python Leadership:</strong> Serve as the Python expert on the team, coaching and mentoring junior developers and data scientists.</li><li><strong>Fraud Engine Development:</strong> Design and implement a robust engine to identify and remediate fraudulent activity using advanced data techniques.</li><li><strong>Data Wrangling:</strong> Clean, transform, and prepare large datasets for analysis and modeling.</li><li><strong>Model Building:</strong> Collaborate with data scientists to build and deploy machine learning models within Databricks.</li><li><strong>ETL Pipeline Development:</strong> Design and maintain scalable ETL pipelines to support data ingestion and transformation.</li><li><strong>Azure Functions:</strong> Develop and deploy serverless functions to automate data workflows and support real-time processing.</li><li><strong>Databricks Architecture:</strong> Leverage Databricks for data engineering and machine learning workflows, ensuring best practices in architecture and performance.</li></ul><p><br></p>
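Editor's note: a hedged sketch of what the Azure Functions piece of this role could look like, using the Python v2 programming model. The route, payload fields, and toy scoring rule are invented; a real fraud engine would call a model served from Databricks/MLflow.

```python
# Illustrative Azure Functions (Python v2 model) HTTP-triggered scoring hook.
import json
import azure.functions as func

app = func.FunctionApp()

@app.route(route="score", auth_level=func.AuthLevel.FUNCTION)
def score(req: func.HttpRequest) -> func.HttpResponse:
    txn = req.get_json()
    # Toy placeholder rule; the real system would invoke a trained model.
    risk = 1.0 if txn.get("amount", 0) > 10_000 else 0.1
    return func.HttpResponse(
        json.dumps({"transaction_id": txn.get("id"), "risk": risk}),
        mimetype="application/json",
    )
```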
We are looking for a skilled Data Engineer to join our team in Johnson City, Texas. In this role, you will design and optimize data solutions to enable seamless data transfer and management in Snowflake. You will work collaboratively with cross-functional teams to enhance data accessibility and support data-driven decision-making across the organization.<br><br>Responsibilities:<br>• Design, develop, and implement ETL solutions to facilitate data transfer between diverse sources and Snowflake.<br>• Optimize the performance of Snowflake databases by constructing efficient data structures and utilizing indexes.<br>• Develop and maintain automated, scalable data pipelines within the Snowflake environment.<br>• Deploy and configure monitoring tools to ensure optimal performance of the Snowflake platform.<br>• Collaborate with product managers and agile teams to refine requirements and deliver solutions.<br>• Create integrations to accommodate growing data volume and complexity.<br>• Enhance data models to improve accessibility for business intelligence tools.<br>• Implement systems to ensure data quality and availability for stakeholders.<br>• Write unit and integration tests while documenting technical work.<br>• Automate testing and deployment processes in Snowflake within Azure.
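Editor's note: a minimal sketch of moving a cleaned dataset into Snowflake with the connector's `write_pandas` helper, which stages the frame and issues a COPY INTO internally. Connection details, the file, and the table name are all hypothetical.

```python
# Illustrative pandas-to-Snowflake load.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

df = pd.read_csv("daily_extract.csv").drop_duplicates(subset=["id"])

conn = snowflake.connector.connect(
    account="example_account", user="ETL_SVC", password="...",  # placeholders
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)

success, nchunks, nrows, _ = write_pandas(
    conn, df, table_name="DAILY_EXTRACT", auto_create_table=True
)
print(f"loaded={success} rows={nrows}")
conn.close()
```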
<p>Robert Half is hiring a highly skilled and innovative Intelligent Automation Engineer to design, develop, and deploy advanced automation solutions using Microsoft Power Automate, Python, and AI technologies. This role is ideal for a hands-on technologist passionate about streamlining business processes, integrating systems, and applying cutting-edge AI to drive intelligent decision-making. This role is a hybrid position based in Philadelphia. For consideration, please apply directly. </p><p><br></p><p>Key Responsibilities</p><ul><li>Design and implement end-to-end automation workflows using Microsoft Power Automate (Cloud & Desktop).</li><li>Develop Python scripts and APIs to support automation, system integration, and data pipeline management.</li><li>Integrate Power Automate with Azure services (Logic Apps, Functions, AI Services, App Insights) and enterprise platforms such as SharePoint, Dynamics 365, and Microsoft Teams.</li><li>Apply Generative AI, LLMs, and Conversational AI to enhance automation with intelligent, context-aware interactions.</li><li>Leverage Agentic AI frameworks (LangChain, AutoGen, CrewAI, OpenAI Function Calling) to build dynamic, adaptive automation solutions.</li></ul>
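Editor's note: one common Python-to-Power-Automate integration point is the "When an HTTP request is received" trigger; the sketch below posts a payload to such a flow. The flow URL and payload fields are hypothetical, and the URL would normally live in a secret store.

```python
# Illustrative trigger of a Power Automate cloud flow from Python.
import requests

FLOW_URL = "https://example.logic.azure.com/workflows/<id>/triggers/manual/paths/invoke"

payload = {
    "source": "invoice-intake",      # hypothetical fields
    "document_id": "INV-1042",
    "status": "ready_for_review",
}

resp = requests.post(FLOW_URL, json=payload, timeout=30)
resp.raise_for_status()
print("Flow accepted the request:", resp.status_code)
```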
We are looking for an experienced Data Engineer to join our team on a contract basis. In this role, you will play a key part in developing and maintaining data systems that support business objectives. Based in Cincinnati, Ohio, the position offers an opportunity to work with cutting-edge technologies and collaborate with a dynamic team.<br><br>Responsibilities:<br>• Design, develop, and optimize data pipelines and workflows using ETL processes.<br>• Implement and maintain big data solutions leveraging Apache Spark, Hadoop, and Kafka.<br>• Collaborate with cross-functional teams to analyze data requirements and ensure seamless integration.<br>• Develop scalable and efficient data models to support business intelligence and analytics.<br>• Write clean, maintainable Python code for data processing and transformation.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Ensure compliance with data governance policies and security standards.<br>• Provide technical support and guidance for the implementation of data solutions.<br>• Document system architecture and processes to maintain clarity and consistency.<br>• Stay updated with emerging trends and technologies in data engineering.
We are looking for a highly skilled Data Engineer to join our team on a contract basis in Atlanta, Georgia. This role focuses on optimizing data processes and infrastructure, ensuring efficient data management and performance. The ideal candidate will possess expertise in modern data engineering tools and technologies.<br><br>Responsibilities:<br>• Optimize data indexing and address fragmentation issues to enhance system performance.<br>• Develop and maintain data pipelines using ETL processes to ensure accurate data transformation and integration.<br>• Utilize Apache Spark for scalable data processing and analytics.<br>• Implement and manage big data solutions with Apache Hadoop.<br>• Design and deploy real-time data streaming frameworks using Apache Kafka.<br>• Collaborate with cross-functional teams to identify and resolve data-related challenges.<br>• Monitor and improve system performance by analyzing data usage and storage trends.<br>• Write efficient code in Python to support data engineering tasks.<br>• Document processes and maintain clear records of data workflows and optimizations.<br>• Ensure data security and compliance with organizational standards.
We are looking for a skilled Data Engineer to join our team in San Antonio, Texas. This role offers an opportunity to design, develop, and optimize data solutions that support business operations and strategic decision-making. The ideal candidate will possess a strong technical background, excellent problem-solving skills, and the ability to collaborate effectively across departments.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines using Azure Synapse Analytics, Microsoft Fabric, and Azure Data Factory.<br>• Implement advanced data modeling techniques and design scalable BI solutions that align with business objectives.<br>• Create and maintain dashboards and reports using Power BI, ensuring data accuracy and usability.<br>• Integrate data from various sources, including APIs and Dataverse, into Azure Data Lake Storage Gen2.<br>• Utilize tools like Delta Lake and Parquet to manage and structure data within a lakehouse architecture.<br>• Define and implement BI governance frameworks to ensure consistent data standards and practices.<br>• Collaborate with cross-functional teams such as Operations, Sales, Engineering, and Accounting to gather requirements and deliver actionable insights.<br>• Troubleshoot, document, and resolve data issues independently while driving continuous improvement initiatives.<br>• Lead or contribute to Agile/Scrum-based projects to deliver high-quality data solutions within deadlines.<br>• Stay updated on emerging technologies and trends to enhance data engineering practices.
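Editor's note: the Parquet/lakehouse bullet above often reduces to writing Hive-style partitioned Parquet; here is a tiny illustrative sketch with invented data. In ADLS Gen2 the output path would be an `abfss://` URI rather than a local folder.

```python
# Illustrative partitioned Parquet write (requires pyarrow).
import pandas as pd

sales = pd.DataFrame({
    "region": ["west", "west", "east"],
    "order_date": ["2024-06-01", "2024-06-02", "2024-06-01"],
    "amount": [120.0, 75.5, 210.0],
})

# region=west/..., region=east/... folders let engines such as Synapse
# serverless SQL prune files instead of scanning everything.
sales.to_parquet("./lake/sales", engine="pyarrow", partition_cols=["region"])
```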
<p><strong>Position: Databricks Data Engineer</strong></p><p><strong>Location:</strong> Remote (U.S. based) — Preference for candidates in or willing to relocate to <strong>Washington, DC</strong> or <strong>Indianapolis, IN</strong> for periodic on-site support</p><p><strong>Citizenship Requirement:</strong> U.S. Citizen</p><p><br></p><p><strong>Role Summary:</strong></p><p>Seeking a Databricks Data Engineer to develop and support data pipelines and analytics environments within an Azure cloud-based data lake. This role translates business requirements into scalable data engineering solutions and supports ongoing ETL operations with a focus on data quality and management.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and optimize scalable data solutions using <strong>Databricks</strong> and <strong>Medallion Architecture</strong>.</li><li>Develop ingestion routines for multi-terabyte datasets across multiple projects and Databricks workspaces.</li><li>Integrate structured and unstructured data sources to enable high-quality business insights.</li><li>Apply data analysis techniques to extract insights from large datasets.</li><li>Implement data management strategies to ensure data integrity, availability, and accessibility.</li><li>Identify and execute cost optimization strategies in data storage, processing, and analytics.</li><li>Monitor and respond to user requests, addressing performance issues, cluster stability, Spark optimization, and configuration management.</li><li>Collaborate with cross-functional teams to support AI-driven analytics and data science workflows.</li><li>Integrate with Azure services including Azure Functions, Storage Services, Data Factory, Log Analytics, and User Management.</li><li>Provision and manage infrastructure using <strong>Infrastructure-as-Code (IaC)</strong>.</li><li>Apply best practices for <strong>data security</strong>, <strong>governance</strong>, and <strong>compliance</strong>, supporting federal regulations and public trust standards.</li><li>Work closely with technical and non-technical teams to gather requirements and translate business needs into data solutions.</li></ul><p><strong>Preferred Experience:</strong></p><ul><li>Hands-on experience with the above Azure services.</li><li>Strong foundation in <strong>advanced AI technologies</strong>.</li><li>Experience with <strong>Databricks</strong>, <strong>Spark</strong>, and <strong>Python</strong>.</li><li>Familiarity with <strong>.NET</strong> is a plus.</li></ul>
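Editor's note: multi-terabyte ingestion into a Databricks medallion lake is commonly done with Auto Loader; the sketch below is illustrative only, assumes it runs inside a Databricks workspace where `spark` is predefined, and uses hypothetical storage paths.

```python
# Illustrative Databricks Auto Loader ingestion into a bronze Delta table.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/lake/_schemas/telemetry")
    .load("abfss://landing@exampleacct.dfs.core.windows.net/telemetry/")
)

(
    stream.writeStream.format("delta")
    .option("checkpointLocation", "/lake/_checkpoints/telemetry_bronze")
    .trigger(availableNow=True)   # incremental batches over a large backlog
    .start("/lake/bronze/telemetry")
)
```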
<p><strong><u>DATA ENGINEER</u></strong></p><p> -- <em>Job Type</em> = Permanent, Full-time Employment</p><p> -- <em>Job Location</em> = South Bay - Los Angeles, CA</p><p><br></p><p><strong><em><u>Job Summary</u></em></strong></p><p>This role is responsible for supporting the organization’s enterprise data systems, with a primary focus on modernizing and centralizing data architecture to enable scalable analytics and AI applications. Reporting to the VP of Business Insights and Analytics, this position will collaborate closely with teams across Sales, Marketing, and Technology to unify data from platforms such as: Azure Fabric, Microsoft Dynamics 365, Adobe Experience Platform, and Power BI. The Data Engineer will play a key role in transforming fragmented data sources into a cohesive, AI-ready foundation that drives strategic insights, operational efficiency, and new revenue opportunities.</p>
<p>We are looking for a skilled <strong>Data Engineer</strong> to design and build robust data solutions that align with business objectives. In this role, you will collaborate with cross-functional teams to develop and maintain scalable data architectures, pipelines, and models. Your expertise will ensure the quality, security, and compliance of data systems while contributing to the organization’s data-driven decision-making processes. Call 319-362-8606, or email your resume directly to Shania Lewis - Technology Recruiting Manager at Robert Half (email information is on LinkedIn). Let's talk!!</p><p><br></p><p><strong>Responsibilities:</strong></p><ul><li>Design and implement scalable data architectures, pipelines, and models.</li><li>Translate business requirements into practical data solutions.</li><li>Ensure data quality, security, and regulatory compliance.</li><li>Maintain and improve existing data infrastructure.</li><li>Optimize system performance for efficiency and reliability.</li><li>Research and recommend emerging data technologies.</li><li>Mentor team members and foster collaboration.</li><li>Enable effective analytics through robust data solutions.</li></ul>
Description: As a Senior Analytics Engineer at The Walt Disney Studios, you will play a pivotal role in the transformation of data into actionable insights. Collaborate with our dynamic team of technologists to develop cutting-edge data solutions that drive innovation and fuel business growth. Your responsibilities will include managing complex data structures and delivering scalable and efficient data solutions. Your expertise in data engineering will be crucial in optimizing our data-driven decision-making processes. If you're passionate about leveraging data to make a tangible impact, we welcome you to join us in shaping the future of our organization.<br><br>You will:<br>• Architect and design data products using foundational data sets.<br>• Develop and maintain code for data products.<br>• Consult with business stakeholders on data strategy and current data assets.<br>• Provide specifications for data ingestion and transformation.<br>• Document and instruct others on using data products for automation and decision-making.<br>• Build data pipelines to automate the creation and deployment of knowledge from models.<br>• Monitor and improve statistical and machine learning models in data products.<br>• Work with data scientists to implement methodologies for marketing problem-solving.<br>• Coordinate with other science and technology teams.<br><br>Must be comfortable working onsite 4 days per week. Expert-level SQL is a must-have (4+ years of experience); Snowflake and AWS are nice to have.<br><br>Required Education:<br>• Bachelor's Degree in Computer Science, Information Systems, or a related field, or equivalent work experience.<br><br>Preferred Qualifications:<br>• Master's Degree is a plus.<br>• Project management and business analysis skills.<br><br>Basic Qualifications:<br>• Bachelor's degree in Computer Science, Information Systems, Software Engineering, or a related field.<br>• 5+ years of experience in analytics engineering and technology.<br>• Demonstrated academic achievement in statistics and probability.<br>• Proficiency in Python and SQL.<br>• Strong problem-solving, decision-making, and critical thinking skills.<br>• Outstanding interpersonal skills and ability to manage multiple priorities.<br>• Strong written and verbal communication skills.<br>• Ability to work independently and collaboratively in a diverse environment.<br><br>Education: STEM Bachelor's Degree
We are looking for a skilled Engineer to develop and enhance software solutions that address complex challenges in the real estate and property industry. This long-term contract position involves designing, coding, testing, and maintaining scalable and secure software systems. Based in Minneapolis, Minnesota, this role offers an opportunity to contribute to impactful engineering projects while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Design and implement software solutions that align with customer needs and organizational goals.<br>• Develop, test, debug, and document code to ensure reliability and performance.<br>• Collaborate with team members to solve technical challenges and remove roadblocks.<br>• Apply knowledge of frameworks and systems design to create stable and scalable software.<br>• Participate in product planning and provide input on technical strategies and solutions.<br>• Troubleshoot and analyze complex issues to identify and resolve defects.<br>• Mentor developers who are early in their careers and provide technical guidance to the team.<br>• Explore and adopt new technologies to enhance product performance and lifecycle.<br>• Contribute to DevOps processes, including support rotations and subsystem knowledge-building.<br>• Assist in recruiting efforts by participating in interviews and evaluating potential team members.
<p>Our client is undergoing a major digital transformation, shifting toward a cloud-native, API-driven infrastructure. They’re looking for a Data Engineer to help build a modern, scalable data platform that supports this evolution. This role will focus on creating secure, efficient data pipelines, preparing data for analytics, and enabling real-time data sharing across systems.</p><p>As the organization transitions from older, legacy systems to more dynamic, event-based and API-integrated models, the Data Engineer will be instrumental in modernizing the data environment—particularly across the bronze, silver, and gold layers of their medallion architecture.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and deploy scalable data pipelines in Azure using tools like Databricks, Spark, Delta Lake, DBT, Dagster, Airflow, and Parquet.</li><li>Build workflows to ingest data from various sources (e.g., SFTP, vendor APIs) into Azure Data Lake.</li><li>Develop and maintain data transformation layers (Bronze/Silver/Gold) within a medallion architecture.</li><li>Apply data quality checks, deduplication, and validation logic throughout the ingestion process.</li><li>Create reusable and parameterized notebooks for both batch and streaming data jobs.</li><li>Implement efficient merge/update logic in Delta Lake using partitioning strategies.</li><li>Work closely with business and application teams to gather and deliver data integration needs.</li><li>Support downstream integrations with APIs, Power BI dashboards, and SQL-based reports.</li><li>Set up monitoring, logging, and data lineage tracking using tools like Unity Catalog and Azure Monitor.</li><li>Participate in code reviews, design sessions, and agile backlog grooming.</li></ul><p><strong>Additional Technical Duties:</strong></p><ul><li><strong>SQL Server Development:</strong> Write and optimize stored procedures, functions, views, and indexing strategies for high-performance data processing.</li><li><strong>ETL/ELT Processes:</strong> Manage data extraction, transformation, and loading using SSIS and SQL batch jobs.</li></ul><p><strong>Tech Stack:</strong></p><ul><li><strong>Languages & Frameworks:</strong> Python, C#, .NET Core, SQL, T-SQL</li><li><strong>Databases & ETL Tools:</strong> SQL Server, SSIS, SSRS, Power BI</li><li><strong>API Development:</strong> ASP.NET Core Web API, RESTful APIs</li><li><strong>Cloud & Data Services (Roadmap):</strong> Azure Data Factory, Azure Functions, Azure Databricks, Azure SQL Database, Azure Data Lake, Azure Storage</li><li><strong>Streaming & Big Data (Roadmap):</strong> Delta Lake, Databricks, Kafka (preferred but not required)</li><li><strong>Governance & Security:</strong> Data integrity, performance tuning, access control, compliance</li><li><strong>Collaboration Tools:</strong> Jira, Confluence, Visio, Smartsheet</li></ul>
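Editor's note: a minimal bronze-to-silver transform of the kind this posting's medallion bullets describe, with dedup and a validation rule. Table paths, the business key, and rules are hypothetical, and a Delta-enabled SparkSession is assumed.

```python
# Illustrative bronze-to-silver medallion step in PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/lake/bronze/payments")

silver = (
    bronze.dropDuplicates(["payment_id"])                    # dedup on business key
          .filter(F.col("amount").isNotNull())               # validation rule
          .withColumn("ingested_at", F.current_timestamp())  # lineage column
)

(
    silver.write.format("delta")
    .mode("append")
    .partitionBy("payment_date")   # partitioning to support efficient merges
    .save("/lake/silver/payments")
)
```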
<p>Robert Half is currently partnering with a well-established company in San Diego that is looking for a Senior Data Engineer experienced in BigQuery, DBT (Data Build Tool), and GCP. This is a full-time, permanent-placement position that is 100% onsite in San Diego. We are looking for a Senior Data Engineer who is passionate about optimizing systems with advanced techniques in partitioning, indexing, and Google Sequences for efficient data processing. Must have experience in DBT!</p><p>Responsibilities:</p><ul><li>Design and implement scalable, high-performance data solutions on GCP.</li><li>Develop data pipelines, data warehouses, and data lakes using GCP services (BigQuery, DBT, etc.).</li><li>Build and maintain ETL/ELT pipelines to ingest, transform, and load data from various sources.</li><li>Ensure data quality, integrity, and security throughout the data lifecycle.</li><li>Design, develop, and implement a new version of a big data tool tailored to client requirements.</li><li>Leverage advanced expertise in DBT (Data Build Tool) and Google BigQuery to model and transform data pipelines.</li><li>Optimize systems with advanced techniques in partitioning, indexing, and Google Sequences for efficient data processing.</li><li>Collaborate cross-functionally with product and technical teams to align project deliverables with client goals.</li><li>Monitor, debug, and refine the performance of the big data tool throughout the development lifecycle.</li></ul><p><strong>Minimum Qualifications:</strong></p><ul><li>5+ years of experience in a data engineering role on GCP.</li><li>Proven experience in designing, building, and deploying data solutions on GCP.</li><li>Strong expertise in SQL, data warehouse design, and data pipeline development.</li><li>Understanding of cloud architecture principles and best practices.</li><li>Proven experience with DBT, BigQuery, and other big data tools.</li><li>Advanced knowledge of partitioning, indexing, and Google Sequences strategies.</li><li>Strong problem-solving skills with the ability to manage and troubleshoot complex systems.</li><li>Excellent written and verbal communication skills, including the ability to explain technical concepts to non-technical stakeholders.</li><li>Experience with Looker or other data visualization tools.</li></ul>
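Editor's note: the BigQuery partitioning work described above might look like the following sketch using the google-cloud-bigquery client. Project, dataset, table, and schema are hypothetical.

```python
# Illustrative creation of a partitioned, clustered BigQuery table.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

table = bigquery.Table(
    "example-project.analytics.events",
    schema=[
        bigquery.SchemaField("event_id", "STRING"),
        bigquery.SchemaField("user_id", "STRING"),
        bigquery.SchemaField("event_ts", "TIMESTAMP"),
    ],
)
# Day partitioning plus clustering keeps scan volume (and cost) proportional
# to the slice a query actually touches.
table.time_partitioning = bigquery.TimePartitioning(field="event_ts")
table.clustering_fields = ["user_id"]

client.create_table(table, exists_ok=True)
```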
<p>We are on the lookout for a Data Engineer in Basking Ridge, New Jersey. (1-2 days a week on-site*) In this role, you will be required to develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also be expected to have a hands-on approach towards application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS Technologies.</p><p>• Work extensively with Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize Cloud Technologies and Data Visualization tools to enhance business intelligence.</p><p>• Work with various types of data including Clinical Trials Data, Genomics and Bio Marker Data, Real World Data, and Discovery Data.</p><p>• Maintain familiarity with key industry best practices in a regulated “GXP” environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
<p>As a Data Scientist, you will analyze complex datasets to extract actionable insights and build data-driven solutions. You will work closely with cross-functional teams to design predictive models, create data visualizations, and develop algorithms that support business objectives.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Analyze large, structured, and unstructured datasets to identify patterns and trends.</li><li>Design, develop, and implement machine learning models and statistical algorithms.</li><li>Conduct exploratory data analysis (EDA) to support decision-making.</li><li>Collaborate with engineering and business teams to translate business needs into data-driven solutions.</li><li>Create and deploy predictive and prescriptive analytics solutions.</li><li>Build and maintain data pipelines and workflows to support data accessibility.</li><li>Develop visualizations, dashboards, and reports to communicate findings effectively.</li><li>Stay updated with the latest advancements in data science and integrate them into projects.</li></ul><p><br></p>
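Editor's note: a self-contained scikit-learn sketch of the predictive-modeling loop this posting describes, using synthetic stand-in data rather than any real dataset.

```python
# Illustrative train/evaluate cycle for a classification model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```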
<p><strong>Job Title:</strong> Cloud Data Engineer</p><p><strong>Location:</strong> Remote (occasional travel to the Washington D.C. metro area may be required)</p><p><strong>Clearance Required:</strong> Public Trust</p><p><strong>Position Overview</strong></p><p>We are seeking a customer-focused <strong>Cloud Data Engineer</strong> to join a dynamic team of subject matter experts and developers. This role involves designing and implementing full lifecycle data pipeline services for Azure-based data lake, SQL, and NoSQL data stores. The ideal candidate will be mission-driven, delivery-oriented, and skilled in translating business requirements into scalable data engineering solutions.</p><p><strong>Key Responsibilities</strong></p><ul><li>Maintain and operate legacy ETL processes using Microsoft SSIS, PowerShell, SQL procedures, SSAS, and .NET.</li><li>Develop and manage full lifecycle Azure cloud-native data pipelines.</li><li>Collaborate with stakeholders to understand data requirements and deliver effective solutions.</li><li>Design and implement data models and pipelines for various data architectures including relational, dimensional, lakehouse (medallion), warehouse, and mart.</li><li>Utilize Azure services such as Data Factory, Synapse Pipelines, Apache Spark Notebooks, Python, and SQL.</li><li>Migrate existing SSIS ETL scripts to Azure Data Factory and Synapse Pipelines.</li><li>Prepare data for advanced analytics, visualization, reporting, and AI/ML applications.</li><li>Ensure data integrity, quality, metadata management, and security across pipelines.</li><li>Monitor and troubleshoot data issues to maintain performance and availability.</li><li>Implement governance, CI/CD, and monitoring for automated platform operations.</li><li>Participate in Agile DevOps processes and continuous learning initiatives.</li><li>Maintain strict versioning and configuration control.</li></ul>
<p>We are looking for a skilled Software Engineer to join our team in Middletown, Ohio. This role offers a unique opportunity to contribute to the development of high-quality software solutions that drive efficiency and innovation within our organization. The successful candidate will focus on creating reliable data pipelines, customizing software systems, and improving business decision-making processes through advanced technology.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement robust data pipelines and automation processes, ensuring accurate data flow across the enterprise.</p><p>• Customize software systems, including targeted screen updates, workflow changes, and system enhancements using C# and .NET technologies.</p><p>• Build and extend functionalities for web services, endpoints, and Generic Inquiries to enhance system performance and usability.</p><p>• Monitor system performance and establish dashboards, alerts, and runbooks to ensure smooth operations and timely issue resolution.</p><p>• Execute cutovers and provide hypercare support, including performance tuning and rapid resolution of defects.</p><p>• Collaborate with team members to identify high-impact use cases for AI and technology improvements.</p><p>• Create proofs of concept and scale successful pilots into maintainable features that enhance business efficiency.</p><p>• Design and maintain data models, ensuring clean reconciliations and effective handling of historical changes in reference data.</p><p>• Utilize modern orchestration and transformation tools to optimize data migration and system integrations.</p><p>• Stay updated on emerging technologies and incorporate AI-assisted coding tools to improve development processes.</p>