<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable, performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
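<p>For illustration, here is a minimal PySpark Structured Streaming sketch of the real-time pipeline work described above: it reads a Kafka-compatible topic (Event Hubs exposes a Kafka endpoint) and lands the records in a Delta table. The broker address, topic name, schema, and storage paths are hypothetical placeholders, not details from this posting.</p>
<pre><code>
# Minimal sketch: Kafka/Event Hubs topic -> Delta table via Structured Streaming.
# Broker, topic, schema, and paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("user_id", StringType())
          .add("event_time", TimestampType()))

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
       .option("subscribe", "events")                      # hypothetical topic
       .load())

# The Kafka value arrives as bytes; cast and parse it against the schema.
parsed = (raw.select(from_json(col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

(parsed.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/events")  # illustrative path
 .outputMode("append")
 .start("/mnt/delta/events"))
</code></pre>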
We are looking for a skilled Data Engineer to join our team in Ann Arbor, Michigan, and contribute to the development of a modern, scalable data platform. In this role, you will focus on building efficient data pipelines, ensuring data quality, and enabling seamless integration across systems to support business analytics and decision-making. This position offers an exciting opportunity to work with cutting-edge technologies and play a key role in the transformation of our data environment.<br><br>Responsibilities:<br>• Design and implement robust data pipelines on Azure using tools such as Databricks, Spark, Delta Lake, and Airflow.<br>• Develop workflows to ingest and integrate data from diverse sources into Azure Data Lake.<br>• Build and maintain data transformation layers following the medallion architecture principles.<br>• Apply data quality checks, validation processes, and deduplication techniques to ensure accuracy and reliability.<br>• Create reusable and parameterized notebooks to streamline batch and streaming data processes.<br>• Optimize merge and update logic in Delta Lake by leveraging efficient partitioning strategies.<br>• Collaborate with business and application teams to understand and fulfill data integration requirements.<br>• Enable downstream integrations with APIs, Power BI dashboards, and reporting systems.<br>• Establish monitoring, logging, and data lineage tracking using tools like Unity Catalog and Azure Monitor.<br>• Participate in code reviews, agile development practices, and team design discussions.
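To make the merge/update optimization above concrete, here is a minimal Delta Lake sketch; the table paths, column names, and the load_date partition column are hypothetical.
<pre><code>
# Sketch of Delta Lake merge/update logic with a partition-pruning predicate.
# Table paths, columns, and the partition column (load_date) are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

updates = spark.read.format("delta").load("/mnt/silver/orders_staging")
target = DeltaTable.forPath(spark, "/mnt/silver/orders")

(target.alias("t")
 .merge(
     updates.alias("s"),
     # Including the partition column in the match condition lets Delta
     # prune untouched partitions instead of scanning the whole table.
     "t.order_id = s.order_id AND t.load_date = s.load_date")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
</code></pre>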
We are looking for an experienced Data Engineer to join our team on a contract basis. In this role, you will play a key part in developing and maintaining data systems that support business objectives. Based in Cincinnati, Ohio, the position offers an opportunity to work with cutting-edge technologies and collaborate with a dynamic team.<br><br>Responsibilities:<br>• Design, develop, and optimize data pipelines and workflows using ETL processes.<br>• Implement and maintain big data solutions leveraging Apache Spark, Hadoop, and Kafka.<br>• Collaborate with cross-functional teams to analyze data requirements and ensure seamless integration.<br>• Develop scalable and efficient data models to support business intelligence and analytics.<br>• Write clean, maintainable Python code for data processing and transformation.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Ensure compliance with data governance policies and security standards.<br>• Provide technical support and guidance for the implementation of data solutions.<br>• Document system architecture and processes to maintain clarity and consistency.<br>• Stay updated with emerging trends and technologies in data engineering.
Job Title: Data Engineer <br> Location: Hybrid, Chicago, IL <br> About the Role: Our company is seeking a Data Engineer with 5–7 years of experience in data engineering. This role is designed for individuals who excel at building robust, scalable data solutions in AWS cloud environments. As part of our team, you’ll engineer and optimize data pipelines critical to our analytics, reporting, and data-driven strategy, while collaborating cross-functionally in a hybrid Chicago-based setting. <br> Key Responsibilities: <br> Design, build, and maintain scalable data pipelines and architectures using AWS cloud services. Develop, manage, and optimize ETL/ELT workflows to acquire, clean, and transform data from diverse sources. Collaborate with business stakeholders, analysts, and data scientists to understand data requirements and deliver solutions. Ensure data quality, integrity, and security throughout all stages of the data lifecycle. Monitor pipeline performance and troubleshoot issues to maximize data reliability and efficiency. Apply data governance best practices and maintain technical documentation. <br> Must-Have Technologies: <br> Languages: Python, SQL <br> Cloud: AWS (S3, Redshift, Glue, Lambda, RDS, Data Pipeline) <br> Big Data: Apache Spark, Hadoop, Kafka <br> ETL Tools: Airflow, AWS Data Pipeline <br> Databases: PostgreSQL, MySQL, Redshift, MongoDB <br> Work Arrangement: Hybrid schedule with regular in-office collaboration in Chicago, IL. <br> Preferred Certifications (optional): AWS Certified Data Analytics – Specialty; AWS Certified Solutions Architect; Certified Data Professional (CDP)
<p>On behalf of our well-established client in the financial industry, Robert Half Talent Solutions, Technology Division is seeking a <strong>Senior Data Engineer</strong> with exceptional Python skills to lead and support a high-performing data team. This role is ideal for someone who thrives in a collaborative environment, enjoys mentoring others, and is passionate about building scalable data solutions.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li><strong>Python Leadership:</strong> Serve as the Python expert on the team, coaching and mentoring junior developers and data scientists.</li><li><strong>Fraud Engine Development:</strong> Design and implement a robust engine to identify and remediate fraudulent activity using advanced data techniques.</li><li><strong>Data Wrangling:</strong> Clean, transform, and prepare large datasets for analysis and modeling.</li><li><strong>Model Building:</strong> Collaborate with data scientists to build and deploy machine learning models within Databricks.</li><li><strong>ETL Pipeline Development:</strong> Design and maintain scalable ETL pipelines to support data ingestion and transformation.</li><li><strong>Azure Functions:</strong> Develop and deploy serverless functions to automate data workflows and support real-time processing.</li><li><strong>Databricks Architecture:</strong> Leverage Databricks for data engineering and machine learning workflows, ensuring best practices in architecture and performance.</li></ul><p><br></p>
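<p>As a hedged sketch of the serverless automation this role describes, the following uses the Azure Functions Python v2 programming model to run a scheduled data-workflow step; the schedule, function name, and body are illustrative assumptions, not the client's actual design.</p>
<pre><code>
# Sketch of a scheduled serverless step in the Azure Functions Python v2 model.
# The cron schedule and the work done inside the function are hypothetical.
import logging
import azure.functions as func

app = func.FunctionApp()

@app.schedule(schedule="0 */15 * * * *", arg_name="timer",
              run_on_startup=False, use_monitor=False)
def refresh_fraud_features(timer: func.TimerRequest) -> None:
    # In a real workflow this step might pull new transactions and push
    # engineered features to the fraud engine.
    logging.info("Fraud feature refresh fired; past due: %s", timer.past_due)
</code></pre>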
<p>As a Data Scientist, you will analyze complex datasets to extract actionable insights and build data-driven solutions. You will work closely with cross-functional teams to design predictive models, create data visualizations, and develop algorithms that support business objectives.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Analyze large, structured, and unstructured datasets to identify patterns and trends.</li><li>Design, develop, and implement machine learning models and statistical algorithms.</li><li>Conduct exploratory data analysis (EDA) to support decision-making.</li><li>Collaborate with engineering and business teams to translate business needs into data-driven solutions.</li><li>Create and deploy predictive and prescriptive analytics solutions.</li><li>Build and maintain data pipelines and workflows to support data accessibility.</li><li>Develop visualizations, dashboards, and reports to communicate findings effectively.</li><li>Stay updated with the latest advancements in data science and integrate them into projects.</li></ul><p><br></p>
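<p>For illustration, here is a minimal scikit-learn sketch of the model-building workflow described above; the dataset, features, and target are hypothetical.</p>
<pre><code>
# Minimal predictive-modeling sketch with scikit-learn.
# The CSV file, feature columns, and "churned" target are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("customer_events.csv")   # hypothetical dataset
X = df.drop(columns=["churned"])          # features
y = df["churned"]                         # binary target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split to estimate generalization.
print(classification_report(y_test, model.predict(X_test)))
</code></pre>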
We are looking for an experienced Data Engineer to join our team in Johns Creek, Georgia, on a Contract to permanent basis. This position offers an exciting opportunity to design and optimize data pipelines, manage cloud systems, and contribute to the scalability of our Azure environment. The ideal candidate will bring advanced technical skills and a collaborative mindset to support critical data infrastructure initiatives.<br><br>Responsibilities:<br>• Develop and enhance data pipelines using Azure Data Factory to ensure efficient data processing and integration.<br>• Manage and administer Azure Managed Instances to support database operations and ensure system reliability.<br>• Implement real-time data replication from on-premises systems to cloud environments to support seamless data accessibility.<br>• Utilize advanced ETL tools and processes to transform and integrate complex data workflows.<br>• Collaborate with cross-functional teams to ensure data integration across systems, with a preference for experience in Salesforce integration.<br>• Leverage real-time streaming technologies such as Confluent Cloud or Apache Kafka to support dynamic data environments.<br>• Optimize data workflows using tools like Apache Spark and Hadoop to enhance processing performance.<br>• Troubleshoot and resolve database-related issues to maintain system stability and performance.<br>• Work closely with stakeholders to understand data requirements and provide innovative solutions.
<p>We are on the lookout for a Data Engineer in Basking Ridge, New Jersey (1-2 days a week on-site). In this role, you will develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also be expected to take a hands-on approach to application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS technologies.</p><p>• Work extensively with the Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize cloud technologies and data visualization tools to enhance business intelligence.</p><p>• Work with various types of data, including clinical trials data, genomics and biomarker data, real-world data, and discovery data.</p><p>• Maintain familiarity with key industry best practices in a regulated “GxP” environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
<p>Our client is undergoing a major digital transformation, shifting toward a cloud-native, API-driven infrastructure. They’re looking for a Data Engineer to help build a modern, scalable data platform that supports this evolution. This role will focus on creating secure, efficient data pipelines, preparing data for analytics, and enabling real-time data sharing across systems.</p><p>As the organization transitions from older, legacy systems to more dynamic, event-based and API-integrated models, the Data Engineer will be instrumental in modernizing the data environment—particularly across the bronze, silver, and gold layers of their medallion architecture.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and deploy scalable data pipelines in Azure using tools like Databricks, Spark, Delta Lake, DBT, Dagster, Airflow, and Parquet.</li><li>Build workflows to ingest data from various sources (e.g., SFTP, vendor APIs) into Azure Data Lake.</li><li>Develop and maintain data transformation layers (Bronze/Silver/Gold) within a medallion architecture.</li><li>Apply data quality checks, deduplication, and validation logic throughout the ingestion process.</li><li>Create reusable and parameterized notebooks for both batch and streaming data jobs.</li><li>Implement efficient merge/update logic in Delta Lake using partitioning strategies.</li><li>Work closely with business and application teams to gather and deliver data integration needs.</li><li>Support downstream integrations with APIs, Power BI dashboards, and SQL-based reports.</li><li>Set up monitoring, logging, and data lineage tracking using tools like Unity Catalog and Azure Monitor.</li><li>Participate in code reviews, design sessions, and agile backlog grooming.</li></ul><p><strong>Additional Technical Duties:</strong></p><ul><li><strong>SQL Server Development:</strong> Write and optimize stored procedures, functions, views, and indexing strategies for high-performance data processing.</li><li><strong>ETL/ELT Processes:</strong> Manage data extraction, transformation, and loading using SSIS and SQL batch jobs.</li></ul><p><strong>Tech Stack:</strong></p><ul><li><strong>Languages & Frameworks:</strong> Python, C#, .NET Core, SQL, T-SQL</li><li><strong>Databases & ETL Tools:</strong> SQL Server, SSIS, SSRS, Power BI</li><li><strong>API Development:</strong> ASP.NET Core Web API, RESTful APIs</li><li><strong>Cloud & Data Services (Roadmap):</strong> Azure Data Factory, Azure Functions, Azure Databricks, Azure SQL Database, Azure Data Lake, Azure Storage</li><li><strong>Streaming & Big Data (Roadmap):</strong> Delta Lake, Databricks, Kafka (preferred but not required)</li><li><strong>Governance & Security:</strong> Data integrity, performance tuning, access control, compliance</li><li><strong>Collaboration Tools:</strong> Jira, Confluence, Visio, Smartsheet</li></ul>
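<p>As a minimal sketch of the bronze-to-silver step in the medallion architecture described above, the following combines deduplication and basic validation before writing to the silver layer; the paths, business key, and columns are hypothetical.</p>
<pre><code>
# Sketch of a bronze -> silver transformation: deduplicate, validate, write.
# Paths, the record_id key, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, row_number
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/mnt/bronze/vendor_feed")

# Deduplication: keep only the most recently ingested row per business key.
latest = Window.partitionBy("record_id").orderBy(col("ingested_at").desc())
deduped = (bronze.withColumn("rn", row_number().over(latest))
                 .filter(col("rn") == 1)
                 .drop("rn"))

# Validation: reject rows missing required fields before promotion to silver.
valid = deduped.filter(col("record_id").isNotNull() & col("amount").isNotNull())

valid.write.format("delta").mode("overwrite").save("/mnt/silver/vendor_feed")
</code></pre>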
We are looking for an experienced Senior Data Engineer to join our team in Glendale, California. In this long-term contract position, you will play a critical role in ensuring secure and efficient data management across enterprise systems, leveraging cutting-edge tools and methodologies. The ideal candidate will bring expertise in data governance, security, and engineering to support large-scale enterprise environments.<br><br>Responsibilities:<br>• Implement and manage data access governance solutions using tools such as Immuta, SecuPi, or similar technologies.<br>• Develop and maintain data lineage, cataloging, and classification systems with tools like Alation or BigID.<br>• Collaborate with stakeholders to ensure compliance with data security policies and standards.<br>• Design and optimize secure data pipelines and workflows in enterprise environments.<br>• Apply agile and scrum methodologies to manage projects effectively and enhance team collaboration.<br>• Monitor and troubleshoot data systems to ensure optimal performance and security.<br>• Provide expertise in Snowflake configuration and management, ensuring secure data access and storage.<br>• Conduct audits and assessments to identify and mitigate data security risks.<br>• Work closely with cross-functional teams to implement best practices in data governance and engineering.<br>• Stay updated on industry trends and emerging technologies to continuously improve data security strategies.
We are looking for a skilled Data Engineer to join our team in Cleveland, Ohio. This long-term contract position offers the opportunity to contribute to the development and optimization of data platforms, with a primary focus on Snowflake and Apache Airflow technologies. You will play a key role in ensuring efficient data management and processing to support critical business needs.<br><br>Responsibilities:<br>• Design, develop, and maintain data pipelines using Snowflake and Apache Airflow.<br>• Collaborate with cross-functional teams to implement scalable data solutions.<br>• Optimize data processing workflows to ensure high performance and reliability.<br>• Monitor and troubleshoot issues within the Snowflake data platform.<br>• Develop ETL processes to support data integration and transformation.<br>• Work with tools such as Apache Spark, Hadoop, and Kafka to manage large-scale data operations.<br>• Implement robust data warehousing strategies to support business intelligence initiatives.<br>• Analyze and resolve data-related technical challenges promptly.<br>• Provide support and guidance during Snowflake deployments across subsidiaries.<br>• Document processes and ensure best practices for data engineering are followed.
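For illustration, a minimal Airflow DAG sketch that schedules a daily Snowflake job, assuming Airflow 2.x with the common-sql and Snowflake provider packages installed; the connection id and SQL are hypothetical.
<pre><code>
# Sketch of an Airflow DAG running a daily Snowflake transformation.
# Assumes apache-airflow-providers-snowflake and -common-sql are installed;
# the connection id, stored procedure, and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator

with DAG(
    dag_id="daily_snowflake_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_orders = SQLExecuteQueryOperator(
        task_id="load_orders",
        conn_id="snowflake_default",          # hypothetical connection id
        sql="CALL analytics.load_orders();",  # hypothetical stored procedure
    )
</code></pre>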
We are looking for an experienced Data Engineer to join our team in Brookfield, Wisconsin. This role offers the opportunity to work in a dynamic environment, tackling diverse data engineering challenges while contributing to impactful projects. As a valued team member, you will leverage advanced tools and technologies to design, optimize, and implement data solutions that support organizational goals.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to ensure efficient data flow and integration.<br>• Design and implement data models and relational database systems to support analytics and reporting.<br>• Collaborate with cross-functional teams to gather requirements and deliver data solutions aligned with business needs.<br>• Utilize tools such as Apache Spark, Hadoop, and Kafka to process and manage large-scale data.<br>• Perform Extract, Transform, Load (ETL) operations to prepare data for analysis and decision-making.<br>• Create and maintain documentation for data architecture, processes, and workflows.<br>• Monitor and troubleshoot data systems to ensure optimal performance and reliability.<br>• Integrate cloud-based data storage solutions like Snowflake or Azure into existing infrastructure.<br>• Work with data visualization tools, such as Power BI and Tableau, to provide actionable insights.<br>• Stay updated on emerging technologies and trends in data engineering to enhance system capabilities.
<p><strong>Senior Data Engineer</strong></p><p><strong>Location:</strong> Calabasas, CA (Fully Remote if outside 50 miles)</p><p> <strong>Compensation:</strong> $140K–$160K </p><p> <strong>Reports to:</strong> Director of Data Engineering</p><p>Our entertainment client is seeking a <strong>Senior Data Engineer</strong> to design, build, and optimize enterprise data pipelines and cloud infrastructure. This hands-on role focuses on implementing scalable data architectures, developing automation, and driving modern data engineering best practices across the company.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and maintain ELT/ETL pipelines in Snowflake, Databricks, and AWS.</li><li>Build and orchestrate workflows using Python, SQL, Airflow, and dbt.</li><li>Implement medallion/lakehouse architectures and event-driven pipelines.</li><li>Manage AWS services (Lambda, EC2, S3, Glue) and infrastructure-as-code (Terraform).</li><li>Optimize data performance, quality, and governance across systems.</li></ul><p>For immediate consideration, direct message Reid Gormly on LinkedIn and Apply Now!</p>
<p><strong><u>Data Engineer</u></strong></p><p><strong>Onsite 4x week in El Segundo</strong></p><p><strong>$130K - $160K + benefits</strong></p><p>We are looking for an experienced Data Engineer to join our dynamic team in El Segundo, California. In this role, you will play a key part in designing, developing, and optimizing data pipelines and architectures to support business operations and analytics. This position offers the opportunity to work on cutting-edge technologies, including AI and machine learning applications.</p><p><br></p><p>Responsibilities:</p><p>• Develop, test, and maintain scalable data pipelines and architectures to support business intelligence and analytics needs.</p><p>• Collaborate with cross-functional teams to integrate data from diverse sources, including D365 Commerce and Adobe Experience Platform.</p><p>• Utilize Python, PySpark, and Azure data services to transform and orchestrate datasets.</p><p>• Implement and manage Kafka-based systems for real-time data streaming.</p><p>• Ensure compliance with data governance, security, and privacy standards.</p><p>• Optimize data storage solutions, leveraging medallion architecture and modern data modeling practices.</p><p>• Prepare datasets for AI/ML applications and advanced analytical models.</p><p>• Monitor, troubleshoot, and improve the performance of data systems.</p><p>• Design semantic models and dashboards using Power BI to support decision-making.</p><p>• Stay updated on emerging technologies and best practices in data engineering.</p>
We are looking for a highly skilled Senior Data Engineer to join our team on a long-term contract basis. In this role, you will design and implement robust data pipelines and architectures to support data-driven decision-making across the organization. You will work closely with cross-functional teams to deliver scalable, secure, and high-performance data solutions using cutting-edge tools and technologies. This position is based in Dallas, Texas.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using tools like Apache Airflow, NiFi, and Databricks to streamline data ingestion and transformation.<br>• Implement and manage real-time data streaming solutions utilizing Apache Kafka and Flink.<br>• Optimize and oversee data storage systems with technologies such as Hadoop and Amazon S3 to ensure efficiency and scalability.<br>• Establish and enforce data governance, quality, and security protocols through best practices and monitoring systems.<br>• Manage complex workflows and processes across hybrid and multi-cloud environments.<br>• Work with diverse data formats, including Parquet and Avro, to enhance data accessibility and integration.<br>• Troubleshoot and fine-tune distributed data systems to maximize performance and reliability.<br>• Mentor and guide engineers at the beginning of their careers to promote a culture of collaboration and technical excellence.
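As a sketch of the real-time streaming side of this role, here is a minimal Kafka consumer using the confluent-kafka client; the broker, consumer group, and topic are hypothetical.
<pre><code>
# Minimal Kafka consumer sketch using the confluent-kafka client.
# Broker address, group id, and topic name are hypothetical.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # hypothetical broker
    "group.id": "pipeline-consumers",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["transactions"])      # hypothetical topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # Hand the payload off to the transformation layer here.
        print(msg.value().decode("utf-8"))
finally:
    consumer.close()
</code></pre>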
Description: As a Senior Analytics Engineer at The Walt Disney Studios, you will play a pivotal role in the transformation of data into actionable insights. Collaborate with our dynamic team of technologists to develop cutting-edge data solutions that drive innovation and fuel business growth. Your responsibilities will include managing complex data structures and delivering scalable and efficient data solutions. Your expertise in data engineering will be crucial in optimizing our data-driven decision-making processes. If you're passionate about leveraging data to make a tangible impact, we welcome you to join us in shaping the future of our organization.<br><br>You will:<br>• Architect and design data products using foundational data sets.<br>• Develop and maintain code for data products.<br>• Consult with business stakeholders on data strategy and current data assets.<br>• Provide specifications for data ingestion and transformation.<br>• Document and instruct others on using data products for automation and decision-making.<br>• Build data pipelines to automate the creation and deployment of knowledge from models.<br>• Monitor and improve statistical and machine learning models in data products.<br>• Work with data scientists to implement methodologies for marketing problem-solving.<br>• Coordinate with other science and technology teams.<br><br>Must be comfortable working onsite 4 days per week. Expert-level SQL is a must (4+ years of experience); Snowflake and AWS are nice to have.<br><br>Required Education:<br>• Bachelor's Degree in Computer Science, Information Systems, or a related field, or equivalent work experience.<br>• Master's Degree is a plus.<br><br>Basic Qualifications:<br>• Bachelor's degree in Computer Science, Information Systems, Software Engineering, or a related field.<br>• 5+ years of experience in analytics engineering and technology.<br>• Demonstrated academic achievement in statistics and probability.<br>• Proficiency in Python and SQL.<br>• Strong problem-solving, decision-making, and critical thinking skills.<br>• Outstanding interpersonal skills and ability to manage multiple priorities.<br>• Strong written and verbal communication skills.<br>• Ability to work independently and collaboratively in a diverse environment.<br>• Project management and business analysis skills preferred.<br><br>Education: STEM Bachelor's Degree
<p>Robert Half is hiring a highly skilled and innovative Intelligent Automation Engineer to design, develop, and deploy advanced automation solutions using Microsoft Power Automate, Python, and AI technologies. This role is ideal for a hands-on technologist passionate about streamlining business processes, integrating systems, and applying cutting-edge AI to drive intelligent decision-making. This role is a hybrid position based in Philadelphia. For consideration, please apply directly. </p><p><br></p><p>Key Responsibilities</p><ul><li>Design and implement end-to-end automation workflows using Microsoft Power Automate (Cloud & Desktop).</li><li>Develop Python scripts and APIs to support automation, system integration, and data pipeline management.</li><li>Integrate Power Automate with Azure services (Logic Apps, Functions, AI Services, App Insights) and enterprise platforms such as SharePoint, Dynamics 365, and Microsoft Teams.</li><li>Apply Generative AI, LLMs, and Conversational AI to enhance automation with intelligent, context-aware interactions.</li><li>Leverage Agentic AI frameworks (LangChain, AutoGen, CrewAI, OpenAI Function Calling) to build dynamic, adaptive automation solutions.</li></ul>
<p>We are looking for an experienced Senior Data Engineer to join our team. This role involves designing and implementing scalable data solutions, optimizing data workflows, and driving innovation in data architecture. The ideal candidate will possess strong leadership qualities and a passion for problem-solving in a fast-paced, cutting-edge environment.</p><p><br></p><p>Responsibilities:</p><p>• Develop high-performance data systems, including databases, APIs, and data integration pipelines, to support scalable solutions.</p><p>• Design and implement metadata-driven architectures and automate deployment processes using infrastructure-as-code principles.</p><p>• Promote best practices in software engineering, such as code reviews, testing, and continuous integration/delivery (CI/CD).</p><p>• Establish and maintain a robust data governance framework to ensure compliance and data integrity.</p><p>• Monitor processes and implement improvements, including query optimization, code refactoring, and efficiency enhancements.</p><p>• Leverage cloud platforms, particularly Azure and Databricks, to improve system architecture and scalability.</p><p>• Conduct data quality checks and build procedures to address and resolve data issues effectively.</p><p>• Create and maintain documentation for data architecture, standards, and best practices.</p><p>• Provide technical leadership to the team, guiding design discussions and fostering innovation in data infrastructure.</p><p>• Identify and implement opportunities for process optimization and automation to improve operational efficiency.</p>
We are looking for a skilled Data Engineer to join our team in Johnson City, Texas. In this role, you will design and optimize data solutions to enable seamless data transfer and management in Snowflake. You will work collaboratively with cross-functional teams to enhance data accessibility and support data-driven decision-making across the organization.<br><br>Responsibilities:<br>• Design, develop, and implement ETL solutions to facilitate data transfer between diverse sources and Snowflake.<br>• Optimize the performance of Snowflake databases by constructing efficient data structures and utilizing indexes.<br>• Develop and maintain automated, scalable data pipelines within the Snowflake environment.<br>• Deploy and configure monitoring tools to ensure optimal performance of the Snowflake platform.<br>• Collaborate with product managers and agile teams to refine requirements and deliver solutions.<br>• Create integrations to accommodate growing data volume and complexity.<br>• Enhance data models to improve accessibility for business intelligence tools.<br>• Implement systems to ensure data quality and availability for stakeholders.<br>• Write unit and integration tests while documenting technical work.<br>• Automate testing and deployment processes in Snowflake within Azure.
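To illustrate the kind of ETL step this role describes, here is a minimal sketch using the snowflake-connector-python package to load staged files into a table; the account, credentials, stage, and table names are hypothetical.
<pre><code>
# Sketch of loading staged Parquet files into Snowflake via COPY INTO.
# Account, credentials, warehouse, stage, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # hypothetical account identifier
    user="ETL_USER",
    password="...",              # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # COPY INTO pulls the staged files into the target table in one set-based load.
    cur.execute("""
        COPY INTO raw.orders
        FROM @orders_stage
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
</code></pre>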
<p><strong>Position: Data Engineer</strong></p><p><strong>Location: Des Moines, IA - HYBRID</strong></p><p><strong>Salary: up to $130K permanent position plus exceptional benefits</strong></p><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***</strong></p><p> </p><p>Our client is one of the best employers in town. Come join this successful organization with smart, talented, results-oriented team members. You will find that passion in your career again, working together with some of the best in the business.</p><p> </p><p>Are you an experienced Senior Data Engineer seeking a new adventure that entails enhancing data reliability and quality for an industry leader? Look no further! Our client has a robust data and reporting team and needs you to bolster their data warehouse and data solutions and facilitate data extraction, transformation, and reporting.</p><p> </p><p>Key Responsibilities:</p><ul><li>Create and maintain data architecture and data models for efficient information storage and retrieval.</li><li>Ensure rigorous data collection from various sources and storage in a centralized location, such as a data warehouse.</li><li>Design and implement data pipelines for ETL using tools like SSIS and Azure Data Factory.</li><li>Monitor data performance and troubleshoot any issues in the data pipeline.</li><li>Collaborate with development teams to track work progress and ensure timely completion of tasks.</li><li>Implement data validation and cleansing processes to ensure data quality and accuracy.</li><li>Optimize performance to ensure efficient execution of data queries and reports.</li><li>Uphold data security by storing data securely and restricting access to sensitive data to authorized users only.</li></ul><p>Qualifications:</p><ul><li>A 4-year degree related to computer science or equivalent work experience.</li><li>At least 5 years of professional experience.</li><li>Strong SQL Server and relational database experience.</li><li>Proficiency in SSIS, SSRS.</li><li>.Net experience is a plus.</li></ul><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***</strong></p><p> </p>
<p><strong><u>DATA ENGINEER</u></strong></p><p> -- <em>Job Type</em> = Permanent, Full-time Employment</p><p> -- <em>Job Location</em> = South Bay - Los Angeles, CA</p><p><br></p><p><strong><em><u>Job Summary</u></em></strong></p><p>This role is responsible for supporting the organization’s enterprise data systems, with a primary focus on modernizing and centralizing data architecture to enable scalable analytics and AI applications. Reporting to the VP of Business Insights and Analytics, this position will collaborate closely with teams across Sales, Marketing, and Technology to unify data from platforms such as Azure Fabric, Microsoft Dynamics 365, Adobe Experience Platform, and Power BI. The Data Engineer will play a key role in transforming fragmented data sources into a cohesive, AI-ready foundation that drives strategic insights, operational efficiency, and new revenue opportunities.</p>
<p><strong>Position: Databricks Data Engineer</strong></p><p><strong>Location:</strong> Remote (U.S. based) — Preference for candidates in or willing to relocate to <strong>Washington, DC</strong> or <strong>Indianapolis, IN</strong> for periodic on-site support</p><p><strong>Citizenship Requirement:</strong> U.S. Citizen</p><p><br></p><p><strong>Role Summary:</strong></p><p>Seeking a Databricks Data Engineer to develop and support data pipelines and analytics environments within an Azure cloud-based data lake. This role translates business requirements into scalable data engineering solutions and supports ongoing ETL operations with a focus on data quality and management.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and optimize scalable data solutions using <strong>Databricks</strong> and <strong>Medallion Architecture</strong>.</li><li>Develop ingestion routines for multi-terabyte datasets across multiple projects and Databricks workspaces.</li><li>Integrate structured and unstructured data sources to enable high-quality business insights.</li><li>Apply data analysis techniques to extract insights from large datasets.</li><li>Implement data management strategies to ensure data integrity, availability, and accessibility.</li><li>Identify and execute cost optimization strategies in data storage, processing, and analytics.</li><li>Monitor and respond to user requests, addressing performance issues, cluster stability, Spark optimization, and configuration management.</li><li>Collaborate with cross-functional teams to support AI-driven analytics and data science workflows.</li><li>Integrate with Azure services including:<ul><li>Azure Functions</li><li>Storage Services</li><li>Data Factory</li><li>Log Analytics</li><li>User Management</li></ul></li><li>Provision and manage infrastructure using <strong>Infrastructure-as-Code (IaC)</strong>.</li><li>Apply best practices for <strong>data security</strong>, <strong>governance</strong>, and <strong>compliance</strong>, supporting federal regulations and public trust standards.</li><li>Work closely with technical and non-technical teams to gather requirements and translate business needs into data solutions.</li></ul><p><strong>Preferred Experience:</strong></p><ul><li>Hands-on experience with the above Azure services.</li><li>Strong foundation in <strong>advanced AI technologies</strong>.</li><li>Experience with <strong>Databricks</strong>, <strong>Spark</strong>, and <strong>Python</strong>.</li><li>Familiarity with <strong>.NET</strong> is a plus.</li></ul>
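<p>As an illustrative sketch of a scalable ingestion routine in Databricks, the following uses Auto Loader (the cloudFiles source) to stream raw files into a bronze Delta table incrementally; the storage container, file format, and paths are hypothetical.</p>
<pre><code>
# Sketch of Databricks Auto Loader ingestion into a bronze Delta table.
# The ADLS container, file format, schema location, and paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/mnt/schemas/claims")  # schema tracking
          .load("abfss://landing@mydatalake.dfs.core.windows.net/claims/"))

(stream.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/claims_bronze")
 .trigger(availableNow=True)   # process the backlog incrementally, then stop
 .start("/mnt/bronze/claims"))
</code></pre>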
<p>We are looking for a skilled <strong>Data Engineer</strong> to design and build robust data solutions that align with business objectives. In this role, you will collaborate with cross-functional teams to develop and maintain scalable data architectures, pipelines, and models. Your expertise will ensure the quality, security, and compliance of data systems while contributing to the organization’s data-driven decision-making processes. Call 319-362-8606, or email your resume directly to Shania Lewis - Technology Recruiting Manager at Robert Half (email information is on LinkedIn). Let's talk!!</p><p><br></p><p><strong>Responsibilities:</strong></p><ul><li>Design and implement scalable data architectures, pipelines, and models.</li><li>Translate business requirements into practical data solutions.</li><li>Ensure data quality, security, and regulatory compliance.</li><li>Maintain and improve existing data infrastructure.</li><li>Optimize system performance for efficiency and reliability.</li><li>Research and recommend emerging data technologies.</li><li>Mentor team members and foster collaboration.</li><li>Enable effective analytics through robust data solutions.</li></ul>
We are looking for a skilled Engineer to develop and enhance software solutions that address complex challenges in the real estate and property industry. This long-term contract position involves designing, coding, testing, and maintaining scalable and secure software systems. Based in Minneapolis, Minnesota, this role offers an opportunity to contribute to impactful engineering projects while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Design and implement software solutions that align with customer needs and organizational goals.<br>• Develop, test, debug, and document code to ensure reliability and performance.<br>• Collaborate with team members to solve technical challenges and remove roadblocks.<br>• Apply knowledge of frameworks and systems design to create stable and scalable software.<br>• Participate in product planning and provide input on technical strategies and solutions.<br>• Troubleshoot and analyze complex issues to identify and resolve defects.<br>• Mentor developers who are early in their careers and provide technical guidance to the team.<br>• Explore and adopt new technologies to enhance product performance and lifecycle.<br>• Contribute to DevOps processes, including support rotations and subsystem knowledge-building.<br>• Assist in recruiting efforts by participating in interviews and evaluating potential team members.
<p><strong>Job Title:</strong> Cloud Data Engineer</p><p><strong>Location:</strong> Remote (occasional travel to the Washington D.C. metro area may be required)</p><p><strong>Clearance Required:</strong> Public Trust</p><p><strong>Position Overview</strong></p><p>We are seeking a customer-focused <strong>Cloud Data Engineer</strong> to join a dynamic team of subject matter experts and developers. This role involves designing and implementing full lifecycle data pipeline services for Azure-based data lake, SQL, and NoSQL data stores. The ideal candidate will be mission-driven, delivery-oriented, and skilled in translating business requirements into scalable data engineering solutions.</p><p><strong>Key Responsibilities</strong></p><ul><li>Maintain and operate legacy ETL processes using Microsoft SSIS, PowerShell, SQL procedures, SSAS, and .NET.</li><li>Develop and manage full lifecycle Azure cloud-native data pipelines.</li><li>Collaborate with stakeholders to understand data requirements and deliver effective solutions.</li><li>Design and implement data models and pipelines for various data architectures including relational, dimensional, lakehouse (medallion), warehouse, and mart.</li><li>Utilize Azure services such as Data Factory, Synapse Pipelines, Apache Spark Notebooks, Python, and SQL.</li><li>Migrate existing SSIS ETL scripts to Azure Data Factory and Synapse Pipelines.</li><li>Prepare data for advanced analytics, visualization, reporting, and AI/ML applications.</li><li>Ensure data integrity, quality, metadata management, and security across pipelines.</li><li>Monitor and troubleshoot data issues to maintain performance and availability.</li><li>Implement governance, CI/CD, and monitoring for automated platform operations.</li><li>Participate in Agile DevOps processes and continuous learning initiatives.</li><li>Maintain strict versioning and configuration control.</li></ul>
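<p>As a hedged sketch of migrating an SSIS-style transform into a Spark notebook step (for example, inside a Synapse or Data Factory pipeline), the following reads from a bronze zone, cleans and deduplicates, and writes to a curated zone; the storage paths and columns are hypothetical.</p>
<pre><code>
# Sketch of a Spark notebook step replacing a legacy SSIS transform.
# Storage paths, the claim_id key, and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.getOrCreate()

raw = spark.read.parquet("abfss://bronze@datalake.dfs.core.windows.net/claims/")

cleaned = (raw
           .withColumn("claim_date", to_date(col("claim_date")))  # normalize types
           .filter(col("claim_id").isNotNull())                   # basic validation
           .dropDuplicates(["claim_id"]))                         # deduplicate on key

# Write to the curated zone for downstream reporting and AI/ML use.
cleaned.write.mode("overwrite").parquet(
    "abfss://silver@datalake.dfs.core.windows.net/claims/")
</code></pre>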