Data Engineer
Our client is undergoing a major digital transformation, shifting toward a cloud-native, API-driven infrastructure. They’re looking for a Data Engineer to help build a modern, scalable data platform that supports this evolution. The role centers on building secure, efficient data pipelines, preparing data for analytics, and enabling real-time data sharing across systems.

As the organization transitions from legacy systems to event-based, API-integrated models, the Data Engineer will be instrumental in modernizing the data environment, particularly across the Bronze, Silver, and Gold layers of its medallion architecture.

Key Responsibilities:
  • Design and deploy scalable data pipelines in Azure using tools such as Databricks, Spark, Delta Lake, dbt, Dagster, Airflow, and Parquet.
  • Build workflows to ingest data from various sources (e.g., SFTP, vendor APIs) into Azure Data Lake.
  • Develop and maintain data transformation layers (Bronze/Silver/Gold) within a medallion architecture.
  • Apply data quality checks, deduplication, and validation logic throughout the ingestion process.
  • Create reusable, parameterized notebooks for both batch and streaming data jobs.
  • Implement efficient merge/update logic in Delta Lake using partitioning strategies (see the sketch after the tech stack list).
  • Work closely with business and application teams to gather and deliver data integration requirements.
  • Support downstream integrations with APIs, Power BI dashboards, and SQL-based reports.
  • Set up monitoring, logging, and data lineage tracking using tools such as Unity Catalog and Azure Monitor.
  • Participate in code reviews, design sessions, and agile backlog grooming.

Additional Technical Duties:
  • SQL Server Development: Write and optimize stored procedures, functions, views, and indexing strategies for high-performance data processing.
  • ETL/ELT Processes: Manage data extraction, transformation, and loading using SSIS and SQL batch jobs.

Tech Stack:
  • Languages & Frameworks: Python, C#, .NET Core, SQL, T-SQL
  • Databases & ETL Tools: SQL Server, SSIS, SSRS, Power BI
  • API Development: ASP.NET Core Web API, RESTful APIs
  • Cloud & Data Services (Roadmap): Azure Data Factory, Azure Functions, Azure Databricks, Azure SQL Database, Azure Data Lake, Azure Storage
  • Streaming & Big Data (Roadmap): Delta Lake, Databricks, Kafka (preferred but not required)
  • Governance & Security: Data integrity, performance tuning, access control, compliance
  • Collaboration Tools: Jira, Confluence, Visio, Smartsheet
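For context on the Delta Lake merge/update responsibility above, here is a minimal PySpark sketch of a Bronze-to-Silver upsert with deduplication and partition pruning. The table paths and the loan_id, updated_at, and load_date columns are illustrative assumptions, not details from the posting.

```python
# Hypothetical sketch: Bronze -> Silver upsert with deduplication in Delta Lake.
# Paths and column names (loan_id, updated_at, load_date) are assumed for
# illustration; they are not specified in the posting.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.getOrCreate()

# Read the latest Bronze batch (raw, as-landed records).
bronze = spark.read.format("delta").load("/mnt/datalake/bronze/loans")

# Deduplicate: keep only the newest record per business key.
w = Window.partitionBy("loan_id").orderBy(F.col("updated_at").desc())
latest = (bronze
          .withColumn("rn", F.row_number().over(w))
          .filter("rn = 1")
          .drop("rn"))

# Merge into the Silver table. Including the partition column (load_date,
# assumed stable per key here) in the join condition lets Delta prune
# partitions the batch cannot touch.
silver = DeltaTable.forPath(spark, "/mnt/datalake/silver/loans")
(silver.alias("t")
 .merge(latest.alias("s"),
        "t.loan_id = s.loan_id AND t.load_date = s.load_date")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```

Constraining the merge condition on the partition column is usually the main lever for merge performance, since it keeps the operation from scanning the full Silver table.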
Skills & Competencies:
  • Deep expertise in SQL Server and T-SQL, including performance tuning and query optimization
  • Strong understanding of data ingestion strategies and partitioning
  • Proficiency in PySpark/SQL with a focus on performance
  • Solid knowledge of modern data lake architecture and structured streaming (see the streaming sketch after the qualifications list)
  • Excellent problem-solving and debugging abilities
  • Strong collaboration and communication skills, with attention to documentation

Qualifications:
  • Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
  • 5+ years of experience building data pipelines and distributed data systems
  • Strong hands-on experience with Databricks, Delta Lake, and Azure big data tools
  • Experience working in financial or regulated data environments is preferred
  • Familiarity with Git, CI/CD workflows, and agile development practices
  • Background in mortgage servicing or lending is a plus
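As a companion to the structured-streaming skill noted above, here is a minimal sketch of incremental file ingestion into a Bronze Delta table with Spark Structured Streaming. The landing path, schema, and checkpoint location are hypothetical; on Databricks the same pattern is commonly implemented with Auto Loader (format "cloudFiles").

```python
# Hypothetical sketch: streaming ingestion of landed files into a Bronze
# Delta table. All paths and the schema below are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Incrementally pick up new Parquet files dropped by upstream SFTP/API jobs.
raw = (spark.readStream
       .format("parquet")
       .schema("loan_id STRING, amount DOUBLE, updated_at TIMESTAMP")
       .load("/mnt/datalake/landing/loans/"))

# Stamp each record with ingestion metadata before writing to Bronze.
bronze = (raw
          .withColumn("_ingested_at", F.current_timestamp())
          .withColumn("_source_file", F.input_file_name()))

# Checkpointing makes the stream restartable with exactly-once delivery
# into Delta; availableNow (Spark 3.3+) runs it as an incremental batch.
(bronze.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/datalake/_checkpoints/bronze_loans")
 .outputMode("append")
 .trigger(availableNow=True)
 .start("/mnt/datalake/bronze/loans"))
```

The same notebook can serve both the batch and streaming duties listed above: scheduling it with an availableNow trigger gives batch semantics, while a processing-time trigger keeps it running continuously.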
Technology Doesn't Change the World, People Do.®

Robert Half is the world’s first and largest specialized talent solutions firm that connects highly qualified job seekers to opportunities at great companies. We offer contract, temporary and permanent placement solutions for finance and accounting, technology, marketing and creative, legal, and administrative and customer support roles.

Robert Half works to put you in the best position to succeed. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity, whenever you choose, even on the go. Download the Robert Half app (https://www.roberthalf.com/us/en/mobile-app) and get 1-tap apply, notifications of AI-matched jobs, and much more.

All applicants applying for U.S. job openings must be legally authorized to work in the United States. Benefits are available to contract/temporary professionals, including medical, vision, dental, and life and disability insurance. Hired contract/temporary professionals are also eligible to enroll in our company 401(k) plan. Visit roberthalf.gobenefits.net for more information.

© 2025 Robert Half. An Equal Opportunity Employer. M/F/Disability/Veterans. By clicking “Apply Now,” you’re agreeing to Robert Half’s Terms of Use (https://www.roberthalf.com/us/en/terms).
  • Ann Arbor, MI
  • Remote
  • Permanent
  • $120,000 - $140,000 USD per year
  • Posted 2025-10-29
