

3 results for Data Engineer Azure Data Factory in Buford, GA

Data Engineer
  • Atlanta, GA
  • onsite
  • Permanent
  • 110,000 - 120,000 USD / Yearly
  • <p>We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. This role will support data-driven decision-making by ensuring reliable data flow, transformation, and accessibility across the organization.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain ETL/ELT data pipelines</li><li>Develop and optimize data models and data architectures</li><li>Integrate data from multiple sources (APIs, databases, third-party systems)</li><li>Ensure data quality, integrity, and reliability</li><li>Collaborate with data analysts, data scientists, and business stakeholders</li><li>Monitor and troubleshoot data pipeline performance issues</li><li>Implement best practices for data governance and security</li></ul>
  • Posted: 2026-03-20
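The extract-transform-load duties the listing above describes can be sketched in miniature. This is purely an illustration, not the employer's actual stack: the source records, table schema, and validation rule here are all invented, and SQLite stands in for whatever warehouse the role uses.

```python
import sqlite3

# Hypothetical raw records, standing in for an API or file source.
RAW_ORDERS = [
    {"id": 1, "amount": "120.50", "region": " east "},
    {"id": 2, "amount": "99.00", "region": "West"},
    {"id": 3, "amount": None, "region": "east"},  # bad row: missing amount
]

def extract():
    """Pull raw records from the (simulated) source system."""
    return RAW_ORDERS

def transform(rows):
    """Cast types, normalize text, and drop rows that fail validation."""
    clean = []
    for row in rows:
        if row["amount"] is None:
            continue  # example data-quality rule: amount is required
        clean.append({
            "id": row["id"],
            "amount": float(row["amount"]),
            "region": row["region"].strip().lower(),
        })
    return clean

def load(rows, conn):
    """Write transformed rows into the destination table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany(
        "INSERT INTO orders (id, amount, region) "
        "VALUES (:id, :amount, :region)",
        rows,
    )
    conn.commit()

def run_pipeline(conn):
    load(transform(extract()), conn)

conn = sqlite3.connect(":memory:")
run_pipeline(conn)
total, = conn.execute("SELECT SUM(amount) FROM orders").fetchone()
print(total)  # 219.5 (the row with a missing amount was dropped)
```

In a production pipeline the same three stages would typically be orchestrated by a scheduler and the validation rules tracked as metrics, but the shape of the work is the same.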
AWS Data Engineer
  • Atlanta, GA
  • onsite
  • Temporary
  • 50 - 55 USD / Hourly
  • <p><strong>Position Summary:</strong></p><ul><li>We are looking for a Data Operations Engineer to support and oversee the automated data‑pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks.</li><li>Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.</li><li>The role requires strong proficiency in SQL and Python, practical experience with AWS services, and the ability to communicate effectively with external customers when data issues arise.</li></ul><p><strong>Responsibilities:</strong></p><p><strong>Data Pipeline Monitoring &amp; Operations:</strong></p><ul><li>Monitor automated batch and streaming data pipelines in AWS</li><li>Identify, troubleshoot, and resolve data processing failures</li><li>Investigate file‑level errors, schema mismatches, and transformation issues</li><li>Perform root‑cause analysis and document resolutions</li><li>Ensure data integrity, completeness, and timeliness across environments</li><li>Escalate architectural or systemic issues to the Data Engineering team</li></ul><p><strong>Customer Data Onboarding &amp; Implementation:</strong></p><ul><li>Collaborate directly with customers to understand their file formats and data structures</li><li>Create and maintain mapping templates to align customer data to a normalized data model</li><li>Validate sample files and run tests on ingestion workflows</li><li>Configure ingestion parameters within predefined frameworks</li><li>Support customer go‑live processes and initial data processing cycles</li></ul><p><strong>Data Quality &amp; Continuous Improvement:</strong></p><ul><li>Write SQL queries to validate data accuracy and research anomalies</li><li>Develop lightweight Python scripts for validation, transformation checks, or automation tasks</li><li>Improve monitoring processes, internal documentation, and operational playbooks</li><li>Work with engineering teams to strengthen platform reliability and observability</li></ul><p><strong>Customer &amp; Cross‑Functional Collaboration:</strong></p><ul><li>Communicate clearly with customers regarding file issues or data discrepancies</li><li>Partner with internal teams including Data Engineering, Product, and Support</li><li>Provide feedback to enhance scalability, resilience, and overall platform performance</li></ul>
  • Posted: 2026-03-27
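The listing above asks for SQL queries that "validate data accuracy and research anomalies." A minimal sketch of what such checks can look like follows; the `shipments` table, its columns, and the three rules are invented for illustration, with SQLite standing in for the real AWS data store.

```python
import sqlite3

# Hypothetical ingested table containing three deliberate defects.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (id INTEGER, customer TEXT, weight_kg REAL)")
conn.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?)",
    [
        (1, "acme", 12.0),
        (2, "acme", None),      # missing weight
        (2, "globex", 7.5),     # duplicate id
        (3, "initech", -4.0),   # impossible negative weight
    ],
)

# Each check returns the number of rows (or key groups) violating a rule.
CHECKS = {
    "null_weight": "SELECT COUNT(*) FROM shipments WHERE weight_kg IS NULL",
    "duplicate_id": (
        "SELECT COUNT(*) FROM "
        "(SELECT id FROM shipments GROUP BY id HAVING COUNT(*) > 1)"
    ),
    "negative_weight": "SELECT COUNT(*) FROM shipments WHERE weight_kg < 0",
}

failures = {name: conn.execute(sql).fetchone()[0] for name, sql in CHECKS.items()}
print(failures)  # {'null_weight': 1, 'duplicate_id': 1, 'negative_weight': 1}
```

Running checks like these on a schedule, and alerting when any count is nonzero, is one common way the monitoring and data-quality duties in this role are operationalized.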
Sr. AWS Data Engineer
  • Atlanta, GA
  • onsite
  • Temporary
  • 60 - 67 USD / Hourly
  • <p><strong>Position Summary:</strong></p><ul><li>We are looking for a Data Operations Engineer to support and oversee the automated data‑pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks.</li><li>Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.</li><li>The role requires strong proficiency in SQL and Python, practical experience with AWS services, and the ability to communicate effectively with external customers when data issues arise.</li></ul><p><strong>Responsibilities:</strong></p><p><strong>Data Pipeline Monitoring &amp; Operations:</strong></p><ul><li>Monitor automated batch and streaming data pipelines in AWS</li><li>Identify, troubleshoot, and resolve data processing failures</li><li>Investigate file‑level errors, schema mismatches, and transformation issues</li><li>Perform root‑cause analysis and document resolutions</li><li>Ensure data integrity, completeness, and timeliness across environments</li><li>Escalate architectural or systemic issues to the Data Engineering team</li></ul><p><strong>Customer Data Onboarding &amp; Implementation:</strong></p><ul><li>Collaborate directly with customers to understand their file formats and data structures</li><li>Create and maintain mapping templates to align customer data to a normalized data model</li><li>Validate sample files and run tests on ingestion workflows</li><li>Configure ingestion parameters within predefined frameworks</li><li>Support customer go‑live processes and initial data processing cycles</li></ul><p><strong>Data Quality &amp; Continuous Improvement:</strong></p><ul><li>Write SQL queries to validate data accuracy and research anomalies</li><li>Develop lightweight Python scripts for validation, transformation checks, or automation tasks</li><li>Improve monitoring processes, internal documentation, and operational playbooks</li><li>Work with engineering teams to strengthen platform reliability and observability</li></ul><p><strong>Customer &amp; Cross‑Functional Collaboration:</strong></p><ul><li>Communicate clearly with customers regarding file issues or data discrepancies</li><li>Partner with internal teams including Data Engineering, Product, and Support</li><li>Provide feedback to enhance scalability, resilience, and overall platform performance</li></ul>
  • Posted: 2026-03-27