<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable, performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
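<p><em>For illustration only, a minimal PySpark Structured Streaming sketch of the kind of real-time ingestion described above, reading from a Kafka-compatible endpoint (such as Event Hubs' Kafka interface) into a Delta table. The endpoint, topic, and storage paths are hypothetical, and SASL/SSL auth options are omitted.</em></p>

```python
# Minimal sketch: stream events from a Kafka-compatible endpoint into Delta.
# The endpoint, topic, and paths below are placeholders, not real values;
# an Event Hubs Kafka endpoint would also need SASL/SSL options set.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("event-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "myeventhub.servicebus.windows.net:9093")
    .option("subscribe", "clickstream")
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers key/value as binary; cast to strings for downstream use.
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

# Land raw events in a bronze Delta path with checkpointed, exactly-once writes.
(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/clickstream")
    .start("/mnt/bronze/clickstream")
)
```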
We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.
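For illustration, here is a minimal PySpark-on-Delta sketch of the Bronze/Silver/Gold refinement the Medallion pattern above implies. The table names, keys, and cleansing rules are assumptions for the example, not this team's actual standards.

```python
# Sketch of a Medallion-style refinement pass (Bronze -> Silver -> Gold).
# Table names, keys, and rules are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion").getOrCreate()

# Bronze: raw ingested records, kept as-is for replayability.
bronze = spark.read.table("bronze.orders_raw")

# Silver: deduplicated, validated, and typed for general use.
silver = (
    bronze.dropDuplicates(["order_id"])
    .filter(F.col("order_total").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: a business-level aggregate ready for analytics consumers.
gold = silver.groupBy("order_date").agg(F.sum("order_total").alias("daily_revenue"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_revenue")
```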
<p>We are looking for a talented Data Engineer to join our team in Fort Lauderdale, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
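<p><em>As a hedged sketch of the Snowflake/Python pipeline work described above, the snippet below stages a file and bulk-loads it with the official snowflake-connector-python package. Account, credential, and table names are placeholders.</em></p>

```python
# Sketch: stage-and-copy bulk load into Snowflake; all identifiers are
# placeholders. Prefer key-pair auth or a secrets manager over passwords.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

with conn.cursor() as cur:
    # PUT uploads the local file to the table's internal stage (@%table),
    # and COPY INTO without FROM reads from that same table stage.
    cur.execute("PUT file:///tmp/orders.csv @%ORDERS_STG")
    cur.execute("COPY INTO ORDERS_STG FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
conn.close()
```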
We are looking for an experienced Data Engineer to join our team in Cincinnati, Ohio. This long-term contract position offers the opportunity to work on cutting-edge data engineering projects while collaborating with multidisciplinary teams to deliver high-quality solutions. The ideal candidate will have a strong background in Databricks and big data technologies, along with a passion for optimizing data processes and systems.<br><br>Responsibilities:<br>• Design, build, and enhance data pipelines using Databricks Runtime, Delta Lake, Autoloader, and Structured Streaming.<br>• Implement secure and governed data access protocols utilizing Unity Catalog, workspace controls, and audit configurations.<br>• Manage and integrate structured and unstructured data from diverse sources, including APIs and cloud storage.<br>• Develop and maintain notebook-based workflows and manage jobs using Databricks Workflows and Jobs.<br>• Apply best practices for performance tuning, scalability, and cost optimization in Databricks environments.<br>• Collaborate with data scientists, analysts, and business stakeholders to deliver clean and reliable datasets.<br>• Support continuous integration and deployment processes for Databricks jobs and system configurations.<br>• Ensure high standards of data quality and security across all engineering tasks.<br>• Troubleshoot and resolve issues to maintain operational efficiency in data pipelines.
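As a small illustration of the Autoloader and Structured Streaming work listed above, here is a hedged Databricks Auto Loader sketch; the storage path, schema location, and target table are assumptions, and `spark` is the session the Databricks runtime provides.

```python
# Sketch: incrementally ingest new JSON files with Auto Loader into a
# Unity Catalog Delta table. Paths and table names are placeholders;
# `spark` is provided by the Databricks runtime.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/_schemas/events")
    .load("abfss://landing@account.dfs.core.windows.net/events/")
)

(
    stream.writeStream
    .option("checkpointLocation", "/mnt/_checkpoints/events")
    .trigger(availableNow=True)     # process the backlog, then stop
    .toTable("main.bronze.events")  # three-level Unity Catalog name
)
```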
<p><strong>AWS Big Data Architect (with Hadoop)</strong></p><p><strong>Location:</strong> Hybrid (4 days per week onsite) – Philadelphia, PA</p><p><strong>Contract Duration:</strong> April 6, 2026 – December 31, 2026</p><p><strong>Employment Type:</strong> W2 Contract</p><p><strong>Overview</strong></p><p>We are seeking a highly skilled <strong>AWS Big Data Architect / Senior Data Engineer</strong> to design, develop, and deliver scalable Big Data Warehouse solutions. This is a hands-on role suited for someone who is passionate about technology, thrives in a collaborative environment, and can work effectively with both technical and non-technical stakeholders. The ideal candidate excels in fast-paced settings and is committed to producing high-quality, impactful results.</p><p>This role offers the opportunity to collaborate with engineering teams across the enterprise and influence broader data and technology strategies.</p><p><strong>Key Responsibilities</strong></p><ul><li>Design and develop scalable Big Data Warehouse solutions across the full data supply chain.</li><li>Build and implement metadata management solutions.</li><li>Create and maintain technical documentation, user documentation, data models, data dictionaries, glossaries, process flows, and architecture diagrams.</li><li>Enhance and expand the enterprise Data Lake environment.</li><li>Solve complex data integration challenges across multiple systems.</li><li>Design and execute strategies for real-time data analysis and decision-making.</li><li>Collaborate with business partners, analysts, developers, architects, and engineers to support ongoing data quality initiatives.</li><li>Work closely with Data Science teams to improve actionable insights.</li><li>Continuously expand knowledge of new tools, platforms, and technologies.</li></ul>
<p><strong>Responsibilities:</strong></p><ul><li>Architect and deliver modern data platform solutions with a strong emphasis on Databricks and contemporary cloud data technologies.</li><li>Build secure, scalable, and high‑performing data environments that enable analytics, reporting, and enterprise‑wide data initiatives.</li><li>Oversee and execute migrations from legacy relational databases into Databricks-based ecosystems.</li><li>Design and structure scalable data pipelines and foundational data infrastructure aligned with organizational goals.</li><li>Create and maintain ETL/ELT processes within Databricks to ensure efficient ingestion, transformation, and delivery of data.</li><li>Continuously refine and optimize data workflows to improve performance, stability, and data quality across all processes.</li><li>Manage end-to-end data transitions to ensure operational continuity with minimal business disruption.</li><li>Monitor Databricks workloads and optimize performance, scalability, and cost efficiency across compute and storage layers.</li><li>Partner with data engineers, scientists, analysts, and product stakeholders to gather requirements and build fit‑for‑purpose data solutions.</li><li>Establish and enforce data engineering best practices, development standards, and architectural guidelines.</li><li>Assess emerging tools and technologies to enhance pipeline efficiency, reliability, and automation capabilities.</li><li>Provide technical direction, guidance, and mentorship to junior engineers and team members.</li><li>Collaborate closely with DevOps and infrastructure teams to deploy, manage, and support data systems in production.</li><li>Ensure all data solutions meet compliance standards, organizational security policies, and regulatory obligations.</li><li>Work with enterprise architects and IT leadership to align data architecture with broader technology strategies and long-term roadmaps.</li></ul>
<p>We are seeking a Senior Data Engineer – Ingest to help transform data into meaningful insights and power innovation across the organization. In this role, you will work with a collaborative team of technologists to build scalable data solutions, integrate diverse data sources, and strengthen the core data platform. Your engineering expertise will directly support analytics, data science, operations, and key business stakeholders.</p><p>If you’re passionate about building high‑quality data systems that make a measurable impact, this role offers the opportunity to shape the future of a large, data‑driven organization.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Maintain, update, and expand configuration‑driven data pipelines within the core data platform.</li><li>Build tools and services supporting data discovery, lineage, governance, and privacy.</li><li>Partner with software engineers, data engineers, architects, and product managers to deliver reliable and scalable data solutions.</li><li>Help define and document data standards, naming conventions, pipeline best practices, and system guidelines.</li><li>Ensure the reliability, accuracy, and operational efficiency of datasets to meet SLAs.</li><li>Participate in Agile/Scrum ceremonies and contribute to ongoing process improvements.</li><li>Collaborate closely with users and stakeholders to understand needs and prioritize enhancements.</li><li>Maintain detailed technical documentation to support data quality, governance, and compliance requirements.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This contract-to-permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
We are looking for a skilled Data Engineer to join our team in Washington, District of Columbia. In this role, you will play a key part in designing and implementing secure, scalable solutions to support data and analytics initiatives. This is a long-term contract position, offering the opportunity to work with cutting-edge technologies and contribute to impactful projects.<br><br>Responsibilities:<br>• Develop, test, and maintain robust data pipelines and engineering solutions to support analytics and integrate new data sources.<br>• Collaborate with team members, stakeholders, and external vendors to evaluate and implement reliable, scalable, and secure technologies.<br>• Create efficient, automated processes to handle repetitive data management tasks.<br>• Conduct targeted data manipulation and analysis across diverse datasets.<br>• Implement advanced security measures within data warehouses and analytics platforms to counter evolving threats.<br>• Document technical processes and solutions to ensure seamless collaboration and knowledge sharing.<br>• Monitor and optimize system performance to ensure scalability and reliability.<br>• Stay updated on emerging data engineering trends and incorporate them into workflows.
<p>We are seeking a skilled <strong>Azure Data Engineer</strong> to design, build, and maintain scalable data solutions on the Microsoft Azure platform. The ideal candidate will have strong experience developing data pipelines, optimizing data architectures, and supporting analytics and business intelligence initiatives. This role will work closely with data analysts, data scientists, and business stakeholders to ensure reliable, high-quality data is available for reporting and advanced analytics.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, and maintain <strong>scalable data pipelines and ETL/ELT processes</strong> using Azure data services.</li><li>Build and manage data solutions using tools such as <strong>Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, and Azure Databricks</strong>.</li><li>Develop and optimize <strong>data models, transformations, and storage strategies</strong> for large-scale structured and unstructured datasets.</li><li>Ensure <strong>data quality, integrity, and security</strong> across the data platform.</li><li>Monitor and troubleshoot data workflows, pipeline failures, and performance issues.</li><li>Collaborate with data analysts, BI developers, and data scientists to deliver reliable datasets for reporting and analytics.</li><li>Implement <strong>data governance and best practices</strong> for data management and documentation.</li><li>Automate data processes and deployments using <strong>CI/CD pipelines and infrastructure-as-code practices</strong>.</li><li>Optimize cost and performance of Azure data services.</li><li>Stay current with new Azure features, tools, and industry best practices.</li></ul><p><br></p>
<p>Position Overview</p><p>We are seeking a talented <strong>Data Engineer</strong> with strong experience in <strong>Python, AWS, and Databricks</strong> to design and build scalable data pipelines and modern data platforms. The ideal candidate will help develop and maintain data infrastructure that supports analytics, machine learning, and business intelligence initiatives. This role requires hands-on experience working with large datasets, cloud-native architectures, and distributed data processing frameworks.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain <strong>scalable data pipelines and ETL/ELT workflows</strong> using Python and cloud technologies.</li><li>Develop and optimize data solutions using <strong>AWS services and Databricks</strong>.</li><li>Build and manage <strong>data lakes and data warehouses</strong> for structured and unstructured data.</li><li>Implement <strong>data transformation and processing pipelines</strong> using Apache Spark within Databricks.</li><li>Integrate data from multiple sources including APIs, databases, and streaming systems.</li><li>Ensure <strong>data quality, governance, security, and compliance</strong> across the data platform.</li><li>Monitor pipeline performance and troubleshoot <strong>data pipeline failures or latency issues</strong>.</li><li>Collaborate with <strong>data analysts, data scientists, and business stakeholders</strong> to deliver reliable datasets.</li><li>Optimize storage and compute costs within the AWS ecosystem.</li></ul><p><br></p>
We are looking for a Senior Database Engineer to take on a critical role in shaping the future of our global data platform. In this position, you will lead technical strategy, architect robust multi-cloud systems, and oversee initiatives to ensure reliability, scalability, and cost efficiency. You will have a hands-on approach, providing mentorship and collaborating with leadership to drive impactful technical decisions. This is a contract opportunity with the potential for a permanent position, located in Lehi, Utah.<br><br>Responsibilities:<br>• Develop and execute the technical roadmap for a scalable and reliable data infrastructure.<br>• Architect and implement multi-region, cross-account data platforms to support global operations.<br>• Establish and enforce engineering standards for database design, data pipelines, reliability, and observability.<br>• Lead post-incident reviews and implement solutions to prevent recurring issues.<br>• Collaborate with product and engineering teams to identify technical risks and optimize roadmaps.<br>• Design and oversee large-scale data migrations, ensuring fault tolerance and self-healing capabilities.<br>• Optimize database performance through indexing, query tuning, and capacity planning.<br>• Implement robust security measures, including encryption, secrets management, and access controls.<br>• Partner with cross-functional teams to align business requirements with technical solutions.<br>• Provide hands-on leadership in developing critical systems and resolving complex production incidents.
<p><strong>For immediate response, please message Valerie Nielsen on LinkedIn or email!</strong></p><p><br></p><p><strong>Job Title:</strong> Senior Data Engineer</p><p><strong>Location:</strong> Hybrid – Westwood (Los Angeles, CA) near University of California, Los Angeles</p><p><strong>Compensation:</strong> $175,000 – $185,000 base salary + 10% annual bonus</p><p><strong>Employment Type:</strong> Full-Time</p><p><br></p><p>Overview</p><p>We are seeking a <strong>Senior Data Engineer</strong> to join a growing data team in <strong>Westwood, CA</strong>. This role will focus on designing and building scalable data pipelines, supporting analytics and reporting initiatives, and improving data infrastructure across the organization.</p><p>The ideal candidate is highly experienced with <strong>Snowflake, dbt, Python</strong>, and modern data pipeline architecture, and enjoys working closely with analytics and business teams to deliver reliable, high-quality data. Experience integrating data from CRM platforms such as <strong>Salesforce</strong> is a strong plus.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, and maintain <strong>scalable data pipelines</strong> supporting analytics, reporting, and operational data needs</li><li>Build and optimize data models and transformations using <strong>dbt</strong> within a <strong>Snowflake</strong> data warehouse environment</li><li>Develop robust ETL/ELT workflows using <strong>Python</strong> and modern data engineering best practices</li><li>Collaborate with analytics teams to deliver clean, reliable datasets used in <strong>Power BI</strong> dashboards and reporting</li><li>Ensure data quality, reliability, and performance across the data platform</li><li>Optimize Snowflake warehouse performance and manage cost-efficient data storage and compute usage</li><li>Integrate data from internal and external systems, including CRM and SaaS platforms</li><li>Partner with stakeholders across engineering, product, and business teams to define data requirements and solutions</li><li>Maintain documentation and promote data engineering standards and best practices</li></ul><p><br></p>
<p><strong>Overview</strong></p><p>We are seeking a Senior Data Engineer to support a major Salesforce Phase 2 data migration initiative. This role will focus heavily on building and optimizing data pipelines, developing ETL workflows, and moving CRM data from Salesforce into Databricks.</p><p>The engineer will work closely with a senior team member, contribute to Scrum ceremonies, and play a key role in developing the core CRM data environment used by the advertising organization.</p><p><br></p><p><strong>Key Responsibilities</strong></p><p><strong>Data Engineering & Migration</strong></p><ul><li>Develop ETL jobs that move and transform Salesforce data into Databricks.</li><li>Build, test, and maintain high‑volume data pipelines across AWS + Databricks.</li><li>Perform data migration, data integration, and pipeline development (including MuleSoft-related work).</li><li>Ensure all pipelines are reliable, scalable, and optimized for production.</li></ul><p><strong>Development & Infrastructure</strong></p><ul><li>Use Python and PySpark to build ETL components and transformation logic.</li><li>Leverage Spark/PySpark for distributed processing at scale (must‑have).</li><li>Use Terraform to provision and manage cloud infrastructure.</li><li>Set up CI/CD pipelines using Concourse or GitHub Actions for automated deployments.</li></ul><p><strong>Quality, Documentation & Support</strong></p><ul><li>Document ETL processes, pipelines, and data flows.</li><li>Participate in testing, QA, and validation of migrated datasets.</li><li>Provide post‑delivery support and proactively mitigate project risks or single points of failure (SPOF).</li><li>Troubleshoot production issues and implement long‑term fixes to maintain pipeline stability.</li></ul><p><strong>Collaboration</strong></p><ul><li>Work closely with engineering teammates to translate business requirements into working pipelines.</li><li>Participate in weekly Scrum ceremonies.</li><li>Contribute to shared best practices and continuous improvement across the data engineering team.</li></ul><p><br></p>
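<p><em>A hedged sketch of the Salesforce-to-Databricks movement described above, using the simple-salesforce library (an assumption; the actual project may use MuleSoft or another connector). Credentials, the object queried, and table names are hypothetical.</em></p>

```python
# Illustrative only: pull a Salesforce object and land it as a Delta table.
# simple-salesforce is an assumed client; credentials/names are placeholders.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="etl@example.com",
    password="...",
    security_token="...",
)

records = sf.query_all("SELECT Id, Name, Industry FROM Account")["records"]
# Drop the per-record 'attributes' metadata Salesforce returns.
rows = [{k: v for k, v in r.items() if k != "attributes"} for r in records]

df = spark.createDataFrame(rows)  # `spark` is ambient in Databricks notebooks
df.write.format("delta").mode("overwrite").saveAsTable("crm.bronze.accounts")
```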
<p>I’m building a world-class team to power our next generation of data products. We’re looking for a Senior Data Engineer who knows AWS inside and out—someone who can <strong>design secure, scalable data pipelines</strong>, <strong>own ETL/ELT workflows</strong>, <strong>engineer cloud data infrastructure</strong>, and <strong>deliver dimensional and semantic models</strong> that our analysts, data scientists, and applications can trust.</p><p>You’ll work closely with product, security, platform engineering, and analytics to move our architecture toward a <strong>real-time, governed, cost-aware</strong>, and <strong>highly automated</strong> data ecosystem.</p><p><strong>What You’ll Do</strong></p><ul><li><strong>Design & build end-to-end pipelines</strong> on AWS (batch and streaming) using services like <strong>Glue, EMR, Lambda, Step Functions, Kinesis, MSK</strong>, and <strong>Fargate</strong>.</li><li><strong>Develop robust ETL/ELT</strong> (PySpark, Spark SQL, SQL, Python) for structured, semi-structured, and unstructured data at scale.</li><li><strong>Own data storage & processing layers</strong>: <strong>S3 (Lake/Lakehouse), Redshift (or Snowflake on AWS), DynamoDB</strong>, and <strong>Athena</strong> with strong partitioning, compaction, and performance tuning.</li><li><strong>Implement data models</strong> (3NF, dimensional/star, Data Vault, Lakehouse medallion) for analytics and operational workloads.</li><li><strong>Engineer secure infrastructure-as-code</strong> with <strong>Terraform</strong> (or <strong>CDK</strong>) across multi-account setups; implement CI/CD via <strong>GitHub Actions</strong> or <strong>AWS CodeBuild/CodePipeline</strong>.</li><li><strong>Harden security & governance</strong>: use <strong>IAM</strong>, <strong>Lake Formation</strong>, <strong>KMS</strong>, <strong>Secrets Manager</strong>, <strong>VPC/PrivateLink</strong>, <strong>Glue Data Catalog</strong>, and fine-grained access controls. Partner with SecOps on compliance (e.g., <strong>SOC 2</strong>, <strong>FedRAMP</strong>, <strong>HIPAA</strong> depending on dataset).</li><li><strong>Observability & reliability</strong>: build monitoring with <strong>CloudWatch</strong>, <strong>OpenTelemetry</strong>, and data quality checks (e.g., <strong>Great Expectations</strong>, <strong>Deequ</strong>), implement SLOs and alerts.</li><li><strong>Champion best practices</strong>: code reviews, testing (unit/integration), documentation, runbooks, and blameless postmortems.</li><li><strong>Mentor</strong> mid-level engineers and collaborate on architectural decisions, standards, and technical roadmaps.</li></ul><p><br></p>
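<p><em>Two of the habits this posting calls out, partition-aware lake writes and data quality gates, are sketched below in PySpark. This is an assumption-laden example (bucket, column, and rule names are invented), not the team's actual standard.</em></p>

```python
# Sketch: validate a dataset, then publish it partitioned for cheap scans.
# Bucket paths, column names, and quality rules are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-publish").getOrCreate()
df = spark.read.parquet("s3://my-lake/staging/transactions/")

# Quality gate: refuse to publish if keys are missing or duplicated.
total = df.count()
null_keys = df.filter(F.col("txn_id").isNull()).count()
dupes = total - df.dropDuplicates(["txn_id"]).count()
if null_keys or dupes:
    raise ValueError(f"quality gate failed: {null_keys} null keys, {dupes} dupes")

# Partitioning by event date keeps Athena/Redshift Spectrum scans targeted.
(
    df.withColumn("event_date", F.to_date("event_ts"))
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://my-lake/curated/transactions/")
)
```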
<p>We are looking for an experienced Senior Data Engineer to join our team in Boston, Massachusetts. In this role, you will be responsible for designing and building a robust data platform from the ground up, playing a pivotal part in shaping the data strategy and supporting AI-driven initiatives. This is a unique opportunity to contribute to the creation of a new data engineering function within a dynamic financial services environment. This role is hybrid, onsite in Boston 3 days a week. </p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and implement a scalable data platform using Microsoft Fabric and other technologies within the Microsoft ecosystem.</p><p>• Collaborate with stakeholders to define the data strategy and implement solutions that align with business goals.</p><p>• Oversee and manage external consultants assisting with the development of the data platform.</p><p>• Support AI enablement initiatives by ensuring the data architecture meets analytical and operational needs.</p><p>• Create and maintain ETL processes to ensure efficient data extraction, transformation, and loading.</p><p>• Optimize database performance across SQL, NoSQL, and other database systems.</p><p>• Utilize Python for data engineering tasks, including scripting and automation.</p><p>• Work closely with IT and analytics teams to ensure seamless integration of the data platform into existing systems.</p><p>• Provide technical leadership and guidance while exploring future opportunities to build and expand the data engineering function.</p><p>• Ensure compliance with industry standards and best practices in data security and management.</p>
<p>The Senior Data Engineer plays a key role in architecting, developing, and operating reliable, production-ready data solutions that enable analytics, automation, and operational processes across our client’s organization.</p><p><br></p><p>Operating within a modern, cloud-based data ecosystem, this role is responsible for bringing together data from internal platforms and external partners, transforming it into trusted, high-quality assets, and delivering it consistently to downstream users and systems. The work spans the full data lifecycle—ingestion, orchestration, transformation, and delivery—and blends advanced SQL development with Python-based pipeline and workflow automation.</p><p><br></p><p>This role sits at the intersection of data and systems engineering and works closely with Business Intelligence, Business Technology, and operational teams to ensure data solutions are scalable, dependable, and aligned with real business outcomes.</p><p><br></p>
We are seeking a Senior Data Engineer to join a growing data engineering team responsible for building and scaling an enterprise data platform. This role will focus on developing cloud-based data pipelines within Google Cloud Platform (GCP) while also supporting elements of a legacy on-premises data warehouse environment during an ongoing cloud migration.<br><br>The ideal candidate will have strong experience building scalable data pipelines, event-driven data architectures, and cloud-native data services. This is a great opportunity to contribute to a rapidly expanding data ecosystem and help drive the transition to modern cloud data platforms.<br><br>Key Responsibilities:<br>• Design, build, and maintain data pipelines within Google Cloud Platform (GCP)<br>• Develop event-driven data streaming solutions using Pub/Sub<br>• Build and maintain Python-based services using Cloud Run<br>• Develop and optimize BigQuery datasets and queries<br>• Integrate new data sources into the enterprise data platform<br>• Maintain and support existing ETL processes within SQL Server<br>• Work with SSIS and stored procedures in legacy data environments<br>• Monitor, troubleshoot, and optimize data pipeline performance<br>• Collaborate with engineering teams to support data-driven initiatives<br>• Participate in on-call rotations for production systems<br><br>Required Qualifications:<br>• 5+ years of experience in Data Engineering<br>• Strong experience with Google Cloud Platform (GCP)<br>• Experience building data pipelines and ETL processes<br>• Experience with Pub/Sub or event-driven data streaming<br>• Strong experience with BigQuery<br>• Proficiency in Python<br>• Experience with Cloud Run or similar serverless services<br>• Strong SQL experience including SQL Server<br>• Experience with SSIS or similar ETL tools
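For illustration, a minimal sketch of the Pub/Sub-to-BigQuery path this role describes: a Python Cloud Run service receiving Pub/Sub push deliveries and streaming rows into BigQuery. Project, topic, and table IDs are placeholders.

```python
# Sketch: Cloud Run HTTP handler for a Pub/Sub push subscription that
# streams decoded events into BigQuery. All IDs below are placeholders.
import base64
import json

from flask import Flask, request
from google.cloud import bigquery

app = Flask(__name__)
bq = bigquery.Client()

@app.route("/", methods=["POST"])
def handle_pubsub():
    # Pub/Sub push wraps the message in an envelope with base64-encoded data.
    envelope = request.get_json()
    payload = base64.b64decode(envelope["message"]["data"])
    row = json.loads(payload)

    errors = bq.insert_rows_json("my-project.analytics.events", [row])
    if errors:
        return f"insert failed: {errors}", 500  # non-2xx, so Pub/Sub retries
    return "", 204  # 2xx acknowledges the message
```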
We are looking for a skilled Data Engineer to join our logistics team in Lithonia, Georgia. In this role, you will design, construct, and maintain data pipelines and infrastructure to support analytics and operational systems. You will play a key role in enabling data visualization tools, optimizing data processes, and ensuring the accuracy and availability of critical information.<br><br>Responsibilities:<br>• Design and implement data pipelines to efficiently extract, transform, and load data from multiple sources.<br>• Develop and maintain data models and storage solutions to support analytics and reporting needs.<br>• Collaborate with stakeholders to troubleshoot data inconsistencies and resolve technical issues.<br>• Utilize Tableau or Power BI to create meaningful data visualizations that drive business insights.<br>• Write and optimize database procedures, triggers, and other SQL-based functionalities.<br>• Manage and monitor databases to ensure their performance and reliability.<br>• Provide technical guidance to analysts on best practices in data governance and performance optimization.<br>• Participate in cross-functional projects to enhance data accessibility and quality across departments.<br>• Explore and integrate Python-based solutions to enhance data engineering processes.<br>• Assist in training and development related to data availability and analytics tools.
<p>We are looking for a talented Data Engineer to join our team in Miami, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
We are looking for an experienced Data Engineer to join our team in Jacksonville, Florida. In this role, you will take the lead in designing and building a cutting-edge Azure lakehouse platform that enables business leaders to access analytics through natural language queries. This position combines hands-on technical expertise with leadership responsibilities, offering an opportunity to mentor a team of skilled engineers while driving innovation.<br><br>Responsibilities:<br>• Architect and develop a robust Azure lakehouse platform, utilizing Azure Data Lake Gen2, Delta Lake, and PySpark to create efficient data pipelines.<br>• Implement a semantic layer and metric store to ensure consistent data translation and definitions across the organization.<br>• Design and maintain real-time and batch data pipelines, incorporating medallion architecture, schema evolution, and data contracts.<br>• Build retrieval systems for large language models (LLMs) using Azure OpenAI and vectorized Delta tables to support chat-based analytics.<br>• Ensure data quality, lineage, and observability through tools like Great Expectations and Unity Catalog, while optimizing costs through partitioning and compaction.<br>• Develop automated systems for anomaly detection and alerting using Azure ML pipelines and Event Grid.<br>• Collaborate with product and operations teams to translate complex business questions into actionable data models and queries.<br>• Lead and mentor a team of data and Python engineers, establishing best practices in CI/CD, code reviews, and documentation.<br>• Ensure compliance with security, privacy, and governance standards by designing and implementing robust data handling protocols.
<p>Robert Half has a brand new opening for a Data Engineer with a reputable client here in Tampa.</p><p>Full-time position, HYBRID schedule out of their Tampa office.</p><p>Compensation ranges from $100K to $115K, depending on experience.</p><p>*Medical benefits are also 100% covered after the onboarding period*</p><p><br></p><p>This is a Data Engineer (BI/ETL) role focused on building and optimizing ETL/ELT pipelines, migrating and cleaning data between internal, vendor, and legacy systems, and improving data quality. SQL is absolutely required, and this role leans heavily into backend data movement — not dashboarding.</p><p><br></p><p><strong>Top Skills Looking For:</strong></p><ul><li>Strong <strong>SQL</strong> (non-negotiable)</li><li>Experience designing and maintaining <strong>ETL / ELT pipelines</strong> using frameworks such as <strong>Apache Airflow, DBT (Data Build Tool), or equivalent orchestration systems</strong>, with the ability to schedule, monitor, and recover complex multi-stage jobs.</li><li><strong>Experience moving data across multiple systems</strong></li></ul><p>Description:</p><p>Build and maintain business intelligence solutions to include law enforcement, detention, human resources, finance, and integration of data from agency criminal justice partners.</p><p>• Design and develop BI solutions.</p><p>• Gather user requirements, develop technical and functional requirements, produce reporting solutions, and document the design and development process, metadata, and business rules.</p><p>• Model, implement, and maintain databases and data marts to support BI reporting.</p><p>• Develop extract, transform, load (ETL) processes to support the loading of data into data marts.</p><p>• Monitor the data quality of existing databases and data marts and recommend governance and control around self-service BI/Analytics considering the evolution of the BI Industry’s best practices.</p><p>• Perform other related duties as required.</p>
We are looking for an experienced Data Engineer to join our team in New York, New York. In this role, you will design, build, and maintain data infrastructure to support business intelligence and analytics needs. The ideal candidate will have a strong technical background, a passion for working with complex datasets, and expertise in cloud-based data platforms.<br><br>Responsibilities:<br>• Develop, implement, and optimize ETL pipelines to ensure efficient data processing and integration.<br>• Design and maintain scalable data solutions, including data warehouses and data lakes.<br>• Collaborate with cross-functional teams to identify data requirements and deliver actionable insights.<br>• Utilize Snowflake, AWS, and other cloud-based platforms to manage data infrastructure and ensure performance optimization.<br>• Leverage Python and SQL to build robust data workflows and automate processes.<br>• Employ orchestration tools like Airflow and dbt to streamline data operations.<br>• Support data analytics and visualization efforts by enabling the creation of impactful dashboards using tools such as Tableau.<br>• Work with marketing and product data sources, including platforms like Google Analytics, to extract and integrate valuable insights.<br>• Implement CI/CD pipelines and DevOps practices to enhance data engineering processes.<br>• Ensure data security and compliance across all systems and tools.
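As a small sketch of the Airflow/dbt orchestration named above, the DAG below wires an extract step ahead of a dbt build. The DAG id, commands, paths, and schedule are assumptions, and the `schedule` argument assumes Airflow 2.4 or newer.

```python
# Illustrative Airflow DAG: extract, then run dbt. Commands and paths are
# placeholders; `schedule` requires Airflow 2.4+ (older versions use
# `schedule_interval`).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_warehouse_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_sources",
        bash_command="python /opt/pipelines/extract.py",
    )
    transform = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt --profiles-dir /opt/dbt",
    )
    extract >> transform
```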
We are looking for a Senior Data Engineer to develop and optimize enterprise data systems that support analytics and digital solutions. In this role, you will design and implement robust data architectures, ensuring seamless data integration and transformation processes across the organization. Your expertise will drive the creation of reliable pipelines and scalable infrastructure, enabling advanced analytics and machine learning capabilities.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines using Databricks, Spark, and Delta Lake to support enterprise-level analytics.<br>• Develop and maintain efficient data models tailored for AI, analytics, and operational systems.<br>• Lead Master Data Management initiatives to establish unified and accurate data records across platforms.<br>• Create batch and near-real-time data processing workflows for structured and semi-structured datasets.<br>• Collaborate with AI and software development teams to ensure delivery of high-quality datasets for machine learning.<br>• Define and enforce data architecture standards, ensuring scalability, reliability, and governance.<br>• Troubleshoot and optimize data systems to maintain performance and reliability in complex environments.<br>• Partner with cloud and IT teams to integrate modern data platforms and ensure seamless functionality.