<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
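<p>As a rough, self-contained illustration of the kind of batch ETL step described in the responsibilities above (a minimal sketch in plain Python, with SQLite standing in for a real warehouse; all table and column names are hypothetical):</p>

```python
import sqlite3

# Hypothetical example: extract raw events, transform (filter invalid rows),
# and load a curated table. Names are illustrative, not from any real system.

def run_etl(conn: sqlite3.Connection) -> int:
    """Extract raw click events, keep valid rows, load into a curated table."""
    conn.execute("CREATE TABLE IF NOT EXISTS curated_clicks (user_id TEXT, ts INTEGER)")
    rows = conn.execute("SELECT user_id, ts FROM raw_clicks").fetchall()
    # Transform: drop rows with a missing user id or non-positive timestamp.
    clean = [(u, t) for (u, t) in rows if u and t and t > 0]
    conn.executemany("INSERT INTO curated_clicks VALUES (?, ?)", clean)
    conn.commit()
    return len(clean)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_clicks (user_id TEXT, ts INTEGER)")
    conn.executemany("INSERT INTO raw_clicks VALUES (?, ?)",
                     [("a", 10), (None, 20), ("b", -5), ("c", 30)])
    print(run_etl(conn))  # 2 valid rows loaded
```

<p>Production pipelines on Databricks or Synapse would express the same extract-filter-load shape in Spark SQL or PySpark rather than SQLite.</p>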
<p>**** For a faster response on this position, please send a message to Jimmy Escobar on LinkedIn or send an email to Jimmy.Escobar@roberthalf(.com) with your resume. You can also call my office number at 424-270-9193 ****</p><p><br></p><p>We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Los Angeles, California. In this role, you will design, build, and maintain robust data infrastructure to support business operations and analytics. This position offers an opportunity to work with cutting-edge technologies and contribute to impactful projects. This is a hybrid position: three days a week on-site and two days remote.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement scalable data pipelines using Apache Spark, Hadoop, and other big data technologies.</p><p>• Collaborate with cross-functional teams to understand and translate business requirements into technical solutions.</p><p>• Create and maintain ETL processes to ensure data integrity and accessibility.</p><p>• Optimize data systems and workflows for improved performance and reliability.</p><p>• Manage and monitor large-scale data processing systems, ensuring seamless operation.</p><p>• Design and deploy solutions for real-time data streaming using Apache Kafka.</p><p>• Perform advanced data analytics to support business decision-making.</p><p>• Troubleshoot and resolve issues related to data infrastructure and applications.</p><p>• Document processes and provide technical guidance to team members.</p><p>• Ensure compliance with data governance and security standards.</p>
Key Responsibilities:<br><br>· Design, develop, and maintain scalable backend systems to support data warehousing and data lake initiatives.<br><br>· Build and optimize ETL/ELT processes to extract, transform, and load data from various sources into centralized data repositories.<br><br>· Develop and implement integration solutions for seamless data exchange between systems, applications, and platforms.<br><br>· Collaborate with data architects, analysts, and other stakeholders to define and implement data models, schemas, and storage solutions.<br><br>· Ensure data quality, consistency, and security by implementing best practices and monitoring frameworks.<br><br>· Monitor and troubleshoot data pipelines and systems to ensure high availability and performance.<br><br>· Stay up-to-date with emerging technologies and trends in data engineering and integration to recommend improvements and innovations.<br><br>· Document technical designs, processes, and standards for the team and stakeholders.<br><br><br><br>Qualifications:<br><br>· Bachelor’s degree in Computer Science, Engineering, or a related field; equivalent experience considered.<br><br>· 5 or more years of proven experience as a Data Engineer or in a similar backend development role.<br><br>· Strong proficiency in programming languages such as Python, Java, or Scala.<br><br>· Hands-on experience with ETL/ELT tools and frameworks (e.g., Apache Airflow, Talend, Informatica).<br><br>· Extensive knowledge of relational (SQL) and non-relational (NoSQL) databases (e.g., PostgreSQL, MongoDB).<br><br>· Expertise in building and managing enterprise data warehouses (e.g., Snowflake, Amazon Redshift, Google BigQuery) and data lakes (e.g., AWS S3, Azure Data Lake).<br><br>· Familiarity with cloud platforms (AWS, Azure, Google Cloud) and their data services.<br><br>· Experience with API integrations and data exchange protocols (e.g., REST, SOAP, JSON, XML).<br><br>· Solid understanding of data 
governance, security, and compliance standards.<br><br>· Strong analytical and problem-solving skills with attention to detail.<br><br>· Excellent communication and collaboration abilities.<br><br><br><br>Preferred Qualifications:<br><br>· Certifications in cloud platforms (AWS Certified Data Analytics, Azure Data Engineer, etc.)<br><br>· Experience with big data technologies (e.g., Apache Hadoop, Spark, Kafka).<br><br>· Knowledge of data visualization tools (e.g., Tableau, Power BI) for supporting downstream analytics.<br><br>· Familiarity with DevOps practices and tools (e.g., Docker, Kubernetes, Jenkins).
<p>We are looking for a skilled and innovative Data Engineer to join our team in Grove City, Ohio. In this role, you will be responsible for designing and implementing advanced data pipelines, ensuring the seamless integration and accessibility of data across various systems. As a key player in our analytics and data infrastructure efforts, you will contribute to building a robust and scalable data ecosystem to support AI and machine learning initiatives.</p><p><br></p><p>Responsibilities:</p><p>• Design and develop scalable data pipelines to ingest, process, and transform data from multiple sources.</p><p>• Optimize data models to support analytics, forecasting, and AI/ML applications.</p><p>• Collaborate with internal teams and external partners to enhance data engineering capabilities.</p><p>• Implement and enforce data governance, security, and quality standards across hybrid cloud environments.</p><p>• Work closely with analytics and data science teams to ensure seamless data accessibility and integration.</p><p>• Develop and maintain data products and services to enable actionable insights.</p><p>• Troubleshoot and improve the performance of data workflows and storage systems.</p><p>• Align data systems across departments to create a unified and reliable data infrastructure.</p><p>• Support innovation by leveraging big data tools and frameworks such as Databricks and Spark.</p>
<p>Robert Half is seeking a <strong>Contract Data Engineer</strong> to support our client’s data and analytics initiatives. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that enable efficient data ingestion, transformation, and delivery. The ideal candidate has strong experience working with modern data platforms, cloud environments, and large-scale datasets.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li><strong>Data Pipeline Development:</strong> Design, build, and maintain scalable ETL / ELT pipelines to ingest, transform, and deliver data from multiple sources.</li><li><strong>Data Architecture:</strong> Develop and optimize data models, schemas, and warehouse structures to support analytics, reporting, and business intelligence needs.</li><li><strong>Cloud Data Platforms:</strong> Work within cloud environments such as <strong>AWS, Azure, or GCP</strong> to deploy and manage data solutions.</li><li><strong>Data Warehousing:</strong> Design and support enterprise data warehouses using platforms such as <strong>Snowflake, Redshift, BigQuery, or Azure Synapse</strong>.</li><li><strong>Big Data Processing:</strong> Develop solutions using big data technologies such as <strong>Spark, Databricks, Kafka, and Hadoop</strong> when required.</li><li><strong>Performance Optimization:</strong> Tune queries, pipelines, and storage solutions for performance, scalability, and cost efficiency.</li><li><strong>Data Quality & Reliability:</strong> Implement monitoring, validation, and alerting processes to ensure data accuracy, integrity, and availability.</li><li><strong>Collaboration:</strong> Work closely with Data Analysts, Data Scientists, Software Engineers, and business stakeholders to understand requirements and deliver data solutions.</li><li><strong>Documentation:</strong> Maintain detailed documentation for pipelines, data flows, and system 
architecture.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to design and implement robust technical solutions for enterprise applications. This role involves creating scalable and secure cloud-native systems on Azure, while collaborating closely with stakeholders to meet business requirements. The ideal candidate will possess strong expertise in data architecture and integration strategies, ensuring high engineering standards and seamless orchestration across systems.<br><br>Responsibilities:<br>• Design and maintain comprehensive technical architectures for enterprise applications, ensuring scalability and security.<br>• Develop integration strategies across multiple systems, including manufacturing, field service, and customer portals.<br>• Collaborate with the Principal Architect to define data contracts and establish effective integration patterns.<br>• Partner with teams in Product, AI/ML Engineering, and business units to translate requirements into functional solutions.<br>• Create reference implementations and frameworks to streamline development processes.<br>• Oversee system-level orchestration and elevate engineering standards across projects.<br>• Implement cloud-native solutions on Azure, leveraging modern tools and technologies.<br>• Provide technical guidance and mentorship to engineering teams, fostering best practices.<br>• Continuously monitor and improve system performance, addressing issues proactively.
<p><strong>Data Engineer (Contract) – St. Louis, MO</strong></p><p><strong>Overview:</strong></p><p>Our company is seeking an experienced Data Engineer to join our team for a contract engagement in St. Louis, MO. As a Data Engineer, you will play a critical role in designing, building, and maintaining robust data pipelines and architectures to support advanced analytics and business intelligence initiatives.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Develop, construct, test, and maintain scalable data architectures and data pipelines.</li><li>Integrate data from diverse sources (structured and unstructured) for analytics and reporting.</li><li>Optimize database performance and ensure data quality.</li><li>Implement data security, privacy standards, and compliance protocols.</li><li>Collaborate with data scientists, analysts, and business stakeholders to gather requirements and deliver effective data solutions.</li><li>Troubleshoot and resolve data-related issues and bottlenecks.</li><li>Automate data ingestion and transformation processes.</li><li>Support ongoing data management, documentation, and best practices.</li></ul>
<p>We are on the lookout for a Data Engineer in New Jersey (1-2 days a week on-site). In this role, you will be required to develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also be expected to have a hands-on approach towards application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS Technologies.</p><p>• Work extensively with the Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize Cloud Technologies and Data Visualization tools to enhance business intelligence.</p><p>• Work with various types of data including Clinical Trials Data, Genomics and Bio Marker Data, Real World Data, and Discovery Data.</p><p>• Maintain familiarity with key industry best practices in a regulated “GxP” environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
We are looking for a skilled Data Engineer to join our team in Los Angeles, California. This role focuses on designing and implementing advanced data solutions to support innovative advertising technologies. The ideal candidate will have hands-on experience with large datasets, cloud platforms, and machine learning, and will play a critical role in shaping our data infrastructure.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines to ensure seamless data extraction, transformation, and loading processes.<br>• Design scalable architectures that support machine learning models and advanced analytics.<br>• Collaborate with cross-functional teams to deliver business intelligence tools, reporting solutions, and analytical dashboards.<br>• Implement real-time data streaming solutions using platforms like Apache Kafka and Apache Spark.<br>• Optimize database performance and ensure efficient data storage and retrieval.<br>• Build and manage resilient data science programs and personas to support AI initiatives.<br>• Lead and mentor a team of data scientists, machine learning engineers, and data architects.<br>• Design and implement strategies for maintaining large datasets, ensuring data integrity and accessibility.<br>• Create detailed technical documentation for workflows, processes, and system architecture.<br>• Stay up-to-date with emerging technologies to continuously improve data engineering practices.
<p>Senior Data Engineer</p><p><strong>Salary:</strong> Up to $170,000 (DOE)</p><p><br></p><p>About the Role</p><p>We are seeking a <strong>Senior Data Engineer</strong> to help design, build, and scale our financial systems. In this role, you will work on high-impact data platforms that power dynamic pricing, forecasting, and machine learning–driven decision-making. You’ll collaborate closely with data scientists, product teams, and business stakeholders to deliver reliable, real-time, and analytics-ready data solutions.</p><p><br></p><p>What You’ll Do</p><ul><li>Design, build, and maintain scalable data pipelines on <strong>Azure</strong>, supporting both batch and real-time workloads</li><li>Develop and optimize data processing using <strong>Python, Spark, and Spark SQL</strong></li><li>Build and manage solutions in <strong>Azure Databricks</strong>, including <strong>MLflow</strong> and <strong>model serving</strong></li><li>Create and orchestrate data workflows using <strong>Azure Synapse Analytics and Synapse Pipelines</strong></li><li>Support analytics and reporting needs through curated datasets for <strong>Power BI</strong></li><li>Enable real-time data ingestion and processing using <strong>Kafka, Azure Event Hubs, and Spark Streaming</strong> (where applicable)</li><li>Partner with data science teams to productionize <strong>machine learning models</strong>, particularly for <strong>dynamic pricing and revenue optimization</strong></li><li>Ensure data quality, reliability, performance, and scalability across platforms</li><li>Mentor junior engineers and contribute to data engineering best practices and standards</li></ul><p>For immediate consideration, apply now and message Reid Gormly on LinkedIn today!</p>
<p>We are seeking a <strong>Data Engineer</strong> to join our growing data team in West LA. This role is perfect for someone early in their data engineering career who wants to work with modern data stacks, cloud technologies, and high-impact analytics projects in a collaborative, fast-paced environment.</p><p><br></p><p> <strong>Compensation:</strong> $100–130K + 5% bonus (flexible for strong candidates)</p><p><br></p><p><strong>About the Role</strong></p><p>In this position, you’ll support the full data lifecycle—from ingesting and transforming raw data to building pipelines, reporting tools, and analytics infrastructure that empower teams across the business. You’ll work with Python, SQL, cloud platforms, ETL solutions, and visualization tools, contributing to the evolution of next-generation data systems supporting large-scale digital operations.</p><p><br></p><p><strong>What You'll Do</strong></p><ul><li>Build, maintain, and optimize ETL/ELT pipelines using tools such as Talend, SSIS, or Informatica</li><li>Work hands-on with cloud platforms (any cloud; GCP preferred) to support data workflows</li><li>Develop reports and dashboards using visualization tools (Looker, Tableau, Power BI, etc.)</li><li>Collaborate with product, analytics, and engineering teams to deliver reliable datasets and insights</li><li>Own data issues end-to-end — from collection and extraction to cleaning and validation</li><li>Support data architecture, pipeline resilience, and performance tuning</li><li>Assist in maintaining and scaling datasets, data models, and analytics environments</li><li>Contribute to real-time streaming initiatives (a plus)</li></ul><p><br></p>
<p>We are looking for an experienced Senior Data Engineer to join our team in Denver, Colorado. In this role, you will design and implement data solutions that drive business insights and operational efficiency. You will collaborate with cross-functional teams to manage data pipelines, optimize workflows, and ensure the integrity and security of data systems.</p><p><br></p><p>Responsibilities:</p><p>• Develop and maintain robust data pipelines to process and transform large datasets effectively.</p><p>• Advise on tools / technologies to implement. </p><p>• Collaborate with stakeholders to understand data requirements and translate them into technical solutions.</p><p>• Design and implement ETL processes to facilitate seamless data integration.</p><p>• Optimize data workflows and ensure system performance meets organizational needs.</p><p>• Work with Apache Spark, Hadoop, and Kafka to build scalable data systems.</p><p>• Create and maintain SQL queries for data extraction and analysis.</p><p>• Ensure data security and integrity by adhering to best practices.</p><p>• Troubleshoot and resolve issues in data systems to minimize downtime.</p><p>• Provide technical guidance and mentorship to less experienced team members.</p><p>• Stay updated on emerging technologies to enhance data engineering practices.</p>
<p>We are seeking a seasoned <strong>Databricks Data Engineer</strong> with expertise in Azure cloud services and the Databricks Lakehouse platform. The role involves designing and optimizing large-scale data pipelines, modernizing cloud-based data ecosystems, and enabling secure, governed data solutions. Strong skills in SQL, Python, PySpark, ETL/ELT frameworks, and experience with Delta Lake, Unity Catalog, and CI/CD automation are essential.</p><p> </p><p><strong>About the Role</strong></p><p>We are seeking a highly skilled Databricks Data Engineer with deep expertise in data engineering, Azure cloud services, and Databricks Lakehouse technologies. This role will focus on building scalable, secure, and high-performance data solutions for enterprise analytics.</p><p><strong>Key Responsibilities</strong></p><ul><li>Design, build, and optimize large-scale data pipelines on the Databricks Lakehouse platform.</li><li>Modernize the Azure-based data ecosystem, including architecture, data modeling, security, and CI/CD automation.</li><li>Implement orchestration and workflow automation using Apache Airflow and similar tools.</li><li>Work with regulated datasets, ensuring compliance and governance best practices.</li></ul><p><br></p>
<p>Essential Duties and Responsibilities:</p><p> · Knowledge of database coding and tables; as well as general database management</p><p> · Understanding of client management, support, and communicating progress and timelines accordingly</p><p> · Organizes and/or leads Informatics projects in the implementation/use of new data warehouse tools and systems</p><p> · Ability to train new hires; as well as lead in training of new client staff members</p><p> · Understanding data schema and the analysis of database performance and accuracy</p><p> · Understanding of ETL tools, OLAP design, and data quality processes</p><p> · Knowledge of Business Intelligence life cycle: planning, design, development, validation, deployment, documentation, and ongoing support</p><p> · Working knowledge of electronic medical records software (eCW, Nextgen, etc) and the backend storage of that data</p><p> · Ability to generate effective probability modeling and statistics as it pertains to healthcare outcomes and financial risks</p><p> · Ability to manage sometimes lengthy and complicated projects throughout the life cycle and meet the deadlines associated with these projects</p><p> · Development, maintenance, technical support of various reports and dashboards</p><p> · Knowledge of Microsoft® SQL including coding language, creation of tables, stored procedures, and query design</p><p> · Fundamental understanding of outpatient healthcare workflows</p><p> · Knowledge of relational database concepts and flat/formatted file processing.</p><p> · Possesses strong commitment to data validation processes in order to ensure accuracy of reporting (internal quality control)</p><p> · Possesses a firm grasp of patient confidentiality and system security practices to prevent HIPAA and other security violations.</p><p> · Knowledge of IBM Cognos® or other database reporting software such as SAS, SPSS, and Crystal Reports</p><p> · Ability to meet the needs of other members of the Informatics department 
to maximize efficiency and minimize complexity of end-user products</p><p><br></p><p>Requirements:</p><p> · Education: Bachelor's Degree</p><p> · Proven experience as a dbt Developer or in a similar Data Engineer role.</p><p> · Expert-level SQL skills — capable of writing, tuning, and debugging complex queries across large datasets.</p><p> · Strong experience with Snowflake or comparable data warehouse technologies (BigQuery, Redshift, etc.).</p><p> · Proficiency in Python for scripting, automation, or data manipulation.</p><p> · Solid understanding of data warehousing concepts, modeling, and ELT workflows.</p><p> · Familiarity with Git or other version control systems.</p><p> · Experience working with cloud-based platforms such as AWS, GCP, or Azure.</p><p><br></p><p><br></p>
<p>Hands-On Technical SENIOR Microsoft Stack Data Engineer / On-Prem to Cloud Senior ETL Engineer. WEEKLY HYBRID position with major flexibility! FULL Microsoft on-prem stack.</p><p><br></p><p>LOCATION: HYBRID WEEKLY in Des Moines. You must reside in the Des Moines area for weekly onsite work. No travel back and forth, and not a remote position! If you live in Des Moines, you can eventually work MOSTLY remote!! This position has upside with training in Azure.</p><p><br></p><p>IMMEDIATE HIRE! Solve real business problems.</p><p><br></p><p>We are seeking a Hands-On Technical SENIOR Microsoft Stack Data Engineer / SENIOR Data Warehouse Engineer / Senior ETL Developer / Azure Data Engineer (Direct Hire) who is looking to help modernize: FIRST, REBUILD an on-prem data warehouse, structuring disparate data for consumable reporting; then lead and build out a data warehouse and data lake in the CLOUD.</p><p><br></p><p>YOU WILL BE DOING ALL ASPECTS OF data engineering. Must have data warehouse and data lake skills. You will be in the technical weeds and technical data day to day, BUT you could grow into the Technical Leader of this team. ETL skills like SSIS and working with disparate data are required. SSAS is a plus! Fact and dimension data warehouse experience is required.</p><p><br></p><p>This is a Permanent Direct Hire, Hands-On Technical Data Engineering position with one of our clients in Des Moines, up to 155K plus bonus.</p><p><br></p><p>PERKS: Bonus, 2 1/2 day weekends !</p>
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
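<p>To make the indexing and query-optimization work above concrete, here is a minimal, hypothetical sketch using Python's built-in SQLite: adding an index on the filtered column changes the query plan from a full table scan to an index search. Schema and names are illustrative only.</p>

```python
import sqlite3

# Hypothetical sketch: demonstrate how an index changes a query plan.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql: str) -> str:
    """Return SQLite's query-plan detail string for a statement."""
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[3]

before = plan("SELECT total FROM orders WHERE customer_id = 42")
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan("SELECT total FROM orders WHERE customer_id = 42")

print(before)  # typically a full table scan, e.g. "SCAN orders"
print(after)   # an index search via idx_orders_customer
```

<p>The same discipline applies at scale: inspect the plan, add the selective index, and re-check that the optimizer actually uses it.</p>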
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, Netsuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
<p><strong>Robert Half</strong> is actively partnering with an Austin-based client to identify a <strong>Data Engineer (contract).</strong> In this role, you will support the continued growth of our technology and analytics capabilities. This role will focus on building, enhancing, and maintaining reliable data pipelines and infrastructure, ensuring high-quality data flows across the organization. This person will work closely with multiple teams to support data accessibility, performance, and scalability. <strong>This role is onsite in Austin, TX. </strong></p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and maintain robust data pipeline architectures.</li><li>Administer and configure enterprise pipeline orchestration tools (e.g., AWS Glue, Airflow, Fivetran).</li><li>Build and manage large, complex datasets to meet both technical and business requirements.</li><li>Identify opportunities to streamline internal processes, automate repetitive tasks, enhance data delivery, and scale data infrastructure.</li><li>Create systems and frameworks for efficient extraction, transformation, and loading of data from diverse sources using appropriate technologies.</li><li>Collaborate with stakeholders to troubleshoot data issues and support ongoing infrastructure needs.</li><li>Ensure data is protected and properly governed within the environment.</li><li>Partner with analytics teams to increase the usability and performance of data systems.</li><li>Develop and maintain processes for data transformation, metadata management, data dependencies, and workload optimization.</li><li>Maintain and evolve the organization’s data model.</li><li>Perform other related duties as assigned.</li></ul>
<p>We’re seeking a Data Engineer to build and maintain scalable data pipelines that power analytics, reporting, and machine learning across the organization. You’ll turn raw data into clean, reliable, and accessible datasets that drive business decisions.</p><p>What You’ll Do</p><ul><li>Design and maintain data warehouses and data lakes</li><li>Build ETL/ELT pipelines integrating data from multiple systems</li><li>Optimize performance for large-scale datasets</li><li>Ensure data quality, security, and governance</li><li>Collaborate with analysts and ML teams to create analytics-ready datasets</li><li>Automate workflows and monitoring</li></ul><p><br></p>
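<p>As a small, hypothetical sketch of the data-quality work listed above, a validation gate like the following (field names are illustrative, not from any real schema) can screen a batch before it reaches analysts and ML teams:</p>

```python
# Hypothetical data-quality gate: validate a batch of records before load.
from datetime import date

REQUIRED = ("order_id", "amount", "order_date")

def validate_batch(records):
    """Return (valid_rows, errors) for a batch of dict records."""
    valid, errors = [], []
    for i, rec in enumerate(records):
        missing = [k for k in REQUIRED if rec.get(k) is None]
        if missing:
            errors.append((i, f"missing fields: {missing}"))
        elif rec["amount"] < 0:
            errors.append((i, "negative amount"))
        else:
            valid.append(rec)
    return valid, errors

batch = [
    {"order_id": 1, "amount": 19.99, "order_date": date(2024, 1, 2)},
    {"order_id": 2, "amount": -5.00, "order_date": date(2024, 1, 2)},
    {"order_id": 3, "amount": 7.50, "order_date": None},
]
valid, errors = validate_batch(batch)
print(len(valid), len(errors))  # 1 2
```

<p>In practice the same checks would feed monitoring and alerting, so bad batches are quarantined rather than silently loaded.</p>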
We are looking for a skilled Data Engineer to join our team in Houston, Texas. As part of the Manufacturing industry, you will play a pivotal role in developing and maintaining data infrastructure critical to our operations. This is a long-term contract position that offers the opportunity to work on innovative projects and collaborate with a dynamic team.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines to support business operations and analytics.<br>• Develop, test, and maintain ETL processes for efficient data extraction, transformation, and loading.<br>• Utilize tools such as Apache Spark and Hadoop to manage and process large datasets.<br>• Integrate and optimize data streaming platforms like Apache Kafka.<br>• Collaborate with cross-functional teams to ensure data solutions align with organizational goals.<br>• Monitor and troubleshoot data systems to ensure optimal performance and reliability.<br>• Create and maintain documentation for data processes and systems.<br>• Stay updated on emerging technologies and recommend improvements to enhance data engineering practices.<br>• Ensure data security and compliance with industry standards and regulations.
We are looking for a skilled Data Engineer to support our organization's data initiatives in Savannah, Georgia. This Contract to permanent role focuses on managing, optimizing, and securing data systems to drive strategic decision-making and improve overall performance. The ideal candidate will work closely with technology teams, analytics departments, and business stakeholders to ensure seamless data integration, accuracy, and scalability.<br><br>Responsibilities:<br>• Design and implement robust data lake and warehouse architectures to support organizational needs.<br>• Develop efficient ETL pipelines to process and integrate data from multiple sources.<br>• Collaborate with analytics teams to create and refine data models for reporting and visualization.<br>• Monitor and maintain data systems to ensure quality, security, and availability.<br>• Troubleshoot data-related issues and perform in-depth analyses to identify solutions.<br>• Define and manage organizational data assets, including SaaS tools and platforms.<br>• Partner with IT and security teams to meet compliance and governance standards.<br>• Document workflows, pipelines, and architecture for knowledge sharing and long-term use.<br>• Translate business requirements into technical solutions that meet reporting and analytics needs.<br>• Provide guidance and mentorship to team members on data usage and best practices.
<p>Position Overview</p><p>We are seeking a Data Engineer to support and enhance a Databricks‑based data platform during its development phase. This role is focused on building reliable, scalable data solutions early in the lifecycle—not production firefighting.</p><p>The ideal candidate brings hands‑on experience with Databricks, PySpark, Python, and a working understanding of Azure cloud services. You will partner closely with Data Engineering teams to ensure pipelines, notebooks, and workflows are designed for long‑term scalability and production readiness.</p><p><br></p><p>Key Responsibilities</p><ul><li>Develop and enhance Databricks notebooks, jobs, and workflows</li><li>Write and optimize PySpark and Python code for distributed data processing</li><li>Assist in designing scalable and reliable data pipelines</li><li>Apply Spark performance best practices: partitioning, caching, joins, file sizing</li><li>Work with Delta Lake tables, schemas, and data models</li><li>Perform data validation and quality checks during development cycles</li><li>Support cluster configuration, sizing, and tuning for development workloads</li><li>Identify performance bottlenecks early and recommend improvements</li><li>Partner with Data Engineers to prepare solutions for future production rollout</li><li>Document development standards, patterns, and best practices</li></ul>
<p>We are seeking a <strong>Senior Data Engineer</strong> to join a highly collaborative, hands-on team in West Hollywood. This is an on-site role for someone who enjoys building, owning, and evolving data platforms end-to-end. You’ll work closely with technical and non-technical stakeholders, take ownership of complex data challenges, and help drive cross-functional initiatives.</p><p>This role requires someone who is curious, independent, and comfortable rolling up their sleeves to solve foundational and ambiguous problems.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain scalable data pipelines using Python and SQL</li><li>Develop and manage cloud-based data infrastructure (GCP, AWS, or Azure)</li><li>Own data orchestration workflows (Prefect preferred; Airflow or Dagster experience is transferable)</li><li>Implement robust data modeling practices (dbt preferred; strong SQL foundations required)</li><li>Reconcile multiple data sources and establish master data management strategies</li><li>Partner with stakeholders to define requirements, track deliverables, and run cross-functional projects</li><li>Support containerized workloads and data services using Docker and Kubernetes</li><li>Evaluate and integrate low-code or no-code data syncing tools where appropriate</li></ul><p>What We’re Looking For</p><ul><li>Someone who takes ownership and can be “pointed at a problem” and run with it</li><li>A builder who enjoys both strategy and execution</li><li>A strong data engineering foundation with curiosity about modern AI-enabled architectures</li></ul><p>If you’re excited about building resilient data systems, collaborating cross-functionally, and pushing into AI-enhanced data platforms—this role offers the opportunity to make a meaningful impact from day one.</p><p><br></p><p>For immediate consideration, direct message Reid Gormly on LinkedIn and Apply Now!</p>
<p>Key Responsibilities</p><ul><li>Design, build, and maintain end-to-end data pipelines using Azure-native services</li><li>Develop and optimize ETL/ELT processes using Azure Data Factory, Databricks, and SQL</li><li>Create and manage data models for analytics and reporting use cases</li><li>Integrate data from ERP, CRM, operational systems, and external sources</li><li>Ensure data quality, reliability, and performance across data platforms</li><li>Implement data governance, security, and access controls</li><li>Monitor and troubleshoot pipeline failures and performance issues</li><li>Collaborate with BI teams, analysts, and business stakeholders to support reporting needs</li><li>Maintain technical documentation and data standards</li></ul><p><br></p>