We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, develop, and maintain data pipelines and systems that support critical business operations within the manufacturing industry. Your expertise in data engineering technologies and frameworks will be key to ensuring efficient data processing and integration.<br><br>Responsibilities:<br>• Develop, optimize, and maintain scalable data pipelines to process large datasets efficiently.<br>• Implement ETL processes to extract, transform, and load data from various sources into centralized systems.<br>• Leverage Apache Spark, Hadoop, and Kafka to design solutions for real-time and batch data processing.<br>• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Document data workflows and processes to ensure clarity and maintainability.<br>• Conduct testing and validation of data systems to ensure accuracy and quality.<br>• Apply Python programming to automate data tasks and streamline workflows.<br>• Stay updated on industry trends and emerging technologies to propose innovative solutions.<br>• Ensure compliance with data security and privacy standards in all engineering efforts.
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
<p>We are looking for a talented Data Engineer to join our team in Fort Lauderdale, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This long-term contract position offers an exciting opportunity to work in the manufacturing industry, leveraging your expertise in data processing and engineering. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using tools such as Apache Spark and Python.<br>• Design efficient ETL processes to extract, transform, and load data from various sources.<br>• Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.<br>• Implement and manage big data solutions using Apache Hadoop and Apache Kafka.<br>• Monitor and optimize the performance of data systems to ensure reliability and scalability.<br>• Ensure data quality and integrity through rigorous testing and validation processes.<br>• Troubleshoot and resolve issues related to data pipelines and infrastructure.<br>• Maintain documentation for data workflows and processes to ensure clarity and consistency.<br>• Stay updated on emerging technologies and best practices in data engineering to continuously improve systems.
We are looking for an experienced Data Engineer to join our team in Cincinnati, Ohio. This long-term contract position offers the opportunity to work on cutting-edge data engineering projects while collaborating with multidisciplinary teams to deliver high-quality solutions. The ideal candidate will have a strong background in Databricks and big data technologies, along with a passion for optimizing data processes and systems.<br><br>Responsibilities:<br>• Design, build, and enhance data pipelines using Databricks Runtime, Delta Lake, Autoloader, and Structured Streaming.<br>• Implement secure and governed data access protocols utilizing Unity Catalog, workspace controls, and audit configurations.<br>• Manage and integrate structured and unstructured data from diverse sources, including APIs and cloud storage.<br>• Develop and maintain notebook-based workflows and manage jobs using Databricks Workflows and Jobs.<br>• Apply best practices for performance tuning, scalability, and cost optimization in Databricks environments.<br>• Collaborate with data scientists, analysts, and business stakeholders to deliver clean and reliable datasets.<br>• Support continuous integration and deployment processes for Databricks jobs and system configurations.<br>• Ensure high standards of data quality and security across all engineering tasks.<br>• Troubleshoot and resolve issues to maintain operational efficiency in data pipelines.
<p>We are seeking a Senior Data Engineer – Ingest to help transform data into meaningful insights and power innovation across the organization. In this role, you will work with a collaborative team of technologists to build scalable data solutions, integrate diverse data sources, and strengthen the core data platform. Your engineering expertise will directly support analytics, data science, operations, and key business stakeholders.</p><p>If you’re passionate about building high‑quality data systems that make a measurable impact, this role offers the opportunity to shape the future of a large, data‑driven organization.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Maintain, update, and expand configuration‑driven data pipelines within the core data platform.</li><li>Build tools and services supporting data discovery, lineage, governance, and privacy.</li><li>Partner with software engineers, data engineers, architects, and product managers to deliver reliable and scalable data solutions.</li><li>Help define and document data standards, naming conventions, pipeline best practices, and system guidelines.</li><li>Ensure the reliability, accuracy, and operational efficiency of datasets to meet SLAs.</li><li>Participate in Agile/Scrum ceremonies and contribute to ongoing process improvements.</li><li>Collaborate closely with users and stakeholders to understand needs and prioritize enhancements.</li><li>Maintain detailed technical documentation to support data quality, governance, and compliance requirements.</li></ul><p><br></p>
We are looking for an experienced Data Engineer to join our team on a long-term contract basis. Based in Houston, Texas, this role offers an exciting opportunity to work with cutting-edge data technologies, design scalable solutions, and contribute to data-driven decision-making processes. If you are passionate about optimizing data systems and driving innovation, we encourage you to apply.<br><br>Responsibilities:<br>• Develop, maintain, and optimize scalable data pipelines using Apache Spark and Python.<br>• Implement ETL processes to ensure seamless extraction, transformation, and loading of data across systems.<br>• Collaborate with cross-functional teams to integrate Apache Hadoop and Apache Kafka into the data architecture.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Design and maintain data models, ensuring alignment with business requirements.<br>• Conduct thorough testing and validation of data processes to guarantee accuracy.<br>• Document data workflows and processes for future reference and team collaboration.<br>• Provide technical guidance and support to team members on data engineering best practices.<br>• Stay current on emerging technologies and trends in big data and analytics.<br>• Contribute to improving data governance and security protocols.
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This contract-to-permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
We are looking for a skilled Data Engineer to join our team in Washington, District of Columbia. In this role, you will play a key part in designing and implementing secure, scalable solutions to support data and analytics initiatives. This is a long-term contract position, offering the opportunity to work with cutting-edge technologies and contribute to impactful projects.<br><br>Responsibilities:<br>• Develop, test, and maintain robust data pipelines and engineering solutions to support analytics and integrate new data sources.<br>• Collaborate with team members, stakeholders, and external vendors to evaluate and implement reliable, scalable, and secure technologies.<br>• Create efficient, automated processes to handle repetitive data management tasks.<br>• Conduct targeted data manipulation and analysis across diverse datasets.<br>• Implement advanced security measures within data warehouses and analytics platforms to counter evolving threats.<br>• Document technical processes and solutions to ensure seamless collaboration and knowledge sharing.<br>• Monitor and optimize system performance to ensure scalability and reliability.<br>• Stay updated on emerging data engineering trends and incorporate them into workflows.
<p>Our transportation client is seeking a <strong>Data Engineer</strong> to support large‑scale logistics operations by building reliable, scalable, and cloud‑based data pipelines. This role is hands‑on, focused on delivering high‑quality data flows that improve shipment visibility, operational efficiency, and real‑time analytics across the supply chain.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, build, and maintain <strong>ETL/ELT pipelines</strong> that process high‑volume operational and logistics data</li><li>Develop transformation logic and automation using <strong>Python</strong>, <strong>SQL</strong>, and Azure-native tooling</li><li>Implement and orchestrate workflows in <strong>Azure Data Factory</strong>, <strong>Synapse</strong>, and <strong>Databricks</strong></li><li>Optimize data lake and warehouse performance, including tuning queries, pipelines, and storage layers</li><li>Monitor pipeline health and proactively troubleshoot failures, bottlenecks, and data quality issues</li><li>Contribute to data modeling efforts to support analytics, reporting, and downstream applications</li><li>Collaborate with BI, product, supply chain, and application teams to align pipelines with business needs</li><li>Maintain strong documentation around workflows, standards, and operational procedures</li><li>Support governance initiatives related to <strong>data quality</strong>, lineage, cataloging, and access policies</li><li>Follow best practices for security, compliance, and cloud resource management</li></ul><p><br></p>
<p>We are seeking a highly experienced Senior Data Engineer to lead the design, development, and operationalization of advanced data and AI/ML solutions. This role requires a strong technical foundation in cloud platforms, modern data engineering frameworks, ML system deployment, and semantic data modeling. The ideal candidate combines deep technical expertise with strong leadership and communication skills to guide teams and drive strategic initiatives across the organization.</p><p><br></p><p><strong>Technical Leadership</strong></p><ul><li>Lead the end-to-end design, development, deployment, and maintenance of large-scale data engineering and machine learning pipelines.</li><li>Architect and operationalize AI/ML systems in production environments, ensuring high reliability, performance, and observability.</li><li>Leverage cloud platforms (GCP or AWS) to build scalable, secure, and cost‑efficient data and ML infrastructure.</li><li>Utilize streaming and real-time processing technologies such as Apache Kafka and Apache Flink to support event-driven architectures and advanced analytics use cases.</li><li>Develop robust data transformations and semantic models using tools such as dbt.</li><li>Implement and maintain Infrastructure as Code using Terraform or similar frameworks.</li><li>Ensure cloud architectures follow best practices for security, compliance, and governance.</li></ul><p><strong>Team & Cross-Functional Leadership</strong></p><ul><li>Provide technical leadership, mentorship, and guidance to data engineers, ML engineers, and other stakeholders.</li><li>Collaborate closely with Data Science, DevOps, Security, and Product teams to ensure cohesive delivery of data and ML initiatives.</li><li>Communicate complex technical concepts clearly to both technical and non-technical audiences, supporting informed decision‑making.</li></ul><p><strong>Operational Excellence</strong></p><ul><li>Maintain production AI/ML systems with a focus on reliability, monitoring, versioning, and lifecycle management.</li><li>Establish and uphold engineering best practices, coding standards, CI/CD frameworks, and documentation.</li><li>Continuously evaluate emerging technologies, frameworks, and methodologies to strengthen the organization’s data and ML capabilities.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This contract-to-permanent position offers an exciting opportunity to work at the intersection of data engineering, analytics, and business strategy. If you have a strong background in building and optimizing data pipelines and are passionate about leveraging technology to drive insights, we encourage you to apply.<br><br>Responsibilities:<br>• Design, develop, and optimize scalable data pipelines and workflows to support business analytics.<br>• Collaborate with cross-functional teams to gather and analyze data requirements.<br>• Implement ETL processes to extract, transform, and load data from diverse sources.<br>• Utilize tools such as Apache Spark and Hadoop to manage large-scale data processing.<br>• Integrate streaming data systems using Apache Kafka to enhance real-time analytics.<br>• Monitor and troubleshoot data flow and systems to ensure high performance and reliability.<br>• Develop and maintain documentation for data engineering processes and systems.<br>• Ensure data security and integrity across all platforms and processes.<br>• Work closely with stakeholders to translate business needs into technical solutions.<br>• Stay updated with industry trends and emerging technologies to improve data engineering practices.
We are looking for an experienced Data Engineer to join our team in Chicago, Illinois. In this role, you will design and implement data solutions that drive business insights and support strategic decision-making. Your expertise in Microsoft Fabric and Azure Databricks will be key in optimizing data workflows and ensuring the reliability of our data systems.<br><br>Responsibilities:<br>• Develop, implement, and maintain scalable data pipelines to support business analytics and reporting needs.<br>• Utilize Microsoft Fabric and Azure Databricks to design efficient data architectures and workflows.<br>• Collaborate with cross-functional teams to understand data requirements and deliver tailored solutions.<br>• Ensure data integrity and security across all systems and processes.<br>• Optimize data storage and retrieval processes for improved performance and scalability.<br>• Monitor system performance and troubleshoot issues as needed to ensure seamless operations.<br>• Document processes and procedures to maintain a clear record of data engineering solutions.<br>• Stay updated with emerging technologies and industry best practices to enhance data engineering capabilities.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This contract position offers an exciting opportunity to leverage your expertise in data processing and analytics within the dynamic energy and natural resources industry. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using Apache Spark, Python, and ETL processes.<br>• Design and implement data storage solutions utilizing Apache Hadoop for efficient data management.<br>• Build real-time data streaming architectures with Apache Kafka to support operational needs.<br>• Optimize data workflows to ensure high performance and reliability across systems.<br>• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.<br>• Perform data quality checks and validation to ensure accuracy and consistency of datasets.<br>• Troubleshoot and resolve technical issues related to data processing and integration.<br>• Document processes and workflows to ensure knowledge sharing and operational transparency.<br>• Monitor and improve system performance, ensuring the infrastructure meets business demands.
We are looking for a Senior Database Engineer to take on a critical role in shaping the future of our global data platform. In this position, you will lead technical strategy, architect robust multi-cloud systems, and oversee initiatives to ensure reliability, scalability, and cost efficiency. You will have a hands-on approach, providing mentorship and collaborating with leadership to drive impactful technical decisions. This is a contract opportunity with the potential for a permanent position, located in Lehi, Utah.<br><br>Responsibilities:<br>• Develop and execute the technical roadmap for a scalable and reliable data infrastructure.<br>• Architect and implement multi-region, cross-account data platforms to support global operations.<br>• Establish and enforce engineering standards for database design, data pipelines, reliability, and observability.<br>• Lead post-incident reviews and implement solutions to prevent recurring issues.<br>• Collaborate with product and engineering teams to identify technical risks and optimize roadmaps.<br>• Design and oversee large-scale data migrations, ensuring fault tolerance and self-healing capabilities.<br>• Optimize database performance through indexing, query tuning, and capacity planning.<br>• Implement robust security measures, including encryption, secrets management, and access controls.<br>• Partner with cross-functional teams to align business requirements with technical solutions.<br>• Provide hands-on leadership in developing critical systems and resolving complex production incidents.
<p><strong>For immediate response please message Valerie Nielsen on LinkedIn or email!</strong></p><p><br></p><p><strong>Job Title:</strong> Senior Data Engineer</p><p> <strong>Location:</strong> Hybrid – Westwood (Los Angeles, CA) near University of California, Los Angeles</p><p> <strong>Compensation:</strong> $175,000 – $185,000 base salary + 10% annual bonus</p><p> <strong>Employment Type:</strong> Full-Time</p><p><br></p><p>Overview</p><p>We are seeking a <strong>Senior Data Engineer</strong> to join a growing data team in <strong>Westwood, CA</strong>. This role will focus on designing and building scalable data pipelines, supporting analytics and reporting initiatives, and improving data infrastructure across the organization.</p><p>The ideal candidate is highly experienced with <strong>Snowflake, dbt, Python</strong>, and modern data pipeline architecture, and enjoys working closely with analytics and business teams to deliver reliable, high-quality data. Experience integrating data from CRM platforms such as <strong>Salesforce</strong> is a strong plus.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, and maintain <strong>scalable data pipelines</strong> supporting analytics, reporting, and operational data needs</li><li>Build and optimize data models and transformations using <strong>dbt</strong> within a <strong>Snowflake</strong> data warehouse environment</li><li>Develop robust ETL/ELT workflows using <strong>Python</strong> and modern data engineering best practices</li><li>Collaborate with analytics teams to deliver clean, reliable datasets used in <strong>Power BI</strong> dashboards and reporting</li><li>Ensure data quality, reliability, and performance across the data platform</li><li>Optimize Snowflake warehouse performance and manage cost-efficient data storage and compute usage</li><li>Integrate data from internal and external systems, including CRM and SaaS platforms</li><li>Partner with stakeholders across engineering, 
product, and business teams to define data requirements and solutions</li><li>Maintain documentation and promote data engineering standards and best practices</li></ul><p><br></p>
<p><strong>Overview</strong></p><p>We are seeking a Senior Data Engineer to support a major Salesforce Phase 2 data migration initiative. This role will focus heavily on building and optimizing data pipelines, developing ETL workflows, and moving CRM data from Salesforce into Databricks.</p><p>The engineer will work closely with a senior team member, contribute to Scrum ceremonies, and play a key role in developing the core CRM data environment used by the advertising organization.</p><p><br></p><p><strong>Key Responsibilities</strong></p><p><strong>Data Engineering & Migration</strong></p><ul><li>Develop ETL jobs that move and transform Salesforce data into Databricks.</li><li>Build, test, and maintain high‑volume data pipelines across AWS + Databricks.</li><li>Perform data migration, data integration, and pipeline development (including Mulesoft-related work).</li><li>Ensure all pipelines are reliable, scalable, and optimized for production.</li></ul><p><strong>Development & Infrastructure</strong></p><ul><li>Use Python and PySpark to build ETL components and transformation logic.</li><li>Leverage Spark/PySpark for distributed processing at scale (must‑have).</li><li>Use Terraform to provision and manage cloud infrastructure.</li><li>Set up CI/CD pipelines using Concourse or GitHub Actions for automated deployments.</li></ul><p><strong>Quality, Documentation & Support</strong></p><ul><li>Document ETL processes, pipelines, and data flows.</li><li>Participate in testing, QA, and validation of migrated datasets.</li><li>Provide post‑delivery support and proactively mitigate project risks or single points of failure (SPOF).</li><li>Troubleshoot production issues and implement long‑term fixes to maintain pipeline stability.</li></ul><p><strong>Collaboration</strong></p><ul><li>Work closely with engineering teammates to translate business requirements into working pipelines.</li><li>Participate in weekly Scrum ceremonies.</li><li>Contribute to shared best practices and 
continuous improvement across the data engineering team.</li></ul><p><br></p>
<p><strong>***Please email Valerie Nielsen for immediate response*** </strong></p><p><br></p><p><strong>Job Title:</strong> Data Engineer</p><p> <strong>Location:</strong> West Los Angeles, CA (Onsite)</p><p> <strong>Salary:</strong> $150,000 Base + Bonus</p><p><strong>Overview</strong></p><p> We are seeking a <strong>Data Engineer</strong> to join our team onsite in <strong>West Los Angeles</strong>. This role is ideal for someone early in their career who has strong technical fundamentals, enjoys working with data, and has curiosity around modern AI tools. The ideal candidate has a strong analytical mindset and enjoys solving complex data problems while building scalable pipelines and data models.</p><p><strong>Responsibilities</strong></p><ul><li>Build, maintain, and optimize data pipelines and ETL processes</li><li>Write efficient and scalable <strong>SQL and Python</strong> code for data transformation and analysis</li><li>Work with cloud data platforms in <strong>AWS or Azure</strong></li><li>Support data modeling, data warehouse development, and reporting pipelines</li><li>Collaborate with analytics and product teams to deliver clean, reliable datasets</li><li>Explore and leverage <strong>AI tools (e.g., Claude or similar)</strong> to improve workflows and productivity</li><li>Ensure data quality, performance, and scalability across systems</li></ul><p><br></p>
<p>We are looking for an experienced Senior Data Engineer to join our team in Boston, Massachusetts. In this role, you will be responsible for designing and building a robust data platform from the ground up, playing a pivotal part in shaping the data strategy and supporting AI-driven initiatives. This is a unique opportunity to contribute to the creation of a new data engineering function within a dynamic financial services environment. This role is hybrid, onsite in Boston 3 days a week. </p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and implement a scalable data platform using Microsoft Fabric and other technologies within the Microsoft ecosystem.</p><p>• Collaborate with stakeholders to define the data strategy and implement solutions that align with business goals.</p><p>• Oversee and manage external consultants assisting with the development of the data platform.</p><p>• Support AI enablement initiatives by ensuring the data architecture meets analytical and operational needs.</p><p>• Create and maintain ETL processes to ensure efficient data extraction, transformation, and loading.</p><p>• Optimize database performance across SQL, NoSQL, and other database systems.</p><p>• Utilize Python for data engineering tasks, including scripting and automation.</p><p>• Work closely with IT and analytics teams to ensure seamless integration of the data platform into existing systems.</p><p>• Provide technical leadership and guidance while exploring future opportunities to build and expand the data engineering function.</p><p>• Ensure compliance with industry standards and best practices in data security and management.</p>
<p>The Senior Data Engineer plays a key role in architecting, developing, and operating reliable, production-ready data solutions that enable analytics, automation, and operational processes across our client’s organization.</p><p><br></p><p>Operating within a modern, cloud-based data ecosystem, this role is responsible for bringing together data from internal platforms and external partners, transforming it into trusted, high-quality assets, and delivering it consistently to downstream users and systems. The work spans the full data lifecycle—ingestion, orchestration, transformation, and delivery—and blends advanced SQL development with Python-based pipeline and workflow automation.</p><p><br></p><p>This role sits at the intersection of data and systems engineering and works closely with Business Intelligence, Business Technology, and operational teams to ensure data solutions are scalable, dependable, and aligned with real business outcomes.</p><p><br></p>
<p>We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. This role will support data-driven decision-making by ensuring reliable data flow, transformation, and accessibility across the organization.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain ETL/ELT data pipelines</li><li>Develop and optimize data models and data architectures</li><li>Integrate data from multiple sources (APIs, databases, third-party systems)</li><li>Ensure data quality, integrity, and reliability</li><li>Collaborate with data analysts, data scientists, and business stakeholders</li><li>Monitor and troubleshoot data pipeline performance issues</li><li>Implement best practices for data governance and security</li></ul><p><br></p>
We are seeking a Senior Data Engineer to join a growing data engineering team responsible for building and scaling an enterprise data platform. This role will focus on developing cloud-based data pipelines within Google Cloud Platform (GCP) while also supporting elements of a legacy on-premise data warehouse environment during an ongoing cloud migration.<br><br>The ideal candidate will have strong experience building scalable data pipelines, event-driven data architectures, and cloud-native data services. This is a great opportunity to contribute to a rapidly expanding data ecosystem and help drive the transition to modern cloud data platforms.<br><br>Key Responsibilities:<br>• Design, build, and maintain data pipelines within Google Cloud Platform (GCP).<br>• Develop event-driven data streaming solutions using Pub/Sub.<br>• Build and maintain Python-based services using Cloud Run.<br>• Develop and optimize BigQuery datasets and queries.<br>• Integrate new data sources into the enterprise data platform.<br>• Maintain and support existing ETL processes within SQL Server.<br>• Work with SSIS and stored procedures in legacy data environments.<br>• Monitor, troubleshoot, and optimize data pipeline performance.<br>• Collaborate with engineering teams to support data-driven initiatives.<br>• Participate in on-call rotations for production systems.<br><br>Required Qualifications:<br>• 5+ years of experience in data engineering.<br>• Strong experience with Google Cloud Platform (GCP).<br>• Experience building data pipelines and ETL processes.<br>• Experience with Pub/Sub or event-driven data streaming.<br>• Strong experience with BigQuery.<br>• Proficiency in Python.<br>• Experience with Cloud Run or similar serverless services.<br>• Strong SQL experience, including SQL Server.<br>• Experience with SSIS or similar ETL tools.
<p>We are looking for a talented Data Engineer to join our team in Miami, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
We are looking for an experienced Data Engineer to join our team in New York, New York. In this role, you will design, build, and maintain data infrastructure to support business intelligence and analytics needs. The ideal candidate will have a strong technical background, a passion for working with complex datasets, and expertise in cloud-based data platforms.<br><br>Responsibilities:<br>• Develop, implement, and optimize ETL pipelines to ensure efficient data processing and integration.<br>• Design and maintain scalable data solutions, including data warehouses and data lakes.<br>• Collaborate with cross-functional teams to identify data requirements and deliver actionable insights.<br>• Utilize Snowflake, AWS, and other cloud-based platforms to manage data infrastructure and ensure performance optimization.<br>• Leverage Python and SQL to build robust data workflows and automate processes.<br>• Employ orchestration tools like Airflow and dbt to streamline data operations.<br>• Support data analytics and visualization efforts by enabling the creation of impactful dashboards using tools such as Tableau.<br>• Work with marketing and product data sources, including platforms like Google Analytics, to extract and integrate valuable insights.<br>• Implement CI/CD pipelines and DevOps practices to enhance data engineering processes.<br>• Ensure data security and compliance across all systems and tools.