<p>We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and analytics solutions that support enterprise reporting and advanced dashboards. This role will work with cross‑cloud data sources, including SAP, GCP, and BigQuery, and partner closely with analytics and business teams to deliver high‑quality, analytics‑ready datasets powering BI and AI initiatives.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain data pipelines following <strong>Medallion Architecture (Bronze, Silver, Gold)</strong> best practices.</li><li>Develop and support ETL processes pulling data from <strong>SAP, Google Cloud Platform (GCP), and BigQuery</strong>.</li><li>Ensure high data quality, reliability, and performance across ingestion and transformation layers.</li><li>Support analytics and visualization teams by delivering clean, well‑modeled datasets for:<ul><li><strong>Power BI dashboards using DAX</strong></li><li><strong>Google Looker dashboards using LookML</strong></li></ul></li><li>Collaborate with stakeholders to understand data requirements and translate them into scalable data models.</li><li>Maintain documentation on data sources, transformations, and architecture.</li><li>Support AI and API‑driven initiatives, including planned usage of <strong>Google ADK for API integrations</strong>.</li></ul><p><br></p>
<p>We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. This role will support data-driven decision-making by ensuring reliable data flow, transformation, and accessibility across the organization.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain ETL/ELT data pipelines</li><li>Develop and optimize data models and data architectures</li><li>Integrate data from multiple sources (APIs, databases, third-party systems)</li><li>Ensure data quality, integrity, and reliability</li><li>Collaborate with data analysts, data scientists, and business stakeholders</li><li>Monitor and troubleshoot data pipeline performance issues</li><li>Implement best practices for data governance and security</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This contract-to-permanent position offers an exciting opportunity to work at the intersection of data engineering, analytics, and business strategy. If you have a strong background in building and optimizing data pipelines and are passionate about leveraging technology to drive insights, we encourage you to apply.<br><br>Responsibilities:<br>• Design, develop, and optimize scalable data pipelines and workflows to support business analytics.<br>• Collaborate with cross-functional teams to gather and analyze data requirements.<br>• Implement ETL processes to extract, transform, and load data from diverse sources.<br>• Utilize tools such as Apache Spark and Hadoop to manage large-scale data processing.<br>• Integrate streaming data systems using Apache Kafka to enhance real-time analytics.<br>• Monitor and troubleshoot data flow and systems to ensure high performance and reliability.<br>• Develop and maintain documentation for data engineering processes and systems.<br>• Ensure data security and integrity across all platforms and processes.<br>• Work closely with stakeholders to translate business needs into technical solutions.<br>• Stay updated with industry trends and emerging technologies to improve data engineering practices.
We are looking for an experienced Data Engineer to join our team on a long-term contract basis. Based in Houston, Texas, this role offers an exciting opportunity to work with cutting-edge data technologies, design scalable solutions, and contribute to data-driven decision-making processes. If you are passionate about optimizing data systems and driving innovation, we encourage you to apply.<br><br>Responsibilities:<br>• Develop, maintain, and optimize scalable data pipelines using Apache Spark and Python.<br>• Implement ETL processes to ensure seamless extraction, transformation, and loading of data across systems.<br>• Collaborate with cross-functional teams to integrate Apache Hadoop and Apache Kafka into the data architecture.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Design and maintain data models, ensuring alignment with business requirements.<br>• Conduct thorough testing and validation of data processes to guarantee accuracy.<br>• Document data workflows and processes for future reference and team collaboration.<br>• Provide technical guidance and support to team members on data engineering best practices.<br>• Stay current on emerging technologies and trends in big data and analytics.<br>• Contribute to improving data governance and security protocols.
<p>I’m building a world-class team to power our next generation of data products. We’re looking for a Senior Data Engineer who knows AWS inside and out—someone who can <strong>design secure, scalable data pipelines</strong>, <strong>own ETL/ELT workflows</strong>, <strong>engineer cloud data infrastructure</strong>, and <strong>deliver dimensional and semantic models</strong> that our analysts, data scientists, and applications can trust.</p><p>You’ll work closely with product, security, platform engineering, and analytics to move our architecture toward a <strong>real-time, governed, cost-aware</strong>, and <strong>highly automated</strong> data ecosystem.</p><p><strong>What You’ll Do</strong></p><ul><li><strong>Design & build end-to-end pipelines</strong> on AWS (batch and streaming) using services like <strong>Glue, EMR, Lambda, Step Functions, Kinesis, MSK</strong>, and <strong>Fargate</strong>.</li><li><strong>Develop robust ETL/ELT</strong> (PySpark, Spark SQL, SQL, Python) for structured, semi-structured, and unstructured data at scale.</li><li><strong>Own data storage & processing layers</strong>: <strong>S3 (Lake/Lakehouse), Redshift (or Snowflake on AWS), DynamoDB</strong>, and <strong>Athena</strong> with strong partitioning, compaction, and performance tuning.</li><li><strong>Implement data models</strong> (3NF, dimensional/star, Data Vault, Lakehouse medallion) for analytics and operational workloads.</li><li><strong>Engineer secure infrastructure-as-code</strong> with <strong>Terraform</strong> (or <strong>CDK</strong>) across multi-account setups; implement CI/CD via <strong>GitHub Actions</strong> or <strong>AWS CodeBuild/CodePipeline</strong>.</li><li><strong>Harden security & governance</strong>: use <strong>IAM</strong>, <strong>Lake Formation</strong>, <strong>KMS</strong>, <strong>Secrets Manager</strong>, <strong>VPC/PrivateLink</strong>, the <strong>Glue Data Catalog</strong>, and fine-grained access controls.
Partner with SecOps on compliance (e.g., <strong>SOC 2</strong>, <strong>FedRAMP</strong>, <strong>HIPAA</strong> depending on dataset).</li><li><strong>Observability & reliability</strong>: build monitoring with <strong>CloudWatch</strong>, <strong>OpenTelemetry</strong>, and data quality checks (e.g., <strong>Great Expectations</strong>, <strong>Deequ</strong>), and implement SLOs and alerts.</li><li><strong>Champion best practices</strong>: code reviews, testing (unit/integration), documentation, runbooks, and blameless postmortems.</li><li><strong>Mentor</strong> mid-level engineers and collaborate on architectural decisions, standards, and technical roadmaps.</li></ul><p><br></p>
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
<p>Robert Half is hiring! We are looking for an experienced Data Engineer to join our team in Greenville, South Carolina. This role offers an exciting opportunity to work with modern data technologies, ensuring the efficient operation and optimization of data pipelines and systems. The ideal candidate will bring a strong technical background, leadership skills, and a proactive approach to maintaining and improving data infrastructure.</p><p><br></p><p>Responsibilities:</p><p>• Oversee daily data loads and ensure the smooth operation of data pipelines and related systems.</p><p>• Troubleshoot and resolve issues such as pipeline failures, performance bottlenecks, schema mismatches, and cloud resource disruptions.</p><p>• Conduct root-cause analyses and implement permanent solutions to prevent recurring issues.</p><p>• Maintain and optimize existing data processes, refactoring or retiring outdated workflows as necessary.</p><p>• Design and build scalable data ingestion pipelines using technologies such as Azure Data Factory, Databricks, and Synapse Pipelines.</p><p>• Collaborate with teams to create and improve operational runbooks, monitoring dashboards, and incident response workflows.</p><p>• Develop reusable ingestion patterns for platforms like Guidewire DataHub, InfoCenter, and other business data sources.</p><p>• Lead the implementation of real-time and event-driven data engineering solutions to enable operational insights and automation.</p><p>• Partner with architects to modernize data workloads using advanced frameworks like Delta Lake and Medallion Architecture.</p><p>• Mentor entry-level engineers, enforce coding best practices, and review code to ensure quality and compliance.</p>
<p>We are looking for an experienced Data Engineer to join our team on a contract basis in Columbus, Ohio. In this role, you will take on a leadership position, driving the development and optimization of data pipelines that support enterprise-wide analytics and decision-making. You will also play a key role in mentoring team members, fostering collaboration, and ensuring the integrity and quality of data across various business functions.</p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and maintain efficient data pipelines to support enterprise analytics and reporting.</p><p>• Collaborate with business analysts and data science teams to refine data requirements and ensure alignment with organizational goals.</p><p>• Enhance and automate data integration and management processes to improve operational efficiency.</p><p>• Lead efforts to ensure data quality by testing for accuracy, consistency, and conformity to business rules.</p><p>• Provide training and guidance to team members and other stakeholders on data pipelining and preparation techniques.</p><p>• Partner with data governance teams to promote vetted content into the curated data catalog for reuse.</p><p>• Stay updated on emerging technologies and assess their impact on current systems and processes.</p><p>• Offer leadership, coaching, and mentorship to team members, supporting their growth and development.</p><p>• Work closely with stakeholders to understand business needs and ensure solutions meet those requirements.</p><p>• Perform additional duties as assigned to support organizational objectives.</p>
<p>We are looking for a talented Data Engineer to join our team in Miami, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
<p>The Senior Data Engineer plays a key role in architecting, developing, and operating reliable, production-ready data solutions that enable analytics, automation, and operational processes across our client’s organization.</p><p><br></p><p>Operating within a modern, cloud-based data ecosystem, this role is responsible for bringing together data from internal platforms and external partners, transforming it into trusted, high-quality assets, and delivering it consistently to downstream users and systems. The work spans the full data lifecycle—ingestion, orchestration, transformation, and delivery—and blends advanced SQL development with Python-based pipeline and workflow automation.</p><p><br></p><p>This role sits at the intersection of data and systems engineering and works closely with Business Intelligence, Business Technology, and operational teams to ensure data solutions are scalable, dependable, and aligned with real business outcomes.</p><p><br></p>
We are looking for an experienced Data Engineer to join our team in New York, New York. In this role, you will design, build, and maintain data infrastructure to support business intelligence and analytics needs. The ideal candidate will have a strong technical background, a passion for working with complex datasets, and expertise in cloud-based data platforms.<br><br>Responsibilities:<br>• Develop, implement, and optimize ETL pipelines to ensure efficient data processing and integration.<br>• Design and maintain scalable data solutions, including data warehouses and data lakes.<br>• Collaborate with cross-functional teams to identify data requirements and deliver actionable insights.<br>• Utilize Snowflake, AWS, and other cloud-based platforms to manage data infrastructure and ensure performance optimization.<br>• Leverage Python and SQL to build robust data workflows and automate processes.<br>• Employ orchestration tools like Airflow and dbt to streamline data operations.<br>• Support data analytics and visualization efforts by enabling the creation of impactful dashboards using tools such as Tableau.<br>• Work with marketing and product data sources, including platforms like Google Analytics, to extract and integrate valuable insights.<br>• Implement CI/CD pipelines and DevOps practices to enhance data engineering processes.<br>• Ensure data security and compliance across all systems and tools.
We are looking for an experienced Data Engineer to join our team in Newtown Square, Pennsylvania. In this long-term contract position, you will play a pivotal role in designing and implementing robust data solutions to support organizational goals. This is an exciting opportunity to lead the development of modern data architectures and collaborate with diverse teams to drive impactful results.<br><br>Responsibilities:<br>• Lead the implementation of an enterprise Snowflake data lake, ensuring timely delivery and optimal performance.<br>• Oversee the integration of multiple data sources, including Oracle Financials, PostgreSQL, and Salesforce, into a unified data platform.<br>• Collaborate with finance teams to facilitate a transition to a 12-month accounting calendar and support accelerated financial close processes.<br>• Develop and maintain multi-source analytics dashboards to enhance operational insights and decision-making.<br>• Manage day-to-day operations of the Snowflake platform, focusing on performance tuning and cost optimization.<br>• Ensure data quality and reliability, providing business users with a trustworthy platform.<br>• Document architectural designs, data workflows, and operational procedures to support sustainable data management.<br>• Coordinate with external vendors to meet project deadlines and ensure successful implementations.
We are looking for a highly skilled Data Engineer to join our team in Houston, Texas. This contract-to-permanent position offers an exciting opportunity to work on cutting-edge data solutions and collaborate with cross-functional teams to deliver impactful results. The ideal candidate will possess strong technical expertise and a passion for creating efficient and scalable data systems.<br><br>Responsibilities:<br>• Design and implement scalable data architectures to support business needs and analytics requirements.<br>• Develop and optimize ETL pipelines for data extraction, transformation, and loading across diverse data sources.<br>• Collaborate with stakeholders to gather requirements and translate them into technical solutions.<br>• Utilize tools such as Apache Spark, Hadoop, and Kafka to manage large-scale data processing and real-time streaming.<br>• Ensure data quality and security by implementing best practices and conducting thorough testing.<br>• Develop and maintain technical documentation related to system design, development processes, and operational workflows.<br>• Work with Agile teams to deliver solutions efficiently while actively participating in sprints and ceremonies.<br>• Troubleshoot and resolve issues in existing data systems to maintain optimal performance.<br>• Provide guidance and conduct code reviews for entry-level team members.<br>• Stay updated on emerging technologies and recommend improvements to enhance data engineering practices.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this contract-to-permanent position, you will play a key role in designing, developing, and optimizing data solutions while collaborating with cross-functional teams to deliver impactful results. This role offers an excellent opportunity to contribute to innovative projects and mentor other developers.<br><br>Responsibilities:<br>• Design and implement scalable data solutions using tools such as Apache Spark, Hadoop, and Kafka.<br>• Build and maintain efficient ETL processes to ensure seamless data transformation and integration.<br>• Collaborate with product owners, business analysts, and stakeholders to gather requirements and translate them into technical solutions.<br>• Optimize and troubleshoot complex data workflows to enhance performance and reliability.<br>• Lead technical discussions and provide architectural guidance for best practices and development standards.<br>• Mentor entry-level developers and conduct code reviews to ensure high-quality deliverables.<br>• Integrate data solutions with existing systems and third-party tools using APIs and cloud platforms.<br>• Stay updated with the latest data engineering technologies and proactively recommend improvements.<br>• Work within Agile/Scrum teams to deliver solutions aligned with user stories and project goals.<br>• Ensure compliance with security and quality standards through thorough documentation and testing.
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>Experience as a Data Governance Analyst is required. They have a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate will be a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Purview and advance their data governance practices. Administration experience with Microsoft Purview or a similar tool such as Collibra, Informatica, or Databricks is required. This role will assist in connecting Microsoft Fabric to Purview.</p><p>Experience with Microsoft Purview is preferred. They have the Data Security layer of Purview implemented. This role will work with the Microsoft partner to implement the Data Governance layer (Unified Data Catalogue, Data Quality, Data Lineage, Data Health Management). See attached overview. Excellent communication skills are essential: someone who will lead change, help advance their data governance practice, and get buy-in from stakeholders.</p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, develop, and maintain data pipelines and systems that support critical business operations within the manufacturing industry. Your expertise in data engineering technologies and frameworks will be key to ensuring efficient data processing and integration.<br><br>Responsibilities:<br>• Develop, optimize, and maintain scalable data pipelines to process large datasets efficiently.<br>• Implement ETL processes to extract, transform, and load data from various sources into centralized systems.<br>• Leverage Apache Spark, Hadoop, and Kafka to design solutions for real-time and batch data processing.<br>• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Document data workflows and processes to ensure clarity and maintainability.<br>• Conduct testing and validation of data systems to ensure accuracy and quality.<br>• Apply Python programming to automate data tasks and streamline workflows.<br>• Stay updated on industry trends and emerging technologies to propose innovative solutions.<br>• Ensure compliance with data security and privacy standards in all engineering efforts.
<p><strong>***Please email Valerie Nielsen for immediate response*** </strong></p><p><br></p><p><strong>Job Title:</strong> Data Engineer</p><p> <strong>Location:</strong> West Los Angeles, CA (Onsite)</p><p> <strong>Salary:</strong> $150,000 Base + Bonus</p><p><strong>Overview</strong></p><p> We are seeking a <strong>Data Engineer</strong> to join our team onsite in <strong>West Los Angeles</strong>. This role is ideal for someone early in their career who has strong technical fundamentals, enjoys working with data, and has curiosity around modern AI tools. The ideal candidate has a strong analytical mindset and enjoys solving complex data problems while building scalable pipelines and data models.</p><p><strong>Responsibilities</strong></p><ul><li>Build, maintain, and optimize data pipelines and ETL processes</li><li>Write efficient and scalable <strong>SQL and Python</strong> code for data transformation and analysis</li><li>Work with cloud data platforms in <strong>AWS or Azure</strong></li><li>Support data modeling, data warehouse development, and reporting pipelines</li><li>Collaborate with analytics and product teams to deliver clean, reliable datasets</li><li>Explore and leverage <strong>AI tools (e.g., Claude or similar)</strong> to improve workflows and productivity</li><li>Ensure data quality, performance, and scalability across systems</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This contract-to-permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
<p>We are looking for an experienced Senior Data Engineer to join our team on a contract basis in Columbus, Ohio. In this role, you will take the lead in designing, building, and optimizing data pipelines to support enterprise-wide data initiatives. You will collaborate with cross-functional teams, ensuring that data solutions are aligned with business needs while maintaining high standards of data quality and consistency. This position offers an excellent opportunity to mentor team members and contribute as a technical leader while driving innovation in data engineering.</p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and maintain scalable data pipelines to support data-driven decision-making across the organization.</p><p>• Collaborate with data scientists and business analysts to refine data requirements and ensure seamless integration for analytics initiatives.</p><p>• Implement automation in data integration processes to enhance efficiency and scalability.</p><p>• Train team members and other stakeholders in data preparation techniques to improve data accessibility and usability.</p><p>• Ensure data quality by testing for accuracy, consistency, and compliance with business rules.</p><p>• Partner with data governance teams to promote curated data content for reuse and standardization.</p><p>• Provide leadership and mentorship to team members, fostering growth and collaboration within the team.</p><p>• Analyze emerging technologies and assess their potential impact on data engineering processes.</p><p>• Work closely with stakeholders to understand business needs and deliver tailored data solutions.</p><p>• Demonstrate attention to detail while building strong relationships across departments.</p>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This long-term contract position offers an exciting opportunity to work in the manufacturing industry, leveraging your expertise in data processing and engineering. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using tools such as Apache Spark and Python.<br>• Design efficient ETL processes to extract, transform, and load data from various sources.<br>• Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.<br>• Implement and manage big data solutions using Apache Hadoop and Apache Kafka.<br>• Monitor and optimize the performance of data systems to ensure reliability and scalability.<br>• Ensure data quality and integrity through rigorous testing and validation processes.<br>• Troubleshoot and resolve issues related to data pipelines and infrastructure.<br>• Maintain documentation for data workflows and processes to ensure clarity and consistency.<br>• Stay updated on emerging technologies and best practices in data engineering to continuously improve systems.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This contract position offers an exciting opportunity to leverage your expertise in data processing and analytics within the dynamic energy and natural resources industry. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using Apache Spark, Python, and ETL processes.<br>• Design and implement data storage solutions utilizing Apache Hadoop for efficient data management.<br>• Build real-time data streaming architectures with Apache Kafka to support operational needs.<br>• Optimize data workflows to ensure high performance and reliability across systems.<br>• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.<br>• Perform data quality checks and validation to ensure accuracy and consistency of datasets.<br>• Troubleshoot and resolve technical issues related to data processing and integration.<br>• Document processes and workflows to ensure knowledge sharing and operational transparency.<br>• Monitor and improve system performance, ensuring the infrastructure meets business demands.
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
Our client is an early-stage, high-growth startup building products that are actively used and loved by real users. They are looking for a Full Stack Engineer (3–6 years of experience) who is excited about building impactful products in a fast-paced, startup environment — and who has interest or exposure to AI. This is a fully onsite role in San Francisco (must be already living in the San Francisco Bay Area to be considered).<br><br>About the Role:<br>As a Full Stack Engineer, you’ll play a key role in designing, developing, and maintaining modern web applications. You’ll work across the stack to build clean, scalable features and collaborate closely with a small, highly motivated team. This is an opportunity for someone who genuinely enjoys building things — especially products that people use every day.<br><br>What You’ll Do:<br>• Design, develop, and maintain full stack applications<br>• Build user-facing features using React and Next.js<br>• Develop and integrate backend services using Python (Flask)<br>• Write clean, efficient, and maintainable TypeScript code<br>• Debug, test, and optimize application performance<br>• Collaborate closely with cross-functional teammates in a fast-moving startup environment<br>• Contribute to AI-powered features and generative AI initiatives
We are looking for an experienced Platform / DevOps Engineer to join our team in Los Angeles, California. This role focuses on enhancing developer workflows, maintaining platform operations, and ensuring system observability to support production environments. As part of a long-term contract, you will play a key role in optimizing cloud resources, managing access controls, and troubleshooting issues across various tools and environments.<br><br>Responsibilities:<br>• Manage user access across Atlassian tools, Azure DevOps, GitHub, and other platforms, ensuring secure and compliant permissions.<br>• Process and oversee access requests using ServiceNow and internal workflows to maintain least-privilege access.<br>• Design, maintain, and troubleshoot CI/CD pipelines within Azure DevOps and GitHub Actions.<br>• Provide support for containerized applications using Docker and Kubernetes, including environment configuration.<br>• Collaborate with Systems Engineering teams to manage cloud resources and optimize configurations in Azure.<br>• Analyze system logs and metrics using Elastic tools to identify and resolve backend service issues.<br>• Investigate and troubleshoot issues related to server environments, databases, and backend services.<br>• Partner with engineering teams to identify root causes of system failures and implement preventive measures.<br>• Participate in incident response efforts and contribute to post-incident reviews and improvements.