<p><strong>Job Title: Data Engineer</strong></p><p><strong>Location: Sherman Oaks, CA (On-site)</strong></p><p><strong>Salary: Up to $160,000 annually</strong></p><p><strong>Company Overview:</strong> We are seeking a talented Data Engineer to join our team in Sherman Oaks. As a Data Engineer, you will play a crucial role in maintaining and optimizing our data platform, with a primary focus on ensuring high performance and accurate data delivery. You will also have opportunities to contribute to the development and maintenance of our web platforms and to tackle various backend and DevOps tasks related to general web engineering.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and maintain scalable data pipelines and processes.</li><li>Architect and optimize data warehouse structures to support analytics and reporting needs.</li><li>Develop and implement algorithms to transform raw data into actionable insights.</li><li>Collaborate with the marketing team to understand and address data-driven business needs.</li><li>Automate manual data workflows to improve efficiency and reduce errors.</li><li>Manage ETL/ELT processes, ensuring seamless data integration and transformation.</li><li>Work with tools such as Snowflake and Segment to enhance data infrastructure.</li><li>Utilize a variety of programming languages and environments to support data engineering tasks.</li></ul><p>For immediate consideration, direct message Reid Gormly on LinkedIn and Apply Now.</p>
We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio, on a Contract-to-permanent basis. This role focuses on designing and implementing scalable data solutions within the Azure ecosystem, leveraging tools such as Azure Data Factory, Databricks, and Azure Synapse Analytics. The ideal candidate will possess strong expertise in Python, ETL processes, and cloud-based data engineering frameworks.<br><br>Responsibilities:<br>• Develop and maintain efficient data pipelines and workflows using Azure Data Factory and Databricks.<br>• Optimize data transformation processes with Python and SQL for large-scale analytics.<br>• Design and implement scalable data solutions in Azure environments, including Azure Synapse Analytics and Azure SQL Database.<br>• Collaborate with teams to manage CI/CD workflows and maintain version control using GitHub.<br>• Utilize Terraform to automate infrastructure deployment and configuration.<br>• Ensure adherence to software engineering best practices in all stages of the data pipeline lifecycle.<br>• Perform ETL operations to extract, transform, and load data securely and efficiently.<br>• Partner with stakeholders to translate business requirements into technical solutions.
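<p>As a flavor of the work described above, here is a minimal PySpark sketch of the kind of transformation step a Databricks notebook invoked from an Azure Data Factory pipeline might perform. The paths, table, and column names are hypothetical placeholders, not details from the posting.</p>
<pre>
# Minimal Databricks (PySpark) transformation step; an upstream ADF copy
# activity is assumed to have landed raw JSON at the hypothetical path below.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw data landed by the pipeline
raw = spark.read.json("/mnt/landing/orders/")  # hypothetical mount

# Transform: dedupe, type, and filter for downstream analytics
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .filter(F.col("amount") > 0)
)

# Load: write a curated Delta table for Synapse/BI consumption
orders.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
</pre>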
The Opportunity: We are seeking a highly motivated and enthusiastic Data Engineer to join our data team. This is an excellent opportunity to build and maintain data pipelines within a modern cloud-based data ecosystem. You will play a key role in ensuring our data is accurate, accessible, and ready for analysis, supporting various business functions.<br>Responsibilities:<br>• Assist in the design, development, and maintenance of ETL (Extract, Transform, Load) processes to ingest data from various sources into our data warehouse.<br>• Write and optimize Python scripts for data extraction, transformation, and loading, ensuring data quality and efficiency.<br>• Work with our Azure cloud platform to provision, configure, and manage data services such as Azure Data Lake Storage, Azure SQL Database, and potentially Azure Data Factory.<br>• Develop and maintain data schemas and tables within Snowflake, our cloud data warehouse.<br>• Collaborate with senior data engineers and data analysts to understand data requirements and assist in translating them into technical solutions.<br>• Monitor data pipelines for performance and data integrity issues, troubleshooting and resolving problems as they arise.<br>• Participate in data quality initiatives, identifying and addressing discrepancies in data.<br>• Document data flows, ETL processes, and data models to ensure clear understanding and maintainability.<br>• Learn and apply best practices in data engineering, data governance, and data security.<br>• Contribute to the continuous improvement of our data platform and tooling.<br>Required Skills and Experience:<br>• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related technical field.<br>• 3-5 years of professional experience in a data-related role (e.g., intern, data analyst, entry-level developer) with a strong desire to specialize in data engineering.<br>• Strong foundational knowledge of Python and experience with its data manipulation libraries (e.g., Pandas).<br>• Understanding of ETL (Extract, Transform, Load) concepts and data warehousing principles.<br>• Familiarity with cloud platforms, specifically Microsoft Azure, including an understanding of services like Azure Data Lake Storage or Azure SQL Database.<br>• Basic understanding of relational databases and SQL.<br>• Eagerness to learn and work with Snowflake as a cloud data warehouse.<br>• Excellent problem-solving skills and a keen eye for detail.<br>• Strong communication and collaboration skills, with the ability to work effectively in a team environment.
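<p>For the pandas-based extract/transform work this posting emphasizes, a small illustrative sketch follows; the file names, columns, and checks are assumptions for the example only.</p>
<pre>
# Illustrative pandas ETL step with a basic data-quality gate.
import pandas as pd

# Extract from a hypothetical source file
df = pd.read_csv("customers.csv")

# Transform: enforce types, drop incomplete rows, standardize text
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df = df.dropna(subset=["customer_id", "signup_date"])
df["email"] = df["email"].str.strip().str.lower()

# Simple quality check before staging for the warehouse load
assert df["customer_id"].is_unique, "duplicate customer_id values found"

# Stage as Parquet for a downstream load into Snowflake
df.to_parquet("customers_clean.parquet", index=False)
</pre>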
<p>We are looking for a skilled <strong>Data Engineer</strong> to design and build robust data solutions that align with business objectives. In this role, you will collaborate with cross-functional teams to develop and maintain scalable data architectures, pipelines, and models. Your expertise will ensure the quality, security, and compliance of data systems while contributing to the organization’s data-driven decision-making processes. Call 319-362-8606, or email your resume directly to Shania Lewis - Technology Recruiting Manager at Robert Half (email information is on LinkedIn). Let's talk!!</p><p><br></p><p><strong>Responsibilities:</strong></p><p>• Design and implement scalable data architectures, pipelines, and models to support business needs.</p><p>• Collaborate with stakeholders to translate complex requirements into practical data solutions.</p><p>• Ensure data quality, security, and compliance with relevant regulations and standards.</p><p>• Monitor and maintain the existing data infrastructure, identifying opportunities for improvement.</p><p>• Optimize the performance of data systems, ensuring efficiency and reliability.</p><p>• Research and recommend emerging technologies to enhance data processes and solutions.</p><p>• Provide mentorship to team members, fostering a collaborative and innovative environment.</p><p>• Support business intelligence and analytics by designing solutions that enable effective data usage.</p>
We are offering an exciting opportunity for a Data Engineer in the Professional Services industry, stationed at our New York location. The chosen candidate will be tasked with handling a variety of responsibilities related to data engineering including, but not limited to, Azure Databricks, Data Lake, Synapse, Spark, Unity Catalog, and Data Governance.<br><br>Responsibilities<br>• Utilize Azure Databricks to process and analyze large datasets<br>• Develop and maintain Azure Data Lakes to store and manage data<br>• Work with Azure Synapse Analytics to extract valuable insights from data<br>• Leverage Spark for big data processing and analytics<br>• Implement Unity Catalog for data organization and management<br>• Uphold Data Governance principles to ensure data accuracy and integrity<br>• Utilize Python for data science and machine learning tasks, as necessary<br>• Provide professional consulting services based on extensive industry experience.
We are looking for a skilled Data Engineer to join our team in Malvern, Pennsylvania. In this role, you will be instrumental in designing, developing, and maintaining robust data integration processes using Python and Azure Synapse Analytics. By collaborating with cross-functional teams, you will ensure the delivery of high-quality data solutions that empower business insights and decision-making.<br><br>Responsibilities:<br>• Design and implement data integration workflows using Python (PySpark) and Azure Synapse Analytics to support data extraction, transformation, and loading processes.<br>• Develop and optimize data storage solutions such as data warehouses and lakehouses, employing best practices in data modeling, including star schemas, facts, and dimensions.<br>• Extract and transform data from diverse sources, including APIs, database tables, and structured files, ensuring seamless data integration.<br>• Leverage Azure Synapse Analytics features, such as Notebooks and Pipelines, to create scalable, high-performance data solutions.<br>• Contribute to the adoption and implementation of advanced data management concepts, including data lakes, delta lakes, and data cataloging.<br>• Collaborate with data architects to define and implement efficient data models aligned with organizational needs.<br>• Conduct data quality assessments and implement validation procedures to maintain data integrity and reliability.<br>• Monitor and troubleshoot data pipelines to ensure optimal performance and resolve any technical issues.<br>• Document data engineering processes, workflows, and transformations to facilitate knowledge sharing and operational continuity.<br>• Ensure compliance with data governance policies and implement security measures to protect sensitive information.
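<p>To make the star-schema modeling above concrete, here is a hedged PySpark sketch that derives a date dimension and a matching fact table; the lake paths and columns are invented for illustration.</p>
<pre>
# Hypothetical star-schema build: one date dimension, one sales fact.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

sales = spark.read.parquet("/lake/staging/sales/")  # hypothetical path

# Dimension: one row per calendar date, with a surrogate integer key
dim_date = (
    sales.select(F.to_date("sold_at").alias("date"))
         .distinct()
         .withColumn("date_key",
                     F.date_format("date", "yyyyMMdd").cast("int"))
)

# Fact: measures at the grain of one row per sale, keyed to the dimension
fact_sales = (
    sales.withColumn("date_key",
                     F.date_format(F.to_date("sold_at"), "yyyyMMdd").cast("int"))
         .select("date_key", "product_id", "quantity", "net_amount")
)

dim_date.write.mode("overwrite").saveAsTable("dw.dim_date")
fact_sales.write.mode("overwrite").saveAsTable("dw.fact_sales")
</pre>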
<p>We are looking for a skilled Data Engineer to join our team! This position offers the opportunity to work primarily remotely while contributing to the design, development, and optimization of robust data solutions. Based in Asheville, North Carolina, this role is ideal for professionals with a strong background in building data pipelines and leveraging Azure technologies.</p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and maintain efficient data pipelines to ensure seamless data integration and processing.</p><p>• Utilize Azure Data Factory to create and manage scalable data workflows.</p><p>• Implement Python scripts to automate and optimize data-related processes.</p><p>• Manage and enhance database systems to ensure data integrity and accessibility.</p><p>• Collaborate with cross-functional teams to define data requirements and develop solutions.</p><p>• Troubleshoot and resolve issues related to data pipelines and workflows.</p><p>• Optimize data storage and retrieval processes to improve system performance.</p><p>• Ensure compliance with data governance standards and best practices.</p><p>• Monitor and analyze system performance, implementing improvements as necessary.</p><p>• Document processes and solutions for future reference and team collaboration.</p>
<p>We are looking for an experienced Senior Data Engineer to join our team in Oxford, Massachusetts. In this role, you will design and maintain data platforms, leveraging cutting-edge technologies to optimize processes and drive analytical insights. This position requires a strong background in Python development, cloud technologies, and big data tools. This role is hybrid, onsite 3 days a week. Candidates must hold a green card (GC) or be a U.S. citizen (USC).</p><p><br></p><p>Responsibilities:</p><p>• Develop, implement, and maintain scalable data platforms to support business needs.</p><p>• Utilize Python and PySpark to design and optimize data workflows.</p><p>• Collaborate with cross-functional teams to integrate data solutions with existing systems.</p><p>• Leverage Snowflake and other cloud technologies to manage and store large datasets.</p><p>• Implement and refine algorithms for data processing and analytics.</p><p>• Work with Apache Spark and Hadoop to build robust data pipelines.</p><p>• Create APIs to enhance data accessibility and integration.</p><p>• Monitor and troubleshoot data platforms to ensure optimal performance.</p><p>• Stay updated on emerging trends in big data and cloud technologies to continuously improve solutions.</p><p>• Participate in technical discussions and provide expertise during team reviews.</p>
<p><strong>About the Role:</strong></p><p> We’re looking for a highly motivated <strong>Big Data Engineer</strong> to join our growing data engineering team. In this role, you'll architect and develop large-scale data processing solutions that empower business intelligence, advanced analytics, and machine learning initiatives. You'll work closely with data scientists, analysts, and other engineers to design scalable and reliable data pipelines across the organization.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable data pipelines for structured and unstructured data</li><li>Develop ETL/ELT workflows to support real-time and batch processing</li><li>Optimize data storage, performance, and cost efficiency across distributed systems</li><li>Integrate diverse data sources using big data technologies like Hadoop, Spark, Kafka, and Hive</li><li>Work with data science and analytics teams to ensure data availability, accuracy, and accessibility</li><li>Monitor and troubleshoot performance issues in production data systems</li><li>Implement best practices in data governance, privacy, and security</li><li>Collaborate cross-functionally with product, engineering, and DevOps teams</li></ul>
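<p>In the spirit of the Kafka/Spark pipelines listed above, the following is a minimal Spark Structured Streaming sketch; the broker address, topic, and event schema are assumptions for the example.</p>
<pre>
# Hedged sketch: consume JSON events from Kafka and append them to a lake.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
         .option("subscribe", "events")                     # placeholder
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Continuously append parsed events, with a checkpoint for fault tolerance
query = (events.writeStream.format("parquet")
               .option("path", "/lake/events/")
               .option("checkpointLocation", "/lake/_chk/events/")
               .start())
</pre>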
We are looking for an experienced Data Engineer to take on a leadership role in designing, deploying, and managing scalable AI platforms and cloud-based infrastructure. This is a contract position based in Eagan, Minnesota, with the potential for future conversion to a permanent role. The position requires a hybrid work schedule, with two in-office days per week (Tuesdays and Wednesdays).<br><br>Responsibilities:<br>• Lead the engineering and operations of AI platforms, including tools such as Dataiku, OpenAI, and Amazon Bedrock, ensuring optimal performance and scalability.<br>• Automate deployments, upgrades, and scaling processes for AI services to meet evolving business needs.<br>• Govern and secure enterprise use of cloud-based APIs, including managing key provisioning, access controls, and compliance with security standards.<br>• Design and maintain cloud infrastructure for AI and data platforms, utilizing tools like Terraform and CloudFormation for infrastructure-as-code.<br>• Collaborate with cross-functional teams to align architectural designs and ensure integrated operations across data platforms.<br>• Monitor and optimize platform performance, availability, and cost efficiency across hosted services.<br>• Implement secure configurations, audit logging, and encryption standards to meet regulatory and compliance requirements.<br>• Partner with Data Science, Analytics, and DevOps teams to enable advanced AI use cases and drive innovation.<br>• Provide support for platform incidents, performance tuning, and user onboarding processes.<br>• Evaluate and integrate emerging technologies to enhance platform resiliency and AI capabilities.
<p>Job Summary</p><p>We are seeking a hands-on Data Engineer to design, code, and deliver Big Data Warehouse solutions. The ideal candidate is passionate about technology, thrives under pressure, and excels in collaborative environments. This role involves working closely with product owners, technical stakeholders, and cross-functional teams to deliver high-impact data solutions.</p><p>Key Responsibilities</p><ul><li>Design and develop scalable Big Data Warehouse solutions across the data supply chain</li><li>Implement metadata management solutions</li><li>Create and maintain technical and user documentation (data models, dictionaries, glossaries, process/data flows, architecture diagrams)</li><li>Extend and enhance the enterprise Data Lake</li><li>Solve complex data integration challenges across multiple systems</li><li>Design and implement real-time data analysis and decisioning strategies</li><li>Collaborate with stakeholders to support data quality initiatives</li><li>Partner with Data Science teams to enhance actionable insights</li><li>Continuously learn and adopt new technologies</li></ul>
<p>We are seeking a skilled and motivated <strong>Data Engineer</strong> to join our growing data team. The ideal candidate will have a strong background in data architecture, ETL development, and cloud-based data platforms. You will play a critical role in building and maintaining scalable data pipelines that support analytics, reporting, and business intelligence initiatives across the organization.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ETL/ELT data pipelines</li><li>Develop and optimize data architectures that support structured and unstructured data</li><li>Ensure the reliability and integrity of data by implementing robust quality checks and monitoring systems</li><li>Collaborate with cross-functional teams, including Data Analysts, Data Scientists, and Software Engineers</li><li>Implement and enforce best practices for data governance, privacy, and compliance</li><li>Maintain comprehensive documentation of data models, systems, and processes</li></ul>
<p>We are offering an excellent long-term opportunity for an AWS Data Engineer with one of our industry leading clients. This role allows for a hybrid work schedule. As an AWS Data Engineer, you will use your skills in data integration, processing, and optimization to support the organization's data-driven decision-making.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement data integration workflows utilizing AWS tools, including Glue/EMR, Lambda, and Redshift</p><p>• Employ Python and PySpark for processing large datasets, ensuring efficiency and accuracy</p><p>• Validate and cleanse data as part of maintaining high data quality</p><p>• Implement monitoring, validation, and error handling mechanisms within data pipelines to ensure data integrity</p><p>• Enhance the performance optimization of data workflows, identifying and resolving performance bottlenecks</p><p>• Fine-tune queries and optimize data processing to enhance Redshift's performance</p><p>• Translate business requirements into technical specifications and coded data pipelines</p><p>• Collaborate with data analysts and business stakeholders to meet their data requirement needs</p><p>• Document all data integration processes, workflows, and technical system specifications</p><p>• Ensure compliance with data governance policies, industry standards, and regulatory requirements.</p>
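<p>A hedged sketch of the validation and error-handling pattern described above, written as an AWS Glue (PySpark) job; the S3 buckets, job arguments, and rules are placeholders rather than client specifics.</p>
<pre>
# Glue job skeleton: quarantine invalid rows, stage clean data for Redshift.
import sys
from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
spark = glue.spark_session

df = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Validation: split out rows failing basic integrity checks
bad = df.filter("transaction_id IS NULL OR amount < 0")
good = df.subtract(bad)

# Error handling: keep rejects for inspection instead of dropping silently
bad.write.mode("append").parquet("s3://example-bucket/quarantine/transactions/")

# Stage the clean subset for a downstream Redshift COPY
good.write.mode("overwrite").parquet("s3://example-bucket/curated/transactions/")
</pre>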
<p>We're seeking a Data Engineer to take ownership of backend data processes and cloud integrations. This position plays a key role in designing and maintaining data pipelines using SQL Server, SSIS, and Azure tools to support analytics and reporting across the business. You’ll work cross-functionally to ensure reliable, secure, and efficient movement of data, with a focus on integrating data from an ERP system into SQL and Azure-based environments.</p>
<p>We are looking for an experienced Informatica Data Catalog Engineer/Governance Administrator to join our team in Southern California. In this long-term contract position, you will play a pivotal role in configuring and managing Informatica Cloud Catalog, Governance, and Marketplace systems, ensuring seamless integration with various platforms and tools. This opportunity is ideal for professionals with a strong background in data governance, security, and compliance, as well as expertise in cloud technologies and database systems.</p><p><br></p><p>Responsibilities:</p><p>• Configure and implement role-based and policy-based access controls within Informatica Cloud Catalog and Governance systems.</p><p>• Develop and set up connections for diverse platforms, including mainframe databases, cloud services, S3, Athena, and Redshift.</p><p>• Troubleshoot and resolve issues encountered during connection creation and data profiling.</p><p>• Optimize performance by identifying and addressing bottlenecks in profiling workflows.</p><p>• Configure and manage Cloud Marketplace integrations to enforce policy-based data protections.</p><p>• Review and communicate Informatica upgrade schedules, exploring new features and coordinating timelines with business and technical teams.</p><p>• Collaborate with infrastructure teams to establish clusters for managing profiling workloads efficiently.</p><p>• Support governance initiatives by classifying and safeguarding sensitive financial and customer data.</p><p>• Create and manage metadata, glossaries, and data quality rules across regions to ensure compliance with governance policies.</p><p>• Set up user groups, certificates, and IP whitelisting to maintain secure access and operations.</p>
We are looking for a Senior Data Engineer with deep expertise in Palantir Foundry to design and implement robust data infrastructures that drive strategic decision-making. In this pivotal role, you will collaborate across teams to develop scalable solutions that unlock the full potential of data for operational and business success. If you thrive in fast-paced environments and enjoy solving complex problems, this position offers an exciting opportunity to make a significant impact.<br><br>Responsibilities:<br>• Design and develop scalable Ontology models, data pipelines, and operational solutions using Palantir Foundry.<br>• Collaborate with cross-functional teams to gather requirements and transform them into secure, high-quality data assets.<br>• Identify opportunities for leveraging data to drive operational efficiencies and strategic decision-making.<br>• Build and maintain data pipelines that support analytics, automation, and other business-critical functions.<br>• Ensure data integrity and availability through proactive validation, monitoring, and troubleshooting.<br>• Provide actionable insights to stakeholders by creating innovative and reliable data services.<br>• Troubleshoot and resolve issues in production and pre-production data systems.<br>• Develop internal tools to streamline deployment automation and enhance platform performance.
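<p>As a rough illustration of the Foundry pipeline work this role centers on, here is a minimal transform using Foundry's Python transforms API; the dataset paths and columns are hypothetical.</p>
<pre>
# Minimal Palantir Foundry transform sketch (transforms.api).
from transforms.api import transform_df, Input, Output
from pyspark.sql import functions as F

@transform_df(
    Output("/Company/pipelines/clean_shipments"),   # placeholder path
    raw=Input("/Company/raw/shipments"),            # placeholder path
)
def clean_shipments(raw):
    # Standardize and validate before the dataset backs Ontology objects
    return (raw.dropDuplicates(["shipment_id"])
               .withColumn("shipped_at", F.to_timestamp("shipped_at"))
               .filter(F.col("shipment_id").isNotNull()))
</pre>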
<p>Hands-On Technical SENIOR Microsoft Stack Data Engineer / On-Prem to Cloud Senior ETL Engineer - WEEKLY HYBRID position with major flexibility! FULL Microsoft on-prem stack.</p><p><br></p><p>LOCATION: HYBRID WEEKLY in Des Moines. You must reside in the Des Moines area for weekly onsite work. No travel back and forth, and this is not a remote position! If you live in Des Moines, you can eventually work MOSTLY remote! This position has upside with training in Azure.</p><p><br></p><p>IMMEDIATE HIRE! Solve real business problems.</p><p><br></p><p>We are seeking a Hands-On Technical SENIOR Microsoft Stack Data Engineer / Senior Data Warehouse Engineer / Senior ETL Developer / Azure Data Engineer (Direct Hire) who can help modernize and build out a data warehouse, then lead the build-out of a data lake in the CLOUD - but FIRST rebuild an on-prem data warehouse, working with disparate data to structure it for consumable reporting.</p><p><br></p><p>You will be doing ALL aspects of data engineering. Must have data warehouse and data lake skills. You will be in the technical weeds day to day, but you could grow into the Technical Leader of this team. ETL skills like SSIS are required, along with experience working with disparate data; SSAS is a plus! Fact and dimension data warehouse experience is a must.</p><p><br></p><p>This is a Permanent Direct Hire Hands-On Technical Data Engineering position with one of our clients in Des Moines, up to $155K plus bonus.</p><p><br></p><p>PERKS: Bonus, 2 1/2 day weekends!</p>
<p>We’re looking to hire a Real-Time Data Engineer / Streaming Engineer for our Full-Time Engagement Professional Division. As an employee of Robert Half, you can build a fulfilling career working on diverse and challenging engagements that leverage your current skills and experiences and help you develop new ones. You can also work with our global consulting firm and learn from industry subject matter experts developing innovative customer solutions.</p><p><br></p><p>The compensation package will include <strong>Base Salary</strong> + <strong>Comprehensive</strong> <strong>Benefits Package: Medical, Dental, Vision, 401(k) plan, Choice Time Off (CTO), Short/Long Term Disability, Life Insurance, AD&D Insurance, Health Savings Accounts (HSAs), Flexible Spending Accounts (FSAs), Tuition reimbursement, Employee Assistance Program (EAP), Commuter Benefits, Discount mall, Pet and Legal insurance, and identity theft protection, </strong>and <strong>paid every hour that you work. </strong></p>
We are offering an exciting opportunity for a Lead Data Engineer in the Higher Education sector, based in Philadelphia, Pennsylvania. As a Lead Data Engineer, you will play a pivotal role in developing specific projects in line with business requirements, ensuring efficient data management. You will work closely with diverse teams to facilitate data-driven decision-making, and your role will be critical in designing, developing, and launching data ingestion processes and data products.<br><br>Responsibilities:<br><br>• Guide and mentor entry-level engineers to enhance their skills and knowledge.<br>• Design and implement secure data pipelines and data products to meet business objectives.<br>• Troubleshoot data processes and queries and optimize them for efficiency and accuracy.<br>• Establish robust data governance, security, and data privacy practices to ensure data integrity and compliance.<br>• Interpret business requirements and translate them into precise technical specifications to align data solutions with business goals.<br>• Participate actively in project planning, identifying key milestones and resource needs to ensure project success.<br>• Work collaboratively as part of a cross-functional team to enhance data integration across the organization.<br>• Evaluate business needs and priorities in collaboration with the team to inform data solution development.<br>• Oversee production operations and support to ensure smooth data management operations.<br>• Drive the implementation of innovative data solutions to enhance data-driven decision-making.
<p><strong>Position: Data Engineer</strong></p><p><strong>Location: Des Moines, IA - HYBRID</strong></p><p><strong>Salary: up to $130K permanent position plus exceptional benefits</strong></p><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***</strong></p><p> </p><p>Our client is one of the best employers in town. Come join this successful organization with smart, talented, results-oriented team members. You will find that passion in your career again, working together with some of the best in the business. </p><p> </p><p>Are you an experienced Senior Data Engineer seeking a new adventure that entails enhancing data reliability and quality for an industry leader? Look no further! Our client has a robust data and reporting team and needs you to bolster their data warehouse and data solutions and facilitate data extraction, transformation, and reporting.</p><p> </p><p>Key Responsibilities:</p><ul><li>Create and maintain data architecture and data models for efficient information storage and retrieval.</li><li>Ensure rigorous data collection from various sources and storage in a centralized location, such as a data warehouse.</li><li>Design and implement data pipelines for ETL using tools like SSIS and Azure Data Factory.</li><li>Monitor data performance and troubleshoot any issues in the data pipeline.</li><li>Collaborate with development teams to track work progress and ensure timely completion of tasks.</li><li>Implement data validation and cleansing processes to ensure data quality and accuracy.</li><li>Optimize performance to ensure efficient execution of data queries and reports.</li><li>Uphold data security by storing data securely and restricting access to sensitive data to authorized users only.</li></ul><p>Qualifications:</p><ul><li>A 4-year degree related to computer science or equivalent work experience.</li><li>At least 5 years of professional experience.</li><li>Strong SQL Server and relational database experience.</li><li>Proficiency in SSIS, SSRS.</li><li>.Net experience is a plus.</li></ul><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. *** </strong></p>
Our client is currently seeking an IT Leader of Data Management reporting to the VP & Chief Information Officer in Midland, Texas. This position will be responsible for leading our data platform initiatives and championing the data infrastructure that enables data-driven decision making at PR. This leader will architect, build and maintain modern data platforms throughout our Exploration & Production oil and gas operations. The ideal candidate will have deep technical expertise in data architecture, platform engineering and data flows. They will have exceptional leadership skills that allow them to influence, educate and collaborate with stakeholders across the organization. There is flexibility with the level and title of this role depending on the candidate’s skillset, qualifications and capabilities.<br>General Responsibilities<br>Leadership<br>• Partner with functional stakeholders (Geology, Engineering, Drilling, Completions, Finance, Accounting, HR, etc.) to improve operations and drive value through data<br>• Educate stakeholders on data governance best practices and platform capabilities<br>• Build inspiration and alignment to raise the bar on reporting, data quality and analytics across the organization<br>• Lead, inspire and manage a team of high-performing data engineers, developers, automation specialists, analysts and data scientists<br>• Foster a collaborative environment and encourage team members to stay current with evolving data architecture patterns, cloud technologies and industry best practices<br>Technical Platform Management<br>• Drive implementation and optimization of our data platform stack including Databricks, Dagster, dbt, Power BI, and Spotfire<br>• Lead the development and maintenance of enterprise data warehouse, data lake, and data mart infrastructure<br>• Ensure platform reliability, performance, and scalability to meet growing business demands<br>• Oversee complex enterprise data flows between applications, ensuring seamless integration across our technology ecosystem<br>Enterprise Data Governance & Quality<br>• Establish and maintain enterprise data catalog and metadata management practices and lead data governance initiatives<br>• Drive data quality improvements across the organization through platform excellence and automation<br>• Promote data lineage and consistency standards across all enterprise data assets<br>Qualifications<br>• 9+ years in data analytics, business intelligence, or related roles<br>• 5+ years in a leadership or management capacity<br>• Deep familiarity with upstream oil & gas operations, data types, and industry-specific challenges<br>• Proven track record of building and leading technical teams in complex enterprise environments<br>• Expert-level understanding of lakehouse architecture, data warehousing concepts, and enterprise data modeling<br>• Hands-on experience with Databricks, Dagster, dbt, Power BI, and Spotfire<br>• Strong command of SQL, ETL/ELT processes, data pipeline automation, and data integration patterns<br>• Proficiency in Python, R, or similar languages for data platform development and automation<br>• Experience with cloud-based big data platforms and modern data stack technologies<br>• Expertise in data governance, metadata management, data lineage, and data quality tools<br>• Proven ability to lead through influence in matrix organizations without direct authority<br>• Strong communication skills to translate technical concepts for business audiences<br>• Track record of building data literacy and promoting best practices organization-wide
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. The ideal candidate will have a strong background in data engineering, leveraging Python and Databricks to build and optimize data pipelines. This role is based in Houston, Texas, and offers the opportunity to work with cutting-edge technologies in a collaborative environment.<br><br>Responsibilities:<br>• Develop, test, and maintain scalable data pipelines using Python and Databricks.<br>• Collaborate with cross-functional teams to gather and define data requirements.<br>• Implement and optimize algorithms for data processing and analytics.<br>• Design and deploy data models that support efficient querying and data visualization.<br>• Leverage cloud technologies, including AWS, to enhance data infrastructure.<br>• Integrate and manage data streaming solutions using Apache Kafka.<br>• Utilize Apache Spark and Hadoop for large-scale data processing tasks.<br>• Create and maintain APIs to support seamless data access and integration.<br>• Troubleshoot and resolve issues in data workflows and pipelines.
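<p>Because this posting stresses developing and testing pipelines, here is a small hedged example of unit-testing a reusable PySpark transformation; the function and columns are invented for the sketch.</p>
<pre>
# Testable pipeline logic: keep the newest record per key, plus a unit test.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

def dedupe_latest(df, key_col, ts_col):
    """Keep only the most recent record for each key."""
    w = Window.partitionBy(key_col).orderBy(F.col(ts_col).desc())
    return (df.withColumn("_rn", F.row_number().over(w))
              .filter("_rn = 1")
              .drop("_rn"))

def test_dedupe_latest():
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([("a", 1), ("a", 2), ("b", 1)], ["id", "ts"])
    out = dedupe_latest(df, "id", "ts").collect()
    assert {(r.id, r.ts) for r in out} == {("a", 2), ("b", 1)}
</pre>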
We are offering an exciting opportunity for a Data Engineer in Woodland Hills, California. The selected candidate will be an integral part of our team, contributing to the design and development of scalable data pipelines, collaborating with data scientists, and ensuring data governance. This role involves working within the industry to integrate data engineering solutions into our broader product architecture.<br><br>Responsibilities:<br>• Collaborate closely with data scientists to prepare datasets for model training, validation, and deployment<br>• Develop, design, and sustain scalable data pipelines to support dynamic pricing models<br>• Oversee and optimize ETL (Extract, Transform, Load) processes to assure data reliability and accuracy<br>• Contribute to best practices and document data engineering processes<br>• Engage with relevant stakeholders to comprehend data requirements and convert these into technical specifications<br>• Ensure adherence to data governance and compliance with appropriate data privacy and security regulations<br>• Integrate data engineering solutions into the broader product architecture in collaboration with the software development team<br>• Continuously monitor and troubleshoot data workflows to ensure reliable data integration
Job Description<br>We are seeking a highly skilled Snowflake Data Engineer with expertise in Matillion ETL to join our client's dynamic data engineering team. The ideal candidate will play a crucial role in designing, building, and maintaining scalable and high-performance data pipelines within the Snowflake ecosystem using Matillion. This role requires expertise in data modeling, data transformation, and cloud technologies while focusing on ensuring quality, performance, and accuracy for data-driven decision-making.<br>Responsibilities<br>• Design and Build Data Pipelines: Create, implement, and optimize ETL/ELT pipelines using Matillion for ingesting and processing data from multiple sources into Snowflake.<br>• Snowflake Architecture Design: Develop and maintain scalable Snowflake environments, including database design, warehouse management, and performance tuning to support complex queries.<br>• Data Transformation: Develop and maintain robust transformations in Matillion to ensure that raw data is cleansed, enriched, and modeled for business consumption.<br>• Data Integration: Collaborate with stakeholders to integrate a variety of data sources (e.g., APIs, flat files, databases) into Snowflake and ensure data is stored accurately and securely.<br>• Performance Optimization: Monitor and manage performance of ETL pipelines, optimize the use of compute resources in Snowflake, and improve query performance to enhance speed and efficiency.<br>• Collaboration: Work closely with cross-functional teams, including Data Scientists, Analysts, and Developers, to deliver data solutions that meet business needs.<br>• Quality Assurance: Ensure data quality through rigorous testing, validation, and adherence to best practices for data governance and security requirements.<br>• Documentation: Maintain detailed documentation of ETL workflows, data models, and processes to ensure transparency and facilitate support.<br>Qualifications and Required Skills<br>• 7+ years of experience working as a Data Engineer<br>• Expertise in Snowflake: Strong experience with the Snowflake data warehouse platform, including architecture, performance tuning, and security.<br>• Matillion ETL Expertise: Hands-on experience with Matillion for developing and managing scalable data pipelines.<br>• SQL Proficiency: Advanced proficiency in SQL, including query optimization and debugging.<br>• Data Modeling: Strong knowledge of dimensional and relational database modeling principles for building efficient Snowflake databases.<br>• Cloud Technologies: Experience with cloud platforms such as AWS, Azure, or Google Cloud (Azure preferred, but not required), including relevant services (e.g., S3, Lambda, Data Factory, BigQuery).<br>• Experience with APIs and Data Integration: Familiarity with integrating REST APIs and other data sources into the Snowflake ecosystem.<br>• Pipeline Automation: Knowledge of pipeline orchestration tools or workflows (e.g., Airflow, dbt).<br>• Problem-Solving Skills: Ability to troubleshoot data-related issues, identify root causes, and implement solutions.<br>• Soft Skills: Strong communication and collaboration skills to interact with technical and non-technical stakeholders.
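<p>To ground the quality-assurance duties above, here is an illustrative post-load check using the Snowflake Python connector; the connection parameters and table are placeholders, and a real deployment would pull credentials from a secrets manager.</p>
<pre>
# Hedged sketch: fail the pipeline if a load produced duplicate keys.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="etl_user",           # placeholder
    password="***",            # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Difference between row count and distinct key count = duplicates
    cur.execute("SELECT COUNT(*) - COUNT(DISTINCT order_id) FROM ORDERS")
    dupes = cur.fetchone()[0]
    if dupes:
        raise ValueError(f"{dupes} duplicate order_id rows detected")
finally:
    cur.close()
    conn.close()
</pre>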