<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable, performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and operate real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
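<p>Illustrative sketch, not part of the posting above: a minimal example of the real-time pattern this role describes, using Spark Structured Streaming to read a Kafka topic and land parsed events in a data lake. The broker address, topic name, message schema, and storage paths are hypothetical placeholders.</p><pre><code class="language-python"># Minimal Structured Streaming sketch: Kafka in, Parquet files out.
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Assumed JSON payload of each Kafka message.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast to string and parse the JSON payload.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
       .select("e.*")
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/mnt/datalake/events")            # placeholder output path
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .outputMode("append")
    .start()
)
query.awaitTermination()
</code></pre><p>The checkpoint location is what allows the stream to restart without losing or reprocessing events; the same pattern applies to Azure Event Hubs through its Kafka-compatible endpoint.</p>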
We are looking for a Senior Data Engineer to join our agile data engineering team in Philadelphia, Pennsylvania. This role is vital in creating, optimizing, and deploying high-quality data solutions that support strategic business objectives. The ideal candidate will collaborate with cross-functional teams to ensure efficient data processes, robust governance, and innovative technical solutions.<br><br>Responsibilities:<br>• Design and implement secure and scalable data pipelines and products.<br>• Troubleshoot and enhance existing data workflows and queries for optimal performance.<br>• Develop and enforce data governance, security, and privacy standards.<br>• Translate complex business requirements into clear and actionable technical specifications.<br>• Participate in project planning, identifying key milestones and resource requirements.<br>• Collaborate with stakeholders to evaluate business needs and prioritize data solutions.<br>• Conduct technical peer reviews to ensure the quality of data engineering deliverables.<br>• Support production operations and resolve issues efficiently.<br>• Contribute to architectural improvements and innovation within data systems.
<p><strong>Robert Half</strong> is actively partnering with an Austin-based client to identify a <strong>Data Engineer (contract)</strong> with 5+ years of experience. In this role, you'll build and maintain scalable data pipelines and integrations that support analytics, applications, and machine learning. <strong>This is an on-site role in Austin, TX.</strong></p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and maintain batch and real-time data pipelines</li><li>Clean, validate, and standardize data across multiple sources</li><li>Build APIs and data integrations for internal systems</li><li>Collaborate with product, analytics, and engineering teams</li><li>Implement monitoring and automated testing for data quality</li><li>Support predictive model deployment and data streaming</li><li>Document processes and mentor entry-level engineers</li><li>Architect and manage cloud-based data infrastructure</li></ul>
We are looking for a skilled and experienced Senior Data Engineer to join our team in New York, New York. This role is ideal for someone who thrives on working with complex datasets, building scalable data solutions, and collaborating with cross-functional teams. If you have a passion for leveraging data to drive strategic decisions, we encourage you to apply.<br><br>Responsibilities:<br>• Design, implement, and maintain scalable data architectures to support business intelligence and analytics needs.<br>• Develop and optimize data pipelines and workflows for efficient data processing and integration.<br>• Collaborate with stakeholders to understand data requirements and translate them into actionable solutions.<br>• Leverage cloud platforms, such as AWS, Azure, or Google Cloud, to manage and enhance data infrastructure.<br>• Utilize Big Data technologies, including Apache Spark and Databricks, to process and analyze large datasets.<br>• Write efficient scripts and perform data manipulation using Python and other relevant programming languages.<br>• Ensure data quality and integrity by implementing robust validation and monitoring processes.<br>• Provide technical guidance and support to team members on best practices in data engineering.<br>• Stay updated on emerging data technologies and tools to continuously improve systems.<br>• Partner with API developers to integrate data systems and enable seamless data access.
We are looking for a skilled Data Engineer to join our team in Cypress, California, specializing in creating scalable and high-performance data integration and analytics solutions. This role involves transforming raw data into actionable insights, utilizing cutting-edge technologies to support business objectives. The ideal candidate will have a strong background in data preparation, optimization, and engineering workflows, along with a collaborative approach to solving complex problems.<br><br>Responsibilities:<br>• Design and develop technical solutions for medium-to-high complexity data integrations across multiple platforms.<br>• Collect, clean, and standardize structured and unstructured data to enable efficient analysis.<br>• Build reusable frameworks and pipelines to streamline data preparation and optimization.<br>• Create and maintain data workflows, troubleshooting issues to ensure seamless operation.<br>• Apply statistical and mathematical methods to generate actionable insights from data.<br>• Collaborate with cross-functional teams to translate business requirements into technical solutions.<br>• Document processes and workflows, ensuring alignment with organizational standards.<br>• Adhere to governance policies, best practices, and performance standards for scalability and reliability.<br>• Proactively recommend and implement system improvements, including new tools and methodologies.<br>• Support the adoption of innovative technologies to enhance data engineering capabilities.
We are looking for an experienced Data Engineer to join our dynamic team in Wyoming, Michigan, for a Contract-to-Permanent position. In this role, you will play a key part in designing and managing data systems, developing data pipelines, and ensuring optimal data governance practices across multi-cloud environments. This position offers an exciting opportunity to contribute to cutting-edge healthcare data solutions while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Design and implement robust data architecture frameworks, including modeling, metadata management, and database security.<br>• Create and maintain scalable data models that support both operational and analytical needs.<br>• Develop and manage data pipelines to extract, transform, and load data from diverse sources into a centralized data warehouse.<br>• Collaborate with various departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, ensuring reliability and efficiency.<br>• Implement and enforce data governance policies, including data retention, backup, and security protocols.<br>• Stay updated on emerging technologies in data engineering, such as AI tools and cloud-based solutions, and integrate them into existing systems.<br>• Establish and track key performance indicators (KPIs) to measure the effectiveness of data systems.<br>• Provide mentorship and technical guidance to team members to foster a collaborative work environment.<br>• Evaluate and adopt new tools and technologies to enhance data capabilities and streamline processes.
<p>We are on the lookout for a Data Engineer in Basking Ridge, New Jersey (1-2 days a week on-site). In this role, you will develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also be expected to take a hands-on approach to application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS technologies.</p><p>• Work extensively with the Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize cloud technologies and data visualization tools to enhance business intelligence.</p><p>• Work with various types of data, including clinical trials data, genomics and biomarker data, real-world data, and discovery data.</p><p>• Maintain familiarity with key industry best practices in a regulated GxP environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
<p>Hands-On Technical SENIOR Microsoft Stack Data Engineer / On-Prem to Cloud Senior ETL Engineer - a WEEKLY HYBRID position with major flexibility, working the full Microsoft on-prem stack.</p><p><br></p><p>LOCATION: HYBRID WEEKLY in Des Moines. You must reside in the Des Moines area for weekly onsite work; there is no travel back and forth, and this is not a remote position. If you live in Des Moines, you can eventually work mostly remote. This position also offers upside with training in Azure.</p><p><br></p><p>IMMEDIATE HIRE! Solve real business problems.</p><p><br></p><p>This is a direct-hire Senior Data Warehouse Engineer / Senior Data Engineer / Senior ETL Developer / Azure Data Engineer role for someone who wants to help modernize, build out a data warehouse, and lead the build-out of a data lake in the cloud - but FIRST rebuild an on-prem data warehouse, working with disparate data to structure it for consumable reporting.</p><p><br></p><p>You will be doing all aspects of data engineering, so data warehouse and data lake skills are a must. You will be in the technical weeds day to day, but you could grow into the technical leader of this team. You'll need ETL skills such as SSIS and experience working with disparate data; SSAS is a plus. Fact and dimension data warehouse experience is also required.</p><p><br></p><p>This is a permanent, direct-hire, hands-on technical data engineering position with one of our clients in Des Moines, paying up to $155K plus bonus.</p><p><br></p><p>PERKS: Bonus, 2 1/2 day weekends!</p>
We are looking for a driven Data Engineer to join our team in Middleton, Wisconsin. This position offers an exciting opportunity to work on cutting-edge data solutions, collaborating with an agile team to design, implement, and maintain robust data pipelines and reporting tools. The ideal candidate will have hands-on experience with modern data engineering technologies and a strong commitment to delivering high-quality results.<br><br>Responsibilities:<br>• Develop and maintain data pipelines using tools such as Azure Data Factory and Databricks.<br>• Create and optimize Power BI dashboards to visualize and report key business metrics.<br>• Collaborate with business analysts to translate requirements into actionable data solutions.<br>• Support the integration and management of data lakes using Azure Data Lake.<br>• Participate in daily standups and agile team activities to ensure project alignment.<br>• Implement ETL processes to extract, transform, and load data effectively.<br>• Work with Apache Spark and other frameworks to process large datasets efficiently.<br>• Troubleshoot and resolve data-related issues to ensure seamless operations.<br>• Provide technical expertise in DAX to enhance reporting capabilities.<br>• Contribute to the customization of client-specific data solutions as needed.
We are looking for an experienced Data Engineer to join our team in Denton, Texas. In this role, you will leverage your expertise in cloud data engineering and advanced analytics to support strategic initiatives in the higher education sector. This position is ideal for someone dedicated to building robust data solutions and guiding less experienced team members.<br><br>Responsibilities:<br>• Develop and optimize data pipelines and workflows using tools like Microsoft Fabric and Azure.<br>• Design and implement data models and warehouses to support analytics and reporting needs.<br>• Create and manage data visualizations with Power BI or Tableau to present actionable insights.<br>• Write and maintain scripts in Python, PySpark, R, and Windows PowerShell to automate data processes.<br>• Integrate data from multiple sources, ensuring accuracy and consistency across systems.<br>• Collaborate with stakeholders to align data solutions with business strategies.<br>• Lead initiatives to solve complex data challenges by applying innovative problem-solving techniques.<br>• Stay up-to-date with emerging trends in analytics and business intelligence, particularly in higher education.<br>• Provide mentorship and technical guidance to less experienced Data Engineers to enhance team capabilities.
<p>Position Title: Data Engineer</p><p>Location: Onsite – Houston Area</p><p>Compensation:</p><ul><li>Base Salary: $120K–$130K</li><li>Bonus: ~10% </li></ul><p>Overview:</p><p>We’re hiring a Data Engineer to lead the development and optimization of enterprise-grade data pipelines and infrastructure. This role is essential to enabling high-quality analytics, reporting, and business intelligence across the organization. The ideal candidate will bring deep expertise in Azure-based data tools, strong SQL and BI capabilities, and a collaborative mindset to support cross-functional data initiatives.</p><p>Key Responsibilities:</p><ul><li>Design, build, and maintain scalable data pipelines using Azure Data Factory, Microsoft Fabric, PySpark, and Spark SQL</li><li>Develop ETL processes to extract, transform, and load data from diverse sources</li><li>Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions</li><li>Ensure data integrity, security, and compliance with governance standards</li><li>Optimize pipeline performance and troubleshoot data infrastructure issues</li><li>Manage the data platform roadmap, including capacity planning and vendor coordination</li><li>Support reporting and analytics needs using Power BI and SQL</li><li>Drive continuous improvement in data quality, accessibility, and literacy across the organization</li><li>Monitor usage, deprecate unused datasets, and implement data cleansing processes</li><li>Lead initiatives to enhance data modeling, visualization standards, and reporting frameworks</li></ul><p><br></p>
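<p>Illustrative sketch, not part of the posting above: a minimal example of the batch PySpark / Spark SQL work this role describes, reading a raw extract, standardizing it, and writing a curated table for Power BI. The file paths, column names, and output location are hypothetical placeholders, not details of the client environment.</p><pre><code class="language-python"># Minimal batch ETL sketch in PySpark / Spark SQL.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sales-etl").getOrCreate()

# Placeholder source: raw CSV extracts dropped by an upstream system.
raw = spark.read.option("header", True).csv("/mnt/raw/sales/*.csv")

# Standardize types, drop duplicates, and filter out unusable rows.
clean = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
)

clean.createOrReplaceTempView("sales_clean")

# Aggregate with Spark SQL for downstream reporting.
daily = spark.sql("""
    SELECT order_date,
           SUM(amount) AS total_amount,
           COUNT(*)    AS order_count
    FROM sales_clean
    GROUP BY order_date
""")

# Placeholder curated target consumed by Power BI.
daily.write.mode("overwrite").parquet("/mnt/curated/daily_sales")
</code></pre><p>In Azure Data Factory or Microsoft Fabric, a step like this would typically run as a notebook or Spark activity inside a scheduled pipeline.</p>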
<p>We are seeking a <strong>Senior Data Engineer</strong> with deep expertise in <strong>Microsoft’s data ecosystem</strong> to design, build, and optimize enterprise data solutions. This role is ideal for someone passionate about turning complex data into actionable insights by leveraging <strong>Azure Data Services, Power BI, DAX, and Microsoft Fabric</strong>.</p><p><strong>What You’ll Do</strong></p><ul><li>Design and maintain <strong>scalable data pipelines</strong> and ETL/ELT processes within <strong>Azure Data Factory</strong> and <strong>Synapse Analytics</strong>.</li><li>Architect and optimize <strong>data models</strong> to support reporting and self-service analytics.</li><li>Develop advanced <strong>Power BI dashboards and reports</strong>, using <strong>DAX</strong> to create calculated measures and complex business logic.</li><li>Leverage <strong>Microsoft Fabric</strong> to unify data sources, streamline analytics, and support business intelligence initiatives.</li><li>Ensure data quality, governance, and security across all pipelines and reporting layers.</li><li>Collaborate with analysts, business stakeholders, and cross-functional teams to deliver clean, actionable datasets.</li><li>Monitor and optimize performance of data workflows to ensure scalability and reliability.</li><li>Provide mentorship on BI best practices, data modeling, and efficient DAX usage.</li></ul><p><br></p>
We are seeking a Data Engineer to join our team based in Bethesda, Maryland. As part of our Investment Management team, you will play a crucial role in designing and maintaining data pipelines in our Azure Data Lake, implementing data warehousing strategies, and collaborating with various teams to address data engineering needs.<br><br>Responsibilities:<br><br>• Design robust data pipelines within Azure Data Lake to support our investment management operations.<br>• Implement effective data warehousing strategies that ensure efficient storage and retrieval of data.<br>• Collaborate with Power BI developers to integrate data reporting seamlessly and effectively.<br>• Conduct data validation and audits to uphold the accuracy and quality of our data pipelines.<br>• Troubleshoot pipeline processes and optimize them for improved performance.<br>• Work cross-functionally with different teams to address and fulfill data engineering needs with a focus on scalability and reliability.<br>• Utilize Apache Kafka, Apache Pig, Apache Spark, and other cloud technologies for efficient data visualization and algorithm implementation.<br>• Develop APIs and use AWS technologies to ensure seamless data flow and analytics.<br>• Leverage Apache Hadoop for effective data management and analytics.
Job Summary: We are seeking an experienced Data Engineer with 8+ years of experience to architect, build, and maintain scalable data infrastructure and pipelines. This role is pivotal in enabling advanced analytics and data-driven decision-making across the organization. The ideal candidate will have deep expertise in data architecture, cloud platforms, and modern data engineering tools.<br><br>Key Responsibilities:<br>• Design, develop, and maintain scalable and efficient data pipelines and ETL processes.<br>• Architect data solutions that support business intelligence, machine learning, and operational reporting.<br>• Collaborate with cross-functional teams to gather requirements and deliver data solutions aligned with business goals.<br>• Ensure data quality, integrity, and security across all systems and platforms.<br>• Optimize data workflows and troubleshoot performance issues.<br>• Integrate structured and unstructured data from various internal and external sources.<br>• Implement and enforce data governance policies and best practices.<br><br>Preferred Skills:<br>• Experience with real-time data streaming technologies (e.g., Kafka, Spark Streaming).<br>• Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).<br>• Knowledge of CI/CD pipelines and version control systems (e.g., Git).<br>• Relevant certifications in cloud or data engineering technologies.
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. In this role, you will contribute to the development and optimization of data pipelines, ensuring the seamless integration of platforms and tools. Based in Jericho, New York, this position offers an exciting opportunity to work with advanced technologies in the non-profit sector.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines to support organizational goals.<br>• Develop and maintain data integration processes using tools such as Apache Spark and Python.<br>• Collaborate with cross-functional teams to leverage Tableau for data visualization and reporting.<br>• Work extensively with Salesforce and NetSuite to optimize data flow and system functionality.<br>• Utilize ETL processes to transform and prepare data for analysis and decision-making.<br>• Apply expertise in Apache Hadoop and Apache Kafka to enhance data processing capabilities.<br>• Troubleshoot and resolve issues within cloud-based and on-premise data systems.<br>• Ensure the security and integrity of all data management practices.<br>• Provide technical support and recommendations for system improvements.
<p><strong>IT Data Integration Engineer / AWS Data Engineer</strong></p><p><strong>Location</strong>: Torrance, CA</p><p><strong>Employment Type</strong>: Contract - 81 weeks</p><p><br></p><p><strong>Position Overview</strong></p><p>We are seeking a skilled IT Data Integration Engineer / AWS Data Engineer to join our team and lead the development and optimization of data integration processes. This role is critical to ensuring seamless data flow across systems, enabling high-quality, consistent, and accessible data to support business intelligence and analytics initiatives.</p><p><strong>Key Responsibilities</strong></p><p>Develop and Maintain Data Integration Solutions</p><ul><li>Design and implement data workflows using AWS Glue, EMR, Lambda, and Redshift.</li><li>Utilize PySpark, Apache Spark, and Python to process large datasets.</li><li>Ensure accurate and efficient ETL (Extract, Transform, Load) operations.</li></ul><p>Ensure Data Quality and Integrity</p><ul><li>Validate and cleanse data to maintain high standards of quality.</li><li>Implement monitoring, validation, and error-handling mechanisms.</li></ul><p>Optimize Data Integration Processes</p><ul><li>Enhance performance and scalability of data workflows on AWS infrastructure.</li><li>Apply data warehousing concepts including star/snowflake schema design and dimensional modeling.</li><li>Fine-tune queries and optimize Redshift performance.</li></ul><p>Support Business Intelligence and Analytics</p><ul><li>Translate business requirements into technical specifications and data pipelines.</li><li>Collaborate with analysts and stakeholders to deliver timely, integrated data.</li></ul><p>Maintain Documentation and Compliance</p><ul><li>Document workflows, processes, and technical specifications.</li><li>Ensure adherence to data governance policies and regulatory standards.</li></ul>
<p><strong>Data Engineer Opportunity – Build Impactful Solutions in a Mission-Driven Environment</strong></p><p><strong>Location:</strong> Des Moines, IA (On-site hybrid with flexible scheduling)</p><p><strong>Type:</strong> Full-Time | Direct Hire | Competitive Benefits</p><p><strong>Salary Range:</strong> $70K–$85K </p><p><br></p><p>Are you driven by innovation, collaboration, and the chance to make a meaningful impact through data engineering?! This hands-on role puts you at the forefront of designing and implementing data pipelines, integrations, and dashboards that empower nonprofit organizations to achieve their goals—not just through data analysis, but by creating actionable solutions from scratch.</p><p><br></p><p>For immediate and confidential consideration, send a current resume to Kristen Lee on LinkedIn or apply directly to this posting today! </p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li><strong>Data Pipeline Integration:</strong> Develop and optimize workflows using tools such as Domo and Power BI.</li><li><strong>Dashboard Creation:</strong> Craft visually compelling dashboards tailored for actionable insights.</li><li><strong>Development & Automation:</strong> Implement intelligent solutions with SQL, Python, or other programming tools to automate and streamline data processes.</li><li><strong>Team Collaboration:</strong> Partner with data analytics experts to ensure efficient, behind-the-scenes delivery of high-quality data solutions.</li></ul><p><br></p><p><strong>Why You’ll Love This Role:</strong></p><ul><li><strong>Mission-Driven Impact:</strong> Your work will directly contribute to nonprofit success and meaningful causes.</li><li><strong>Room to Grow:</strong> Join a forward-thinking team that values innovation and offers opportunities to expand technical skills and scale processes.</li><li><strong>Collaborative Environment:</strong> Work alongside experienced professionals committed to creating and delivering actionable insights while cultivating your craft.</li></ul><p><br></p>
<p>We are looking for a highly skilled Data Engineering and Software Engineering professional to design, build, and optimize our Data Lake and Data Processing platform on AWS. This role requires deep expertise in data architecture, cloud computing, and software development, as well as the ability to define and implement strategies for deployment, testing, and production workflows.</p><p><br></p><p>Key Responsibilities:</p><ul><li>Design and develop a scalable Data Lake and data processing platform from the ground up on AWS.</li><li>Lead decision-making and provide guidance on code deployment, testing strategies, and production environment workflows.</li><li>Define the roadmap for Data Lake development, ensuring efficient data storage and processing.</li><li>Oversee S3 data storage, Delta.io for change data capture, and AWS data processing services.</li><li>Work with Python and PySpark to process large-scale data efficiently.</li><li>Implement and manage Lambda, Glue, Kafka, and Firehose for seamless data integration and processing.</li><li>Collaborate with stakeholders to align technical strategies with business objectives, while maintaining a hands-on engineering focus.</li><li>Drive innovation and cost optimization in data architecture and cloud infrastructure.</li><li>Provide expertise in data warehousing and transitioning into modern AWS-based data processing practices.</li></ul>
<p>We are looking for a Data Engineer to transform the backbone of IT operations and drive innovation! If you're passionate about streamlining infrastructure, automating workflows, and integrating cutting-edge applications to push business goals forward, this role is your opportunity to make an impact.</p><p><br></p><p><strong>Technical Skills:</strong></p><ul><li><strong>DevOps/Data Engineering:</strong> Strong automation mindset, particularly around data pipelines and onboarding workflows.</li><li><strong>Microsoft Ecosystem:</strong> Power Apps, Power Query, and Power BI - essential for dashboarding and internal tools.</li><li><strong>Infrastructure Awareness</strong>: Familiarity with Cisco networking, Palo Alto firewalls, VMware ESXi, Nimble SANs, HP ProLiant servers, and hybrid on-prem/cloud environments.</li><li><strong>Analytics & Monitoring</strong>: Develop real-time performance monitoring solutions (e.g., Power BI dashboards for HPC utilization).</li></ul><p><strong>Salary Range</strong>: $90,000 - $110,000</p><p><strong>Work Model</strong>: Hybrid in Corvallis, OR</p><p><strong>Benefits</strong>:</p><ul><li>Comprehensive Medical, Dental, Vision</li><li>Relocation Assistance</li><li>Generous PTO</li><li>Paid Holidays</li><li>Many More!</li></ul>
<p>We are looking for an experienced Data Engineer to lead the development and management of scalable data systems and analytics frameworks. This role is ideal for someone passionate about transforming data into actionable insights that drive business decisions. Based in Nutley, New Jersey, you will play a key role in supporting product, marketing, and operational strategies through robust data solutions.</p><p><br></p><p><strong>Responsibilities:</strong></p><p>• Design and implement scalable data pipelines and storage solutions to meet organizational needs.</p><p>• Monitor and analyze platform user behavior to uncover insights and identify opportunities for optimization.</p><p>• Build and maintain analytics dashboards and reporting frameworks for internal teams.</p><p>• Develop schemas, models, and data definitions to support core business operations.</p><p>• Oversee instrumentation of data collection across both backend and frontend systems.</p><p>• Provide ad hoc reporting and data support to enhance product and growth initiatives.</p><p>• Ensure data integrity, accuracy, and performance by implementing industry best practices.</p>
<p><strong>Skills and Knowledge:</strong></p><ul><li>Excellent understanding of relational database design</li><li>Strong technical experience in database development, performing DDL operations, writing queries and stored procedures, and optimizing database objects</li><li>Working knowledge of ETL using SSIS or a comparable tool</li><li>Solid reporting skills, preferably using SSRS</li><li>Establishment and implementation of reporting tools to create reports and dashboards using SSRS, Power BI, Tableau, or a similar analytics tool</li><li>Experience with data management standards such as data governance is a plus</li><li>Effective analyst able to work closely with non-technical users</li><li>Knowledge of MS Access, MS Visual Studio, Crystal Reports, or Datawatch Monarch is a plus</li><li>Proficiency in interpersonal communication, presentation, and problem solving</li></ul><p><br></p>
<p>The Data Engineer will support our Data Analytics team in building our analytics products. This role will be responsible for adding features to our production data pipeline and performing ad-hoc analysis of raw and processed data. The ideal candidate will have experience building and optimizing data pipelines and will enjoy learning and working with cutting-edge technologies.</p><p><br></p>
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania. In this role, you will design, develop, and optimize data pipelines and platforms to support business operations and decision-making. If you have a strong technical background, a passion for data-driven solutions, and experience in the financial services industry, we encourage you to apply.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines and workflows using Python and modern data tools.<br>• Optimize and manage Snowflake environments, including data modeling, security practices, and warehouse performance.<br>• Automate financial operations workflows such as escrow management, investor reporting, and receivables processing.<br>• Collaborate with cross-functional teams to gather requirements and deliver data solutions that align with business objectives.<br>• Implement data governance and privacy practices to ensure compliance with financial regulations.<br>• Build and maintain production-grade data integrations across internal and third-party systems.<br>• Utilize Git version control and CI/CD pipelines to deploy and manage data workflows.<br>• Provide technical expertise and serve as a key resource for Snowflake, data pipelines, and automation processes.<br>• Troubleshoot and resolve data-related issues, ensuring system reliability and efficiency.<br>• Communicate effectively with stakeholders, translating technical concepts into actionable insights.
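<p>Illustrative sketch, not part of the posting above: one minimal way to automate a load such as the receivables processing this role mentions, using the snowflake-connector-python package to stage a local extract and copy it into a Snowflake table. The account, credentials, warehouse, file, and table names are hypothetical placeholders; a production workflow would pull credentials from a secrets manager and deploy through Git-based CI/CD as the posting describes.</p><pre><code class="language-python"># Minimal sketch: stage a local CSV in Snowflake and COPY it into a staging table.
# All identifiers are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder account locator
    user="etl_user",        # placeholder user
    password="***",         # in practice, load from a secrets manager
    warehouse="ETL_WH",
    database="FINOPS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Upload the extract to the table's internal stage.
    cur.execute("PUT file:///tmp/receivables.csv @%RECEIVABLES_STG OVERWRITE = TRUE")
    # Load the staged file into the table.
    cur.execute(
        "COPY INTO RECEIVABLES_STG FROM @%RECEIVABLES_STG "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute("SELECT COUNT(*) FROM RECEIVABLES_STG")
    print("rows loaded:", cur.fetchone()[0])
finally:
    conn.close()
</code></pre>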
We are looking for a skilled Data Engineer to join our team in West Chicago, Illinois, and contribute to the development and optimization of data systems and applications. This role requires a highly analytical individual with a strong technical background to ensure system efficiency, data integrity, and seamless integration of business processes. If you are passionate about transforming data into actionable insights and driving business outcomes, we encourage you to apply.<br><br>Responsibilities:<br>• Manage daily operations of IT systems, including maintenance, project coordination, and support tasks to ensure optimal functionality.<br>• Collaborate with IT teams to monitor system health and maintain a secure operational environment.<br>• Analyze and protect sensitive data, recommending solutions to enhance data security and integrity.<br>• Design and document system workflows, integrations, and processes to streamline business operations.<br>• Utilize data analytics tools to uncover patterns, predict outcomes, and support decision-making processes.<br>• Plan and implement system upgrades, feature configurations, and customizations to meet evolving business needs.<br>• Develop policies and procedures to ensure data governance, security, and system reliability.<br>• Lead training sessions and knowledge-sharing activities to promote effective use of IT systems across departments.<br>• Manage change processes and oversee release cycles for business applications.<br>• Execute and oversee projects from initiation to completion, including requirements gathering, implementation, and user testing.
<p><strong>Position: Data Engineer</strong></p><p><strong>Location: Des Moines, IA - HYBRID</strong></p><p><strong>Salary: up to $130K permanent position plus exceptional benefits</strong></p><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***</strong></p><p> </p><p>Our client is one of the best employers in town. Come join this successful organization with smart, talented, results-oriented team members. You will find that passion in your career again, working together with some of the best in the business.</p><p> </p><p>Are you an experienced Senior Data Engineer seeking a new adventure that entails enhancing data reliability and quality for an industry leader? Look no further! Our client has a robust data and reporting team and needs you to bolster their data warehouse and data solutions and facilitate data extraction, transformation, and reporting.</p><p> </p><p>Key Responsibilities:</p><ul><li>Create and maintain data architecture and data models for efficient information storage and retrieval.</li><li>Ensure rigorous data collection from various sources and storage in a centralized location, such as a data warehouse.</li><li>Design and implement data pipelines for ETL using tools like SSIS and Azure Data Factory.</li><li>Monitor data performance and troubleshoot any issues in the data pipeline.</li><li>Collaborate with development teams to track work progress and ensure timely completion of tasks.</li><li>Implement data validation and cleansing processes to ensure data quality and accuracy.</li><li>Optimize performance to ensure efficient execution of data queries and reports.</li><li>Uphold data security by storing data securely and restricting access to sensitive data to authorized users only.</li></ul><p>Qualifications:</p><ul><li>A 4-year degree related to computer science or equivalent work experience.</li><li>At least 5 years of professional experience.</li><li>Strong SQL Server and relational database experience.</li><li>Proficiency in SSIS and SSRS.</li><li>.NET experience is a plus.</li></ul><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***</strong></p><p> </p>