We are looking for a Senior Data Engineer to join our agile data engineering team in Philadelphia, Pennsylvania. This role is vital in creating, optimizing, and deploying high-quality data solutions that support strategic business objectives. The ideal candidate will collaborate with cross-functional teams to ensure efficient data processes, robust governance, and innovative technical solutions.<br><br>Responsibilities:<br>• Design and implement secure and scalable data pipelines and products.<br>• Troubleshoot and enhance existing data workflows and queries for optimal performance.<br>• Develop and enforce data governance, security, and privacy standards.<br>• Translate complex business requirements into clear and actionable technical specifications.<br>• Participate in project planning, identifying key milestones and resource requirements.<br>• Collaborate with stakeholders to evaluate business needs and prioritize data solutions.<br>• Conduct technical peer reviews to ensure the quality of data engineering deliverables.<br>• Support production operations and resolve issues efficiently.<br>• Contribute to architectural improvements and innovation within data systems.
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. In this role, you will contribute to the development and optimization of data pipelines, ensuring the seamless integration of platforms and tools. Based in Jericho, New York, this position offers an exciting opportunity to work with advanced technologies in the non-profit sector.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines to support organizational goals.<br>• Develop and maintain data integration processes using tools such as Apache Spark and Python.<br>• Collaborate with cross-functional teams to leverage Tableau for data visualization and reporting.<br>• Work extensively with Salesforce and NetSuite to optimize data flow and system functionality.<br>• Utilize ETL processes to transform and prepare data for analysis and decision-making.<br>• Apply expertise in Apache Hadoop and Apache Kafka to enhance data processing capabilities.<br>• Troubleshoot and resolve issues within cloud-based and on-premise data systems.<br>• Ensure the security and integrity of all data management practices.<br>• Provide technical support and recommendations for system improvements.
We are looking for a skilled Data Engineer to join our team in Cypress, California, specializing in creating scalable and high-performance data integration and analytics solutions. This role involves transforming raw data into actionable insights, utilizing cutting-edge technologies to support business objectives. The ideal candidate will have a strong background in data preparation, optimization, and engineering workflows, along with a collaborative approach to solving complex problems.<br><br>Responsibilities:<br>• Design and develop technical solutions for medium-to-high complexity data integrations across multiple platforms.<br>• Collect, clean, and standardize structured and unstructured data to enable efficient analysis.<br>• Build reusable frameworks and pipelines to streamline data preparation and optimization.<br>• Create and maintain data workflows, troubleshooting issues to ensure seamless operation.<br>• Apply statistical and mathematical methods to generate actionable insights from data.<br>• Collaborate with cross-functional teams to translate business requirements into technical solutions.<br>• Document processes and workflows, ensuring alignment with organizational standards.<br>• Adhere to governance policies, best practices, and performance standards for scalability and reliability.<br>• Proactively recommend and implement system improvements, including new tools and methodologies.<br>• Support the adoption of innovative technologies to enhance data engineering capabilities.
<p>We are looking for an experienced Senior Data Engineer to join our team in Oxford, Massachusetts. In this role, you will design and maintain data platforms, leveraging cutting-edge technologies to optimize processes and drive analytical insights. This position requires a strong background in Python development, cloud technologies, and big data tools. This role is hybrid, onsite 3 days a week. Candidates must hold a Green Card or be U.S. citizens.</p><p><br></p><p>Responsibilities:</p><p>• Develop, implement, and maintain scalable data platforms to support business needs.</p><p>• Utilize Python and PySpark to design and optimize data workflows.</p><p>• Collaborate with cross-functional teams to integrate data solutions with existing systems.</p><p>• Leverage Snowflake and other cloud technologies to manage and store large datasets.</p><p>• Implement and refine algorithms for data processing and analytics.</p><p>• Work with Apache Spark and Hadoop to build robust data pipelines.</p><p>• Create APIs to enhance data accessibility and integration.</p><p>• Monitor and troubleshoot data platforms to ensure optimal performance.</p><p>• Stay updated on emerging trends in big data and cloud technologies to continuously improve solutions.</p><p>• Participate in technical discussions and provide expertise during team reviews.</p>
We are looking for an experienced Data Engineer to join our team in Denton, Texas. In this role, you will leverage your expertise in cloud data engineering and advanced analytics to support strategic initiatives in the higher education sector. This position is ideal for someone dedicated to building robust data solutions and guiding less experienced team members.<br><br>Responsibilities:<br>• Develop and optimize data pipelines and workflows using tools like Microsoft Fabric and Azure.<br>• Design and implement data models and warehouses to support analytics and reporting needs.<br>• Create and manage data visualizations with Power BI or Tableau to present actionable insights.<br>• Write and maintain scripts in Python, PySpark, R, and Windows PowerShell to automate data processes.<br>• Integrate data from multiple sources, ensuring accuracy and consistency across systems.<br>• Collaborate with stakeholders to align data solutions with business strategies.<br>• Lead initiatives to solve complex data challenges by applying innovative problem-solving techniques.<br>• Stay up-to-date with emerging trends in analytics and business intelligence, particularly in higher education.<br>• Provide mentorship and technical guidance to less experienced Data Engineers to enhance team capabilities.
<p>As a Data Engineer, you’ll build and optimize pipelines, integrate data sources, and support reporting across the organization using Azure-based tools.</p>
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania. In this role, you will design, develop, and optimize data pipelines and platforms to support business operations and decision-making. If you have a strong technical background, a passion for data-driven solutions, and experience in the financial services industry, we encourage you to apply.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines and workflows using Python and modern data tools.<br>• Optimize and manage Snowflake environments, including data modeling, security practices, and warehouse performance.<br>• Automate financial operations workflows such as escrow management, investor reporting, and receivables processing.<br>• Collaborate with cross-functional teams to gather requirements and deliver data solutions that align with business objectives.<br>• Implement data governance and privacy practices to ensure compliance with financial regulations.<br>• Build and maintain production-grade data integrations across internal and third-party systems.<br>• Utilize Git version control and CI/CD pipelines to deploy and manage data workflows.<br>• Provide technical expertise and serve as a key resource for Snowflake, data pipelines, and automation processes.<br>• Troubleshoot and resolve data-related issues, ensuring system reliability and efficiency.<br>• Communicate effectively with stakeholders, translating technical concepts into actionable insights.
<p><strong>Robert Half</strong> is actively partnering with an Austin-based client to identify a Data Engineer <strong>(contract)</strong>. In this role, you will contribute to data infrastructure and pipeline development. This role is ideal for a candidate with solid SQL Server experience who is ready to take on more complex projects and begin developing leadership skills. <strong>This position is in Austin, TX. Candidates must currently live in Austin, TX.</strong></p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and implement scalable data pipelines using SQL Server Integration Services (SSIS) and related technologies</li><li>Develop and manage advanced database structures including schemas, stored procedures, functions, and views in SQL Server</li><li>Establish and enforce data quality standards and validation protocols to maintain data accuracy and consistency</li><li>Enhance database performance through query optimization, indexing, and proactive system monitoring</li><li>Collaborate with business analysts and stakeholders to gather, interpret, and translate data requirements into technical solutions</li><li>Build and support ETL/ELT workflows for both batch and real-time data processing</li><li>Continuously monitor pipeline health and address performance issues or failures proactively</li><li>Maintain comprehensive documentation of data flows, technical processes, and system configurations</li><li>Participate in peer code reviews and contribute to the development of technical standards and best practices</li><li>Provide mentorship and knowledge sharing to entry-level team members</li><li>Troubleshoot and resolve data inconsistencies, pipeline errors, and integration issues</li></ul>
<p>We are seeking a <strong>Senior Data Engineer</strong> with strong Azure expertise to design, build, and maintain data pipelines and scalable cloud solutions. In this role, you’ll collaborate with cross-functional teams to support data integration, analytics, and business intelligence initiatives that drive impactful decision-making.</p><p><br></p><p><strong>Responsibilities</strong></p><ul><li>Design, develop, and optimize robust data pipelines and ETL processes within the Azure ecosystem.</li><li>Implement scalable data solutions using Azure Data Factory, Azure Synapse, Databricks, and related services.</li><li>Manage and maintain data lake and data warehouse environments.</li><li>Ensure data quality, governance, and security best practices are followed across all solutions.</li><li>Collaborate with data analysts, BI developers, and business stakeholders to deliver accurate and accessible data.</li><li>Monitor and optimize data infrastructure performance, scalability, and cost efficiency.</li><li>Troubleshoot and resolve data pipeline or integration issues in a timely manner.</li><li>Mentor junior engineers and contribute to best practices and standards for data engineering.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in West Chicago, Illinois, and contribute to the development and optimization of data systems and applications. This role requires a highly analytical individual with a strong technical background to ensure system efficiency, data integrity, and seamless integration of business processes. If you are passionate about transforming data into actionable insights and driving business outcomes, we encourage you to apply.<br><br>Responsibilities:<br>• Manage daily operations of IT systems, including maintenance, project coordination, and support tasks to ensure optimal functionality.<br>• Collaborate with IT teams to monitor system health and maintain a secure operational environment.<br>• Analyze and protect sensitive data, recommending solutions to enhance data security and integrity.<br>• Design and document system workflows, integrations, and processes to streamline business operations.<br>• Utilize data analytics tools to uncover patterns, predict outcomes, and support decision-making processes.<br>• Plan and implement system upgrades, feature configurations, and customizations to meet evolving business needs.<br>• Develop policies and procedures to ensure data governance, security, and system reliability.<br>• Lead training sessions and knowledge-sharing activities to promote effective use of IT systems across departments.<br>• Manage change processes and oversee release cycles for business applications.<br>• Execute and oversee projects from initiation to completion, including requirements gathering, implementation, and user testing.
<p>We are seeking a <strong>Senior Data Engineer</strong> with strong experience in <strong>Azure cloud technologies</strong> to join our team. This individual will play a key role in designing, building, and maintaining scalable data pipelines and solutions that support business intelligence, analytics, and operational reporting.</p><p><strong>Responsibilities</strong></p><ul><li>Design and implement scalable data pipelines, ETL/ELT processes, and data integration workflows using Azure Data Factory and related tools.</li><li>Manage and optimize cloud-based data platforms including <strong>Azure Data Lake, Azure SQL Database, and Synapse Analytics</strong>.</li><li>Collaborate with business analysts, data scientists, and application developers to translate requirements into reliable data solutions.</li><li>Ensure data accuracy, quality, and security across all environments.</li><li>Implement and manage CI/CD pipelines for data solutions.</li><li>Support performance tuning, troubleshooting, and optimization of large-scale data workloads.</li><li>Document architecture, workflows, and data models to ensure knowledge transfer and transparency.</li></ul><p><br></p>
We are looking for a skilled and driven Data Engineer to join our team in San Juan Capistrano, California. This role centers on building and optimizing data pipelines, integrating systems, and implementing cloud-based data solutions. The ideal candidate will have a strong grasp of modern data engineering tools and techniques and a passion for delivering high-quality, scalable solutions.<br><br>Responsibilities:<br>• Design, build, and optimize scalable data pipelines using tools such as Databricks, Apache Spark, and PySpark.<br>• Develop integrations between internal systems and external platforms, including CRMs like Salesforce and HubSpot, utilizing APIs.<br>• Implement cloud-based data architectures aligned with data mesh principles.<br>• Collaborate with cross-functional teams to model, transform, and ensure the quality of data for analytics and reporting.<br>• Create and maintain APIs while testing and documenting them using tools like Postman and Swagger.<br>• Write efficient and modular Python code to support data processing workflows.<br>• Apply best practices such as version control, CI/CD, and code reviews to ensure robust and maintainable solutions.<br>• Uphold data security, integrity, and governance throughout the data lifecycle.<br>• Utilize cloud services for compute, storage, and orchestration, including platforms like AWS or Azure.<br>• Work within Agile and Scrum methodologies to deliver solutions efficiently and collaboratively.
<p>We are looking for a Data Engineer to transform the backbone of IT operations and drive innovation! If you're passionate about streamlining infrastructure, automating workflows, and integrating cutting-edge applications to push business goals forward, this role is your opportunity to make an impact.</p><p><br></p><p><strong>Technical Skills:</strong></p><ul><li><strong>DevOps/Data Engineering:</strong> Strong automation mindset, particularly around data pipelines and onboarding workflows.</li><li><strong>Microsoft Ecosystem:</strong> Power Apps, Power Query, and Power BI - essential for dashboarding and internal tools.</li><li><strong>Infrastructure Awareness:</strong> Familiarity with Cisco networking, Palo Alto firewalls, VMware ESXi, Nimble SANs, HP ProLiant servers, and hybrid on-prem/cloud environments.</li><li><strong>Analytics & Monitoring:</strong> Develop real-time performance monitoring solutions (e.g., Power BI dashboards for HPC utilization).</li></ul><p><strong>Salary Range:</strong> $90,000 - $110,000</p><p><strong>Work Model:</strong> Hybrid in Corvallis, OR</p><p><strong>Benefits:</strong></p><ul><li>Comprehensive Medical, Dental, Vision</li><li>Relocation Assistance</li><li>Generous PTO</li><li>Paid Holidays</li><li>Many More!</li></ul>
<p>We are seeking a Senior Data Engineer to maintain legacy ETL processes and lead the development of modern, cloud-native data pipelines on Microsoft Azure. This role involves working closely with stakeholders to design, implement, and optimize data solutions for analytics, reporting, and AI/ML initiatives.</p><p><br></p><p>Key Responsibilities:</p><ul><li>Maintain and operate legacy ETL processes using Microsoft SSIS, PowerShell, SQL procedures, SSAS, .NET code</li><li>Design and implement Azure cloud-native data pipelines using Azure Data Factory, Synapse Pipelines, Apache Spark Notebooks, Python, and SQL</li><li>Redevelop existing SSIS ETL scripts into Azure Data Factory and Synapse Pipelines</li><li>Support data modeling for relational, dimensional, data lakehouse (medallion architecture), data warehouse, and NoSQL environments</li><li>Implement data migration, integrity, quality, metadata management, and security measures</li><li>Monitor and troubleshoot data pipelines to ensure high availability and performance</li><li>Automate platform operations with governance, build, deployment, and monitoring processes</li><li>Actively participate in Agile DevOps practices, including PI planning</li><li>Maintain strict versioning and configuration control for data integrity</li></ul>
<p>🌟 DATA ENGINEER – Build Smarter, Drive Impact 🌟 INTERVIEWS THIS WEEK!</p><p>📍 Location: Des Moines, IA (On-site Hybrid with flexibility) CANNOT BE 100% REMOTE.</p><p>🎯 Type: Full-Time Direct Hire with BENEFITS! No SPONSORSHIP REQUIRED!</p><p>*** For immediate & confidential consideration, please send a message to CARRIE DANGER, SVP PERMANENT PLACEMENT, on LinkedIn or send an email to me with your resume - my email address is on my LinkedIn page. ***</p><p>🌍 Ready to Engineer Data-Driven Success? Ready to transform raw data into actionable insights that fuel business decisions?</p><p><strong>Key Highlights of this Direct Hire 🚀</strong></p><p>✔ Hands-On Engineering: MUST BE ABLE TO BUILD & maintain dashboards, data integrations, and pipelines - not just analyze data but truly engineer solutions AND build data visualizations.</p><p>✔ NOT looking for a Data Analyst. This is a BI (Business Intelligence) Data Engineer role!</p><p>✔ Tooling Expertise: Work with advanced tools like Power BI, Tableau, and Domo, and create dashboards and integrations from scratch.</p><p>✔ Collaborate w/ data and analytics teams while working behind the scenes to streamline & automate workflows.</p><p>________________________________________</p><p>What You'll Do 🤝</p><p>🔹 Data Integrations: Connect systems, design workflows, and maintain data pipelines for seamless delivery of insights.</p><p>🔹 Visualization: Build dashboards that simplify complex data using tools like Domo, Power BI, or Tableau.</p><p>🔹 Analytics Engine: MUST HAVE: Code solutions with SQL, Python, or similar tools to glean meaningful insights from structured data and create data pipelines.</p><p>✅ <strong>MUST HAVES:</strong></p><p>🎓 Education: Bachelor’s degree in Business Analytics, Computer Science, Data Science, or Statistics</p><p><strong>🛠️ TECHNICAL SKILLS</strong></p><p>• 2+ years of hands-on professional experience in data engineering or analytics roles.</p><p>• BUILDING dashboards in BI tools like Tableau, Power BI, or DOMO & creating custom integrations.</p><p>• Proficiency in SQL or Python for robust data analysis and structuring.</p><p>• Ability to develop and maintain visual dashboards with strong UX.</p><p>• BONUS: Experience with third-party platform integration like Salesforce</p><p>This is a direct hire permanent position paying up to $90K plus bonus. For immediate and confidential consideration, please contact me directly: Carrie Danger, SVP, Permanent Placement Team, Iowa Region, at office 515-259-6087 or cell 515-991-0863, or email your resume confidentially - my email address is on my LinkedIn profile. OR you can one-click apply on the Robert Half website and apply specifically to this posting.</p>
We are looking for a Senior Data Engineer with deep expertise in Palantir Foundry to design and implement robust data infrastructures that drive strategic decision-making. In this pivotal role, you will collaborate across teams to develop scalable solutions that unlock the full potential of data for operational and business success. If you thrive in fast-paced environments and enjoy solving complex problems, this position offers an exciting opportunity to make a significant impact.<br><br>Responsibilities:<br>• Design and develop scalable Ontology models, data pipelines, and operational solutions using Palantir Foundry.<br>• Collaborate with cross-functional teams to gather requirements and transform them into secure, high-quality data assets.<br>• Identify opportunities for leveraging data to drive operational efficiencies and strategic decision-making.<br>• Build and maintain data pipelines that support analytics, automation, and other business-critical functions.<br>• Ensure data integrity and availability through proactive validation, monitoring, and troubleshooting.<br>• Provide actionable insights to stakeholders by creating innovative and reliable data services.<br>• Troubleshoot and resolve issues in production and pre-production data systems.<br>• Develop internal tools to streamline deployment automation and enhance platform performance.
<p>We are looking for an AWS Data Engineer to join our team based in Seattle, WA. This is a long-term contract position where you will play a key role in overseeing complex engineering projects, guiding technical teams, and ensuring the successful integration and maintenance of software systems. The ideal candidate will bring extensive experience in system planning, design, and execution, as well as a strong ability to lead and collaborate across functional areas.</p><p><br></p><p>Responsibilities:</p><p>· Implement data cleansing, enrichment, and standardization processes.</p><p>· Automate batch and streaming data pipelines for real-time analytics. Build solutions for both streaming (Kinesis, MSK, Lambda) and batch processing (Glue, EMR, Step Functions).</p><p>· Ensure pipelines are optimized for scalability, performance, and fault tolerance.</p><p>· Optimize SQL queries, data models, and pipeline performance.</p><p>· Ensure efficient use of cloud-native resources (compute, storage, networking).</p><p>· Design and implement data architecture across data lakes, data warehouses, and lakehouses.</p><p>· Optimize data storage strategies (partitioning, indexing, schema design).</p><p>· Implement data integration from diverse sources (databases, APIs, IoT, third-party systems).</p><p>· Work with Data Scientists, Analysts, and BI developers to deliver clean, well-structured data.</p><p>· Document data assets and processes for discoverability.</p><p>· Train existing core staff who will maintain infrastructure and pipelines.</p>
<p><strong>AWS Data Engineer</strong></p><p><br></p><p><strong>Position Overview</strong></p><p>We are seeking a skilled IT Data Integration Engineer / AWS Data Engineer to join our team and lead the development and optimization of data integration processes. This role is critical to ensuring seamless data flow across systems, enabling high-quality, consistent, and accessible data to support business intelligence and analytics initiatives. This is a long-term contract role in Southern California.</p><p><strong>Key Responsibilities</strong></p><p><strong>Develop and Maintain Data Integration Solutions</strong></p><ul><li>Design and implement data workflows using AWS Glue, EMR, Lambda, and Redshift.</li><li>Utilize PySpark, Apache Spark, and Python to process large datasets.</li><li>Ensure accurate and efficient ETL (Extract, Transform, Load) operations.</li></ul><p><strong>Ensure Data Quality and Integrity</strong></p><ul><li>Validate and cleanse data to maintain high standards of quality.</li><li>Implement monitoring, validation, and error-handling mechanisms.</li></ul><p><strong>Optimize Data Integration Processes</strong></p><ul><li>Enhance performance and scalability of data workflows on AWS infrastructure.</li><li>Apply data warehousing concepts including star/snowflake schema design and dimensional modeling.</li><li>Fine-tune queries and optimize Redshift performance.</li></ul><p><strong>Support Business Intelligence and Analytics</strong></p><ul><li>Translate business requirements into technical specifications and data pipelines.</li><li>Collaborate with analysts and stakeholders to deliver timely, integrated data.</li></ul><p><strong>Maintain Documentation and Compliance</strong></p><ul><li>Document workflows, processes, and technical specifications.</li><li>Ensure adherence to data governance policies and regulatory standards.</li></ul>
<p>We are looking for a highly skilled Data Engineering and Software Engineering professional to design, build, and optimize our Data Lake and Data Processing platform on AWS. This role requires deep expertise in data architecture, cloud computing, and software development, as well as the ability to define and implement strategies for deployment, testing, and production workflows.</p><p><br></p><p>Key Responsibilities:</p><ul><li>Design and develop a scalable Data Lake and data processing platform from the ground up on AWS.</li><li>Lead decision-making and provide guidance on code deployment, testing strategies, and production environment workflows.</li><li>Define the roadmap for Data Lake development, ensuring efficient data storage and processing.</li><li>Oversee S3 data storage, Delta.io for change data capture, and AWS data processing services.</li><li>Work with Python and PySpark to process large-scale data efficiently.</li><li>Implement and manage Lambda, Glue, Kafka, and Firehose for seamless data integration and processing.</li><li>Collaborate with stakeholders to align technical strategies with business objectives, while maintaining a hands-on engineering focus.</li><li>Drive innovation and cost optimization in data architecture and cloud infrastructure.</li><li>Provide expertise in data warehousing and transitioning into modern AWS-based data processing practices.</li></ul>
We are seeking a Data Engineer to join our team based in Bethesda, Maryland. As part of our Investment Management team, you will play a crucial role in designing and maintaining data pipelines in our Azure Data Lake, implementing data warehousing strategies, and collaborating with various teams to address data engineering needs.<br><br>Responsibilities:<br><br>• Design robust data pipelines within Azure Data Lake to support our investment management operations.<br>• Implement effective data warehousing strategies that ensure efficient storage and retrieval of data.<br>• Collaborate with Power BI developers to integrate data reporting seamlessly and effectively.<br>• Conduct data validation and audits to uphold the accuracy and quality of our data pipelines.<br>• Troubleshoot pipeline processes and optimize them for improved performance.<br>• Work cross-functionally with different teams to address and fulfill data engineering needs with a focus on scalability and reliability.<br>• Utilize Apache Kafka, Apache Pig, Apache Spark, and other big data technologies for efficient data processing and algorithm implementation.<br>• Develop APIs and use AWS technologies to ensure seamless data flow and analytics.<br>• Leverage Apache Hadoop for effective data management and analytics.
<p><strong>Position: Data Engineer</strong></p><p><strong>Location: Des Moines, IA - HYBRID</strong></p><p><strong>Salary: up to $130K permanent position plus exceptional benefits</strong></p><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***</strong></p><p> </p><p>Our client is one of the best employers in town. Come join this successful organization with smart, talented, results-oriented team members. You will find that passion in your career again, working together with some of the best in the business.</p><p> </p><p>Are you an experienced Senior Data Engineer seeking a new adventure enhancing data reliability and quality for an industry leader? Look no further! Our client has a robust data and reporting team and needs you to bolster their data warehouse and data solutions and facilitate data extraction, transformation, and reporting.</p><p> </p><p>Key Responsibilities:</p><ul><li>Create and maintain data architecture and data models for efficient information storage and retrieval.</li><li>Ensure rigorous data collection from various sources and storage in a centralized location, such as a data warehouse.</li><li>Design and implement data pipelines for ETL using tools like SSIS and Azure Data Factory.</li><li>Monitor data performance and troubleshoot any issues in the data pipeline.</li><li>Collaborate with development teams to track work progress and ensure timely completion of tasks.</li><li>Implement data validation and cleansing processes to ensure data quality and accuracy.</li><li>Optimize performance to ensure efficient execution of data queries and reports.</li><li>Uphold data security by storing data securely and restricting access to sensitive data to authorized users only.</li></ul><p>Qualifications:</p><ul><li>A 4-year degree related to computer science or equivalent work experience.</li><li>At least 5 years of professional experience.</li><li>Strong SQL Server and relational database experience.</li><li>Proficiency in SSIS, SSRS.</li><li>.NET experience is a plus.</li></ul><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one-click apply on our Robert Half website. No third-party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***</strong></p><p> </p>
<p>The Data Engineer will support our Data Analytics team in building our analytics products. This role will be responsible for adding features to our production data pipeline and performing ad-hoc analysis of raw and processed data. The ideal candidate will have experience building and optimizing data pipelines and will enjoy learning and working with cutting-edge technologies.</p><p><br></p>
We are looking for an experienced Data Engineer to join our team in Johns Creek, Georgia, on a Contract to permanent basis. This position offers an exciting opportunity to design and optimize data pipelines, manage cloud systems, and contribute to the scalability of our Azure environment. The ideal candidate will bring advanced technical skills and a collaborative mindset to support critical data infrastructure initiatives.<br><br>Responsibilities:<br>• Develop and enhance data pipelines using Azure Data Factory to ensure efficient data processing and integration.<br>• Manage and administer Azure Managed Instances to support database operations and ensure system reliability.<br>• Implement real-time data replication from on-premises systems to cloud environments to support seamless data accessibility.<br>• Utilize advanced ETL tools and processes to transform and integrate complex data workflows.<br>• Collaborate with cross-functional teams to ensure data integration across systems, with a preference for experience in Salesforce integration.<br>• Leverage real-time streaming technologies such as Confluent Cloud or Apache Kafka to support dynamic data environments.<br>• Optimize data workflows using tools like Apache Spark and Hadoop to enhance processing performance.<br>• Troubleshoot and resolve database-related issues to maintain system stability and performance.<br>• Work closely with stakeholders to understand data requirements and provide innovative solutions.