<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and operate real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
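<p><em>For illustration only:</em> a minimal sketch of the kind of real-time pipeline this role describes, using PySpark Structured Streaming to read from Kafka and aggregate events with Spark SQL functions. The broker address, topic name, and output paths are hypothetical placeholders, and the Kafka source assumes the spark-sql-kafka connector package is available on the cluster.</p><pre><code># Hypothetical streaming sketch: Kafka -> windowed counts -> Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "click-events")               # placeholder topic
    .load()
    .selectExpr("CAST(value AS STRING) AS body", "timestamp")
)

# One-minute tumbling-window counts, tolerating five minutes of late data.
counts = (
    events.withWatermark("timestamp", "5 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = (
    counts.writeStream.outputMode("append")
    .format("delta")
    .option("checkpointLocation", "/tmp/chk")  # placeholder path
    .start("/tmp/click_counts")                # placeholder path
)
query.awaitTermination()
</code></pre>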
<p>We are looking for an experienced Senior Data Engineer to join our team in Oxford, Massachusetts. In this role, you will design and maintain data platforms, leveraging cutting-edge technologies to optimize processes and drive analytical insights. This position requires a strong background in Python development, cloud technologies, and big data tools. This role is hybrid, onsite 3 days a week. Candidates must hold a green card or be U.S. citizens.</p><p><br></p><p>Responsibilities:</p><p>• Develop, implement, and maintain scalable data platforms to support business needs.</p><p>• Utilize Python and PySpark to design and optimize data workflows.</p><p>• Collaborate with cross-functional teams to integrate data solutions with existing systems.</p><p>• Leverage Snowflake and other cloud technologies to manage and store large datasets.</p><p>• Implement and refine algorithms for data processing and analytics.</p><p>• Work with Apache Spark and Hadoop to build robust data pipelines.</p><p>• Create APIs to enhance data accessibility and integration.</p><p>• Monitor and troubleshoot data platforms to ensure optimal performance.</p><p>• Stay updated on emerging trends in big data and cloud technologies to continuously improve solutions.</p><p>• Participate in technical discussions and provide expertise during team reviews.</p>
We are looking for an experienced Data Engineer to join our team in Johns Creek, Georgia, on a Contract to permanent basis. This position offers an exciting opportunity to design and optimize data pipelines, manage cloud systems, and contribute to the scalability of our Azure environment. The ideal candidate will bring advanced technical skills and a collaborative mindset to support critical data infrastructure initiatives.<br><br>Responsibilities:<br>• Develop and enhance data pipelines using Azure Data Factory to ensure efficient data processing and integration.<br>• Manage and administer Azure Managed Instances to support database operations and ensure system reliability.<br>• Implement real-time data replication from on-premises systems to cloud environments to support seamless data accessibility.<br>• Utilize advanced ETL tools and processes to transform and integrate complex data workflows.<br>• Collaborate with cross-functional teams to ensure data integration across systems, with a preference for experience in Salesforce integration.<br>• Leverage real-time streaming technologies such as Confluent Cloud or Apache Kafka to support dynamic data environments.<br>• Optimize data workflows using tools like Apache Spark and Hadoop to enhance processing performance.<br>• Troubleshoot and resolve database-related issues to maintain system stability and performance.<br>• Work closely with stakeholders to understand data requirements and provide innovative solutions.
<p>We are on the lookout for a Data Engineer in Basking Ridge, New Jersey (1-2 days a week on-site). In this role, you will be required to develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also be expected to have a hands-on approach to application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS technologies.</p><p>• Work extensively with the Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize cloud technologies and data visualization tools to enhance business intelligence.</p><p>• Work with various types of data, including clinical trials data, genomics and biomarker data, real-world data, and discovery data.</p><p>• Maintain familiarity with key industry best practices in a regulated “GxP” environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
<p>We are looking for a skilled Data Engineer to join our team. In this role, you will play a key part in designing and building scalable data solutions to support our mission of improving cancer care. The ideal candidate will thrive in a collaborative environment and have a strong background in developing robust data pipelines and working with cloud-based platforms.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable, componentized processes to enhance the business intelligence platform.</p><p>• Develop and optimize robust data pipelines to support data integration and transformation.</p><p>• Collaborate with cross-functional teams to translate requirements into actionable tasks and deliverables.</p><p>• Evaluate and utilize big data and cloud technologies to deliver effective solutions.</p><p>• Troubleshoot and resolve technical issues efficiently, while identifying opportunities for process improvements.</p><p>• Maintain clear documentation and communicate effectively with team members about code functionality.</p><p>• Adapt to shifting priorities and requirements in an agile development environment.</p><p>• Implement instrumentation, logging, and alerting practices to ensure system reliability.</p>
<p>Hands-On Technical SENIOR Microsoft Stack Data Engineer / On-Prem to Cloud Senior ETL Engineer - WEEKLY HYBRID position with major flexibility! FULL Microsoft on-prem stack.</p><p><br></p><p>LOCATION: HYBRID WEEKLY in Des Moines. You must reside in the Des Moines area for weekly onsite work. No travel back and forth, and this is not a remote position! If you live in Des Moines, eventually you can MOSTLY work remote!! This position has upside with training in Azure.</p><p><br></p><p>IMMEDIATE HIRE! Solve real business problems.</p><p><br></p><p>Hands-On Technical SENIOR Microsoft Stack Data Engineer | SENIOR Data Warehouse Engineer / SENIOR Data Engineer / Senior ETL Developer / Azure Data Engineer (Direct Hire) who is looking to help modernize and build out a data warehouse, and lead the build-out of a data lake in the CLOUD, but FIRST rebuild an on-prem data warehouse, working with disparate data to structure it for consumable reporting.</p><p><br></p><p>YOU WILL BE DOING ALL ASPECTS OF data engineering. Must have data warehouse & data lake skills. You will be in the technical weeds day to day, BUT you could grow into the Technical Leader of this team. ETL skills like SSIS and working with disparate data are required. SSAS is a plus! Fact and dimension data warehouse experience is a must.</p><p>This is a permanent Direct Hire, Hands-On Technical Manager of Data Engineering position with one of our clients in Des Moines, up to $155K plus bonus.</p><p><br></p><p>PERKS: Bonus, 2 1/2-day weekends!</p>
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. This is a long-term contract opportunity for an individual with a strong background in building and optimizing data pipelines and systems. The ideal candidate will have a passion for working with large-scale data and a proven ability to leverage modern tools and technologies to deliver high-quality solutions.<br><br>Responsibilities:<br>• Design, implement, and maintain scalable data pipelines and ETL processes to support business needs.<br>• Develop and optimize solutions using tools such as Apache Spark, Python, and Apache Hadoop.<br>• Manage and integrate streaming data platforms like Apache Kafka to ensure real-time data processing.<br>• Utilize technologies such as AWS Lambda, Step Functions, Glue, and Redshift for cloud-based data solutions.<br>• Collaborate with cross-functional teams to understand data requirements and provide innovative solutions.<br>• Ensure data quality and integrity through rigorous testing and validation processes.<br>• Create and maintain documentation for data workflows, processes, and system architecture.<br>• Implement infrastructure as code using Terraform to enhance system reliability and scalability.<br>• Troubleshoot and resolve data-related technical issues promptly.<br>• Stay updated on emerging trends and technologies in data engineering to continuously improve practices.
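<p><em>Illustrative sketch:</em> one common way the Lambda-plus-Glue pattern implied by this stack fits together is an S3 event triggering a Glue job run. The Glue job name and the <code>--input_path</code> argument key below are hypothetical placeholders; <code>start_job_run</code> is the standard boto3 Glue call.</p><pre><code># Hypothetical event-driven orchestration: an S3 "object created" event
# triggers this Lambda, which starts a Glue ETL job for the new file.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # "nightly-etl" and the --input_path argument key are placeholders.
        run = glue.start_job_run(
            JobName="nightly-etl",
            Arguments={"--input_path": f"s3://{bucket}/{key}"},
        )
        print(f"Started Glue run {run['JobRunId']} for s3://{bucket}/{key}")
</code></pre>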
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania. This is a Contract-to-permanent position that offers a dynamic opportunity to work with large-scale data sets and contribute to data modeling and automation processes. If you're passionate about leveraging advanced tools and technologies to optimize data workflows, we encourage you to apply.<br><br>Responsibilities:<br>• Develop and implement data models for large-scale datasets, focusing on data from retailers such as Walmart.<br>• Utilize tools such as Python, Snowflake, and SQL to design and optimize data pipelines.<br>• Automate data workflows through scripting, including Selenium-based web automation tasks.<br>• Collaborate with cross-functional teams to support data integration and analysis for multiple retailers.<br>• Ensure data accuracy and consistency across all platforms and workflows.<br>• Troubleshoot and resolve issues related to data pipelines and automation processes.<br>• Provide technical expertise to enhance data engineering practices and improve system performance.<br>• Document processes and workflows to maintain clarity and facilitate future updates.
<p>As a Data Engineer, you’ll build and optimize pipelines, integrate data sources, and support reporting across the organization using Azure-based tools.</p>
We are seeking a Data Engineer to join our team based in Bethesda, Maryland. As part of our Investment Management team, you will play a crucial role in designing and maintaining data pipelines in our Azure Data Lake, implementing data warehousing strategies, and collaborating with various teams to address data engineering needs.<br><br>Responsibilities:<br><br>• Design robust data pipelines within Azure Data Lake to support our investment management operations.<br>• Implement effective data warehousing strategies that ensure efficient storage and retrieval of data.<br>• Collaborate with Power BI developers to integrate data reporting seamlessly and effectively.<br>• Conduct data validation and audits to uphold the accuracy and quality of our data pipelines.<br>• Troubleshoot pipeline processes and optimize them for improved performance.<br>• Work cross-functionally with different teams to address and fulfill data engineering needs with a focus on scalability and reliability.<br>• Utilize Apache Kafka, Apache Pig, Apache Spark, and other big data technologies for efficient data processing and algorithm implementation.<br>• Develop APIs and use AWS technologies to ensure seamless data flow and analytics.<br>• Leverage Apache Hadoop for effective data management and analytics.
<p>We are seeking a <strong>Senior Data Engineer</strong> with deep expertise in <strong>Power BI, Microsoft Fabric, DAX, and SQL</strong> to join our growing analytics team. This role will focus on building and optimizing data models, pipelines, and reporting solutions to support enterprise-wide business intelligence and decision-making. You’ll collaborate closely with analysts, stakeholders, and IT teams to ensure data solutions are scalable, efficient, and insightful.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable data pipelines and ETL processes within the Microsoft ecosystem.</li><li>Develop and optimize complex SQL queries, stored procedures, and views for efficient data extraction and transformation.</li><li>Build and enhance <strong>Power BI dashboards</strong> and reports with advanced DAX calculations to provide actionable insights.</li><li>Work with <strong>Microsoft Fabric</strong> to manage data integration, governance, and scalable analytics workloads.</li><li>Partner with business stakeholders to gather requirements and translate them into technical solutions.</li><li>Ensure data accuracy, security, and availability across multiple business units.</li><li>Monitor, troubleshoot, and optimize data infrastructure for performance and cost efficiency.</li><li>Mentor junior team members and contribute to best practices around data engineering and BI.</li></ul><p><br></p>
<p>We are looking for an experienced AWS/IDMC Data Engineer to join our team in Southern California. In this long-term contract position, you will play a key role in designing, developing, and optimizing data integration processes. This onsite role offers the opportunity to collaborate with cross-functional teams, ensuring seamless data workflows and high-performing systems. This role will be sitting onsite 4 days per week.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement data integration processes using Informatica Intelligent Cloud Services to meet business requirements.</p><p>• Collaborate with business analysts and data architects to design solutions for data transformation and integration.</p><p>• Create, test, and maintain application interfaces using various connectors, including Salesforce and other technologies.</p><p>• Automate and optimize data validation processes to enhance efficiency and reliability.</p><p>• Monitor and troubleshoot data workflows to ensure consistent system performance.</p><p>• Conduct evaluations of existing data landscapes and recommend architectural improvements.</p><p>• Build and optimize data pipelines into Amazon Redshift, ensuring scalability and performance.</p><p>• Utilize distributed data processing tools such as AWS Glue and Spark to manage large-scale data.</p><p>• Apply advanced data modeling techniques to streamline workflows and enhance system operations.</p><p>• Provide technical support and guidance to ensure compliance with best practices in data engineering.</p>
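<p><em>Illustrative sketch:</em> one small, hedged view of the Redshift side of this role. Staged S3 files are typically bulk-loaded with a COPY statement, which Redshift executes in parallel across the cluster. The cluster endpoint, credentials, table, bucket, and IAM role below are all placeholders, and psycopg2 is just one common Python client choice; in practice, Informatica IDMC mappings would often generate such loads themselves.</p><pre><code># Placeholder-heavy sketch: bulk-load staged S3 files into Redshift via COPY.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-west-2.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # placeholder credential
)
copy_sql = """
    COPY staging.orders
    FROM 's3://example-bucket/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)  # Redshift ingests the staged files in parallel
conn.close()
</code></pre>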
<p>We are looking for a Data Engineer to transform the backbone of IT operations and drive innovation! If you're passionate about streamlining infrastructure, automating workflows, and integrating cutting-edge applications to push business goals forward, this role is your opportunity to make an impact.</p><p><br></p><p><strong>Technical Skills:</strong></p><ul><li><strong>DevOps/Data Engineering:</strong> Strong automation mindset, particularly around data pipelines and onboarding workflows.</li><li><strong>Microsoft Ecosystem:</strong> Power Apps, Power Query, and Power BI - essential for dashboarding and internal tools.</li><li><strong>Infrastructure Awareness</strong>: Familiarity with Cisco networking, Palo Alto firewalls, VMware ESXi, Nimble SANs, HP ProLiant servers, and hybrid on-prem/cloud environments.</li><li><strong>Analytics & Monitoring</strong>: Develop real-time performance monitoring solutions (e.g., Power BI dashboards for HPC utilization).</li></ul><p><strong>Salary Range</strong>: $90,000 - $110,000</p><p><strong>Work Model</strong>: Hybrid in Corvallis, OR</p><p><strong>Benefits</strong>:</p><ul><li>Comprehensive Medical, Dental, Vision</li><li>Relocation Assistance</li><li>Generous PTO</li><li>Paid Holidays</li><li>Many More!</li></ul>
<p>We are seeking a Senior Data Engineer to maintain legacy ETL processes and lead the development of modern, cloud-native data pipelines on Microsoft Azure. This role involves working closely with stakeholders to design, implement, and optimize data solutions for analytics, reporting, and AI/ML initiatives.</p><p><br></p><p>Key Responsibilities:</p><ul><li>Maintain and operate legacy ETL processes using Microsoft SSIS, PowerShell, SQL procedures, SSAS, .NET code</li><li>Design and implement Azure cloud-native data pipelines using Azure Data Factory, Synapse Pipelines, Apache Spark Notebooks, Python, and SQL</li><li>Redevelop existing SSIS ETL scripts into Azure Data Factory and Synapse Pipelines</li><li>Support data modeling for relational, dimensional, data lakehouse (medallion architecture), data warehouse, and NoSQL environments</li><li>Implement data migration, integrity, quality, metadata management, and security measures</li><li>Monitor and troubleshoot data pipelines to ensure high availability and performance</li><li>Automate platform operations with governance, build, deployment, and monitoring processes</li><li>Actively participate in Agile DevOps practices, including PI planning</li><li>Maintain strict versioning and configuration control for data integrity</li></ul>
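<p><em>Illustrative sketch:</em> one hop of the medallion architecture named above (bronze to silver) might look like the following Spark notebook cell. The Delta paths and the <code>customer_id</code> de-duplication key are illustrative assumptions, not a prescribed implementation.</p><pre><code># Sketch of one medallion hop (bronze to silver): read raw Delta data,
# clean and de-duplicate it, and write the curated layer.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: raw ingested records, kept as delivered.
bronze = spark.read.format("delta").load("/lake/bronze/customers")  # placeholder path

# Silver: typed, cleaned, de-duplicated view of the same data.
silver = (
    bronze.filter(F.col("customer_id").isNotNull())
    .withColumn("email", F.lower(F.trim(F.col("email"))))
    .dropDuplicates(["customer_id"])
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/customers")
</code></pre>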
<p><strong>AWS Data Engineer</strong></p><p><br></p><p><strong>Position Overview</strong></p><p>We are seeking a skilled IT Data Integration Engineer / AWS Data Engineer to join our team and lead the development and optimization of data integration processes. This role is critical to ensuring seamless data flow across systems, enabling high-quality, consistent, and accessible data to support business intelligence and analytics initiatives. This is a long-term contract role in Southern California.</p><p><strong>Key Responsibilities</strong></p><p>Develop and Maintain Data Integration Solutions</p><ul><li>Design and implement data workflows using AWS Glue, EMR, Lambda, and Redshift.</li><li>Utilize PySpark, Apache Spark, and Python to process large datasets.</li><li>Ensure accurate and efficient ETL (Extract, Transform, Load) operations.</li></ul><p>Ensure Data Quality and Integrity</p><ul><li>Validate and cleanse data to maintain high standards of quality.</li><li>Implement monitoring, validation, and error-handling mechanisms.</li></ul><p>Optimize Data Integration Processes</p><ul><li>Enhance performance and scalability of data workflows on AWS infrastructure.</li><li>Apply data warehousing concepts including star/snowflake schema design and dimensional modeling.</li><li>Fine-tune queries and optimize Redshift performance.</li></ul><p>Support Business Intelligence and Analytics</p><ul><li>Translate business requirements into technical specifications and data pipelines.</li><li>Collaborate with analysts and stakeholders to deliver timely, integrated data.</li></ul><p>Maintain Documentation and Compliance</p><ul><li>Document workflows, processes, and technical specifications.</li><li>Ensure adherence to data governance policies and regulatory standards.</li></ul>
We are looking for a skilled Data Engineer to join our team in West Chicago, Illinois, and contribute to the development and optimization of data systems and applications. This role requires a highly analytical individual with a strong technical background to ensure system efficiency, data integrity, and seamless integration of business processes. If you are passionate about transforming data into actionable insights and driving business outcomes, we encourage you to apply.<br><br>Responsibilities:<br>• Manage daily operations of IT systems, including maintenance, project coordination, and support tasks to ensure optimal functionality.<br>• Collaborate with IT teams to monitor system health and maintain a secure operational environment.<br>• Analyze and protect sensitive data, recommending solutions to enhance data security and integrity.<br>• Design and document system workflows, integrations, and processes to streamline business operations.<br>• Utilize data analytics tools to uncover patterns, predict outcomes, and support decision-making processes.<br>• Plan and implement system upgrades, feature configurations, and customizations to meet evolving business needs.<br>• Develop policies and procedures to ensure data governance, security, and system reliability.<br>• Lead training sessions and knowledge-sharing activities to promote effective use of IT systems across departments.<br>• Manage change processes and oversee release cycles for business applications.<br>• Execute and oversee projects from initiation to completion, including requirements gathering, implementation, and user testing.
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania. In this role, you will design, develop, and optimize data pipelines and platforms to support business operations and decision-making. If you have a strong technical background, a passion for data-driven solutions, and experience in the financial services industry, we encourage you to apply.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines and workflows using Python and modern data tools.<br>• Optimize and manage Snowflake environments, including data modeling, security practices, and warehouse performance.<br>• Automate financial operations workflows such as escrow management, investor reporting, and receivables processing.<br>• Collaborate with cross-functional teams to gather requirements and deliver data solutions that align with business objectives.<br>• Implement data governance and privacy practices to ensure compliance with financial regulations.<br>• Build and maintain production-grade data integrations across internal and third-party systems.<br>• Utilize Git version control and CI/CD pipelines to deploy and manage data workflows.<br>• Provide technical expertise and serve as a key resource for Snowflake, data pipelines, and automation processes.<br>• Troubleshoot and resolve data-related issues, ensuring system reliability and efficiency.<br>• Communicate effectively with stakeholders, translating technical concepts into actionable insights.
We are looking for a skilled Data Engineer to join our team in Johnson City, Texas. In this role, you will design and optimize data solutions to enable seamless data transfer and management in Snowflake. You will work collaboratively with cross-functional teams to enhance data accessibility and support data-driven decision-making across the organization.<br><br>Responsibilities:<br>• Design, develop, and implement ETL solutions to facilitate data transfer between diverse sources and Snowflake.<br>• Optimize the performance of Snowflake databases by constructing efficient data structures and utilizing indexes.<br>• Develop and maintain automated, scalable data pipelines within the Snowflake environment.<br>• Deploy and configure monitoring tools to ensure optimal performance of the Snowflake platform.<br>• Collaborate with product managers and agile teams to refine requirements and deliver solutions.<br>• Create integrations to accommodate growing data volume and complexity.<br>• Enhance data models to improve accessibility for business intelligence tools.<br>• Implement systems to ensure data quality and availability for stakeholders.<br>• Write unit and integration tests while documenting technical work.<br>• Automate testing and deployment processes in Snowflake within Azure.
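<p><em>Illustrative sketch:</em> a hedged example of the Snowflake loading pattern this role describes, using the official Python connector: PUT stages a local file, COPY INTO loads it from the table stage, and MERGE upserts it into the target. Account, credentials, and table names are placeholders.</p><pre><code># Hedged sketch: stage a file, load it, and upsert into the target table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",  # placeholder
    user="etl_user",
    password="...",             # placeholder credential
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
# PUT uploads to the staging table's table stage; COPY INTO loads from it.
cur.execute("PUT file:///data/orders.csv @%ORDERS_STAGING")
cur.execute("COPY INTO ORDERS_STAGING FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
# MERGE upserts staged rows into the target table.
cur.execute("""
    MERGE INTO ORDERS t
    USING ORDERS_STAGING s ON t.order_id = s.order_id
    WHEN MATCHED THEN UPDATE SET t.amount = s.amount
    WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount)
""")
cur.close()
conn.close()
</code></pre>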
We are looking for a skilled and driven Data Engineer to join our team in San Juan Capistrano, California. This role centers on building and optimizing data pipelines, integrating systems, and implementing cloud-based data solutions. The ideal candidate will have a strong grasp of modern data engineering tools and techniques and a passion for delivering high-quality, scalable solutions.<br><br>Responsibilities:<br>• Design, build, and optimize scalable data pipelines using tools such as Databricks, Apache Spark, and PySpark.<br>• Develop integrations between internal systems and external platforms, including CRMs like Salesforce and HubSpot, utilizing APIs.<br>• Implement cloud-based data architectures aligned with data mesh principles.<br>• Collaborate with cross-functional teams to model, transform, and ensure the quality of data for analytics and reporting.<br>• Create and maintain APIs while testing and documenting them using tools like Postman and Swagger.<br>• Write efficient and modular Python code to support data processing workflows.<br>• Apply best practices such as version control, CI/CD, and code reviews to ensure robust and maintainable solutions.<br>• Uphold data security, integrity, and governance throughout the data lifecycle.<br>• Utilize cloud services for compute, storage, and orchestration, including platforms like AWS or Azure.<br>• Work within Agile and Scrum methodologies to deliver solutions efficiently and collaboratively.
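<p><em>Illustrative sketch:</em> the CRM integrations mentioned above usually reduce to authenticated REST calls. Below is a hedged example using the requests library; the endpoint URL, token, and payload shape are hypothetical, standing in for a Salesforce- or HubSpot-style API.</p><pre><code># Hypothetical CRM integration: push one transformed record to a REST API.
import requests

API_URL = "https://api.example-crm.com/v1/contacts"  # hypothetical endpoint
TOKEN = "..."                                        # placeholder credential

payload = {"email": "jane@example.com", "lifecycle_stage": "customer"}
resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()  # fail loudly so the pipeline can retry or alert
print(resp.json())
</code></pre>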
<p><strong>Position: Data Engineer</strong></p><p><strong>Location: Des Moines, IA - HYBRID</strong></p><p><strong>Salary: up to $130K permanent position plus exceptional benefits</strong></p><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***</strong></p><p> </p><p>Our client is one of the best employers in town. Come join this successful organization with smart, talented, results-oriented team members. You will find that passion in your career again, working together with some of the best in the business.</p><p> </p><p>Are you an experienced Senior Data Engineer seeking a new adventure enhancing data reliability and quality for an industry leader? Look no further! Our client has a robust data and reporting team and needs you to bolster its data warehouse and data solutions and facilitate data extraction, transformation, and reporting.</p><p> </p><p>Key Responsibilities:</p><ul><li>Create and maintain data architecture and data models for efficient information storage and retrieval.</li><li>Ensure rigorous data collection from various sources and storage in a centralized location, such as a data warehouse.</li><li>Design and implement data pipelines for ETL using tools like SSIS and Azure Data Factory.</li><li>Monitor data performance and troubleshoot any issues in the data pipeline.</li><li>Collaborate with development teams to track work progress and ensure timely completion of tasks.</li><li>Implement data validation and cleansing processes to ensure data quality and accuracy.</li><li>Optimize performance to ensure efficient execution of data queries and reports.</li><li>Uphold data security by storing data securely and restricting access to sensitive data to authorized users only.</li></ul><p>Qualifications:</p><ul><li>A 4-year degree related to computer science or equivalent work experience.</li><li>At least 5 years of professional experience.</li><li>Strong SQL Server and relational database experience.</li><li>Proficiency in SSIS, SSRS.</li><li>.Net experience is a plus.</li></ul><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***</strong></p><p> </p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Las Vegas, Nevada. In this role, you will leverage your expertise in Google Cloud Platform to design and optimize scalable data solutions that power analytics, machine learning, and business intelligence initiatives. This position offers an exciting opportunity to collaborate with cross-functional teams and build high-performance data infrastructure.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines and workflows using Google Cloud Platform services such as BigQuery, Dataflow, Pub/Sub, Cloud Composer, and Dataproc.<br>• Develop and maintain ETL processes to ensure data quality, reliability, and optimal performance.<br>• Collaborate with data scientists, business intelligence teams, and stakeholders to support advanced analytics and machine learning projects.<br>• Establish and enforce data governance protocols, quality checks, and monitoring frameworks.<br>• Optimize query performance and storage solutions within BigQuery and other Google Cloud services.<br>• Ensure solutions adhere to best practices for scalability, security, and cost-efficiency.<br>• Partner with DevOps and cloud infrastructure teams to manage deployment using tools like Terraform and Deployment Manager.<br>• Troubleshoot data pipeline issues and conduct thorough root cause analysis to ensure system reliability.<br>• Implement modern orchestration frameworks such as Airflow to streamline workflows and automation.
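<p><em>Illustrative sketch:</em> a minimal, hedged example of the BigQuery work this role centers on, running a parameterized query and materializing the result into a destination table with the google-cloud-bigquery client. The project, dataset, and table names are assumptions for illustration.</p><pre><code># Minimal sketch: run a parameterized BigQuery query and materialize the
# result into a destination table.
import datetime
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

job_config = bigquery.QueryJobConfig(
    destination="example-project.analytics.daily_revenue",  # placeholder table
    write_disposition="WRITE_TRUNCATE",
    query_parameters=[
        bigquery.ScalarQueryParameter("day", "DATE", datetime.date(2024, 1, 1)),
    ],
)
sql = """
    SELECT order_date, SUM(amount) AS revenue
    FROM `example-project.sales.orders`
    WHERE order_date = @day
    GROUP BY order_date
"""
client.query(sql, job_config=job_config).result()  # blocks until completion
</code></pre>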
<p>We are looking for a skilled Data Engineer to join our team in Washington, DC. This is a long-term contract position offering the opportunity to work on innovative projects and contribute to the development of high-quality software solutions. The ideal candidate will possess strong problem-solving skills and collaborate with cross-functional teams to ensure seamless application performance and scalability.</p><p><br></p><p>As a Data Engineer, you will:</p><ul><li>Design and optimize <strong>data ingestion and indexing pipelines</strong></li><li>Work with <strong>structured and unstructured data</strong> to support real-time search capabilities</li><li>Develop and maintain workflows using <strong>Airflow</strong>, <strong>PySpark</strong>, and <strong>Databricks</strong></li><li>Build and deploy infrastructure using <strong>Terraform</strong></li><li>Index data into <strong>Elasticsearch</strong> or <strong>AWS Search</strong> with low latency (see the sketch after this list)</li><li>Troubleshoot <strong>networking and cloud infrastructure issues</strong></li><li>Collaborate on potential <strong>API service development</strong> (in planning phase)</li><li>Contribute to infrastructure scalability (possible use of <strong>Kubernetes)</strong></li></ul>
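<p><em>Illustrative sketch:</em> low-latency indexing of the kind listed above is commonly done through the bulk helper of the official Elasticsearch Python client. The endpoint, index name, and document shape below are illustrative assumptions.</p><pre><code># Hedged sketch: bulk-index documents with the official Python client.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

docs = [
    {"title": "Quarterly report", "body": "...", "ts": "2024-01-01T00:00:00Z"},
    {"title": "Press release", "body": "...", "ts": "2024-01-02T00:00:00Z"},
]

# helpers.bulk batches one action per document into a single request,
# which keeps indexing latency low compared to per-document index calls.
actions = ({"_index": "documents", "_source": d} for d in docs)
indexed, _ = helpers.bulk(es, actions)
print(f"indexed {indexed} documents")
</code></pre>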
We are looking for a skilled Data Engineer to join our team in Chicago, Illinois. In this long-term contract role, you will contribute to the development and maintenance of robust data infrastructures that support business intelligence solutions. If you thrive in a collaborative environment and enjoy solving complex problems, this position offers an excellent opportunity to apply your expertise and drive impactful projects forward.<br><br>Responsibilities:<br>• Design, build, and maintain scalable data infrastructures to support business intelligence needs.<br>• Develop and implement data models and frameworks that optimize data analysis and reporting.<br>• Collaborate with business partners to understand data requirements and deliver effective solutions.<br>• Lead initiatives in business intelligence projects, ensuring alignment with organizational goals.<br>• Apply compliance standards within project scope, document processes, and participate in related activities.<br>• Utilize industry knowledge to enhance data solutions and provide informed recommendations.<br>• Work on the full data warehouse lifecycle, including data analysis, dimensional modeling, and design.<br>• Solve complex problems using proven methodologies and adapt them to new challenges.<br>• Assist team members informally with troubleshooting and knowledge sharing.<br>• Leverage tools such as Apache Spark, Python, Hadoop, Kafka, and ETL processes to optimize data workflows.
<p>🌟 DATA ENGINEER – Build Smarter, Drive Impact 🌟 INTERVIEWS THIS WEEK!</p><p>📍 Location: Des Moines, IA (On-site Hybrid with flexibility) CANNOT BE 100% REMOTE.</p><p>🎯 Type: Full-Time Direct Hire with BENEFITS! No SPONSORSHIP REQUIRED!!!</p><p>*** For immediate & confidential consideration, please send a message to CARRIE DANGER, SVP PERMANENT PLACEMENT, on LinkedIn or send an email to me with your resume - my email address is on my LinkedIn page. ***</p><p>🌍 Ready to Engineer Data-Driven Success? Ready to transform raw data into actionable insights that fuel business decisions?</p><p><strong>Key Highlights of this Direct Hire 🚀</strong></p><p>✔ Hands-On Engineering: MUST BE ABLE TO BUILD & maintain dashboards, data integrations, and pipelines, not just analyze data but truly engineer solutions AND build data visualizations.</p><p>✔ NOT looking for a Data Analyst. This is a BI (Business Intelligence) Data Engineer role!</p><p>✔ Tooling Expertise: Work with advanced tools like Power BI, Tableau, and Domo, and create dashboards and integrations from scratch.</p><p>✔ Collaborate w/ data and analytics teams while working behind the scenes to streamline workflows & automate processes.</p><p>________________________________________</p><p>What You'll Do 🤝</p><p>🔹 Data Integrations: Connect systems, design workflows, and maintain data pipelines for seamless delivery of insights.</p><p>🔹 Visualization: Build dashboards that simplify complex data using tools like Domo, Power BI, or Tableau.</p><p>🔹 Analytics Engine: MUST HAVE: Code solutions with SQL, Python, or similar tools to glean meaningful insights from structured data and create data pipelines.</p><p>✅ <strong>MUST HAVES:</strong></p><p>🎓 Education: Bachelor’s degree in Business Analytics, Computer Science, Data Science, or Statistics.</p><p><strong>🛠️ TECHNICAL SKILLS</strong></p><p>• 2+ years of hands-on professional experience in data engineering or analytics roles.</p><p>• BUILDING dashboards in BI tools like Tableau, Power BI, or DOMO, and creating custom integrations.</p><p>• Proficiency in SQL or Python for robust data analysis and structuring.</p><p>• Ability to develop and maintain visual dashboards with strong UX.</p><p>• BONUS: Experience with third-party platform integration like Salesforce.</p><p>This is a Direct Hire permanent position up to $90K plus bonus. For immediate and confidential consideration, please contact me directly: Carrie Danger, SVP, Permanent Placement Team, Iowa Region, at Office: 515-259-6087 or Cell: 515-991-0863. Email your resume confidentially to Carrie Danger; my email address and contact information are on my LinkedIn profile. OR you can ONE CLICK APPLY on the Robert Half website and specifically apply to this posting.</p>
We are looking for an experienced Data Engineer to join our team on a contract basis in Cleveland, Ohio. This role involves creating scalable data solutions, optimizing database environments, and supporting business intelligence reporting for manufacturing metrics. The ideal candidate will have expertise in modern data engineering practices and a strong ability to collaborate with stakeholders.<br><br>Responsibilities:<br>• Redesign and optimize existing data models to improve efficiency and scalability.<br>• Structure and organize incoming data to ensure seamless integration into reporting systems.<br>• Build advanced time intelligence features within Power BI to enhance reporting capabilities.<br>• Craft operational reports that provide actionable insights on manufacturing metrics.<br>• Develop and implement reporting solutions that deliver measurable business value.<br>• Utilize modern data transformation tools, such as dbt, to streamline workflows.<br>• Support analytical reporting and contract review processes by ensuring accurate data representation.<br>• Assist in establishing a robust database environment that integrates well with Power BI.<br>• Collaborate with stakeholders to understand data requirements and translate them into actionable solutions.<br>• Explore and implement forward-thinking data engineering practices to enhance system performance.