<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
We are looking for a highly skilled Data Engineer to join our team on a contract basis in Atlanta, Georgia. This role focuses on optimizing data processes and infrastructure, ensuring efficient data management and performance. The ideal candidate will possess expertise in modern data engineering tools and technologies.<br><br>Responsibilities:<br>• Optimize data indexing and address fragmentation issues to enhance system performance.<br>• Develop and maintain data pipelines using ETL processes to ensure accurate data transformation and integration.<br>• Utilize Apache Spark for scalable data processing and analytics.<br>• Implement and manage big data solutions with Apache Hadoop.<br>• Design and deploy real-time data streaming frameworks using Apache Kafka.<br>• Collaborate with cross-functional teams to identify and resolve data-related challenges.<br>• Monitor and improve system performance by analyzing data usage and storage trends.<br>• Write efficient code in Python to support data engineering tasks.<br>• Document processes and maintain clear records of data workflows and optimizations.<br>• Ensure data security and compliance with organizational standards.
We are looking for a skilled Data Engineer to join our team in Ann Arbor, Michigan, and contribute to the development of a modern, scalable data platform. In this role, you will focus on building efficient data pipelines, ensuring data quality, and enabling seamless integration across systems to support business analytics and decision-making. This position offers an exciting opportunity to work with cutting-edge technologies and play a key role in the transformation of our data environment.<br><br>Responsibilities:<br>• Design and implement robust data pipelines on Azure using tools such as Databricks, Spark, Delta Lake, and Airflow.<br>• Develop workflows to ingest and integrate data from diverse sources into Azure Data Lake.<br>• Build and maintain data transformation layers following the medallion architecture principles.<br>• Apply data quality checks, validation processes, and deduplication techniques to ensure accuracy and reliability.<br>• Create reusable and parameterized notebooks to streamline batch and streaming data processes.<br>• Optimize merge and update logic in Delta Lake by leveraging efficient partitioning strategies.<br>• Collaborate with business and application teams to understand and fulfill data integration requirements.<br>• Enable downstream integrations with APIs, Power BI dashboards, and reporting systems.<br>• Establish monitoring, logging, and data lineage tracking using tools like Unity Catalog and Azure Monitor.<br>• Participate in code reviews, agile development practices, and team design discussions.
<p>We are seeking a skilled Data Engineer to join our team and help design, build, and maintain robust data pipelines and infrastructure. This role is critical for enabling data-driven decision-making across the organization. You will work closely with analysts, developers, and business stakeholders to ensure data is accurate, accessible, and secure.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, and optimize scalable data pipelines and ETL processes.</li><li>Integrate data from multiple sources into centralized data platforms.</li><li>Implement and maintain data models, schemas, and storage solutions.</li><li>Ensure data quality, integrity, and compliance with security standards.</li><li>Collaborate with cross-functional teams to support analytics and reporting needs.</li><li>Monitor and troubleshoot data workflows for performance and reliability.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Los Angeles, California. This role focuses on designing and implementing advanced data solutions to support innovative advertising technologies. The ideal candidate will have hands-on experience with large datasets, cloud platforms, and machine learning, and will play a critical role in shaping our data infrastructure.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines to ensure seamless data extraction, transformation, and loading processes.<br>• Design scalable architectures that support machine learning models and advanced analytics.<br>• Collaborate with cross-functional teams to deliver business intelligence tools, reporting solutions, and analytical dashboards.<br>• Implement real-time data streaming solutions using platforms like Apache Kafka and Apache Spark.<br>• Optimize database performance and ensure efficient data storage and retrieval.<br>• Build and manage resilient data science programs and personas to support AI initiatives.<br>• Lead and mentor a team of data scientists, machine learning engineers, and data architects.<br>• Design and implement strategies for maintaining large datasets, ensuring data integrity and accessibility.<br>• Create detailed technical documentation for workflows, processes, and system architecture.<br>• Stay up-to-date with emerging technologies to continuously improve data engineering practices.
<p>Robert Half is currently partnering with a well-established company in San Diego that is looking for a Senior Data Engineer experienced in BigQuery, DBT (Data Build Tool), and GCP. This is a full-time (permanent placement) position that is 100% onsite in San Diego. We are looking for a Senior Data Engineer who is passionate about optimizing systems with advanced techniques in partitioning, indexing, and Google Sequences for efficient data processing. Must have experience in DBT!</p><p>Responsibilities:</p><ul><li>Design and implement scalable, high-performance data solutions on GCP.</li><li>Develop data pipelines, data warehouses, and data lakes using GCP services (BigQuery, DBT, etc.).</li><li>Build and maintain ETL/ELT pipelines to ingest, transform, and load data from various sources.</li><li>Ensure data quality, integrity, and security throughout the data lifecycle.</li><li>Design, develop, and implement a new version of a big data tool tailored to client requirements.</li><li>Leverage advanced expertise in DBT (Data Build Tool) and Google BigQuery to model and transform data pipelines.</li><li>Optimize systems with advanced techniques in partitioning, indexing, and Google Sequences for efficient data processing.</li><li>Collaborate cross-functionally with product and technical teams to align project deliverables with client goals.</li><li>Monitor, debug, and refine the performance of the big data tool throughout the development lifecycle.</li></ul><p><strong>Minimum Qualifications:</strong></p><ul><li>5+ years of experience in a data engineering role in GCP.</li><li>Proven experience in designing, building, and deploying data solutions on GCP.</li><li>Strong expertise in SQL, data warehouse design, and data pipeline development.</li><li>Understanding of cloud architecture principles and best practices.</li><li>Proven experience with DBT, BigQuery, and other big data tools.</li><li>Advanced knowledge of partitioning, indexing, and Google Sequences strategies.</li><li>Strong problem-solving skills with the ability to manage and troubleshoot complex systems.</li><li>Excellent written and verbal communication skills, including the ability to explain technical concepts to non-technical stakeholders.</li><li>Experience with Looker or other data visualization tools.</li></ul>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this long-term contract role, you will design and implement data solutions, ensuring efficient data processing and management. The ideal candidate will have expertise in handling large-scale data systems and a passion for optimizing workflows.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using modern tools and frameworks.<br>• Implement data transformation processes to ensure efficient storage and retrieval.<br>• Collaborate with cross-functional teams to design and optimize data architecture.<br>• Utilize Apache Spark and Python to process and analyze large datasets.<br>• Manage and monitor data workflows, ensuring high performance and reliability.<br>• Integrate and maintain ETL processes to streamline data operations.<br>• Work with Apache Kafka and Hadoop to enhance system capabilities.<br>• Troubleshoot and resolve issues related to data systems and workflows.<br>• Ensure data security and compliance with industry standards.<br>• Document processes and provide technical support to stakeholders.
<p>We are on the lookout for a Data Engineer in Basking Ridge, New Jersey (1-2 days a week on-site*). In this role, you will develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also be expected to take a hands-on approach to application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS Technologies.</p><p>• Work extensively with Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize Cloud Technologies and Data Visualization tools to enhance business intelligence.</p><p>• Work with various types of data including Clinical Trials Data, Genomics and Biomarker Data, Real World Data, and Discovery Data.</p><p>• Maintain familiarity with key industry best practices in a regulated “GxP” environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
<p>We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. The ideal candidate will play a key role in designing, implementing, and maintaining data applications while ensuring alignment with organizational data standards. This position requires expertise in handling large-scale data processing and a collaborative approach to problem-solving.</p><p><br></p><p>Responsibilities:</p><p>• Collaborate with teams to design and implement applications utilizing both established and emerging technology platforms.</p><p>• Ensure all applications adhere to organizational data management standards.</p><p>• Develop and optimize queries, stored procedures, and reports using SQL Server to address user requests.</p><p>• Work closely with team members to monitor application performance and ensure quality.</p><p>• Communicate effectively with users and management to resolve issues and provide updates.</p><p>• Create and maintain technical documentation and application procedures.</p><p>• Ensure compliance with change management and security protocols.</p>
<p>We are seeking a <strong>Data Engineer</strong> to join our growing data team in West LA. This role is perfect for someone early in their data engineering career who wants to work with modern data stacks, cloud technologies, and high-impact analytics projects in a collaborative, fast-paced environment.</p><p><br></p><p><strong>Compensation:</strong> $100–130K + 5% bonus (flexible for strong candidates)</p><p><br></p><p><strong>About the Role</strong></p><p>In this position, you’ll support the full data lifecycle—from ingesting and transforming raw data to building pipelines, reporting tools, and analytics infrastructure that empower teams across the business. You’ll work with Python, SQL, cloud platforms, ETL solutions, and visualization tools, contributing to the evolution of next-generation data systems supporting large-scale digital operations.</p><p><br></p><p><strong>What You'll Do</strong></p><ul><li>Build, maintain, and optimize ETL/ELT pipelines using tools such as Talend, SSIS, or Informatica</li><li>Work hands-on with cloud platforms (any cloud; GCP preferred) to support data workflows</li><li>Develop reports and dashboards using visualization tools (Looker, Tableau, Power BI, etc.)</li><li>Collaborate with product, analytics, and engineering teams to deliver reliable datasets and insights</li><li>Own data issues end-to-end — from collection and extraction to cleaning and validation</li><li>Support data architecture, pipeline resilience, and performance tuning</li><li>Assist in maintaining and scaling datasets, data models, and analytics environments</li><li>Contribute to real-time streaming initiatives (a plus)</li></ul><p><br></p>
<p><strong>About the Role: </strong>We’re seeking a <strong>Senior Data Engineer</strong> with deep expertise in <strong>Python</strong> to design and implement scalable data solutions for complex, high-volume environments. This role involves building robust data pipelines, optimizing workflows, and collaborating with analytics teams to deliver actionable insights.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Develop and maintain advanced <strong>Python-based ETL pipelines</strong> for large-scale data processing.</li><li>Integrate data from multiple sources into secure, high-performance data platforms.</li><li>Collaborate with data scientists and business analysts to enable predictive analytics and reporting.</li><li>Implement best practices for data governance, security, and compliance.</li><li>Optimize data workflows for speed, reliability, and scalability.</li></ul><p><br></p>
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
<p>We're seeking a Data Engineer to design, build, and optimize data pipelines supporting advanced analytics, AI, and machine learning initiatives for a client in Watertown, MA. This role combines technical expertise with collaboration across IT and business teams to deliver scalable, secure, and impactful data solutions. This role is onsite 4 days a week in Watertown. </p><p><strong>Responsibilities</strong></p><ul><li>Develop and manage data pipelines using Python, SQL Server, Snowflake, and Azure Data Factory.</li><li>Optimize T-SQL code and database structures for performance and reliability.</li><li>Integrate analytics and machine learning outputs into business workflows.</li><li>Ensure compliance with data governance and security standards.</li><li>Collaborate with stakeholders to refine requirements and deliver solutions.</li><li>Support BI tools such as Power BI, Tableau, and SQL Server BI stack.</li></ul><p><br></p>
We are looking for an experienced Data Engineer to join our team in Raleigh, North Carolina. In this position, you will play a pivotal role in designing, developing, and managing data solutions that support organizational decision-making. The ideal candidate will have a strong background in data engineering and a desire to grow into Data Architecture responsibilities.<br><br>Responsibilities:<br>• Design and implement data pipelines to extract sales data and integrate it into Snowflake using Snowpipe.<br>• Collaborate with executive leadership and vendors to ensure data solutions align with business objectives.<br>• Develop and maintain APIs to connect various systems and streamline data flow.<br>• Optimize data security and access controls within Snowflake, ensuring compliance with organizational standards.<br>• Participate in data modeling updates, focusing on improving principles, solutions, and methodologies.<br>• Work closely with a small team of data analysts, engineers, and managers to address diverse engineering challenges.<br>• Monitor and enhance data pipeline performance through testing, governance, and continuous delivery.<br>• Provide input on enterprise data architecture strategies, including logical, conceptual, and physical data models.
We are looking for an experienced Data Engineer to join our team on a contract basis in Broomfield, Colorado. This position is focused on leveraging advanced tools and methodologies to optimize data workflows and systems within an apparel manufacturing environment. The ideal candidate will bring deep expertise in Astronomer, Airflow, and related technologies, ensuring efficient data orchestration and transformation processes.<br><br>Responsibilities:<br>• Optimize and deploy Astronomer workflows, ensuring best practices for storage, synchronization, and execution.<br>• Transfer data from Databricks into Astronomer and upgrade to the latest version of Astronomer.<br>• Design and implement decoupled architectures using Airflow to streamline transformation logic.<br>• Customize orchestration settings in Astronomer to enhance flow efficiency and scalability.<br>• Address challenges associated with Cosmos by managing complex DAGs and automating tasks where possible.<br>• Configure advanced Airflow settings to support seamless project execution and upgrades.<br>• Collaborate with cross-functional teams to ensure data workflows align with organizational objectives.<br>• Utilize Astronomer as a primary tool for orchestrating, deploying, and executing data pipelines.<br>• Provide expertise in Airflow 2 and prepare for future upgrades to Airflow 3.<br>• Troubleshoot and resolve issues within data workflows and systems in a timely manner.
We are looking for a skilled Data Engineer to design, develop, and maintain data systems that support critical business operations and analytics. This role requires collaboration with various teams to ensure data integrity, reliability, and scalability. You will play a pivotal role in optimizing data workflows and implementing robust solutions to meet organizational needs.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines to integrate information from various sources.<br>• Create and manage data models, tables, and warehouses using platforms such as Redshift, BigQuery, or Snowflake.<br>• Implement automated workflows and data quality checks to guarantee accurate and consistent datasets.<br>• Collaborate with cross-functional teams to understand and address their data requirements.<br>• Enhance query efficiency by applying techniques such as indexing, partitioning, and dataset restructuring.<br>• Ensure adherence to best practices in data governance, security, and comprehensive documentation.<br>• Troubleshoot and resolve issues related to data pipelines and infrastructure.<br>• Incorporate tools like Airflow, dbt, or Kafka to streamline data processing and management.<br>• Support real-time data processing and streaming initiatives.<br>• Facilitate the integration of DevOps tools like Docker, Terraform, or CI/CD pipelines into data systems.
We are looking for a skilled Data Engineer to join our team in Seguin, Texas. In this role, you will focus on designing, implementing, and optimizing data solutions that drive business decisions and innovation. Your expertise will be crucial in developing scalable systems, ensuring data quality, and collaborating with cross-functional teams to create impactful strategies.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to integrate various sources into the organization’s data systems.<br>• Design and maintain data models and schemas across diverse environments, ensuring alignment with company standards.<br>• Simplify data architecture by creating reusable services and identifying cost-saving opportunities.<br>• Optimize data workflows for enhanced performance, reliability, and cost efficiency.<br>• Define and execute long-term technology strategies, incorporating innovations in analytics and data platforms.<br>• Ensure data quality by establishing and managing standards, guidelines, and processes.<br>• Perform database installation, configuration, and maintenance to support high-performance environments.<br>• Conduct proactive monitoring and optimization of database queries and indexes.<br>• Implement robust backup, recovery, and disaster recovery plans for database systems.<br>• Collaborate with stakeholders to translate business requirements into effective data models and solutions.
We are looking for a skilled Data Engineer to join our team in Johnson City, Texas. In this role, you will design and optimize data solutions to enable seamless data transfer and management in Snowflake. You will work collaboratively with cross-functional teams to enhance data accessibility and support data-driven decision-making across the organization.<br><br>Responsibilities:<br>• Design, develop, and implement ETL solutions to facilitate data transfer between diverse sources and Snowflake.<br>• Optimize the performance of Snowflake databases by constructing efficient data structures and utilizing indexes.<br>• Develop and maintain automated, scalable data pipelines within the Snowflake environment.<br>• Deploy and configure monitoring tools to ensure optimal performance of the Snowflake platform.<br>• Collaborate with product managers and agile teams to refine requirements and deliver solutions.<br>• Create integrations to accommodate growing data volume and complexity.<br>• Enhance data models to improve accessibility for business intelligence tools.<br>• Implement systems to ensure data quality and availability for stakeholders.<br>• Write unit and integration tests while documenting technical work.<br>• Automate testing and deployment processes in Snowflake within Azure.
<p>Robert Half is hiring a highly skilled and innovative Intelligent Automation Engineer to design, develop, and deploy advanced automation solutions using Microsoft Power Automate, Python, and AI technologies. This role is ideal for a hands-on technologist passionate about streamlining business processes, integrating systems, and applying cutting-edge AI to drive intelligent decision-making. This role is a hybrid position based in Philadelphia. For consideration, please apply directly. </p><p><br></p><p>Key Responsibilities</p><ul><li>Design and implement end-to-end automation workflows using Microsoft Power Automate (Cloud & Desktop).</li><li>Develop Python scripts and APIs to support automation, system integration, and data pipeline management.</li><li>Integrate Power Automate with Azure services (Logic Apps, Functions, AI Services, App Insights) and enterprise platforms such as SharePoint, Dynamics 365, and Microsoft Teams.</li><li>Apply Generative AI, LLMs, and Conversational AI to enhance automation with intelligent, context-aware interactions.</li><li>Leverage Agentic AI frameworks (LangChain, AutoGen, CrewAI, OpenAI Function Calling) to build dynamic, adaptive automation solutions.</li></ul>
We are looking for an experienced Data Engineer to take a leading role in optimizing and transforming our data architecture. This position will focus on enhancing performance, scalability, and analytical capabilities within our systems. The ideal candidate will have a strong technical background and the ability to mentor teams while delivering innovative solutions.<br><br>Responsibilities:<br>• Redesign the existing PostgreSQL database architecture to support modern analytical models and improve overall system performance.<br>• Optimize schemas by balancing normalization with denormalization techniques to achieve faster analytical reads.<br>• Implement advanced strategies such as indexing, partitioning, caching, and replication to ensure high-throughput and low-latency data delivery.<br>• Develop and maintain scalable data pipelines to guarantee accurate and timely data availability across distributed systems.<br>• Collaborate with engineering teams to provide guidance on data modeling, query optimization, and architectural best practices.<br>• Monitor and troubleshoot database performance issues, ensuring solutions are implemented effectively.<br>• Lead efforts to enhance the reliability and scalability of data infrastructure to support future growth.<br>• Serve as a technical mentor to team members, sharing expertise in database architecture and performance tuning.<br>• Partner with stakeholders to understand data requirements and deliver solutions that meet business needs.
<p><strong>About the Role: </strong>We’re seeking a <strong>Data Engineer</strong> with strong experience in <strong>Microsoft Fabric</strong> to design and optimize data pipelines that power analytics and business intelligence. This role is ideal for someone passionate about building scalable data solutions and leveraging modern cloud technologies to drive insights.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain data pipelines using <strong>Microsoft Fabric components</strong> (Lakehouse, Azure Data Factory, Data Warehouses, Notebooks, Dataflows).</li><li>Develop ETL processes for structured and unstructured data across multiple sources.</li><li>Collaborate with data scientists and analysts to deliver high-quality, reliable data solutions.</li><li>Implement best practices for data governance, security, and compliance.</li><li>Monitor and optimize data infrastructure for performance and scalability.</li></ul>
<p><strong>Position: Data Engineer</strong></p><p><strong>Location: Des Moines, IA - HYBRID</strong></p><p><strong>Salary: up to $130K permanent position plus exceptional benefits</strong></p><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***</strong></p><p> </p><p>Our client is one of the best employers in town. Come join this successful organization with smart, talented, results-oriented team members. You will find that passion in your career again, working together with some of the best in the business. </p><p> </p><p>Are you an experienced Senior Data Engineer seeking a new adventure enhancing data reliability and quality for an industry leader? Look no further! Our client has a robust data and reporting team and needs you to bolster their data warehouse and data solutions and facilitate data extraction, transformation, and reporting.</p><p> </p><p>Key Responsibilities:</p><ul><li>Create and maintain data architecture and data models for efficient information storage and retrieval.</li><li>Ensure rigorous data collection from various sources and storage in a centralized location, such as a data warehouse.</li><li>Design and implement data pipelines for ETL using tools like SSIS and Azure Data Factory.</li><li>Monitor data performance and troubleshoot any issues in the data pipeline.</li><li>Collaborate with development teams to track work progress and ensure timely completion of tasks.</li><li>Implement data validation and cleansing processes to ensure data quality and accuracy.</li><li>Optimize performance to ensure efficient execution of data queries and reports.</li><li>Uphold data security by storing data securely and restricting access to sensitive data to authorized users only.</li></ul><p>Qualifications:</p><ul><li>A 4-year degree related to computer science or equivalent work experience.</li><li>At least 5 years of professional experience.</li><li>Strong SQL Server and relational database experience.</li><li>Proficiency in SSIS and SSRS.</li><li>.NET experience is a plus.</li></ul><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. *** </strong></p><p> </p>
We are looking for a skilled Data Engineer to join our team in San Antonio, Texas. This role offers an opportunity to design, develop, and optimize data solutions that support business operations and strategic decision-making. The ideal candidate will possess a strong technical background, excellent problem-solving skills, and the ability to collaborate effectively across departments.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines using Azure Synapse Analytics, Microsoft Fabric, and Azure Data Factory.<br>• Implement advanced data modeling techniques and design scalable BI solutions that align with business objectives.<br>• Create and maintain dashboards and reports using Power BI, ensuring data accuracy and usability.<br>• Integrate data from various sources, including APIs and Dataverse, into Azure Data Lake Storage Gen2.<br>• Utilize tools like Delta Lake and Parquet to manage and structure data within a lakehouse architecture.<br>• Define and implement BI governance frameworks to ensure consistent data standards and practices.<br>• Collaborate with cross-functional teams such as Operations, Sales, Engineering, and Accounting to gather requirements and deliver actionable insights.<br>• Troubleshoot, document, and resolve data issues independently while driving continuous improvement initiatives.<br>• Lead or contribute to Agile/Scrum-based projects to deliver high-quality data solutions within deadlines.<br>• Stay updated on emerging technologies and trends to enhance data engineering practices.
<p>We are looking for a skilled <strong>Data Engineer</strong> to design and build robust data solutions that align with business objectives. In this role, you will collaborate with cross-functional teams to develop and maintain scalable data architectures, pipelines, and models. Your expertise will ensure the quality, security, and compliance of data systems while contributing to the organization’s data-driven decision-making processes. Call 319-362-8606, or email your resume directly to Shania Lewis - Technology Recruiting Manager at Robert Half (email information is on LinkedIn). Let's talk!!</p><p><br></p><p><strong>Responsibilities:</strong></p><ul><li>Design and implement scalable data architectures, pipelines, and models.</li><li>Translate business requirements into practical data solutions.</li><li>Ensure data quality, security, and regulatory compliance.</li><li>Maintain and improve existing data infrastructure.</li><li>Optimize system performance for efficiency and reliability.</li><li>Research and recommend emerging data technologies.</li><li>Mentor team members and foster collaboration.</li><li>Enable effective analytics through robust data solutions.</li></ul>