<p>We are seeking a Senior Software Engineer (Java, Spring Boot, AWS) to support the design, development, and maintenance of law enforcement applications and web services. This role involves new feature development, modernization initiatives, and migration of .NET monolithic systems into microservices. The platform also includes React applications, mobile apps, and Electron-based desktop applications—requiring the ability to diagnose and troubleshoot across the full technology stack.</p><p>You will collaborate closely with Developers, Cloud Engineers, Cybersecurity professionals, and Quality Engineers while contributing to a highly secure, resilient, and modern cloud-native environment.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><p>· Design, develop, and maintain scalable, secure software using Java, Spring, Spring Boot, and Hibernate.</p><p>· Support modernization projects including migration of .NET monoliths to microservices architecture.</p><p>· Contribute across the full stack, including .NET services, React applications, mobile apps, and Electron desktop applications, assisting with debugging and issue resolution.</p><p>· Collaborate with Cloud, Security, and QA teams to deliver high‑quality solutions and improve platform resilience.</p><p>· Maintain clear documentation for code, processes, and application architecture.</p><p>· Ensure adherence to coding standards, security practices, and organizational policies.</p><p>· Provide guidance and mentorship to junior engineers and contribute to team knowledge sharing.</p><p>· Engage with stakeholders to clarify requirements and translate them into effective technical solutions.</p>
<p>Schedule: Monday–Friday, 8:00 AM–5:00 PM</p><p>Position Overview</p><p>We’re seeking a hands‑on Manufacturing Test Engineer to design, build, validate, and maintain Automated Test Equipment (ATE) and production test tools for medical devices and sub‑assemblies. You’ll partner closely with R&D and Manufacturing to translate requirements into robust hardware/software test solutions, drive continuous improvement on the production floor, and ensure comprehensive documentation, verification, and validation of test systems.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design & Development of ATE: Define ATE architecture (hardware & software), design fixtures/tooling, and develop production test solutions for medical devices and assemblies.</li><li>Software Development: Program test applications and PLCs using C#, LabVIEW, TestStand, and related languages; apply sound software architecture and OO design patterns.</li><li>Project Planning: Build and communicate project plans, schedules, milestones, and resource needs; report status and risks.</li><li>Cross‑Functional Collaboration:<ul><li>Work with R&D to convert product requirements into test requirements.</li><li>Partner with Manufacturing/Operations during deployment to validate robustness and throughput; lead iterative improvements.</li></ul></li><li>Validation & Quality: Create requirements and design documentation; perform code reviews, unit tests, and software/system validation; sustain clear DHF/technical records.</li><li>Troubleshooting: Diagnose issues (hardware, software, instrumentation, PLC, network, data) and drive root‑cause corrective actions.</li><li>Sustaining & CI: Monitor field/line feedback, enhance reliability of test equipment, and standardize best practices across lines.</li><li>Support deployment and configuration of test equipment in manufacturing environments.</li></ul>
<p>The AI/ML Solutions Architect will lead the design, development, and deployment of advanced AI/ML solutions. This role combines deep technical expertise with strategic thinking to ensure AI/ML initiatives are successfully integrated into business operations. You will work closely with data scientists, engineers, and stakeholders to create architectures that maximize performance, scalability, and reliability.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design end-to-end AI/ML architectures, including data pipelines, model training, deployment, and monitoring.</li><li>Collaborate with stakeholders to define AI/ML solution requirements aligned with business objectives.</li><li>Provide technical leadership and guidance to teams implementing AI/ML models and systems.</li><li>Develop scalable and secure solutions using cloud platforms (AWS, Azure, GCP) and MLOps best practices.</li><li>Ensure seamless integration of AI/ML models into existing IT systems and workflows.</li><li>Conduct feasibility studies, prototyping, and performance evaluations for new technologies and frameworks.</li><li>Stay updated on advancements in AI/ML and recommend innovative solutions to meet emerging needs.</li><li>Document technical designs, workflows, and implementation plans to ensure clarity and reproducibility.</li></ul><p><br></p>
Our client is an early-stage, high-growth startup building products that are actively used and loved by real users. They are looking for a Full Stack Engineer (3–6 years of experience) who is excited about building impactful products in a fast-paced startup environment and who has interest in or exposure to AI. This is a fully onsite role in San Francisco (candidates must already live in the San Francisco Bay Area to be considered). <br> About the Role <br> As a Full Stack Engineer, you’ll play a key role in designing, developing, and maintaining modern web applications. You’ll work across the stack to build clean, scalable features and collaborate closely with a small, highly motivated team. This is an opportunity for someone who genuinely enjoys building things, especially products that people use every day. <br> What You’ll Do <br> • Design, develop, and maintain full stack applications <br> • Build user-facing features using React and Next.js <br> • Develop and integrate backend services using Python (Flask) <br> • Write clean, efficient, and maintainable TypeScript code <br> • Debug, test, and optimize application performance <br> • Collaborate closely with cross-functional teammates in a fast-moving startup environment <br> • Contribute to AI-powered features and generative AI initiatives
<p>We are looking for a skilled Validation Engineer to join our growing team. In this role, you will ensure our software systems meet quality standards and comply with regulatory requirements. You will also help build and define the validation function from the ground up. As an early team member, you’ll shape core processes, own key deliverables, and <strong>interface with many different teams across the organization</strong> to embed quality throughout the SDLC.</p><p><br></p><p>Responsibilities:</p><p>• Lead software validation activities aligned with internal procedures and regulatory expectations (GxP, FDA, ISO).</p><p>• Create and maintain validation documentation, including <strong>IQ/OQ scripts</strong>, validation plans, risk assessments, protocols, reports, and traceability matrices.</p><p>• Review requirements and design specs for testability and proper controls.</p><p>• Execute or review functional, integration, regression, UAT, and <strong>IQ/OQ testing</strong>.</p><p>• Support change control and update validation documentation as systems evolve.</p><p>• <strong>Collaborate closely with cross‑functional partners</strong> to identify risks early and build quality into development.</p><p>• Prepare validation materials for audits and inspections.</p><p>• Continuously refine validation processes, templates, and tools for scale and efficiency.</p>
<p>We are looking for a skilled Validation Engineer to join our team in San Francisco, California. In this role, you will ensure that software systems meet established quality standards and adhere to regulatory requirements. You will work closely with cross-functional teams to design, execute, and maintain validation activities throughout the software development lifecycle. This position requires someone with keen attention to detail who is committed to improving processes and ensuring compliance.</p><p><br></p><p>Responsibilities:</p><p>• Plan and execute software validation activities in alignment with company procedures and regulatory standards.</p><p>• Develop and maintain validation documentation, including plans, test protocols, reports, and risk assessments.</p><p>• Analyze system requirements, design specifications, and user stories to confirm testability and compliance.</p><p>• Conduct and review functional, integration, regression, and user acceptance testing.</p><p>• Assess validation impacts during change control activities and update related documentation accordingly.</p><p>• Collaborate with engineering and product teams to identify risks and integrate quality measures during development.</p><p>• Prepare and provide documentation for audits and inspections, offering support as needed.</p><p>• Implement improvements to validation processes, templates, and tools for enhanced efficiency.</p><p>• Support customer or internal audits by contributing to validation-related responses and documentation.</p><p><br></p><p>An emerging technology leader in the regulated software space is hiring its <strong>first dedicated Quality Engineer</strong>.
This is a rare opportunity to build a Quality Management System (QMS) from the ground up, shape internal quality culture, and directly influence the company’s ability to win customers in regulated industries such as pharma, biotech, medical devices, and specialty chemicals.</p><p>If you thrive in early‑stage environments, enjoy creating structure where none exists, and want real ownership and impact, this role offers all of that—and a direct line to leadership.</p><p><em>U.S. work authorization required; sponsorship is not available.</em></p><p><strong>What You’ll Own:</strong></p><p>• Design, implement, and maintain a QMS aligned with ISO 9001, ISO 13485, 21 CFR Part 11, and GAMP 5.</p><p>• Develop SOPs, work instructions, quality plans, and validation protocols (IQ/OQ/PQ).</p><p>• Lead software validation efforts for customers operating in regulated environments.</p><p>• Partner with Sales and Customer Success on quality questionnaires, RFPs, and customer audit needs.</p><p>• Support customer audits; serve as the internal SME on compliance topics.</p><p>• Own supplier quality, including vendor qualification and assessments.</p><p>• Drive continuous improvement across processes, documentation, and tools.</p><p>• Build quality training programs and foster a quality‑first mindset across teams.</p><p><br></p>
We are looking for a skilled Data Engineer to join our logistics team in Lithonia, Georgia. In this role, you will design, construct, and maintain data pipelines and infrastructure to support analytics and operational systems. You will play a key role in enabling data visualization tools, optimizing data processes, and ensuring the accuracy and availability of critical information.<br><br>Responsibilities:<br>• Design and implement data pipelines to efficiently extract, transform, and load data from multiple sources.<br>• Develop and maintain data models and storage solutions to support analytics and reporting needs.<br>• Collaborate with stakeholders to troubleshoot data inconsistencies and resolve technical issues.<br>• Utilize Tableau or Power BI to create meaningful data visualizations that drive business insights.<br>• Write and optimize database procedures, triggers, and other SQL-based functionalities.<br>• Manage and monitor databases to ensure their performance and reliability.<br>• Provide technical guidance to analysts on best practices in data governance and performance optimization.<br>• Participate in cross-functional projects to enhance data accessibility and quality across departments.<br>• Explore and integrate Python-based solutions to enhance data engineering processes.<br>• Assist in training and development related to data availability and analytics tools.
We are looking for a skilled Data Engineer to support our organization's data initiatives in Savannah, Georgia. This Contract to permanent role focuses on managing, optimizing, and securing data systems to drive strategic decision-making and improve overall performance. The ideal candidate will work closely with technology teams, analytics departments, and business stakeholders to ensure seamless data integration, accuracy, and scalability.<br><br>Responsibilities:<br>• Design and implement robust data lake and warehouse architectures to support organizational needs.<br>• Develop efficient ETL pipelines to process and integrate data from multiple sources.<br>• Collaborate with analytics teams to create and refine data models for reporting and visualization.<br>• Monitor and maintain data systems to ensure quality, security, and availability.<br>• Troubleshoot data-related issues and perform in-depth analyses to identify solutions.<br>• Define and manage organizational data assets, including SaaS tools and platforms.<br>• Partner with IT and security teams to meet compliance and governance standards.<br>• Document workflows, pipelines, and architecture for knowledge sharing and long-term use.<br>• Translate business requirements into technical solutions that meet reporting and analytics needs.<br>• Provide guidance and mentorship to team members on data usage and best practices.
We are looking for an experienced Data Engineer to join our team in Cincinnati, Ohio. This long-term contract position offers the opportunity to work on cutting-edge data engineering projects while collaborating with multidisciplinary teams to deliver high-quality solutions. The ideal candidate will have a strong background in Databricks and big data technologies, along with a passion for optimizing data processes and systems.<br><br>Responsibilities:<br>• Design, build, and enhance data pipelines using Databricks Runtime, Delta Lake, Autoloader, and Structured Streaming.<br>• Implement secure and governed data access protocols utilizing Unity Catalog, workspace controls, and audit configurations.<br>• Manage and integrate structured and unstructured data from diverse sources, including APIs and cloud storage.<br>• Develop and maintain notebook-based workflows and manage jobs using Databricks Workflows and Jobs.<br>• Apply best practices for performance tuning, scalability, and cost optimization in Databricks environments.<br>• Collaborate with data scientists, analysts, and business stakeholders to deliver clean and reliable datasets.<br>• Support continuous integration and deployment processes for Databricks jobs and system configurations.<br>• Ensure high standards of data quality and security across all engineering tasks.<br>• Troubleshoot and resolve issues to maintain operational efficiency in data pipelines.
We are looking for a Senior Data Engineer to develop and optimize enterprise data systems that support analytics and digital solutions. In this role, you will design and implement robust data architectures, ensuring seamless data integration and transformation processes across the organization. Your expertise will drive the creation of reliable pipelines and scalable infrastructure, enabling advanced analytics and machine learning capabilities.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines using Databricks, Spark, and Delta Lake to support enterprise-level analytics.<br>• Develop and maintain efficient data models tailored for AI, analytics, and operational systems.<br>• Lead Master Data Management initiatives to establish unified and accurate data records across platforms.<br>• Create batch and near-real-time data processing workflows for structured and semi-structured datasets.<br>• Collaborate with AI and software development teams to ensure delivery of high-quality datasets for machine learning.<br>• Define and enforce data architecture standards, ensuring scalability, reliability, and governance.<br>• Troubleshoot and optimize data systems to maintain performance and reliability in complex environments.<br>• Partner with cloud and IT teams to integrate modern data platforms and ensure seamless functionality.
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
<p>Robert Half has a brand new opening for a Data Engineer with a reputable client here in Tampa.</p><p>Full-time position, HYBRID schedule out of their Tampa office.</p><p>Compensation ranges from $100K to $115K, depending on experience.</p><p>*Medical benefits are also 100% covered after the onboarding period*</p><p><br></p><p>This Data Engineer (BI/ETL) role focuses on building and optimizing ETL/ELT pipelines, migrating and cleaning data between internal, vendor, and legacy systems, and improving data quality. SQL is absolutely required, and this role leans heavily into backend data movement, not dashboarding.</p><p><br></p><p><strong>Top Skills Looking For:</strong></p><ul><li>Strong <strong>SQL</strong> (non-negotiable)</li><li>Experience designing and maintaining <strong>ETL / ELT pipelines</strong> using frameworks such as <strong>Apache Airflow, dbt (Data Build Tool), or equivalent orchestration systems</strong>, with the ability to schedule, monitor, and recover complex multi-stage jobs.</li><li><strong>Experience moving data across multiple systems</strong></li></ul><p>Description:</p><p>Build and maintain business intelligence solutions covering law enforcement, detention, human resources, finance, and the integration of data from agency criminal justice partners.</p><p>• Design and develop BI solutions.</p><p>• Gather user requirements, develop technical and functional requirements, produce reporting solutions, and document the design and development process, metadata, and business rules.</p><p>• Model, implement, and maintain databases and data marts to support BI reporting.</p><p>• Develop extract, transform, load (ETL) processes to support the loading of data into data marts.</p><p>• Monitor the data quality of existing databases and data marts and recommend governance and controls around self-service BI/Analytics, considering the evolution of BI industry best practices.</p><p>• Perform other related duties as required.</p>
<p>We are looking for a talented Data Engineer to join our team in Fort Lauderdale, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
We are looking for a skilled Data Engineer to join our team in Tampa, Florida. This is a Contract to permanent position, offering an excellent opportunity to contribute to innovative business intelligence solutions while advancing your career. The ideal candidate will have a strong background in data engineering, database design, and analytics, with the ability to solve complex problems and deliver high-quality results.<br><br>Responsibilities:<br>• Design and implement robust business intelligence solutions tailored to meet organizational needs.<br>• Collaborate with stakeholders to gather user requirements and translate them into technical and functional specifications.<br>• Create and maintain databases and data marts that support analytics and reporting activities.<br>• Develop and optimize ETL processes to efficiently load data into data marts.<br>• Monitor and ensure the accuracy, consistency, and quality of data within databases and reporting systems.<br>• Recommend and implement governance practices to improve self-service BI and analytics capabilities.<br>• Develop automated data validation checks to maintain data integrity and accuracy.<br>• Utilize dimensional modeling and star/snowflake schemas to design effective data warehouses.<br>• Troubleshoot and debug issues across application and database layers to ensure smooth operations.<br>• Perform exploratory data analysis to identify trends, anomalies, and areas for improvement.
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania, on a contract to permanent basis. This role offers an exciting opportunity to design, implement, and optimize data pipelines while integrating applications with various digital marketplaces. The ideal candidate will bring strong technical expertise and a collaborative mindset to support business insights and analytics effectively.<br><br>Responsibilities:<br>• Develop and maintain data pipelines and ensure seamless application connectivity with digital marketplaces such as TikTok Shop, Shopify, and Amazon.<br>• Collaborate closely with business teams to understand requirements and provide actionable analytics.<br>• Lead the creation of scalable and efficient data solutions tailored to business needs.<br>• Apply expertise in Python, Snowflake, and other relevant technologies to deliver high-quality results.<br>• Facilitate and support integrations with e-commerce platforms, leveraging previous experience where applicable.<br>• Build robust APIs and ensure their effective implementation.<br>• Utilize Microsoft SQL for database management and optimization.<br>• Provide technical guidance and mentorship to ensure project success.<br>• Troubleshoot and resolve issues related to data workflows and integrations.<br>• Continuously evaluate and improve processes to enhance efficiency and performance.
We are looking for an experienced Data Engineer to join our team in New York, New York. In this role, you will design, build, and maintain data infrastructure to support business intelligence and analytics needs. The ideal candidate will have a strong technical background, a passion for working with complex datasets, and expertise in cloud-based data platforms.<br><br>Responsibilities:<br>• Develop, implement, and optimize ETL pipelines to ensure efficient data processing and integration.<br>• Design and maintain scalable data solutions, including data warehouses and data lakes.<br>• Collaborate with cross-functional teams to identify data requirements and deliver actionable insights.<br>• Utilize Snowflake, AWS, and other cloud-based platforms to manage data infrastructure and ensure performance optimization.<br>• Leverage Python and SQL to build robust data workflows and automate processes.<br>• Employ orchestration tools like Airflow and dbt to streamline data operations.<br>• Support data analytics and visualization efforts by enabling the creation of impactful dashboards using tools such as Tableau.<br>• Work with marketing and product data sources, including platforms like Google Analytics, to extract and integrate valuable insights.<br>• Implement CI/CD pipelines and DevOps practices to enhance data engineering processes.<br>• Ensure data security and compliance across all systems and tools.
We are looking for a talented Data Engineer to join our team in Grand Rapids, Michigan. In this role, you will focus on designing, building, and optimizing robust data solutions using Snowflake and other cloud-based technologies. You will work closely with business intelligence and analytics teams to deliver scalable, high-performance data pipelines that support organizational goals.<br><br>Responsibilities:<br>• Design and implement scalable data models, schemas, and tables within Snowflake, including staging, integration, and presentation layers.<br>• Develop and optimize data pipelines using Snowflake tools such as Snowpipe, Streams, Tasks, and stored procedures.<br>• Ensure data security and access through role-based controls and best practices for data sharing.<br>• Build and maintain ETL pipelines leveraging tools like dbt, Matillion, Fivetran, Informatica, or Azure-native solutions.<br>• Integrate data from diverse sources such as APIs, IoT devices, and NoSQL databases to create unified datasets.<br>• Enhance performance by utilizing clustering, partitioning, caching, and efficient warehouse sizing strategies.<br>• Collaborate with cloud technologies such as AWS, Azure, or Google Cloud to support Snowflake infrastructure and operations.<br>• Implement automated workflows and CI/CD processes for seamless deployment of data solutions.<br>• Maintain high standards for data accuracy, completeness, and reliability while supporting governance and documentation.<br>• Work closely with analytics, reporting, and business teams to troubleshoot issues and deliver scalable solutions.
<p>IMMEDIATE HIRE NEEDED. Interviews to begin the first week of February.</p><p><br></p><p>We are looking for a skilled Snowflake Marketing Data Engineer to join our team in Tampa, Florida, on a hybrid in-office schedule (2 to 3 days remote per week). Hybrid is preferred, but remote candidates may be considered depending on the strength of the match.</p><p><br></p><p>In this role, you will be responsible for designing, implementing, and maintaining data solutions that support critical business operations. Your expertise will play a key part in driving data-driven decisions and optimizing performance across various platforms.</p><p><br></p><p>Responsibilities:</p><p>• Develop and maintain ETL processes to efficiently extract, transform, and load data from multiple sources.</p><p>• Analyze marketing data to uncover insights and support strategic decision-making.</p><p>• Create and manage dashboards and reports using Power BI to visualize data effectively.</p><p>• Integrate and leverage tools like Braze and Google Analytics to enhance data tracking and reporting capabilities.</p><p>• Collaborate with cross-functional teams to ensure the accuracy and reliability of data systems.</p><p>• Optimize database performance and troubleshoot any issues related to data pipelines.</p><p>• Document data workflows and provide training to stakeholders on best practices.</p><p>• Work with cloud-based platforms, such as Snowflake, to store and manage large datasets.</p><p>• Ensure data security and compliance with company policies and standards.</p>
<p>We are looking for a talented Data Engineer to join our team in Miami, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
We are looking for an experienced Data Engineer to join our dynamic team in Mayville, Wisconsin. In this role, you will play a key part in developing and enhancing reporting and analytics solutions within a modern data environment. The ideal candidate is passionate about transforming complex data into actionable insights, improving processes, and creating reliable reporting systems. This is a long-term contract position offering the opportunity to make a meaningful impact within a collaborative and forward-thinking team.<br><br>Responsibilities:<br>• Design, develop, and maintain scalable data pipelines to support reporting and analytics needs.<br>• Create and optimize Power BI dashboards and reports to deliver accessible and trustworthy insights.<br>• Automate workflows using Power Automate to improve operational efficiency.<br>• Develop scripts using languages such as PowerShell or Python to streamline data processing tasks.<br>• Integrate and manage data sources including Oracle, Snowflake (hosted within Azure), and other enterprise systems.<br>• Collaborate with stakeholders to gather requirements and deliver customized solutions.<br>• Support the transition to cloud-based data environments, including Azure Data Warehouse and Fabric.<br>• Troubleshoot and resolve data-related issues, ensuring data integrity and reliability.<br>• Document processes and workflows to ensure clarity and maintainability.<br>• Stay updated on industry trends to recommend and implement innovative data solutions.
<p>I’m building a world-class team to power our next generation of data products. We’re looking for a Senior Data Engineer who knows AWS inside and out—someone who can <strong>design secure, scalable data pipelines</strong>, <strong>own ETL/ELT workflows</strong>, <strong>engineer cloud data infrastructure</strong>, and <strong>deliver dimensional and semantic models</strong> that our analysts, data scientists, and applications can trust.</p><p>You’ll work closely with product, security, platform engineering, and analytics to move our architecture toward a <strong>real-time, governed, cost-aware</strong>, and <strong>highly automated</strong> data ecosystem.</p><p><strong>What You’ll Do</strong></p><ul><li><strong>Design & build end-to-end pipelines</strong> on AWS (batch and streaming) using services like <strong>Glue, EMR, Lambda, Step Functions, Kinesis, MSK</strong>, and <strong>Fargate</strong>.</li><li><strong>Develop robust ETL/ELT</strong> (PySpark, Spark SQL, SQL, Python) for structured, semi-structured, and unstructured data at scale.</li><li><strong>Own data storage & processing layers</strong>: <strong>S3 (Lake/Lakehouse), Redshift (or Snowflake on AWS), DynamoDB</strong>, and <strong>Athena</strong> with strong partitioning, compaction, and performance tuning.</li><li><strong>Implement data models</strong> (3NF, dimensional/star, Data Vault, Lakehouse medallion) for analytics and operational workloads.</li><li><strong>Engineer secure infrastructure-as-code</strong> with <strong>Terraform</strong> (or <strong>CDK</strong>) across multi-account setups; implement CI/CD via <strong>GitHub Actions</strong> or <strong>AWS CodeBuild/CodePipeline</strong>.</li><li><strong>Harden security & governance</strong>: use <strong>IAM</strong>, <strong>Lake Formation</strong>, <strong>KMS</strong>, <strong>Secrets Manager</strong>, <strong>VPC/PrivateLink</strong>, <strong>Glue Catalog</strong>, and fine-grained access controls. 
Partner with SecOps on compliance (e.g., <strong>SOC 2</strong>, <strong>FedRAMP</strong>, <strong>HIPAA</strong>, depending on the dataset).</li><li><strong>Observability & reliability</strong>: build monitoring with <strong>CloudWatch</strong>, <strong>OpenTelemetry</strong>, and data quality checks (e.g., <strong>Great Expectations</strong>, <strong>Deequ</strong>), and implement SLOs and alerts.</li><li><strong>Champion best practices</strong>: code reviews, testing (unit/integration), documentation, runbooks, and blameless postmortems.</li><li><strong>Mentor</strong> mid-level engineers and collaborate on architectural decisions, standards, and technical roadmaps.</li></ul><p><br></p>
<p>Robert Half is hiring! We are looking for an experienced Data Engineer to join our team in Greenville, South Carolina. This role offers an exciting opportunity to work with modern data technologies, ensuring the efficient operation and optimization of data pipelines and systems. The ideal candidate will bring a strong technical background, leadership skills, and a proactive approach to maintaining and improving data infrastructure.</p><p><br></p><p>Responsibilities:</p><p>• Oversee daily data loads and ensure the smooth operation of data pipelines and related systems.</p><p>• Troubleshoot and resolve issues such as pipeline failures, performance bottlenecks, schema mismatches, and cloud resource disruptions.</p><p>• Conduct root-cause analyses and implement permanent solutions to prevent recurring issues.</p><p>• Maintain and optimize existing data processes, refactoring or retiring outdated workflows as necessary.</p><p>• Design and build scalable data ingestion pipelines using technologies such as Azure Data Factory, Databricks, and Synapse Pipelines.</p><p>• Collaborate with teams to create and improve operational runbooks, monitoring dashboards, and incident response workflows.</p><p>• Develop reusable ingestion patterns for platforms like Guidewire DataHub, InfoCenter, and other business data sources.</p><p>• Lead the implementation of real-time and event-driven data engineering solutions to enable operational insights and automation.</p><p>• Partner with architects to modernize data workloads using advanced frameworks like Delta Lake and Medallion Architecture.</p><p>• Mentor entry-level engineers, enforce coding best practices, and review code to ensure quality and compliance.</p>
<p>The Senior Data Engineer plays a key role in architecting, developing, and operating reliable, production-ready data solutions that enable analytics, automation, and operational processes across our client’s organization.</p><p><br></p><p>Operating within a modern, cloud-based data ecosystem, this role is responsible for bringing together data from internal platforms and external partners, transforming it into trusted, high-quality assets, and delivering it consistently to downstream users and systems. The work spans the full data lifecycle—ingestion, orchestration, transformation, and delivery—and blends advanced SQL development with Python-based pipeline and workflow automation.</p><p><br></p><p>This role sits at the intersection of data and systems engineering and works closely with Business Intelligence, Business Technology, and operational teams to ensure data solutions are scalable, dependable, and aligned with real business outcomes.</p><p><br></p>
<p><strong>Job Title:</strong> Data Engineer</p><p><strong>Location:</strong> Washington, DC (Hybrid – Downtown DC Office)</p><p><strong>Company:</strong> Robert Half</p><p><strong>Employment Type:</strong> Contract-to-Hire</p><p><strong>Role Overview</strong></p><p>As a Data Engineer at Robert Half, you will be the backbone of our data-driven decision-making process. You aren't just "moving data"; you are architecting the flow of information that powers our localized market analytics and global recruitment engines. In the DC market, this often involves handling high-compliance data environments and integrating cutting-edge AI frameworks into traditional ETL workflows.</p><p><br></p>