<p>We are seeking an experienced SAP Techno-Functional Lead with a strong IT engineering and solution architecture background to design, develop, and implement SAP solutions supporting end-to-end manufacturing, production-to-ship, and order fulfillment processes. This role requires deep expertise across SAP PP/DS, WMS, ABAP, and Fiori, with hands-on experience integrating SAP S/4HANA with MES, PLM, and B2B platforms. Familiarity with CAP and RAP is a plus.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, and implement SAP solutions across production planning and scheduling (PP/DS), warehouse operations (WMS), and order-to-ship processes</li><li>Lead solution architecture efforts, defining technical architectures, integration patterns, and cross-platform designs</li><li>Analyze business processes to identify gaps and translate requirements into functional and technical solutions</li><li>Deliver custom solutions using ABAP and Fiori to meet business and operational needs</li><li>Partner with manufacturing, supply chain, warehouse, engineering, and operations teams to ensure alignment with business objectives</li><li>Lead cross-functional and cross-platform integrations between SAP S/4HANA, MES, PLM, WMS, and B2B systems</li><li>Oversee configuration, development, and programming to ensure scalable, efficient, and maintainable solutions</li><li>Support analytics and reporting using SAP Analytics Cloud (SAC) to provide actionable operational insights</li><li>Act as a bridge between technical teams and business stakeholders to ensure clear communication and successful delivery</li></ul><p>Qualifications</p><ul><li>Bachelor’s degree in Information Technology, Computer Engineering, or a related field (advanced degree preferred)</li><li>8+ years of SAP experience, including 4+ years in techno-functional leadership or solution architecture roles</li><li>Strong expertise in SAP PP/DS, WMS, production planning, order-to-ship, and manufacturing 
operations</li><li>Hands-on experience with ABAP and Fiori development</li><li>Experience with SAP S/4HANA, SAP Analytics Cloud (SAC), and integration technologies (MES, PLM, B2B)</li><li>Knowledge of CAP, RAP, or other modern SAP development frameworks is a plus</li><li>Proven ability to design and deliver end-to-end manufacturing and warehouse solutions</li><li>Excellent communication, stakeholder management, and cross-functional collaboration skills</li><li>Manufacturing, warehouse, or supply chain domain experience strongly preferred</li></ul><p><br></p>
<p>Opening for a Facilities Plant Manager/Engineer</p><p><br></p><p>Location: Miami/Ft Lauderdale area around Medley/Miramar 33182</p><p>Schedule: In-office; standard business hours</p><p> </p><p>Salary: $160,000-$180,000 (may have some stretch)</p><p>Bonus: Discretionary/TBD</p><p>Benefits: Full package with medical, vacation, holidays, retirement/401k, bonus, and more.</p><p> </p><p>Requirements:</p><ul><li>Experience in setting up and managing an industrial operation: facilities, warehouse, plant</li><li>Knowledge of safety, environmental, and business regulations</li><li>Experience in mining, mineral processing, or heavy industrial environments</li><li>Bachelor’s degree in Industrial/Mechanical Engineering, Supply Chain, or Business Administration</li><li>Proficiency in English; Portuguese or Spanish is a plus</li><li>Travel: 2-3 weeks for initial training at Brazil HQ</li></ul><p> </p><p>Company Overview: A reputable organization at the forefront of new technology advances in mineral processes and materials science for the mining sector. This is an exciting time to join a stable, growing brand through expansion.</p><p> </p><p>Position Overview: The Facilities Operations Manager is tasked with opening a new warehouse-plant in Miami from the ground up. This entails operations, administration, vendor coordination, business compliance, licenses, and regulations for a team of professionals composed of engineers and laboratory personnel growing to 20 employees. This person is the go-to for Miami, serves as liaison with the international HQ in Brazil, and will wear multiple hats. The ideal candidate can also cross over to assist with plant manager oversight duties in the start-up stages.</p><p> </p><p>The ideal candidate has experience working in an industrial facility; knowledge of plants, technology and equipment, laboratories, or R&D within mining technology and process development is highly desirable.
Proficiency in English plus Portuguese or Spanish is highly preferred.</p><p> </p><p>Job Duties: Oversee the setup of the facility and the day-to-day operation of the plant, including administration, functionality, regulatory compliance, equipment, and operational readiness, in line with corporate standards.</p><ul><li>Oversees facility operations and plant performance, adhering to safety, OSHA, and other regulations.</li><li>Liaison to headquarters engineering, procurement, and technical teams during plant assembly and ramp-up</li><li>Acts as the primary contact for project management tracking, municipalities, and permitting</li><li>Manages contractors, utilities, technical support services, service providers, and third-party vendors</li><li>Maintains operational procedures, maintenance routines, and performance monitoring systems</li><li>Reports on budgeting, cost control, and operational planning</li><li>Fosters a culture of accountability, safety, and continuous improvement</li></ul><p> </p>
<p><strong>Primary scope:</strong> Overseeing and expanding a massive Facebook ecosystem exceeding 100M followers across numerous pages and content categories.</p><p><br></p><p><strong>Salary:</strong> </p><p>$55,000–$70,000 - Junior to Mid</p><p>$80,000–$100,000 - Senior</p><p><br></p><p><strong>Duration:</strong> 3-6 month Contract to Hire</p><p> </p><p><strong>Role Overview</strong></p><p>Content performance is the foundation of Client’s AI influencer platform. Every visual, video, or media asset must capture attention, generate interaction, and support sustained audience growth. That outcome is driven by one critical factor: how the AI is prompted.</p><p>Prompt exploration today is informal and fragmented. This role is dedicated to changing that. We’re hiring prompt engineers whose sole focus is to rigorously test the platform, uncover high‑performing prompt approaches, identify impactful feature combinations, and convert those discoveries into reliable, repeatable frameworks.</p><p><br></p><p>This position is not about building application code. It’s about building systems that consistently produce scroll‑stopping content.</p><p><br></p><p>Successful candidates will directly influence how AI influencers across the platform create media. 
Their work will be operationalized into structured prompt templates and integrated tooling used by operators and end users at scale.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Work extensively within Client to systematically evaluate features, settings, and creative workflows to identify what generates high‑engagement, shareable AI content</li><li>Design, execute, and refine prompt strategies across multiple media formats, including portraits, lifestyle imagery, trend‑based formats, video, and newly launched content types</li><li>Identify high‑impact feature combinations by blending generation methods, editing tools, styling controls, and post‑processing techniques to achieve superior results</li><li>Assess all discoveries through a performance lens: attention capture, shareability, and audience growth potential</li><li>Build modular, reusable prompt templates with clear parameters that operators can deploy without advanced prompting expertise</li><li>Develop prompt structures suitable for product integration, helping inform future built‑in prompt builders and creation tools</li><li>Document findings in a structured way, categorized by use case, content format, complexity, and expected results, maintaining an evolving internal knowledge base</li><li>Monitor content trends across major social platforms (Instagram, TikTok, X, Facebook, YouTube) and translate emerging patterns into actionable prompt strategies</li><li>Run ongoing experimentation, including comparative testing of prompt variations and analysis of real‑world performance metrics</li></ul>
<p>We are seeking a highly experienced <strong>Microsoft Security Framework Engineer</strong> to lead the full activation and optimization of the Microsoft Security stack within our Microsoft 365 E5 environment. This is a critical contract role focused on designing, configuring, and deploying a comprehensive security framework to protect endpoints, identities, cloud applications, email, and sensitive Protected Health Information (PHI).</p><p>The successful candidate will serve as the subject matter expert responsible for implementing Microsoft Defender and Purview capabilities from the ground up, ensuring alignment with industry best practices and regulatory requirements.</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Lead the end-to-end design, configuration, and deployment of the full Microsoft Security stack, including:</li><li>Microsoft Defender for Endpoint</li><li>Microsoft Defender for Office 365 Plan 2</li><li>Microsoft Defender for Identity</li><li>Microsoft Defender for Cloud Apps</li><li>Entra ID Protection</li><li>Microsoft Purview (Data Loss Prevention (DLP) and Sensitivity Labels for PHI)</li><li>Architect and implement a cohesive Microsoft security framework that integrates all components for maximum protection and visibility</li><li>Configure advanced threat protection, automated investigation and response (AIR), attack surface reduction rules, and device control policies</li><li>Design and deploy Purview DLP policies and sensitivity labeling strategies tailored for PHI protection and regulatory compliance (HIPAA, etc.)</li><li>Implement Entra ID Protection policies, Conditional Access, and identity threat detection capabilities</li><li>Conduct security assessments, gap analysis, and provide recommendations to strengthen the overall security posture</li><li>Collaborate with internal IT, security, and compliance teams to ensure successful adoption and operational handover</li><li>Develop documentation, runbooks, and knowledge transfer 
materials for ongoing management and maintenance</li><li>Provide expert guidance on Microsoft 365 E5 security licensing, features, and roadmap</li></ul><p><br></p>
Are you ready to lead and optimize a robust global network environment? Join our team as a Senior Network Engineer and take ownership of both daily operations and future development of our Global Network, deploying high-level automation and secure solutions that support customer missions and internal requirements.<br><br>What You'll Do:<br>• Lead project-based initiatives for both customer and internal network services, ensuring structured operational handovers and on-time delivery.<br>• Serve as the senior technical expert across projects, advising teams on complex network challenges.<br>• Manage new customer onboarding and collaborate with business development on project launches.<br>• Execute continuous improvement programs and optimize the global network infrastructure.<br>• Provide third-line technical support, troubleshooting, and maintenance for internal teams and the Network Operations Center (NOC).<br>• Standardize network services and drive automation for enhanced performance, scalability, and reliability.<br>• Maintain comprehensive documentation for all systems and services.<br>• Mentor and train Tier 1 and Tier 2 support teams.<br>• Collaborate closely with security teams to implement advanced controls, segmentation, and monitoring.
Robert Half is seeking a Senior Software Engineer III to join our Platform Engineering team, supporting the infrastructure, platforms, and services that power our applications and ELT/ETL processes. This is a high-impact role within a small, collaborative team, backfilling a recent retirement with an opportunity for knowledge transfer and cross-training.<br><br>What You’ll Do:<br>• Design, build, and deploy scalable infrastructure and platform components supporting cloud-native applications.<br>• Lead development of CI/CD pipelines, infrastructure-as-code (IaC), and automation frameworks.<br>• Own platform systems end-to-end, ensuring reliability, scalability, and performance.<br>• Troubleshoot and resolve production issues, including outages, deployment failures, and infrastructure instability.<br>• Support the on-call rotation (every 3 weeks), handling P1 incidents and coordinating with external vendors (e.g., Microsoft).<br>• Collaborate across engineering, application, and security teams to improve platform capabilities.<br>• Mentor entry-level engineers, conduct code reviews, and drive engineering best practices.<br>• Contribute to system design, documentation, and continuous improvement initiatives.<br><br>Work Breakdown:<br>• 70% project-based work (building and deploying infrastructure)<br>• 30% operations/support (maintenance, troubleshooting, production support)
We are looking for a talented Data Engineer to join our team in Glendale, California. In this long-term contract role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and platforms that support critical business operations. Through collaboration with cross-functional teams, you will contribute to innovative data solutions that enhance decision-making processes and drive operational excellence.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support the Core Data platform.<br>• Create tools and services to enhance data discovery, governance, and privacy.<br>• Collaborate with product managers, architects, and software engineers to ensure the success of data platforms.<br>• Apply technologies such as Airflow, Spark, Databricks, Delta Lake, and Kubernetes to build advanced data solutions.<br>• Establish and document best practices for pipeline configurations, naming conventions, and operational standards.<br>• Monitor and ensure the accuracy, reliability, and efficiency of datasets to meet service level agreements (SLAs).<br>• Participate in agile and scrum ceremonies to improve collaboration and team processes.<br>• Foster relationships with stakeholders to understand their needs and prioritize platform enhancements.<br>• Maintain detailed documentation to support data governance and quality initiatives.
We are looking for a Data Engineer to strengthen our data and analytics capabilities in West Chester, Pennsylvania. This role will shape reliable data architecture, support enterprise reporting, and help turn complex information into practical business insight. The position is ideal for someone who enjoys building scalable data solutions, improving performance, and working across Microsoft-based data technologies.<br><br>Responsibilities:<br>• Design and support enterprise data solutions that enable dependable analytics, reporting, and operational decision-making.<br>• Build, optimize, and maintain database structures and data processing workflows using SQL Server, Azure SQL Database, and T-SQL.<br>• Develop and enhance SSIS packages and related data pipelines to ensure accurate, timely, and efficient movement of information across systems.<br>• Create scalable datasets and reporting foundations that support Power BI dashboards and broader business intelligence needs.<br>• Monitor data platform performance, troubleshoot issues, and implement improvements that increase stability, security, and efficiency.<br>• Partner with business and technical stakeholders to translate reporting and analytics goals into practical data engineering solutions.<br>• Lead efforts to move legacy SQL Server workloads into Azure-based services while maintaining data integrity and minimizing disruption.<br>• Establish standards and best practices for data quality, documentation, and ongoing platform maintenance.
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Tampa, Florida. This is a Contract to permanent position, offering an excellent opportunity to contribute to innovative business intelligence solutions while advancing your career. The ideal candidate will have a strong background in data engineering, database design, and analytics, with the ability to solve complex problems and deliver high-quality results.<br><br>Responsibilities:<br>• Design and implement robust business intelligence solutions tailored to meet organizational needs.<br>• Collaborate with stakeholders to gather user requirements and translate them into technical and functional specifications.<br>• Create and maintain databases and data marts that support analytics and reporting activities.<br>• Develop and optimize ETL processes to efficiently load data into data marts.<br>• Monitor and ensure the accuracy, consistency, and quality of data within databases and reporting systems.<br>• Recommend and implement governance practices to improve self-service BI and analytics capabilities.<br>• Develop automated data validation checks to maintain data integrity and accuracy.<br>• Utilize dimensional modeling and star/snowflake schemas to design effective data warehouses.<br>• Troubleshoot and debug issues across application and database layers to ensure smooth operations.<br>• Perform exploratory data analysis to identify trends, anomalies, and areas for improvement.
<p>We are looking for a Data Engineer to strengthen and expand an established Microsoft Fabric data environment. This Long-term Contract position is ideal for someone who can turn business data into reliable, well-structured assets that support reporting and decision-making. The role requires a hands-on engineer who can shape data architecture, build scalable pipelines, and communicate clearly with both technical teams and business stakeholders.</p><p><br></p><p>Responsibilities:</p><p>• Expand and improve an existing Microsoft Fabric platform to support dependable, scalable analytics solutions.</p><p>• Create and maintain a layered data architecture across Bronze, Silver, and Gold tiers, with emphasis on delivering trusted and business-ready curated datasets.</p><p>• Build ingestion and transformation processes for Salesforce data along with information from additional enterprise sources.</p><p>• Develop data models that improve accuracy, usability, and reporting value by evaluating structure, relationships, and downstream needs.</p><p>• Support the shift away from older warehouse and spreadsheet-driven reporting practices by introducing more modern data engineering approaches.</p><p>• Work autonomously to manage priorities while providing regular updates on progress, technical decisions, and potential risks.</p><p>• Collaborate with business partners to understand reporting goals and translate them into practical data solutions.</p><p>• Contribute to data processing and integration workflows using technologies such as Python, Spark, ETL frameworks, and related platform tools.</p>
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
We are looking for an experienced Data Engineer to join our team in Newtown Square, Pennsylvania. In this long-term contract position, you will play a pivotal role in designing and implementing robust data solutions to support organizational goals. This is an exciting opportunity to lead the development of modern data architectures and collaborate with diverse teams to drive impactful results.<br><br>Responsibilities:<br>• Lead the implementation of an enterprise Snowflake data lake, ensuring timely delivery and optimal performance.<br>• Oversee the integration of multiple data sources, including Oracle Financials, PostgreSQL, and Salesforce, into a unified data platform.<br>• Collaborate with finance teams to facilitate a transition to a 12-month accounting calendar and support accelerated financial close processes.<br>• Develop and maintain multi-source analytics dashboards to enhance operational insights and decision-making.<br>• Manage day-to-day operations of the Snowflake platform, focusing on performance tuning and cost optimization.<br>• Ensure data quality and reliability, providing business users with a trustworthy platform.<br>• Document architectural designs, data workflows, and operational procedures to support sustainable data management.<br>• Coordinate with external vendors to meet project deadlines and ensure successful implementations.
<p>We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows.</p><p>This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ELT/ETL data pipelines and workflows</li><li>Develop and optimize Snowflake-based data warehouse solutions</li><li>Build and maintain data models and transformation logic to support analytics and reporting</li><li>Write efficient and high-quality Python and SQL code to support data engineering processes</li><li>Develop reusable data engineering frameworks and backend data services</li><li>Implement and maintain CI/CD pipelines using GitHub and related tooling</li><li>Build automated testing frameworks to ensure data quality and reliability</li><li>Create reporting and visualization solutions using tools such as Power BI</li><li>Monitor production data systems and resolve performance or reliability issues</li><li>Support continuous improvement of data architecture, processes, and standards</li></ul>
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania, on a contract to permanent basis. This role offers an exciting opportunity to design, implement, and optimize data pipelines while integrating applications with various digital marketplaces. The ideal candidate will bring strong technical expertise and a collaborative mindset to support business insights and analytics effectively.<br><br>Responsibilities:<br>• Develop and maintain data pipelines and ensure seamless application connectivity with digital marketplaces such as TikTok Shop, Shopify, and Amazon.<br>• Collaborate closely with business teams to understand requirements and provide actionable analytics.<br>• Lead the creation of scalable and efficient data solutions tailored to business needs.<br>• Apply expertise in Python, Snowflake, and other relevant technologies to deliver high-quality results.<br>• Facilitate and support integrations with e-commerce platforms, leveraging previous experience where applicable.<br>• Build robust APIs and ensure their effective implementation.<br>• Utilize Microsoft SQL for database management and optimization.<br>• Provide technical guidance and mentorship to ensure project success.<br>• Troubleshoot and resolve issues related to data workflows and integrations.<br>• Continuously evaluate and improve processes to enhance efficiency and performance.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this Contract to permanent position, you will play a key role in designing, developing, and optimizing data solutions while collaborating with cross-functional teams to deliver impactful results. This role offers an excellent opportunity to contribute to innovative projects and mentor other developers.<br><br>Responsibilities:<br>• Design and implement scalable data solutions using tools such as Apache Spark, Hadoop, and Kafka.<br>• Build and maintain efficient ETL processes to ensure seamless data transformation and integration.<br>• Collaborate with product owners, business analysts, and stakeholders to gather requirements and translate them into technical solutions.<br>• Optimize and troubleshoot complex data workflows to enhance performance and reliability.<br>• Lead technical discussions and provide architectural guidance for best practices and development standards.<br>• Mentor entry-level developers and conduct code reviews to ensure high-quality deliverables.<br>• Integrate data solutions with existing systems and third-party tools using APIs and cloud platforms.<br>• Stay updated with the latest data engineering technologies and proactively recommend improvements.<br>• Work within Agile/Scrum teams to deliver solutions aligned with user stories and project goals.<br>• Ensure compliance with security and quality standards through thorough documentation and testing.
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>Experience as a Data Governance Analyst is required. They have a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate will be a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Purview and advance their data governance practices. Administration experience with Microsoft Purview or a similar tool such as Collibra, Informatica, or Databricks is required; experience with Microsoft Purview is preferred. This role will assist in connecting Microsoft Fabric to Purview. They have the Data Security layer of Purview implemented; this role will work with the Microsoft partner to implement the Data Governance layer (Unified Data Catalogue, Data Quality, Data Lineage, Data Health management). See attached overview. Excellent communication skills are essential: someone who will lead change, help advance their DG practice, and get buy-in from stakeholders.</p>
<p>Robert Half is seeking a Data Engineer to design, build, and maintain enterprise data infrastructure and analytics platforms. This role will serve as the technical owner of data architecture, ensuring data quality, governance, and accessibility across the organization.</p><p>This is a highly visible role supporting leadership and business teams by enabling reliable, data-driven decision-making through scalable data solutions and modern analytics tools.</p><p><br></p><p><strong>Job Responsibilities</strong></p><ul><li>Design and implement enterprise data architecture, including data models and integration patterns to establish a single source of truth </li><li>Build and manage analytics platforms to support reporting and business intelligence initiatives </li><li>Develop and maintain high-impact dashboards using Power BI or similar tools for leadership and operational teams </li><li>Design and build automated ETL/ELT pipelines across multiple systems and data sources </li><li>Define and enforce data governance standards, including metric definitions, data quality rules, and access controls </li><li>Monitor and optimize data pipeline performance, including troubleshooting failures and implementing automated error handling </li><li>Investigate and resolve data quality issues (e.g., duplicates, sync failures) and implement proactive monitoring solutions </li><li>Enable self-service analytics by creating user-friendly data models and supporting end users with training and documentation </li><li>Ensure compliance with data security and regulatory requirements, including proper data handling and access controls </li><li>Partner with IT leadership to recommend tools, technologies, and best practices to enhance data capabilities </li></ul>
<p>Robert Half Technology is seeking a <strong>mid-to-senior level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (3 days onsite, hybrid)</p><p><strong>Schedule:</strong> Monday-Friday (9AM-5PM PST)</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks</li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong></li><li>Architect and maintain <strong>data marts and data warehouse structures</strong></li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong></li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries)</li><li>Support migration and consolidation of data into Databricks</li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability</li><li>Define best practices for <strong>data storage, organization, and access</strong></li><li>Ensure alignment with existing compliance and data standards</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, develop, and maintain data pipelines and systems that support critical business operations within the manufacturing industry. Your expertise in data engineering technologies and frameworks will be key to ensuring efficient data processing and integration.<br><br>Responsibilities:<br>• Develop, optimize, and maintain scalable data pipelines to process large datasets efficiently.<br>• Implement ETL processes to extract, transform, and load data from various sources into centralized systems.<br>• Leverage Apache Spark, Hadoop, and Kafka to design solutions for real-time and batch data processing.<br>• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Document data workflows and processes to ensure clarity and maintainability.<br>• Conduct testing and validation of data systems to ensure accuracy and quality.<br>• Apply Python programming to automate data tasks and streamline workflows.<br>• Stay updated on industry trends and emerging technologies to propose innovative solutions.<br>• Ensure compliance with data security and privacy standards in all engineering efforts.
<p>Robert Half is seeking a Data Engineer to build, scale, and lead high-impact data solutions. This role combines hands-on data engineering with team leadership, mentoring, and oversight of end-to-end analytics pipelines that turn raw data into actionable business insights.</p><p>This role is business-facing, partnering with departments across the organization to deliver data solutions.</p><p>This role is onsite in Albuquerque, New Mexico.</p><p><br></p><p>What You’ll Do</p><p>Lead and mentor a team of data engineers and analysts; set standards, review work, and support professional growth</p><p>Design, build, and oversee scalable ETL pipelines using Python, SQL, SSIS, and Airflow</p><p>Develop dimensional data models using Kimball methodology</p><p>Create dashboards and reports using Power BI and SSRS</p><p>Partner with business and IT stakeholders on analytics, ad hoc reporting, and data initiatives</p><p>Ensure data quality, governance, and compliance with PCI, PII, and regulatory standards</p><p>Automate workflows and reporting using Python, PowerShell, and modern analytics tools</p><p>Other duties as needed</p><p><br></p>
<p><strong>Mid-Level Data Engineer (On-Site | Los Angeles, CA)</strong></p><p><em>Build systems that actually drive business decisions.</em></p><p><br></p><p>This is not a “maintain the pipeline and go home” kind of role.</p><p><br></p><p>We’re looking for a sharp, mid-level Data Engineer who wants to operate close to the business, own meaningful projects end-to-end, and build systems that directly impact how decisions get made across an entire organization. You’ll join a small, high-performing team where your work won’t get buried; it will be seen, used, and relied on daily.</p><p><br></p><p>If you’re someone who enjoys solving messy problems, building from scratch, and working in a fast-paced, high-expectation environment, this is the kind of role where you’ll grow quickly.</p><p><br></p><p>What You’ll Do</p><ul><li>Design and build automated data systems (e.g., billing workflows, internal tools)</li><li>Create and maintain BI dashboards and reports using Python, Excel, and visualization tools</li><li>Write and optimize SQL queries and ETL pipelines for clean, reliable data flow</li><li>Analyze large datasets to uncover actionable insights and trends</li><li>Partner with stakeholders across the business to translate needs into technical solutions</li><li>Help improve data accessibility and usability across departments</li><li>Ensure data integrity and accuracy through audits and troubleshooting</li><li>Contribute to a growing data function with high visibility and ownership</li></ul><p>Why This Role Stands Out</p><ul><li>High ownership: You’ll build systems from the ground up, not just maintain them</li><li>Small team, big impact: Work directly with senior leadership and decision-makers</li><li>Growth opportunity: The team is expanding and this role can evolve quickly</li><li>Flexibility within intensity: While this is a high-performance environment, there’s trust and flexibility when needed</li></ul>
<ul><li>Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).</li><li>Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).</li><li>Implement and manage data orchestration and dependency management using Dagster or similar tools.</li><li>Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.</li><li>Optimize data models and storage strategies for performance, scalability, and cost efficiency.</li><li>Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.</li><li>Support CI/CD pipelines and infrastructure-as-code practices for data platforms.</li><li>Enforce data security, governance, and compliance best practices within Azure.</li></ul>
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines using modern frameworks and tools.<br>• Implement ETL processes to ensure accurate and efficient data transformation.<br>• Optimize data storage and retrieval systems for performance and scalability.<br>• Collaborate with cross-functional teams to understand data requirements and deliver solutions.<br>• Utilize Apache Spark and Hadoop for large-scale data processing.<br>• Work with Databricks to streamline data workflows and enhance analytics.<br>• Apply machine learning techniques using tools like scikit-learn and Pandas.<br>• Integrate Kafka for real-time data streaming and processing.<br>• Analyze and troubleshoot data-related issues to ensure system reliability.<br>• Document processes and workflows to support future development and maintenance.
<p>We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio. In this role, you will design, implement, and optimize data solutions that support business intelligence and analytics needs. If you have a passion for working with cutting-edge technologies and thrive in a fast-paced environment, this opportunity is for you.</p><p><br></p><p>Responsibilities:</p><p>• Develop and refine data models to ensure optimal performance and scalability.</p><p>• Design and implement data warehouse solutions for managing structured and unstructured data.</p><p>• Create and maintain data integration processes to support analytics and data-driven applications.</p><p>• Establish robust data quality and validation protocols to guarantee accuracy and consistency.</p><p>• Collaborate with business intelligence teams and stakeholders to gather requirements and deliver tailored solutions.</p><p>• Monitor and address issues within data pipelines, including performance bottlenecks and system errors.</p><p>• Research and adopt emerging technologies and best practices to enhance data engineering capabilities.</p>