<p>We are looking for an experienced Azure Cloud Engineer to join our team in North Houston. In this role, you will leverage your expertise to manage cloud infrastructure, ensure system reliability, and collaborate with team members on key projects. This position requires a strong background in Azure administration and Infrastructure as Code (IaC) tools, along with a commitment to delivering high-quality solutions.</p><p><br></p><p>Responsibilities:</p><p>• Design, implement, and manage Azure cloud infrastructure to support business needs.</p><p>• Utilize tools such as Terraform and Ansible to develop and maintain Infrastructure as Code (IaC) solutions.</p><p>• Collaborate with team members to maintain Office 365, Exchange Online, Intune, and Active Directory systems.</p><p>• Ensure the scalability and reliability of cloud-based systems by implementing auto-scaling solutions.</p><p>• Regularly assess and optimize cloud environments to enhance performance and security.</p><p>• Provide on-site support five days a week, with half-day Fridays.</p><p>• Travel to Midland quarterly to participate in team collaborations and align on project objectives.</p><p>• Maintain documentation for cloud processes and configurations to ensure clarity and compliance.</p><p>• Work closely with stakeholders to identify and address technical challenges.</p><p>• Support and contribute to the development of cloud strategies aligned with organizational goals.</p>
<p>We are looking for a senior Azure Engineer to join a Long-term Contract engagement with a client in Nashville, Tennessee. This role is focused on improving the stability, security, and governance of an existing cloud environment while addressing priority audit remediation work. The ideal candidate is comfortable stepping into a partially matured Azure landscape, quickly identifying practical improvements, and delivering measurable progress in partnership with infrastructure and security teams.</p><p><br></p><p>Responsibilities:</p><p>• Evaluate the current Azure environment to uncover configuration weaknesses, control gaps, and areas requiring remediation.</p><p>• Execute corrective actions related to identity access, permission structures, and broader cloud security controls.</p><p>• Partner with security stakeholders to resolve audit findings and strengthen compliance alignment across Azure services.</p><p>• Streamline and standardize Azure configurations to improve consistency, maintainability, and operational readiness.</p><p>• Contribute to governance efforts by helping enforce policies, increase visibility, and improve control coverage within the platform.</p><p>• Document technical changes, remediation steps, and environment updates to support traceability and team knowledge sharing.</p><p>• Provide limited production support when needed while maintaining primary focus on audit-driven infrastructure work.</p><p>• Collaborate with a small cross-functional technology team, including infrastructure, engineering, and security resources, to advance remediation priorities.</p>
<p><strong>Position Summary:</strong></p><ul><li>We are looking for a Data Operations Engineer to support and oversee the automated data‑pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks.</li><li>Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.</li><li>The role requires strong proficiency in SQL and Python, practical experience with AWS services, and the ability to communicate effectively with external customers when data issues arise.</li></ul><p><strong>Responsibilities:</strong></p><p><strong>Data Pipeline Monitoring & Operations:</strong></p><ul><li>Monitor automated batch and streaming data pipelines in AWS</li><li>Identify, troubleshoot, and resolve data processing failures</li><li>Investigate file‑level errors, schema mismatches, and transformation issues</li><li>Perform root‑cause analysis and document resolutions</li><li>Ensure data integrity, completeness, and timeliness across environments</li><li>Escalate architectural or systemic issues to the Data Engineering team</li></ul><p><strong>Customer Data Onboarding & Implementation:</strong></p><ul><li>Collaborate directly with customers to understand their file formats and data structures</li><li>Create and maintain mapping templates to align customer data to a normalized data model</li><li>Validate sample files and run tests on ingestion workflows</li><li>Configure ingestion parameters within predefined frameworks</li><li>Support customer go‑live processes and initial data processing cycles</li></ul><p><strong>Data Quality & Continuous Improvement:</strong></p><ul><li>Write SQL queries to validate data accuracy and research 
anomalies</li><li>Develop lightweight Python scripts for validation, transformation checks, or automation tasks</li><li>Improve monitoring processes, internal documentation, and operational playbooks</li><li>Work with engineering teams to strengthen platform reliability and observability</li></ul><p><strong>Customer & Cross‑Functional Collaboration:</strong></p><ul><li>Communicate clearly with customers regarding file issues or data discrepancies</li><li>Partner with internal teams including Data Engineering, Product, and Support</li><li>Provide feedback to enhance scalability, resilience, and overall platform performance</li></ul>
<p><strong>Azure Developer</strong></p><p>We are seeking a knowledgeable <strong>Azure Developer</strong> to build cloud-native applications and services using Microsoft Azure technologies. This role is ideal for someone who enjoys designing scalable solutions, working with modern cloud tools, and collaborating closely with software and cloud engineering teams. The ideal candidate will have strong development skills, a deep understanding of Azure services, and a passion for cloud innovation.</p><p><strong>Responsibilities</strong></p><ul><li>Develop cloud-based applications using Azure Functions, App Services, Logic Apps, and related services</li><li>Build APIs, microservices, and serverless workloads using .NET, C#, or other Azure-supported languages</li><li>Implement Azure integrations using Service Bus, Event Hub, API Management, or Durable Functions</li><li>Create and optimize Azure DevOps pipelines for CI/CD automation</li><li>Develop Infrastructure-as-Code templates using ARM, Bicep, or Terraform</li><li>Collaborate with architects and DevOps teams to ensure scalable cloud designs</li><li>Troubleshoot application issues, performance bottlenecks, and integration problems</li><li>Monitor cloud workloads, logs, costs, and performance metrics</li><li>Maintain documentation for Azure solutions, APIs, and deployment procedures</li><li>Participate in code reviews, design sessions, and architectural discussions</li></ul><p><br></p>
<p><strong>Our client is seeking a Senior AWS Data Engineer for a long-term, multi-year assignment.</strong></p><p><br></p><p><strong>This role is onsite 4 days/week in Torrance, CA.</strong></p><p><br></p><p>This role supports and enhances enterprise business intelligence and analytics environments, focusing on designing, building, and maintaining scalable data pipelines and cloud‑based data platforms using AWS services. The ideal candidate brings deep hands‑on experience with AWS Glue, PySpark, Redshift, and serverless architectures, along with strong SQL and data analysis skills.</p><p>You will collaborate closely with architecture, security, compliance, and development teams to ensure data solutions are performant, secure, and compliant with regulatory requirements.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain scalable ETL/ELT pipelines using AWS Glue with PySpark for large‑scale data processing</li><li>Develop and support serverless integrations using AWS Lambda for event‑driven workflows and system integrations</li><li>Design and optimize Amazon Redshift data warehouse solutions, including:<ul><li>Advanced SQL analytics</li><li>Stored procedures</li><li>Performance tuning</li></ul></li><li>Lead implementation of secure vendor file transfer and ingestion solutions using AWS Transfer Family</li><li>Design and implement database migration and replication pipelines using AWS Database Migration Service (DMS)</li><li>Build and manage workflow orchestration using Apache Airflow or similar orchestration tools</li><li>Analyze data quality, transformation logic, and pipeline performance using SQL and data analysis techniques</li><li>Troubleshoot and resolve production data pipeline and integration issues across AWS services</li><li>Provide technical guidance to development team members on:<ul><li>AWS best practices</li><li>Cost optimization</li><li>Performance optimization</li></ul></li><li>Partner with enterprise architecture, security, and compliance teams to ensure SOX and regulatory compliance</li></ul>
<p><strong>Key Responsibilities</strong></p><ul><li>Design and implement <strong>secure, scalable, and highly available Azure cloud architectures</strong>.</li><li>Build and manage Azure infrastructure using <strong>Terraform, ARM templates, and Azure CLI</strong>.</li><li>Provision and support Azure services including <strong>compute, networking, storage, and PaaS</strong> offerings.</li><li>Partner with network and security teams to implement <strong>IAM, network security, and data protection controls</strong>.</li><li>Implement monitoring, logging, and alerting using <strong>Azure Monitor, Log Analytics, and Application Insights</strong>.</li><li>Troubleshoot performance, availability, and reliability issues across Azure environments.</li><li>Automate deployments and operational workflows, including integrations with <strong>ServiceNow APIs</strong>.</li><li>Support CI/CD and infrastructure automation initiatives to improve deployment consistency and efficiency.</li></ul><p><br></p>
We are looking for an experienced Azure Cloud and Network Administrator to join our team in Farmington Hills, Michigan. As a key contributor, you will oversee the design, implementation, and management of cloud and network infrastructure, ensuring optimal performance and security across our systems. This is a Contract to permanent position within the healthcare industry, offering an opportunity to work on cutting-edge technology solutions.<br><br>Responsibilities:<br>• Configure, maintain, and optimize Azure networking infrastructure, including virtual networks, functions, and resource performance.<br>• Collaborate with development teams to support DevOps initiatives and CI/CD pipelines.<br>• Administer Azure Virtual Desktop environments, including host pools, workspace configurations, and golden image management.<br>• Harden Azure images to enhance security and mitigate vulnerabilities.<br>• Implement and manage Azure Active Directory, including user and group administration, and conditional access policies.<br>• Deploy and maintain Windows Server virtual machines and ensure their security and reliability within the Azure environment.<br>• Utilize Microsoft Intune to automate vulnerability patching and enforce compliance policies across devices.<br>• Design and execute backup and disaster recovery strategies for cloud-based resources.<br>• Monitor and optimize cloud spending by identifying unused assets and implementing cost-effective solutions.<br>• Provide technical expertise and documentation for all managed systems, ensuring comprehensive support and compliance with company policies.
<p>Overview</p><p>We are seeking a <strong>Senior Cloud & Infrastructure Engineer (Azure)</strong> to lead the design, implementation, and support of secure, scalable, and resilient cloud infrastructure environments. This role is ideal for a hands-on engineer with deep Microsoft Azure expertise and a strong background in production cloud operations, infrastructure reliability, and disaster recovery.</p><p>The Senior Cloud & Infrastructure Engineer will play a key role in building and maintaining modern Azure environments, with a focus on networking, security, identity, backup, and recovery strategies. This individual will collaborate closely with cross-functional teams to ensure cloud platforms are optimized, secure, and aligned with business needs.</p><p>Key Responsibilities</p><ul><li>Design, implement, and support Azure-based infrastructure solutions for production environments</li><li>Manage and optimize cloud infrastructure with a focus on performance, availability, scalability, and security</li><li>Support Azure networking, identity, and security configurations, including private and hybrid connectivity models</li><li>Maintain and enhance backup, recovery, and disaster recovery solutions across cloud infrastructure</li><li>Troubleshoot complex infrastructure and cloud-related issues in production environments</li><li>Partner with application, security, and operations teams to support modern platforms hosted in Azure</li><li>Contribute to infrastructure standards, architecture decisions, and operational best practices</li><li>Implement and support secure cloud designs that align with organizational compliance and security requirements</li><li>Develop and maintain infrastructure automation and repeatable deployment processes where applicable</li><li>Provide technical leadership and guidance on cloud infrastructure strategy and operational excellence</li></ul><p><br></p>
<p>We are seeking an experienced <strong>Enterprise Data Warehouse (EDW) Architect</strong> to lead the design, evaluation, and implementation of a modern analytics and reporting platform. This role will be responsible for defining the end‑to‑end data architecture, selecting appropriate technologies, and ensuring scalable, governed, and business‑aligned data solutions across multiple subject areas.</p><p>The EDW Architect will partner closely with business stakeholders and technical teams to translate business requirements into a sustainable enterprise data warehouse and BI ecosystem.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Evaluate and recommend data warehousing platforms (e.g., Snowflake, Databricks, BigQuery, Redshift, Azure Synapse)</li><li>Select and design ETL/ELT solutions and orchestration frameworks (e.g., dbt, Fivetran, ADF, Informatica, Talend)</li><li>Design dimensional data models, including star and snowflake schemas, aligned to business use cases</li><li>Lead subject area modeling and architecture across Supply Chain, Sales, Finance, HR, and Procurement</li><li>Define BI‑layer architecture and reporting standards using tools such as Power BI, Tableau, or Looker</li><li>Establish data governance, lineage, metadata, and data quality frameworks</li><li>Produce architecture documentation, implementation roadmaps, and conduct knowledge transfer and handover to delivery teams</li></ul>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This role is based in West Des Moines, Iowa, and offers the opportunity to work on advanced data solutions that support organizational decision-making and efficiency. The ideal candidate will have expertise in relational databases, data cleansing, and modern data warehousing technologies.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support business operations and analytics.<br>• Perform data extraction, transformation, and cleansing to ensure accuracy and reliability.<br>• Collaborate with teams to design and implement data warehouses and data lakes.<br>• Utilize Microsoft SQL Server to build and manage relational database structures.<br>• Analyze data sources and provide recommendations for improving data quality and accessibility.<br>• Create and maintain documentation for data processes, pipelines, and system architecture.<br>• Implement best practices for data storage and retrieval to maximize efficiency.<br>• Troubleshoot and resolve issues related to data processing and integration.<br>• Stay updated on industry trends and emerging technologies to enhance data engineering solutions.
We are looking for a talented Data Engineer to join our team in Glendale, California. In this long-term contract role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and platforms that support critical business operations. Through collaboration with cross-functional teams, you will contribute to innovative data solutions that enhance decision-making processes and drive operational excellence.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support the Core Data platform.<br>• Create tools and services to enhance data discovery, governance, and privacy.<br>• Collaborate with product managers, architects, and software engineers to ensure the success of data platforms.<br>• Apply technologies such as Airflow, Spark, Databricks, Delta Lake, and Kubernetes to build advanced data solutions.<br>• Establish and document best practices for pipeline configurations, naming conventions, and operational standards.<br>• Monitor and ensure the accuracy, reliability, and efficiency of datasets to meet service level agreements (SLAs).<br>• Participate in agile and scrum ceremonies to improve collaboration and team processes.<br>• Foster relationships with stakeholders to understand their needs and prioritize platform enhancements.<br>• Maintain detailed documentation to support data governance and quality initiatives.
We are looking for a Data Engineer to strengthen our data and analytics capabilities in West Chester, Pennsylvania. This role will shape reliable data architecture, support enterprise reporting, and help turn complex information into practical business insight. The position is ideal for someone who enjoys building scalable data solutions, improving performance, and working across Microsoft-based data technologies.<br><br>Responsibilities:<br>• Design and support enterprise data solutions that enable dependable analytics, reporting, and operational decision-making.<br>• Build, optimize, and maintain database structures and data processing workflows using SQL Server, Azure SQL Database, and T-SQL.<br>• Develop and enhance SSIS packages and related data pipelines to ensure accurate, timely, and efficient movement of information across systems.<br>• Create scalable datasets and reporting foundations that support Power BI dashboards and broader business intelligence needs.<br>• Monitor data platform performance, troubleshoot issues, and implement improvements that increase stability, security, and efficiency.<br>• Partner with business and technical stakeholders to translate reporting and analytics goals into practical data engineering solutions.<br>• Lead efforts to move legacy SQL Server workloads into Azure-based services while maintaining data integrity and minimizing disruption.<br>• Establish standards and best practices for data quality, documentation, and ongoing platform maintenance.
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Tampa, Florida. This is a Contract to permanent position, offering an excellent opportunity to contribute to innovative business intelligence solutions while advancing your career. The ideal candidate will have a strong background in data engineering, database design, and analytics, with the ability to solve complex problems and deliver high-quality results.<br><br>Responsibilities:<br>• Design and implement robust business intelligence solutions tailored to meet organizational needs.<br>• Collaborate with stakeholders to gather user requirements and translate them into technical and functional specifications.<br>• Create and maintain databases and data marts that support analytics and reporting activities.<br>• Develop and optimize ETL processes to efficiently load data into data marts.<br>• Monitor and ensure the accuracy, consistency, and quality of data within databases and reporting systems.<br>• Recommend and implement governance practices to improve self-service BI and analytics capabilities.<br>• Develop automated data validation checks to maintain data integrity and accuracy.<br>• Utilize dimensional modeling and star/snowflake schemas to design effective data warehouses.<br>• Troubleshoot and debug issues across application and database layers to ensure smooth operations.<br>• Perform exploratory data analysis to identify trends, anomalies, and areas for improvement.
<p>We are looking for a Data Engineer to strengthen and expand an established Microsoft Fabric data environment. This Long-term Contract position is ideal for someone who can turn business data into reliable, well-structured assets that support reporting and decision-making. The role requires a hands-on engineer who can shape data architecture, build scalable pipelines, and communicate clearly with both technical teams and business stakeholders.</p><p><br></p><p>Responsibilities:</p><p>• Expand and improve an existing Microsoft Fabric platform to support dependable, scalable analytics solutions.</p><p>• Create and maintain a layered data architecture across Bronze, Silver, and Gold tiers, with emphasis on delivering trusted and business-ready curated datasets.</p><p>• Build ingestion and transformation processes for Salesforce data along with information from additional enterprise sources.</p><p>• Develop data models that improve accuracy, usability, and reporting value by evaluating structure, relationships, and downstream needs.</p><p>• Support the shift away from older warehouse and spreadsheet-driven reporting practices by introducing more modern data engineering approaches.</p><p>• Work autonomously to manage priorities while providing regular updates on progress, technical decisions, and potential risks.</p><p>• Collaborate with business partners to understand reporting goals and translate them into practical data solutions.</p><p>• Contribute to data processing and integration workflows using technologies such as Python, Spark, ETL frameworks, and related platform tools.</p>
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, develop, and maintain data pipelines and systems that support critical business operations within the manufacturing industry. Your expertise in data engineering technologies and frameworks will be key to ensuring efficient data processing and integration.<br><br>Responsibilities:<br>• Develop, optimize, and maintain scalable data pipelines to process large datasets efficiently.<br>• Implement ETL processes to extract, transform, and load data from various sources into centralized systems.<br>• Leverage Apache Spark, Hadoop, and Kafka to design solutions for real-time and batch data processing.<br>• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Document data workflows and processes to ensure clarity and maintainability.<br>• Conduct testing and validation of data systems to ensure accuracy and quality.<br>• Apply Python programming to automate data tasks and streamline workflows.<br>• Stay updated on industry trends and emerging technologies to propose innovative solutions.<br>• Ensure compliance with data security and privacy standards in all engineering efforts.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this Contract to permanent position, you will play a key role in designing, developing, and optimizing data solutions while collaborating with cross-functional teams to deliver impactful results. This role offers an excellent opportunity to contribute to innovative projects and mentor other developers.<br><br>Responsibilities:<br>• Design and implement scalable data solutions using tools such as Apache Spark, Hadoop, and Kafka.<br>• Build and maintain efficient ETL processes to ensure seamless data transformation and integration.<br>• Collaborate with product owners, business analysts, and stakeholders to gather requirements and translate them into technical solutions.<br>• Optimize and troubleshoot complex data workflows to enhance performance and reliability.<br>• Lead technical discussions and provide architectural guidance for best practices and development standards.<br>• Mentor entry-level developers and conduct code reviews to ensure high-quality deliverables.<br>• Integrate data solutions with existing systems and third-party tools using APIs and cloud platforms.<br>• Stay updated with the latest data engineering technologies and proactively recommend improvements.<br>• Work within Agile/Scrum teams to deliver solutions aligned with user stories and project goals.<br>• Ensure compliance with security and quality standards through thorough documentation and testing.
<p>We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows.</p><p>This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ELT/ETL data pipelines and workflows</li><li>Develop and optimize Snowflake-based data warehouse solutions</li><li>Build and maintain data models and transformation logic to support analytics and reporting</li><li>Write efficient and high-quality Python and SQL code to support data engineering processes</li><li>Develop reusable data engineering frameworks and backend data services</li><li>Implement and maintain CI/CD pipelines using GitHub and related tooling</li><li>Build automated testing frameworks to ensure data quality and reliability</li><li>Create reporting and visualization solutions using tools such as Power BI</li><li>Monitor production data systems and resolve performance or reliability issues</li><li>Support continuous improvement of data architecture, processes, and standards</li></ul>
<p><strong>Mid-Level Data Engineer (On-Site | Los Angeles, CA)</strong></p><p><em>Build systems that actually drive business decisions.</em></p><p><br></p><p>This is not a “maintain the pipeline and go home” kind of role.</p><p><br></p><p>We’re looking for a sharp, mid-level Data Engineer who wants to operate close to the business, own meaningful projects end-to-end, and build systems that directly impact how decisions get made across an entire organization. You’ll join a small, high-performing team where your work won’t get buried; it will be seen, used, and relied on daily.</p><p><br></p><p>If you’re someone who enjoys solving messy problems, building from scratch, and working in a fast-paced, high-expectation environment, this is the kind of role where you’ll grow quickly.</p><p><br></p><p>What You’ll Do</p><ul><li>Design and build automated data systems (e.g., billing workflows, internal tools)</li><li>Create and maintain BI dashboards and reports using Python, Excel, and visualization tools</li><li>Write and optimize SQL queries and ETL pipelines for clean, reliable data flow</li><li>Analyze large datasets to uncover actionable insights and trends</li><li>Partner with stakeholders across the business to translate needs into technical solutions</li><li>Help improve data accessibility and usability across departments</li><li>Ensure data integrity and accuracy through audits and troubleshooting</li><li>Contribute to a growing data function with high visibility and ownership</li></ul><p>Why This Role Stands Out</p><ul><li>High ownership: You’ll build systems from the ground up, not just maintain them</li><li>Small team, big impact: Work directly with senior leadership and decision-makers</li><li>Growth opportunity: The team is expanding, and this role can evolve quickly</li><li>Flexibility within intensity: While this is a high-performance environment, there’s trust and flexibility when needed</li></ul>
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania, on a contract to permanent basis. This role offers an exciting opportunity to design, implement, and optimize data pipelines while integrating applications with various digital marketplaces. The ideal candidate will bring strong technical expertise and a collaborative mindset to support business insights and analytics effectively.<br><br>Responsibilities:<br>• Develop and maintain data pipelines and ensure seamless application connectivity with digital marketplaces such as TikTok Shop, Shopify, and Amazon.<br>• Collaborate closely with business teams to understand requirements and provide actionable analytics.<br>• Lead the creation of scalable and efficient data solutions tailored to business needs.<br>• Apply expertise in Python, Snowflake, and other relevant technologies to deliver high-quality results.<br>• Facilitate and support integrations with e-commerce platforms, leveraging previous experience where applicable.<br>• Build robust APIs and ensure their effective implementation.<br>• Utilize Microsoft SQL for database management and optimization.<br>• Provide technical guidance and mentorship to ensure project success.<br>• Troubleshoot and resolve issues related to data workflows and integrations.<br>• Continuously evaluate and improve processes to enhance efficiency and performance.
We are looking for an experienced Data Engineer to join our team in Newtown Square, Pennsylvania. In this long-term contract position, you will play a pivotal role in designing and implementing robust data solutions to support organizational goals. This is an exciting opportunity to lead the development of modern data architectures and collaborate with diverse teams to drive impactful results.<br><br>Responsibilities:<br>• Lead the implementation of an enterprise Snowflake data lake, ensuring timely delivery and optimal performance.<br>• Oversee the integration of multiple data sources, including Oracle Financials, PostgreSQL, and Salesforce, into a unified data platform.<br>• Collaborate with finance teams to facilitate a transition to a 12-month accounting calendar and support accelerated financial close processes.<br>• Develop and maintain multi-source analytics dashboards to enhance operational insights and decision-making.<br>• Manage day-to-day operations of the Snowflake platform, focusing on performance tuning and cost optimization.<br>• Ensure data quality and reliability, providing business users with a trustworthy platform.<br>• Document architectural designs, data workflows, and operational procedures to support sustainable data management.<br>• Coordinate with external vendors to meet project deadlines and ensure successful implementations.
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>Experience as a Data Governance Analyst is required. The client has a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate will be a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Purview and advance their data governance practices. Administration experience with Microsoft Purview or a similar tool such as Collibra, Informatica, or Databricks is required. This role will assist in connecting Microsoft Fabric to Purview.</p><p>Experience with Microsoft Purview is preferred. The client has the Data Security layer of Purview implemented; this role will work with the Microsoft partner to implement the Data Governance layer (Unified Data Catalog, Data Quality, Data Lineage, and Data Health Management). See attached overview. Excellent communication skills are essential, along with the ability to lead change, advance the client's data governance practice, and secure buy-in from stakeholders.</p>
<p>Robert Half Technology is seeking a <strong>mid-to-senior level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (hybrid; 3 days onsite)</p><p><strong>Schedule:</strong> Monday-Friday (9 AM-5 PM PST)</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks</li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong></li><li>Architect and maintain <strong>data marts and data warehouse structures</strong></li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong></li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries)</li><li>Support migration and consolidation of data into Databricks</li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability</li><li>Define best practices for <strong>data storage, organization, and access</strong></li><li>Ensure alignment with existing compliance and data standards</li></ul><p><br></p>