<p><strong>Position Summary:</strong></p><ul><li>We are looking for a Data Operations Engineer to support and oversee the automated data‑pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks.</li><li>Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.</li><li>The role requires strong proficiency in SQL and Python, practical experience with AWS services, and the ability to communicate effectively with external customers when data issues arise.</li></ul><p><strong>Responsibilities:</strong></p><p><strong>Data Pipeline Monitoring & Operations:</strong></p><ul><li>Monitor automated batch and streaming data pipelines in AWS</li><li>Identify, troubleshoot, and resolve data processing failures</li><li>Investigate file‑level errors, schema mismatches, and transformation issues</li><li>Perform root‑cause analysis and document resolutions</li><li>Ensure data integrity, completeness, and timeliness across environments</li><li>Escalate architectural or systemic issues to the Data Engineering team</li></ul><p><strong>Customer Data Onboarding & Implementation:</strong></p><ul><li>Collaborate directly with customers to understand their file formats and data structures</li><li>Create and maintain mapping templates to align customer data to a normalized data model</li><li>Validate sample files and run tests on ingestion workflows</li><li>Configure ingestion parameters within predefined frameworks</li><li>Support customer go‑live processes and initial data processing cycles</li></ul><p><strong>Data Quality & Continuous Improvement:</strong></p><ul><li>Write SQL queries to validate data accuracy and research anomalies</li><li>Develop lightweight Python scripts for validation, transformation checks, or automation tasks</li><li>Improve monitoring processes, internal documentation, and operational playbooks</li><li>Work with engineering teams to strengthen platform reliability and observability</li></ul><p><strong>Customer & Cross‑Functional Collaboration:</strong></p><ul><li>Communicate clearly with customers regarding file issues or data discrepancies</li><li>Partner with internal teams including Data Engineering, Product, and Support</li><li>Provide feedback to enhance scalability, resilience, and overall platform performance</li></ul>
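<p>As a rough illustration of the lightweight Python validation scripting this posting describes, the sketch below checks an incoming customer file against an expected schema; the file path, column names, and checks are hypothetical placeholders rather than the team's actual framework.</p><pre>
# Illustrative only: a lightweight file validation of the kind described above.
# The file path, expected columns, and checks are hypothetical placeholders.
import pandas as pd

EXPECTED_COLUMNS = {"customer_id", "order_date", "amount"}  # hypothetical schema

def validate_file(path):
    """Return a list of human-readable issues found in an incoming CSV."""
    issues = []
    df = pd.read_csv(path)

    # Schema check: flag any expected columns that are missing.
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")

    # Completeness check: flag null values in required fields that are present.
    for col in EXPECTED_COLUMNS.intersection(df.columns):
        null_count = int(df[col].isna().sum())
        if null_count:
            issues.append(f"{col}: {null_count} null values")

    # Uniqueness check: duplicate keys often signal a re-sent or corrupted file.
    if "customer_id" in df.columns and df["customer_id"].duplicated().any():
        issues.append("duplicate customer_id values found")

    return issues

if __name__ == "__main__":
    for problem in validate_file("incoming/customer_orders.csv"):
        print("VALIDATION ISSUE:", problem)
</pre>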
<p><strong>Azure Developer</strong></p><p>We are seeking a knowledgeable <strong>Azure Developer</strong> to build cloud-native applications and services using Microsoft Azure technologies. This role is ideal for someone who enjoys designing scalable solutions, working with modern cloud tools, and collaborating closely with software and cloud engineering teams. The ideal candidate will have strong development skills, deep understanding of Azure services, and a passion for cloud innovation.</p><p><strong>Responsibilities</strong></p><ul><li>Develop cloud-based applications using Azure Functions, App Services, Logic Apps, and related services</li><li>Build APIs, microservices, and serverless workloads using .NET, C#, or other Azure-supported languages</li><li>Implement Azure integrations using Service Bus, Event Hub, API Management, or Durable Functions</li><li>Create and optimize Azure DevOps pipelines for CI/CD automation</li><li>Develop Infrastructure-as-Code templates using ARM, Bicep, or Terraform</li><li>Collaborate with architects and DevOps teams to ensure scalable cloud designs</li><li>Troubleshoot application issues, performance bottlenecks, and integration problems</li><li>Monitor cloud workloads, logs, costs, and performance metrics</li><li>Maintain documentation for Azure solutions, APIs, and deployment procedures</li><li>Participate in code reviews, design sessions, and architectural discussions</li></ul><p><br></p>
We are looking for an experienced AWS/Databricks Engineer to join our team in Houston, Texas. This is a long-term contract position ideal for professionals with a strong background in data engineering and cloud technologies. The role will focus on leveraging Python and Databricks to optimize data processes and enhance system performance.<br><br>Responsibilities:<br>• Develop and implement scalable data engineering solutions using Python and Databricks.<br>• Collaborate with cross-functional teams to design and optimize data workflows.<br>• Migrate and enhance existing Python scripts to Databricks for improved functionality.<br>• Utilize cloud technologies to support data integration and analytics processes.<br>• Implement algorithms and data visualization methods to present actionable insights.<br>• Design and maintain APIs to streamline data interactions and integrations.<br>• Work with tools like Apache Kafka, Spark, and Hadoop to manage large-scale data systems.<br>• Perform data analysis and develop strategies to improve system efficiency.<br>• Ensure high-quality data pipelines and address performance bottlenecks.<br>• Stay updated on emerging trends in data engineering and recommend innovative solutions.
We are looking for an experienced Azure Cloud and Network Administrator to join our team in Farmington Hills, Michigan. As a key contributor, you will oversee the design, implementation, and management of cloud and network infrastructure, ensuring optimal performance and security across our systems. This is a Contract to permanent position within the healthcare industry, offering an opportunity to work on cutting-edge technology solutions.<br><br>Responsibilities:<br>• Configure, maintain, and optimize Azure networking infrastructure, including virtual networks, functions, and resource performance.<br>• Collaborate with development teams to support DevOps initiatives and CI/CD pipelines.<br>• Administer Azure Virtual Desktop environments, including host pools, workspace configurations, and golden image management.<br>• Harden Azure images to enhance security and mitigate vulnerabilities.<br>• Implement and manage Azure Active Directory, including user and group administration, and conditional access policies.<br>• Deploy and maintain Windows Server virtual machines and ensure their security and reliability within the Azure environment.<br>• Utilize Microsoft Intune to automate vulnerability patching and enforce compliance policies across devices.<br>• Design and execute backup and disaster recovery strategies for cloud-based resources.<br>• Monitor and optimize cloud spending by identifying unused assets and implementing cost-effective solutions.<br>• Provide technical expertise and documentation for all managed systems, ensuring comprehensive support and compliance with company policies.
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This role is based in West Des Moines, Iowa, and offers the opportunity to work on advanced data solutions that support organizational decision-making and efficiency. The ideal candidate will have expertise in relational databases, data cleansing, and modern data warehousing technologies.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support business operations and analytics.<br>• Perform data extraction, transformation, and cleansing to ensure accuracy and reliability.<br>• Collaborate with teams to design and implement data warehouses and data lakes.<br>• Utilize Microsoft SQL Server to build and manage relational database structures.<br>• Analyze data sources and provide recommendations for improving data quality and accessibility.<br>• Create and maintain documentation for data processes, pipelines, and system architecture.<br>• Implement best practices for data storage and retrieval to maximize efficiency.<br>• Troubleshoot and resolve issues related to data processing and integration.<br>• Stay updated on industry trends and emerging technologies to enhance data engineering solutions.
<p>Our transportation client is seeking a <strong>Data Engineer</strong> to support large‑scale logistics operations by building reliable, scalable, and cloud‑based data pipelines. This role is hands‑on, focused on delivering high‑quality data flows that improve shipment visibility, operational efficiency, and real‑time analytics across the supply chain.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, build, and maintain <strong>ETL/ELT pipelines</strong> that process high‑volume operational and logistics data</li><li>Develop transformation logic and automation using <strong>Python</strong>, <strong>SQL</strong>, and Azure-native tooling</li><li>Implement and orchestrate workflows in <strong>Azure Data Factory</strong>, <strong>Synapse</strong>, and <strong>Databricks</strong></li><li>Optimize data lake and warehouse performance, including tuning queries, pipelines, and storage layers</li><li>Monitor pipeline health and proactively troubleshoot failures, bottlenecks, and data quality issues</li><li>Contribute to data modeling efforts to support analytics, reporting, and downstream applications</li><li>Collaborate with BI, product, supply chain, and application teams to align pipelines with business needs</li><li>Maintain strong documentation around workflows, standards, and operational procedures</li><li>Support governance initiatives related to <strong>data quality</strong>, lineage, cataloging, and access policies</li><li>Follow best practices for security, compliance, and cloud resource management</li></ul><p><br></p><p><br></p>
We are looking for a Senior Data Engineer to develop and optimize enterprise data systems that support analytics and digital solutions. In this role, you will design and implement robust data architectures, ensuring seamless data integration and transformation processes across the organization. Your expertise will drive the creation of reliable pipelines and scalable infrastructure, enabling advanced analytics and machine learning capabilities.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines using Databricks, Spark, and Delta Lake to support enterprise-level analytics.<br>• Develop and maintain efficient data models tailored for AI, analytics, and operational systems.<br>• Lead Master Data Management initiatives to establish unified and accurate data records across platforms.<br>• Create batch and near-real-time data processing workflows for structured and semi-structured datasets.<br>• Collaborate with AI and software development teams to ensure delivery of high-quality datasets for machine learning.<br>• Define and enforce data architecture standards, ensuring scalability, reliability, and governance.<br>• Troubleshoot and optimize data systems to maintain performance and reliability in complex environments.<br>• Partner with cloud and IT teams to integrate modern data platforms and ensure seamless functionality.
<p><strong>Data Modeling and Analysis</strong></p><ul><li>Design data models and optimize performance: Creating the structure of data relationships, ensuring efficient data retrieval and calculations.</li><li>Create calculated columns and measures: Using DAX to calculate derived values and aggregate metrics.</li><li>Perform exploratory data analysis (EDA): Using BI tools to explore data and identify trends and patterns.</li><li>Apply advanced data analysis techniques (e.g., statistical analysis, time series analysis, predictive modeling).</li><li>Integrate machine learning models into Power BI dashboards.</li></ul><p><strong>Dashboard Development and Visualization</strong></p><ul><li>Design dashboards: Creating visually appealing and interactive dashboards.</li><li>Create visualizations: Using charts, graphs, and other visual elements to represent data.</li><li>Implement interactivity: Adding filters, slicers, and drill-down capabilities.</li></ul><p><strong>Skills and Qualifications</strong></p><ul><li>Experience building semantic models.</li><li>Expertise in SQL and DAX, plus knowledge of Python and R.</li><li>Strong proficiency in Power BI.</li><li>Data modeling and visualization skills.</li><li>Strong problem-solving skills to address technical challenges and data quality issues.</li><li>Analytical skills with the capacity to analyze complex data problems and draw meaningful insights.</li></ul>
We are looking for a skilled Data Engineer to join our team in Foxborough, Massachusetts, on a long-term contract basis. In this role, you will design, optimize, and maintain data pipelines and storage solutions, leveraging modern tools to ensure high performance and reliability. This position offers an exciting opportunity to collaborate across teams and implement cutting-edge practices in data engineering and analytics.<br><br>Responsibilities:<br>• Optimize Amazon Redshift performance by configuring distribution keys, sort keys, and fine-tuning queries.<br>• Develop and maintain robust data pipelines using AWS Glue and orchestrate workflows with Airflow.<br>• Manage semantic layers and metadata to support reliable analytics and AI-driven insights.<br>• Implement best practices for data partitioning, compression, and columnar storage formats.<br>• Monitor and troubleshoot data workflows to ensure high availability, reliability, and automated observability.<br>• Automate data processing tasks using Python and AWS native tools.<br>• Enforce data security and governance policies, including row- and column-level controls, using Lake Formation and AWS services.<br>• Oversee compliance monitoring and auditing through CloudWatch, CloudTrail, and similar tools.<br>• Continuously refine and improve data architecture by adopting emerging AWS best practices and patterns.<br>• Collaborate closely with Operations, Data Governance, and other teams to align with standards and achieve delivery objectives.
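<p>As a rough illustration of the Redshift distribution- and sort-key tuning this posting mentions, the sketch below creates a fact table with an explicit DISTKEY and SORTKEY from Python; the cluster endpoint, credentials, schema, and columns are hypothetical placeholders.</p><pre>
# Illustrative sketch of the distribution/sort-key configuration mentioned above.
# Cluster endpoint, credentials, table, and columns are hypothetical placeholders.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS analytics.fact_orders (
    order_id    BIGINT,
    customer_id BIGINT,
    ordered_at  TIMESTAMP,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)   -- co-locate rows that are frequently joined on customer_id
SORTKEY (ordered_at);   -- speed up range filters on the order date
"""

def main():
    # Redshift speaks the PostgreSQL wire protocol, so psycopg2 can connect to it.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="replace-me",
    )
    try:
        with conn.cursor() as cur:
            cur.execute(DDL)
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    main()
</pre>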
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This Contract to permanent position offers an exciting opportunity to work at the intersection of data engineering, analytics, and business strategy. If you have a strong background in building and optimizing data pipelines and are passionate about leveraging technology to drive insights, we encourage you to apply.<br><br>Responsibilities:<br>• Design, develop, and optimize scalable data pipelines and workflows to support business analytics.<br>• Collaborate with cross-functional teams to gather and analyze data requirements.<br>• Implement ETL processes to extract, transform, and load data from diverse sources.<br>• Utilize tools such as Apache Spark and Hadoop to manage large-scale data processing.<br>• Integrate streaming data systems using Apache Kafka to enhance real-time analytics.<br>• Monitor and troubleshoot data flow and systems to ensure high performance and reliability.<br>• Develop and maintain documentation for data engineering processes and systems.<br>• Ensure data security and integrity across all platforms and processes.<br>• Work closely with stakeholders to translate business needs into technical solutions.<br>• Stay updated with industry trends and emerging technologies to improve data engineering practices.
We are looking for an experienced Data Engineer to join our team on a long-term contract basis. Based in Houston, Texas, this role offers an exciting opportunity to work with cutting-edge data technologies, design scalable solutions, and contribute to data-driven decision-making processes. If you are passionate about optimizing data systems and driving innovation, we encourage you to apply.<br><br>Responsibilities:<br>• Develop, maintain, and optimize scalable data pipelines using Apache Spark and Python.<br>• Implement ETL processes to ensure seamless extraction, transformation, and loading of data across systems.<br>• Collaborate with cross-functional teams to integrate Apache Hadoop and Apache Kafka into the data architecture.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Design and maintain data models, ensuring alignment with business requirements.<br>• Conduct thorough testing and validation of data processes to guarantee accuracy.<br>• Document data workflows and processes for future reference and team collaboration.<br>• Provide technical guidance and support to team members on data engineering best practices.<br>• Stay current on emerging technologies and trends in big data and analytics.<br>• Contribute to improving data governance and security protocols.
<p>We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. This role will support data-driven decision-making by ensuring reliable data flow, transformation, and accessibility across the organization.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain ETL/ELT data pipelines</li><li>Develop and optimize data models and data architectures</li><li>Integrate data from multiple sources (APIs, databases, third-party systems)</li><li>Ensure data quality, integrity, and reliability</li><li>Collaborate with data analysts, data scientists, and business stakeholders</li><li>Monitor and troubleshoot data pipeline performance issues</li><li>Implement best practices for data governance and security</li></ul><p><br></p>
<p>We are looking for a talented Data Engineer to join our team in Fort Lauderdale, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
<p>I’m building a world-class team to power our next generation of data products. We’re looking for a Senior Data Engineer who knows AWS inside and out—someone who can <strong>design secure, scalable data pipelines</strong>, <strong>own ETL/ELT workflows</strong>, <strong>engineer cloud data infrastructure</strong>, and <strong>deliver dimensional and semantic models</strong> that our analysts, data scientists, and applications can trust.</p><p>You’ll work closely with product, security, platform engineering, and analytics to move our architecture toward a <strong>real-time, governed, cost-aware</strong>, and <strong>highly automated</strong> data ecosystem.</p><p><strong>What You’ll Do</strong></p><ul><li><strong>Design & build end-to-end pipelines</strong> on AWS (batch and streaming) using services like <strong>Glue, EMR, Lambda, Step Functions, Kinesis, MSK</strong>, and <strong>Fargate</strong>.</li><li><strong>Develop robust ETL/ELT</strong> (PySpark, Spark SQL, SQL, Python) for structured, semi-structured, and unstructured data at scale.</li><li><strong>Own data storage & processing layers</strong>: <strong>S3 (Lake/Lakehouse), Redshift (or Snowflake on AWS), DynamoDB</strong>, and <strong>Athena</strong> with strong partitioning, compaction, and performance tuning.</li><li><strong>Implement data models</strong> (3NF, dimensional/star, Data Vault, Lakehouse medallion) for analytics and operational workloads.</li><li><strong>Engineer secure infrastructure-as-code</strong> with <strong>Terraform</strong> (or <strong>CDK</strong>) across multi-account setups; implement CI/CD via <strong>GitHub Actions</strong> or <strong>AWS CodeBuild/CodePipeline</strong>.</li><li><strong>Harden security & governance</strong>: use <strong>IAM</strong>, <strong>Lake Formation</strong>, <strong>KMS</strong>, <strong>Secrets Manager</strong>, <strong>VPC/PrivateLink</strong>, <strong>Glue Data Catalog</strong>, and fine-grained access controls. Partner with SecOps on compliance (e.g., <strong>SOC 2</strong>, <strong>FedRAMP</strong>, <strong>HIPAA</strong> depending on dataset).</li><li><strong>Observability & reliability</strong>: build monitoring with <strong>CloudWatch</strong>, <strong>OpenTelemetry</strong>, and data quality checks (e.g., <strong>Great Expectations</strong>, <strong>Deequ</strong>), implement SLOs and alerts.</li><li><strong>Champion best practices</strong>: code reviews, testing (unit/integration), documentation, runbooks, and blameless postmortems.</li><li><strong>Mentor</strong> mid-level engineers and collaborate on architectural decisions, standards, and technical roadmaps.</li></ul><p><br></p>
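<p>As a rough illustration of the batch pipeline work this posting describes, the minimal PySpark sketch below reads raw events from S3, applies light cleanup, and writes partitioned Parquet; bucket names, paths, and columns are hypothetical, and on Glue or EMR the Spark session and S3 access would come from the job's own configuration.</p><pre>
# A minimal PySpark sketch of the batch ETL work described above.
# Bucket names, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

# Extract: raw, semi-structured events landed in the data lake.
raw = spark.read.json("s3://example-raw-bucket/orders/2024/")

# Transform: light cleanup plus a partition column derived from the event time.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Load: columnar, partitioned output that Athena or Redshift Spectrum can query.
(
    cleaned.write
           .mode("overwrite")
           .partitionBy("order_date")
           .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()
</pre>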
We are looking for a highly skilled Data Engineer to join our team in Houston, Texas. This Contract to permanent position offers an exciting opportunity to work on cutting-edge data solutions and collaborate with cross-functional teams to deliver impactful results. The ideal candidate will possess strong technical expertise and a passion for creating efficient and scalable data systems.<br><br>Responsibilities:<br>• Design and implement scalable data architectures to support business needs and analytics requirements.<br>• Develop and optimize ETL pipelines for data extraction, transformation, and loading across diverse data sources.<br>• Collaborate with stakeholders to gather requirements and translate them into technical solutions.<br>• Utilize tools such as Apache Spark, Hadoop, and Kafka to manage large-scale data processing and real-time streaming.<br>• Ensure data quality and security by implementing best practices and conducting thorough testing.<br>• Develop and maintain technical documentation related to system design, development processes, and operational workflows.<br>• Work with Agile teams to deliver solutions efficiently while actively participating in sprints and ceremonies.<br>• Troubleshoot and resolve issues in existing data systems to maintain optimal performance.<br>• Provide guidance and conduct code reviews for entry level team members.<br>• Stay updated on emerging technologies and recommend improvements to enhance data engineering practices.
<p><strong>***Please email Valerie Nielsen for immediate response*** </strong></p><p><br></p><p><strong>Job Title:</strong> Data Engineer</p><p> <strong>Location:</strong> West Los Angeles, CA (Onsite)</p><p> <strong>Salary:</strong> $150,000 Base + Bonus</p><p><strong>Overview</strong></p><p> We are seeking a <strong>Data Engineer</strong> to join our team onsite in <strong>West Los Angeles</strong>. This role is ideal for someone early in their career who has strong technical fundamentals, enjoys working with data, and has curiosity around modern AI tools. The ideal candidate has a strong analytical mindset and enjoys solving complex data problems while building scalable pipelines and data models.</p><p><strong>Responsibilities</strong></p><ul><li>Build, maintain, and optimize data pipelines and ETL processes</li><li>Write efficient and scalable <strong>SQL and Python</strong> code for data transformation and analysis</li><li>Work with cloud data platforms in <strong>AWS or Azure</strong></li><li>Support data modeling, data warehouse development, and reporting pipelines</li><li>Collaborate with analytics and product teams to deliver clean, reliable datasets</li><li>Explore and leverage <strong>AI tools (e.g., Claude or similar)</strong> to improve workflows and productivity</li><li>Ensure data quality, performance, and scalability across systems</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, develop, and maintain data pipelines and systems that support critical business operations within the manufacturing industry. Your expertise in data engineering technologies and frameworks will be key to ensuring efficient data processing and integration.<br><br>Responsibilities:<br>• Develop, optimize, and maintain scalable data pipelines to process large datasets efficiently.<br>• Implement ETL processes to extract, transform, and load data from various sources into centralized systems.<br>• Leverage Apache Spark, Hadoop, and Kafka to design solutions for real-time and batch data processing.<br>• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Document data workflows and processes to ensure clarity and maintainability.<br>• Conduct testing and validation of data systems to ensure accuracy and quality.<br>• Apply Python programming to automate data tasks and streamline workflows.<br>• Stay updated on industry trends and emerging technologies to propose innovative solutions.<br>• Ensure compliance with data security and privacy standards in all engineering efforts.
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>Prior experience as a Data Governance Analyst is required. The client has a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate is a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Purview and advance the organization's data governance practices. Administration experience with Microsoft Purview or a similar tool (Collibra, Informatica, Databricks, etc.) is expected, and experience with Microsoft Purview specifically is preferred. This role will assist in connecting Microsoft Fabric to Purview. The Data Security layer of Purview is already implemented; this role will work with the Microsoft partner to implement the Data Governance layer (Unified Data Catalog, Data Quality, Data Lineage, Data Health Management). The role calls for excellent communication skills and someone who will lead change, help advance the data governance practice, and get buy-in from stakeholders.</p>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this Contract to permanent position, you will play a key role in designing, developing, and optimizing data solutions while collaborating with cross-functional teams to deliver impactful results. This role offers an excellent opportunity to contribute to innovative projects and mentor other developers.<br><br>Responsibilities:<br>• Design and implement scalable data solutions using tools such as Apache Spark, Hadoop, and Kafka.<br>• Build and maintain efficient ETL processes to ensure seamless data transformation and integration.<br>• Collaborate with product owners, business analysts, and stakeholders to gather requirements and translate them into technical solutions.<br>• Optimize and troubleshoot complex data workflows to enhance performance and reliability.<br>• Lead technical discussions and provide architectural guidance for best practices and development standards.<br>• Mentor entry level developers and conduct code reviews to ensure high-quality deliverables.<br>• Integrate data solutions with existing systems and third-party tools using APIs and cloud platforms.<br>• Stay updated with the latest data engineering technologies and proactively recommend improvements.<br>• Work within Agile/Scrum teams to deliver solutions aligned with user stories and project goals.<br>• Ensure compliance with security and quality standards through thorough documentation and testing.
<p>Robert Half is hiring! We are looking for an experienced Data Engineer to join our team in Greenville, South Carolina. This role offers an exciting opportunity to work with modern data technologies, ensuring the efficient operation and optimization of data pipelines and systems. The ideal candidate will bring a strong technical background, leadership skills, and a proactive approach to maintaining and improving data infrastructure.</p><p><br></p><p>Responsibilities:</p><p>• Oversee daily data loads and ensure the smooth operation of data pipelines and related systems.</p><p>• Troubleshoot and resolve issues such as pipeline failures, performance bottlenecks, schema mismatches, and cloud resource disruptions.</p><p>• Conduct root-cause analyses and implement permanent solutions to prevent recurring issues.</p><p>• Maintain and optimize existing data processes, refactoring or retiring outdated workflows as necessary.</p><p>• Design and build scalable data ingestion pipelines using technologies such as Azure Data Factory, Databricks, and Synapse Pipelines.</p><p>• Collaborate with teams to create and improve operational runbooks, monitoring dashboards, and incident response workflows.</p><p>• Develop reusable ingestion patterns for platforms like Guidewire DataHub, InfoCenter, and other business data sources.</p><p>• Lead the implementation of real-time and event-driven data engineering solutions to enable operational insights and automation.</p><p>• Partner with architects to modernize data workloads using advanced frameworks like Delta Lake and Medallion Architecture.</p><p>• Mentor entry-level engineers, enforce coding best practices, and review code to ensure quality and compliance.</p>
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
We are looking for an experienced Data Engineer to join our team in Newtown Square, Pennsylvania. In this long-term contract position, you will play a pivotal role in designing and implementing robust data solutions to support organizational goals. This is an exciting opportunity to lead the development of modern data architectures and collaborate with diverse teams to drive impactful results.<br><br>Responsibilities:<br>• Lead the implementation of an enterprise Snowflake data lake, ensuring timely delivery and optimal performance.<br>• Oversee the integration of multiple data sources, including Oracle Financials, PostgreSQL, and Salesforce, into a unified data platform.<br>• Collaborate with finance teams to facilitate a transition to a 12-month accounting calendar and support accelerated financial close processes.<br>• Develop and maintain multi-source analytics dashboards to enhance operational insights and decision-making.<br>• Manage day-to-day operations of the Snowflake platform, focusing on performance tuning and cost optimization.<br>• Ensure data quality and reliability, providing business users with a trustworthy platform.<br>• Document architectural designs, data workflows, and operational procedures to support sustainable data management.<br>• Coordinate with external vendors to meet project deadlines and ensure successful implementations.
<p>We are looking for a talented Data Engineer to join our team in Miami, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>