<p>We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows.</p><p>This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ELT/ETL data pipelines and workflows</li><li>Develop and optimize Snowflake-based data warehouse solutions</li><li>Build and maintain data models and transformation logic to support analytics and reporting</li><li>Write efficient and high-quality Python and SQL code to support data engineering processes</li><li>Develop reusable data engineering frameworks and backend data services</li><li>Implement and maintain CI/CD pipelines using GitHub and related tooling</li><li>Build automated testing frameworks to ensure data quality and reliability</li><li>Create reporting and visualization solutions using tools such as Power BI</li><li>Monitor production data systems and resolve performance or reliability issues</li><li>Support continuous improvement of data architecture, processes, and standards</li></ul>
<p>Robert Half is seeking a Data Engineer to design, build, and maintain enterprise data infrastructure and analytics platforms. This role will serve as the technical owner of data architecture, ensuring data quality, governance, and accessibility across the organization.</p><p>This is a highly visible role supporting leadership and business teams by enabling reliable, data-driven decision-making through scalable data solutions and modern analytics tools.</p><p><br></p><p><strong>Job Responsibilities</strong></p><ul><li>Design and implement enterprise data architecture, including data models and integration patterns to establish a single source of truth </li><li>Build and manage analytics platforms to support reporting and business intelligence initiatives </li><li>Develop and maintain high-impact dashboards using Power BI or similar tools for leadership and operational teams </li><li>Design and build automated ETL/ELT pipelines across multiple systems and data sources </li><li>Define and enforce data governance standards, including metric definitions, data quality rules, and access controls </li><li>Monitor and optimize data pipeline performance, including troubleshooting failures and implementing automated error handling </li><li>Investigate and resolve data quality issues (e.g., duplicates, sync failures) and implement proactive monitoring solutions </li><li>Enable self-service analytics by creating user-friendly data models and supporting end users with training and documentation </li><li>Ensure compliance with data security and regulatory requirements, including proper data handling and access controls </li><li>Partner with IT leadership to recommend tools, technologies, and best practices to enhance data capabilities </li></ul>
<p>Robert Half Technology is seeking a <strong>mid-to-senior level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (hybrid, 3 days onsite)</p><p><strong>Schedule:</strong> Monday-Friday (9AM-5PM PST)</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks</li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong></li><li>Architect and maintain <strong>data marts and data warehouse structures</strong></li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong></li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries)</li><li>Support migration and consolidation of data into Databricks</li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability</li><li>Define best practices for <strong>data storage, organization, and access</strong></li><li>Ensure alignment with existing compliance and data standards</li></ul><p><br></p>
<p><strong>Mid-Level Data Engineer (On-Site | Los Angeles, CA)</strong></p><p><em>Build systems that actually drive business decisions.</em></p><p><br></p><p>This is not a “maintain the pipeline and go home” kind of role.</p><p><br></p><p>We’re looking for a sharp, mid-level Data Engineer who wants to operate close to the business, own meaningful projects end-to-end, and build systems that directly impact how decisions get made across an entire organization. You’ll join a small, high-performing team where your work won’t get buried—it will be seen, used, and relied on daily.</p><p><br></p><p>If you’re someone who enjoys solving messy problems, building from scratch, and working in a fast-paced, high-expectation environment, this is the kind of role where you’ll grow quickly.</p><p><br></p><p><strong>What You’ll Do</strong></p><ul><li>Design and build automated data systems (e.g., billing workflows, internal tools)</li><li>Create and maintain BI dashboards and reports using Python, Excel, and visualization tools</li><li>Write and optimize SQL queries and ETL pipelines for clean, reliable data flow</li><li>Analyze large datasets to uncover actionable insights and trends</li><li>Partner with stakeholders across the business to translate needs into technical solutions</li><li>Help improve data accessibility and usability across departments</li><li>Ensure data integrity and accuracy through audits and troubleshooting</li><li>Contribute to a growing data function with high visibility and ownership</li></ul><p><strong>Why This Role Stands Out</strong></p><ul><li>High ownership: You’ll build systems from the ground up, not just maintain them</li><li>Small team, big impact: Work directly with senior leadership and decision-makers</li><li>Growth opportunity: The team is expanding—this role can evolve quickly</li><li>Flexibility within intensity: While this is a high-performance environment, there’s trust and flexibility when needed</li></ul>
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This contract-to-permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
<p>We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio. In this role, you will design, implement, and optimize data solutions that support business intelligence and analytics needs. If you have a passion for working with cutting-edge technologies and thrive in a fast-paced environment, this opportunity is for you.</p><p><br></p><p>Responsibilities:</p><p>• Develop and refine data models to ensure optimal performance and scalability.</p><p>• Design and implement data warehouse solutions for managing structured and unstructured data.</p><p>• Create and maintain data integration processes to support analytics and data-driven applications.</p><p>• Establish robust data quality and validation protocols to guarantee accuracy and consistency.</p><p>• Collaborate with business intelligence teams and stakeholders to gather requirements and deliver tailored solutions.</p><p>• Monitor and address issues within data pipelines, including performance bottlenecks and system errors.</p><p>• Research and adopt emerging technologies and best practices to enhance data engineering capabilities.</p>
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines using modern frameworks and tools.<br>• Implement ETL processes to ensure accurate and efficient data transformation.<br>• Optimize data storage and retrieval systems for performance and scalability.<br>• Collaborate with cross-functional teams to understand data requirements and deliver solutions.<br>• Utilize Apache Spark and Hadoop for large-scale data processing.<br>• Work with Databricks to streamline data workflows and enhance analytics.<br>• Apply machine learning techniques using tools like scikit-learn and Pandas.<br>• Integrate Kafka for real-time data streaming and processing.<br>• Analyze and troubleshoot data-related issues to ensure system reliability.<br>• Document processes and workflows to support future development and maintenance.
<p>A manufacturing and distribution company is looking for a Data Engineer with 3+ years of experience to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will play a crucial part in designing and maintaining data infrastructure to support analytics and decision-making processes. You will be a key contributor in developing, optimizing, and maintaining the data infrastructure that supports analytics, business intelligence initiatives, and data-driven decision-making using Snowflake, Matillion, and other tools. This position will be in-office to work closely with the team. No 3rd parties please.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization.</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
<ul><li>Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).</li><li>Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).</li><li>Implement and manage data orchestration and dependency management using Dagster or similar tools.</li><li>Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.</li><li>Optimize data models and storage strategies for performance, scalability, and cost efficiency.</li><li>Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.</li><li>Support CI/CD pipelines and infrastructure-as-code practices for data platforms.</li><li>Enforce data security, governance, and compliance best practices within Azure.</li></ul>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, build, and manage data pipelines and systems to support business operations and decision-making processes. This position offers an exciting opportunity to work with cutting-edge technologies within the energy and natural resources sector.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines to efficiently process large volumes of data.<br>• Collaborate with cross-functional teams to gather requirements and design data solutions that meet business needs.<br>• Implement and optimize ETL processes to ensure the accuracy and reliability of data flows.<br>• Utilize technologies such as Apache Spark, Hadoop, and Kafka to manage and process data streams.<br>• Monitor and troubleshoot data systems to ensure optimal performance and reliability.<br>• Perform data integration from multiple sources to create unified datasets for analysis.<br>• Ensure data security and compliance with organizational and industry standards.<br>• Continuously evaluate and adopt new tools and technologies to enhance data engineering practices.<br>• Provide technical guidance and mentorship to entry-level team members as needed.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This contract position offers an exciting opportunity to leverage your expertise in data processing and analytics within the dynamic energy and natural resources industry. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using Apache Spark, Python, and ETL processes.<br>• Design and implement data storage solutions utilizing Apache Hadoop for efficient data management.<br>• Build real-time data streaming architectures with Apache Kafka to support operational needs.<br>• Optimize data workflows to ensure high performance and reliability across systems.<br>• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.<br>• Perform data quality checks and validation to ensure accuracy and consistency of datasets.<br>• Troubleshoot and resolve technical issues related to data processing and integration.<br>• Document processes and workflows to ensure knowledge sharing and operational transparency.<br>• Monitor and improve system performance, ensuring the infrastructure meets business demands.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This long-term contract position offers an exciting opportunity to work in the manufacturing industry, leveraging your expertise in data processing and engineering. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using tools such as Apache Spark and Python.<br>• Design efficient ETL processes to extract, transform, and load data from various sources.<br>• Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.<br>• Implement and manage big data solutions using Apache Hadoop and Apache Kafka.<br>• Monitor and optimize the performance of data systems to ensure reliability and scalability.<br>• Ensure data quality and integrity through rigorous testing and validation processes.<br>• Troubleshoot and resolve issues related to data pipelines and infrastructure.<br>• Maintain documentation for data workflows and processes to ensure clarity and consistency.<br>• Stay updated on emerging technologies and best practices in data engineering to continuously improve systems.
<p>We are supporting our client in hiring a Product Data Engineer who will take full ownership of their product information environment. This role centers on managing their PIM solution (Salsify), improving data structures, and building automated, API‑driven integrations that ensure product data is clean, scalable, and synchronized across platforms.</p><p>This position will be deeply involved in a major product‑data overhaul, including cleanup, restructuring, and long‑term system improvements. The ideal candidate is someone who enjoys solving data problems, building automated workflows, and improving the reliability of product information across systems.</p><p><br></p><p> Key Responsibilities</p><p>Product Data Platform Ownership</p><ul><li>Act as the primary administrator for the PIM platform</li><li>Define and maintain product attributes, hierarchies, and data relationships</li><li>Create validation rules, formulas, and workflows to enforce data standards</li><li>Manage permissions, governance, and platform configuration</li><li>Troubleshoot issues related to imports, exports, and publishing</li></ul><p>Integrations & Automation</p><ul><li>Manage integrations between the PIM and internal/external systems (eCommerce, retail, etc.)</li><li>Build and support API‑based data flows with a focus on reliability and scale</li><li>Develop automation using scripting (Python preferred)</li><li>Support event‑driven or automated pipelines to reduce manual work</li><li>Monitor integration performance and proactively resolve failures</li></ul><p>Product Data Improvements</p><ul><li>Contribute to a large‑scale product data cleanup and restructuring effort</li><li>Identify gaps in current data models and workflows</li><li>Partner with cross‑functional teams to define scalable data standards</li><li>Improve system design to support long‑term growth</li></ul><p>Channel Syndication</p><ul><li>Manage product data distribution to digital and retail channels</li><li>Ensure data meets 
channel‑specific requirements</li><li>Troubleshoot publishing issues and improve success rates</li><li>Support product launches and updates across channels</li></ul><p>Data Governance & Quality</p><ul><li>Establish naming conventions, validation rules, and governance standards</li><li>Define and track data quality KPIs (accuracy, completeness, timeliness)</li><li>Utilize or support data governance tools</li><li>Work with business teams to improve data accountability</li></ul><p>Reporting & Metrics</p><ul><li>Build dashboards and reports on data quality and system performance</li><li>Provide insights to leadership to support decision‑making</li><li>Track syndication outcomes and operational metrics</li></ul><p>Operational Support</p><ul><li>Handle day‑to‑day platform usage, enhancements, and issue resolution</li><li>Prioritize incoming requests and tickets</li><li>Ensure stability and reliability of product data operations</li></ul><p><br></p>
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
We are looking for an experienced Epicor Database Developer to join our team in New Orleans, Louisiana. The ideal candidate will be skilled in designing and managing database systems, optimizing stored procedures, and working with Epicor ERP solutions. This role is perfect for someone who thrives on solving complex data challenges and ensuring seamless system integrations.<br><br>Responsibilities:<br>• Develop, maintain, and optimize SQL databases to support business processes and data requirements.<br>• Write and refine stored procedures and queries using T-SQL to ensure efficient data handling.<br>• Design and implement ETL processes to extract, transform, and load data between systems.<br>• Collaborate with stakeholders to analyze data needs and develop solutions within Epicor ERP systems.<br>• Troubleshoot and resolve database performance issues to ensure reliable operations.<br>• Implement best practices for database security and data integrity.<br>• Work closely with cross-functional teams to support system integrations and upgrades.<br>• Provide documentation and training for database processes and workflows.<br>• Stay updated on the latest developments in database technologies and Epicor ERP enhancements.
<p><strong>Azure Developer</strong></p><p>We are seeking a knowledgeable <strong>Azure Developer</strong> to build cloud-native applications and services using Microsoft Azure technologies. This role is ideal for someone who enjoys designing scalable solutions, working with modern cloud tools, and collaborating closely with software and cloud engineering teams. The ideal candidate will have strong development skills, deep understanding of Azure services, and a passion for cloud innovation.</p><p><strong>Responsibilities</strong></p><ul><li>Develop cloud-based applications using Azure Functions, App Services, Logic Apps, and related services</li><li>Build APIs, microservices, and serverless workloads using .NET, C#, or other Azure-supported languages</li><li>Implement Azure integrations using Service Bus, Event Hub, API Management, or Durable Functions</li><li>Create and optimize Azure DevOps pipelines for CI/CD automation</li><li>Develop Infrastructure-as-Code templates using ARM, Bicep, or Terraform</li><li>Collaborate with architects and DevOps teams to ensure scalable cloud designs</li><li>Troubleshoot application issues, performance bottlenecks, and integration problems</li><li>Monitor cloud workloads, logs, costs, and performance metrics</li><li>Maintain documentation for Azure solutions, APIs, and deployment procedures</li><li>Participate in code reviews, design sessions, and architectural discussions</li></ul><p><br></p>
We are looking for an Oracle Integration Cloud Developer to support a growing enterprise integration landscape in Irvine, California. This long-term contract position will focus on building and enhancing cloud-based integrations across Oracle SaaS environments, with an emphasis on reliable data movement and scalable interface design. The role works closely with HR and Finance technology stakeholders to strengthen data ownership models and deliver efficient, well-structured integration solutions.<br><br>Responsibilities:<br>• Design, develop, and maintain integration solutions primarily within Oracle Integration Cloud to connect Oracle SaaS applications and related enterprise platforms.<br>• Partner with HR and Finance information systems teams to support data stewardship objectives and ensure integrations align with business ownership needs.<br>• Create and optimize interfaces that move data accurately between source and target systems while supporting performance, reliability, and maintainability.<br>• Contribute to the evolution of integration methods by reducing dependence on legacy extract and reporting-based approaches where appropriate.<br>• Support a high-volume integration environment by monitoring existing interfaces, troubleshooting issues, and implementing enhancements as business demands grow.<br>• Work across core Oracle Fusion Cloud modules and connected systems, including timekeeping-related interfaces that feed Oracle Time and Labor.<br>• Produce technical documentation, mapping details, and development standards to support consistent delivery and long-term supportability.<br>• Collaborate with cross-functional teams to test, deploy, and refine integrations while helping minimize reliance on older middleware tools such as Boomi.
<p>We are seeking a Data Architect to lead the design and evolution of data architecture. This individual will play a critical role in shaping how data is collected, stored, integrated, and consumed across the organization, supporting analytics and reporting. The ideal candidate brings a balance of hands‑on technical depth, architectural leadership, and strong collaboration with business and technology stakeholders.</p><p><br></p><p>Responsibilities</p><ul><li>Lead the design and implementation of scalable, secure, and high‑performing enterprise data architectures.</li><li>Define data architecture standards, reference architectures, and best practices across platforms.</li><li>Architect data solutions supporting analytics, BI, data science, and operational reporting.</li><li>Design and oversee data models for data warehouses, data lakes, and lakehouse architectures.</li><li>Partner with application, infrastructure, security, and analytics teams to ensure seamless data integration.</li><li>Evaluate and recommend data technologies, tools, and platforms aligned to business strategy.</li><li>Establish and enforce governance, data quality, metadata management, and lineage practices.</li><li>Provide technical leadership and mentorship to data engineers and analytics teams.</li><li>Translate business requirements into actionable data solutions for senior stakeholders.</li></ul>
We are seeking a hands-on Senior Enterprise Architect in Artificial Intelligence (AI) to join our global Enterprise Architecture team. This role blends deep technical expertise with architectural design and practical implementation to drive AI-powered transformation initiatives.<br><br>As part of a forward-thinking global technology team, you’ll collaborate across business, data, and product functions to design and implement AI/ML solutions that enable digital products and services.<br><br>Key Responsibilities<br><br>Design and architect enterprise-scale AI/ML solutions across areas such as Machine Learning, Generative AI, Deep Learning, Virtual Assistants, and Cognitive Services (Vision/Image, Text/Language processing).<br>Develop and communicate AI roadmaps, future-state architectures, and design artifacts.<br>Rapidly prototype and build proof-of-concepts (PoCs) and MVPs for AI models and algorithms.<br>Evaluate and recommend AI/ML tools, platforms, and frameworks; conduct ROI analysis.<br>Experiment with and fine-tune LLMs, train custom models, and assess performance metrics.<br>Perform data exploration, cleansing, and feature engineering to prepare datasets for model training.<br>Guide and mentor engineering and data science teams through AI/ML solution design, deployment, and integration into enterprise workflows.<br>Continuously scan industry innovations and apply emerging AI/ML technologies to business problems.<br>What We’re Looking For<br><br>Strong technical and business acumen in creating technology-driven solutions.<br>Passion for experimenting with and adopting emerging AI/ML technologies.<br>Excellent communication and influencing skills; ability to present complex technical concepts to both technical and non-technical audiences.<br>Proven ability to balance timeliness, cost, and quality in solution design.<br>Experience leading digital transformation, target operating models, and performance improvement 
initiatives.<br>Qualifications<br><br>Bachelor’s degree in STEM or related field (MBA a plus).<br>5+ years in AI/ML solution architecture, prototyping, and experimentation.<br>5+ years working with AWS and/or Azure data, analytics, and AI services.<br>3+ years of experience with data science tools and frameworks.<br>Recent, hands-on experience with Generative AI, LLMs, and Agentic AI platforms.<br>Knowledge of cloud-native services (data storage, compute, networking, security).<br>Strong understanding of statistical methods, data preprocessing, and feature engineering.
<p>Our client is running Oracle E-Business Suite 12.2 within an Oracle VM (OVM) environment and is currently experiencing a critical infrastructure and virtualization failure following a hardware incident. They are seeking a senior-level Oracle Infrastructure consultant to stabilize Oracle VM, protect production, and avoid a full bare-metal rebuild. This is a high-risk recovery engagement with no current backup, requiring deep hands-on expertise and strong judgment in production environments.</p><p>Current Environment</p><p>• Oracle E-Business Suite 12.2</p><p>• Oracle VM / Oracle VM Server (OVS) 3.4.6</p><p>• WebLogic in use</p><p>• MySQL running</p><p>• Application and database tiers virtualized</p><p>• Oracle VM Manager previously running with over 5 years of uptime</p><p><br></p><p>Key Responsibilities</p><p>• Diagnose and troubleshoot Oracle VM Manager and OVS failures</p><p>• Review system services, logs, and Oracle VM configuration to identify root cause</p><p>• Determine whether Oracle VM can be safely restarted or repaired</p><p>• Advise on risks and validate safe recovery actions in a no-backup scenario</p><p>• Enable safe migration on and off servers once OVM is operational</p><p>• Provide clear guidance on next steps to stabilize production</p><p>• Support the client via screen sharing and live walkthroughs</p>
<p><strong>Senior Data Engineer</strong></p><p><strong>Location:</strong> Philadelphia, PA (Hybrid/Onsite as required)</p><p><strong>Employment Type: </strong>39-Week Contract, Potential for Extension</p><p><strong>Position Overview</strong></p><p>We are seeking an experienced <strong>Data Engineer</strong> to support the development and ongoing operation of a large-scale, cloud-based IoT platform. This role focuses on building and supporting scalable, secure, and high‑performance infrastructure, tooling, and frameworks that enable engineering teams to efficiently develop, test, deploy, and operate modern microservices.</p><p>The ideal candidate brings strong cloud engineering experience, a passion for quality and security, and the ability to collaborate in a fast‑paced Agile environment.</p><p><strong>Key Responsibilities</strong></p><ul><li>Develop, operate, and support DevOps and platform engineering tools that enable cloud-based IoT services</li><li>Build and promote horizontal tools, frameworks, and best practices supporting microservices, CI/CD, security, monitoring, and performance</li><li>Collaborate with engineering teams to define development standards, workflows, and methodologies</li><li>Design and implement shared libraries and frameworks to support scalable and highly available systems</li><li>Support production platform operations, troubleshooting, and continuous improvement with focus on quality, performance, and security</li><li>Translate system architecture and product requirements into well-designed, tested software solutions</li><li>Work in an Agile environment delivering incremental, high-quality software</li><li>Provide technical guidance and promote modern engineering practices across teams</li></ul>