<p><strong>Mid-Level Data Engineer (On-Site | Los Angeles, CA)</strong></p><p><em>Build systems that actually drive business decisions.</em></p><p><br></p><p>This is not a “maintain the pipeline and go home” kind of role.</p><p><br></p><p>We’re looking for a sharp, mid-level Data Engineer who wants to operate close to the business, own meaningful projects end-to-end, and build systems that directly impact how decisions get made across an entire organization. You’ll join a small, high-performing team where your work won’t get buried—it will be seen, used, and relied on daily.</p><p><br></p><p>If you’re someone who enjoys solving messy problems, building from scratch, and working in a fast-paced, high-expectation environment, this is the kind of role where you’ll grow quickly.</p><p><br></p><p>What You’ll Do</p><ul><li>Design and build automated data systems (e.g., billing workflows, internal tools)</li><li>Create and maintain BI dashboards and reports using Python, Excel, and visualization tools</li><li>Write and optimize SQL queries and ETL pipelines for clean, reliable data flow</li><li>Analyze large datasets to uncover actionable insights and trends</li><li>Partner with stakeholders across the business to translate needs into technical solutions</li><li>Help improve data accessibility and usability across departments</li><li>Ensure data integrity and accuracy through audits and troubleshooting</li><li>Contribute to a growing data function with high visibility and ownership</li></ul><p>Why This Role Stands Out</p><ul><li>High ownership: You’ll build systems from the ground up, not just maintain them</li><li>Small team, big impact: Work directly with senior leadership and decision-makers</li><li>Growth opportunity: The team is expanding—this role can evolve quickly</li><li>Flexibility within intensity: While this is a high-performance environment, there’s trust and flexibility when needed</li></ul>
We are looking for a highly skilled Senior Google Cloud Engineer to join our team on a long-term contract basis in Salt Lake City, Utah. In this role, you will be responsible for designing, implementing, and maintaining secure, reliable, and scalable cloud infrastructure to support critical business operations. Your contributions will help ensure that teams across the organization can operate efficiently and deliver impactful solutions. If you are passionate about leveraging your expertise in Google Cloud to drive innovation, we want to hear from you.<br><br>Responsibilities:<br>• Design and manage production-grade workloads on Google Cloud, ensuring security, reliability, and cost-effectiveness.<br>• Develop and enforce infrastructure standards for identity, networking, data protection, and secrets management.<br>• Build and maintain automated CI/CD pipelines to streamline infrastructure provisioning, testing, and deployment.<br>• Implement Infrastructure as Code (IaC) solutions using tools like Terraform to enhance scalability and repeatability.<br>• Troubleshoot incidents, conduct root-cause analysis, and refine system monitoring and alerting mechanisms.<br>• Enhance and maintain private connectivity, firewall policies, and least-privilege access for secure cloud environments.<br>• Collaborate with cross-functional teams to review and optimize designs, threat models, and operational readiness.<br>• Mentor team members on cloud best practices, operational excellence, and sustainable on-call strategies.<br>• Improve visibility into cloud costs, system performance, and logging to support organizational goals.<br>• Participate in post-incident reviews to drive continuous improvement of backup, recovery, and capacity management strategies.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This long-term contract position offers an exciting opportunity to work in the manufacturing industry, leveraging your expertise in data processing and engineering. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using tools such as Apache Spark and Python.<br>• Design efficient ETL processes to extract, transform, and load data from various sources.<br>• Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.<br>• Implement and manage big data solutions using Apache Hadoop and Apache Kafka.<br>• Monitor and optimize the performance of data systems to ensure reliability and scalability.<br>• Ensure data quality and integrity through rigorous testing and validation processes.<br>• Troubleshoot and resolve issues related to data pipelines and infrastructure.<br>• Maintain documentation for data workflows and processes to ensure clarity and consistency.<br>• Stay updated on emerging technologies and best practices in data engineering to continuously improve systems.
Position: IT INFRASTRUCTURE ENGINEER / IT HELP DESK MANAGER<br>Location: QUAD CITIES - ONSITE<br>Salary: up to $85K + exceptional benefits<br><br>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***<br><br>Robert Half is looking for an IT HELP DESK MANAGER / IT INFRASTRUCTURE ANALYST - ONSITE IN QUAD CITIES for a permanent, direct-hire, full-time position with our client company.<br><br>In this unique permanent position, you will join a highly successful company.<br><br>This is a thriving organization with a close-knit team. You will have autonomy to manage the IT Help Desk Team and assist in other IT Infrastructure Administration initiatives and projects. You will feel a true sense of ownership and relationship-building, as you will be a go-to person for your own team and customers across the entire organization.<br><br>Responsibilities will include managing and assisting with Help Desk Tier 1-3 tickets and any special projects. A wide breadth of IT experience and a proven track record of IT customer service success are essential. You will build strong collaboration and trust with the Senior Leaders and the IT Infrastructure Teams.<br><br>This is a FANTASTIC opportunity to apply ALL OF YOUR SKILLS, BE VALUED AND REWARDED FOR YOUR CONTRIBUTIONS.
You will not be bored in this position, and your contributions will be recognized and rewarded.<br><br>Requirements:<br> • 7+ years of IT Help Desk and Infrastructure experience in various roles, including: desktop support analyst, help desk manager, system administrator, network administrator, security administrator, and others.<br> • Technical skills will include: MS O365, desktop, hardware, software, Active Directory, user accounts, installing network hardware and software, setting up external network devices, troubleshooting connectivity issues, and other research and resolution.<br> • Must possess exceptional communication, presentation, and customer service skills<br><br>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third-party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This role is based in West Des Moines, Iowa, and offers the opportunity to work on advanced data solutions that support organizational decision-making and efficiency. The ideal candidate will have expertise in relational databases, data cleansing, and modern data warehousing technologies.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support business operations and analytics.<br>• Perform data extraction, transformation, and cleansing to ensure accuracy and reliability.<br>• Collaborate with teams to design and implement data warehouses and data lakes.<br>• Utilize Microsoft SQL Server to build and manage relational database structures.<br>• Analyze data sources and provide recommendations for improving data quality and accessibility.<br>• Create and maintain documentation for data processes, pipelines, and system architecture.<br>• Implement best practices for data storage and retrieval to maximize efficiency.<br>• Troubleshoot and resolve issues related to data processing and integration.<br>• Stay updated on industry trends and emerging technologies to enhance data engineering solutions.
We are looking for a dedicated Application Support Engineer to join our team in Piscataway, New Jersey. In this long-term contract role, you will leverage your technical expertise to support and maintain the organization's learning management system and provide assistance to internal and external partners. This position requires a strong blend of problem-solving skills, administrative capabilities, and technical knowledge to ensure the optimal functionality of our platform.<br><br>Responsibilities:<br>• Provide technical support as the administrator of the organization's learning management system, diagnosing and resolving platform-related issues.<br>• Collaborate with internal and external teams to implement technical fixes and enhancements for the platform.<br>• Research emerging technologies and recommend improvements to enhance user support and operational efficiency.<br>• Create and update user guides and documentation to assist stakeholders in navigating the platform.<br>• Manage course uploads and produce both standard and customized reports to support organizational needs.<br>• Assist with onboarding and technical implementations for new partners using the learning management system.<br>• Participate in testing and reviewing requirements for platform enhancements, including writing user acceptance testing scripts.<br>• Coordinate virtual events and webinars, handling registration setup, production support, and post-event reporting.<br>• Conduct keyword research and content audits to optimize the platform's website and improve data integrity.<br>• Generate analytics and usage reports for eLearning products, identifying trends and actionable insights to support decision-making.
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This Contract to permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, build, and manage data pipelines and systems to support business operations and decision-making processes. This position offers an exciting opportunity to work with cutting-edge technologies within the energy and natural resources sector.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines to efficiently process large volumes of data.<br>• Collaborate with cross-functional teams to gather requirements and design data solutions that meet business needs.<br>• Implement and optimize ETL processes to ensure the accuracy and reliability of data flows.<br>• Utilize technologies such as Apache Spark, Hadoop, and Kafka to manage and process data streams.<br>• Monitor and troubleshoot data systems to ensure optimal performance and reliability.<br>• Perform data integration from multiple sources to create unified datasets for analysis.<br>• Ensure data security and compliance with organizational and industry standards.<br>• Continuously evaluate and adopt new tools and technologies to enhance data engineering practices.<br>• Provide technical guidance and mentorship to entry-level team members as needed.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This Contract to permanent position offers an exciting opportunity to work at the intersection of data engineering, analytics, and business strategy. If you have a strong background in building and optimizing data pipelines and are passionate about leveraging technology to drive insights, we encourage you to apply.<br><br>Responsibilities:<br>• Design, develop, and optimize scalable data pipelines and workflows to support business analytics.<br>• Collaborate with cross-functional teams to gather and analyze data requirements.<br>• Implement ETL processes to extract, transform, and load data from diverse sources.<br>• Utilize tools such as Apache Spark and Hadoop to manage large-scale data processing.<br>• Integrate streaming data systems using Apache Kafka to enhance real-time analytics.<br>• Monitor and troubleshoot data flow and systems to ensure high performance and reliability.<br>• Develop and maintain documentation for data engineering processes and systems.<br>• Ensure data security and integrity across all platforms and processes.<br>• Work closely with stakeholders to translate business needs into technical solutions.<br>• Stay updated with industry trends and emerging technologies to improve data engineering practices.
<p>We are looking for a detail-oriented Data Migration Specialist to join our team on a contract basis in Maple Plain, Minnesota. In this role, you will collaborate with cross-functional teams to assess, cleanse, and migrate data while ensuring its accuracy and usability. This position offers an exciting opportunity to contribute to critical data transformation projects within the manufacturing industry.</p><p><br></p><p>Responsibilities:</p><p>• Assess existing data to identify quality issues, duplication, and structural gaps, ensuring readiness for migration from HubSpot to Salesforce.</p><p>• Cleanse and standardize data using Snowflake, including deduplication, normalization, and application of business rules.</p><p>• Develop and document field mappings and transformation logic to align data with Salesforce requirements.</p><p>• Support test migrations, validate data integrity, and reconcile discrepancies post-migration.</p><p>• Prepare comprehensive documentation, including migration steps, validation checks, and governance guidelines.</p><p>• Standardize field naming conventions, formats, and reference data for consistent usage.</p><p>• Collaborate with stakeholders to define data entry standards and long-term maintenance expectations.</p><p>• Produce migration-ready datasets and ensure alignment with Salesforce’s data model.</p><p>• Deliver clear work instructions and governance documentation to prevent future data issues.</p><p>• Facilitate stakeholder sign-off on data quality and migration outcomes.</p>
<p>We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio. In this role, you will design, implement, and optimize data solutions that support business intelligence and analytics needs. If you have a passion for working with cutting-edge technologies and thrive in a fast-paced environment, this opportunity is for you.</p><p><br></p><p>Responsibilities:</p><p>• Develop and refine data models to ensure optimal performance and scalability.</p><p>• Design and implement data warehouse solutions for managing structured and unstructured data.</p><p>• Create and maintain data integration processes to support analytics and data-driven applications.</p><p>• Establish robust data quality and validation protocols to guarantee accuracy and consistency.</p><p>• Collaborate with business intelligence teams and stakeholders to gather requirements and deliver tailored solutions.</p><p>• Monitor and address issues within data pipelines, including performance bottlenecks and system errors.</p><p>• Research and adopt emerging technologies and best practices to enhance data engineering capabilities.</p>
<p><strong>Data Engineer</strong></p><p>On-site | Austin, TX | Contract-to-Hire</p><p><br></p><p><strong>Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable data pipelines and ETL/ELT processes</li><li>Develop and optimize data architectures for data lakes, warehouses, and analytics platforms</li><li>Ingest, transform, and integrate data from multiple sources (databases, APIs, streaming systems)</li><li>Ensure data quality, reliability, and performance across data systems</li><li>Collaborate with data scientists, analysts, and business stakeholders to support reporting and analytics needs</li><li>Optimize database performance, queries, and data storage strategies</li><li>Implement data governance, security, and compliance best practices</li><li>Automate data workflows and monitoring processes</li><li>Troubleshoot and resolve data pipeline failures and performance issues</li><li>Document data models, workflows, and technical processes</li></ul>
We are looking for a skilled Deployment Engineer to join our team on a contract basis in Reading, Pennsylvania. In this role, you will be responsible for ensuring the seamless installation, configuration, and integration of various devices and software across multiple platforms. This position requires expertise in managing deployments, troubleshooting technical issues, and collaborating with team members to deliver efficient solutions.<br><br>Responsibilities:<br>• Oversee the deployment and setup of Android devices, Chromebooks, iPads, and other hardware.<br>• Utilize deployment tools to streamline installation processes and ensure accuracy.<br>• Configure and manage Active Directory settings to support device integrations.<br>• Provide regular status updates and documentation to track deployment progress.<br>• Install and maintain medical software across designated systems.<br>• Perform desktop administration tasks, including troubleshooting and resolving technical issues.<br>• Collaborate with team members to identify and address deployment challenges.<br>• Ensure compliance with company standards and protocols during all deployment activities.<br>• Train end-users on device usage and software functionalities as needed.
<p>Robert Half Technology is seeking a <strong>mid-to-senior level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (hybrid, 3 days onsite per week)</p><p><strong>Schedule:</strong> Monday-Friday (9AM-5PM PST)</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks</li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong></li><li>Architect and maintain <strong>data marts and data warehouse structures</strong></li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong></li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries)</li><li>Support migration and consolidation of data into Databricks</li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability</li><li>Define best practices for <strong>data storage, organization, and access</strong></li><li>Ensure alignment with existing compliance and data standards</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This contract position offers an exciting opportunity to leverage your expertise in data processing and analytics within the dynamic energy and natural resources industry. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using Apache Spark, Python, and ETL processes.<br>• Design and implement data storage solutions utilizing Apache Hadoop for efficient data management.<br>• Build real-time data streaming architectures with Apache Kafka to support operational needs.<br>• Optimize data workflows to ensure high performance and reliability across systems.<br>• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.<br>• Perform data quality checks and validation to ensure accuracy and consistency of datasets.<br>• Troubleshoot and resolve technical issues related to data processing and integration.<br>• Document processes and workflows to ensure knowledge sharing and operational transparency.<br>• Monitor and improve system performance, ensuring the infrastructure meets business demands.
We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, develop, and maintain data pipelines and systems that support critical business operations within the manufacturing industry. Your expertise in data engineering technologies and frameworks will be key to ensuring efficient data processing and integration.<br><br>Responsibilities:<br>• Develop, optimize, and maintain scalable data pipelines to process large datasets efficiently.<br>• Implement ETL processes to extract, transform, and load data from various sources into centralized systems.<br>• Leverage Apache Spark, Hadoop, and Kafka to design solutions for real-time and batch data processing.<br>• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Document data workflows and processes to ensure clarity and maintainability.<br>• Conduct testing and validation of data systems to ensure accuracy and quality.<br>• Apply Python programming to automate data tasks and streamline workflows.<br>• Stay updated on industry trends and emerging technologies to propose innovative solutions.<br>• Ensure compliance with data security and privacy standards in all engineering efforts.
<p>We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and analytics solutions that support enterprise reporting and advanced dashboards. This role will work with cross‑cloud data sources, including SAP, GCP, and BigQuery, and partner closely with analytics and business teams to deliver high‑quality, analytics‑ready datasets powering BI and AI initiatives.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain data pipelines following <strong>Medallion Architecture (Bronze, Silver, Gold)</strong> best practices.</li><li>Develop and support ETL processes pulling data from <strong>SAP, Google Cloud Platform (GCP), and BigQuery</strong>.</li><li>Ensure high data quality, reliability, and performance across ingestion and transformation layers.</li><li>Support analytics and visualization teams by delivering clean, well‑modeled datasets for:<ul><li><strong>Power BI dashboards using DAX</strong></li><li><strong>Google Looker dashboards using LookML</strong></li></ul></li><li>Collaborate with stakeholders to understand data requirements and translate them into scalable data models.</li><li>Maintain documentation on data sources, transformations, and architecture.</li><li>Support AI and API‑driven initiatives, including planned usage of <strong>Google ADK for API integrations</strong>.</li></ul><p><br></p>
We are looking for a talented Data Engineer to join our team in Glendale, California. In this long-term contract role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and platforms that support critical business operations. Through collaboration with cross-functional teams, you will contribute to innovative data solutions that enhance decision-making processes and drive operational excellence.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support the Core Data platform.<br>• Create tools and services to enhance data discovery, governance, and privacy.<br>• Collaborate with product managers, architects, and software engineers to ensure the success of data platforms.<br>• Apply technologies such as Airflow, Spark, Databricks, Delta Lake, and Kubernetes to build advanced data solutions.<br>• Establish and document best practices for pipeline configurations, naming conventions, and operational standards.<br>• Monitor and ensure the accuracy, reliability, and efficiency of datasets to meet service level agreements (SLAs).<br>• Participate in agile and scrum ceremonies to improve collaboration and team processes.<br>• Foster relationships with stakeholders to understand their needs and prioritize platform enhancements.<br>• Maintain detailed documentation to support data governance and quality initiatives.
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, Netsuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
<p>This is a Tuesday through Saturday shift, 10am to 6pm in Ashburn, leading infrastructure and operations for a data center site. There is opportunity for a long-term contract and possible conversion to permanent.</p><p>· Manages infrastructure installation requests from initiation to closure.</p><p>· Reports to the Infrastructure Manager.</p><p>· Supports building Power Downs/Single Sided Events.</p><p>· Certify infrastructure network connections and peripherals prior to handing off to DCS.</p><p>· Interface with the BAU team to provide any additional infrastructure.</p><p>· Interface with the BUs and Hines engineering.</p><p>· Coordinates work requests with electrical contractor to ensure all Service Levels are met.</p><p>· Provides detailed estimates of assigned projects; documents requirements through direct communication with other members of the IT or Project Management team as well as interpretation of schematics and drawings.</p><p>· Create tickets/TCMs and monitor for approval; close out or cancel them once work is complete.</p><p>· Coordinate power remediation from start to completion.</p><p>· Assist with rack-level capacity management and MOA monitoring.</p><p>· Participate in all meetings pertaining to BAU or project INF work.</p><p>· Provides Root Cause Analysis reports regarding power issues.</p><p>· Understanding of the Power and Cooling of a 2N+1 Data Center.</p><p>· Management of M&E vendors.</p><p>· RPP surveys for Single Sided Events.</p><p>· Creation of GPC notifications.</p><p>· Incident Management.</p><p>· Coordinate infrastructure installs, including but not limited to the following:</p><ul><li>Survey for Power installs.</li><li>Infrastructure & Rack installs.</li><li>Survey for Infrastructure Cabling (Fiber and copper).</li><li>New Data Center Build-outs.</li><li>Responsible for the ordering of all infrastructure materials.</li><li>Responsible for tracking Purchase Orders.</li><li>Tracking Infrastructure Stock.</li><li>Storeroom 
Management.</li></ul>
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable, performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
We are looking for an experienced Data Engineer to join our team on a long-term contract basis. Based in Houston, Texas, this role offers an exciting opportunity to work with cutting-edge data technologies, design scalable solutions, and contribute to data-driven decision-making processes. If you are passionate about optimizing data systems and driving innovation, we encourage you to apply.<br><br>Responsibilities:<br>• Develop, maintain, and optimize scalable data pipelines using Apache Spark and Python.<br>• Implement ETL processes to ensure seamless extraction, transformation, and loading of data across systems.<br>• Collaborate with cross-functional teams to integrate Apache Hadoop and Apache Kafka into the data architecture.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Design and maintain data models, ensuring alignment with business requirements.<br>• Conduct thorough testing and validation of data processes to guarantee accuracy.<br>• Document data workflows and processes for future reference and team collaboration.<br>• Provide technical guidance and support to team members on data engineering best practices.<br>• Stay current on emerging technologies and trends in big data and analytics.<br>• Contribute to improving data governance and security protocols.
<p><strong>Cost Accounting Analyst – Global Manufacturing Organization</strong></p><p>Location: Metro Detroit | Hybrid Work Environment</p><p>Our client, a <strong>globally recognized leader in its industry</strong>, is seeking a <strong>Cost Accounting Analyst</strong> to join its growing finance organization. This role offers an excellent opportunity for an accounting professional who wants to deepen their expertise in <strong>absorption costing, manufacturing finance, and global cost analysis</strong> while working in a highly collaborative environment.</p><p>This position partners closely with <strong>operations, FP&A, and corporate finance</strong> to support accurate financial reporting, strengthen cost visibility, and provide insights that support operational decision-making across a global manufacturing platform. While the primary focus of the role is <strong>cost accounting and inventory valuation</strong>, the organization will <strong>train the individual on broader general ledger activities</strong>, creating a strong development path for future advancement.</p><p><strong>Key Responsibilities</strong></p><p><strong>Cost Accounting & Inventory</strong></p><ul><li>Maintain and analyze product costs including <strong>materials, labor, overhead, and subcontracting costs</strong> within an absorption costing environment.</li><li>Perform <strong>monthly cost roll-ups</strong> and validate the accuracy of bills of materials and manufacturing routings.</li><li>Analyze and explain <strong>manufacturing variances</strong> including material usage, labor efficiency, and overhead absorption.</li><li>Support inventory valuation processes in accordance with <strong>U.S. GAAP</strong>, including raw materials, work-in-process, and finished goods.</li><li>Monitor <strong>slow-moving, obsolete, and excess inventory reserves</strong> and assist with related analysis.</li><li>Participate in <strong>physical inventory counts and cycle counts</strong>, reconciling discrepancies and improving inventory accuracy.</li></ul><p><strong>General Ledger & Close Support</strong></p><ul><li>Assist with <strong>journal entries, reconciliations, and cost-related adjustments</strong> during the monthly close.</li><li>Reconcile accounts including <strong>inventory, COGS, manufacturing variances, accruals, and reserves</strong>.</li><li>Maintain fixed asset records and support depreciation tracking.</li><li>Assist with <strong>intercompany transactions and reconciliations</strong> within a global structure.</li><li>Ensure compliance with company policies and <strong>U.S. GAAP</strong>.</li></ul><p><strong>Financial Analysis & Business Partnership</strong></p><ul><li>Prepare <strong>product and customer margin analysis</strong> and explain key drivers of profitability.</li><li>Support <strong>budgeting, forecasting, and operational cost modeling</strong> related to production activity.</li><li>Partner with FP&A to explain <strong>cost drivers, operational trends, and P&L fluctuations</strong>.</li><li>Participate in <strong>continuous improvement and cost reduction initiatives</strong>.</li></ul><p>For immediate consideration, or if you have questions, please contact Jeff Sokolowski directly at (248) 365-6131.</p>
We are looking for an experienced Data Engineer to join our team in Chicago, Illinois. In this role, you will design and implement data solutions that drive business insights and support strategic decision-making. Your expertise in Microsoft Fabric and Azure Databricks will be key in optimizing data workflows and ensuring the reliability of our data systems.<br><br>Responsibilities:<br>• Develop, implement, and maintain scalable data pipelines to support business analytics and reporting needs.<br>• Utilize Microsoft Fabric and Azure Databricks to design efficient data architectures and workflows.<br>• Collaborate with cross-functional teams to understand data requirements and deliver tailored solutions.<br>• Ensure data integrity and security across all systems and processes.<br>• Optimize data storage and retrieval processes for improved performance and scalability.<br>• Monitor system performance and troubleshoot issues as needed to ensure seamless operations.<br>• Document processes and procedures to maintain a clear record of data engineering solutions.<br>• Stay updated with emerging technologies and industry best practices to enhance data engineering capabilities.