<p>We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows.</p><p>This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ELT/ETL data pipelines and workflows</li><li>Develop and optimize Snowflake-based data warehouse solutions</li><li>Build and maintain data models and transformation logic to support analytics and reporting</li><li>Write efficient and high-quality Python and SQL code to support data engineering processes</li><li>Develop reusable data engineering frameworks and backend data services</li><li>Implement and maintain CI/CD pipelines using GitHub and related tooling</li><li>Build automated testing frameworks to ensure data quality and reliability</li><li>Create reporting and visualization solutions using tools such as Power BI</li><li>Monitor production data systems and resolve performance or reliability issues</li><li>Support continuous improvement of data architecture, processes, and standards</li></ul>
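<p>To illustrate the automated data-quality testing described above, here is a minimal Python sketch using pandas. The table and column names are hypothetical; a production framework would run checks like these inside the pipeline and fail the run on violations.</p><pre><code>import pandas as pd

# Hypothetical extract: in practice this would come from Snowflake or a stage.
orders = pd.DataFrame(
    {
        "order_id": [1, 2, 2, 4],
        "amount": [125.0, 80.5, 80.5, None],
    }
)

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].isna().any():
        failures.append("null amounts")
    if df["amount"].lt(0).any():
        failures.append("negative amounts")
    return failures

if issues := run_quality_checks(orders):
    # A real pipeline would raise here so the scheduler flags the run.
    raise ValueError("data-quality checks failed: " + "; ".join(issues))
</code></pre>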
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania, on a contract-to-permanent basis. This role offers an exciting opportunity to design, implement, and optimize data pipelines while integrating applications with various digital marketplaces. The ideal candidate will bring strong technical expertise and a collaborative mindset to support business insights and analytics effectively.<br><br>Responsibilities:<br>• Develop and maintain data pipelines and ensure seamless application connectivity with digital marketplaces such as TikTok Shop, Shopify, and Amazon.<br>• Collaborate closely with business teams to understand requirements and provide actionable analytics.<br>• Lead the creation of scalable and efficient data solutions tailored to business needs.<br>• Apply expertise in Python, Snowflake, and other relevant technologies to deliver high-quality results.<br>• Facilitate and support integrations with e-commerce platforms, leveraging previous experience where applicable.<br>• Build robust APIs and ensure their effective implementation.<br>• Utilize Microsoft SQL Server for database management and optimization.<br>• Provide technical guidance and mentorship to ensure project success.<br>• Troubleshoot and resolve issues related to data workflows and integrations.<br>• Continuously evaluate and improve processes to enhance efficiency and performance.
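<p>As a sketch of the marketplace connectivity this posting describes, the snippet below pulls recent orders from Shopify's REST Admin API with Python. The shop domain, access token, and API version are placeholders; TikTok Shop and Amazon expose analogous but separate APIs.</p><pre><code>import requests

# Placeholders: real values come from the app's configuration.
SHOP = "example-store.myshopify.com"
TOKEN = "shpat_..."
API_VERSION = "2024-01"  # assumed version; pin to whatever the app targets

resp = requests.get(
    f"https://{SHOP}/admin/api/{API_VERSION}/orders.json",
    headers={"X-Shopify-Access-Token": TOKEN},
    params={"status": "any", "limit": 50},
    timeout=30,
)
resp.raise_for_status()

for order in resp.json()["orders"]:
    # Flatten just the fields the downstream warehouse tables care about.
    print(order["id"], order["created_at"], order["total_price"])
</code></pre>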
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, develop, and maintain data pipelines and systems that support critical business operations within the manufacturing industry. Your expertise in data engineering technologies and frameworks will be key to ensuring efficient data processing and integration.<br><br>Responsibilities:<br>• Develop, optimize, and maintain scalable data pipelines to process large datasets efficiently.<br>• Implement ETL processes to extract, transform, and load data from various sources into centralized systems.<br>• Leverage Apache Spark, Hadoop, and Kafka to design solutions for real-time and batch data processing.<br>• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Document data workflows and processes to ensure clarity and maintainability.<br>• Conduct testing and validation of data systems to ensure accuracy and quality.<br>• Apply Python programming to automate data tasks and streamline workflows.<br>• Stay updated on industry trends and emerging technologies to propose innovative solutions.<br>• Ensure compliance with data security and privacy standards in all engineering efforts.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this contract-to-permanent position, you will play a key role in designing, developing, and optimizing data solutions while collaborating with cross-functional teams to deliver impactful results. This role offers an excellent opportunity to contribute to innovative projects and mentor other developers.<br><br>Responsibilities:<br>• Design and implement scalable data solutions using tools such as Apache Spark, Hadoop, and Kafka.<br>• Build and maintain efficient ETL processes to ensure seamless data transformation and integration.<br>• Collaborate with product owners, business analysts, and stakeholders to gather requirements and translate them into technical solutions.<br>• Optimize and troubleshoot complex data workflows to enhance performance and reliability.<br>• Lead technical discussions and provide architectural guidance for best practices and development standards.<br>• Mentor entry-level developers and conduct code reviews to ensure high-quality deliverables.<br>• Integrate data solutions with existing systems and third-party tools using APIs and cloud platforms.<br>• Stay updated with the latest data engineering technologies and proactively recommend improvements.<br>• Work within Agile/Scrum teams to deliver solutions aligned with user stories and project goals.<br>• Ensure compliance with security and quality standards through thorough documentation and testing.
We are looking for an experienced Data Engineer to join our team in Newtown Square, Pennsylvania. In this long-term contract position, you will play a pivotal role in designing and implementing robust data solutions to support organizational goals. This is an exciting opportunity to lead the development of modern data architectures and collaborate with diverse teams to drive impactful results.<br><br>Responsibilities:<br>• Lead the implementation of an enterprise Snowflake data lake, ensuring timely delivery and optimal performance.<br>• Oversee the integration of multiple data sources, including Oracle Financials, PostgreSQL, and Salesforce, into a unified data platform.<br>• Collaborate with finance teams to facilitate a transition to a 12-month accounting calendar and support accelerated financial close processes.<br>• Develop and maintain multi-source analytics dashboards to enhance operational insights and decision-making.<br>• Manage day-to-day operations of the Snowflake platform, focusing on performance tuning and cost optimization.<br>• Ensure data quality and reliability, providing business users with a trustworthy platform.<br>• Document architectural designs, data workflows, and operational procedures to support sustainable data management.<br>• Coordinate with external vendors to meet project deadlines and ensure successful implementations.
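<p>For illustration, a minimal Python sketch of the kind of Snowflake load this role would operate: bulk-loading staged Oracle Financials extracts with the Snowflake connector. The connection parameters, stage, and table names are assumptions.</p><pre><code>import snowflake.connector

# All connection parameters below are placeholders.
conn = snowflake.connector.connect(
    account="myorg-myaccount",
    user="ETL_SVC",
    password="...",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

cur = conn.cursor()
try:
    # Bulk-load Parquet files from an external stage into a landing table.
    cur.execute(
        """
        COPY INTO staging.oracle_gl_lines
        FROM @ext_stage/oracle_financials/gl_lines/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
        """
    )
    print(cur.fetchall())  # per-file load results
finally:
    cur.close()
    conn.close()
</code></pre>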
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They have a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>The ideal candidate is a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Purview and advance the client's data governance practices. Administration experience with Microsoft Purview or a similar tool (Collibra, Informatica, Databricks, etc.) is required, and experience with Microsoft Purview is preferred. The client has the Data Security layer of Purview implemented; this role will help connect Microsoft Fabric to Purview and will work with a Microsoft partner to implement the Data Governance layer (unified data catalog, data quality, data lineage, and data health management). Excellent communication skills are essential: this person will lead change, secure buy-in from stakeholders, and help advance the client's data governance practice.</p>
<p>Robert Half is seeking a Data Engineer to design, build, and maintain enterprise data infrastructure and analytics platforms. This role will serve as the technical owner of data architecture, ensuring data quality, governance, and accessibility across the organization.</p><p>This is a highly visible role supporting leadership and business teams by enabling reliable, data-driven decision-making through scalable data solutions and modern analytics tools.</p><p><br></p><p><strong>Job Responsibilities</strong></p><ul><li>Design and implement enterprise data architecture, including data models and integration patterns to establish a single source of truth </li><li>Build and manage analytics platforms to support reporting and business intelligence initiatives </li><li>Develop and maintain high-impact dashboards using Power BI or similar tools for leadership and operational teams </li><li>Design and build automated ETL/ELT pipelines across multiple systems and data sources </li><li>Define and enforce data governance standards, including metric definitions, data quality rules, and access controls </li><li>Monitor and optimize data pipeline performance, including troubleshooting failures and implementing automated error handling </li><li>Investigate and resolve data quality issues (e.g., duplicates, sync failures) and implement proactive monitoring solutions </li><li>Enable self-service analytics by creating user-friendly data models and supporting end users with training and documentation </li><li>Ensure compliance with data security and regulatory requirements, including proper data handling and access controls </li><li>Partner with IT leadership to recommend tools, technologies, and best practices to enhance data capabilities </li></ul>
<p><strong>Mid-Level Data Engineer (On-Site | Los Angeles, CA)</strong></p><p><em>Build systems that actually drive business decisions.</em></p><p><br></p><p>This is not a “maintain the pipeline and go home” kind of role.</p><p><br></p><p>We’re looking for a sharp, early-career Data Engineer who wants to operate close to the business, own meaningful projects end-to-end, and build systems that directly impact how decisions get made across an entire organization. You’ll join a small, high-performing team where your work won’t get buried—it will be seen, used, and relied on daily.</p><p><br></p><p>If you’re someone who enjoys solving messy problems, building from scratch, and working in a fast-paced, high-expectation environment, this is the kind of role where you’ll grow quickly.</p><p><br></p><p>What You’ll Do</p><ul><li>Design and build automated data systems (e.g., billing workflows, internal tools)</li><li>Create and maintain BI dashboards and reports using Python, Excel, and visualization tools</li><li>Write and optimize SQL queries and ETL pipelines for clean, reliable data flow</li><li>Analyze large datasets to uncover actionable insights and trends</li><li>Partner with stakeholders across the business to translate needs into technical solutions</li><li>Help improve data accessibility and usability across departments</li><li>Ensure data integrity and accuracy through audits and troubleshooting</li><li>Contribute to a growing data function with high visibility and ownership</li></ul><p>Why This Role Stands Out</p><ul><li>High ownership: You’ll build systems from the ground up, not just maintain them</li><li>Small team, big impact: Work directly with senior leadership and decision-makers</li><li>Growth opportunity: The team is expanding—this role can evolve quickly</li><li>Flexibility within intensity: While this is a high-performance environment, there’s trust and flexibility when needed</li></ul>
<p>Robert Half is seeking a Data Engineer to build, scale, and lead high‑impact data solutions. This role combines hands‑on data engineering with team leadership, mentoring, and oversight of end‑to‑end analytics pipelines that turn raw data into actionable business insights.</p><p>This role is business-facing, working with departments across the organization to deliver data solutions.</p><p>This role is onsite in Albuquerque, New Mexico.</p><p><br></p><p>What You’ll Do</p><ul><li>Lead and mentor a team of data engineers and analysts; set standards, review work, and support professional growth</li><li>Design, build, and oversee scalable ETL pipelines using Python, SQL, SSIS, and Airflow</li><li>Develop dimensional data models using Kimball methodology</li><li>Create dashboards and reports using Power BI and SSRS</li><li>Partner with business and IT stakeholders on analytics, ad hoc reporting, and data initiatives</li><li>Ensure data quality, governance, and compliance with PCI, PII, and regulatory standards</li><li>Automate workflows and reporting using Python, PowerShell, and modern analytics tools</li><li>Other duties as needed</li></ul><p><br></p>
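<p>A minimal sketch of the Airflow-orchestrated ETL mentioned above, using the TaskFlow API (Airflow 2.x). The task bodies and names are placeholders; real tasks would call out to SQL Server, SSIS packages, or warehouse hooks.</p><pre><code>from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def sales_etl():
    @task
    def extract() -> list[dict]:
        # Placeholder for pulling rows from a source system.
        return [{"order_id": 1, "amount": 125.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Example transformation: add a computed column.
        return [{**r, "amount_cents": int(r["amount"] * 100)} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for writing to the warehouse.
        print(f"Loading {len(rows)} rows")

    load(transform(extract()))


sales_etl()
</code></pre>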
<p>Robert Half Technology is seeking a <strong>mid-to-senior level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (hybrid; 3 days onsite per week)</p><p><strong>Schedule:</strong> Monday-Friday (9AM-5PM PST)</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks </li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong> </li><li>Architect and maintain <strong>data marts and data warehouse structures</strong> </li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong> </li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries) </li><li>Support migration and consolidation of data into Databricks </li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability </li><li>Define best practices for <strong>data storage, organization, and access</strong> </li><li>Ensure alignment with existing compliance and data standards </li></ul><p><br></p>
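<p>For reference, the Medallion Architecture called out above is typically expressed in Databricks as staged Delta Lake writes. Below is a minimal PySpark sketch with hypothetical paths and columns; it assumes a Databricks environment where Delta is the default table format.</p><pre><code>from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land raw files as-is, adding ingestion metadata.
bronze = (
    spark.read.format("json").load("/mnt/raw/orders/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").save("/mnt/bronze/orders")

# Silver: cleanse and conform (dedupe, enforce types, drop bad rows).
silver = (
    spark.read.format("delta").load("/mnt/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/orders")

# Gold: business-level aggregate for reporting.
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/customer_ltv")
</code></pre>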
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This contract-to-permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines using modern frameworks and tools.<br>• Implement ETL processes to ensure accurate and efficient data transformation.<br>• Optimize data storage and retrieval systems for performance and scalability.<br>• Collaborate with cross-functional teams to understand data requirements and deliver solutions.<br>• Utilize Apache Spark and Hadoop for large-scale data processing.<br>• Work with Databricks to streamline data workflows and enhance analytics.<br>• Apply machine learning techniques using tools like scikit-learn and Pandas.<br>• Integrate Kafka for real-time data streaming and processing.<br>• Analyze and troubleshoot data-related issues to ensure system reliability.<br>• Document processes and workflows to support future development and maintenance.
<ul><li>Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).</li><li>Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).</li><li>Implement and manage data orchestration and dependency management using Dagster or similar tools.</li><li>Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.</li><li>Optimize data models and storage strategies for performance, scalability, and cost efficiency.</li><li>Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.</li><li>Support CI/CD pipelines and infrastructure-as-code practices for data platforms.</li><li>Enforce data security, governance, and compliance best practices within Azure.</li></ul>
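<p>As a sketch of the Dagster-based orchestration in the list above: two dependent software-defined assets, where Dagster infers the dependency graph from parameter names. The asset names and bodies are placeholders.</p><pre><code>from dagster import Definitions, asset


@asset
def raw_orders() -> list[dict]:
    # Placeholder for an ingestion step (e.g., reading from ADLS).
    return [{"order_id": 1, "amount": 125.0}]


@asset
def cleaned_orders(raw_orders: list[dict]) -> list[dict]:
    # The raw_orders parameter name wires this asset downstream of raw_orders.
    return [r for r in raw_orders if r["order_id"] is not None]


defs = Definitions(assets=[raw_orders, cleaned_orders])
</code></pre>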
<p>We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio. In this role, you will design, implement, and optimize data solutions that support business intelligence and analytics needs. If you have a passion for working with cutting-edge technologies and thrive in a fast-paced environment, this opportunity is for you.</p><p><br></p><p>Responsibilities:</p><p>• Develop and refine data models to ensure optimal performance and scalability.</p><p>• Design and implement data warehouse solutions for managing structured and unstructured data.</p><p>• Create and maintain data integration processes to support analytics and data-driven applications.</p><p>• Establish robust data quality and validation protocols to guarantee accuracy and consistency.</p><p>• Collaborate with business intelligence teams and stakeholders to gather requirements and deliver tailored solutions.</p><p>• Monitor and address issues within data pipelines, including performance bottlenecks and system errors.</p><p>• Research and adopt emerging technologies and best practices to enhance data engineering capabilities.</p>
<p>We are supporting our client in hiring a Product Data Engineer who will take full ownership of their product information environment. This role centers on managing their PIM solution (Salsify), improving data structures, and building automated, API‑driven integrations that ensure product data is clean, scalable, and synchronized across platforms.</p><p>This position will be deeply involved in a major product‑data overhaul, including cleanup, restructuring, and long‑term system improvements. The ideal candidate is someone who enjoys solving data problems, building automated workflows, and improving the reliability of product information across systems.</p><p><br></p><p> Key Responsibilities</p><p>Product Data Platform Ownership</p><ul><li>Act as the primary administrator for the PIM platform</li><li>Define and maintain product attributes, hierarchies, and data relationships</li><li>Create validation rules, formulas, and workflows to enforce data standards</li><li>Manage permissions, governance, and platform configuration</li><li>Troubleshoot issues related to imports, exports, and publishing</li></ul><p>Integrations & Automation</p><ul><li>Manage integrations between the PIM and internal/external systems (eCommerce, retail, etc.)</li><li>Build and support API‑based data flows with a focus on reliability and scale</li><li>Develop automation using scripting (Python preferred)</li><li>Support event‑driven or automated pipelines to reduce manual work</li><li>Monitor integration performance and proactively resolve failures</li></ul><p>Product Data Improvements</p><ul><li>Contribute to a large‑scale product data cleanup and restructuring effort</li><li>Identify gaps in current data models and workflows</li><li>Partner with cross‑functional teams to define scalable data standards</li><li>Improve system design to support long‑term growth</li></ul><p>Channel Syndication</p><ul><li>Manage product data distribution to digital and retail channels</li><li>Ensure data meets channel‑specific requirements</li><li>Troubleshoot publishing issues and improve success rates</li><li>Support product launches and updates across channels</li></ul><p>Data Governance & Quality</p><ul><li>Establish naming conventions, validation rules, and governance standards</li><li>Define and track data quality KPIs (accuracy, completeness, timeliness)</li><li>Utilize or support data governance tools</li><li>Work with business teams to improve data accountability</li></ul><p>Reporting & Metrics</p><ul><li>Build dashboards and reports on data quality and system performance</li><li>Provide insights to leadership to support decision‑making</li><li>Track syndication outcomes and operational metrics</li></ul><p>Operational Support</p><ul><li>Handle day‑to‑day platform usage, enhancements, and issue resolution</li><li>Prioritize incoming requests and tickets</li><li>Ensure stability and reliability of product data operations</li></ul><p><br></p>
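<p>To give a flavor of the API-driven validation work described above, here is an illustrative Python sketch. The endpoint shape, token, and field names are entirely hypothetical and are not Salsify's documented API.</p><pre><code>import requests

# Hypothetical PIM endpoint and token, for illustration only.
BASE_URL = "https://pim.example.com/api/v1"
TOKEN = "..."

def fetch_products(page: int = 1) -> list[dict]:
    resp = requests.get(
        f"{BASE_URL}/products",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"page": page},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["products"]

def validate(product: dict) -> list[str]:
    # Example validation rules of the kind a product-data cleanup enforces.
    errors = []
    if not product.get("gtin"):
        errors.append("missing GTIN")
    if len(product.get("title", "")) > 200:
        errors.append("title exceeds 200 characters")
    return errors

for p in fetch_products():
    if issues := validate(p):
        print(p.get("sku"), "->", ", ".join(issues))
</code></pre>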
<p>A manufacturing and distribution company is looking for a Data Engineer with 3+ years of experience to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will play a crucial part in designing and maintaining data infrastructure to support analytics and decision-making processes. You will be a key contributor in developing, optimizing, and maintaining the data infrastructure that supports analytics, business intelligence initiatives, and data-driven decision-making using Snowflake, Matillion, and other tools. The position will be in-office to work closely with the team. No third parties, please.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization.</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This contract position offers an exciting opportunity to leverage your expertise in data processing and analytics within the dynamic energy and natural resources industry. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using Apache Spark, Python, and ETL processes.<br>• Design and implement data storage solutions utilizing Apache Hadoop for efficient data management.<br>• Build real-time data streaming architectures with Apache Kafka to support operational needs.<br>• Optimize data workflows to ensure high performance and reliability across systems.<br>• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.<br>• Perform data quality checks and validation to ensure accuracy and consistency of datasets.<br>• Troubleshoot and resolve technical issues related to data processing and integration.<br>• Document processes and workflows to ensure knowledge sharing and operational transparency.<br>• Monitor and improve system performance, ensuring the infrastructure meets business demands.
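<p>A minimal sketch of the Kafka-based streaming described above, using Spark Structured Streaming in Python. The broker address, topic, and schema are assumptions; a production job would write to a Delta table or warehouse sink rather than the console.</p><pre><code>from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("reading", DoubleType()),
])

# Read raw events from a Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sensor-readings")
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into columns.
parsed = events.select(
    F.from_json(F.col("value").cast("string"), schema).alias("data")
).select("data.*")

# Console sink for illustration only.
query = parsed.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
</code></pre>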
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This long-term contract position offers an exciting opportunity to work in the manufacturing industry, leveraging your expertise in data processing and engineering. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using tools such as Apache Spark and Python.<br>• Design efficient ETL processes to extract, transform, and load data from various sources.<br>• Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.<br>• Implement and manage big data solutions using Apache Hadoop and Apache Kafka.<br>• Monitor and optimize the performance of data systems to ensure reliability and scalability.<br>• Ensure data quality and integrity through rigorous testing and validation processes.<br>• Troubleshoot and resolve issues related to data pipelines and infrastructure.<br>• Maintain documentation for data workflows and processes to ensure clarity and consistency.<br>• Stay updated on emerging technologies and best practices in data engineering to continuously improve systems.
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, build, and manage data pipelines and systems to support business operations and decision-making processes. This position offers an exciting opportunity to work with cutting-edge technologies within the energy and natural resources sector.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines to efficiently process large volumes of data.<br>• Collaborate with cross-functional teams to gather requirements and design data solutions that meet business needs.<br>• Implement and optimize ETL processes to ensure the accuracy and reliability of data flows.<br>• Utilize technologies such as Apache Spark, Hadoop, and Kafka to manage and process data streams.<br>• Monitor and troubleshoot data systems to ensure optimal performance and reliability.<br>• Perform data integration from multiple sources to create unified datasets for analysis.<br>• Ensure data security and compliance with organizational and industry standards.<br>• Continuously evaluate and adopt new tools and technologies to enhance data engineering practices.<br>• Provide technical guidance and mentorship to entry-level team members as needed.
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
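<p>As a sketch of the MLflow-tracked model work this role touches: logging parameters, metrics, and a model artifact from a training run. The toy dataset and run name are placeholders; on Azure Databricks the tracking server and experiment are preconfigured, while locally this logs to ./mlruns.</p><pre><code>import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy stand-in for CRM-derived features (e.g., churn signals).
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="crm-churn-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # artifact for later serving
</code></pre>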
We are looking for a skilled Salesforce Data Reporting Analyst to join our team on a contract basis in West Conshohocken, Pennsylvania. In this role, you will transform sales performance data into actionable insights, enabling leadership to refine processes and achieve measurable results. This position requires expertise in data analysis, reporting, and visualization, with a strong focus on tools like Power BI, Excel, and Salesforce platforms.<br><br>Responsibilities:<br>• Evaluate and measure the success of sales enablement programs, including campaigns, events, promotions, and onboarding initiatives.<br>• Develop and maintain measurement frameworks, scorecards, and dashboards to track program performance and identify improvement opportunities.<br>• Analyze customer and practitioner behaviors to uncover trends that drive conversion, retention, and engagement.<br>• Provide insights into optimizing sales and marketing processes, including segmentation, timing, and follow-up strategies.<br>• Create and manage dashboards and data models in Power BI, ensuring accurate and up-to-date reporting.<br>• Perform ad-hoc analyses using advanced Excel techniques, including PivotTables and Power Query, to address leadership inquiries.<br>• Design and deliver thorough and well-structured presentations in PowerPoint, effectively translating complex data into clear recommendations.<br>• Extract and interpret data from Salesforce, leveraging its reporting tools and analytics capabilities.<br>• Maintain consistent documentation of reporting logic and data definitions to ensure reliability and trust in the numbers.<br>• Manage multiple priorities efficiently while delivering high-quality results within deadlines.
<p>We are looking for a skilled Business Intelligence Analyst to join our client's team on a short-term contract basis. In this role, you will leverage your expertise in data analysis, reporting, and dashboard creation to support executive-level decision-making and optimize IT operations. This position offers the flexibility of working remotely, making it ideal for professionals who thrive in independent environments.</p><p><br></p><p>Responsibilities:</p><p>• Analyze and interpret complex datasets to extract actionable insights for business operations.</p><p>• Design and develop dynamic reports and dashboards tailored for executive audiences.</p><p>• Collaborate with stakeholders to gather requirements and align analytics solutions with business needs.</p><p>• Utilize Power BI and Tableau to create visually compelling and user-friendly dashboards.</p><p>• Perform integrations between organizational tools and Power BI, ensuring seamless data flow.</p><p>• Support IT operations reporting and network operations reporting through detailed analyses.</p><p>• Apply advanced data modeling techniques to structure and organize information effectively.</p><p>• Develop APIs to facilitate data sharing and enhance reporting capabilities.</p><p>• Enable data-driven decision-making by providing accurate and timely insights.</p><p>• Work independently while contributing to team objectives and fostering collaboration.</p><p><br></p><p>Top 3 Hard Skills:</p><p>1. Past experience performing API integrations between SaaS tools and Power BI</p><p>2. Experience building dashboards for an executive-level audience</p><p>3. Prior experience in IT operations reporting or network operations reporting</p>
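<p>To illustrate the first hard skill, here is a minimal Python sketch that pushes rows from a SaaS tool into a Power BI push dataset via the Power BI REST API. The token and dataset ID are placeholders, the dataset must already exist as a push dataset, and in practice the token would come from Azure AD (e.g., via MSAL).</p><pre><code>import requests

# Placeholders for illustration.
ACCESS_TOKEN = "eyJ..."
DATASET_ID = "00000000-0000-0000-0000-000000000000"
TABLE = "Tickets"

url = (
    "https://api.powerbi.com/v1.0/myorg/datasets/"
    f"{DATASET_ID}/tables/{TABLE}/rows"
)

# Rows pulled from the SaaS tool's API would be mapped into this shape.
rows = {"rows": [{"ticket_id": "INC-1001", "status": "Open", "age_hours": 6}]}

resp = requests.post(
    url,
    json=rows,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
</code></pre>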
<p>Our client is seeking a Data Scientist II – Generative AI to join a cutting-edge team focused on building scalable, production-ready AI solutions that transform business workflows and deliver measurable impact across global operations. This role is ideal for professionals passionate about leveraging Generative AI technologies, creating intelligent agents, and driving innovation at scale.</p><p><br></p><p>You will design and implement GenAI-powered agents that streamline internal processes, enhance productivity, and support business development initiatives. Responsibilities include developing robust prompt engineering frameworks, building RAG pipelines, and converting prototypes into production-ready solutions. You’ll collaborate closely with engineering and business teams to ensure solutions meet diverse client needs and are optimized for global deployment.</p><p><br></p><p>Key projects include extending the company’s GPT platform, creating AI agents that improve efficiencies for RFP development, onboarding materials, and SOW requirements. Success in this role means quickly ramping up on backlog projects, delivering high-priority initiatives, and staying ahead of emerging GenAI frameworks to continuously advance internal AI capabilities.</p>
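<p>As a minimal illustration of the RAG pipelines mentioned above: embed a small document set, retrieve by cosine similarity, and ground the model's answer in the retrieved context. The model names are assumptions, and a production pipeline would use a vector store rather than in-memory NumPy.</p><pre><code>import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

docs = [
    "SOW templates require a scope, timeline, and pricing section.",
    "RFP responses must include past performance references.",
]

def embed(texts: list[str]) -> np.ndarray:
    # Embedding model name is an assumption.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)

def answer(question: str, k: int = 1) -> str:
    q_vec = embed([question])[0]
    # Cosine similarity between the question and each document.
    sims = doc_vecs @ q_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(docs[i] for i in np.argsort(sims)[-k:])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # chat model name is an assumption
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("What must an SOW include?"))
</code></pre>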
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This role is based in West Des Moines, Iowa, and offers the opportunity to work on advanced data solutions that support organizational decision-making and efficiency. The ideal candidate will have expertise in relational databases, data cleansing, and modern data warehousing technologies.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support business operations and analytics.<br>• Perform data extraction, transformation, and cleansing to ensure accuracy and reliability.<br>• Collaborate with teams to design and implement data warehouses and data lakes.<br>• Utilize Microsoft SQL Server to build and manage relational database structures.<br>• Analyze data sources and provide recommendations for improving data quality and accessibility.<br>• Create and maintain documentation for data processes, pipelines, and system architecture.<br>• Implement best practices for data storage and retrieval to maximize efficiency.<br>• Troubleshoot and resolve issues related to data processing and integration.<br>• Stay updated on industry trends and emerging technologies to enhance data engineering solutions.
<p>We are seeking an experienced <strong>Enterprise Data Warehouse (EDW) Architect</strong> to lead the design, evaluation, and implementation of a modern analytics and reporting platform. This role will be responsible for defining the end‑to‑end data architecture, selecting appropriate technologies, and ensuring scalable, governed, and business‑aligned data solutions across multiple subject areas.</p><p>The EDW Architect will partner closely with business stakeholders and technical teams to translate business requirements into a sustainable enterprise data warehouse and BI ecosystem.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Evaluate and recommend data warehousing platforms (e.g., Snowflake, Databricks, BigQuery, Redshift, Azure Synapse)</li><li>Select and design ETL/ELT solutions and orchestration frameworks (e.g., dbt, Fivetran, ADF, Informatica, Talend)</li><li>Design dimensional data models, including star and snowflake schemas, aligned to business use cases</li><li>Lead subject area modeling and architecture across Supply Chain, Sales, Finance, HR, and Procurement</li><li>Define BI‑layer architecture and reporting standards using tools such as Power BI, Tableau, or Looker</li><li>Establish data governance, lineage, metadata, and data quality frameworks</li><li>Produce architecture documentation, implementation roadmaps, and conduct knowledge transfer and handover to delivery teams</li></ul>
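<p>For illustration, the star-schema modeling described above reduces to dimension tables of descriptive attributes and a fact table of measures plus foreign keys. A self-contained Python sketch using SQLite; the same shape applies to Snowflake, BigQuery, Redshift, or Synapse:</p><pre><code>import sqlite3

conn = sqlite3.connect(":memory:")

conn.executescript(
    """
    -- Dimension tables hold descriptive attributes behind surrogate keys.
    CREATE TABLE dim_date (
        date_key     INTEGER PRIMARY KEY,   -- e.g., 20240115
        full_date    TEXT,
        fiscal_month TEXT
    );
    CREATE TABLE dim_product (
        product_key  INTEGER PRIMARY KEY,
        sku          TEXT,
        category     TEXT
    );

    -- The fact table holds measures plus foreign keys to each dimension.
    CREATE TABLE fact_sales (
        date_key     INTEGER REFERENCES dim_date(date_key),
        product_key  INTEGER REFERENCES dim_product(product_key),
        quantity     INTEGER,
        net_amount   REAL
    );
    """
)
conn.close()
</code></pre>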