We are looking for a skilled Data Analytics Engineer with deep expertise in Power BI to join our team in Ankeny, Iowa. In this role, you will design, optimize, and manage semantic data models while ensuring the seamless performance of business intelligence tools. Your contributions will help drive data-driven decision-making across the organization.<br><br>Responsibilities:<br>• Design and implement semantic data models, including dimensional modeling and star schemas, to support business intelligence needs.<br>• Develop and optimize Power BI reports and dashboards, ensuring high performance and efficient query execution.<br>• Utilize Power Query (M) to transform and manipulate data for reporting purposes.<br>• Configure and enforce row-level security within Power BI to safeguard sensitive data.<br>• Conduct performance tuning for Power BI, including query plan optimization and refresh strategies.<br>• Collaborate with stakeholders to understand analytical requirements and translate them into actionable insights.<br>• Leverage tools such as Tabular Editor and deployment pipelines (Azure DevOps, GitHub) to streamline BI asset management.<br>• Work with cloud-based data platforms, including Databricks, Snowflake, or BigQuery, to support lakehouse architectures.<br>• Maintain adherence to enterprise BI governance practices and ensure scalable solutions for large datasets.<br>• Implement CI/CD patterns to manage semantic models and facilitate environment promotions.
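<p>By way of illustration for the refresh-strategy and deployment-pipeline work above, here is a minimal sketch that queues a Power BI semantic model refresh through the documented Power BI REST API; the workspace and dataset IDs and the token acquisition step are assumptions about the tenant, not details from the posting.</p>
<pre><code># Sketch: queue a Power BI dataset refresh from a deployment pipeline.
# GROUP_ID / DATASET_ID are placeholders; acquire the AAD token via MSAL
# or azure-identity in a real pipeline.
import requests

def refresh_dataset(token: str, group_id: str, dataset_id: str) -> None:
    """Queue an asynchronous refresh; raise if the API rejects the request."""
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
           f"/datasets/{dataset_id}/refreshes")
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {token}"},
        json={"notifyOption": "MailOnFailure"},  # documented refresh option
        timeout=30,
    )
    resp.raise_for_status()  # a 202 response means the refresh was queued
</code></pre>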
We are looking for a talented Data Engineer to join our team in Glendale, California. In this long-term contract role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and platforms that support critical business operations. Through collaboration with cross-functional teams, you will contribute to innovative data solutions that enhance decision-making processes and drive operational excellence.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support the Core Data platform.<br>• Create tools and services to enhance data discovery, governance, and privacy.<br>• Collaborate with product managers, architects, and software engineers to ensure the success of data platforms.<br>• Apply technologies such as Airflow, Spark, Databricks, Delta Lake, and Kubernetes to build advanced data solutions.<br>• Establish and document best practices for pipeline configurations, naming conventions, and operational standards.<br>• Monitor and ensure the accuracy, reliability, and efficiency of datasets to meet service level agreements (SLAs).<br>• Participate in agile and scrum ceremonies to improve collaboration and team processes.<br>• Foster relationships with stakeholders to understand their needs and prioritize platform enhancements.<br>• Maintain detailed documentation to support data governance and quality initiatives.
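<p>As a rough illustration of the Airflow-based pipeline work described above, the sketch below wires two tasks into a daily DAG; the DAG id, task bodies, and schedule are assumptions for demonstration (the <code>schedule</code> argument assumes Airflow 2.4+).</p>
<pre><code># Minimal daily DAG: extract then load, with the dependency declared explicitly.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")  # placeholder extraction logic

def load():
    print("write curated data to the platform")  # placeholder load logic

with DAG(
    dag_id="core_data_daily",  # assumed name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
</code></pre>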
We are looking for a Data Engineer to strengthen our data and analytics capabilities in West Chester, Pennsylvania. This role will shape reliable data architecture, support enterprise reporting, and help turn complex information into practical business insight. The position is ideal for someone who enjoys building scalable data solutions, improving performance, and working across Microsoft-based data technologies.<br><br>Responsibilities:<br>• Design and support enterprise data solutions that enable dependable analytics, reporting, and operational decision-making.<br>• Build, optimize, and maintain database structures and data processing workflows using SQL Server, Azure SQL Database, and T-SQL.<br>• Develop and enhance SSIS packages and related data pipelines to ensure accurate, timely, and efficient movement of information across systems.<br>• Create scalable datasets and reporting foundations that support Power BI dashboards and broader business intelligence needs.<br>• Monitor data platform performance, troubleshoot issues, and implement improvements that increase stability, security, and efficiency.<br>• Partner with business and technical stakeholders to translate reporting and analytics goals into practical data engineering solutions.<br>• Lead efforts to move legacy SQL Server workloads into Azure-based services while maintaining data integrity and minimizing disruption.<br>• Establish standards and best practices for data quality, documentation, and ongoing platform maintenance.
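<p>For the T-SQL and Azure SQL work above, a hedged sketch of querying Azure SQL Database from Python with pyodbc; the server, database, credentials, and table names are placeholders, and a production pipeline would prefer managed identity over a password.</p>
<pre><code># Sketch: run a T-SQL aggregate against Azure SQL Database via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"    # assumed server
    "DATABASE=analytics;UID=etl_user;PWD=***"  # use managed identity in practice
)
cur = conn.cursor()
cur.execute("""
    SELECT region, SUM(sales_amount) AS total_sales
    FROM dbo.FactSales          -- assumed table
    GROUP BY region
""")
for row in cur.fetchall():
    print(row.region, row.total_sales)  # pyodbc rows support column-name access
conn.close()
</code></pre>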
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
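<p>To make the real-time responsibilities above concrete, here is a minimal Structured Streaming sketch that reads a Kafka topic and lands it as a Delta table; the broker address, topic, and paths are assumptions, and the Spark Kafka connector package must be available on the cluster.</p>
<pre><code># Sketch: Kafka -> Spark Structured Streaming -> Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "events")                     # assumed topic
    .load()
    .select(col("key").cast("string"),
            col("value").cast("string"),
            col("timestamp"))
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/chk/events")  # assumed checkpoint path
    .start("/delta/events")                       # assumed output path
)
query.awaitTermination()
</code></pre>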
We are looking for a skilled Data Engineer to join our team in Tampa, Florida. This is a Contract to permanent position, offering an excellent opportunity to contribute to innovative business intelligence solutions while advancing your career. The ideal candidate will have a strong background in data engineering, database design, and analytics, with the ability to solve complex problems and deliver high-quality results.<br><br>Responsibilities:<br>• Design and implement robust business intelligence solutions tailored to meet organizational needs.<br>• Collaborate with stakeholders to gather user requirements and translate them into technical and functional specifications.<br>• Create and maintain databases and data marts that support analytics and reporting activities.<br>• Develop and optimize ETL processes to efficiently load data into data marts.<br>• Monitor and ensure the accuracy, consistency, and quality of data within databases and reporting systems.<br>• Recommend and implement governance practices to improve self-service BI and analytics capabilities.<br>• Develop automated data validation checks to maintain data integrity and accuracy.<br>• Utilize dimensional modeling and star/snowflake schemas to design effective data warehouses.<br>• Troubleshoot and debug issues across application and database layers to ensure smooth operations.<br>• Perform exploratory data analysis to identify trends, anomalies, and areas for improvement.
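<p>As one possible shape for the automated data validation checks mentioned above, a small pandas sketch; the column names and the 1% null tolerance are illustrative assumptions.</p>
<pre><code># Sketch: gate a data-mart load on simple quality checks.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the load may proceed."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["order_total"].lt(0).any():
        failures.append("negative order_total values")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # assumed tolerance
        failures.append(f"customer_id null rate {null_rate:.2%} exceeds 1%")
    return failures

# Usage: raise and halt the ETL job if validate_orders(orders_df) is non-empty.
</code></pre>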
<p>We are looking for a Data Engineer to strengthen and expand an established Microsoft Fabric data environment. This Long-term Contract position is ideal for someone who can turn business data into reliable, well-structured assets that support reporting and decision-making. The role requires a hands-on engineer who can shape data architecture, build scalable pipelines, and communicate clearly with both technical teams and business stakeholders.</p><p><br></p><p>Responsibilities:</p><p>• Expand and improve an existing Microsoft Fabric platform to support dependable, scalable analytics solutions.</p><p>• Create and maintain a layered data architecture across Bronze, Silver, and Gold tiers, with emphasis on delivering trusted and business-ready curated datasets.</p><p>• Build ingestion and transformation processes for Salesforce data along with information from additional enterprise sources.</p><p>• Develop data models that improve accuracy, usability, and reporting value by evaluating structure, relationships, and downstream needs.</p><p>• Support the shift away from older warehouse and spreadsheet-driven reporting practices by introducing more modern data engineering approaches.</p><p>• Work autonomously to manage priorities while providing regular updates on progress, technical decisions, and potential risks.</p><p>• Collaborate with business partners to understand reporting goals and translate them into practical data solutions.</p><p>• Contribute to data processing and integration workflows using technologies such as Python, Spark, ETL frameworks, and related platform tools.</p>
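<p>A hedged sketch of one Bronze-to-Silver step in the medallion layering described above, using PySpark in a Fabric-style lakehouse; the table and column names are assumptions about the Salesforce data, not the client's actual model.</p>
<pre><code># Sketch: clean raw Salesforce accounts into a trusted Silver table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_timestamp

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.table("bronze.sf_accounts")  # assumed raw ingestion table

silver = (
    bronze.dropDuplicates(["account_id"])                       # one row per account
    .filter(col("account_id").isNotNull())                      # drop unusable rows
    .withColumn("created_at", to_timestamp(col("created_at")))  # standardize types
    .select("account_id", "account_name", "industry", "created_at")
)

silver.write.mode("overwrite").saveAsTable("silver.sf_accounts")
</code></pre>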
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
<p>We are looking for a Data Engineer to join a team focused on building reliable, scalable data solutions. In this role, you will create and enhance cloud-based data pipelines, organize data for analytics, and help ensure that business teams have access to trusted information. This position also partners closely with technical and non-technical stakeholders to turn reporting and data needs into practical engineering outcomes.</p><p><br></p><p>Responsibilities:</p><p>• Create and support scalable data ingestion and transformation workflows using Azure Data Factory, Databricks, and PySpark.</p><p>• Connect and consolidate data from enterprise platforms, operational databases, telematics feeds, APIs, and other internal or external sources.</p><p>• Structure and manage data within Azure Data Lake and lakehouse environments to support performance, accessibility, and long-term maintainability.</p><p>• Design curated datasets, data models, and schemas that improve usability for analytics, business intelligence, and downstream reporting.</p><p>• Apply governance and lineage practices through Unity Catalog while promoting strong data quality, consistency, and security standards.</p><p>• Work with business stakeholders and cross-functional teams to gather requirements, define technical specifications, and deliver data solutions aligned with operational needs.</p><p>• Improve pipeline stability and efficiency by troubleshooting failures, resolving performance issues, and refining storage and query strategies.</p><p>• Support Power BI reporting by preparing datasets, assisting with model improvements, and helping maintain reporting standards and governance practices.</p><p>• Use GitHub-based development practices for version control, peer review, CI/CD, and disciplined deployment processes.</p><p>• Mentor less-experienced engineers and contribute to a collaborative environment focused on continuous improvement and dependable delivery.</p>
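<p>For the Unity Catalog governance point above, a sketch of publishing a curated dataset under a three-level catalog.schema.table name so lineage and access control apply; the catalog names, source tables, and join logic are illustrative assumptions.</p>
<pre><code># Sketch: build a Gold aggregate and register it in a governed UC namespace.
from pyspark.sql import SparkSession
from pyspark.sql.functions import sum as spark_sum

spark = SparkSession.builder.getOrCreate()

trips = spark.read.table("fleet.silver.telematics_trips")  # assumed source
vehicles = spark.read.table("fleet.silver.vehicles")       # assumed source

daily_usage = (
    trips.join(vehicles, "vehicle_id")
    .groupBy("vehicle_id", "trip_date", "region")
    .agg(spark_sum("miles").alias("total_miles"))
)

# Unity Catalog tracks lineage and enforces grants on the three-level name below.
daily_usage.write.mode("overwrite").saveAsTable("fleet.gold.daily_vehicle_usage")
</code></pre>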
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
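<p>As a rough sketch of the SQL-plus-Python pipeline automation described above: extract in chunks, transform in pandas, and append to a staging table. The connection strings, table names, and unit conversion are assumptions.</p>
<pre><code># Sketch: chunked extract-transform-load with pandas and SQLAlchemy.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql://user:***@src-host/sales")      # assumed
target = create_engine("postgresql://user:***@dwh-host/warehouse")  # assumed

for chunk in pd.read_sql("SELECT * FROM orders", source, chunksize=50_000):
    chunk["order_date"] = pd.to_datetime(chunk["order_date"])
    chunk["amount_usd"] = chunk["amount_cents"] / 100  # normalize units
    chunk.to_sql("stg_orders", target, if_exists="append", index=False)
</code></pre>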
<p>We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows.</p><p>This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ELT/ETL data pipelines and workflows</li><li>Develop and optimize Snowflake-based data warehouse solutions</li><li>Build and maintain data models and transformation logic to support analytics and reporting</li><li>Write efficient and high-quality Python and SQL code to support data engineering processes</li><li>Develop reusable data engineering frameworks and backend data services</li><li>Implement and maintain CI/CD pipelines using GitHub and related tooling</li><li>Build automated testing frameworks to ensure data quality and reliability</li><li>Create reporting and visualization solutions using tools such as Power BI</li><li>Monitor production data systems and resolve performance or reliability issues</li><li>Support continuous improvement of data architecture, processes, and standards</li></ul>
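<p>One minimal shape for the Snowflake ELT work above, using the official snowflake-connector-python package; the account, credentials, and table names are placeholders that would come from a secrets manager in practice.</p>
<pre><code># Sketch: push a transformation into Snowflake (ELT) rather than pulling data out.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",  # assumed account identifier
    user="etl_user",
    password="***",        # prefer key-pair auth or a vault in real pipelines
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
try:
    cur = conn.cursor()
    cur.execute("""
        CREATE OR REPLACE TABLE REPORTING.DAILY_SALES AS
        SELECT order_date, region, SUM(amount) AS total_amount
        FROM STAGING.ORDERS
        GROUP BY order_date, region
    """)
finally:
    conn.close()
</code></pre>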
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They have a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The Data Security layer of Microsoft Purview is already implemented; this role will work with the Microsoft partner to implement the Data Governance layer (Unified Data Catalog, Data Quality, Data Lineage, Data Health Management), including connecting Microsoft Fabric to Purview.</p><p><br></p><p>Responsibilities:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>The ideal candidate is a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Purview and advance the client's data governance practices. Administration experience with Microsoft Purview or a similar tool (Collibra, Informatica, Databricks, etc.) is required, with Purview experience preferred. Excellent communication skills are essential: this person will lead change, secure buy-in from stakeholders, and help advance the governance practice.</p>
<p>Robert Half is seeking a Data Engineer to build, scale, and lead high‑impact data solutions. This role combines hands‑on data engineering with team leadership, mentoring, and oversight of end‑to‑end analytics pipelines that turn raw data into actionable business insights.</p><p>This role is business-facing, working with departments across the organization to address data solutions.</p><p>This role is onsite in Albuquerque, New Mexico.</p><p><br></p><p>What You’ll Do:</p><ul><li>Lead and mentor a team of data engineers and analysts; set standards, review work, and support professional growth</li><li>Design, build, and oversee scalable ETL pipelines using Python, SQL, SSIS, and Airflow</li><li>Develop dimensional data models using Kimball methodology (a sketch follows below)</li><li>Create dashboards and reports using Power BI and SSRS</li><li>Partner with business and IT stakeholders on analytics, ad hoc reporting, and data initiatives</li><li>Ensure data quality, governance, and compliance with PCI, PII, and regulatory standards</li><li>Automate workflows and reporting using Python, PowerShell, and modern analytics tools</li><li>Other duties as needed</li></ul><p><br></p>
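<p>To illustrate the Kimball-style dimensional modeling above, a small pandas sketch of a type-1 dimension load that assigns surrogate keys to unseen business keys; it assumes the source extract carries the dimension's attribute columns, and all names are illustrative.</p>
<pre><code># Sketch: append new customers to a dimension with fresh surrogate keys.
import pandas as pd

def load_customer_dim(dim: pd.DataFrame, source: pd.DataFrame) -> pd.DataFrame:
    """Type-1 upsert: add rows for business keys not yet in the dimension."""
    new_rows = source[~source["customer_id"].isin(dim["customer_id"])].copy()
    next_key = int(dim["customer_sk"].max()) + 1 if len(dim) else 1
    new_rows["customer_sk"] = range(next_key, next_key + len(new_rows))
    return pd.concat([dim, new_rows[dim.columns]], ignore_index=True)

# Facts then join on the business key to pick up the surrogate key:
# facts = sales.merge(dim[["customer_id", "customer_sk"]], on="customer_id")
</code></pre>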
<p>Robert Half Technology is seeking a <strong>mid-to-senior level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (hybrid, 3 days onsite)</p><p><strong>Schedule:</strong> Monday-Friday (9AM-5PM PST)</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks</li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong></li><li>Architect and maintain <strong>data marts and data warehouse structures</strong></li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong></li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries)</li><li>Support migration and consolidation of data into Databricks</li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability</li><li>Define best practices for <strong>data storage, organization, and access</strong></li><li>Ensure alignment with existing compliance and data standards</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This Contract to permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
<p>We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio. In this role, you will design, implement, and optimize data solutions that support business intelligence and analytics needs. If you have a passion for working with cutting-edge technologies and thrive in a fast-paced environment, this opportunity is for you.</p><p><br></p><p>Responsibilities:</p><p>• Develop and refine data models to ensure optimal performance and scalability.</p><p>• Design and implement data warehouse solutions for managing structured and unstructured data.</p><p>• Create and maintain data integration processes to support analytics and data-driven applications.</p><p>• Establish robust data quality and validation protocols to guarantee accuracy and consistency.</p><p>• Collaborate with business intelligence teams and stakeholders to gather requirements and deliver tailored solutions.</p><p>• Monitor and address issues within data pipelines, including performance bottlenecks and system errors.</p><p>• Research and adopt emerging technologies and best practices to enhance data engineering capabilities.</p>
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines using modern frameworks and tools.<br>• Implement ETL processes to ensure accurate and efficient data transformation.<br>• Optimize data storage and retrieval systems for performance and scalability.<br>• Collaborate with cross-functional teams to understand data requirements and deliver solutions.<br>• Utilize Apache Spark and Hadoop for large-scale data processing.<br>• Work with Databricks to streamline data workflows and enhance analytics.<br>• Apply machine learning techniques using tools like scikit-learn and Pandas.<br>• Integrate Kafka for real-time data streaming and processing.<br>• Analyze and troubleshoot data-related issues to ensure system reliability.<br>• Document processes and workflows to support future development and maintenance.
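<p>For the scikit-learn and pandas responsibilities above, a compact sketch of the train-and-evaluate loop; the feature table, columns, and model choice are assumptions for illustration.</p>
<pre><code># Sketch: train a classifier on engineered features and report holdout accuracy.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_parquet("features.parquet")  # assumed output of the data pipeline

X = df[["recency_days", "order_count", "avg_order_value"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
</code></pre>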
<ul><li>Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).</li><li>Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).</li><li>Implement and manage data orchestration and dependency management using Dagster or similar tools.</li><li>Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.</li><li>Optimize data models and storage strategies for performance, scalability, and cost efficiency.</li><li>Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.</li><li>Support CI/CD pipelines and infrastructure-as-code practices for data platforms.</li><li>Enforce data security, governance, and compliance best practices within Azure.</li></ul>
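<p>Since the list above names Dagster for orchestration, here is a hedged sketch of two software-defined assets where the dependency is declared by parameter name; the asset names and bodies are illustrative assumptions.</p>
<pre><code># Sketch: a two-asset Dagster graph with an explicit upstream/downstream edge.
from dagster import Definitions, asset

@asset
def raw_orders() -> list[dict]:
    # In practice this would read from ADLS or a source system.
    return [{"order_id": 1, "amount": 42.0}]

@asset
def curated_orders(raw_orders: list[dict]) -> list[dict]:
    # Dagster infers the dependency from the parameter name above.
    return [o for o in raw_orders if o["amount"] > 0]

defs = Definitions(assets=[raw_orders, curated_orders])
</code></pre>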
<p>We are supporting our client in hiring a Product Data Engineer who will take full ownership of their product information environment. This role centers on managing their PIM solution (Salsify), improving data structures, and building automated, API‑driven integrations that ensure product data is clean, scalable, and synchronized across platforms.</p><p>This position will be deeply involved in a major product‑data overhaul, including cleanup, restructuring, and long‑term system improvements. The ideal candidate is someone who enjoys solving data problems, building automated workflows, and improving the reliability of product information across systems.</p><p><br></p><p> Key Responsibilities</p><p>Product Data Platform Ownership</p><ul><li>Act as the primary administrator for the PIM platform</li><li>Define and maintain product attributes, hierarchies, and data relationships</li><li>Create validation rules, formulas, and workflows to enforce data standards</li><li>Manage permissions, governance, and platform configuration</li><li>Troubleshoot issues related to imports, exports, and publishing</li></ul><p>Integrations & Automation</p><ul><li>Manage integrations between the PIM and internal/external systems (eCommerce, retail, etc.)</li><li>Build and support API‑based data flows with a focus on reliability and scale</li><li>Develop automation using scripting (Python preferred)</li><li>Support event‑driven or automated pipelines to reduce manual work</li><li>Monitor integration performance and proactively resolve failures</li></ul><p>Product Data Improvements</p><ul><li>Contribute to a large‑scale product data cleanup and restructuring effort</li><li>Identify gaps in current data models and workflows</li><li>Partner with cross‑functional teams to define scalable data standards</li><li>Improve system design to support long‑term growth</li></ul><p>Channel Syndication</p><ul><li>Manage product data distribution to digital and retail channels</li><li>Ensure data meets channel‑specific requirements</li><li>Troubleshoot publishing issues and improve success rates</li><li>Support product launches and updates across channels</li></ul><p>Data Governance & Quality</p><ul><li>Establish naming conventions, validation rules, and governance standards</li><li>Define and track data quality KPIs (accuracy, completeness, timeliness)</li><li>Utilize or support data governance tools</li><li>Work with business teams to improve data accountability</li></ul><p>Reporting & Metrics</p><ul><li>Build dashboards and reports on data quality and system performance</li><li>Provide insights to leadership to support decision‑making</li><li>Track syndication outcomes and operational metrics</li></ul><p>Operational Support</p><ul><li>Handle day‑to‑day platform usage, enhancements, and issue resolution</li><li>Prioritize incoming requests and tickets</li><li>Ensure stability and reliability of product data operations</li></ul><p><br></p>
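<p>A hedged sketch of the API-driven integration work above. The endpoint paths and payload here are hypothetical placeholders, not Salsify's actual API; a real integration would follow the vendor's documented schema and authentication.</p>
<pre><code># Sketch: upsert one product record against a hypothetical PIM REST endpoint.
import requests

BASE = "https://pim.example.com/api/v1"    # hypothetical base URL
HEADERS = {"Authorization": "Bearer ***"}  # token from a secrets store

def push_product(product: dict) -> None:
    """Send one product upsert and fail loudly if it is rejected."""
    resp = requests.put(
        f"{BASE}/products/{product['sku']}",  # hypothetical route
        json=product,
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()

push_product({"sku": "ABC-123", "name": "Widget", "category": "Hardware"})
</code></pre>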
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
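<p>For the MLflow-based model serving point above, a minimal sketch of logging a model to a run and loading it back for scoring; the experiment name and toy data are assumptions.</p>
<pre><code># Sketch: log a trained model with MLflow, then reload it for batch scoring.
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

mlflow.set_experiment("crm-propensity")  # assumed experiment name

with mlflow.start_run() as run:
    model = LogisticRegression().fit(X, y)
    mlflow.log_metric("train_acc", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")

loaded = mlflow.pyfunc.load_model(f"runs:/{run.info.run_id}/model")
print(loaded.predict(X))
</code></pre>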
<p>A manufacturing and distribution company is looking for a Data Engineer with 3+ years of experience to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will play a crucial part in designing, optimizing, and maintaining the data infrastructure that supports analytics, business intelligence initiatives, and data-driven decision-making, using Snowflake, Matillion, and other tools. The position will be in-office to work closely with the team. No third parties, please.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization.</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
We are looking for an experienced Data Analyst to join our team on a long-term contract basis. In this role, you will play a vital part in transforming business requirements into actionable insights by utilizing advanced data analysis and reporting techniques. Based in Mequon, Wisconsin, this position offers an excellent opportunity to design and deliver custom reporting solutions that empower decision-making across the organization.<br><br>Responsibilities:<br>• Develop and implement reporting solutions using SQL Server Reporting Services (SSRS) and other technologies based on detailed requirement analysis.<br>• Conduct rigorous testing of reports to ensure accuracy, performance, and alignment with business needs.<br>• Deploy reporting solutions to both development and production environments while adhering to best practices.<br>• Create dynamic data visualizations that simplify complex information for decision-makers.<br>• Collaborate proactively with stakeholders to understand their needs and develop tailored reporting solutions.<br>• Ensure compliance with security policies for data and reporting systems.<br>• Design and maintain clear documentation for data points and reporting definitions.<br>• Establish subscriptions and manage user permissions for reports to ensure accessibility.<br>• Validate reports against trusted data sources to maintain consistency and reliability.<br>• Translate business needs into measurable insights using graphs, charts, and actionable data points.
<p><strong>Overview</strong></p><p>The Digital Marketing Analyst is responsible for collecting, analyzing, and interpreting digital marketing data to help drive strategic decisions across campaigns, channels, and customer journeys. This role supports marketing teams with actionable insights, reporting dashboards, testing recommendations, and performance optimization to maximize ROI and improve overall marketing effectiveness.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Analyze performance data across digital channels including paid search, paid social, email, display, SEO, and website analytics.</li><li>Build and maintain dashboards, weekly/monthly reports, and KPI scorecards using tools such as Google Analytics, Looker, Tableau, Power BI, or similar.</li><li>Partner with channel managers to provide insights that improve CTR, conversion rates, CAC, ROAS, and engagement metrics.</li><li>Conduct deep‑dive analysis on campaigns, audiences, funnels, and attribution paths.</li><li>Support A/B testing and experimentation by forming hypotheses, building test plans, and evaluating results.</li><li>Monitor website traffic patterns, user behavior, and key conversion events to uncover opportunities for optimization.</li><li>Work with marketing operations and CRM teams to ensure data accuracy, segmentation quality, and tracking integrity.</li><li>Assist in forecasting, budgeting, and performance modeling efforts.</li><li>Ensure tracking frameworks, UTM parameters, and tagging structures are accurate and properly implemented.</li><li>Present findings and recommendations to stakeholders in a clear, data‑driven format.</li></ul><p><br></p>
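<p>To ground the A/B testing responsibility above, a minimal sketch of evaluating a completed test with a two-proportion z-test; the conversion counts are made-up example data.</p>
<pre><code># Sketch: is variant B's conversion rate significantly different from A's?
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 510]     # conversions for variant A, variant B (assumed)
visitors = [10_000, 10_000]  # exposures per variant (assumed)

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would suggest the lift is unlikely to be chance alone.
</code></pre>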
<p>This role is responsible for data aggregation, data quality, reporting, and trend analysis to evaluate program and pharmacy performance. The individual must be skilled in querying and analyzing data, while also supporting end users in interpreting and visualizing insights. Success in this role requires translating business needs into technical solutions that drive accurate and actionable reporting.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Translate business requirements into technical specifications to support data analysis and visualization</li><li>Develop a strong understanding of stakeholder objectives to create clear, impactful dashboards and reports</li><li>Write SQL queries and generate reports by extracting accurate data from multiple databases</li><li>Design and build interactive dashboards and reports using Power BI</li><li>Analyze data to identify trends, optimize processes, and deliver timely insights</li><li>Evaluate datasets for accuracy, completeness, and scope; explain anomalies or inconsistencies</li><li>Support pharmaceutical manufacturer clients with data requests, reporting, and insights</li><li>Aggregate data from multiple sources to support reporting needs</li><li>Investigate and resolve data discrepancies using SQL or statistical analysis tools</li><li>Manage daily reporting and trend analysis for multiple programs</li><li>Ensure reports meet program requirements and are delivered accurately and on time</li><li>Collaborate effectively with cross-functional teams to achieve shared objectives</li><li>Develop reporting solutions by assessing information needs, consulting with users, analyzing workflows, and following the software development lifecycle</li><li>Maintain compliance with all applicable healthcare data privacy regulations, including HIPAA</li></ul><p><strong>Performance Criteria</strong></p><p>Performance is measured by the accuracy, quality, and timeliness of reporting, as well as effective communication with internal teams and external stakeholders. Meeting performance targets across assigned programs is essential.</p>
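<p>As one way to approach the discrepancy investigation described above, a pandas sketch that compares the same measure pulled from two databases; the join key and column names are illustrative assumptions.</p>
<pre><code># Sketch: surface rows where two source systems disagree on a quantity.
import pandas as pd

def find_discrepancies(src_a: pd.DataFrame, src_b: pd.DataFrame) -> pd.DataFrame:
    """Join on the shared key and keep rows whose quantities do not match."""
    merged = src_a.merge(src_b, on="claim_id", suffixes=("_a", "_b"))
    mismatched = merged[merged["dispensed_qty_a"] != merged["dispensed_qty_b"]]
    return mismatched[["claim_id", "dispensed_qty_a", "dispensed_qty_b"]]

# Usage: review find_discrepancies(pharmacy_df, claims_df) before reporting.
</code></pre>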
<p>We are looking for a Business Analyst for a Contract engagement supporting Finance Operations. In this role, you will work closely with finance leaders and cross-functional stakeholders to turn complex data into meaningful business insights that strengthen planning, reporting, and decision-making. This position is ideal for someone who combines analytical thinking with strong business acumen and can use modern reporting tools to improve visibility into financial performance.</p><p><br></p><p>Responsibilities:</p><p>• Partner with Global Finance leaders and stakeholders to identify reporting needs and translate business questions into actionable analytics solutions.</p><p>• Build and enhance interactive dashboards and reporting tools using Microsoft Power BI to improve access to financial and operational data.</p><p>• Analyze large data sets to uncover trends, variances, and opportunities that support better business decisions.</p><p>• Develop recurring and ad hoc reports that increase efficiency, accuracy, and consistency across finance reporting processes.</p><p>• Collaborate with teams across multiple locations to gather requirements, validate outputs, and ensure reporting solutions meet business needs.</p><p>• Use Microsoft Excel to perform detailed data reviews, reconciliations, and supplemental analysis where needed.</p><p>• Monitor data quality and resolve reporting issues by investigating inconsistencies and recommending practical improvements.</p><p>• Support changes to reporting processes or systems by documenting requirements, testing outputs, and helping stakeholders adopt updated tools and workflows.</p>