We are looking for a Data Engineer to strengthen our data and analytics capabilities in West Chester, Pennsylvania. This role will shape reliable data architecture, support enterprise reporting, and help turn complex information into practical business insight. The position is ideal for someone who enjoys building scalable data solutions, improving performance, and working across Microsoft-based data technologies.<br><br>Responsibilities:<br>• Design and support enterprise data solutions that enable dependable analytics, reporting, and operational decision-making.<br>• Build, optimize, and maintain database structures and data processing workflows using SQL Server, Azure SQL Database, and T-SQL.<br>• Develop and enhance SSIS packages and related data pipelines to ensure accurate, timely, and efficient movement of information across systems.<br>• Create scalable datasets and reporting foundations that support Power BI dashboards and broader business intelligence needs.<br>• Monitor data platform performance, troubleshoot issues, and implement improvements that increase stability, security, and efficiency.<br>• Partner with business and technical stakeholders to translate reporting and analytics goals into practical data engineering solutions.<br>• Lead efforts to move legacy SQL Server workloads into Azure-based services while maintaining data integrity and minimizing disruption.<br>• Establish standards and best practices for data quality, documentation, and ongoing platform maintenance.
We are looking for a Data Engineer to join a growing Data & Reporting team in Austin, Texas. This contract opportunity with potential for a permanent role is ideal for a detail-oriented candidate who can turn complex business data into reliable reporting solutions that support informed decision-making across the organization. In this role, you will help build and maintain the data foundation behind dashboards, reports, and analytics tools while partnering with stakeholders to translate business needs into scalable technical solutions.<br><br>Responsibilities:<br>• Build and enhance data architectures that collect, structure, and deliver information for reporting and analytics use across the organization.<br>• Design, develop, and support end-to-end ETL processes, moving data from source systems into data warehouses, operational stores, and business intelligence platforms.<br>• Partner with business subject matter experts to identify data sources, define reporting needs, and shape data models that align with operational goals.<br>• Create and maintain database solutions that support efficient storage, retrieval, performance tuning, and ongoing system reliability.<br>• Administer server databases by handling upgrades, patching, troubleshooting, and optimization activities to ensure stable production environments.<br>• Develop logical and conceptual data models and expand enterprise data structures to accommodate new business requirements and incoming data sources.<br>• Translate business questions into technical specifications and build reporting-ready datasets, metadata, and presentation-layer objects for dashboards and self-service analytics.<br>• Support data migration efforts from legacy platforms to modern solutions, including development, testing, deployment preparation, and code promotion activities.<br>• Work closely with stakeholders to determine which metrics and data elements provide the most value for visual reporting and decision support.<br>• Contribute to additional data engineering and reporting initiatives as business priorities evolve.
We are looking for a skilled Data Engineer to join our team in Tampa, Florida. This is a contract-to-permanent position, offering an excellent opportunity to contribute to innovative business intelligence solutions while advancing your career. The ideal candidate will have a strong background in data engineering, database design, and analytics, with the ability to solve complex problems and deliver high-quality results.<br><br>Responsibilities:<br>• Design and implement robust business intelligence solutions tailored to meet organizational needs.<br>• Collaborate with stakeholders to gather user requirements and translate them into technical and functional specifications.<br>• Create and maintain databases and data marts that support analytics and reporting activities.<br>• Develop and optimize ETL processes to efficiently load data into data marts.<br>• Monitor and ensure the accuracy, consistency, and quality of data within databases and reporting systems.<br>• Recommend and implement governance practices to improve self-service BI and analytics capabilities.<br>• Develop automated data validation checks to maintain data integrity and accuracy.<br>• Utilize dimensional modeling and star/snowflake schemas to design effective data warehouses.<br>• Troubleshoot and debug issues across application and database layers to ensure smooth operations.<br>• Perform exploratory data analysis to identify trends, anomalies, and areas for improvement.
<p>Data Engineer</p><p>On-site | Austin, TX | Contract</p><p><br></p><p>Robert Half is partnering with a financial services organization to hire a Data Engineer in Austin, TX. This contract opportunity is ideal for someone with 3 years of experience building and optimizing modern data pipelines and analytics environments. The role focuses on moving data across cloud-based platforms to support reliable reporting, stronger data visibility, and informed decision-making across the organization.</p><p><br></p><p><strong>Responsibilities:</strong></p><p>• Build and maintain data pipelines that move information from data lake environments into structured warehouse and reporting platforms.</p><p>• Develop, schedule, and optimize ETL and ELT workflows using Matillion to support dependable data delivery.</p><p>• Design and manage Snowflake data models that improve accessibility, performance, and scalability for business users.</p><p>• Partner with analytics and reporting stakeholders to prepare datasets that support Tableau dashboards and visual insights.</p><p>• Monitor data processing jobs, troubleshoot failures, and resolve quality issues to maintain trusted data assets.</p><p>• Work within AWS-based environments to support secure, efficient, and scalable data integration processes.</p><p>• Collaborate with cross-functional teams to understand data needs and translate them into practical engineering solutions.</p>
We are looking for a Data Engineer to support the design, development, and optimization of modern data solutions in Houston, Texas. This long-term contract position is ideal for someone who enjoys building reliable pipelines, working with large-scale datasets, and improving the flow of information across systems. The role offers the opportunity to contribute technical expertise in a collaborative environment focused on performance, scalability, and data quality.<br><br>Responsibilities:<br>• Build and maintain scalable data pipelines that collect, transform, and deliver data for analytics and operational use.<br>• Develop ETL processes that improve the accuracy, consistency, and availability of data across multiple sources.<br>• Use Python and Apache Spark to process large datasets efficiently and support advanced data engineering workflows.<br>• Work with Hadoop-based environments to manage distributed data processing and storage activities.<br>• Integrate streaming and messaging solutions using Apache Kafka to support timely data movement and event-driven processing.<br>• Monitor pipeline performance, troubleshoot failures, and implement enhancements that strengthen reliability and efficiency.<br>• Partner with technical and business stakeholders to understand data needs and translate them into practical engineering solutions.
<p>We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows.</p><p>This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ELT/ETL data pipelines and workflows</li><li>Develop and optimize Snowflake-based data warehouse solutions</li><li>Build and maintain data models and transformation logic to support analytics and reporting</li><li>Write efficient and high-quality Python and SQL code to support data engineering processes</li><li>Develop reusable data engineering frameworks and backend data services</li><li>Implement and maintain CI/CD pipelines using GitHub and related tooling</li><li>Build automated testing frameworks to ensure data quality and reliability</li><li>Create reporting and visualization solutions using tools such as Power BI</li><li>Monitor production data systems and resolve performance or reliability issues</li><li>Support continuous improvement of data architecture, processes, and standards</li></ul>
We are looking for a hands-on Data Engineer to help build and expand an enterprise data platform in Mason, Ohio. This role will focus on creating a scalable Azure and Microsoft Fabric environment that brings together information from multiple business systems to support reliable reporting, analytics, and future data-driven innovation. The position is ideal for someone who enjoys designing core data architecture, improving data quality, and enabling business teams with trusted insights across manufacturing, service, supply chain, sales, and finance.<br><br>Responsibilities:<br>• Design and develop a modern enterprise data ecosystem using Azure and Microsoft Fabric, covering ingestion, storage, transformation, and delivery for analytics use cases.<br>• Create and support automated data pipelines that collect information from operational applications, external portals, and databases into a centralized environment.<br>• Structure raw, refined, and business-ready data layers so teams can access consistent data for dashboards, reporting, and self-service analysis.<br>• Consolidate and standardize data from platforms such as Epicor, JobBOSS, Salesforce, and other internal or third-party systems used across the organization.<br>• Build reusable data models and business logic that support reporting across order management, procurement, inventory, manufacturing operations, service, and finance.<br>• Introduce data validation, reconciliation, monitoring, and error-handling processes to strengthen data accuracy and reduce manual correction efforts.<br>• Partner with reporting teams by enabling governed semantic models, optimizing datasets, and supporting secure access to detailed transactional data in Power BI.<br>• Define and apply security controls, including role-based permissions and data access rules, in alignment with internal governance standards and privacy expectations.<br>• Maintain clear technical documentation for mappings, lineage, transformation rules, data definitions, and engineering standards.<br>• Assess existing integration methods, including manual and legacy approaches, and help implement more scalable and controlled data delivery patterns over time.
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They have a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate will be a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Microsoft Purview and advance their data governance practices. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li><li>Work with the Microsoft partner to implement the Data Governance layer of Purview (Unified Data Catalog, Data Quality, Data Lineage, Data Health Management); the Data Security layer of Purview is already implemented.</li><li>Assist with connecting Microsoft Fabric to Purview.</li></ul><p>Qualifications:</p><ul><li>Experience as a Data Governance Analyst.</li><li>Administration experience with Microsoft Purview or a similar tool such as Collibra, Informatica, or Databricks; experience with Microsoft Purview is preferred.</li><li>Excellent communication skills, with the ability to lead change, advance the data governance practice, and gain buy-in from stakeholders.</li></ul>
<p>We are looking for a Data Engineer to join a team focused on building reliable, scalable data solutions. In this role, you will create and enhance cloud-based data pipelines, organize data for analytics, and help ensure that business teams have access to trusted information. This position also partners closely with technical and non-technical stakeholders to turn reporting and data needs into practical engineering outcomes.</p><p><br></p><p>Responsibilities:</p><p>• Create and support scalable data ingestion and transformation workflows using Azure Data Factory, Databricks, and PySpark.</p><p>• Connect and consolidate data from enterprise platforms, operational databases, telematics feeds, APIs, and other internal or external sources.</p><p>• Structure and manage data within Azure Data Lake and lakehouse environments to support performance, accessibility, and long-term maintainability.</p><p>• Design curated datasets, data models, and schemas that improve usability for analytics, business intelligence, and downstream reporting.</p><p>• Apply governance and lineage practices through Unity Catalog while promoting strong data quality, consistency, and security standards.</p><p>• Work with business stakeholders and cross-functional teams to gather requirements, define technical specifications, and deliver data solutions aligned with operational needs.</p><p>• Improve pipeline stability and efficiency by troubleshooting failures, resolving performance issues, and refining storage and query strategies.</p><p>• Support Power BI reporting by preparing datasets, assisting with model improvements, and helping maintain reporting standards and governance practices.</p><p>• Use GitHub-based development practices for version control, peer review, CI/CD, and disciplined deployment processes.</p><p>• Mentor less-experienced engineers and contribute to a collaborative environment focused on continuous improvement and dependable delivery.</p>
We are looking for a talented Data Engineer to join our team in Grand Rapids, Michigan. In this role, you will focus on designing, building, and optimizing robust data solutions using Snowflake and other cloud-based technologies. You will work closely with business intelligence and analytics teams to deliver scalable, high-performance data pipelines that support organizational goals.<br><br>Responsibilities:<br>• Design and implement scalable data models, schemas, and tables within Snowflake, including staging, integration, and presentation layers.<br>• Develop and optimize data pipelines using Snowflake tools such as Snowpipe, Streams, Tasks, and stored procedures.<br>• Ensure data security and access through role-based controls and best practices for data sharing.<br>• Build and maintain ETL pipelines leveraging tools like dbt, Matillion, Fivetran, Informatica, or Azure-native solutions.<br>• Integrate data from diverse sources such as APIs, IoT devices, and NoSQL databases to create unified datasets.<br>• Enhance performance by utilizing clustering, partitioning, caching, and efficient warehouse sizing strategies.<br>• Work with cloud platforms such as AWS, Azure, or Google Cloud to support Snowflake infrastructure and operations.<br>• Implement automated workflows and CI/CD processes for seamless deployment of data solutions.<br>• Maintain high standards for data accuracy, completeness, and reliability while supporting governance and documentation.<br>• Work closely with analytics, reporting, and business teams to troubleshoot issues and deliver scalable solutions.
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
<p>Robert Half is seeking a Data Engineer to build, scale, and lead high-impact data solutions. This role combines hands-on data engineering with team leadership, mentoring, and oversight of end-to-end analytics pipelines that turn raw data into actionable business insights.</p><p>This role will be business-facing, working with departments across the organization to address data solutions.</p><p>This role is onsite in Albuquerque, New Mexico.</p><p><br></p><p>What You’ll Do:</p><ul><li>Lead and mentor a team of data engineers and analysts; set standards, review work, and support professional growth</li><li>Design, build, and oversee scalable ETL pipelines using Python, SQL, SSIS, and Airflow</li><li>Develop dimensional data models using Kimball methodology</li><li>Create dashboards and reports using Power BI and SSRS</li><li>Partner with business and IT stakeholders on analytics, ad hoc reporting, and data initiatives</li><li>Ensure data quality, governance, and compliance with PCI, PII, and regulatory standards</li><li>Automate workflows and reporting using Python, PowerShell, and modern analytics tools</li><li>Other duties as needed</li></ul><p><br></p>
<p>Robert Half Technology is seeking a <strong>mid-to-senior level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (hybrid, 3 days onsite)</p><p><strong>Schedule:</strong> Monday-Friday (9AM-5PM PST)</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks</li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong></li><li>Architect and maintain <strong>data marts and data warehouse structures</strong></li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong></li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries)</li><li>Support migration and consolidation of data into Databricks</li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability</li><li>Define best practices for <strong>data storage, organization, and access</strong></li><li>Ensure alignment with existing compliance and data standards</li></ul><p><br></p>
<ul><li>Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).</li><li>Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).</li><li>Implement and manage data orchestration and dependency management using Dagster or similar tools.</li><li>Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.</li><li>Optimize data models and storage strategies for performance, scalability, and cost efficiency.</li><li>Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.</li><li>Support CI/CD pipelines and infrastructure-as-code practices for data platforms.</li><li>Enforce data security, governance, and compliance best practices within Azure.</li></ul>
<p>We are seeking a skilled and motivated Data Engineer to join our team, with deep hands-on experience building and optimizing data pipelines and lakehouse solutions in Databricks. In this role, you will collaborate with cross-functional teams to design, develop, and operate scalable, reliable data products that drive business value.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain batch and streaming data pipelines using Databricks (Spark, Delta Lake, Jobs/Workflows).</li><li>Partner with data scientists, analysts, and application teams to deliver trusted, well-modeled data sets and features in the Databricks Lakehouse.</li><li>Optimize Spark jobs (partitioning, caching, join strategies) and Databricks cluster configurations for performance, scalability, and cost.</li><li>Implement data quality checks, observability, governance, and security controls (e.g., Unity Catalog, access policies) within Databricks.</li><li>Troubleshoot and resolve pipeline failures, data issues, and production incidents; perform root-cause analysis and implement preventative improvements.</li></ul><p><br></p>
We are looking for a Data Engineer to help shape and strengthen the organization’s data ecosystem in Spartanburg, South Carolina. This role focuses on building scalable data structures and reliable integration solutions that support analytics, operational reporting, and long-term business goals. The ideal candidate brings a strong background in modern data platforms and enjoys partnering with technical and business teams to deliver secure, high-quality data solutions.<br><br>Responsibilities:<br>• Develop and refine data models, storage frameworks, and analytical repositories that enable efficient access to trusted information.<br>• Create scalable architecture approaches that support enterprise objectives while improving performance, reliability, and long-term maintainability.<br>• Establish data design standards, architectural patterns, and governance practices that promote consistency, quality, and security across platforms.<br>• Partner with software developers, analytics teams, and business stakeholders to translate operational needs into practical data solutions.<br>• Build and enhance data pipelines and integration processes that move information accurately across systems and support reporting and analysis.<br>• Implement processes for master data, metadata, and data quality management to strengthen governance and regulatory compliance.<br>• Assess emerging tools, cloud technologies, and platform options to recommend solutions that balance cost, scalability, and functionality.<br>• Work closely with data engineering peers to encourage strong technical alignment, knowledge sharing, and continuous improvement across the team.
<p>Seeking a Data Engineer to build and maintain data pipelines and reporting systems.</p><p><strong>Responsibilities</strong></p><ul><li>Design and maintain ETL processes</li><li>Work with large datasets in SQL</li><li>Optimize database performance</li><li>Support BI/reporting teams</li></ul><p><br></p>
<p>A manufacturing and distribution company is looking for a Data Engineer with 3+ years of experience to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will play a crucial part in developing, optimizing, and maintaining the data infrastructure that supports analytics, business intelligence, and data-driven decision-making, using Snowflake, Matillion, and other tools. This position will be in-office to work closely with the team. No third parties, please.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization.</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
<p>We are looking for an experienced Data Engineer to design and support data exchange solutions that connect external business partners with internal systems. This role is primarily remote, working with teams across different office locations; we are looking for a candidate who lives in North Carolina, within two hours of Greensboro. This role focuses on building reliable integration processes, transforming structured files and API-based data, and ensuring critical information is available for reporting and operational use. The ideal candidate brings strong technical depth in data movement and troubleshooting, along with a practical understanding of manufacturing and supply chain workflows.</p><p><br></p><p>Responsibilities:</p><p>• Build and maintain business-to-business data interfaces that onboard new partner organizations and align incoming data with internal database structures.</p><p>• Develop automated workflows that ingest, transform, validate, and deliver data using file-based exchanges, APIs, and structured transaction formats such as EDI and X12.</p><p>• Configure and manage end-to-end integration processes across system interfaces, including flat-file handling, file sharing, and reporting-related data movement.</p><p>• Lead data transformation efforts through the full lifecycle by designing solutions, testing functionality, deploying processes, and stabilizing production performance.</p><p>• Investigate integration failures or data quality issues, identify root causes, and implement corrective actions to restore reliable processing.</p><p>• Partner with business intelligence and reporting teams to provide access to accurate, usable data sources that support analysis and operational decision-making.</p><p>• Apply manufacturing and supply chain process knowledge to structure data flows that support purchasing, components, orders, and assembly-related transactions.</p><p>• Use available tools and platforms to execute integration projects independently, including extracting data from enterprise applications and translating it into usable formats.</p><p>• Create scalable data pipelines that enable customer and order transactions to move through systems with minimal manual intervention.</p>
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines using modern frameworks and tools.<br>• Implement ETL processes to ensure accurate and efficient data transformation.<br>• Optimize data storage and retrieval systems for performance and scalability.<br>• Collaborate with cross-functional teams to understand data requirements and deliver solutions.<br>• Utilize Apache Spark and Hadoop for large-scale data processing.<br>• Work with Databricks to streamline data workflows and enhance analytics.<br>• Apply machine learning techniques using tools like scikit-learn and Pandas.<br>• Integrate Kafka for real-time data streaming and processing.<br>• Analyze and troubleshoot data-related issues to ensure system reliability.<br>• Document processes and workflows to support future development and maintenance.
<p>Robert Half is seeking a <strong>Contract Data Engineer</strong> to support our client’s data and analytics initiatives. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that enable efficient data ingestion, transformation, and delivery. The ideal candidate has strong experience working with modern data platforms, cloud environments, and large-scale datasets.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li><strong>Data Pipeline Development:</strong> Design, build, and maintain scalable ETL / ELT pipelines to ingest, transform, and deliver data from multiple sources.</li><li><strong>Data Architecture:</strong> Develop and optimize data models, schemas, and warehouse structures to support analytics, reporting, and business intelligence needs.</li><li><strong>Cloud Data Platforms:</strong> Work within cloud environments such as <strong>AWS, Azure, or GCP</strong> to deploy and manage data solutions.</li><li><strong>Data Warehousing:</strong> Design and support enterprise data warehouses using platforms such as <strong>Snowflake, Redshift, BigQuery, or Azure Synapse</strong>.</li><li><strong>Big Data Processing:</strong> Develop solutions using big data technologies such as <strong>Spark, Databricks, Kafka, and Hadoop</strong> when required.</li><li><strong>Performance Optimization:</strong> Tune queries, pipelines, and storage solutions for performance, scalability, and cost efficiency.</li><li><strong>Data Quality & Reliability:</strong> Implement monitoring, validation, and alerting processes to ensure data accuracy, integrity, and availability.</li><li><strong>Collaboration:</strong> Work closely with Data Analysts, Data Scientists, Software Engineers, and business stakeholders to understand requirements and deliver data solutions.</li><li><strong>Documentation:</strong> Maintain detailed documentation for pipelines, data flows, and system architecture.</li></ul><p><br></p>
<p>We are seeking a Data Scientist to support and enhance production analytics and machine learning solutions. This role focuses on improving model performance, scalability, and reliability while partnering with cross-functional teams to deliver impactful data-driven outcomes.</p><p><br></p><p><strong>Responsibilities</strong></p><ul><li>Develop, evaluate, and deploy machine learning and analytics solutions in production environments.</li><li>Analyze existing models and data workflows; identify opportunities for improvement and modernization.</li><li>Collaborate with product, engineering, and business teams to deliver scalable solutions.</li><li>Establish performance monitoring, testing, and iteration processes for continuous improvement.</li><li>Contribute to data pipeline development and ensure high-quality, reliable datasets.</li></ul><p><br></p>
We are looking for a Data Scientist to join a fast-moving IT consulting environment in Atlanta, Georgia. This role focuses on turning complex data into practical business insights, with a strong emphasis on forecasting, predictive modeling, and customer-focused problem solving. The ideal candidate combines advanced machine learning expertise with hands-on data preparation skills and can clearly explain how analytical work influences end users and business outcomes.<br><br>Responsibilities:<br>• Build, validate, and refine forecasting and predictive models using Python and modern machine learning frameworks for business-driven use cases.<br>• Develop analytical solutions with tools such as scikit-learn, XGBoost, LightGBM, and time-series or deep learning methods based on project needs.<br>• Use Databricks and Apache Spark to process large datasets efficiently and support scalable model development workflows.<br>• Prepare, transform, and organize data by writing queries, performing ETL tasks, and improving data quality for downstream analysis.<br>• Translate technical findings into clear recommendations for clients and stakeholders, emphasizing business impact and user experience.<br>• Partner with customer-facing teams to define problem statements, shape data-driven approaches, and deliver actionable insights in a fast-paced setting.<br>• Apply product thinking when designing models and analytical outputs to ensure solutions align with customer needs and practical use.<br>• Contribute domain knowledge to projects involving retail or consumer goods data, helping tailor models to industry-specific patterns and challenges.
<p>We are looking for an experienced Data Scientist to join our team in San Francisco, California. In this senior-level role, you will lead initiatives from concept to completion, leveraging your expertise in data analytics, finance, and product development. This is a hybrid (2-3 days) long-term contract position based in San Francisco and offers a unique opportunity to work on impactful projects while mentoring less experienced team members.</p><p><br></p><p>Responsibilities:</p><p>• Take full ownership of projects, managing them from initial planning through delivery and implementation.</p><p>• Develop and prototype innovative, data-driven tools, including internal tools such as pricing engines.</p><p>• Define and implement parameters and logic for pricing outputs based on large datasets.</p><p>• Build and maintain data models and prototypes to support business decision-making processes.</p><p>• Collaborate with stakeholders to ensure tools and models align with business needs and fund-related use cases.</p><p>• Perform hands-on analysis and modeling of large datasets to support pricing and tool development.</p><p>• Create solutions that integrate data insights with financial operations, enhancing efficiency and accuracy.</p><p>• Mentor and oversee less experienced team members, ensuring team goals and project deliverables are met.</p><p>• Drive execution of projects by defining requirements, managing timelines, and delivering functional products.</p><p>• Leverage large-scale data processing and distributed systems such as Spark or Airflow.</p><p><br></p>
<p>We are seeking a highly skilled Full Stack Data Engineer who thrives in building modern, scalable data platforms from the ground up. This is an opportunity to work on a cloud-native data stack, influence architecture decisions, and deliver solutions that directly power business insights and operations.</p><p>If you enjoy owning the full lifecycle—from data ingestion to application layer—this role will be a strong fit.</p><p><br></p><p><strong>What You’ll Do</strong></p><p>You will operate as a hands-on engineer across the full data stack:</p><ul><li>Design, build, and maintain scalable ELT pipelines and workflows</li><li>Develop and optimize data models and warehouse structures in Snowflake</li><li>Build full stack data applications and backend services</li><li>Write clean, efficient Python and SQL code</li><li>Develop reusable data frameworks and components</li><li>Implement automated testing for data quality and reliability</li><li>Build and maintain CI/CD pipelines (GitHub-based)</li><li>Create reporting and visualization solutions (Power BI or similar)</li><li>Monitor production systems and troubleshoot data issues proactively</li></ul><p><strong>Tech Stack</strong></p><ul><li>Data Platform: Snowflake</li><li>Languages: Python, SQL</li><li>Cloud: AWS / Azure / GCP (environment dependent)</li><li>DevOps: GitHub, CI/CD pipelines</li><li>Visualization: Power BI (or similar BI tools)</li></ul>