<p>An organization operating at global scale is seeking a <strong>Senior Data Scientist</strong> to join its corporate data science team. This role goes far beyond traditional analytics—you will develop innovative data-driven technologies, advanced machine learning models, and decision‑support systems that directly influence strategic and operational outcomes across a complex aviation and logistics environment.</p><p>The ideal candidate combines strong programming capability, deep machine learning expertise, and a solid foundation in operations research, along with the ability to clearly communicate technical insights to leaders across the business.</p><p>If you’re excited about applying science, experimentation, and advanced analytics to solve real operational problems, this role is an excellent opportunity to make a measurable impact.</p><p><br></p><p><strong>Position Summary</strong></p><p>As a Senior Data Scientist, you will partner with engineering, product, and business teams to identify opportunities where data science can drive measurable value. You will lead end‑to‑end development of predictive and prescriptive models, contribute to experimentation frameworks, and help shape the organization’s long‑term data science strategy. 
This role also provides technical leadership and mentorship to junior data science team members.</p><p><br></p><p><strong>Key Responsibilities</strong></p><p><strong>Strategic & Technical Leadership</strong></p><ul><li>Lead the planning, design, and execution of complex data science initiatives aligned with business objectives.</li><li>Translate ambiguous business requirements into well‑defined data science projects, deliverables, and timelines.</li><li>Partner with cross‑functional stakeholders to identify impactful use cases across operations and corporate functions.</li></ul><p><strong>Data Science & Modeling</strong></p><ul><li>Develop predictive, prescriptive, and optimization models to enhance operational efficiency and strategic planning.</li><li>Design and run experiments to test hypotheses and measure impact using statistical methods.</li><li>Perform exploratory data analysis to uncover insights, trends, and relationships across large and complex datasets.</li><li>Build and maintain decision‑support tools powered by machine learning and operations research techniques.</li><li>Monitor model performance in production and iterate as needed.</li></ul><p><strong>Data Engineering & Analysis</strong></p><ul><li>Collect, clean, and process structured and unstructured data from internal and external sources.</li><li>Perform ETL tasks, develop data pipelines, and apply advanced data preparation techniques.</li><li>Use tools such as SQL, Python, R, Pandas, Spark, or similar frameworks to analyze large datasets.</li></ul><p><strong>Communication & Collaboration</strong></p><ul><li>Create compelling visualizations, dashboards, and presentations to communicate complex insights to technical and non‑technical audiences.</li><li>Provide mentorship, guidance, and subject‑matter expertise to junior data scientists.</li><li>Contribute to the evaluation and adoption of emerging technologies, modeling techniques, and industry best practices.</li></ul><p><br></p>
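<p>As a concrete illustration of the experimentation work described above, the sketch below runs a two-sample permutation test in plain Python. The data, names, and threshold are hypothetical, and a production analysis would typically use a statistics library rather than this minimal version.</p>

```python
import random
import statistics

def permutation_test(control, treatment, n_resamples=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Returns the fraction of label shufflings whose mean difference
    is at least as extreme as the observed one (the p-value).
    """
    rng = random.Random(seed)
    observed = statistics.mean(treatment) - statistics.mean(control)
    pooled = list(control) + list(treatment)
    n_treat = len(treatment)
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_treat]) - statistics.mean(pooled[n_treat:])
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_resamples

# Hypothetical turnaround times (minutes) under the current and proposed process.
control = [42, 45, 39, 48, 44, 41, 46, 43]
treatment = [38, 36, 40, 35, 37, 39, 34, 36]
p_value = permutation_test(control, treatment)
```

<p>A small p-value here would support rolling out the proposed process; the same pattern generalizes to any operational metric for which a mean difference is meaningful.</p>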
<p>We are looking for a skilled Data Scientist to join our team on a long-term contract basis. This role involves designing innovative data models, ensuring data quality, and integrating diverse data ecosystems to support advanced audience insights and campaign management. If you have a passion for leveraging big data technologies to drive impactful results, we encourage you to apply.</p><p><br></p><p>Responsibilities:</p><p>• Develop scalable data architectures and pipelines to support real-time audience data processing and activation.</p><p>• Design and optimize graph models for representing complex relationships between individuals, devices, and households.</p><p>• Integrate data from various platforms, including DSPs, SSPs, DMPs, CDPs, and clean rooms, to enable audience onboarding and campaign measurement.</p><p>• Establish and maintain data validation, deduplication, and matching pipelines to ensure data accuracy and compliance with privacy regulations.</p><p>• Collaborate with cross-functional teams, including Data Science, Product, and Ad Operations, to operationalize audience data within workflows and optimization platforms.</p><p>• Build dashboards and monitoring tools to track performance metrics such as data quality, match rates, and audience graph efficiency.</p><p>• Create and maintain data models and pipelines to support household graph development and campaign lifecycle management.</p><p>• Integrate identity provider data with first-party data to enhance audience insights.</p><p>• Ensure data systems are optimized for scalability and performance, managing billions of identifiers across millions of households.</p>
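<p>At its core, the household-graph work described above is connected-components resolution over identifier match edges. The sketch below shows the idea with a disjoint-set structure; the identifiers and match edges are hypothetical, and a production system would run this logic in a distributed graph or Spark job rather than plain Python.</p>

```python
from collections import defaultdict

class UnionFind:
    """Disjoint-set structure for resolving matched identifiers into clusters."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        root_a, root_b = self.find(a), self.find(b)
        if root_a != root_b:
            self.parent[root_b] = root_a

# Hypothetical match edges from deterministic joins (shared email, login, etc.).
edges = [
    ("email:a@example.com", "device:phone-1"),
    ("device:phone-1", "device:tv-9"),
    ("email:b@example.com", "device:laptop-2"),
]

uf = UnionFind()
for a, b in edges:
    uf.union(a, b)

# Group identifiers by their root to materialize household clusters.
households = defaultdict(set)
for node in uf.parent:
    households[uf.find(node)].add(node)
```

<p>Deduplication and matching pipelines then reduce to producing trustworthy edges; the clustering semantics stay the same at any scale.</p>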
<p><strong>About the Role</strong></p><p>We’re seeking a Data Scientist to help build predictive models, develop advanced analytics solutions, and uncover insights that drive strategic decision‑making. This role partners closely with data engineering, analytics, business leaders, and product stakeholders to solve complex problems using statistical modeling, machine learning, and exploratory analysis.</p><p><strong>Key Responsibilities</strong></p><ul><li>Build, train, and validate predictive and statistical models using Python, R, or similar tools.</li><li>Conduct data exploration, feature engineering, and hypothesis testing across large, complex datasets.</li><li>Develop machine learning pipelines and deploy models into production environments.</li><li>Partner with cross‑functional teams to translate business needs into analytical solutions.</li><li>Build dashboards, presentations, and visualizations to communicate insights clearly.</li><li>Evaluate model performance and monitor metrics to ensure operational accuracy and stability.</li><li>Work with data engineers on data quality, pipeline optimization, and scalable architectures.</li><li>Document methodologies, assumptions, and model outputs for technical and non‑technical audiences.</li></ul><p></p>
<p>We are looking for an experienced Data Analyst III to join our team. In this role, you will apply advanced mathematical and data modeling techniques to deliver insightful business analyses and recommendations. You will collaborate with multiple business groups and senior stakeholders to drive informed decision-making and enhance processes. This is a long-term contract position offering an exciting opportunity to work on complex projects and influence strategic outcomes.</p><p><br></p><p>Responsibilities:</p><p>• Conduct detailed analyses to identify trends and provide actionable recommendations for business solutions.</p><p>• Summarize and present findings through reports, charts, and presentations to stakeholders.</p><p>• Develop and refine analytical models to support future business decisions.</p><p>• Collaborate with business teams to gather requirements and design effective data analysis strategies.</p><p>• Retrieve, verify, and prepare data from various sources for accurate reporting.</p><p>• Create advanced queries and tools to simplify data management and reporting processes.</p><p>• Forecast outcomes and analyze trends to support strategic planning and process improvements.</p><p>• Act as a liaison between departments, providing data-driven insights and answering queries about business processes.</p><p>• Mentor and guide less experienced team members, assigning tasks and ensuring project deliverables.</p><p>• Support cross-functional projects and provide input to external groups, vendors, or agencies as needed.</p>
<p>We are looking for a skilled Master Data Analyst to play a critical role in managing and maintaining master data across various platforms. In this long-term contract position, you will ensure data accuracy, integrity, and compliance with industry standards while collaborating with cross-functional teams to optimize system performance. Based in Northern KY, this role offers an opportunity to contribute to global data harmonization initiatives and support business intelligence efforts. This is a 6-month contract opportunity - <strong>hybrid with 3 days onsite</strong>. No remote availability for this role.</p><p><br></p><p>Responsibilities:</p><p>• Create and maintain accurate master data for finished goods across multiple platforms, adhering to established standards.</p><p>• Collaborate with corporate IT teams to enhance system performance and implement data optimization strategies.</p><p>• Oversee the publication process to ensure accurate data sharing with customers through designated platforms.</p><p>• Conduct regular data audits and utilize customer scorecards to verify the integrity of master data.</p><p>• Ensure compliance with GS1 guidelines, including updates to product dimensions, weights, and hierarchical information.</p><p>• Partner with cross-functional teams to verify the accuracy of nutritional, allergen, and barcode information on artwork.</p><p>• Develop and manage bills of materials (BOMs) and routers for co-packed manufacturing operations.</p><p>• Lead initiatives to harmonize global data standards across U.S. and Canadian operations.</p><p>• Manage the lifecycle of items, including the setup of new products and the obsolescence of outdated items.</p><p>• Provide key metrics and analytics to track the efficiency of new product introductions and other data processes.</p>
<p>We are looking for an Information Systems Developer to join our client's team in Birmingham, Alabama. In this role, you will play a key part in developing and optimizing data-driven solutions that support manufacturing operations and decision-making processes. This is a Contract to permanent position, offering an excellent opportunity to contribute to innovative projects within the automotive manufacturing industry.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement advanced data analytics and business intelligence solutions to enhance operational efficiency.</p><p>• Create and maintain dashboards and visualization tools using Power BI to present actionable insights.</p><p>• Analyze business systems, workflows, and manufacturing processes to identify opportunities for improvement.</p><p>• Design and deploy scalable software applications and automation tools using programming languages such as Python and C#.</p><p>• Integrate data from shop-floor systems and automation equipment into unified reporting platforms.</p><p>• Build and configure real-time monitoring and data collection systems using tools like Ignition.</p><p>• Collaborate with cross-functional teams to present project updates and technical information.</p><p>• Conduct troubleshooting and root cause analysis to improve digital manufacturing processes.</p><p>• Mentor entry-level developers and provide technical guidance on Industry 4.0 initiatives.</p><p>• Ensure seamless communication and connectivity between machines, systems, and data interfaces.</p>
<p>We are looking for a skilled Data Warehouse Analyst to join our team in New Jersey. In this role, you will transform logistics challenges into actionable insights through advanced data analysis and reporting. By collaborating with cross-functional teams, you will play a pivotal role in enhancing operational efficiency and driving key business decisions.</p><p><br></p><p>Responsibilities:</p><p>• Collaborate with Operations, Transportation, and Finance teams to establish and refine KPIs that drive logistics and fulfillment performance.</p><p>• Develop and optimize labor planning and forecasting models for warehouse and delivery operations, partnering closely with recruitment teams.</p><p>• Analyze distribution and fulfillment data to uncover performance trends and identify cost-saving opportunities.</p><p>• Design and maintain dashboards and reports to provide real-time insights into logistics metrics, including delivery times, warehouse productivity, and route optimization.</p><p>• Automate reporting processes to improve accuracy and timeliness of operational data.</p><p>• Continuously enhance data integrity and streamline workflows to optimize logistics operations.</p><p>• Work on data modeling and warehousing projects to support scalable analytics and reporting solutions.</p><p>• Partner with stakeholders to deliver clear and actionable insights to improve decision-making processes.</p><p>• Investigate and implement tools and techniques to improve overall business intelligence capabilities.</p>
We are looking for a skilled Data Analyst to join our team in Modesto, California, within the healthcare industry. This Contract to permanent position offers an exciting opportunity to contribute to data reporting and system integration efforts that support behavioral health services. If you thrive in a collaborative environment and have a passion for leveraging technology to drive impactful outcomes, we encourage you to apply.<br><br>Responsibilities:<br>• Coordinate data reporting initiatives to ensure accuracy and consistency across systems.<br>• Conduct research to maintain compliance with state and federal behavioral health reporting regulations.<br>• Support integration efforts for electronic health record (EHR) systems and other organizational technologies.<br>• Collaborate with cross-functional teams to enhance data management and reporting processes.<br>• Utilize advanced analytical skills to support data-driven decision-making and improve service delivery.<br>• Monitor data quality and implement strategies to address discrepancies or inefficiencies.<br>• Provide technical expertise to optimize system performance and align with regulatory standards.<br>• Assist in developing documentation and training materials related to data reporting and system usage.<br>• Identify opportunities for process improvements and present actionable recommendations.<br>• Ensure the confidentiality and security of sensitive health information during data handling and reporting.
<p><strong>Data Modeling and Analysis</strong></p><ul><li>Design data models and optimize performance: structure data relationships to ensure efficient retrieval and calculation.</li><li>Create calculated columns and measures: use DAX to compute derived values and aggregate metrics.</li><li>Perform exploratory data analysis (EDA): use BI tools to explore data and identify trends and patterns.</li><li>Apply advanced data analysis techniques (e.g., statistical analysis, time series analysis, predictive modeling).</li><li>Integrate machine learning models into Power BI dashboards.</li></ul><p><strong>Dashboard Development and Visualization</strong></p><ul><li>Design dashboards: create visually appealing, interactive dashboards.</li><li>Create visualizations: use charts, graphs, and other visual elements to represent data.</li><li>Implement interactivity: add filters, slicers, and drill-down capabilities.</li></ul><p><strong>Skills and Qualifications</strong></p><ul><li>Expertise in SQL and DAX; working knowledge of Python and R.</li><li>Strong proficiency in Power BI, including data modeling and visualization.</li><li>Experience building semantic models.</li><li>Strong problem-solving skills to address technical challenges and data quality issues.</li><li>Analytical skills with the capacity to break down complex data problems and draw meaningful insights.</li></ul>
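<p>To make the calculated-measure idea above concrete for readers outside Power BI, here is the same kind of derived metric sketched in Python; the fact rows and regions are made up, and in DAX this would typically be a measure built with <code>DIVIDE</code> over a prior-period value.</p>

```python
from collections import defaultdict

# Hypothetical fact rows: (region, year, sales).
rows = [
    ("East", 2023, 120.0), ("East", 2024, 150.0),
    ("West", 2023, 200.0), ("West", 2024, 180.0),
]

# Aggregate to the (region, year) grain, as the data model would.
totals = defaultdict(float)
for region, year, sales in rows:
    totals[(region, year)] += sales

# Year-over-year growth per region: the analog of a DAX measure like
# DIVIDE([Sales] - [Sales PY], [Sales PY]).
yoy = {
    region: (totals[(region, 2024)] - totals[(region, 2023)]) / totals[(region, 2023)]
    for region in {r for r, _, _ in rows}
}
```

<p>The value of defining this as a measure rather than a stored column is that the aggregation re-evaluates under whatever filters the dashboard applies.</p>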
We are looking for an experienced Data Engineer to join our team in Cincinnati, Ohio. This long-term contract position offers the opportunity to work on cutting-edge data engineering projects while collaborating with multidisciplinary teams to deliver high-quality solutions. The ideal candidate will have a strong background in Databricks and big data technologies, along with a passion for optimizing data processes and systems.<br><br>Responsibilities:<br>• Design, build, and enhance data pipelines using Databricks Runtime, Delta Lake, Autoloader, and Structured Streaming.<br>• Implement secure and governed data access protocols utilizing Unity Catalog, workspace controls, and audit configurations.<br>• Manage and integrate structured and unstructured data from diverse sources, including APIs and cloud storage.<br>• Develop and maintain notebook-based workflows and manage jobs using Databricks Workflows and Jobs.<br>• Apply best practices for performance tuning, scalability, and cost optimization in Databricks environments.<br>• Collaborate with data scientists, analysts, and business stakeholders to deliver clean and reliable datasets.<br>• Support continuous integration and deployment processes for Databricks jobs and system configurations.<br>• Ensure high standards of data quality and security across all engineering tasks.<br>• Troubleshoot and resolve issues to maintain operational efficiency in data pipelines.
We are looking for a skilled Data Engineer to join our team in Foxborough, Massachusetts, on a long-term contract basis. In this role, you will design, optimize, and maintain data pipelines and storage solutions, leveraging modern tools to ensure high performance and reliability. This position offers an exciting opportunity to collaborate across teams and implement cutting-edge practices in data engineering and analytics.<br><br>Responsibilities:<br>• Optimize Amazon Redshift performance by configuring distribution keys, sort keys, and fine-tuning queries.<br>• Develop and maintain robust data pipelines using AWS Glue and orchestrate workflows with Airflow.<br>• Manage semantic layers and metadata to support reliable analytics and AI-driven insights.<br>• Implement best practices for data partitioning, compression, and columnar storage formats.<br>• Monitor and troubleshoot data workflows to ensure high availability, reliability, and automated observability.<br>• Automate data processing tasks using Python and AWS native tools.<br>• Enforce data security and governance policies, including row- and column-level controls, using Lake Formation and AWS services.<br>• Oversee compliance monitoring and auditing through CloudWatch, CloudTrail, and similar tools.<br>• Continuously refine and improve data architecture by adopting emerging AWS best practices and patterns.<br>• Collaborate closely with Operations, Data Governance, and other teams to align with standards and achieve delivery objectives.
<p>Our transportation client is seeking a <strong>Data Engineer</strong> to support large‑scale logistics operations by building reliable, scalable, and cloud‑based data pipelines. This role is hands‑on, focused on delivering high‑quality data flows that improve shipment visibility, operational efficiency, and real‑time analytics across the supply chain.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, build, and maintain <strong>ETL/ELT pipelines</strong> that process high‑volume operational and logistics data</li><li>Develop transformation logic and automation using <strong>Python</strong>, <strong>SQL</strong>, and Azure-native tooling</li><li>Implement and orchestrate workflows in <strong>Azure Data Factory</strong>, <strong>Synapse</strong>, and <strong>Databricks</strong></li><li>Optimize data lake and warehouse performance, including tuning queries, pipelines, and storage layers</li><li>Monitor pipeline health and proactively troubleshoot failures, bottlenecks, and data quality issues</li><li>Contribute to data modeling efforts to support analytics, reporting, and downstream applications</li><li>Collaborate with BI, product, supply chain, and application teams to align pipelines with business needs</li><li>Maintain strong documentation around workflows, standards, and operational procedures</li><li>Support governance initiatives related to <strong>data quality</strong>, lineage, cataloging, and access policies</li><li>Follow best practices for security, compliance, and cloud resource management</li></ul><p><br></p><p><br></p>
We are looking for a Senior Data Engineer to develop and optimize enterprise data systems that support analytics and digital solutions. In this role, you will design and implement robust data architectures, ensuring seamless data integration and transformation processes across the organization. Your expertise will drive the creation of reliable pipelines and scalable infrastructure, enabling advanced analytics and machine learning capabilities.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines using Databricks, Spark, and Delta Lake to support enterprise-level analytics.<br>• Develop and maintain efficient data models tailored for AI, analytics, and operational systems.<br>• Lead Master Data Management initiatives to establish unified and accurate data records across platforms.<br>• Create batch and near-real-time data processing workflows for structured and semi-structured datasets.<br>• Collaborate with AI and software development teams to ensure delivery of high-quality datasets for machine learning.<br>• Define and enforce data architecture standards, ensuring scalability, reliability, and governance.<br>• Troubleshoot and optimize data systems to maintain performance and reliability in complex environments.<br>• Partner with cloud and IT teams to integrate modern data platforms and ensure seamless functionality.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This Contract to permanent position offers an exciting opportunity to work at the intersection of data engineering, analytics, and business strategy. If you have a strong background in building and optimizing data pipelines and are passionate about leveraging technology to drive insights, we encourage you to apply.<br><br>Responsibilities:<br>• Design, develop, and optimize scalable data pipelines and workflows to support business analytics.<br>• Collaborate with cross-functional teams to gather and analyze data requirements.<br>• Implement ETL processes to extract, transform, and load data from diverse sources.<br>• Utilize tools such as Apache Spark and Hadoop to manage large-scale data processing.<br>• Integrate streaming data systems using Apache Kafka to enhance real-time analytics.<br>• Monitor and troubleshoot data flow and systems to ensure high performance and reliability.<br>• Develop and maintain documentation for data engineering processes and systems.<br>• Ensure data security and integrity across all platforms and processes.<br>• Work closely with stakeholders to translate business needs into technical solutions.<br>• Stay updated with industry trends and emerging technologies to improve data engineering practices.
We are looking for an experienced Data Engineer to join our team on a long-term contract basis. Based in Houston, Texas, this role offers an exciting opportunity to work with cutting-edge data technologies, design scalable solutions, and contribute to data-driven decision-making processes. If you are passionate about optimizing data systems and driving innovation, we encourage you to apply.<br><br>Responsibilities:<br>• Develop, maintain, and optimize scalable data pipelines using Apache Spark and Python.<br>• Implement ETL processes to ensure seamless extraction, transformation, and loading of data across systems.<br>• Collaborate with cross-functional teams to integrate Apache Hadoop and Apache Kafka into the data architecture.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Design and maintain data models, ensuring alignment with business requirements.<br>• Conduct thorough testing and validation of data processes to guarantee accuracy.<br>• Document data workflows and processes for future reference and team collaboration.<br>• Provide technical guidance and support to team members on data engineering best practices.<br>• Stay current on emerging technologies and trends in big data and analytics.<br>• Contribute to improving data governance and security protocols.
<p>We are looking for a talented Data Engineer to join our team in Fort Lauderdale, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
<p>We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. This role will support data-driven decision-making by ensuring reliable data flow, transformation, and accessibility across the organization.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain ETL/ELT data pipelines</li><li>Develop and optimize data models and data architectures</li><li>Integrate data from multiple sources (APIs, databases, third-party systems)</li><li>Ensure data quality, integrity, and reliability</li><li>Collaborate with data analysts, data scientists, and business stakeholders</li><li>Monitor and troubleshoot data pipeline performance issues</li><li>Implement best practices for data governance and security</li></ul><p><br></p>
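<p>The pipeline responsibilities above follow the standard extract-transform-load shape. A minimal, self-contained sketch, with a hypothetical feed and in-memory SQLite standing in for the warehouse:</p>

```python
import csv
import io
import sqlite3

# Extract: a hypothetical raw feed (stands in for an API pull or file drop).
raw = io.StringIO("order_id,amount\n1,19.99\n2,\n3,5.00\n")
rows = list(csv.DictReader(raw))

# Transform: drop records with missing amounts and cast types.
clean = [
    (int(r["order_id"]), float(r["amount"]))
    for r in rows
    if r["amount"].strip()
]

# Load: write to the relational target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", clean)
count, total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
```

<p>An ELT variant would load the raw rows first and push the cleanup into SQL inside the warehouse; the choice mostly turns on where compute is cheapest and whether raw history must be retained.</p>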
<p>I’m building a world-class team to power our next generation of data products. We’re looking for a Senior Data Engineer who knows AWS inside and out—someone who can <strong>design secure, scalable data pipelines</strong>, <strong>own ETL/ELT workflows</strong>, <strong>engineer cloud data infrastructure</strong>, and <strong>deliver dimensional and semantic models</strong> that our analysts, data scientists, and applications can trust.</p><p>You’ll work closely with product, security, platform engineering, and analytics to move our architecture toward a <strong>real-time, governed, cost-aware</strong>, and <strong>highly automated</strong> data ecosystem.</p><p><strong>What You’ll Do</strong></p><ul><li><strong>Design & build end-to-end pipelines</strong> on AWS (batch and streaming) using services like <strong>Glue, EMR, Lambda, Step Functions, Kinesis, MSK</strong>, and <strong>Fargate</strong>.</li><li><strong>Develop robust ETL/ELT</strong> (PySpark, Spark SQL, SQL, Python) for structured, semi-structured, and unstructured data at scale.</li><li><strong>Own data storage & processing layers</strong>: <strong>S3 (Lake/Lakehouse), Redshift (or Snowflake on AWS), DynamoDB</strong>, and <strong>Athena</strong> with strong partitioning, compaction, and performance tuning.</li><li><strong>Implement data models</strong> (3NF, dimensional/star, Data Vault, Lakehouse medallion) for analytics and operational workloads.</li><li><strong>Engineer secure infrastructure-as-code</strong> with <strong>Terraform</strong> (or <strong>CDK</strong>) across multi-account setups; implement CI/CD via <strong>GitHub Actions</strong> or <strong>AWS CodeBuild/CodePipeline</strong>.</li><li><strong>Harden security & governance</strong>: use <strong>IAM</strong>, <strong>Lake Formation</strong>, <strong>KMS</strong>, <strong>Secrets Manager</strong>, <strong>VPC/PrivateLink</strong>, <strong>Glue Data Catalog</strong>, and fine-grained access controls.
Partner with SecOps on compliance (e.g., <strong>SOC 2</strong>, <strong>FedRAMP</strong>, <strong>HIPAA</strong> depending on dataset).</li><li><strong>Observability & reliability</strong>: build monitoring with <strong>CloudWatch</strong>, <strong>OpenTelemetry</strong>, and data quality checks (e.g., <strong>Great Expectations</strong>, <strong>Deequ</strong>), implement SLOs and alerts.</li><li><strong>Champion best practices</strong>: code reviews, testing (unit/integration), documentation, runbooks, and blameless postmortems.</li><li><strong>Mentor</strong> mid-level engineers and collaborate on architectural decisions, standards, and technical roadmaps.</li></ul><p><br></p>
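<p>To illustrate the data quality checks mentioned above, here is a hand-rolled version of the "expectations" pattern in plain Python. It mimics the spirit of Great Expectations or Deequ but uses none of their actual APIs; the records and rules are hypothetical.</p>

```python
# Sample records with two deliberate defects: a duplicate id and a negative amount.
records = [
    {"id": 1, "event_ts": "2024-05-01T10:00:00", "amount": 12.5},
    {"id": 2, "event_ts": "2024-05-01T10:05:00", "amount": 7.0},
    {"id": 2, "event_ts": "2024-05-01T10:06:00", "amount": -3.0},
]

def expect_unique(rows, col):
    values = [r[col] for r in rows]
    return (f"unique:{col}", len(values) == len(set(values)))

def expect_non_negative(rows, col):
    return (f"non_negative:{col}", all(r[col] >= 0 for r in rows))

def expect_not_null(rows, col):
    return (f"not_null:{col}", all(r.get(col) is not None for r in rows))

results = [
    expect_unique(records, "id"),
    expect_non_negative(records, "amount"),
    expect_not_null(records, "event_ts"),
]
failures = [name for name, ok in results if not ok]
# In a pipeline, a non-empty failure list would fail the job or page on-call,
# which is how checks like these feed SLOs and alerting.
```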
<p>We are looking for a talented Data Engineer to join our team in Miami, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
We are looking for a talented Data Engineer to join our team in Grand Rapids, Michigan. In this role, you will focus on designing, building, and optimizing robust data solutions using Snowflake and other cloud-based technologies. You will work closely with business intelligence and analytics teams to deliver scalable, high-performance data pipelines that support organizational goals.<br><br>Responsibilities:<br>• Design and implement scalable data models, schemas, and tables within Snowflake, including staging, integration, and presentation layers.<br>• Develop and optimize data pipelines using Snowflake tools such as Snowpipe, Streams, Tasks, and stored procedures.<br>• Ensure data security and access through role-based controls and best practices for data sharing.<br>• Build and maintain ETL pipelines leveraging tools like dbt, Matillion, Fivetran, Informatica, or Azure-native solutions.<br>• Integrate data from diverse sources such as APIs, IoT devices, and NoSQL databases to create unified datasets.<br>• Enhance performance by utilizing clustering, partitioning, caching, and efficient warehouse sizing strategies.<br>• Collaborate with cloud technologies such as AWS, Azure, or Google Cloud to support Snowflake infrastructure and operations.<br>• Implement automated workflows and CI/CD processes for seamless deployment of data solutions.<br>• Maintain high standards for data accuracy, completeness, and reliability while supporting governance and documentation.<br>• Work closely with analytics, reporting, and business teams to troubleshoot issues and deliver scalable solutions.
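<p>The Snowpipe-plus-Streams-plus-Tasks pattern above ultimately applies an incremental merge: only rows changed since the last task run are upserted or deleted on the target. The sketch below shows that merge logic in plain Python with hypothetical change records; in Snowflake itself this would be a <code>MERGE</code> statement driven by a stream.</p>

```python
# Current state of the target table, keyed by primary key.
target = {
    101: {"sku": 101, "price": 9.99},
    102: {"sku": 102, "price": 4.50},
}

# Hypothetical change records captured since the last task run.
changes = [
    {"sku": 102, "price": 4.75, "op": "UPDATE"},
    {"sku": 103, "price": 2.00, "op": "INSERT"},
    {"sku": 101, "price": None, "op": "DELETE"},
]

def merge(table, change_rows):
    """Apply captured changes: deletes remove rows; inserts and updates upsert."""
    for row in change_rows:
        if row["op"] == "DELETE":
            table.pop(row["sku"], None)
        else:
            table[row["sku"]] = {"sku": row["sku"], "price": row["price"]}
    return table

merged = merge(dict(target), changes)
```

<p>Processing only the change set, rather than re-scanning the full table, is what keeps this pattern cheap as the target grows.</p>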
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
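<p>As a small, runnable illustration of the indexing work described under Performance Optimization (using SQLite for portability, with a hypothetical <code>orders</code> table standing in for production data), note how the query plan changes once the filter column is indexed:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(i, i % 50, float(i)) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM orders WHERE customer_id = 7"

# Without an index, the filter forces a full-table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Indexing the filter column lets the engine seek directly to matching rows.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
```

<p>The same reasoning carries over to the other engines named above, though each exposes its own plan syntax (e.g., <code>EXPLAIN</code> in PostgreSQL, graphical execution plans in SQL Server).</p>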
<p>We are looking for an experienced Data Engineer to join our team on a contract basis in Columbus, Ohio. In this role, you will take on a leadership position, driving the development and optimization of data pipelines that support enterprise-wide analytics and decision-making. You will also play a key role in mentoring team members, fostering collaboration, and ensuring the integrity and quality of data across business functions.</p><p><br></p><p>Responsibilities:</p><ul><li>Design, develop, and maintain efficient data pipelines to support enterprise analytics and reporting.</li><li>Collaborate with business analysts and data science teams to refine data requirements and ensure alignment with organizational goals.</li><li>Enhance and automate data integration and management processes to improve operational efficiency.</li><li>Lead efforts to ensure data quality by testing for accuracy, consistency, and conformity to business rules.</li><li>Provide training and guidance to team members and other stakeholders on data pipelining and preparation techniques.</li><li>Partner with data governance teams to promote vetted content into the curated data catalog for reuse.</li><li>Stay current on emerging technologies and assess their impact on existing systems and processes.</li><li>Offer leadership, coaching, and mentorship to team members, fostering attention to detail and supporting their growth.</li><li>Work closely with stakeholders to understand business needs and ensure solutions meet those requirements.</li><li>Perform additional duties as assigned to support organizational objectives.</li></ul>
<p>The Senior Data Engineer plays a key role in architecting, developing, and operating reliable, production-ready data solutions that enable analytics, automation, and operational processes across our client’s organization.</p><p><br></p><p>Operating within a modern, cloud-based data ecosystem, this role is responsible for bringing together data from internal platforms and external partners, transforming it into trusted, high-quality assets, and delivering it consistently to downstream users and systems. The work spans the full data lifecycle—ingestion, orchestration, transformation, and delivery—and blends advanced SQL development with Python-based pipeline and workflow automation.</p><p><br></p><p>This role sits at the intersection of data and systems engineering and works closely with Business Intelligence, Business Technology, and operational teams to ensure data solutions are scalable, dependable, and aligned with real business outcomes.</p><p><br></p>
<p><strong>Overview</strong></p><p>We are looking for a <strong>Data Engineer </strong>to design, build, and maintain data solutions that enable reporting, analytics, and informed decision‑making.</p><p><strong>Responsibilities</strong></p><ul><li>Design and maintain data pipelines and data models</li><li>Extract, transform, and load (ETL) data from multiple sources</li><li>Develop dashboards, reports, and analytics for business users</li><li>Ensure data accuracy, integrity, and governance</li><li>Collaborate with stakeholders to understand reporting needs</li></ul><p><br></p>
<p>We are looking for an experienced Data Engineer to join our team in Newtown Square, Pennsylvania. In this long-term contract position, you will play a pivotal role in designing and implementing robust data solutions to support organizational goals. This is an exciting opportunity to lead the development of modern data architectures and collaborate with diverse teams to drive impactful results.</p><p><br></p><p>Responsibilities:</p><ul><li>Lead the implementation of an enterprise Snowflake data lake, ensuring timely delivery and optimal performance.</li><li>Oversee the integration of multiple data sources, including Oracle Financials, PostgreSQL, and Salesforce, into a unified data platform.</li><li>Collaborate with finance teams to facilitate the transition to a 12-month accounting calendar and support accelerated financial close processes.</li><li>Develop and maintain multi-source analytics dashboards to enhance operational insights and decision-making.</li><li>Manage day-to-day operations of the Snowflake platform, focusing on performance tuning and cost optimization.</li><li>Ensure data quality and reliability, providing business users with a trustworthy platform.</li><li>Document architectural designs, data workflows, and operational procedures to support sustainable data management.</li><li>Coordinate with external vendors to meet project deadlines and ensure successful implementations.</li></ul>