We are looking for an experienced Senior Data Scientist to join our dynamic team in Boston, Massachusetts. In this role, you will leverage your expertise in statistical modeling, machine learning, and cloud-based analytics to drive impactful decisions and solutions. The ideal candidate will bring a strong technical background, a passion for working with regulated data, and a commitment to ethical AI practices.<br><br>Responsibilities:<br>• Develop and implement advanced statistical models and machine learning algorithms to solve complex business problems.<br>• Monitor and evaluate the performance of AI models, ensuring reliability, fairness, and compliance with ethical standards.<br>• Collaborate with engineering and product teams to translate data-driven insights into actionable strategies.<br>• Utilize cloud-based tools such as AWS SageMaker and Redshift to design and deploy scalable analytics solutions.<br>• Handle sensitive healthcare or clinical trial datasets while adhering to strict data privacy and security regulations.<br>• Conduct exploratory data analysis and create visualizations to communicate findings effectively.<br>• Build and optimize ETL pipelines for efficient data transformation and integration.<br>• Apply Bayesian statistics and time-series forecasting techniques to improve predictive accuracy.<br>• Maintain comprehensive documentation of data science workflows and processes.<br>• Stay updated on industry trends and advancements to continuously enhance methodologies and tools.
<p><strong>Data Scientist III (Senior)</strong></p><p><strong>Location:</strong> Hybrid – Columbus, OH</p><p><strong>Service Type:</strong> 13-Week Contract-to-Hire</p><p><strong>Pay:</strong> Available on W2 Basis</p><p><strong>Job Summary</strong></p><p>A leading financial services organization is seeking a skilled and data‑driven <strong>Senior Data Scientist</strong> to join its Enterprise Analytics team. This role supports the company’s growth strategy by delivering data‑driven insights and uncovering opportunities that enhance customer experience and value delivery.</p><p>The Senior Data Scientist will partner closely with business stakeholders using a consultative approach to analyze customer, product, channel, and digital data. This role focuses on translating complex analytical findings into actionable, intuitive insights that guide strategic decision‑making. The position requires strong technical expertise, clear communication skills, and a passion for optimization, data storytelling, and visualization.</p><p>This is a highly visible role with the opportunity to collaborate across teams such as Product, Marketing, Finance, Data, and senior leadership, while contributing to initiatives that shape enterprise‑level strategy.</p><p><strong>Team Overview</strong></p><p>This position sits within an Enterprise Data & Analytics organization composed of analytics and data science professionals experienced with tools such as <strong>R, Python, SAS, SQL, Tableau, and Adobe Analytics/Target</strong>. The team primarily supports Digital and Omnichannel initiatives but frequently partners across the organization, including Product, Marketing, Consumer Sales and Operations, Business and Commercial Banking, Private Client, Payments, and IT.</p><p>The team operates within an Agile framework and supports projects across the full lifecycle—from ideation through delivery—on large, high‑impact initiatives. 
The environment is fast‑paced and collaborative, offering opportunities to influence both individual projects and broader organizational direction.</p><p><strong>Key Responsibilities</strong></p><ul><li>Apply advanced analytics methods to extract value from complex business data</li><li>Design and execute large‑scale experiments and build data‑driven models to address business questions</li><li>Research and evaluate emerging techniques and tools in machine learning, deep learning, and artificial intelligence</li><li>Define data and modeling requirements to train and evolve predictive and deep learning models</li><li>Present data‑driven insights and recommendations to influence product and business teams</li><li>Partner with cross‑functional stakeholders to support data‑informed decision‑making</li><li>Perform additional duties as assigned</li></ul>
We are looking for an experienced Data Engineer to join our team in Chicago, Illinois. In this role, you will design and implement data solutions that drive business insights and support strategic decision-making. Your expertise in Microsoft Fabric and Azure Databricks will be key in optimizing data workflows and ensuring the reliability of our data systems.<br><br>Responsibilities:<br>• Develop, implement, and maintain scalable data pipelines to support business analytics and reporting needs.<br>• Utilize Microsoft Fabric and Azure Databricks to design efficient data architectures and workflows.<br>• Collaborate with cross-functional teams to understand data requirements and deliver tailored solutions.<br>• Ensure data integrity and security across all systems and processes.<br>• Optimize data storage and retrieval processes for improved performance and scalability.<br>• Monitor system performance and troubleshoot issues as needed to ensure seamless operations.<br>• Document processes and procedures to maintain a clear record of data engineering solutions.<br>• Stay updated with emerging technologies and industry best practices to enhance data engineering capabilities.
<p><strong>Data Scientist (Big Data) III – Contractor</strong></p><p><strong>Employment Type:</strong> 27-Week Contract, Potential for Extension or Conversion</p><p><strong>Location:</strong> Must currently reside in the Philadelphia region</p><p><strong>Pay:</strong> Available on W2</p><p><strong>Position Overview</strong></p><p>The Senior Data Scientist (Big Data) will support large‑scale data science initiatives by designing, developing, and deploying advanced analytical and machine learning solutions. This role collaborates closely with data engineers, analysts, software developers, and business stakeholders to deliver scalable, production‑ready data products that drive data‑informed decision making.</p><p>The successful candidate will apply statistical modeling, machine learning, and big data technologies to solve complex business problems, while also providing technical guidance and mentorship across project teams.</p><p><strong>Key Responsibilities</strong></p><ul><li>Lead complex, cross‑functional data science initiatives delivering solutions across multiple technologies and platforms.</li><li>Design, develop, and deploy data mining, statistical, machine learning, and graph‑based algorithms for large‑scale data sets.</li><li>Partner with data engineering teams to ensure proper implementation, performance, and operational use of analytical solutions.</li><li>Review and assess data science programs and models at an enterprise level to evaluate suitability, performance, and scalability.</li><li>Build and maintain scalable big‑data analytics solutions supporting accurate targeting, forecasting, and advanced insights.</li><li>Develop and support end‑to‑end machine learning pipelines, including data preparation, training, testing, validation, and deployment.</li><li>Establish performance metrics, monitoring, and evaluation procedures for models in production.</li><li>Translate complex analytical findings 
into clear, actionable insights for technical and non‑technical stakeholders.</li><li>Provide mentorship and technical guidance to junior team members.</li><li>Contribute to data strategy, methodology selection, and continuous improvement of analytics capabilities.</li><li>Support testing, validation, and user acceptance activities to ensure alignment with business requirements.</li><li>Perform additional related duties as needed to support analytics and data initiatives.</li></ul>
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
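A modular, reusable connector of the kind this role describes might look like the following minimal sketch. The class, table, and field names are hypothetical, and sqlite3 stands in for whatever internal database a real deployment would use:

```python
import json
import sqlite3

class JsonToSqlLoader:
    """Minimal reusable ETL component: pull records from an external
    JSON source, normalize them, and load them into an internal table.
    Illustrative sketch only; all names here are hypothetical."""

    def __init__(self, conn, table):
        self.conn = conn
        self.table = table
        self.conn.execute(
            f"CREATE TABLE IF NOT EXISTS {table} (symbol TEXT, price REAL)"
        )

    def extract(self, raw_json):
        # In production this would call an external API or read a feed.
        return json.loads(raw_json)

    def transform(self, records):
        # Normalize keys and coerce types so downstream consumers
        # always see a consistent schema.
        return [(r["symbol"].upper(), float(r["price"])) for r in records]

    def load(self, rows):
        self.conn.executemany(f"INSERT INTO {self.table} VALUES (?, ?)", rows)
        self.conn.commit()

    def run(self, raw_json):
        self.load(self.transform(self.extract(raw_json)))


conn = sqlite3.connect(":memory:")
loader = JsonToSqlLoader(conn, "prices")
loader.run('[{"symbol": "aapl", "price": "187.5"}]')
print(conn.execute("SELECT * FROM prices").fetchall())  # [('AAPL', 187.5)]
```

Keeping extract, transform, and load as separate methods is what makes the component reusable: each source gets its own `extract`/`transform` while the load path stays shared.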
<p>Robert Half is seeking an experienced Data Architect to design and lead scalable, secure, and high-performing enterprise data solutions. This role will focus on building next-generation cloud data platforms, driving adoption of modern analytics technologies, and ensuring alignment with governance and security standards.</p><p><br></p><p>You’ll serve as a hands-on technical leader, partnering closely with engineering, analytics, and business teams to architect data platforms that enable advanced analytics and AI/ML initiatives. This position blends deep technical expertise with strategic thinking to help unlock the value of data across the organization.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and implement end-to-end data architecture for big data and advanced analytics platforms.</li><li>Architect and build Delta Lake–based lakehouse environments from the ground up, including DLT pipelines, PySpark jobs, workflows, Unity Catalog, and Medallion architecture.</li><li>Develop scalable data models that meet performance, security, and governance requirements.</li><li>Configure and optimize clusters, notebooks, and workflows to support ETL/ELT pipelines.</li><li>Integrate cloud data platforms with supporting services such as data storage, orchestration, secrets management, and analytics tools.</li><li>Establish and enforce best practices for data governance, security, and cost optimization.</li><li>Collaborate with data engineers, analysts, and stakeholders to translate business requirements into technical solutions.</li><li>Provide technical leadership and mentorship to team members.</li><li>Monitor, troubleshoot, and optimize data pipelines to ensure reliability and efficiency.</li><li>Ensure compliance with organizational and regulatory standards related to data privacy and security.</li><li>Create and maintain documentation for architecture, processes, and governance standards.</li></ul><p><br></p>
<p>We are looking for an experienced Data Architect to join our team on a long-term contract basis in Cleveland, Ohio. This role involves designing scalable enterprise data platforms, ensuring data quality, and implementing robust data governance frameworks. You will play a pivotal role in leveraging Azure services and AI-driven analytics to optimize data architecture and enhance operational insights.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement enterprise-wide data architectures and canonical data models.</p><p>• Establish data ownership protocols, governance standards, and quality benchmarks.</p><p>• Analyze and stabilize data pipelines across distributed systems and platforms.</p><p>• Perform detailed data analysis and reconciliation to identify and resolve integrity issues.</p><p>• Design and implement monitoring tools to validate and improve data quality.</p><p>• Enhance data observability and lineage tracking to streamline governance processes.</p><p>• Utilize AI-driven analytics and automation to detect anomalies and accelerate decision-making.</p><p>• Collaborate with engineering teams to align data architecture with integration services and platform requirements.</p><p>• Optimize event-driven and distributed data systems for scalability and reliability.</p><p>• Conduct hands-on work with Azure services, such as Azure Data Factory and Synapse, to implement solutions.</p>
<p>Our client in the Lower Fairfield, CT area has an opening for a Business Analyst Consultant. In this long-term contract role, you will play a pivotal part in driving data-driven insights and supporting strategic decisions across a dynamic organization. The ideal candidate will bring a wealth of expertise in data analysis and business intelligence, paired with a deep understanding of industry-specific practices.</p><p><br></p><p>Responsibilities:</p><p>• Produce accurate and detail-oriented reports tailored to a diverse range of stakeholders.</p><p>• Leverage best practices to ensure data analysis aligns with industry standards and organizational objectives.</p><p>• Collaborate with teams to deliver actionable insights based on cleaned and organized data.</p><p>• Utilize expertise in multiple data systems to streamline reporting processes and enhance efficiency.</p><p>• Ensure high levels of accuracy and attention to detail in all data-related outputs.</p><p>• Support strategic decision-making by analyzing transaction timelines and other critical metrics.</p><p>• Apply knowledge of private equity operations to optimize data reporting and visualization.</p><p>• Develop and maintain data visualization tools to communicate findings effectively.</p><p>• Manage the integration and analysis of data across multiple operating systems.</p><p>• Partner with senior executives to align analytics initiatives with organizational goals.</p><p>If you are interested in this Business Analyst role, please email your resume in Word format to joseph.colagiacomo@roberthalf with the subject line: "Business Analyst"</p>
We are looking for a skilled Business Analytics/Data Analyst to support data-driven decision-making and enhance business intelligence processes. In this long-term contract role, you will play a key part in analyzing complex datasets and providing actionable insights to drive organizational success. The position is based in Arlington, Texas.<br><br>Responsibilities:<br>• Conduct detailed data analysis to identify patterns, trends, and actionable insights.<br>• Develop and maintain business intelligence reports using tools such as BusinessObjects Technologies.<br>• Perform audits of commercial contracts to ensure compliance and accuracy.<br>• Abstract and analyze key terms from contracts to support decision-making.<br>• Collaborate with cross-functional teams to gather data requirements and optimize analytics processes.<br>• Utilize advanced data analytics tools to process and interpret large datasets.<br>• Provide recommendations to improve business operations based on data findings.<br>• Ensure the integrity and security of data used for analysis and reporting.<br>• Support the organization in leveraging data for strategic planning and forecasting.
<p>We are looking for a dynamic Director of Data Science to join our team on a contract to hire basis in Saint Charles, Missouri. This is a unique opportunity to lead data governance initiatives, transform data into a strategic asset, and drive organizational impact within the education sector. The ideal candidate will thrive in a collaborative environment, working across departments to establish robust data frameworks and ensure compliance with industry standards.</p><p><br></p><p><strong>Responsibilities:</strong></p><ul><li>Develop and implement a comprehensive data governance strategy aligned with organizational goals, AI readiness, and regulatory requirements.</li><li>Establish data governance frameworks, policies, and standards to ensure data integrity, access, and compliance.</li><li>Lead cross-functional teams to drive data stewardship, data quality initiatives, and data lifecycle management.</li><li>Select and implement data governance tools and platforms (e.g., data cataloging, stewardship, lineage).</li><li>Define and operationalize data stewardship models across the university.</li><li>Oversee data quality initiatives, including monitoring, remediation, and reporting.</li><li>Develop and enforce data quality KPIs and dashboards to track progress and impact.</li><li>Facilitate training and awareness programs on data governance best practices across the organization.</li><li>Collaborate with IT, legal, and compliance teams to ensure data governance aligns with technological and regulatory landscapes.</li><li>Measure and report on data governance effectiveness and drive continuous improvement initiatives.</li><li>Serve as a trusted advisor to executive leadership on data governance and management issues.</li></ul><p><br></p>
We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.
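As an illustration of the Medallion pattern this role references, here is a deliberately simplified, plain-Python sketch of Bronze/Silver/Gold layering. In practice these layers would be Delta tables processed on Spark/Databricks; the records, cleansing rules, and names below are all illustrative:

```python
# Bronze: raw records exactly as ingested (duplicates and bad rows included).
bronze = [
    {"order_id": "1", "amount": "10.00", "region": "tx"},
    {"order_id": "1", "amount": "10.00", "region": "tx"},   # duplicate
    {"order_id": "2", "amount": "bad",   "region": "TX"},   # unparseable
    {"order_id": "3", "amount": "5.50",  "region": "tx"},
]

def to_silver(rows):
    """Silver: cleansed, typed, de-duplicated records."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine these rows
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({"order_id": r["order_id"], "amount": amount,
                    "region": r["region"].upper()})
    return out

def to_gold(rows):
    """Gold: business-level aggregate ready for analytics consumers."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'TX': 15.5}
```

The point of the layering is that each hop adds guarantees: Bronze preserves everything for replay, Silver enforces schema and uniqueness, and Gold is the only layer business users query.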
We are looking for a Data Visualization Specialist to join our team in Cincinnati, Ohio. In this role, you will leverage your expertise in business systems analysis to transform complex data into actionable insights through effective visualization techniques. The ideal candidate will have a passion for understanding business processes and applying technical skills to create impactful data solutions.<br><br>Responsibilities:<br>• Collaborate with stakeholders to analyze business processes and identify data visualization needs.<br>• Design and develop interactive dashboards and reports using tools such as Power BI, Tableau, and Qlik.<br>• Apply best practices in data visualization to ensure clear and accurate representation of data.<br>• Utilize data modeling techniques to enhance reporting capabilities and support business decision-making.<br>• Perform data analysis to identify opportunities for enrichment and aggregation, utilizing tools like Excel.<br>• Work with on-premises or cloud-based data platforms to ensure seamless integration and accessibility of data.<br>• Provide technical expertise and guidance on visualization tools and techniques to team members and clients.<br>• Stay updated on industry trends and advancements in data visualization technologies.
<p><strong>Data Reporting Analyst</strong></p><p>We are seeking a Data Reporting Analyst to support business reporting and data visualization initiatives. This role will focus on building and maintaining dashboards that provide insight into operational performance, SLAs, and KPIs. The position is an ongoing contract with potential for conversion and follows a hybrid work schedule.</p><p><strong>Responsibilities</strong></p><ul><li>Design, develop, and maintain Power BI dashboards to support business and IT reporting needs</li><li>Analyze and interpret data to track SLAs, KPIs, and operational performance</li><li>Partner closely with the IT Director to review trends, metrics, and reporting requirements</li><li>Ensure data accuracy, consistency, and usability across reports and dashboards</li><li>Support ongoing reporting enhancements and ad-hoc data analysis requests</li></ul><p><br></p>
We are looking for a skilled Database Engineer to join our team in Westlake, Ohio. In this role, you will design, implement, and optimize database solutions to support business operations and data-driven decision-making. You will collaborate with cross-functional teams and leverage advanced technologies to ensure robust database performance and scalability.<br><br>Responsibilities:<br>• Design and implement database solutions that align with business requirements and technical specifications.<br>• Optimize and tune database performance to ensure efficiency and scalability.<br>• Collaborate with cross-functional teams to analyze data needs and develop appropriate solutions.<br>• Write, test, and troubleshoot complex SQL queries, including joins, aggregations, and stored procedures.<br>• Monitor and maintain database systems, performing regular updates and performance checks.<br>• Utilize source control and CI/CD tools, such as Azure DevOps, to manage database development and deployment.<br>• Stay updated on emerging technologies and integrate new tools into existing systems as needed.<br>• Provide technical guidance and support to team members regarding database-related issues.<br>• Ensure data integrity and security through regular audits and implementation of best practices.
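The join-and-aggregation work this role calls for can be sketched with Python's built-in sqlite3 module; the schema, tables, and data below are hypothetical stand-ins for a production database:

```python
import sqlite3

# Illustrative schema and data; table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1, 99.0), (11, 1, 1.0), (12, 2, 50.0);
""")

# A join plus aggregation of the kind described above:
# total order value per customer, highest first.
rows = conn.execute("""
    SELECT c.name, COUNT(*) AS n_orders, SUM(o.amount) AS total
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('Acme', 2, 100.0), ('Globex', 1, 50.0)]
```

In SQL Server the same logic would typically live in a stored procedure so it can be tuned, versioned, and reused by applications.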
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This role is based in West Des Moines, Iowa, and offers the opportunity to work on advanced data solutions that support organizational decision-making and efficiency. The ideal candidate will have expertise in relational databases, data cleansing, and modern data warehousing technologies.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support business operations and analytics.<br>• Perform data extraction, transformation, and cleansing to ensure accuracy and reliability.<br>• Collaborate with teams to design and implement data warehouses and data lakes.<br>• Utilize Microsoft SQL Server to build and manage relational database structures.<br>• Analyze data sources and provide recommendations for improving data quality and accessibility.<br>• Create and maintain documentation for data processes, pipelines, and system architecture.<br>• Implement best practices for data storage and retrieval to maximize efficiency.<br>• Troubleshoot and resolve issues related to data processing and integration.<br>• Stay updated on industry trends and emerging technologies to enhance data engineering solutions.
We are looking for a skilled Data Analytics Engineer with deep expertise in Power BI to join our team in Ankeny, Iowa. In this role, you will design, optimize, and manage semantic data models while ensuring the seamless performance of business intelligence tools. Your contributions will help drive data-driven decision-making across the organization.<br><br>Responsibilities:<br>• Design and implement semantic data models, including dimensional modeling and star schemas, to support business intelligence needs.<br>• Develop and optimize Power BI reports and dashboards, ensuring high performance and efficient query execution.<br>• Utilize Power Query (M) to transform and manipulate data for reporting purposes.<br>• Configure and enforce row-level security within Power BI to safeguard sensitive data.<br>• Conduct performance tuning for Power BI, including query plan optimization and refresh strategies.<br>• Collaborate with stakeholders to understand analytical requirements and translate them into actionable insights.<br>• Leverage tools such as Tabular Editor and deployment pipelines (Azure DevOps, GitHub) to streamline BI asset management.<br>• Work with cloud-based data platforms, including Databricks, Snowflake, or BigQuery, to support lakehouse architectures.<br>• Maintain adherence to enterprise BI governance practices and ensure scalable solutions for large datasets.<br>• Implement CI/CD patterns to manage semantic models and facilitate environment promotions.
We are looking for an experienced Healthcare SQL/Python Data Analyst to join our team in Sarasota, Florida. In this role, you will play a critical part in analyzing and integrating healthcare data to support organizational goals. If you have a strong technical background and a passion for leveraging data to improve healthcare solutions, we encourage you to apply.<br><br>Responsibilities:<br>• Perform in-depth data analysis to identify trends, insights, and opportunities for improvement within healthcare datasets.<br>• Develop and maintain SQL queries and scripts to support data extraction, manipulation, and reporting needs.<br>• Utilize Python for advanced data processing, automation, and analytical tasks.<br>• Integrate and manage data from multiple sources, ensuring accuracy and consistency.<br>• Collaborate with stakeholders to gather requirements and translate them into actionable data solutions.<br>• Design and generate reports using SSRS to present findings to internal teams and leadership.<br>• Work with HL7 standards to facilitate seamless data exchange and interoperability within healthcare systems.<br>• Troubleshoot and resolve issues related to data quality, integration, and system performance.<br>• Contribute to the development of data-driven strategies to enhance operational efficiency and patient care outcomes.
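HL7 v2 messages are pipe-delimited, so a first-cut segment reader in Python is short. The sketch below is deliberately minimal (a hypothetical PID segment, and no handling of component `^`, repetition `~`, or escape separators, which real interoperability work, usually via a dedicated HL7 library, must cover):

```python
def parse_hl7_segment(segment, field_sep="|"):
    """Split one HL7 v2 segment into its segment ID and fields.
    Simplified sketch: production parsing should use an HL7 library
    and handle component, repetition, and escape separators too."""
    fields = segment.split(field_sep)
    return fields[0], fields[1:]

# A hypothetical PID (patient identification) segment:
msg = "PID|1||12345^^^HOSP||DOE^JANE||19800101|F"
seg_id, fields = parse_hl7_segment(msg)
print(seg_id)     # PID
print(fields[4])  # DOE^JANE  (PID-5, patient name, components still joined by ^)
```

Note that field positions are 1-based in the HL7 spec, so `fields[4]` here corresponds to PID-5.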
<p>Position Overview</p><p>We are seeking a delivery‑focused Data Automation Engineer to design and implement innovative automation solutions across a Microsoft Azure‑based data analytics platform. This role partners closely with engineering teams and stakeholders to translate business requirements into scalable data engineering and AI‑enabled solutions.</p><p>The ideal candidate is hands‑on with Azure Data Factory, Synapse Pipelines, Apache Spark, Python, and SQL, and brings experience building reliable ETL pipelines across SQL and NoSQL environments. This role emphasizes performance optimization, automation, and proactive data quality within Agile DevOps delivery models.</p><p><br></p><p>Key Responsibilities</p><p>Data Engineering & Automation</p><ul><li>Develop high‑performance data pipelines using Azure Data Factory, Synapse Pipelines, Spark Notebooks, Python, and SQL.</li><li>Design ETL workflows supporting advanced analytics, reporting, and AI/ML use cases.</li><li>Implement data migration, integrity, quality, metadata, and security controls across pipelines.</li><li>Monitor, troubleshoot, and optimize pipelines for availability, scalability, and performance.</li></ul><p>Performance Testing & Optimization</p><ul><li>Execute ETL performance testing and validate load performance against benchmarks.</li><li>Analyze pipeline runtime, throughput, latency, and resource utilization.</li><li>Support tuning activities (e.g., query optimization, partitioning, indexing).</li><li>Validate data completeness and consistency after high‑volume processing.</li></ul><p>Platform Collaboration & DevOps Support</p><ul><li>Collaborate with DevOps and infrastructure teams to optimize compute, memory, and scaling.</li><li>Maintain versioning and configuration control across environments.</li><li>Support production, testing, development, and integration environments.</li><li>Actively participate in Agile delivery processes including Program Increment planning.</li></ul>
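A post-load completeness and consistency check of the kind described under performance testing might be sketched as follows; the function name, required fields, and sample rows are assumptions for illustration:

```python
def validate_load(source_rows, target_rows, required_fields):
    """Post-load checks after a high-volume ETL batch: row-count
    reconciliation plus required-field completeness.
    Illustrative sketch; real pipelines add type and referential checks."""
    issues = []
    if len(target_rows) != len(source_rows):
        issues.append(
            f"row count mismatch: {len(source_rows)} source vs "
            f"{len(target_rows)} target"
        )
    for i, row in enumerate(target_rows):
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues.append(f"row {i}: missing required field '{field}'")
    return issues

source = [{"id": 1, "ts": "2024-01-01"}, {"id": 2, "ts": "2024-01-02"}]
target = [{"id": 1, "ts": "2024-01-01"}, {"id": 2, "ts": ""}]
print(validate_load(source, target, ["id", "ts"]))
# ["row 1: missing required field 'ts'"]
```

Returning a list of issues (rather than raising on the first one) lets a pipeline log every problem from a batch and decide separately whether to fail, quarantine, or alert.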
<p><strong>Senior Data Engineer</strong></p><p><strong>Location:</strong> Philadelphia, PA (Hybrid/Onsite as required)</p><p><strong>Employment Type: </strong>39 Week Contract, Potential for Extension</p><p><strong>Position Overview</strong></p><p>We are seeking an experienced <strong>Data Engineer</strong> to support the development and ongoing operation of a large-scale, cloud-based IoT platform. This role focuses on building and supporting scalable, secure, and high‑performance infrastructure, tooling, and frameworks that enable engineering teams to efficiently develop, test, deploy, and operate modern microservices.</p><p>The ideal candidate brings strong cloud engineering experience, a passion for quality and security, and the ability to collaborate in a fast‑paced Agile environment.</p><p><strong>Key Responsibilities</strong></p><ul><li>Develop, operate, and support DevOps and platform engineering tools that enable cloud-based IoT services</li><li>Build and promote horizontal tools, frameworks, and best practices supporting microservices, CI/CD, security, monitoring, and performance</li><li>Collaborate with engineering teams to define development standards, workflows, and methodologies</li><li>Design and implement shared libraries and frameworks to support scalable and highly available systems</li><li>Support production platform operations, troubleshooting, and continuous improvement with focus on quality, performance, and security</li><li>Translate system architecture and product requirements into well-designed, tested software solutions</li><li>Work in an Agile environment delivering incremental, high-quality software</li><li>Provide technical guidance and promote modern engineering practices across teams</li></ul>
<p>We are seeking a Data Architect to lead the design and evolution of data architecture. This individual will play a critical role in shaping how data is collected, stored, integrated, and consumed across the organization, supporting analytics and reporting. The ideal candidate brings a balance of hands‑on technical depth, architectural leadership, and strong collaboration with business and technology stakeholders.</p><p><br></p><p>Responsibilities</p><ul><li>Lead the design and implementation of scalable, secure, and high‑performing enterprise data architectures.</li><li>Define data architecture standards, reference architectures, and best practices across platforms.</li><li>Architect data solutions supporting analytics, BI, data science, and operational reporting.</li><li>Design and oversee data models for data warehouses, data lakes, and lakehouse architectures.</li><li>Partner with application, infrastructure, security, and analytics teams to ensure seamless data integration.</li><li>Evaluate and recommend data technologies, tools, and platforms aligned to business strategy.</li><li>Establish and enforce governance, data quality, metadata management, and lineage practices.</li><li>Provide technical leadership and mentorship to data engineers and analytics teams.</li><li>Translate business requirements into actionable data solutions for senior stakeholders.</li></ul>
We are seeking a hands-on Senior Enterprise Architect in Artificial Intelligence (AI) to join our global Enterprise Architecture team. This role blends deep technical expertise with architectural design and practical implementation to drive AI-powered transformation initiatives.<br><br>As part of a forward-thinking global technology team, you’ll collaborate across business, data, and product functions to design and implement AI/ML solutions that enable digital products and services.<br><br>Key Responsibilities:<br><br>• Design and architect enterprise-scale AI/ML solutions across areas such as Machine Learning, Generative AI, Deep Learning, Virtual Assistants, and Cognitive Services (Vision/Image, Text/Language processing).<br>• Develop and communicate AI roadmaps, future-state architectures, and design artifacts.<br>• Rapidly prototype and build proof-of-concepts (PoCs) and MVPs for AI models and algorithms.<br>• Evaluate and recommend AI/ML tools, platforms, and frameworks; conduct ROI analysis.<br>• Experiment with and fine-tune LLMs, train custom models, and assess performance metrics.<br>• Perform data exploration, cleansing, and feature engineering to prepare datasets for model training.<br>• Guide and mentor engineering and data science teams through AI/ML solution design, deployment, and integration into enterprise workflows.<br>• Continuously scan industry innovations and apply emerging AI/ML technologies to business problems.<br><br>What We’re Looking For:<br><br>• Strong technical and business acumen in creating technology-driven solutions.<br>• Passion for experimenting with and adopting emerging AI/ML technologies.<br>• Excellent communication and influencing skills; ability to present complex technical concepts to both technical and non-technical audiences.<br>• Proven ability to balance timeliness, cost, and quality in solution design.<br>• Experience leading digital transformation, target operating models, and performance improvement initiatives.<br><br>Qualifications:<br><br>• Bachelor’s degree in STEM or related field (MBA a plus).<br>• 5+ years in AI/ML solution architecture, prototyping, and experimentation.<br>• 5+ years working with AWS and/or Azure data, analytics, and AI services.<br>• 3+ years of experience with data science tools and frameworks.<br>• Recent, hands-on experience with Generative AI, LLMs, and Agentic AI platforms.<br>• Knowledge of cloud-native services (data storage, compute, networking, security).<br>• Strong understanding of statistical methods, data preprocessing, and feature engineering.
Additional Skills:<br><br>Deep hands-on expertise with dbt (Cloud or Core), including model development, testing, macros, packages, documentation, scheduling, and performance optimization.<br>Strong command of dbt project structure, materializations (including incremental models and snapshots), and integration with BI-owned metric certification and semantic layers.<br>Ability to evaluate when to leverage community dbt packages versus building custom solutions.<br>Expert-level SQL for complex analytical transformations and performance optimization.<br>Strong data modeling skills across dimensional (Kimball), Data Vault, and domain-oriented patterns, including temporal modeling, SCDs, and surrogate keys.<br>Proven judgment in balancing normalization vs. denormalization for performance, flexibility, and downstream analytics use cases.<br>Experience designing and implementing automated data quality testing and validation frameworks.<br>Familiarity with data quality tooling (e.g., Great Expectations) and core data quality dimensions across analytics workflows.<br>Familiarity with modern analytics stacks and how analytics engineering integrates with cloud data platforms, ingestion tools, dbt, and BI systems.<br>Working knowledge of DataOps practices such as version control, CI/CD, and automated testing.<br>Knowledge of K–12 education data domains and metrics, including enrollment, attendance, assessments, staffing, and multi-state reporting requirements.<br>Familiarity with education data privacy (FERPA), academic calendars, and operational rhythms.<br>Proven ability to lead technical teams, facilitate requirements and design discussions, and manage competing stakeholder priorities.<br>Strong communication and change management skills, translating technical capabilities into clear business value. 
<br><br>Required experience:<br><br>Bachelor’s degree in Computer Science, Information Systems, Data Science, Statistics, Mathematics, or a related field, or equivalent practical experience.<br>7+ years of experience in analytics engineering, data engineering, data analytics, or closely related technical roles.<br>3+ years of experience in technical leadership or people management, leading analytics, data, or BI teams.<br>Demonstrated hands-on experience with dbt (2+ years) building and maintaining production data models and transformations.<br>Strong data modeling expertise, with a proven track record designing dimensional models, analytics data marts, or business-facing data products.<br>Expert-level SQL skills, including complex analytical queries and performance optimization.<br>Experience partnering with non-technical stakeholders to gather requirements and translate them into effective technical solutions.<br><br>Preferred Education and Experience:<br><br>Master’s degree in Data Science, Statistics, Computer Science, or a related analytical field.<br>dbt Analytics Engineering certification or equivalent demonstrated expertise.<br>Hands-on experience with Snowflake or comparable cloud data warehouse platforms.<br>Experience working with K–12 education data, student information systems, or education analytics.<br>Experience building data solutions for multi-state or geographically distributed organizations.<br>Exposure to data governance practices, including business glossaries and data quality frameworks.<br>Familiarity with modern data stack tools (e.g., ingestion, orchestration, BI, and data quality platforms).<br>Experience leading analytics teams using Agile or iterative delivery methodologies.
We are looking for an experienced Data Engineering Manager to lead the strategic development and management of our enterprise data warehouse in Columbus, Ohio. This position combines technical expertise with leadership responsibilities to ensure data assets are efficiently structured, integrated, and utilized for operational processes, analytics, compliance, and external partnerships. The ideal candidate will drive innovation while maintaining robust data architecture standards to support the organization's long-term goals.<br><br>Responsibilities:<br>• Oversee the design, implementation, and optimization of the enterprise data warehouse and associated reporting systems.<br>• Ensure seamless data integration between source systems, analytics platforms, and reporting tools to maintain accuracy and reliability.<br>• Collaborate with various teams to align data structures and solutions with organizational objectives.<br>• Provide strategic direction for data architecture and recommend scalable solutions aligned with industry best practices.<br>• Develop and enforce standards for enterprise reporting, key performance indicators, and consistent data definitions.<br>• Promote uniformity in business rules and metric calculations across departments to ensure credible and authoritative data outputs.<br>• Review and validate data workflows, transformations, and reports to ensure completeness and accuracy.<br>• Identify and implement system improvements to enhance the functionality and efficiency of data platforms.<br>• Address and resolve issues related to data integrity or reporting disruptions, ensuring minimal downtime.<br>• Mentor team members and provide technical guidance to build a highly skilled and capable team.
<p><strong>Data Engineer (Python / AWS)</strong></p><p><strong>Location:</strong> Remote (Northeast / Greater Boston area preferred)</p><p><strong>Type:</strong> Full-Time</p><p><strong>Level:</strong> Mid-to-Senior Individual Contributor</p><p><strong>About the Role</strong></p><p>We are looking for a strong individual contributor who excels in the Python data ecosystem and enjoys building reliable, scalable data pipelines. This role sits within a data engineering group responsible for integrating large volumes of data from external partners and transforming it into usable datasets for internal teams. You’ll work with modern cloud tools while also helping our team gradually transition away from a legacy platform.</p><p>This position is ideal for someone who wants to stay hands-on, focus on technical execution, and remain in an IC role for the next several years. We’re not looking for someone who is aiming to move immediately into architecture or leadership.</p><p>This team is fully distributed, and although candidates in the Boston area can go into the office, the rest of the group is remote. 
Anyone local may occasionally sit with other teams when on site.</p><p><br></p><p><strong>What You’ll Do</strong></p><ul><li>Build and maintain ETL pipelines that ingest, clean, and aggregate data received from external vendors and large enterprise partners.</li><li>Develop Python‑based data processing workflows deployed on AWS cloud services.</li><li>Work with tools such as AWS Glue, Airflow, dbt, and PySpark to support data transformations and pipeline orchestration.</li><li>Help modernize existing workflows and assist in the gradual migration away from a legacy data system.</li><li>Collaborate with internal stakeholders to understand data needs, define requirements, and ensure reliable integration of partner data feeds.</li><li>Troubleshoot pipeline issues, optimize performance, and improve overall system stability.</li><li>Contribute to best practices around code quality, testing, documentation, and data governance.</li></ul><p><br></p>
<p>We are seeking a highly skilled Full Stack Data Engineer who thrives in building modern, scalable data platforms from the ground up. This is an opportunity to work on a cloud-native data stack, influence architecture decisions, and deliver solutions that directly power business insights and operations.</p><p>If you enjoy owning the full lifecycle—from data ingestion to application layer—this role will be a strong fit.</p><p><br></p><p><strong>What You’ll Do</strong></p><p>You will operate as a hands-on engineer across the full data stack:</p><ul><li>Design, build, and maintain scalable ELT pipelines and workflows</li><li>Develop and optimize data models and warehouse structures in Snowflake</li><li>Build full stack data applications and backend services</li><li>Write clean, efficient Python and SQL code</li><li>Develop reusable data frameworks and components</li><li>Implement automated testing for data quality and reliability</li><li>Build and maintain CI/CD pipelines (GitHub-based)</li><li>Create reporting and visualization solutions (Power BI or similar)</li><li>Monitor production systems and troubleshoot data issues proactively</li></ul><p><strong>Tech Stack</strong></p><ul><li>Data Platform: Snowflake</li><li>Languages: Python, SQL</li><li>Cloud: AWS / Azure / GCP (environment dependent)</li><li>DevOps: GitHub, CI/CD pipelines</li><li>Visualization: Power BI (or similar BI tools)</li></ul>