<p><strong>Position Overview</strong></p><p>We are seeking a talented <strong>Data Engineer</strong> with strong experience in <strong>Python, AWS, and Databricks</strong> to design and build scalable data pipelines and modern data platforms. The ideal candidate will help develop and maintain data infrastructure that supports analytics, machine learning, and business intelligence initiatives. This role requires hands-on experience working with large datasets, cloud-native architectures, and distributed data processing frameworks.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, build, and maintain <strong>scalable data pipelines and ETL/ELT workflows</strong> using Python and cloud technologies.</li><li>Develop and optimize data solutions using <strong>AWS services and Databricks</strong>.</li><li>Build and manage <strong>data lakes and data warehouses</strong> for structured and unstructured data.</li><li>Implement <strong>data transformation and processing pipelines</strong> using Apache Spark within Databricks.</li><li>Integrate data from multiple sources including APIs, databases, and streaming systems.</li><li>Ensure <strong>data quality, governance, security, and compliance</strong> across the data platform.</li><li>Monitor pipeline performance and troubleshoot <strong>data pipeline failures or latency issues</strong>.</li><li>Collaborate with <strong>data analysts, data scientists, and business stakeholders</strong> to deliver reliable datasets.</li><li>Optimize storage and compute costs within the AWS ecosystem.</li></ul><p><br></p>
We are looking for a highly skilled Data Scientist to fill a long-term contract position within the healthcare industry. This role focuses on supporting the enterprise-wide launch of Power BI by creating and delivering engaging, high-quality learning materials. The ideal candidate will work remotely, collaborating closely with leadership and subject matter experts to empower analytics and non-analytics professionals to efficiently use Power BI in their daily tasks.<br><br>Responsibilities:<br>• Develop scalable learning experiences tailored to diverse user personas and varying levels of technical expertise.<br>• Collaborate with the data literacy program team and Power BI specialists to ensure instructional content aligns with program objectives.<br>• Translate complex concepts related to Power BI and business intelligence into accessible and engaging educational materials.<br>• Design and deliver training programs using instructional design best practices and tools such as Camtasia, Adobe Creative Suite, or Articulate.<br>• Conduct user interviews to understand learning challenges and tailor content to meet specific needs.<br>• Enhance or create new data literacy resources, such as courses, modules, and curricula, to address emerging needs and best practices.<br>• Evaluate and adapt existing educational materials to make them sustainable and applicable across the organization.<br>• Participate in marketing efforts for the Data Literacy Program, including speaking engagements, blog posts, and other creative channels.<br>• Identify opportunities for new program initiatives that support analytics tools and data literacy.<br>• Serve as a subject matter expert in data literacy on national platforms through networking and conference participation.
<p><strong>Responsibilities:</strong></p><ul><li>Collect, process, and analyze large structured and unstructured datasets to identify meaningful trends, patterns, and opportunities for business improvement</li><li>Develop, test, and deploy predictive models, machine learning algorithms, and statistical analyses to address key business challenges </li><li>Collaborate with cross-functional teams, including business analysts, engineers, and stakeholders, to identify analytics solutions and align deliverables with strategic goals </li><li>Communicate complex findings and recommendations clearly to technical and non-technical audiences through reports, dashboards, and visualizations</li><li>Automate repetitive tasks, streamline data flows, and ensure data quality and governance throughout the analytics lifecycle</li><li>Stay updated on industry trends, emerging technologies, and best practices in data science and AI to continuously enhance solutions</li></ul><p><br></p>
We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.
<p><strong>Overview</strong></p><p>We are seeking a Senior Data Engineer to support a major Salesforce Phase 2 data migration initiative. This role will focus heavily on building and optimizing data pipelines, developing ETL workflows, and moving CRM data from Salesforce into Databricks.</p><p>The engineer will work closely with a senior team member, contribute to Scrum ceremonies, and play a key role in developing the core CRM data environment used by the advertising organization.</p><p><br></p><p><strong>Key Responsibilities</strong></p><p><strong>Data Engineering & Migration</strong></p><ul><li>Develop ETL jobs that move and transform Salesforce data into Databricks.</li><li>Build, test, and maintain high‑volume data pipelines across AWS + Databricks.</li><li>Perform data migration, data integration, and pipeline development (including Mulesoft-related work).</li><li>Ensure all pipelines are reliable, scalable, and optimized for production.</li></ul><p><strong>Development & Infrastructure</strong></p><ul><li>Use Python and PySpark to build ETL components and transformation logic.</li><li>Leverage Spark/PySpark for distributed processing at scale (must‑have).</li><li>Use Terraform to provision and manage cloud infrastructure.</li><li>Set up CI/CD pipelines using Concourse or GitHub Actions for automated deployments.</li></ul><p><strong>Quality, Documentation & Support</strong></p><ul><li>Document ETL processes, pipelines, and data flows.</li><li>Participate in testing, QA, and validation of migrated datasets.</li><li>Provide post‑delivery support and proactively mitigate project risks or single points of failure (SPOF).</li><li>Troubleshoot production issues and implement long‑term fixes to maintain pipeline stability.</li></ul><p><strong>Collaboration</strong></p><ul><li>Work closely with engineering teammates to translate business requirements into working pipelines.</li><li>Participate in weekly Scrum ceremonies.</li><li>Contribute to shared best practices and 
continuous improvement across the data engineering team.</li></ul><p><br></p>
<p><strong>For immediate response please message Valerie Nielsen on LinkedIn or email!</strong></p><p><br></p><p><strong>Job Title:</strong> Senior Data Engineer</p><p> <strong>Location:</strong> Hybrid – Westwood (Los Angeles, CA) near University of California, Los Angeles</p><p> <strong>Compensation:</strong> $175,000 – $185,000 base salary + 10% annual bonus</p><p> <strong>Employment Type:</strong> Full-Time</p><p><br></p><p><strong>Overview</strong></p><p>We are seeking a <strong>Senior Data Engineer</strong> to join a growing data team in <strong>Westwood, CA</strong>. This role will focus on designing and building scalable data pipelines, supporting analytics and reporting initiatives, and improving data infrastructure across the organization.</p><p>The ideal candidate is highly experienced with <strong>Snowflake, dbt, Python</strong>, and modern data pipeline architecture, and enjoys working closely with analytics and business teams to deliver reliable, high-quality data. Experience integrating data from CRM platforms such as <strong>Salesforce</strong> is a strong plus.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, and maintain <strong>scalable data pipelines</strong> supporting analytics, reporting, and operational data needs</li><li>Build and optimize data models and transformations using <strong>dbt</strong> within a <strong>Snowflake</strong> data warehouse environment</li><li>Develop robust ETL/ELT workflows using <strong>Python</strong> and modern data engineering best practices</li><li>Collaborate with analytics teams to deliver clean, reliable datasets used in <strong>Power BI</strong> dashboards and reporting</li><li>Ensure data quality, reliability, and performance across the data platform</li><li>Optimize Snowflake warehouse performance and manage cost-efficient data storage and compute usage</li><li>Integrate data from internal and external systems, including CRM and SaaS platforms</li><li>Partner with stakeholders across engineering,
product, and business teams to define data requirements and solutions</li><li>Maintain documentation and promote data engineering standards and best practices</li></ul><p><br></p><p><br></p>
We are looking for a highly skilled Senior Data Engineer to join our team in Edgewood, New York. This role is ideal for someone who is detail-oriented and has expertise in developing scalable data pipelines, modeling data structures, and optimizing data infrastructure for performance and reliability. The right candidate will play a key role in shaping our data engineering function and collaborating with cross-functional teams to deliver impactful solutions.<br><br>Responsibilities:<br>• Design and maintain efficient and scalable data pipelines to support various operational and commercial systems.<br>• Develop and manage modern data warehouse infrastructure using tools such as BigQuery and dbt.<br>• Integrate, transform, and organize data from multiple sources into structured, queryable formats.<br>• Create and manage logical and physical data models to enhance analytics and reporting capabilities.<br>• Collaborate with stakeholders to enable self-service reporting and build dashboards using platforms like Looker and Looker Studio.<br>• Implement best practices for data engineering, including testing, monitoring, and ensuring pipeline reliability.<br>• Optimize the performance, scalability, and cost-efficiency of data pipelines and warehouses.<br>• Partner with engineering, operations, and business teams to translate data needs into scalable solutions.<br>• Contribute to the improvement of engineering processes, coding standards, and documentation.<br>• Mentor team members and support onboarding as the team grows.
<p>We are seeking a Senior Data Engineer – Ingest to help transform data into meaningful insights and power innovation across the organization. In this role, you will work with a collaborative team of technologists to build scalable data solutions, integrate diverse data sources, and strengthen the core data platform. Your engineering expertise will directly support analytics, data science, operations, and key business stakeholders.</p><p>If you’re passionate about building high‑quality data systems that make a measurable impact, this role offers the opportunity to shape the future of a large, data‑driven organization.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Maintain, update, and expand configuration‑driven data pipelines within the core data platform.</li><li>Build tools and services supporting data discovery, lineage, governance, and privacy.</li><li>Partner with software engineers, data engineers, architects, and product managers to deliver reliable and scalable data solutions.</li><li>Help define and document data standards, naming conventions, pipeline best practices, and system guidelines.</li><li>Ensure the reliability, accuracy, and operational efficiency of datasets to meet SLAs.</li><li>Participate in Agile/Scrum ceremonies and contribute to ongoing process improvements.</li><li>Collaborate closely with users and stakeholders to understand needs and prioritize enhancements.</li><li>Maintain detailed technical documentation to support data quality, governance, and compliance requirements.</li></ul><p><br></p>
<p>Architect and deliver modern data platform solutions with a strong emphasis on Databricks and contemporary cloud data technologies.</p><p>Build secure, scalable, and high‑performing data environments that enable analytics, reporting, and enterprise‑wide data initiatives.</p><p>Oversee and execute migrations from legacy relational databases into Databricks-based ecosystems.</p><p>Design and structure scalable data pipelines and foundational data infrastructure aligned with organizational goals.</p><p>Create and maintain ETL/ELT processes within Databricks to ensure efficient ingestion, transformation, and delivery of data.</p><p>Continuously refine and optimize data workflows to improve performance, stability, and data quality across all processes.</p><p>Manage end-to-end data transitions to ensure operational continuity with minimal business disruption.</p><p>Monitor Databricks workloads and optimize performance, scalability, and cost efficiency across compute and storage layers.</p><p>Partner with data engineers, scientists, analysts, and product stakeholders to gather requirements and build fit‑for‑purpose data solutions.</p><p>Establish and enforce data engineering best practices, development standards, and architectural guidelines.</p><p>Assess emerging tools and technologies to enhance pipeline efficiency, reliability, and automation capabilities.</p><p>Provide technical direction, guidance, and mentorship to junior engineers and team members.</p><p>Collaborate closely with DevOps and infrastructure teams to deploy, manage, and support data systems in production.</p><p>Ensure all data solutions meet compliance standards, organizational security policies, and regulatory obligations.</p><p>Work with enterprise architects and IT leadership to align data architecture with broader technology strategies and long-term roadmaps.</p>
<p>We are looking for an experienced Senior Data Engineer to join our team in Boston, Massachusetts. In this role, you will be responsible for designing and building a robust data platform from the ground up, playing a pivotal part in shaping the data strategy and supporting AI-driven initiatives. This is a unique opportunity to contribute to the creation of a new data engineering function within a dynamic financial services environment. This role is hybrid, onsite in Boston 3 days a week. </p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and implement a scalable data platform using Microsoft Fabric and other technologies within the Microsoft ecosystem.</p><p>• Collaborate with stakeholders to define the data strategy and implement solutions that align with business goals.</p><p>• Oversee and manage external consultants assisting with the development of the data platform.</p><p>• Support AI enablement initiatives by ensuring the data architecture meets analytical and operational needs.</p><p>• Create and maintain ETL processes to ensure efficient data extraction, transformation, and loading.</p><p>• Optimize database performance across SQL, NoSQL, and other database systems.</p><p>• Utilize Python for data engineering tasks, including scripting and automation.</p><p>• Work closely with IT and analytics teams to ensure seamless integration of the data platform into existing systems.</p><p>• Provide technical leadership and guidance while exploring future opportunities to build and expand the data engineering function.</p><p>• Ensure compliance with industry standards and best practices in data security and management.</p>
We are looking for an experienced Senior Data Engineer to join our team in Woodbury, Minnesota. In this role, you will play a key part in designing and optimizing data systems, ensuring scalability and reliability for business-critical operations. The ideal candidate will have a strong background in data engineering and a passion for leveraging technology to drive impactful solutions.<br><br>Responsibilities:<br>• Redesign and optimize complex business logic embedded in Postgres functions to improve functionality.<br>• Develop scalable database schemas and create data models that are optimized for analytics and AI applications.<br>• Implement database partitioning, indexing, and performance tuning to ensure data growth is supported efficiently.<br>• Build and maintain production-grade data pipelines from data ingestion to end-user consumption.<br>• Establish robust processes for data quality assurance, monitoring, and operational reliability within pipelines.<br>• Troubleshoot and resolve data-related and performance issues directly in production environments.<br>• Collaborate with cross-functional teams to ensure seamless integration of data systems into business processes.
<p>We are looking for a skilled Data Analyst / Engineer to join our team on a contract basis remotely. This role focuses on financial data processing, automation, and reporting within a dynamic environment. The ideal candidate will excel at managing data workflows, automating manual processes, and delivering accurate insights to support business decisions.</p><p><br></p><p>Responsibilities:</p><p>• Extract and reconcile financial data from multiple databases, ensuring accuracy and consistency across accounts receivable, accounts payable, and the general ledger.</p><p>• Automate manual reporting processes by developing repeatable daily and month-end pipelines for reliable and auditable data.</p><p>• Design and oversee data workflows across development, production, and utility databases, ensuring secure and efficient access.</p><p>• Create and deliver advanced Excel-based reports using macros, formulas, and Power Query to enhance usability for finance teams.</p><p>• Implement data validation and snapshot techniques to support reconciliation and decision-making processes.</p><p>• Ensure the traceability and accuracy of financial data by establishing robust controls and audit mechanisms.</p><p>• Collaborate with stakeholders to understand reporting requirements and translate them into scalable solutions.</p><p>• Utilize expertise in SQL and Teradata Data Warehouse to optimize database objects and queries for performance.</p><p>• Develop and maintain documentation for automated processes and data workflows to ensure clarity and continuity.</p>
We are looking for an experienced Data Engineering Manager to lead the strategic development and management of our enterprise data warehouse in Columbus, Ohio. This position combines technical expertise with leadership responsibilities to ensure data assets are efficiently structured, integrated, and utilized for operational processes, analytics, compliance, and external partnerships. The ideal candidate will drive innovation while maintaining robust data architecture standards to support the organization's long-term goals.<br><br>Responsibilities:<br>• Oversee the design, implementation, and optimization of the enterprise data warehouse and associated reporting systems.<br>• Ensure seamless data integration between source systems, analytics platforms, and reporting tools to maintain accuracy and reliability.<br>• Collaborate with various teams to align data structures and solutions with organizational objectives.<br>• Provide strategic direction for data architecture and recommend scalable solutions aligned with industry best practices.<br>• Develop and enforce standards for enterprise reporting, key performance indicators, and consistent data definitions.<br>• Promote uniformity in business rules and metric calculations across departments to ensure credible and authoritative data outputs.<br>• Review and validate data workflows, transformations, and reports to ensure completeness and accuracy.<br>• Identify and implement system improvements to enhance the functionality and efficiency of data platforms.<br>• Address and resolve issues related to data integrity or reporting disruptions, ensuring minimal downtime.<br>• Mentor team members and provide technical guidance to build a highly skilled and capable team.
Additional Skills:<br><br>• Deep hands-on expertise with dbt (Cloud or Core), including model development, testing, macros, packages, documentation, scheduling, and performance optimization.<br>• Strong command of dbt project structure, materializations (including incremental models and snapshots), and integration with BI-owned metric certification and semantic layers.<br>• Ability to evaluate when to leverage community dbt packages versus building custom solutions.<br>• Expert-level SQL for complex analytical transformations and performance optimization.<br>• Strong data modeling skills across dimensional (Kimball), Data Vault, and domain-oriented patterns, including temporal modeling, SCDs, and surrogate keys.<br>• Proven judgment in balancing normalization vs. denormalization for performance, flexibility, and downstream analytics use cases.<br>• Experience designing and implementing automated data quality testing and validation frameworks.<br>• Familiarity with data quality tooling (e.g., Great Expectations) and core data quality dimensions across analytics workflows.<br>• Familiarity with modern analytics stacks and how analytics engineering integrates with cloud data platforms, ingestion tools, dbt, and BI systems.<br>• Working knowledge of DataOps practices such as version control, CI/CD, and automated testing.<br>• Knowledge of K–12 education data domains and metrics, including enrollment, attendance, assessments, staffing, and multi-state reporting requirements.<br>• Familiarity with education data privacy (FERPA), academic calendars, and operational rhythms.<br>• Proven ability to lead technical teams, facilitate requirements and design discussions, and manage competing stakeholder priorities.<br>• Strong communication and change management skills, translating technical capabilities into clear business value.
<br><br>Required experience:<br><br>• Bachelor’s degree in Computer Science, Information Systems, Data Science, Statistics, Mathematics, or a related field, or equivalent practical experience.<br>• 7+ years of experience in analytics engineering, data engineering, data analytics, or closely related technical roles.<br>• 3+ years of experience in technical leadership or people management, leading analytics, data, or BI teams.<br>• Demonstrated hands-on experience with dbt (2+ years) building and maintaining production data models and transformations.<br>• Strong data modeling expertise, with a proven track record designing dimensional models, analytics data marts, or business-facing data products.<br>• Expert-level SQL skills, including complex analytical queries and performance optimization.<br>• Experience partnering with non-technical stakeholders to gather requirements and translate them into effective technical solutions.<br><br>Preferred Education and Experience:<br><br>• Master’s degree in Data Science, Statistics, Computer Science, or a related analytical field.<br>• dbt Analytics Engineering certification or equivalent demonstrated expertise.<br>• Hands-on experience with Snowflake or comparable cloud data warehouse platforms.<br>• Experience working with K–12 education data, student information systems, or education analytics.<br>• Experience building data solutions for multi-state or geographically distributed organizations.<br>• Exposure to data governance practices, including business glossaries and data quality frameworks.<br>• Familiarity with modern data stack tools (e.g., ingestion, orchestration, BI, and data quality platforms).<br>• Experience leading analytics teams using Agile or iterative delivery methodologies.
<p>Architect and lead the modern data platform using <strong>Microsoft Fabric</strong>. You’ll define data models, pipelines, governance, and performance patterns that power analytics at scale.</p><p><strong>What You’ll Do</strong></p><ul><li>Design end‑to‑end architectures leveraging Fabric (OneLake, Lakehouses, Warehouses)</li><li>Define medallion/layered models, dimensional designs, and semantic layers</li><li>Lead ingestion and transformation with Dataflows Gen2 / Data Factory / Notebooks</li><li>Establish governance (data quality, lineage, security, RLS/OLS)</li><li>Optimize performance and cost; standardize reusable patterns</li><li>Mentor data engineers/analysts; review solutions and set best practices</li><li>Partner with business to translate use cases into scalable models</li></ul><p><br></p>
We are looking for an experienced Data Architect to design and implement cutting-edge data solutions that meet the evolving needs of our enterprise. This role involves building secure, scalable, and high-performing data platforms while leveraging modern technologies and aligning with organizational goals. The ideal candidate will have expertise in cloud-based architecture, data governance, and advanced analytics, driving innovation across diverse business functions.<br><br>Responsibilities:<br>• Develop comprehensive data architecture strategies for advanced analytics and big data solutions using Azure Databricks.<br>• Design and implement Databricks Delta Lake-based Lakehouse architecture, utilizing PySpark Jobs, Databricks Workflows, Unity Catalog, and Medallion architecture.<br>• Optimize and configure Databricks clusters, notebooks, and workflows to ensure efficiency and scalability.<br>• Integrate Databricks with Azure services such as Azure Data Lake Storage, Azure Data Factory, Azure Key Vault, and Microsoft Fabric.<br>• Establish and enforce best practices for data governance, security, and cost management.<br>• Collaborate with data engineers, analysts, and business stakeholders to translate functional requirements into robust technical solutions.<br>• Provide technical mentoring and leadership to team members focused on Databricks and Azure technologies.<br>• Monitor, troubleshoot, and enhance data pipelines and workflows to maintain reliability and performance.<br>• Ensure compliance with organizational and regulatory standards regarding data security and privacy.<br>• Document configurations, processes, and governance standards to support long-term scalability and usability.
<p>Robert Half is seeking an experienced Data Architect to design and lead scalable, secure, and high-performing enterprise data solutions. This role will focus on building next-generation cloud data platforms, driving adoption of modern analytics technologies, and ensuring alignment with governance and security standards.</p><p><br></p><p>You’ll serve as a hands-on technical leader, partnering closely with engineering, analytics, and business teams to architect data platforms that enable advanced analytics and AI/ML initiatives. This position blends deep technical expertise with strategic thinking to help unlock the value of data across the organization.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and implement end-to-end data architecture for big data and advanced analytics platforms.</li><li>Architect and build Delta Lake–based lakehouse environments from the ground up, including DLT pipelines, PySpark jobs, workflows, Unity Catalog, and Medallion architecture.</li><li>Develop scalable data models that meet performance, security, and governance requirements.</li><li>Configure and optimize clusters, notebooks, and workflows to support ETL/ELT pipelines.</li><li>Integrate cloud data platforms with supporting services such as data storage, orchestration, secrets management, and analytics tools.</li><li>Establish and enforce best practices for data governance, security, and cost optimization.</li><li>Collaborate with data engineers, analysts, and stakeholders to translate business requirements into technical solutions.</li><li>Provide technical leadership and mentorship to team members.</li><li>Monitor, troubleshoot, and optimize data pipelines to ensure reliability and efficiency.</li><li>Ensure compliance with organizational and regulatory standards related to data privacy and security.</li><li>Create and maintain documentation for architecture, processes, and governance standards.</li></ul>
We are looking for an experienced Power BI Business Intelligence Engineer to join our team in Niceville, Florida. In this role, you will play a vital part in managing and enhancing our reporting and business intelligence platforms to provide actionable insights. Your expertise will drive data analysis, dashboard creation, and the development of solutions that support key business decisions.<br><br>Responsibilities:<br>• Oversee the company's reporting and business intelligence systems to ensure optimal performance and accuracy.<br>• Develop a deep understanding of the organization's business models, operations, and decision-making processes.<br>• Analyze data architecture and gather requirements from stakeholders to create tailored solutions.<br>• Build and manage data sources, models, and integrations for reporting and analytics purposes.<br>• Design and maintain dashboards and reports using enterprise business intelligence tools.<br>• Facilitate seamless data integration processes to retrieve, transform, and analyze datasets.<br>• Support leadership in creating management information and KPIs to drive data-driven decision-making.<br>• Ensure data quality and integrity across all business intelligence deliverables.<br>• Stay updated with the latest advancements in BI technologies, tools, and practices to recommend improvements.<br>• Document systems and processes comprehensively while adhering to governance, security, and privacy standards.
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
We are looking for an experienced Senior Data Scientist to join our dynamic team in Boston, Massachusetts. In this role, you will leverage your expertise in statistical modeling, machine learning, and cloud-based analytics to drive impactful decisions and solutions. The ideal candidate will bring a strong technical background, a passion for working with regulated data, and a commitment to ethical AI practices.<br><br>Responsibilities:<br>• Develop and implement advanced statistical models and machine learning algorithms to solve complex business problems.<br>• Monitor and evaluate the performance of AI models, ensuring reliability, fairness, and compliance with ethical standards.<br>• Collaborate with engineering and product teams to translate data-driven insights into actionable strategies.<br>• Utilize cloud-based tools such as AWS SageMaker and Redshift to design and deploy scalable analytics solutions.<br>• Handle sensitive healthcare or clinical trial datasets while adhering to strict data privacy and security regulations.<br>• Conduct exploratory data analysis and create visualizations to communicate findings effectively.<br>• Build and optimize ETL pipelines for efficient data transformation and integration.<br>• Apply Bayesian statistics and time-series forecasting techniques to improve predictive accuracy.<br>• Maintain comprehensive documentation of data science workflows and processes.<br>• Stay updated on industry trends and advancements to continuously enhance methodologies and tools.
<p>Position Overview</p><p>We are seeking a highly detail‑oriented Environmental Data Specialist to support data accuracy, regulatory documentation, and compliance activities related to environmental and waste management operations. This role is essential for maintaining precise records, updating internal databases, ensuring regulatory alignment, and validating waste tracking information. Attention to detail is critical, and all submissions must include a basic typing/data entry test result for consideration.</p><p>Key Responsibilities</p><ul><li>Update internal databases and shared sites with new, recertified, or amended waste profile data</li><li>Upload and maintain current approvals, notifications, and treatment standards documentation</li><li>Conduct technical regulatory reviews of new, amended, and recertified waste profile data, including:<ul><li>DOT compliance</li><li>RCRA Land Ban requirements</li><li>Internal consistency checks</li></ul></li><li>Efficiently track and run reports, including those requiring advanced Excel knowledge</li><li>Validate shipment and waste tracking data to ensure accuracy and compliance</li><li>Support documentation, data entry, and administrative tasks related to environmental and regulatory workflows</li></ul>
We are looking for a detail-oriented Data Analyst to oversee grant-related processes and manage communications with community partners. The ideal candidate will ensure accurate tracking, submission of reimbursements, and provide timely responses to inquiries from management. This position offers an opportunity to contribute to impactful community initiatives while applying strong data analysis skills.<br><br>Responsibilities:<br>• Manage grant-related data, including tracking expenditures and reimbursements to ensure accuracy and compliance.<br>• Collaborate with community partners to maintain clear and effective communication regarding grant processes and requirements.<br>• Prepare and submit reimbursement documentation in alignment with organizational policies.<br>• Respond to inquiries from management and stakeholders in a timely and thorough manner.<br>• Utilize data manipulation tools and queries to analyze and interpret grant-related information.<br>• Create and maintain reports using Microsoft Access and Excel to support decision-making processes.<br>• Assist in organizing community events and initiatives tied to grant programs.<br>• Ensure data integrity and accuracy in all reporting and analysis efforts.<br>• Support the team with additional administrative tasks related to data management and grant tracking.
<p>Our company is seeking a motivated Data Analyst with expertise in SQL to join our growing team. In this role, you will leverage your analytical skills to turn data into actionable insights, helping our organization make data-driven decisions. You will work closely with stakeholders across departments to define analysis needs, prepare and interpret datasets, and deliver meaningful reports and visualizations.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Extract, clean, and analyze large datasets using SQL.</li><li>Develop, optimize, and maintain SQL queries, scripts, and procedures for data extraction and reporting.</li><li>Produce insightful analysis, dashboards, and reports for cross-functional teams to support key business initiatives.</li><li>Work closely with business partners to understand requirements and translate them into analytical solutions.</li><li>Identify opportunities to improve data quality, processes, and reporting efficiency.</li><li>Present findings and recommendations to stakeholders in a clear and concise manner.</li><li>Stay current on best practices and recent advancements in data analytics and reporting tools.</li></ul><p><br></p>
<p>We are looking for an experienced Data Analyst to join our team on a long-term contract basis for a global finance firm. This fully remote role requires someone with strong attention to detail who excels in data reconciliation, fraud investigation, and analytics. You will play a key role in analyzing data for accuracy and identifying potential discrepancies or fraudulent activities.</p><p><br></p><p><strong><u>Responsibilities:</u></strong></p><p>• Conduct regular account reconciliations to ensure data consistency and accuracy.</p><p>• Investigate suspected fraudulent activities using advanced analytics tools.</p><p>• Perform in-depth data analysis to identify trends and anomalies.</p><p>• Utilize VLOOKUP and other Excel functions to organize and interpret complex datasets.</p><p>• Collaborate with cross-functional teams to address data discrepancies and improve processes.</p><p>• Develop and implement anti-fraud strategies based on identified risks.</p><p>• Maintain detailed documentation of findings and reconciliation processes.</p><p>• Provide actionable insights to support decision-making and enhance operational efficiency.</p><p>• Ensure compliance with relevant regulations and standards during data analysis and investigations.</p>
<p>As a Business Data Analyst or Specialist, you will be responsible for managing data retrieval and analysis within SCMHC. The position includes organizing data points, communicating between upper management and the IT department, and analyzing data to determine business needs.</p><p>Responsibilities:</p><ul><li>Become a subject matter expert for the organization’s EMR/EHR (Credible Behavioral Health) and champion its adoption.</li><li>Develop data solutions from multiple data sources and applications.</li><li>Develop and manage data reporting for internal and external consumers.</li><li>Perform daily, monthly, and quarterly data uploads and submissions as required.</li><li>Ensure data is secure via proper access controls.</li><li>Provide EMR/EHR assistance and support to end users.</li><li>Organize, implement, and manage new-hire and ongoing EMR/EHR training.</li></ul>