<p>We are seeking a Data Architect to lead the design and evolution of our data architecture. This individual will play a critical role in shaping how data is collected, stored, integrated, and consumed across the organization, supporting analytics and reporting. The ideal candidate brings a balance of hands‑on technical depth, architectural leadership, and strong collaboration with business and technology stakeholders.</p><p><br></p><p>Responsibilities</p><ul><li>Lead the design and implementation of scalable, secure, and high‑performing enterprise data architectures.</li><li>Define data architecture standards, reference architectures, and best practices across platforms.</li><li>Architect data solutions supporting analytics, BI, data science, and operational reporting.</li><li>Design and oversee data models for data warehouses, data lakes, and lakehouse architectures.</li><li>Partner with application, infrastructure, security, and analytics teams to ensure seamless data integration.</li><li>Evaluate and recommend data technologies, tools, and platforms aligned to business strategy.</li><li>Establish and enforce governance, data quality, metadata management, and lineage practices.</li><li>Provide technical leadership and mentorship to data engineers and analytics teams.</li><li>Translate business requirements into actionable data solutions for senior stakeholders.</li></ul>
We are looking for a skilled Data Analyst to join our team on a long-term contract basis in Cambridge, Massachusetts. This role is ideal for someone who thrives on analyzing complex datasets, identifying trends, and delivering actionable insights. Your expertise in data reporting and visualization will be essential in supporting key organizational goals.<br><br>Responsibilities:<br>• Develop comprehensive reports and dashboards using Tableau to visualize and present data effectively.<br>• Utilize PeopleSoft to extract and manage data for reporting and analytical purposes.<br>• Conduct detailed investigations into suspected fraud cases and provide analytical insights.<br>• Monitor and analyze patterns in fraud analytics to identify risks and recommend solutions.<br>• Collaborate with cross-functional teams to ensure data accuracy and integrity.<br>• Apply anti-fraud measures to detect and prevent fraudulent activities.<br>• Perform thorough data analysis to uncover trends and deliver actionable recommendations.<br>• Maintain documentation of findings and processes to support organizational transparency.<br>• Present findings and recommendations to stakeholders in a clear and concise manner.<br>• Continuously improve reporting processes and tools to enhance data-driven decision-making.
<p>We are looking for a skilled Data Analyst / Engineer to join our team on a contract basis remotely. This role focuses on financial data processing, automation, and reporting within a dynamic environment. The ideal candidate will excel at managing data workflows, automating manual processes, and delivering accurate insights to support business decisions.</p><p><br></p><p>Responsibilities:</p><p>• Extract and reconcile financial data from multiple databases, ensuring accuracy and consistency across accounts receivable, accounts payable, and the general ledger.</p><p>• Automate manual reporting processes by developing repeatable daily and month-end pipelines for reliable and auditable data.</p><p>• Design and oversee data workflows across development, production, and utility databases, ensuring secure and efficient access.</p><p>• Create and deliver advanced Excel-based reports using macros, formulas, and Power Query to enhance usability for finance teams.</p><p>• Implement data validation and snapshot techniques to support reconciliation and decision-making processes.</p><p>• Ensure the traceability and accuracy of financial data by establishing robust controls and audit mechanisms.</p><p>• Collaborate with stakeholders to understand reporting requirements and translate them into scalable solutions.</p><p>• Utilize expertise in SQL and Teradata Data Warehouse to optimize database objects and queries for performance.</p><p>• Develop and maintain documentation for automated processes and data workflows to ensure clarity and continuity.</p>
<p>We’re looking for a Principal Java Developer to lead the design and development of high‑performance applications in a modern, containerized environment. In this role, you’ll drive architectural decisions, mentor the engineering team, and own the full development lifecycle from concept to deployment.</p><p><br></p><p><strong>What You’ll Bring:</strong></p><ul><li>Deep expertise in Java and modern Java frameworks</li><li>Strong experience deploying code to containerized environments (Docker, Kubernetes, etc.)</li><li>Background in building scalable, cloud‑ready services</li><li>Ability to lead technical strategy and guide engineering best practices</li></ul><p>If you thrive in a hands‑on leadership role and enjoy building robust, container‑first systems, we’d love to connect. Apply today!</p>
We are looking for an experienced Senior Data Scientist to join our dynamic team in Boston, Massachusetts. In this role, you will leverage your expertise in statistical modeling, machine learning, and cloud-based analytics to drive impactful decisions and solutions. The ideal candidate will bring a strong technical background, a passion for working with regulated data, and a commitment to ethical AI practices.<br><br>Responsibilities:<br>• Develop and implement advanced statistical models and machine learning algorithms to solve complex business problems.<br>• Monitor and evaluate the performance of AI models, ensuring reliability, fairness, and compliance with ethical standards.<br>• Collaborate with engineering and product teams to translate data-driven insights into actionable strategies.<br>• Utilize cloud-based tools such as AWS SageMaker and Redshift to design and deploy scalable analytics solutions.<br>• Handle sensitive healthcare or clinical trial datasets while adhering to strict data privacy and security regulations.<br>• Conduct exploratory data analysis and create visualizations to communicate findings effectively.<br>• Build and optimize ETL pipelines for efficient data transformation and integration.<br>• Apply Bayesian statistics and time-series forecasting techniques to improve predictive accuracy.<br>• Maintain comprehensive documentation of data science workflows and processes.<br>• Stay updated on industry trends and advancements to continuously enhance methodologies and tools.
<p><strong>Data Engineer (Python / AWS)</strong></p><p><strong>Location:</strong> Remote (Northeast / Greater Boston area preferred)</p><p><strong>Type:</strong> Full-Time</p><p><strong>Level:</strong> Mid-to-Senior Individual Contributor</p><p><strong>About the Role</strong></p><p>We are looking for a strong individual contributor who excels in the Python data ecosystem and enjoys building reliable, scalable data pipelines. This role sits within a data engineering group responsible for integrating large volumes of data from external partners and transforming it into usable datasets for internal teams. You’ll work with modern cloud tools while also helping our team gradually transition away from a legacy platform.</p><p>This position is ideal for someone who wants to stay hands-on, focus on technical execution, and remain in an IC role for the next several years. We’re not looking for someone who is aiming to move immediately into an architecture or leadership track.</p><p>This team is fully distributed; candidates in the Boston area can work from the office, but the rest of the group is remote.
Anyone local may occasionally sit with other teams when on site.</p><p><br></p><p><strong>What You’ll Do</strong></p><ul><li>Build and maintain ETL pipelines that ingest, clean, and aggregate data received from external vendors and large enterprise partners.</li><li>Develop Python‑based data processing workflows deployed on AWS cloud services.</li><li>Work with tools such as AWS Glue, Airflow, dbt, and PySpark to support data transformations and pipeline orchestration.</li><li>Help modernize existing workflows and assist in the gradual migration away from a legacy data system.</li><li>Collaborate with internal stakeholders to understand data needs, define requirements, and ensure reliable integration of partner data feeds.</li><li>Troubleshoot pipeline issues, optimize performance, and improve overall system stability.</li><li>Contribute to best practices around code quality, testing, documentation, and data governance.</li></ul><p><br></p>