<p>We are looking for a skilled Data Engineer to join our team in Foxborough, Massachusetts, on a long-term contract basis. In this role, you will design, optimize, and maintain data pipelines and storage solutions, leveraging modern tools to ensure high performance and reliability. This position offers an opportunity to collaborate across teams and apply current best practices in data engineering and analytics.</p><p><strong>Responsibilities</strong></p><ul><li>Optimize Amazon Redshift performance by configuring distribution keys and sort keys and by tuning queries (a brief sketch follows this list).</li><li>Develop and maintain robust data pipelines using AWS Glue, and orchestrate workflows with Airflow.</li><li>Manage semantic layers and metadata to support reliable analytics and AI-driven insights.</li><li>Implement best practices for data partitioning, compression, and columnar storage formats.</li><li>Monitor and troubleshoot data workflows to ensure high availability, reliability, and automated observability.</li><li>Automate data processing tasks using Python and AWS-native tools.</li><li>Enforce data security and governance policies, including row- and column-level access controls, using AWS Lake Formation and related AWS services.</li><li>Oversee compliance monitoring and auditing through CloudWatch, CloudTrail, and similar tools.</li><li>Continuously refine the data architecture by adopting emerging AWS best practices and patterns.</li><li>Collaborate closely with Operations, Data Governance, and other teams to align with standards and meet delivery objectives.</li></ul>
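<p><em>Illustrative sketch:</em> the combination of Airflow orchestration, Glue jobs, and Redshift key tuning described above might look roughly like the following. This is a minimal example under assumed names: the DAG ID, Glue job name, Redshift connection ID, and table schema are all hypothetical placeholders, not details of this engagement's actual stack.</p><pre><code>from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator

with DAG(
    dag_id="partner_feed_daily",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Run an existing Glue ETL job that lands cleaned data in S3.
    transform = GlueJobOperator(
        task_id="run_glue_transform",
        job_name="partner_feed_transform",  # hypothetical Glue job
    )

    # Create the reporting table with an explicit distribution key
    # (for join locality) and a compound sort key (for range scans).
    create_table = SQLExecuteQueryOperator(
        task_id="create_reporting_table",
        conn_id="redshift_default",         # hypothetical connection ID
        sql="""
            CREATE TABLE IF NOT EXISTS analytics.daily_events (
                event_id   BIGINT,
                account_id BIGINT,
                event_ts   TIMESTAMP
            )
            DISTSTYLE KEY
            DISTKEY (account_id)
            COMPOUND SORTKEY (event_ts);
        """,
    )

    transform >> create_table
</code></pre><p>Distributing on the join column co-locates matching rows on the same Redshift slice, while sorting on the timestamp keeps time-range scans cheap, which is the intent behind the distribution- and sort-key tuning listed above.</p>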
<p><strong>Data Engineer (Python / AWS)</strong></p><p><strong>Location:</strong> Remote (Northeast / Greater Boston area preferred)</p><p><strong>Type:</strong> Full-Time</p><p><strong>Level:</strong> Mid-to-Senior Individual Contributor</p><p><strong>About the Role</strong></p><p>We are looking for a strong individual contributor who excels in the Python data ecosystem and enjoys building reliable, scalable data pipelines. This role sits within a data engineering group responsible for integrating large volumes of data from external partners and transforming it into usable datasets for internal teams. You’ll work with modern cloud tools while helping the team gradually transition away from a legacy platform.</p><p>This position is ideal for someone who wants to stay hands-on, focus on technical execution, and remain in an IC role for the next several years. We’re not looking for someone aiming to move immediately into architecture or leadership.</p><p>The team is fully distributed. Candidates in the Boston area are welcome to work from the office, but the rest of the group is remote, and anyone local may occasionally sit with other teams when on site.</p><p><strong>What You’ll Do</strong></p><ul><li>Build and maintain ETL pipelines that ingest, clean, and aggregate data received from external vendors and large enterprise partners (sketched below).</li><li>Develop Python-based data processing workflows deployed on AWS.</li><li>Work with tools such as AWS Glue, Airflow, dbt, and PySpark to support data transformations and pipeline orchestration.</li><li>Help modernize existing workflows and assist in the gradual migration away from a legacy data system.</li><li>Collaborate with internal stakeholders to understand data needs, define requirements, and ensure reliable integration of partner data feeds.</li><li>Troubleshoot pipeline issues, optimize performance, and improve overall system stability.</li><li>Contribute to best practices around code quality, testing, documentation, and data governance.</li></ul>
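<p><em>Illustrative sketch:</em> a minimal, self-contained example of the ingest–clean–aggregate pattern described in the first bullet, written in PySpark. The S3 paths, column names, and schema are hypothetical placeholders, not an actual partner feed.</p><pre><code>from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("vendor_feed_clean").getOrCreate()

# Ingest: read a raw vendor extract (header row and schema assumed).
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/vendor_feed/")

# Clean: drop rows missing keys, normalize types, and deduplicate.
clean = (
    raw.dropna(subset=["account_id", "event_ts"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["account_id", "event_ts"])
)

# Aggregate: daily totals per account for downstream consumers.
daily = clean.groupBy("account_id", "event_date").agg(
    F.sum("amount").alias("total_amount"),
    F.count("*").alias("event_count"),
)

# Write columnar output, partitioned by date, for efficient reads.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/vendor_daily/"
)
</code></pre><p>Writing partitioned Parquet keeps downstream reads columnar and prunable by date, matching the partitioning and columnar-storage practices mentioned in both postings.</p>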