<p>We are looking for a skilled Database Technology Manager to oversee and optimize our database systems and SharePoint platform. This role involves ensuring the reliability, security, and efficiency of data infrastructure while collaborating with teams to support business operations. The ideal candidate will have a strong technical background and a passion for delivering high-quality database solutions.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, implement, and maintain database systems to ensure optimal performance and scalability.</li><li>Manage and administer SharePoint platforms, ensuring seamless integration with organizational workflows.</li><li>Monitor and troubleshoot database and SharePoint issues to maintain system availability and reliability.</li><li>Collaborate with cross-functional teams to identify and address data-related requirements.</li><li>Develop and enforce security protocols to protect sensitive information within databases.</li><li>Provide technical support for desktop administration and remote desktop access.</li><li>Oversee Active Directory configurations and ensure proper user access management.</li><li>Evaluate and implement hardware solutions to support database and SharePoint operations.</li><li>Stay updated on advancements in database technologies and recommend improvements.</li><li>Document processes and procedures for database and SharePoint management.</li></ul>
<p><strong>Data Engineer (Python / AWS)</strong></p><p><strong>Location:</strong> Remote (Northeast / Greater Boston area preferred)</p><p><strong>Type:</strong> Full-Time</p><p><strong>Level:</strong> Mid-to-Senior Individual Contributor</p><p><strong>About the Role</strong></p><p>We are looking for a strong individual contributor who excels in the Python data ecosystem and enjoys building reliable, scalable data pipelines. This role sits within a data engineering group responsible for integrating large volumes of data from external partners and transforming it into usable datasets for internal teams. You’ll work with modern cloud tools while helping the team gradually transition away from a legacy platform.</p><p>This position is ideal for someone who wants to stay hands-on, focus on technical execution, and remain in an IC role for the next several years. We’re not looking for someone aiming to move immediately into architecture or leadership.</p><p>The team is fully distributed; candidates in the Boston area can work from the office, but the rest of the group is remote.
Anyone local may occasionally sit with other teams when on site.</p><p><strong>What You’ll Do</strong></p><ul><li>Build and maintain ETL pipelines that ingest, clean, and aggregate data received from external vendors and large enterprise partners.</li><li>Develop Python‑based data processing workflows deployed on AWS cloud services.</li><li>Work with tools such as AWS Glue, Airflow, dbt, and PySpark to support data transformations and pipeline orchestration.</li><li>Help modernize existing workflows and assist in the gradual migration away from a legacy data system.</li><li>Collaborate with internal stakeholders to understand data needs, define requirements, and ensure reliable integration of partner data feeds.</li><li>Troubleshoot pipeline issues, optimize performance, and improve overall system stability.</li><li>Contribute to best practices around code quality, testing, documentation, and data governance.</li></ul>