We are looking for an experienced Artificial Intelligence (AI) Engineer to join our team in Atlanta, Georgia. This is a long-term contract position where you will play a pivotal role in advancing AI initiatives across clinical and business operations. The ideal candidate will have a strong technical background, excellent communication skills, and the ability to collaborate across multiple departments to drive innovative solutions in healthcare.<br><br>Responsibilities:<br>• Partner with various departments to identify, design, and implement AI solutions that address clinical, financial, and operational needs.<br>• Evaluate and integrate third-party AI tools and platforms, with a focus on healthcare applications such as NexTech, call center automation, AI-powered scribing, and clinical trial identification.<br>• Develop and support AI applications to enhance patient identification for trials, automate documentation, and improve workflows.<br>• Build and maintain AI-driven dashboards and analytics using tools like Power BI to provide actionable insights for clinical and business teams.<br>• Ensure AI integrations meet scalability, security, and compliance requirements, adhering to healthcare data privacy standards.<br>• Serve as a strategic advisor by proactively identifying opportunities for organizational improvement through AI.<br>• Collaborate with stakeholders across IT and non-IT teams to foster innovation and streamline operations.<br>• Stay updated on industry trends, regulatory standards, and emerging AI technologies relevant to healthcare.<br>• Provide technical leadership and guidance on AI-related projects, ensuring alignment with organizational goals.
<p>We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. This role will support data-driven decision-making by ensuring reliable data flow, transformation, and accessibility across the organization.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, build, and maintain ETL/ELT data pipelines</li><li>Develop and optimize data models and data architectures</li><li>Integrate data from multiple sources (APIs, databases, third-party systems)</li><li>Ensure data quality, integrity, and reliability</li><li>Collaborate with data analysts, data scientists, and business stakeholders</li><li>Monitor and troubleshoot data pipeline performance issues</li><li>Implement best practices for data governance and security</li></ul><p><br></p>
<p><strong>Position Summary:</strong></p><ul><li>We are looking for a Data Operations Engineer to support and oversee the automated data‑pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks.</li><li>Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.</li><li>The role requires strong proficiency in SQL and Python, practical experience with AWS services, and the ability to communicate effectively with external customers when data issues arise.</li></ul><p><strong>Responsibilities:</strong></p><p><strong>Data Pipeline Monitoring & Operations:</strong></p><ul><li>Monitor automated batch and streaming data pipelines in AWS</li><li>Identify, troubleshoot, and resolve data processing failures</li><li>Investigate file‑level errors, schema mismatches, and transformation issues</li><li>Perform root‑cause analysis and document resolutions</li><li>Ensure data integrity, completeness, and timeliness across environments</li><li>Escalate architectural or systemic issues to the Data Engineering team</li></ul><p><strong>Customer Data Onboarding & Implementation:</strong></p><ul><li>Collaborate directly with customers to understand their file formats and data structures</li><li>Create and maintain mapping templates to align customer data to a normalized data model</li><li>Validate sample files and run tests on ingestion workflows</li><li>Configure ingestion parameters within predefined frameworks</li><li>Support customer go‑live processes and initial data processing cycles</li></ul><p><strong>Data Quality & Continuous Improvement:</strong></p><ul><li>Write SQL queries to validate data accuracy and research 
anomalies</li><li>Develop lightweight Python scripts for validation, transformation checks, or automation tasks</li><li>Improve monitoring processes, internal documentation, and operational playbooks</li><li>Work with engineering teams to strengthen platform reliability and observability</li></ul><p><strong>Customer & Cross‑Functional Collaboration:</strong></p><ul><li>Communicate clearly with customers regarding file issues or data discrepancies</li><li>Partner with internal teams including Data Engineering, Product, and Support</li><li>Provide feedback to enhance scalability, resilience, and overall platform performance</li></ul>
<p><strong>Overview</strong></p><p>Our client is seeking an Atlanta-based AI Engineer to accelerate their artificial intelligence strategy and project implementation across clinical and business operations. This cross-functional role will serve as a trusted partner for both clinical and financial teams, driving AI adoption with tools such as NexTech (AI scribe and clinical trials), call center AI, AI-based revenue cycle management bots, and advanced analytics through Power BI and data integrations. Our ideal candidate is a self-starter, an “athlete” or “one-man-band,” who can communicate effectively with both IT and non-IT professionals across different teams, catalyze innovation, and provide strategic guidance for optimizing workflows and extracting value from healthcare data.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Collaborate with 12 different departments to identify, scope, and deliver AI use cases in clinical, financial, and operational domains.</li><li>Evaluate, recommend, and integrate third-party AI vendors and solutions, especially in healthcare applications such as NexTech, call center automation, AI-powered scribing, clinical trial identification, and revenue cycle management bots.</li><li>Build, customize, and deploy AI tools (including Gemini, ChatGPT, Claude, and Copilot) to create efficiencies, automate workflows, and generate actionable insights from company data (databases, emails, PowerPoint presentations).</li><li>Support clinical teams by automating patient identification for trials, improving documentation, and leveraging AI scribes in NexTech.</li><li>Assist management in leveraging AI-driven dashboards and analytics (Power BI) to monitor key business and clinical metrics.</li><li>Ensure scalable, secure, and compliant integration of AI applications.</li><li>Maintain up-to-date knowledge of healthcare data requirements, privacy/regulatory standards, and industry best practices.</li><li>Serve as an internal 
consultant, staying knowledgeable about company operations and proactively looking for opportunities to drive organizational change via AI.</li></ul>
<p><strong>Overview</strong></p><p>Our client is implementing HubSpot for the first time and is seeking a <strong>HubSpot Engineering Expert/SME</strong> to support the initial stand‑up, integration design, and validation efforts. This role is critical because the client operates a <strong>custom-built ERP platform</strong> with a <strong>SQL Server / C# backend</strong>, requiring <strong>custom HubSpot integrations</strong> that go beyond out‑of‑the‑box capabilities.</p><p>The internal development team will handle the core programming work. This consultant will provide <strong>HubSpot technical leadership</strong>, owning <strong>integration design, data modeling guidance, and testing strategy</strong>, with a strong understanding of HubSpot’s backend data schema and APIs.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Serve as the <strong>HubSpot technical SME</strong> for initial implementation and ongoing support</li><li>Lead the <strong>design and validation</strong> of integrations between HubSpot and a <strong>custom ERP system</strong></li><li>Advise on HubSpot data architecture, including:<ul><li>Object modeling</li><li>Property strategy</li><li>Required field mapping</li><li>Data sync rules and dependencies</li></ul></li><li>Partner with internal engineers integrating HubSpot with:<ul><li>Custom REST APIs</li><li>SQL Server database</li><li>C# backend services</li></ul></li><li>Ensure HubSpot data feeds reliably into <strong>key required fields</strong> within the ERP system</li><li>Support HubSpot configuration decisions during initial setup</li><li>Assist with <strong>integration testing, troubleshooting, and refinement</strong></li><li>Act as an ongoing HubSpot resource for future enhancements and support needs</li></ul><p><br></p>
We are looking for a skilled and innovative Sr. Engineer specializing in integrations (iPaaS) to join our team on a contract basis. This role is based in Atlanta, Georgia, and offers the opportunity to lead technical projects that drive seamless connectivity between systems and platforms. You will play a pivotal role in designing, implementing, and optimizing integration frameworks while collaborating with cross-functional teams to align technical solutions with business goals.<br><br>Responsibilities:<br>• Design and implement scalable integration frameworks that ensure efficient data pipelines and connectivity between systems.<br>• Provide technical leadership and guidance to a team of engineers, fostering accountability and a collaborative work environment.<br>• Develop and enhance integrations with third-party systems, focusing on data extraction, transformation, and loading processes.<br>• Define and uphold best practices for integration development, documentation, and team collaboration.<br>• Work closely with stakeholders, including product managers and engineering teams, to ensure alignment with business requirements.<br>• Manage the end-to-end delivery of integration projects, ensuring deadlines are met and quality standards are maintained.<br>• Lead initiatives to explore and integrate innovative technologies to optimize data handling and system connectivity.<br>• Utilize AI-driven tools and techniques to enhance coding productivity and integration efficiency.<br>• Collaborate with enterprise systems such as Salesforce, NetSuite, and other relevant platforms to build seamless integrations.
<p><strong>Key Responsibilities </strong> </p><ul><li>Architect and Implement Integrations Framework: Develop a scalable and resilient integrations framework that prioritizes ETL techniques and data pipeline efficiency. </li><li>Technical Leadership & Mentorship: Lead and mentor a team of 3 engineers, promoting a culture of extreme ownership, accountability, and clear, effective communication. </li><li>Develop Data Integrations: Design and develop robust integrations with third-party systems, emphasizing data extraction, transformation, and loading combined with API-driven approaches. </li><li>Establish Best Practices: Define and enforce best practices for integration design, development, documentation, and open team communication. </li><li>Collaborate with Stakeholders: Work closely with product managers, engineering teams, and other stakeholders, ensuring alignment with business objectives through transparent and proactive communication. </li><li>Oversee Project Delivery: Manage end-to-end delivery of integration projects, ensuring timely completion and accountability at every stage. </li><li>Drive Innovation: Lead initiatives to innovate our integration strategies and technologies, continuously improving our data handling and ETL processes.</li></ul>