<p>We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and infrastructure. This role will support data-driven decision-making by ensuring reliable data flow, transformation, and accessibility across the organization.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain ETL/ELT data pipelines</li><li>Develop and optimize data models and data architectures</li><li>Integrate data from multiple sources (APIs, databases, third-party systems)</li><li>Ensure data quality, integrity, and reliability</li><li>Collaborate with data analysts, data scientists, and business stakeholders</li><li>Monitor and troubleshoot data pipeline performance issues</li><li>Implement best practices for data governance and security</li></ul><p><br></p>
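<p>To make the ETL/ELT responsibilities above concrete, here is a minimal pipeline sketch. The data, table name, and SQLite target are illustrative assumptions for this posting, not the employer's actual stack; in practice the extract step would hit an API or third-party system and the load step would target a warehouse.</p>

```python
import csv
import io
import sqlite3

# Hypothetical raw feed standing in for an API or third-party extract.
RAW_CSV = """order_id,amount,region
1,19.99,US
2,5.50,EU
3,,US
"""

def extract(text):
    """Extract: parse the raw CSV feed into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: enforce a basic data-quality rule and cast types."""
    clean = []
    for row in rows:
        if not row["amount"]:
            continue  # reject incomplete records (data quality/integrity)
        clean.append((int(row["order_id"]), float(row["amount"]), row["region"]))
    return clean

def load(rows, conn):
    """Load: write transformed rows into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 2 (the row with a missing amount was rejected)
```

<p>The same extract/transform/load separation scales up to orchestrated pipelines; only the connectors and the quality rules change.</p>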
We are looking for a skilled Database Administrator to join our team in Johns Creek, Georgia. In this contract-to-permanent role, you will play a critical part in managing and optimizing our database systems to ensure high performance, security, and reliability. This is an excellent opportunity to work with cutting-edge technologies and contribute to a dynamic team within the motor freight forwarding industry.<br><br>Responsibilities:<br>• Monitor and analyze database performance, identifying and resolving bottlenecks to ensure optimal efficiency.<br>• Manage user and application permissions, maintaining secure access controls and implementing encryption standards.<br>• Optimize database structures, including normalization, indexing, and stored procedures, to enhance performance and data integrity.<br>• Collaborate with developers and stakeholders to address technical challenges and support database-related needs.<br>• Develop and execute backup and disaster recovery strategies, ensuring data availability and reliability.<br>• Conduct regular tests of restore processes and ensure backups are completed successfully on schedule.<br>• Manage database replication across multiple servers, mitigating potential impacts of updates on replication.<br>• Implement and maintain data-tier application tools to ensure schema versioning and consistency.<br>• Ensure sensitive data is sanitized in non-production environments to minimize exposure risks.<br>• Stay informed about advancements in database technologies to continuously improve systems and processes.
We are looking for a skilled Database Administrator to join our team in Johns Creek, Georgia. This is a contract-to-permanent position where you will play a pivotal role in managing, optimizing, and securing our database systems. Your expertise will ensure the seamless operation of critical data processes, supporting both performance and data integrity.<br><br>Responsibilities:<br>• Monitor and analyze database performance, identifying and resolving bottlenecks to ensure optimal operation.<br>• Manage user and application permissions, implementing access controls and maintaining encryption across clustered environments.<br>• Optimize database configurations, queries, and stored procedures to improve efficiency and reliability.<br>• Collaborate with developers and stakeholders to address technical challenges and support database-related initiatives.<br>• Establish and maintain comprehensive backup and disaster recovery strategies, including regular testing of restore processes.<br>• Implement database replication across multiple servers to ensure data availability and consistency.<br>• Conduct database normalization and leverage foreign keys, indexes, and other techniques to enhance data integrity.<br>• Provide guidance and training on database best practices and emerging technologies.<br>• Sanitize sensitive data in non-production environments to minimize exposure and ensure compliance with data protection standards.
<p>We are looking for an experienced Product Architect who will be responsible for the operational ownership and continuous improvement of enterprise Pricing and Promotions solutions. This includes contributing to product roadmap planning, driving innovation initiatives, writing and refining user stories, guiding solution design and development, supporting deployment activities, and assisting with ongoing user support and value optimization efforts.</p><p>The Product Architect works within an agile delivery environment alongside internal team members and external partners, collaborating closely with cross-functional product, business, and technology teams across multiple regions.</p><p><br></p><p>Key Responsibilities</p><ul><li>Provide functional and technical expertise related to enterprise Pricing and Promotions platforms to business and technology stakeholders.</li><li>Participate in pricing and rebate initiatives from concept through implementation and value realization, partnering with vendors, implementation teams, and internal stakeholders.</li><li>Execute backlog items as prioritized by leadership, including features, user stories, incidents, change requests, and value optimization initiatives.</li><li>Develop user stories, cost estimates, solution designs, training materials, and system documentation.</li><li>Support development, configuration, testing, deployment, and post-go-live activities.</li><li>Assist with change management, support tickets, automated test case development, and ongoing product enhancements.</li><li>Collaborate with other product and technology teams based on evolving priorities.</li><li>Follow Agile and SDLC practices to ensure consistent and transparent execution across product areas.</li><li>Ensure solutions comply with enterprise architecture standards, security requirements, and modern data/analytics practices.</li><li>Track product performance and support value realization efforts.</li><li>Ensure product-related SLAs and 
operational metrics are achieved.</li><li>Work closely with cross-functional teams to deliver scalable, maintainable Pricing & Promotions solutions.</li></ul>
<p><strong>Position Summary:</strong></p><ul><li>We are looking for a Data Operations Engineer to support and oversee the automated data‑pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks.</li><li>Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.</li><li>The role requires strong proficiency in SQL and Python, practical experience with AWS services, and the ability to communicate effectively with external customers when data issues arise.</li></ul><p><strong>Responsibilities:</strong></p><p><strong>Data Pipeline Monitoring & Operations:</strong></p><ul><li>Monitor automated batch and streaming data pipelines in AWS</li><li>Identify, troubleshoot, and resolve data processing failures</li><li>Investigate file‑level errors, schema mismatches, and transformation issues</li><li>Perform root‑cause analysis and document resolutions</li><li>Ensure data integrity, completeness, and timeliness across environments</li><li>Escalate architectural or systemic issues to the Data Engineering team</li></ul><p><strong>Customer Data Onboarding & Implementation:</strong></p><ul><li>Collaborate directly with customers to understand their file formats and data structures</li><li>Create and maintain mapping templates to align customer data to a normalized data model</li><li>Validate sample files and run tests on ingestion workflows</li><li>Configure ingestion parameters within predefined frameworks</li><li>Support customer go‑live processes and initial data processing cycles</li></ul><p><strong>Data Quality & Continuous Improvement:</strong></p><ul><li>Write SQL queries to validate data accuracy and research 
anomalies</li><li>Develop lightweight Python scripts for validation, transformation checks, or automation tasks</li><li>Improve monitoring processes, internal documentation, and operational playbooks</li><li>Work with engineering teams to strengthen platform reliability and observability</li></ul><p><strong>Customer & Cross‑Functional Collaboration:</strong></p><ul><li>Communicate clearly with customers regarding file issues or data discrepancies</li><li>Partner with internal teams including Data Engineering, Product, and Support</li><li>Provide feedback to enhance scalability, resilience, and overall platform performance</li></ul>
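<p>As a sketch of the data-quality validation work described above, the snippet below runs two common SQL checks (completeness and key uniqueness) from a lightweight Python script. The table, columns, and sample rows are hypothetical; the employer's actual schema and AWS data stores are not specified here.</p>

```python
import sqlite3

# Hypothetical ingested table; schema and names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE shipments (shipment_id INTEGER, customer TEXT, weight_kg REAL);
INSERT INTO shipments VALUES (1, 'acme', 12.5), (2, 'acme', NULL), (2, 'globex', 3.0);
""")

# Completeness check: required fields must not be NULL.
null_weights = conn.execute(
    "SELECT COUNT(*) FROM shipments WHERE weight_kg IS NULL"
).fetchone()[0]

# Uniqueness check: shipment_id is expected to be a business key.
dupes = conn.execute("""
    SELECT shipment_id, COUNT(*) AS n
    FROM shipments
    GROUP BY shipment_id
    HAVING n > 1
""").fetchall()

print(null_weights)  # 1
print(dupes)         # [(2, 2)]
```

<p>In an operational setting, checks like these would run after each ingestion cycle, with failures feeding the root-cause analysis and customer communication loops described above.</p>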
<p>We are partnering with a leading Atlanta-based organization to identify a <strong>Data Enablement & Governance Manager</strong> who will play a critical role in shaping and advancing enterprise-wide data governance practices. This position is ideal for a strategic leader who is passionate about data integrity, compliance, and enabling data-driven decision-making across the business.</p><p>In this role, you will lead the development and execution of a scalable data governance framework, ensuring that data across the organization is accurate, secure, and responsibly managed. You will collaborate with cross-functional stakeholders to establish policies, standards, and accountability models that strengthen data quality and trust.</p><p><strong>What You’ll Do</strong></p><ul><li>Lead the development and execution of a comprehensive data governance strategy aligned with business goals and regulatory requirements</li><li>Define, implement, and enforce enterprise-wide data quality standards across business units</li><li>Oversee governance artifacts including data policies, dictionaries, lineage documentation, and metadata repositories</li><li>Partner with data owners, stewards, and technical teams to establish data classification, access controls, and lifecycle management practices</li><li>Drive implementation and adoption of governance tools and platforms (e.g., Collibra, Informatica, Alation)</li><li>Manage data quality issue resolution through root cause analysis and remediation planning</li><li>Provide guidance to operational teams on data standards and best practices</li><li>Track and communicate governance metrics, risks, and progress to executive leadership</li><li>Stay current on regulatory requirements (HIPAA, GDPR, etc.) and industry best practices</li></ul><p><br></p>