<p><strong>Software Engineer II</strong></p><p><strong>Location:</strong> Remote</p><p><strong>Type:</strong> 26-Week Contract</p><p><strong>Experience Level:</strong> Mid-Level (2–4 years)</p><p><strong>Security Clearance Required:</strong> Public Trust</p><p><br></p><p><strong>About the Role</strong></p><p>We’re seeking a <strong>Software Engineer II</strong> to join our team and contribute to the design, development, testing, and maintenance of software systems and tools. This role spans the full software development lifecycle, applying industry best practices to build and enhance software-intensive products and infrastructure.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Build and enhance software applications and tools, ensuring scalability, performance, and maintainability.</li><li>Conduct unit, integration, and system testing (see the sketch after this posting); troubleshoot and resolve software defects and performance issues.</li><li>Collaborate with stakeholders to gather and analyze software requirements, ensuring alignment with business and technical goals.</li><li>Evaluate and ensure software compatibility with various hardware platforms and configurations.</li><li>Apply software engineering principles and best practices throughout the development lifecycle—from planning and design to deployment and maintenance.</li><li>Work closely with cross-functional teams including project managers, QA engineers, and other developers to deliver high-quality solutions.</li><li>Create and maintain technical documentation for code, processes, and system configurations.</li><li>Identify opportunities for process optimization and contribute to the evolution of development standards and practices.</li></ul>
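<p>For illustration, here is a minimal Python sketch of the unit-testing practice named in the responsibilities above. The helper function and its tests are hypothetical examples, not taken from any codebase referenced in this posting; tests like these would typically be run with a framework such as pytest.</p>
<pre><code># Hypothetical helper and pytest-style unit tests; all names are illustrative only.

def normalize_username(raw: str) -> str:
    """Trim whitespace and lowercase a username before storage."""
    return raw.strip().lower()

def test_strips_and_lowercases():
    assert normalize_username("  Alice ") == "alice"

def test_clean_input_passes_through():
    assert normalize_username("bob") == "bob"
</code></pre>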
<p>We are looking for an experienced DevOps Engineer to join our team in Alpharetta, GA. In this role, you will contribute to the development and optimization of complex CI/CD pipelines, ensuring efficient and secure deployment workflows across a large fleet of environments and servers. This is a long-term contract position, offering the opportunity to work on innovative projects in a collaborative and dynamic environment.</p><p><br></p><p><strong>Location:</strong> Alpharetta, GA (Remote candidates considered)</p><p><strong>Duration:</strong> 1 year (Potential for extension)</p><p><strong>Pay:</strong> $56/hour with benefits (Health, Vision, Dental, 401(k))</p><p><br></p><p><strong>Position Overview</strong></p><p>We are seeking a <strong>DevOps Engineer</strong> to manage and evolve a complex CI/CD pipeline across <strong>Octopus Deploy</strong>, <strong>GitHub</strong>, and <strong>AWS</strong>, automating deployment workflows across <strong>50+ environments</strong> and <strong>400+ servers</strong>.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Customize and stabilize CI/CD pipelines using Octopus Deploy and GitHub Actions with a focus on security</li><li>Develop and maintain deployment scripts in PowerShell</li><li>Troubleshoot configurations involving Terraform and Ansible</li><li>Collaborate with development and SRE teams to reduce manual deployment steps</li><li>Manage secrets using AWS KMS (see the sketch after this posting)</li><li>Create reusable scripts and templates for automation</li></ul>
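<p>As a hedged sketch of the secrets-management responsibility above: the posting names PowerShell for deployment scripts, but a Python/boto3 version is shown here for consistency with the other examples in this document. The key alias and region are placeholders, not values from the posting.</p>
<pre><code># Sketch: encrypting and decrypting a deployment secret with AWS KMS via boto3.
# The key alias "alias/deploy-secrets" and the region are illustrative placeholders.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

def encrypt_secret(plaintext: str, key_id: str = "alias/deploy-secrets") -> bytes:
    """Encrypt a secret under a KMS key for storage in pipeline configuration."""
    response = kms.encrypt(KeyId=key_id, Plaintext=plaintext.encode("utf-8"))
    return response["CiphertextBlob"]

def decrypt_secret(ciphertext: bytes) -> str:
    """Decrypt a KMS-protected secret at deploy time."""
    response = kms.decrypt(CiphertextBlob=ciphertext)
    return response["Plaintext"].decode("utf-8")
</code></pre>
<p>In a pattern like this, a pipeline step would call decrypt_secret only at the moment a deployment needs the value, so plaintext secrets never land in source control or build logs.</p>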
<p>We are looking for a skilled GenAI Application Developer to join our 100% remote team. In this role, you will build and maintain GenAI-driven contact center applications using industry-standard frameworks. This is a long-term contract position that offers the opportunity to work on moderately complex projects and collaborate with technical teams to deliver impactful solutions.</p><p><br></p><p><strong>Amazon Connect application development and configuration:</strong></p><ul><li>Build, update, and maintain Amazon Connect contact flows, routing profiles, and queues to support both voice and chat channels.</li><li>Integrate Lex V2 bots into Amazon Connect flows for call deflection, self-service transactions, and escalation.</li><li>Enable and configure Contact Lens for real-time and post-contact analytics, transcription, summarization, sentiment analysis, and redaction.</li><li>Configure Amazon Q in Connect domains, knowledge sources, guided workflows, and step-by-step agent assist experiences.</li><li>Implement S3-based call and chat transcript storage with encryption, lifecycle policies, and retention compliance.</li></ul><p><strong>AI orchestration and backend services:</strong></p><ul><li>Write AWS Lambda (Python) functions to orchestrate Bedrock LLM calls, embeddings workflows, and model invocation logging (a minimal sketch appears after this posting).</li><li>Implement OpenSearch indexing, vector/keyword queries, and knowledge synchronization triggers.</li><li>Build retrieval-augmented generation (RAG) pipelines to enhance Amazon Connect agent assist and self-service knowledge.</li><li>Apply structured logging, unit/integration tests, error handling, and performance/cost safeguards.</li></ul><p><strong>Data safety and compliance:</strong></p><ul><li>Implement AI guardrails, prompt templates, and output evaluation for safety and accuracy.</li><li>Enforce PII minimization and redaction policies in Amazon Connect conversation logs.</li><li>Participate in threat modeling and support remediation of findings for contact center integrations.</li></ul><p><strong>Web and chat integration:</strong></p><ul><li>Support CloudFront + WAF configurations for secure web chat entry points.</li><li>Build APIs and event hooks to pass conversation context between web chat, Amazon Connect, and AI services.</li></ul><p><strong>DevSecOps and operations:</strong></p><ul><li>Contribute to Git-based CI/CD pipelines, code reviews, and documentation.</li><li>Maintain runbooks, architecture diagrams, and SOPs for contact flows, bot integrations, and AI workflows.</li><li>Create and monitor CloudWatch dashboards/alarms for call deflection rate, average handle time (AHT), contact resolution, and AI usage metrics.</li></ul><p><br></p>
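<p>A minimal sketch of the Lambda-to-Bedrock orchestration referenced above, assuming the boto3 bedrock-runtime client. The model ID is a placeholder chosen for illustration; the actual role would layer retrieval (RAG), guardrails, and structured logging around this core.</p>
<pre><code># Sketch: AWS Lambda (Python) handler invoking a Bedrock model via boto3.
# The model ID below is a placeholder; choose per workload and region.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    prompt = event.get("prompt", "")
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    })
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=body,
        contentType="application/json",
        accept="application/json",
    )
    # The response body is a streaming payload; read and parse the model output.
    result = json.loads(response["body"].read())
    return {"statusCode": 200, "body": json.dumps(result)}
</code></pre>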
<p>We are looking for a highly skilled Data Engineer to join our team on a contract basis in Atlanta, Georgia. This role focuses on optimizing data processes and infrastructure, ensuring efficient data management and performance. The ideal candidate will possess expertise in modern data engineering tools and technologies.</p><p><br></p><p><strong>Responsibilities:</strong></p><ul><li>Optimize data indexing and address fragmentation issues to enhance system performance.</li><li>Develop and maintain data pipelines using ETL processes to ensure accurate data transformation and integration.</li><li>Utilize Apache Spark for scalable data processing and analytics (see the sketch after this posting).</li><li>Implement and manage big data solutions with Apache Hadoop.</li><li>Design and deploy real-time data streaming frameworks using Apache Kafka.</li><li>Collaborate with cross-functional teams to identify and resolve data-related challenges.</li><li>Monitor and improve system performance by analyzing data usage and storage trends.</li><li>Write efficient code in Python to support data engineering tasks.</li><li>Document processes and maintain clear records of data workflows and optimizations.</li><li>Ensure data security and compliance with organizational standards.</li></ul>
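<p>For illustration, a short PySpark sketch of the ETL work described above. The S3 paths and column names are hypothetical placeholders, not values from this posting.</p>
<pre><code># Sketch: extract raw events, transform to daily per-user counts, load as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

# Extract: read raw events from a hypothetical landing zone.
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: drop rows without a user and aggregate events per user per day.
daily = (
    raw.filter(F.col("user_id").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("user_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned Parquet for downstream analytics.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_events/"
)
</code></pre>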