

5 results for RPA Engineer in Charlotte, NC

Full Stack Data Engineer
  • Charlotte, NC
  • onsite
  • Temporary / Contract
  • 60.00 - 72.00 USD / Hourly
  • We are seeking a highly skilled Full Stack Data Engineer who thrives in building modern, scalable data platforms from the ground up. This is an opportunity to work on a cloud-native data stack, influence architecture decisions, and deliver solutions that directly power business insights and operations. If you enjoy owning the full lifecycle, from data ingestion to the application layer, this role will be a strong fit.

    What You'll Do
    You will operate as a hands-on engineer across the full data stack:
      - Design, build, and maintain scalable ELT pipelines and workflows
      - Develop and optimize data models and warehouse structures in Snowflake
      - Build full stack data applications and backend services
      - Write clean, efficient Python and SQL code
      - Develop reusable data frameworks and components
      - Implement automated testing for data quality and reliability
      - Build and maintain CI/CD pipelines (GitHub-based)
      - Create reporting and visualization solutions (Power BI or similar)
      - Monitor production systems and troubleshoot data issues proactively

    Tech Stack
      - Data Platform: Snowflake
      - Languages: Python, SQL
      - Cloud: AWS / Azure / GCP (environment dependent)
      - DevOps: GitHub, CI/CD pipelines
      - Visualization: Power BI (or similar BI tools)
  • 2026-04-22T15:07:49Z
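The "ELT pipelines in Snowflake" responsibilities above follow a common load-then-transform pattern, sketched below. This is an illustrative example only, not part of the posting: sqlite3 stands in for the Snowflake warehouse so the snippet runs anywhere, and the table and column names are invented. With Snowflake you would load via its Python connector or COPY INTO, but the flow is the same.

```python
import sqlite3

def load_raw(conn, rows):
    """Extract + Load: land source rows untransformed in a staging table."""
    conn.execute("CREATE TABLE IF NOT EXISTS raw_orders (id INTEGER, amount TEXT)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?)", rows)

def transform(conn):
    """Transform: build a clean, typed model from the staged data with SQL."""
    conn.execute("DROP TABLE IF EXISTS orders")
    conn.execute("""
        CREATE TABLE orders AS
        SELECT id, CAST(amount AS REAL) AS amount
        FROM raw_orders
        WHERE amount IS NOT NULL
    """)

conn = sqlite3.connect(":memory:")
load_raw(conn, [(1, "19.99"), (2, "5.00"), (3, None)])  # row 3 is dirty
transform(conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
print(round(total, 2))  # 24.99
```

Landing raw data before transforming (ELT rather than ETL) keeps the original records available for reprocessing when transformation logic changes.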
Data Engineer
  • Charlotte, NC
  • onsite
  • Temporary / Contract
  • 55.00 - 65.00 USD / Hourly
  • We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows. This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.

    Key Responsibilities:
      - Design, build, and maintain scalable ELT/ETL data pipelines and workflows
      - Develop and optimize Snowflake-based data warehouse solutions
      - Build and maintain data models and transformation logic to support analytics and reporting
      - Write efficient, high-quality Python and SQL code to support data engineering processes
      - Develop reusable data engineering frameworks and backend data services
      - Implement and maintain CI/CD pipelines using GitHub and related tooling
      - Build automated testing frameworks to ensure data quality and reliability
      - Create reporting and visualization solutions using tools such as Power BI
      - Monitor production data systems and resolve performance or reliability issues
      - Support continuous improvement of data architecture, processes, and standards
  • 2026-04-14T17:18:43Z
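The "automated testing frameworks to ensure data quality" bullet above typically means checks like the sketch below: assertions over incoming rows that flag nulls and out-of-range values before they reach reporting. The check names, sample rows, and thresholds are all illustrative, not from the posting.

```python
def check_not_null(rows, column):
    """Return indices of rows missing a value in `column`."""
    return [i for i, r in enumerate(rows) if r.get(column) is None]

def check_in_range(rows, column, lo, hi):
    """Return indices of rows whose numeric `column` falls outside [lo, hi]."""
    return [i for i, r in enumerate(rows)
            if r.get(column) is not None and not lo <= r[column] <= hi]

rows = [
    {"order_id": 1, "amount": 19.99},
    {"order_id": 2, "amount": None},   # should fail the null check
    {"order_id": 3, "amount": -4.0},   # should fail the range check
]

null_failures = check_not_null(rows, "amount")
range_failures = check_in_range(rows, "amount", 0.0, 10_000.0)
print(null_failures, range_failures)  # [1] [2]
```

In practice checks like these run inside the pipeline (or a framework such as Great Expectations or dbt tests) so a failing batch blocks downstream loads instead of silently corrupting reports.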
LLM Programmer
  • Charlotte, NC
  • onsite
  • Temporary / Contract
  • 75.00 - 90.00 USD / Hourly
  • The LLM Programmer will be responsible for building and optimizing applications that leverage large language models (LLMs) to solve business problems, improve user experiences, and automate complex workflows. You will work closely with engineering, product, and data teams to bring AI-driven features from concept to production.

    Key Responsibilities
      - Design and develop applications using large language models (e.g., GPT-style systems)
      - Build and maintain RAG (Retrieval-Augmented Generation) pipelines to integrate enterprise data with LLMs
      - Develop prompt engineering strategies and reusable AI workflows
      - Fine-tune or adapt models for domain-specific use cases when needed
      - Integrate LLM APIs into production systems and applications
      - Optimize performance, latency, cost, and accuracy of AI solutions
      - Evaluate model outputs for quality, reliability, and safety
      - Collaborate with cross-functional teams to identify and implement AI opportunities
      - Stay current with advancements in generative AI and LLM tooling
  • 2026-04-22T15:13:51Z
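The RAG pipeline mentioned above has two halves: retrieve relevant enterprise documents, then generate an answer grounded in them. The sketch below shows only the retrieval and prompt-assembly half under simplifying assumptions: real pipelines use embeddings and a vector store, while simple keyword overlap stands in here so the example runs without any model. The documents and the `llm_client` name are invented, and the actual LLM call is left as a comment because the provider and client are deployment-specific.

```python
def retrieve(question, documents, k=2):
    """Rank documents by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, context_docs):
    """Assemble a prompt that grounds the model in the retrieved context."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Expense reports are due by the 5th of each month.",
    "VPN access requires an approved security token.",
    "The cafeteria is open from 8am to 3pm.",
]
question = "When are expense reports due?"
prompt = build_prompt(question, retrieve(question, docs))
# response = llm_client.complete(prompt)  # provider-specific call, omitted
print(prompt)
```

The grounding instruction ("using only this context") is a basic prompt-engineering guard against the model answering from unverified internal knowledge.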
Senior Kerberos Engineer
  • Charlotte, NC
  • onsite
  • Temporary / Contract
  • 50.00 - 65.00 USD / Hourly
  • We are proactively building a network of Senior Active Directory / Kerberos Engineers for upcoming consulting and full-time opportunities with enterprise and growing organizations. These roles focus on stabilizing and modernizing identity infrastructure across complex environments, including on-premises Active Directory and cloud-connected systems. The Senior Active Directory Engineer will be responsible for maintaining a secure, reliable, and high-performing directory services environment. This role works closely with infrastructure and security teams to support day-to-day operations while also leading remediation efforts, migrations, and long-term identity initiatives.

    Key Responsibilities
      - Assess and support Active Directory health, including replication, domain controllers, and overall environment stability
      - Troubleshoot and resolve Kerberos authentication issues, including those impacting system access and patching workflows
      - Support and guide Windows Server patching, including security updates and identity-related changes
      - Assist with domain controller migrations and decommissioning, ensuring continuity and minimal disruption
      - Contribute to file server migrations, including data transfer, permission validation, and cutover activities
      - Perform environment cleanup and optimization to support future cloud and identity initiatives
      - Document processes, findings, and recommendations for ongoing support and improvements
  • 2026-04-06T13:53:42Z
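One of the most common root causes behind the Kerberos troubleshooting work described above is clock skew: Kerberos rejects tickets when client and KDC clocks differ by more than the allowed tolerance (5 minutes by default in Active Directory). A rough pre-flight check like the sketch below can rule that out before digging into SPNs or delegation; the function and sample timestamps are illustrative, not a real diagnostic tool.

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)  # AD default; configurable via domain policy

def skew_ok(client_time, kdc_time, max_skew=MAX_SKEW):
    """True if the client/KDC clock difference is within Kerberos tolerance."""
    return abs(client_time - kdc_time) <= max_skew

now = datetime(2026, 4, 6, 12, 0, tzinfo=timezone.utc)
print(skew_ok(now, now + timedelta(minutes=2)))   # True
print(skew_ok(now, now + timedelta(minutes=11)))  # False
```

In a real environment the equivalent check is `w32tm /stripchart` against a domain controller; skew outside the tolerance surfaces as KRB_AP_ERR_SKEW errors.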
Data Engineer
  • Charlotte, NC
  • remote
  • Permanent / Full Time
  • - USD / Yearly
  • Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse)
  • Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake)
  • Implement and manage data orchestration and dependency management using Dagster or similar tools
  • Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability
  • Optimize data models and storage strategies for performance, scalability, and cost efficiency
  • Ensure data quality, observability, and reliability through monitoring, logging, and automated validation
  • Support CI/CD pipelines and infrastructure-as-code practices for data platforms
  • Enforce data security, governance, and compliance best practices within Azure
  • 2026-04-22T15:13:51Z
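The "orchestration and dependency management" responsibility above is essentially DAG scheduling: declare which pipeline steps feed which, then run them in dependency order. The sketch below shows that idea with the standard library's `graphlib` so it runs without Dagster installed; the step names and tiny pipeline are invented. In Dagster each step would be an asset or op with these same dependencies declared in code, plus scheduling, retries, and observability handled by the framework.

```python
from graphlib import TopologicalSorter

# Each step maps to the set of upstream steps it depends on.
pipeline = {
    "raw_events": set(),
    "cleaned_events": {"raw_events"},
    "daily_metrics": {"cleaned_events"},
    "dashboard_extract": {"daily_metrics", "cleaned_events"},
}

# static_order() yields steps so that every dependency runs first.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

Declaring dependencies rather than hand-sequencing steps is what lets an orchestrator rerun only the affected downstream steps when one upstream dataset changes.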