A top-tier client of ours is seeking a Software Developer / Data Engineer to play a key role in supporting mission-critical data systems within a government intelligence environment. You’ll design and deliver high-performance data pipelines and architectures that drive advanced analytics and real-time insights.<br><br>Key Responsibilities:<br>• Design and implement scalable data pipelines and data architectures<br>• Develop and optimize data storage solutions (SQL, NoSQL, graph databases)<br>• Support ETL processes and ensure efficient data throughput and performance<br>• Work closely with stakeholders to translate data requirements into technical solutions<br>• Maintain and enhance data infrastructure using tools like Apache Airflow and Docker
We are looking for an experienced Principal Software Engineer to design, develop, and optimize large-scale systems while ensuring high availability and performance. This role requires expertise in cloud-based platforms and distributed architectures, along with a commitment to secure coding practices and innovative problem-solving. Based in Bowie, Maryland, this position offers an exciting opportunity to contribute to cutting-edge software solutions.<br><br>Responsibilities:<br>• Develop and maintain large-scale, always-on data systems using Kotlin/Java, C#, and JavaScript.<br>• Design and implement distributed systems and high-availability architectures on cloud-based platforms.<br>• Utilize Infrastructure as Code to manage both managed and unmanaged services effectively.<br>• Optimize performance, conduct profiling, and execute tuning for complex systems to ensure efficiency.<br>• Build and maintain large data warehouse systems such as Snowflake or BigQuery.<br>• Implement DevOps practices, including the development and management of CI/CD pipelines.<br>• Ensure adherence to security best practices and secure coding standards across projects.<br>• Engineer software solutions capable of processing and managing extensive volumes of data.<br>• Collaborate with cross-functional teams to understand and adapt to new problem spaces.<br>• Communicate technical concepts effectively to diverse audiences, both in writing and verbally.
<p>We are seeking a highly skilled Full Stack Data Engineer who thrives in building modern, scalable data platforms from the ground up. This is an opportunity to work on a cloud-native data stack, influence architecture decisions, and deliver solutions that directly power business insights and operations.</p><p>If you enjoy owning the full lifecycle—from data ingestion to application layer—this role will be a strong fit.</p><p><br></p><p><strong>What You’ll Do</strong></p><p>You will operate as a hands-on engineer across the full data stack:</p><ul><li>Design, build, and maintain scalable ELT pipelines and workflows</li><li>Develop and optimize data models and warehouse structures in Snowflake</li><li>Build full stack data applications and backend services</li><li>Write clean, efficient Python and SQL code</li><li>Develop reusable data frameworks and components</li><li>Implement automated testing for data quality and reliability</li><li>Build and maintain CI/CD pipelines (GitHub-based)</li><li>Create reporting and visualization solutions (Power BI or similar)</li><li>Monitor production systems and troubleshoot data issues proactively</li></ul><p><strong>Tech Stack</strong></p><ul><li>Data Platform: Snowflake</li><li>Languages: Python, SQL</li><li>Cloud: AWS / Azure / GCP (environment dependent)</li><li>DevOps: GitHub, CI/CD pipelines</li><li>Visualization: Power BI (or similar BI tools)</li></ul>
<p><strong>Software Engineer (Databricks/Data Platform)</strong></p><p><strong>Hybrid 3-4 days onsite in Alpharetta, GA</strong></p><p><strong>Duration through 10/30/26</strong></p><p><br></p><p>We are looking for an experienced Software Engineer III to join our team in Alpharetta, GA. In this role, you will play a critical part in supporting and developing a Databricks-based data platform, focusing on creating scalable and efficient solutions during the development phase. This is a long-term contract position, requiring in-office work three to four days per week.</p><p><br></p><p>Responsibilities:</p><ul><li>Develop and support Databricks notebooks, jobs, and workflows</li><li>Write, optimize, and maintain PySpark and Python code for data processing</li><li>Help design scalable, reliable, and efficient data pipelines</li><li>Apply Spark best practices (partitioning, caching, joins, file sizing)</li><li>Work with Delta Lake tables and data models</li><li>Perform data validation and quality checks during development</li><li>Support cluster configuration and sizing for development workloads</li><li>Identify performance bottlenecks early and recommend improvements</li><li>Collaborate with Data Engineers to ensure solutions are production-ready</li><li>Document development standards, patterns, and best practices</li></ul>
<p><strong>AWS Infrastructure Engineer </strong></p><p><strong>13 Week Contract to Hire </strong></p><p><strong>Onsite Hybrid: </strong>Columbus, OH or Dallas, TX or Minneapolis, MN </p><p><strong>Pay: </strong>Available on W2</p><p><strong>Job Summary</strong></p><p>We are seeking an experienced <strong>Platform Engineer</strong> to join a growing Platform Engineering team responsible for supporting and evolving a modern <strong>Data Science platform</strong>. This role focuses on building, managing, and securing cloud-based infrastructure that enables Data Science and AI/ML teams to operate efficiently at scale. The ideal candidate brings strong AWS expertise, hands-on infrastructure automation experience, and the ability to collaborate across technical and business teams.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Support and maintain ongoing <strong>Data Science infrastructure operations</strong></li><li>Design, build, and deploy <strong>AWS environments</strong> using automated <strong>CI/CD pipelines</strong></li><li>Manage and scale large, secure cloud environments to support current and future Data Science initiatives</li><li>Implement, own, and improve the <strong>image management lifecycle process</strong></li><li>Assist with the setup and ongoing management of <strong>AWS accounts</strong> dedicated to the Data Science platform</li><li>Develop and maintain infrastructure pipelines using <strong>CI/CD tools</strong> (e.g., Azure DevOps)</li><li>Build and manage environments using <strong>Infrastructure as Code (IaC)</strong> tools such as <strong>Terraform</strong></li><li>Develop scripts and applications using programming languages such as <strong>Python</strong></li><li>Manage and support database technologies including <strong>Athena, Oracle, MySQL, and PostgreSQL</strong></li><li>Leverage AWS services to enable <strong>Data Lake, Data Science, and AI/ML workloads</strong></li><li>Respond to requests from development and business 
users, removing technical roadblocks</li><li>Manage secured infrastructure environments, applying security controls and guardrails</li><li>Identify, remediate, and track infrastructure vulnerabilities within defined SLAs</li><li>Maintain audit logs and support compliance-related needs</li><li>Perform system upgrades, patching, and provide <strong>on-call support</strong> as required</li><li>Conduct root cause analysis and knowledge transfer sessions with internal teams</li><li>Collaborate closely with <strong>Network, Database, Infrastructure, and Architecture teams</strong> to align on platform strategy and delivery</li></ul><p><br></p>
We are looking for a skilled Sr. Software Engineer to join a dynamic team within the real estate and property industry. In this contract-to-permanent position, you will play a key role in building and maintaining custom web applications that drive operational efficiency across the organization. This role is based in Chicago, Illinois, and offers a hybrid work environment with three days onsite per week.<br><br>Responsibilities:<br>• Design, develop, test, and deploy full stack web applications using React and .NET technologies.<br>• Own the architecture, scalability, and maintainability of internal applications to ensure long-term performance.<br>• Build and integrate APIs, connecting front-end, back-end, and database layers seamlessly.<br>• Troubleshoot and enhance existing applications to improve functionality and user experience.<br>• Partner with data engineering and analytics teams to align applications with the organization's data platform.<br>• Write clean, secure, and well-documented code that adheres to industry best practices.<br>• Conduct code reviews and participate in deployment processes to maintain high-quality standards.<br>• Provide production support and resolve technical issues in a timely manner.<br>• Contribute to data-related tasks such as SQL queries, basic data modeling, and collaborating on analytics projects.
We are looking for a skilled Data Analytics Engineer with deep expertise in Power BI to join our team in Ankeny, Iowa. In this role, you will design, optimize, and manage semantic data models while ensuring the seamless performance of business intelligence tools. Your contributions will help drive data-driven decision-making across the organization.<br><br>Responsibilities:<br>• Design and implement semantic data models, including dimensional modeling and star schemas, to support business intelligence needs.<br>• Develop and optimize Power BI reports and dashboards, ensuring high performance and efficient query execution.<br>• Utilize Power Query (M) to transform and manipulate data for reporting purposes.<br>• Configure and enforce row-level security within Power BI to safeguard sensitive data.<br>• Conduct performance tuning for Power BI, including query plan optimization and refresh strategies.<br>• Collaborate with stakeholders to understand analytical requirements and translate them into actionable insights.<br>• Leverage tools such as Tabular Editor and deployment pipelines (Azure DevOps, GitHub) to streamline BI asset management.<br>• Work with cloud-based data platforms, including Databricks, Snowflake, or BigQuery, to support lakehouse architectures.<br>• Maintain adherence to enterprise BI governance practices and ensure scalable solutions for large datasets.<br>• Implement CI/CD patterns to manage semantic models and facilitate environment promotions.
We are looking for a skilled Palantir AI Engineer to join our team in New York, New York. In this role, you will leverage your expertise in Palantir Foundry to design and implement advanced data solutions and decision-support systems. This position requires a hands-on approach to building scalable AI-powered applications that address complex business challenges across various industries.<br><br>Responsibilities:<br>• Design and maintain data integration pipelines using Palantir Foundry tools such as Code Repositories, Transform, and Contour.<br>• Develop and refine ontologies, object models, and workflows to support operational decision-making.<br>• Build full-stack applications utilizing Palantir Foundry’s frameworks, including Slate and Quiver.<br>• Automate processes for data ingestion, transformation, governance, and lineage tracking.<br>• Integrate machine learning models and large language models into Foundry pipelines to enhance decision systems.<br>• Create intelligent workflows that facilitate real-time or near-real-time decision-making.<br>• Collaborate with enterprise users and technical teams to define requirements and develop scalable architectures.<br>• Translate high-level business challenges into actionable engineering solutions.<br>• Lead engineering initiatives through all phases, from design to deployment and iteration.<br>• Provide technical leadership and guidance across data and AI projects.
We are looking for an experienced DevOps Engineer to join our team on a long-term contract basis in Mequon, Wisconsin. This role is focused on enhancing analytics governance by identifying and resolving inconsistencies in business intelligence tools, streamlining BI logic, and integrating governance workflows. You will collaborate with cross-functional teams to ensure high-quality and consistent reporting standards across the enterprise.<br><br>Responsibilities:<br>• Create and maintain Python-based scripts to extract and analyze metric definitions from various BI tools, including Power BI, Tableau, and Domo.<br>• Standardize BI logic to identify and address duplication and inconsistencies across analytics platforms.<br>• Manage and organize results by storing custom metadata, tags, and issue records within governance platforms such as Atlan.<br>• Configure and integrate steward workflows, saved views, and custom attributes into governance systems.<br>• Collaborate with reporting and BI teams to establish and enforce metric naming conventions, certification criteria, and deprecation policies.<br>• Align semantic layers across BI and analytics tools to ensure consistency in reporting.<br>• Develop and execute CI/CD checks and validation processes for new metrics and analytics data.<br>• Ensure adherence to security and governance policies related to analytics and reporting systems.<br>• Facilitate steward reviews for metric certification and deprecation workflows.<br>• Provide technical support and enablement for data governance analysts and stewards.
<p><strong>Location:</strong> Hybrid — <em>2 days per month on-site in New Hampshire</em></p><p><strong>Employment Type:</strong> Full-Time</p><p><strong>About the Role</strong></p><p>We’re seeking a talented <strong>Software Engineer</strong> with deep experience in <strong>Oracle APEX</strong> and <strong>PL/SQL</strong>. You should also have a strong background integrating third-party applications like <strong>Salesforce</strong>. This role is ideal for someone who enjoys collaborating with cross-functional teams, designing scalable solutions, and enhancing business systems through thoughtful engineering and integrations.</p><p><br></p><p>As part of our team, you’ll play a key role in building and maintaining applications that drive critical business workflows. You’ll leverage your Oracle APEX expertise to architect solutions and your integration experience to ensure smooth data flows between platforms.</p><p>This is a <strong>hybrid position</strong>, requiring <strong>two days per month on-site in New Hampshire</strong> for team collaboration, planning, or project workshops.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, and maintain applications using <strong>Oracle Application Express (APEX)</strong>.</li><li>Build, optimize, and troubleshoot <strong>integrations with third-party systems</strong>, including Salesforce and other enterprise platforms.</li><li>Develop APIs, data pipelines, and middleware solutions to support seamless cross-system communication.</li><li>Collaborate with business stakeholders to gather requirements and translate them into technical specifications.</li><li>Ensure application performance, security, and reliability through best practices.</li><li>Participate in code reviews, testing, deployment, and documentation of software solutions.</li><li>Support ongoing enhancements, bug fixes, and system improvements.</li></ul><p><strong>Required Qualifications</strong></p><ul><li><strong>Hands-on 
experience with Oracle APEX</strong> development.</li><li>Proven experience designing and implementing <strong>Salesforce integrations</strong> (REST/SOAP APIs, middleware tools, or direct platform integration).</li><li>Strong proficiency with <strong>SQL, PL/SQL</strong>, and Oracle database structures.</li><li>Experience working with APIs, integration frameworks, and data transformation workflows.</li><li>Solid understanding of software development best practices, including version control, testing, and documentation.</li><li>Excellent analytical, troubleshooting, and communication skills.</li><li>Ability to work in a hybrid environment and be on-site in New Hampshire <strong>twice per month</strong>.</li></ul><p><strong>Preferred Qualifications</strong></p><ul><li>Experience with additional integration platforms (e.g., MuleSoft, Boomi, Workato).</li><li>Background working in enterprise environments or supporting mission-critical systems.</li><li>Familiarity with Agile methodologies.</li><li>Knowledge of secure coding practices and data governance.</li></ul>
<p>Robert Half has a brand new opening with a reputable client in the East Tampa/Seffner area for a Senior Software Engineer.</p><p>They're keying in on candidates with strong experience in .NET, JavaScript, and Vue (or React/Angular with willingness to learn Vue).</p><p>This is a full-time on-site position. Compensation ranges from $110-120K depending on experience.</p><p>Interviews are actively being scheduled - Apply NOW!</p><p><br></p><p>The role sits in the supply chain area, managing the flow of goods from factories to distribution centers and handling inventory replenishment. Forecasting is already in place, and this person will be putting its output to use. The client has heavy AI initiatives underway.</p><p>They are moving their warehouse management system from a legacy platform into a real-time, event-driven ecosystem, so you will be building NEW systems.</p><p><br></p><p><strong>Responsibilities:</strong></p><p><strong>Architect for Events:</strong></p><ul><li>Design and implement decoupled, event-driven microservices using Azure Services (Event Hubs, Service Bus or similar) to handle high-volume inventory transactions in real-time</li></ul><p><strong>Modernize the Stack:</strong></p><ul><li>Build robust .NET / C# backend services that wrap and extend our core legacy logic, enabling us to move faster without breaking the business</li></ul><p><strong>Dual-Front End Development: </strong></p><ul><li>Build high-performance, mobile-first interfaces for RF Scan Guns using Vue.js</li><li>Develop rich, interactive Admin Dashboards using Blazor</li></ul><p><strong>Solve Complexity:</strong></p><ul><li>Troubleshoot and solve race conditions, concurrency issues, and data synchronization challenges inherent in a busy warehouse environment</li></ul><p><strong>CI/CD & DevOps:</strong> </p><ul><li>Own your code from commit to deployment. We utilize GitHub Actions and Azure resources, and we expect engineers to be comfortable managing their own pipelines</li></ul>
Remote Software Engineer (Mid–Senior)<br>This role is for an experienced software engineer supporting a large, consumer-facing digital product. The engineer will work closely with a cross‑functional delivery team and may take on informal technical leadership responsibilities such as code guidance and peer mentoring, depending on experience.<br>This is a fully remote opportunity.<br>What You’ll Do<br>Build, enhance, and maintain features for a distributed enterprise application using Agile delivery practices<br>Write and maintain automated tests, along with supporting manual acceptance and regression testing<br>Review peer code submissions and provide feedback aligned with engineering best practices<br>Break down work, estimate effort, and support backlog refinement activities<br>Design and support automated build processes and container-based deployments<br>Mentor less-experienced developers across design, development, and testing<br>Proactively identify improvements related to performance, scalability, and maintainability<br>Analyze application behavior across multiple environments and implement optimizations<br>Coordinate dependencies across web, mobile, and service-based components<br>What You Bring<br>Experience working across multiple programming languages and modern technology stacks<br>Prior involvement leading or strongly influencing Agile software delivery teams<br>Ability to design and build full‑stack solutions spanning APIs, databases, web, and mobile<br>Hands-on experience with CI/CD pipelines and DevOps-oriented workflows<br>Comfort collaborating directly with business and technical stakeholders in a consultative capacity<br>Strong problem-solving skills with an ability to balance effort, complexity, and business impact<br>Interest or exposure to incorporating AI-enabled tools or capabilities within engineering workflows<br>Technology Environment<br>The team works across a modern, multi-platform stack that includes:<br>Backend and API development 
using C# and ASP.NET (MVC and Web API patterns)<br>Frontend web applications built with JavaScript frameworks, including Angular<br>Relational data storage using Microsoft SQL Server<br>Native mobile development across iOS (Swift) and Android (Kotlin)<br>Containerization and deployment using Docker and CI/CD tooling such as GoCD<br>Version control and collaboration using Git-based workflows<br>RESTful service integration and HTTP-based communications<br>Support for browser-based and mobile payment functionality, including digital wallet integrations
<p>Onsite in MA 3x per week, non-negotiable</p><p><br></p><p>We are looking for a Systems Engineer to join our team in Massachusetts and provide senior-level expertise across Microsoft 365 environments. This position is suited for someone who excels at solving complex technical challenges, guiding enterprise integration efforts, and partnering with cross-functional teams to deliver secure, well-structured outcomes. The role offers the opportunity to influence architecture decisions while remaining deeply involved in hands-on engineering work.</p><p><br></p><p>Responsibilities:</p><p>• Direct Microsoft 365 integration efforts related to organizational consolidation, including planning, coordination, and execution across multiple environments.</p><p>• Develop identity integration approaches using Azure AD and Entra ID, covering synchronization, domain alignment, and access control design.</p><p>• Oversee the migration and integration of Exchange Online, including mailbox transitions, mail routing, retention configuration, and discovery requirements.</p><p>• Manage SharePoint Online and OneDrive integration activities, including content structure, permissions alignment, and data movement planning.</p><p>• Build and use PowerShell-based scripts and automation processes to improve consistency, accuracy, and repeatability throughout migration work.</p><p>• Collaborate with security, compliance, and legal stakeholders to align access policies, governance standards, retention practices, and eDiscovery needs.</p><p>• Provide technical direction during cutover windows and deliver support to stabilize services following implementation.</p><p>• Act as the primary technical advisor for Microsoft 365 architecture, integration strategy, and risk reduction best practices.</p>
<p>A rapidly growing software team is looking for a <strong>Backend Developer</strong> to help expand and scale a complex operational platform used by organizations with large numbers of distributed locations. This role focuses on the core systems that power the application, including application logic, data architecture, integrations, and performance optimization.</p><p><br></p><p>You will work closely with a small, collaborative engineering team to enhance an established platform while helping design new capabilities as the product continues to expand into new industries and use cases. This is an opportunity to have meaningful input on architecture and help shape how the system evolves over time.</p><p><br></p><p><strong>What You’ll Work On</strong></p><p>The platform supports operational workflows such as asset tracking, service and maintenance management, inventory monitoring, and automated vendor ordering. As the platform grows and new organizations adopt it, the engineering team continuously builds new features, expands integrations, and improves system scalability.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design and build backend functionality using Ruby on Rails</li><li>Develop and maintain application logic within the model, controller, and database layers</li><li>Create and maintain RESTful APIs used by internal and external systems</li><li>Optimize database queries and data structures for performance and reliability</li><li>Implement integrations with third-party systems and vendor platforms</li><li>Support the scalability and reliability of a large operational application</li><li>Collaborate with engineers to refine architecture and improve system design</li><li>Participate in code reviews and contribute to engineering standards</li><li>Troubleshoot and resolve complex backend and data-related issues</li></ul><p><br></p>
We are looking for a highly experienced Senior Machine Learning Engineer to join our team in Boston, Massachusetts. In this role, you will design, develop, and deploy cutting-edge machine learning systems that solve complex problems and scale effectively in production environments. This position offers an exciting opportunity to contribute to impactful projects, leveraging your expertise in machine learning, cloud infrastructure, and data engineering.<br><br>Responsibilities:<br>• Build and deploy machine learning models and solutions for production environments, ensuring they meet scalability and performance standards.<br>• Design and implement comprehensive ML pipelines, including data ingestion, feature engineering, model training, evaluation, and serving.<br>• Write clean, efficient code in Python and leverage its ML ecosystem, such as TensorFlow, PyTorch, and scikit-learn.<br>• Work with large datasets to extract meaningful insights and develop complex queries using modern data processing tools.<br>• Utilize containerization technologies like Docker and cloud platforms such as AWS to ensure robust and scalable deployment.<br>• Apply MLOps best practices, including CI/CD pipelines, automated testing, and performance monitoring, to maintain reliable machine learning systems.<br>• Conduct research and apply deep machine learning and AI techniques, including statistical modeling and large language models.<br>• Solve complex analytical problems with pragmatic engineering approaches while maintaining scientific rigor.<br>• Collaborate with cross-functional teams to align machine learning solutions with business goals and mission-driven objectives.<br>• Monitor and address issues like data drift and model performance to ensure continuous improvement and reliability.
<p><strong><u>Essential Duties and Responsibilities:</u></strong></p><ul><li>Design and deploy F5 BIG-IP solutions, including LTM (Local Traffic Manager), DNS, and APM (Access Policy Manager).</li><li>Design and deploy Security Assertion Markup Language (SAML) and OpenID Connect (OIDC) authentication methodologies.</li><li>Configure and manage advanced F5 iRules and policies to support business-critical applications.</li><li>Optimize application performance by implementing load balancing, SSL offloading, and traffic routing solutions.</li><li>Troubleshoot and resolve issues related to F5 devices, ensuring high availability and performance.</li><li>Collaborate with cross-functional teams to integrate F5 solutions into existing network infrastructure.</li><li>Monitor F5 devices and applications using analytics tools to detect and mitigate potential risks.</li><li>Implement F5 WAF (Web Application Firewall) configurations to protect against web-based threats.</li><li>Automate routine F5 tasks using APIs, Ansible, or other automation frameworks.</li><li>Maintain and update F5OS, system documentation, policies, and procedures.</li><li>Stay updated on the latest F5 technologies and industry best practices.</li></ul><p><br></p>
<p>We are seeking a skilled AI Engineer to join our dynamic technology team. The ideal candidate has hands-on experience integrating advanced AI and large language model (LLM) features into applications, as well as a strong background in designing and delivering AI-driven solutions. In this role, you will work closely with product, engineering, and data teams to build and enhance innovative products using the latest AI frameworks and tools.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><p><br></p><ul><li>Design, develop, and integrate AI and LLM features into new or existing applications, ensuring scalable and reliable deployment.</li><li>Collaborate with cross-functional teams to define technical requirements and deliver AI-driven functionalities in production environments.</li><li>Utilize AI frameworks, APIs, and platforms such as OpenAI, LangChain, vector databases, and machine learning libraries to accelerate solution development.</li><li>Lead prompt engineering, fine-tuning, and model optimization initiatives to improve performance and user outcomes.</li><li>Evaluate and select the most appropriate AI/ML models, tools, and platforms for project needs.</li><li>Conduct documentation, code reviews, testing, and performance monitoring of AI-driven products.</li><li>Stay up to date with advancements in artificial intelligence, generative models, and industry best practices.</li></ul><p><br></p>
<p>Robert Half is seeking a Senior Full-Stack Developer to lead the evolution of modern web applications and cloud-native systems for our client in Wisconsin. This role will take AI-assisted prototypes and transform them into secure, scalable, production-grade solutions. You will own application architecture across the stack and help guide the transition from legacy platforms to modern web technologies.</p><p><br></p><p>This is a hands-on technical leadership role for someone who can move quickly, think strategically, and balance speed with long-term maintainability.</p><p><br></p><p><strong>Key Responsibilities</strong></p><p><br></p><p>Application Development</p><ul><li>Own and productionize modern React and TypeScript applications, refactoring prototype or AI-generated code into secure, maintainable systems.</li><li>Design and build API-first, cloud-native full-stack solutions using JavaScript/TypeScript, Node.js, and relational or NoSQL databases.</li><li>Implement complex business logic such as pricing rules, workflow automation, and multi-level authorization.</li><li>Support and modernize legacy components as needed.</li></ul><p>Cloud & Architecture</p><ul><li>Design and maintain scalable cloud-native architectures.</li><li>Develop serverless backend services and optimize database performance.</li><li>Implement secure document storage and file management solutions.</li><li>Enforce security best practices, identity management, and access controls.</li></ul><p>API & Database Development</p><ul><li>Design and optimize relational and NoSQL database schemas.</li><li>Build RESTful APIs with structured service layers and role-based access.</li><li>Implement event-driven and real-time data patterns.</li><li>Manage third-party integrations, data migrations, and ETL processes.</li></ul><p>DevOps & Deployment</p><ul><li>Establish CI/CD pipelines for automated testing and deployment.</li><li>Configure hosting environments and manage environment variables and 
secrets.</li><li>Implement monitoring, logging, and performance optimization.</li><li>Ensure reliability, scalability, and cost efficiency.</li></ul><p>Collaboration & Leadership</p><ul><li>Mentor team members transitioning from legacy or low-code platforms to modern web stacks.</li><li>Conduct code reviews and establish development best practices.</li><li>Translate business requirements into technical solutions.</li><li>Document architecture decisions and development standards.</li><li>Bridge rapid prototyping efforts with production-ready deployment.</li></ul><p><br></p>
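The multi-level authorization mentioned above is typically enforced in a service layer. A minimal role-based access check might look like the following sketch (the role names and hierarchy are invented for illustration, not this client's actual scheme):

```python
from functools import wraps

# Hypothetical role hierarchy: higher rank implies more privilege.
ROLE_RANK = {"viewer": 1, "editor": 2, "admin": 3}

def requires_role(minimum):
    """Reject calls from users whose role ranks below `minimum`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if ROLE_RANK.get(user["role"], 0) < ROLE_RANK[minimum]:
                raise PermissionError(f"{user['name']} lacks role {minimum!r}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("editor")
def update_pricing_rule(user, rule_id, new_price):
    # Stand-in for the pricing-rule business logic mentioned above.
    return f"rule {rule_id} set to {new_price}"

result = update_pricing_rule({"name": "ana", "role": "admin"}, 7, 19.99)
```

In a TypeScript/Node.js stack the same check usually lives in middleware in front of the REST routes.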
We are looking for an experienced Senior AWS Platform Engineer to join a contract position supporting a growing data science platform in Dublin, Ohio. This role focuses on building, maintaining, and improving cloud infrastructure that enables analytics, AI/ML, and data-driven product teams to work efficiently at scale. The ideal candidate will bring strong experience in platform engineering, automation, and secure environment management across AWS-based ecosystems.<br><br>Responsibilities:<br>• Maintain and enhance cloud infrastructure that supports data science, analytics, and machine learning workloads across the platform.<br>• Build and release new environments through automated delivery pipelines, enabling scalable and repeatable deployments for technical teams.<br>• Administer large, multi-environment AWS landscapes and prepare the platform to support expanding business and engineering needs.<br>• Establish and oversee image lifecycle practices to improve consistency, governance, and operational stability across hosted environments.<br>• Configure and manage AWS accounts dedicated to the data science ecosystem while applying appropriate access controls and platform standards.<br>• Use tools such as Azure DevOps and Terraform to automate provisioning, deployment, and ongoing infrastructure management.<br>• Develop scripts and lightweight applications in Python to streamline platform tasks, integration needs, and operational support.<br>• Support database and data access technologies including Athena, Oracle, MySQL, and PostgreSQL within cloud-based solutions.<br>• Partner with network, database, infrastructure, and architecture teams to resolve issues, strengthen security controls, and support upgrades, patching, root cause analysis, and on-call needs.
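The image lifecycle practices described above usually reduce to a retention policy: keep the newest N images per family and deprecate the rest. A toy sketch of that selection logic in Python follows (the record fields and IDs are assumptions, not a specific AWS API shape):

```python
from collections import defaultdict

def select_for_deprecation(images, keep=2):
    """Group image records by family, keep the `keep` newest in each
    family (by ISO-8601 creation date), and return the IDs that
    should be deprecated, sorted for stable output."""
    by_family = defaultdict(list)
    for img in images:
        by_family[img["family"]].append(img)
    deprecate = []
    for family, imgs in by_family.items():
        # ISO dates sort correctly as strings.
        imgs.sort(key=lambda i: i["created"], reverse=True)
        deprecate.extend(i["id"] for i in imgs[keep:])
    return sorted(deprecate)

images = [
    {"id": "ami-001", "family": "base", "created": "2024-01-01"},
    {"id": "ami-002", "family": "base", "created": "2024-03-01"},
    {"id": "ami-003", "family": "base", "created": "2024-05-01"},
    {"id": "ami-004", "family": "gpu",  "created": "2024-02-01"},
]
stale = select_for_deprecation(images, keep=2)
```

In practice this policy would be wired into the pipeline (Azure DevOps plus a Python script querying the AMI inventory) rather than run against a hardcoded list.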
<p><strong>What You Will Own:</strong></p><p><strong>Linux Infrastructure Operations</strong></p><ul><li>Full lifecycle administration of ~222 Linux servers (production, QA, development)</li><li>OS upgrades, patch management, and kernel updates</li><li>Performance monitoring and system tuning (CPU, memory, disk I/O, network)</li><li>User access management and authentication integrations (LDAP/AD)</li><li>Backup validation and disaster recovery readiness</li></ul><p><strong>Kubernetes / OpenShift Platform Ownership</strong></p><ul><li>Deploy, administer, and support <strong>Kubernetes/OpenShift clusters</strong> across environments</li><li>Manage cluster lifecycle: installation, upgrades, patching, and scaling</li><li>Configure and maintain:<ul><li>Namespaces, RBAC, and security policies</li><li>Networking (CNI, ingress controllers, load balancing)</li><li>Persistent storage (PVCs, storage classes)</li></ul></li><li>Support application teams with container deployments, troubleshooting, and performance tuning</li><li>Monitor cluster health using tools like Prometheus, Grafana, and native OpenShift tooling</li><li>Optimize cluster resource utilization and capacity planning</li><li>Implement and maintain CI/CD integrations for containerized workloads</li></ul><p><strong>Security & Hardening</strong></p><ul><li>Implement and maintain patching cadence across Linux and Kubernetes environments</li><li>System hardening aligned to CIS/STIG best practices</li><li>SELinux configuration and enforcement</li><li>Firewall configuration (iptables / firewalld)</li><li>Kubernetes security best practices (RBAC, pod security standards, image scanning)</li><li>Support vulnerability remediation from tools (Tenable, Qualys, etc.)</li><li>Log monitoring and audit review across infrastructure and containers</li></ul><p><strong>Incident Response & Production Stability</strong></p><ul><li>Lead root cause analysis (RCA) for infrastructure and platform incidents</li><li>Participate in on-call support 
for critical systems and clusters</li><li>Resolve Sev1/Sev2 outages across Linux and Kubernetes environments</li><li>Develop post-incident documentation and preventative controls</li></ul><p><strong>Modernization & Automation</strong></p><ul><li>Assess and remediate deprecated platform components</li><li>Standardize system and cluster configurations</li><li>Build documentation and operational runbooks</li><li>Drive infrastructure-as-code and automation initiatives (Ansible, Terraform, etc.)</li><li>Support migration of legacy workloads to containerized platforms</li></ul><p><strong>UNIX & OS/400 (IBM) Support</strong></p><ul><li>Administer UNIX environments (AIX/Solaris experience preferred)</li><li>Support integrations with IBM i (OS/400) systems supporting Rail operations</li><li>Ensure proper update and lifecycle management across platforms</li><li>Maintain cross-platform data and system dependencies</li></ul>
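The namespace-scoped RBAC mentioned above is declared in manifests rather than imperative code. As a sketch of what the role would own, here is a read-only Kubernetes Role built as a plain Python dictionary (the resource/verb combination is an illustrative example, not a recommended policy):

```python
def read_only_role(namespace, name="read-only"):
    """Build a minimal Kubernetes Role manifest granting get/list/watch
    on pods and services within a single namespace.  Serializing this
    dict to YAML yields something `kubectl apply -f` would accept."""
    return {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": name, "namespace": namespace},
        "rules": [
            {
                "apiGroups": [""],  # "" is the core API group
                "resources": ["pods", "services"],
                "verbs": ["get", "list", "watch"],
            }
        ],
    }

role = read_only_role("team-a")
```

A RoleBinding (not shown) would then attach this Role to specific users, groups, or service accounts in that namespace.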
<p>The Principal Engineer is a senior technical leader who plays a critical role in designing, building, and evolving the core software platform. This individual will work across backend services, cross‑platform client applications, and cloud‑based systems to deliver scalable, high‑performance solutions that power the company's products.</p><p><br></p><p>This role requires deep expertise in system architecture, advanced software engineering, and performance optimization. The engineer will lead the development of complex platform components such as data processing engines, parsers, synchronization systems, and distributed services, ensuring reliability, efficiency, and long‑term maintainability across diverse environments including Windows, macOS, Linux, mobile platforms, and the cloud.</p><p><br></p><p>Beyond hands‑on development, the Principal Engineer serves as a trusted technical authority—guiding architectural decisions, mentoring developers, and partnering closely with product and engineering leaders to translate business needs into robust technical solutions. 
This position balances new platform innovation with continuous improvement of existing systems, directly shaping the technical foundation of the company's technology ecosystem.</p><p><br></p><p>What You’ll Do</p><ul><li>Lead the design and development of scalable, high‑performance platform services and applications.</li><li>Architect and implement complex systems including data processing pipelines, parsers, synchronization engines, and APIs.</li><li>Develop cross‑platform solutions supporting Windows, macOS, Linux, iOS, Android, and cloud environments.</li><li>Provide technical leadership through architecture guidance, code reviews, and developer mentorship.</li><li>Collaborate with product and engineering teams to translate business needs into technical solutions.</li><li>Optimize system performance, scalability, and reliability for large‑scale and high‑throughput systems.</li><li>Support and enhance existing platforms while advancing long‑term technical strategy.</li></ul><p><br></p>
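At the heart of a synchronization engine like the ones described above is a reconciliation step: diff the desired state against the current state and emit create/update/delete operations. A minimal sketch of that step, with toy key/value snapshots standing in for real records:

```python
def reconcile(current, desired):
    """Compare two {key: value} state snapshots and return the
    operations needed to transform `current` into `desired`."""
    creates = {k: v for k, v in desired.items() if k not in current}
    updates = {k: v for k, v in desired.items()
               if k in current and current[k] != v}
    deletes = sorted(k for k in current if k not in desired)
    return {"create": creates, "update": updates, "delete": deletes}

ops = reconcile(
    current={"a": 1, "b": 2, "c": 3},
    desired={"a": 1, "b": 5, "d": 4},
)
```

Real synchronization systems layer conflict resolution, versioning, and retries on top, but they all reduce to some form of this three-way split.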
We are looking for an experienced Artificial Intelligence (AI) Engineer to join our team in Atlanta, Georgia. This is a long-term contract position where you will play a pivotal role in advancing AI initiatives across clinical and business operations. The ideal candidate will have a strong technical background, excellent communication skills, and the ability to collaborate across multiple departments to drive innovative solutions in healthcare.<br><br>Responsibilities:<br>• Partner with various departments to identify, design, and implement AI solutions that address clinical, financial, and operational needs.<br>• Evaluate and integrate third-party AI tools and platforms, with a focus on healthcare applications such as NexTech, call center automation, AI-powered scribing, and clinical trial identification.<br>• Develop and support AI applications to enhance patient identification for trials, automate documentation, and improve workflows.<br>• Build and maintain AI-driven dashboards and analytics using tools like Power BI to provide actionable insights for clinical and business teams.<br>• Ensure AI integrations meet scalability, security, and compliance requirements, adhering to healthcare data privacy standards.<br>• Serve as a strategic advisor by proactively identifying opportunities for organizational improvement through AI.<br>• Collaborate with stakeholders across IT and non-IT teams to foster innovation and streamline operations.<br>• Stay updated on industry trends, regulatory standards, and emerging AI technologies relevant to healthcare.<br>• Provide technical leadership and guidance on AI-related projects, ensuring alignment with organizational goals.
<p>We are looking for an experienced Senior Software Engineer to join our team in Cleveland, Ohio. This role focuses on designing and maintaining robust integrations across distributed systems using .NET and Microsoft Azure technologies. In this long-term contract position, you will play a key role in ensuring secure, scalable, and reliable enterprise solutions that facilitate seamless communication between platforms.</p><p><br></p><p>Responsibilities:</p><p>• Design, implement, and maintain APIs, microservices, and integration services using .NET technologies.</p><p>• Create event-driven integrations utilizing Azure messaging tools such as Service Bus and Event Grid.</p><p>• Develop and support Azure Functions, event processors, and Service Bus consumers.</p><p>• Diagnose and resolve issues with distributed systems and integration pipelines.</p><p>• Apply resiliency techniques such as retries, idempotency, and error handling to enhance system reliability.</p><p>• Facilitate data transformation and message processing across various enterprise platforms.</p><p>• Enhance system observability and monitoring to improve operational visibility.</p><p>• Utilize AI-assisted tools to streamline debugging, testing, and performance optimization.</p><p>• Collaborate with architects to design scalable and reliable integration solutions.</p><p>• Troubleshoot and implement fixes for production integration failures.</p>
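The resiliency techniques listed above, retries and idempotency in particular, combine naturally: retry a flaky downstream call with exponential backoff, and deduplicate by message ID so that redelivery by the message bus is harmless. A library-agnostic sketch (no Azure SDK calls; the names are illustrative and the stack shown would be C#/.NET in practice):

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff on failure;
    re-raise the last error once attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

class IdempotentProcessor:
    """Process each message ID at most once, even if redelivered."""
    def __init__(self):
        self._seen = set()
        self.processed = []

    def handle(self, message_id, payload):
        if message_id in self._seen:
            return "duplicate"   # redelivery: safe no-op
        self._seen.add(message_id)
        self.processed.append(payload)
        return "processed"

proc = IdempotentProcessor()
r1 = proc.handle("m-1", {"amount": 10})
r2 = proc.handle("m-1", {"amount": 10})  # same message redelivered
```

Azure Service Bus also offers broker-side duplicate detection, but consumer-side idempotency like this protects against at-least-once delivery regardless of broker configuration.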
<p><strong><u>Job Summary:</u></strong></p><p>We are looking for high-capacity individuals who will work within our agile team to assist us in creating best-in-class enterprise APIs and the production infrastructure necessary to provide performance, scale, and reliability. These individuals might find themselves assisting in the following activities daily:</p><p><br></p><p> Responsibilities:</p><p> </p><p> • Developing modern RESTful APIs using Java and Spring Boot as a full-stack developer, applying DevOps, CI/CD, and cloud-enabled services (containers, both on-premises and in the cloud)</p><p> • Assist with system design / business analysis (server layout, availability, disaster recovery planning, production deployments, etc.)</p><p> • Assist with software / data design (database schema, storage considerations, data mapping, data storage efficiency and related design matters; API design including call signature, schema, business logic, data access, resilience, logging, supportability, etc.)</p><p> • Assist with software delivery (typically hands-on, but also in an advisory or architectural role: create server architecture, table layouts, highly available and highly recoverable data resources, and highly performant data sources)</p><p> • Assist with a transition to Kafka, specifically analyzing proper use cases, detailed Kafka environment setup considerations, enrichment, and transformations.</p><p> • Assist with transitioning from a physical / virtual machine environment to one based on cloud runtime environments and containers.</p><p> </p><p> We are looking for innovative, hands-on engineers who are excited about the newest technologies and are committed to embracing the future of software engineering. 
Responsibilities include implementing API layers and integrating that work into our continuous integration / continuous delivery pipeline.</p><p> • Collaborate with other engineers and architects to create a common API layer between a variety of different data sources and applications via an agile product model working in 2-week sprints.</p><p> • Develop software in an agile environment leveraging DevOps for environment setup, automated builds, continuous deployment, continuous integration, and automated testing.</p><p> • Play a key role in implementing enterprise services and APIs under the guidance of the architectural team and engineering leadership.</p><p> • Deliver rapid, scalable, and quality solutions that meet the business needs. Develop and implement unit test code and automated test scripts as a routine part of development activities.</p><p> • Work closely with other engineers, vendor partners and business owners to ensure that the finished solution meets the needs of the business and our customers.</p><p> • Follow industry-standard agile software design methodologies and embrace new technologies and methods as they are introduced.</p><p> • Maintain and evolve existing integration assets and systems.</p><p> • Introduce and evolve the processes and methods required for maturing integration development, implementation, and operation of our key platforms.</p>
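The Kafka enrichment and transformation work mentioned above is, at its core, a keyed lookup applied to each record as it flows through. A minimal, broker-free sketch of an enrichment step (the field names and reference data are invented; the production stack would be Java/Spring, with this logic in a Kafka Streams processor or consumer):

```python
def enrich(records, customer_lookup):
    """Join each event with reference data by customer_id,
    tagging unmatched records rather than dropping them."""
    out = []
    for rec in records:
        info = customer_lookup.get(rec["customer_id"])
        enriched = dict(rec)  # never mutate the inbound record
        enriched["region"] = info["region"] if info else "unknown"
        out.append(enriched)
    return out

events = [
    {"customer_id": "c1", "amount": 50},
    {"customer_id": "c9", "amount": 20},  # no reference entry
]
lookup = {"c1": {"region": "midwest"}}
result = enrich(events, lookup)
```

Keeping unmatched records and flagging them, rather than silently discarding them, is the usual choice when the enrichment source may lag behind the event stream.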