<p>Our company is seeking a skilled and collaborative DevOps Engineer to join our technology team in St. Louis, MO. This role offers the opportunity to drive automation, streamline workflows, and optimize infrastructure for high-performance applications.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, deploy, and manage CI/CD pipelines to support agile development practices</li><li>Automate infrastructure provisioning, monitoring, and scaling in cloud or hybrid environments</li><li>Collaborate with development and IT teams to implement DevOps best practices</li><li>Troubleshoot and resolve issues in development, test, and production environments</li><li>Stay current with emerging DevOps technologies, tools, and methodologies</li></ul><p><br></p>
<p>We are looking for a Systems Administrator to help drive technology modernization and operational excellence. We are seeking IT professionals with a strong background in systems administration or infrastructure operations, eager to support mission-critical initiatives and grow into senior infrastructure roles.</p><p><strong>Qualifications:</strong></p><ul><li>2–5 years of experience in systems administration or infrastructure operations; helpdesk/NOC backgrounds with relevant exposure also considered.</li><li>Proven expertise in Linux system administration.</li><li>Familiarity with enterprise infrastructure, including storage, virtualization, and networking.</li><li>Hands-on experience with monitoring systems such as Zabbix, Grafana, or Prometheus.</li><li>Basic scripting skills (e.g., Bash, Python) and a strong interest in further developing automation capabilities.</li><li>Excellent written communication for documentation and process development.</li><li>Ability to respond quickly and decisively during support rotation and system issues.</li><li>Comfortable leveraging AI tools for troubleshooting, documentation, and automation with a disciplined approach to validating outputs.</li><li>Growth mindset: eagerness to learn, develop, and advance into senior infrastructure roles over time.</li></ul><p><br></p>
<p>Join our dynamic technology team as a Site Reliability Engineer (SRE) or Platform Engineer, where you’ll play a central role in building, automating, and maintaining our modern infrastructure across both on-premise and cloud environments.</p><p><strong>Qualifications:</strong></p><ul><li>Bachelor’s degree in Computer Science, Engineering, or a related technical field.</li><li>3–5+ years of experience in SRE, Platform Engineering, or Systems Administration within fast-paced environments.</li><li>Strong Python scripting skills.</li><li>Deep hands-on experience with Kubernetes (deployment, management, troubleshooting); OpenShift experience is a plus.</li><li>Proficiency with Docker/Podman and internal image management.</li><li>Solid experience with Ansible and Terraform; Puppet knowledge is helpful.</li><li>Familiarity with CI/CD workflows; experience with ArgoCD (preferred) or Flux for GitOps.</li><li>Proficiency with Grafana and Prometheus; exposure to Grafana Cloud/Alloy is desirable.</li><li>Experience with incident management and on-call tools such as Rootly, Opsgenie, or PagerDuty.</li><li>Security-first mindset with exposure to DevSecOps practices, including SonarQube, SAST, and CVE scanning.</li><li>Proven experience with both on-premise and cloud infrastructure:</li><li><strong>On-Premise:</strong> Primary experience with Kubernetes clusters; familiarity with Proxmox is desirable.</li><li><strong>Cloud:</strong> AWS and GCP experience (with a growing footprint), managed via Terraform.</li></ul><p>If you’re passionate about automation, reliability, and working at the forefront of scalable infrastructure, we invite you to apply.</p>
We are looking for a skilled DevOps Engineer to join our team in Westborough, Massachusetts. In this long-term contract role, you will play a pivotal part in enhancing the reliability and scalability of cloud infrastructure, CI/CD pipelines, and deployment strategies. The position requires a hybrid schedule, with in-office work on Tuesdays, Wednesdays, and Thursdays.<br><br>Responsibilities:<br>• Design, build, and maintain robust CI/CD pipelines using tools such as GitHub Actions, Jenkins, and Ansible.<br>• Develop and implement standardized deployment pipelines for applications, integration platforms like MuleSoft, and cloud infrastructure.<br>• Manage cloud environments using Infrastructure as Code (IaC) technologies, including Terraform and Helm.<br>• Support containerized platforms, including Docker and Kubernetes-based systems.<br>• Collaborate with development teams to improve automation processes, deployment frequency, and platform reliability.<br>• Apply best practices for version control, secrets management, artifact repositories, and environment consistency.<br>• Troubleshoot and resolve issues across pipelines, applications, and infrastructure layers to ensure operational stability.<br>• Enhance monitoring, logging, and observability tools to optimize platform performance.<br>• Partner with cross-functional teams to streamline DevOps practices for custom applications and backend services.<br>• Maintain detailed technical documentation and uphold high standards for follow-through and timeliness.
<p><strong>DevOps Engineer</strong></p><p>We are seeking a motivated <strong>DevOps Engineer</strong> to enhance automation, streamline deployments, and support modern cloud-native infrastructure. This role is ideal for someone who enjoys improving system reliability, optimizing pipelines, and enabling faster development workflows.</p><p><strong>Responsibilities</strong></p><ul><li>Build, maintain, and optimize CI/CD pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins</li><li>Support containerized environments using Docker and Kubernetes</li><li>Manage infrastructure automation using Terraform, Helm, Ansible, or Bicep</li><li>Monitor application performance, system uptime, and deployment health</li><li>Troubleshoot build failures, pipeline issues, infrastructure drift, and deployment errors</li><li>Manage configuration management across multiple environments</li><li>Collaborate with developers and cloud engineers during releases and application migrations</li><li>Implement logging, monitoring, and alerting solutions</li><li>Maintain documentation for deployments, pipelines, and CI/CD procedures</li></ul><p><br></p>
We are looking for an experienced DevOps Engineer to join our team in Orem, Utah. In this role, you will collaborate with cross-functional teams to build, deploy, and maintain scalable and reliable systems, ensuring seamless integration and automation of workflows. You will work with cutting-edge technologies to enhance the infrastructure and optimize the development lifecycle.<br><br>Responsibilities:<br>• Design, implement, and maintain infrastructure-as-code solutions using tools like Terraform or AWS CDK.<br>• Develop and optimize containerized deployments using Docker and Kubernetes.<br>• Collaborate with software developers to integrate DevOps practices into the software development lifecycle.<br>• Set up and manage CI/CD pipelines using Bitbucket Pipelines or similar tools.<br>• Monitor and troubleshoot system performance using AWS services such as CloudWatch.<br>• Build scalable backend systems using .NET Core and C#, ensuring efficient data handling with PostgreSQL and Entity Framework Core.<br>• Develop and maintain frontend systems with React, TypeScript, and state management libraries like Redux Toolkit.<br>• Implement authentication and authorization solutions using AWS Cognito.<br>• Enhance testing frameworks and tools, including Vitest, Playwright, and React Testing Library.<br>• Support microservices architecture and ensure seamless communication between components.
<p>Seeking an Azure Cloud Engineer to support cloud infrastructure initiatives, including migrations, automation, and security best practices.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and manage Azure infrastructure (IaaS/PaaS)</li><li>Support Azure networking, storage, and compute</li><li>Implement automation using PowerShell or Terraform</li><li>Monitor and optimize performance and costs</li><li>Partner with DevOps and Security teams</li></ul><p><br></p>
<p>We are looking for an experienced Azure Cloud Engineer to join our team in North Houston. In this role, you will leverage your expertise to manage cloud infrastructure, ensure system reliability, and collaborate with team members on key projects. This position requires a strong background in Azure administration and Infrastructure as Code (IaC) tools, along with a commitment to delivering high-quality solutions.</p><p><br></p><p>Responsibilities:</p><p>• Design, implement, and manage Azure cloud infrastructure to support business needs.</p><p>• Utilize tools such as Terraform and Ansible to develop and maintain Infrastructure as Code (IaC) solutions.</p><p>• Collaborate with team members to maintain Office 365, Exchange Online, Intune, and Active Directory systems.</p><p>• Ensure the scalability and reliability of cloud-based systems by implementing auto-scaling solutions.</p><p>• Regularly assess and optimize cloud environments to enhance performance and security.</p><p>• Provide on-site support five days a week, with half-day Fridays.</p><p>• Travel to Midland quarterly to participate in team collaborations and align on project objectives.</p><p>• Maintain documentation for cloud processes and configurations to ensure clarity and compliance.</p><p>• Work closely with stakeholders to identify and address technical challenges.</p><p>• Support and contribute to the development of cloud strategies aligned with organizational goals.</p>
<p><strong>Position Summary:</strong></p><ul><li>We are looking for a Data Operations Engineer to support and oversee the automated data‑pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks.</li><li>Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.</li><li>The role requires strong proficiency in SQL and Python, practical experience with AWS services, and the ability to communicate effectively with external customers when data issues arise.</li></ul><p><strong>Responsibilities:</strong></p><p><strong>Data Pipeline Monitoring & Operations:</strong></p><ul><li>Monitor automated batch and streaming data pipelines in AWS</li><li>Identify, troubleshoot, and resolve data processing failures</li><li>Investigate file‑level errors, schema mismatches, and transformation issues</li><li>Perform root‑cause analysis and document resolutions</li><li>Ensure data integrity, completeness, and timeliness across environments</li><li>Escalate architectural or systemic issues to the Data Engineering team</li></ul><p><strong>Customer Data Onboarding & Implementation:</strong></p><ul><li>Collaborate directly with customers to understand their file formats and data structures</li><li>Create and maintain mapping templates to align customer data to a normalized data model</li><li>Validate sample files and run tests on ingestion workflows</li><li>Configure ingestion parameters within predefined frameworks</li><li>Support customer go‑live processes and initial data processing cycles</li></ul><p><strong>Data Quality & Continuous Improvement:</strong></p><ul><li>Write SQL queries to validate data accuracy and research 
anomalies</li><li>Develop lightweight Python scripts for validation, transformation checks, or automation tasks</li><li>Improve monitoring processes, internal documentation, and operational playbooks</li><li>Work with engineering teams to strengthen platform reliability and observability</li></ul><p><strong>Customer & Cross‑Functional Collaboration:</strong></p><ul><li>Communicate clearly with customers regarding file issues or data discrepancies</li><li>Partner with internal teams including Data Engineering, Product, and Support</li><li>Provide feedback to enhance scalability, resilience, and overall platform performance</li></ul>
<p><strong>Our client is seeking a Senior AWS Data Engineer for a long term, multi-year assignment.</strong></p><p><br></p><p><strong>This role is onsite 4 days/week in Torrance, CA. </strong></p><p><br></p><p>This role is to support and enhance enterprise business intelligence and analytics environments. This role focuses on designing, building, and maintaining scalable data pipelines and cloud‑based data platforms using AWS services. The ideal candidate brings deep hands‑on experience with AWS Glue, PySpark, Redshift, and serverless architectures, along with strong SQL and data analysis skills.</p><p>This role will collaborate closely with architecture, security, compliance, and development teams to ensure data solutions are performant, secure, and compliant with regulatory requirements.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain scalable ETL/ELT pipelines using AWS Glue with PySpark for large‑scale data processing</li><li>Develop and support serverless integrations using AWS Lambda for event‑driven workflows and system integrations</li><li>Design and optimize Amazon Redshift data warehouse solutions, including:</li><li>Advanced SQL analytics</li><li>Stored procedures</li><li>Performance tuning</li><li>Lead implementation of secure vendor file transfer and ingestion solutions using AWS Transfer Family</li><li>Design and implement database migration and replication pipelines using AWS Database Migration Service (DMS)</li><li>Build and manage workflow orchestration using Apache Airflow or similar orchestration tools</li><li>Analyze data quality, transformation logic, and pipeline performance using SQL and data analysis techniques</li><li>Troubleshoot and resolve production data pipeline and integration issues across AWS services</li><li>Provide technical guidance to development team members on:</li><li>AWS best practices</li><li>Cost optimization</li><li>Performance optimization</li><li>Partner with enterprise architecture, 
security, and compliance teams to ensure SOX and regulatory compliance</li></ul>
<p><strong>DevSecOps Security Engineer</strong></p><p> </p><p>Location: Camas, WA (Onsite with potential hybrid flexibility)</p><p> </p><p>We are seeking an experienced DevSecOps Security Engineer to join our technology team in Camas, Washington. This role focuses on strengthening application and infrastructure security while supporting the continued evolution of our engineering platforms. You will collaborate closely with development, infrastructure, and security partners to embed security best practices into modern CI/CD pipelines and cloud environments.</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Configure, maintain, and optimize DevSecOps security tooling across development pipelines</li><li>Partner with engineering teams to integrate security controls into CI/CD workflows</li><li>Identify, assess, and help remediate application and infrastructure vulnerabilities</li><li>Contribute to secure coding standards and architectural security guidelines</li><li>Support infrastructure‑as‑code initiatives and cloud security practices</li><li>Evaluate existing security controls and recommend improvements</li><li>Assist in standardizing DevSecOps processes and documentation</li><li>Communicate security risks and recommendations to technical and business stakeholders</li></ul><p><strong>Qualifications</strong></p><ul><li>5+ years of experience in technology or information security roles</li><li>2+ years of hands‑on experience with DevSecOps, CI/CD pipelines, or cloud security</li><li>Experience with infrastructure‑as‑code tools (Terraform or similar)</li><li>Familiarity with containerized environments (Kubernetes, AKS, or equivalent)</li><li>Exposure to Azure or comparable cloud platforms</li><li>Strong collaboration and communication skills</li></ul><p><strong>Compensation & Benefits</strong></p><ul><li>Salary Range: $130,000 – $165,000</li><li>Competitive medical, dental, and vision coverage</li><li>401(k) plan with employer contribution</li><li>Generous paid time off and paid 
holidays</li><li>Family‑friendly leave programs and wellness support</li><li>Professional development and learning opportunities</li></ul><p><br></p><p>Work Environment</p><ul><li>Primarily in‑office with potential for hybrid flexibility</li><li>Collaborative, engineering‑driven culture</li><li>Opportunity to influence security practices across modern platforms</li></ul>
We are looking for a skilled DevOps Engineer III to join our team in San Ramon, California. This role involves working with advanced technologies to optimize and maintain infrastructure, ensuring seamless integration and operation. The ideal candidate will have strong expertise in cloud platforms, automation tools, and DevOps methodologies.<br><br>Responsibilities:<br>• Build and maintain scalable infrastructure using cloud platforms such as AWS and Google Cloud.<br>• Implement and manage Adobe Experience Manager (AEM) systems and Cloudflare configurations.<br>• Develop and optimize CI/CD pipelines to streamline deployment processes.<br>• Automate workflows using Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.<br>• Create and manage scripts using Python, Bash, or PowerShell to improve operational efficiency.<br>• Collaborate with cross-functional teams to ensure alignment of DevOps practices.<br>• Integrate RESTful APIs and authentication mechanisms into automation workflows.<br>• Monitor system performance and troubleshoot issues to maintain high availability.<br>• Apply security best practices to configurations and log aggregation tools.<br>• Contribute to the implementation and optimization of marketing technology stacks and personalization platforms.
We are looking for an experienced DevOps Engineer to join our team on a long-term contract basis in Mequon, Wisconsin. This role is focused on enhancing analytics governance by identifying and resolving inconsistencies in business intelligence tools, streamlining BI logic, and integrating governance workflows. You will collaborate with cross-functional teams to ensure high-quality and consistent reporting standards across the enterprise.<br><br>Responsibilities:<br>• Create and maintain Python-based scripts to extract and analyze metric definitions from various BI tools, including Power BI, Tableau, and Domo.<br>• Standardize BI logic to identify and address duplication and inconsistencies across analytics platforms.<br>• Manage and organize results by storing custom metadata, tags, and issue records within governance platforms such as Atlan.<br>• Configure and integrate steward workflows, saved views, and custom attributes into governance systems.<br>• Collaborate with reporting and BI teams to establish and enforce metric naming conventions, certification criteria, and deprecation policies.<br>• Align semantic layers across BI and analytics tools to ensure consistency in reporting.<br>• Develop and execute CI/CD checks and validation processes for new metrics and analytics data.<br>• Ensure adherence to security and governance policies related to analytics and reporting systems.<br>• Facilitate steward reviews for metric certification and deprecation workflows.<br>• Provide technical support and enablement for data governance analysts and stewards.
<p>Overview</p><p>We are seeking a <strong>DevOps Engineer II</strong> to support a DevOps/SRE function focused on <strong>media analysis pipeline components</strong> tied to large‑scale data and analytics workflows. This team is primarily centered around <strong>Terraform automation, GitOps, and infrastructure as code</strong> within a cloud‑based environment.</p><p>This role is ideal for a <strong>self‑starter</strong> who can work with minimal oversight, solve problems independently, and collaborate effectively with engineering and product teams. Prior experience with video engineering or media pipelines is a plus but not required.</p><p>The role will involve working in an <strong>AWS environment</strong>, supporting how applications are securely deployed, exposed, and maintained across environments.</p><p><br></p><p>Key Responsibilities</p><ul><li>Support cloud deployments for <strong>media analysis pipeline components</strong></li><li>Collaborate with software engineers, product managers, and business stakeholders to ensure reliable deployments and stable operations</li><li>Build, maintain, and improve <strong>CI/CD pipelines</strong> for provisioning and deployment across environments</li><li>Automate operational processes, monitoring, and reliability tooling</li><li>Troubleshoot and resolve issues across development, test, and production environments</li><li>Build and maintain tools for deployment, monitoring, and operational support</li><li>Communicate project status, risks, and issues clearly to internal teams</li><li>Help streamline DevOps and SRE processes through <strong>automation and best practices</strong></li></ul><p><br></p><p>Biggest Needs</p><ul><li><strong>Strong Terraform experience (must‑have)</strong></li><li>Strong automation background in a DevOps or SRE environment</li><li>Hands‑on experience with <strong>Infrastructure as Code</strong></li><li><strong>AWS experience required</strong></li><li>Experience with <strong>Ansible</strong> and 
<strong>GitOps</strong> highly preferred</li><li>Strong Linux/Unix background</li><li>Security‑minded engineer who understands how applications are securely exposed and protected in cloud environments</li><li>Video engineering or media pipeline experience is a plus</li></ul><p><br></p><p>Top Skills</p><ul><li>Terraform</li><li>Automation / Infrastructure as Code</li><li>Ansible</li><li>GitOps</li><li>AWS</li><li>Linux / Unix</li><li>CI/CD</li><li>Security‑focused cloud deployment experience</li></ul>
We are looking for a skilled Infrastructure & Cloud Engineer to oversee the design and management of our organization's cloud and IT systems. This role is essential in ensuring the scalability, security, and efficiency of both cloud-based and on-premises environments. The ideal candidate will bring extensive knowledge of Microsoft technologies, including Azure and Microsoft 365, with a focus on supporting a mission-driven organization.<br><br>Responsibilities:<br>• Design and manage Azure cloud environments to ensure scalability and security.<br>• Administer and optimize Microsoft 365 tools, including Office 365, Intune, Entra ID, and Endpoint Protection.<br>• Plan and execute migrations between on-premises systems and cloud platforms with minimal disruption.<br>• Configure and maintain Windows Server environments while adhering to security best practices.<br>• Implement and enforce device management and endpoint protection policies.<br>• Develop automated workflows using tools such as Power Automate to enhance efficiency.<br>• Collaborate with internal teams to identify technology needs and deliver tailored solutions.<br>• Provide training and support to staff on IT infrastructure and cloud technologies.<br>• Document system configurations and processes to ensure compliance with policies and regulations.<br>• Monitor and improve system performance and reliability across all platforms.
<p><strong>Senior Database Administrator</strong></p><p>We're looking for a hands-on Senior AWS DBA to own the operational excellence, reliability, performance, and security of our global database infrastructure across AWS. You'll manage mission-critical, multi-region production systems and collaborate closely with Data Engineering and application teams.</p><p><br></p><ul><li>Administer and optimize production databases including Couchbase, DynamoDB, DocumentDB, Cosmos DB, and Snowflake</li><li>Manage Couchbase clusters including XDCR replication, monitoring, and troubleshooting</li><li>Perform installations, upgrades, patching, and configuration management across cloud and on-prem environments</li><li>Optimize query performance through indexing strategies, query tuning, and execution plan analysis</li><li>Plan and manage scaling strategies including sharding and capacity planning</li><li>Implement database security controls including access management and encryption</li></ul><p><strong>Backup, DR & Resilience</strong></p><ul><li>Design and maintain backup strategies with defined retention policies</li><li>Implement and validate restore procedures to meet RTO/RPO objectives</li><li>Develop PITR capabilities and execute disaster recovery drills</li><li>Manage AWS Backup, native database backups, and cross-region replication</li></ul><p><strong>Incident Response & Support</strong></p><ul><li>Participate in 24/7 on-call rotation for mission-critical systems</li></ul><p><strong>Monitoring & Automation</strong></p><ul><li>Implement monitoring and alerting using Datadog, CloudWatch, Azure Monitor, and native tools</li><li>Automate routine tasks using Python, Bash, PowerShell, and cloud-native tooling</li></ul><p><strong>Migrations & Documentation</strong></p><ul><li>Lead database migration and upgrade initiatives with minimal downtime</li></ul><p><strong>Security & Compliance</strong></p><ul><li>Implement IAM roles, network isolation, secrets management, and encryption</li><li>Manage credentials via AWS Secrets Manager</li><li>Support security audits and compliance with data residency requirements</li></ul><p><strong>What You'll Need</strong></p><ul><li>5+ years as a DBA managing production systems at scale</li><li>Strong hands-on experience with AWS database services: RDS/PostgreSQL, DynamoDB, DocumentDB, Aurora, AWS Backup</li><li>Working experience with Couchbase cluster management, XDCR, and N1QL</li><li>Deep understanding of database internals, indexing, and query optimization</li><li>Experience designing and testing backup, restore, PITR, and DR strategies</li><li>Scripting experience in Python, Bash, or PowerShell</li><li>Willingness to participate in 24/7 on-call rotation</li></ul>
We are looking for an experienced Cloud Security Engineer to join our team in New York, New York. In this role, you will play a critical part in safeguarding cloud-based infrastructure by deploying, managing, and maintaining security tools and solutions. You will proactively monitor systems for threats, respond to incidents, and collaborate with stakeholders to enhance the overall security posture of cloud environments.<br><br>Responsibilities:<br>• Install, configure, and maintain advanced security solutions to protect cloud-based systems and networks.<br>• Monitor infrastructure to detect and respond to unusual activities, intrusions, or security breaches.<br>• Conduct thorough investigations of security alerts and incidents, ensuring timely and effective resolutions.<br>• Perform risk assessments and vulnerability scans, recommending strategies to mitigate potential threats.<br>• Collaborate with teams to implement and manage security tools tailored to cloud environments.<br>• Develop and enforce policies, procedures, and guidelines to ensure compliance with security standards.<br>• Stay updated on emerging threats and vulnerabilities, adapting security measures as needed.<br>• Create detailed reports and documentation on incidents, findings, and recommendations for stakeholders.<br>• Conduct audits and reviews of cloud infrastructure to identify and address security gaps.<br>• Support compliance initiatives and ensure adherence to industry regulations and standards.
<p><strong>Azure Developer</strong></p><p>We are seeking a knowledgeable <strong>Azure Developer</strong> to build cloud-native applications and services using Microsoft Azure technologies. This role is ideal for someone who enjoys designing scalable solutions, working with modern cloud tools, and collaborating closely with software and cloud engineering teams. The ideal candidate will have strong development skills, deep understanding of Azure services, and a passion for cloud innovation.</p><p><strong>Responsibilities</strong></p><ul><li>Develop cloud-based applications using Azure Functions, App Services, Logic Apps, and related services</li><li>Build APIs, microservices, and serverless workloads using .NET, C#, or other Azure-supported languages</li><li>Implement Azure integrations using Service Bus, Event Hub, API Management, or Durable Functions</li><li>Create and optimize Azure DevOps pipelines for CI/CD automation</li><li>Develop Infrastructure-as-Code templates using ARM, Bicep, or Terraform</li><li>Collaborate with architects and DevOps teams to ensure scalable cloud designs</li><li>Troubleshoot application issues, performance bottlenecks, and integration problems</li><li>Monitor cloud workloads, logs, costs, and performance metrics</li><li>Maintain documentation for Azure solutions, APIs, and deployment procedures</li><li>Participate in code reviews, design sessions, and architectural discussions</li></ul><p><br></p>
<p>We are seeking a senior cloud and platform engineer to help design, build, and operate scalable, secure, and resilient cloud environments. This role partners closely with engineering, security, and infrastructure teams to deliver cloud platforms and tooling that improve agility, consistency, and long-term reliability across the organization.</p><p><br></p><p>In this role, you will act as a senior individual contributor, owning the implementation and ongoing evolution of public cloud infrastructure. </p><p><br></p><p>Key responsibilities</p><ul><li>Implement and evolve public cloud infrastructure across environments</li><li>Deploy new workloads into public cloud platforms and modernize existing workloads</li><li>Align cloud solutions with business objectives and industry best practices</li><li>Design and build internal tooling, systems, and platforms for cloud consumption</li><li>Enable faster, safer, and more consistent access to cloud resources for engineering teams</li><li>Promote standardization, reuse, and automation to support growth and operational stability</li><li>Support Infrastructure as Code practices using Terraform for repeatable, governed deployments</li><li>Build and operate container orchestration platforms, including Kubernetes</li><li>Contribute to highly available, multi-zone, and multi-region architectures</li><li>Diagnose and resolve complex issues across cloud, automation, and distributed systems</li><li>Design and support infrastructure CI/CD pipelines using tools such as GitLab CI and Argo CD</li><li>Collaborate on API-driven secrets management and configuration tooling</li><li>Support cloud and hybrid networking designs, including on-prem integrations and BGP</li></ul><p>Technology scope</p><ul><li>Public cloud platforms such as AWS, Azure, or Google Cloud Platform</li><li>IaaS, PaaS, and SaaS-based cloud solutions</li><li>Self-service platforms that improve developer productivity</li><li>Infrastructure automation and platform 
engineering tooling</li></ul><p>Interested candidates should submit resumes to sally.lander@roberthalf (.com)</p>
We are looking for an experienced Cloud Engineer/Architect to join our team in Cincinnati, Ohio. In this role, you will ensure the security, scalability, and reliability of our technology environment while aligning technical capabilities with business objectives. This position offers an exciting opportunity to lead architectural decisions and drive innovative solutions that support our consulting delivery and emerging technologies.<br><br>Responsibilities:<br>• Develop and maintain a comprehensive IT roadmap that aligns with organizational growth, consulting needs, and advancements in technology such as AI-enabled services.<br>• Define and enforce technology standards, including Microsoft 365 tenant architecture, identity access models, endpoint configurations, and environment separations.<br>• Oversee risk management efforts, identifying IT risks and implementing effective remediation strategies.<br>• Govern demo-tenant environments, ensuring they are separate from corporate systems and compliant with client and regulatory standards.<br>• Lead responses to major incidents, security breaches, or significant technology failures by coordinating efforts and communicating with leadership.<br>• Collaborate with cross-functional teams, including Security and Legal, to articulate technology risks, trade-offs, and investment priorities in business terms.<br>• Manage cloud-based systems to optimize performance and ensure seamless integration across platforms.<br>• Develop and maintain isolated environments for client demos and prototypes, ensuring functionality and security.<br>• Implement and monitor security and compliance measures, ensuring adherence to industry standards.<br>• Oversee SaaS vendor management, including subscription oversight, license renewals, and vendor relationships.
<p>The Senior Software Engineer is a hands-on technical leadership position responsible for designing, building, and maintaining high-quality software solutions. This role emphasizes both individual development work and ownership of design decisions for features and subsystems. Modern tools, including AI-assisted development and architectural support, are leveraged to drive delivery while maintaining accountability for technical outcomes.</p><p><br></p><p><strong>Responsibilities:</strong></p><p><br></p><ul><li>Design, implement, test, and maintain scalable, secure, and reliable applications and services.</li><li>Act as a senior technical contributor, with responsibility for the design and implementation of features and subsystems.</li><li>Contribute actively to development tasks, applying advanced coding expertise in several programming languages and frameworks.</li><li>Participate in architectural discussions and support incremental evolution of systems with team leads.</li><li>Conduct code reviews and mentor engineering team members, fostering best practices and ongoing improvement.</li><li>Translate requirements from product owners, business analysts, and stakeholders into technical solutions.</li><li>Identify and mitigate technical risks in assigned systems and projects.</li><li>Support and enhance cloud-based applications (Azure, AWS) with emphasis on performance, reliability, and scalability.</li><li>Collaborate effectively with onshore and offshore teams to ensure successful project execution.</li><li>Keep abreast of industry trends and new technologies to encourage innovation.</li><li>Utilize AI-assisted tools to expedite design, documentation, and implementation, while ensuring technical quality.</li><li>Lead and support AI-related initiatives, drawing on prior experience with AI/ML technologies; recommend and implement suitable AI tools and frameworks.</li><li>Test and demonstrate emerging AI tools and platforms via proofs of concept (POCs) to highlight 
business value.</li><li>Guide customers in leveraging AI to optimize business processes; support teams working on business-facing AI efforts.</li><li>Collaborate with stakeholders to contribute to defining an AI roadmap aligned with organizational strategy and technology objectives.</li></ul>
We are looking for a skilled Deployment Engineer to join our team on a contract basis in Reading, Pennsylvania. In this role, you will be responsible for ensuring the seamless installation, configuration, and integration of various devices and software across multiple platforms. This position requires expertise in managing deployments, troubleshooting technical issues, and collaborating with team members to deliver efficient solutions.<br><br>Responsibilities:<br>• Oversee the deployment and setup of Android devices, Chromebooks, iPads, and other hardware.<br>• Utilize deployment tools to streamline installation processes and ensure accuracy.<br>• Configure and manage Active Directory settings to support device integrations.<br>• Provide regular status updates and documentation to track deployment progress.<br>• Install and maintain medical software across designated systems.<br>• Perform desktop administration tasks, including troubleshooting and resolving technical issues.<br>• Collaborate with team members to identify and address deployment challenges.<br>• Ensure compliance with company standards and protocols during all deployment activities.<br>• Train end-users on device usage and software functionalities as needed.
Position: IT INFRASTRUCTURE ENGINEER / IT HELP DESK MANAGER<br>Location: QUAD CITIES - ONSITE<br>Salary: up to $85K + exceptional benefits<br><br>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***<br><br>Robert Half is looking for an IT HELP DESK MANAGER / IT INFRASTRUCTURE ANALYST - ONSITE IN QUAD CITIES for a permanent, direct-hire, full-time position with our client company.<br><br>In this unique IT HELP DESK MANAGER / IT INFRASTRUCTURE ANALYST - ONSITE IN QUAD CITIES permanent position, you will join a highly successful company.<br><br>This is a thriving organization with a close-knit team. You will have autonomy to manage the IT Help Desk Team and assist in other IT Infrastructure administration initiatives and projects. You will feel a true sense of ownership and relationship building, as you will be a go-to person for your own team and for customers across the entire organization.<br><br><br>Responsibilities will include managing and assisting with Help Desk Tier 1-3 tickets and any special projects. A wide breadth of IT experience and a proven track record of IT customer service success are essential. You will build strong collaboration and trust with the Senior Leaders and the IT Infrastructure Teams.<br><br>This is a FANTASTIC opportunity to apply ALL OF YOUR SKILLS and BE VALUED AND REWARDED FOR YOUR CONTRIBUTIONS.
You will not be bored in this position, and your contributions will be recognized and rewarded.<br><br>Requirements:<br> • 7+ years of IT Help Desk and Infrastructure experience in various roles, including desktop support analyst, help desk manager, system administrator, network administrator, security administrator, and others.<br> • Technical skills will include: Microsoft 365, desktop, hardware, software, Active Directory, user accounts, installing network hardware and software, setting up external network devices, troubleshooting connectivity issues, and other research and resolution tasks.<br> • Must possess exceptional communication, presentation, and customer service skills<br> <br>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***
We are looking for a skilled Data Platform Engineer to join our team on a long-term contract basis in Cleveland, Ohio. In this role, you will be responsible for managing and maintaining cloud-based analytics platforms, ensuring their stability, performance, and reliability. This is an excellent opportunity to work in a dynamic environment with cutting-edge technologies, including Kubernetes and containerized applications.<br><br>Responsibilities:<br>• Oversee the daily administration and operational support of cloud-based analytics platforms.<br>• Install, configure, monitor, and troubleshoot platform components and services to ensure optimal performance.<br>• Manage deployments within Kubernetes environments, addressing any related issues.<br>• Monitor system health and integrate tools for logging, alerting, and observability.<br>• Resolve performance, connectivity, and access issues to maintain system reliability.<br>• Configure and manage data source connections and platform integrations.<br>• Identify and mitigate potential capacity or performance risks by recommending improvements.<br>• Collaborate with internal teams, including data, engineering, and infrastructure, to meet organizational goals.<br>• Provide user support in a customer-facing or internal capacity, addressing technical concerns effectively.