<p>We are looking for a Systems Administrator to help drive technology modernization and operational excellence. The ideal candidate has a strong background in systems administration or infrastructure operations and is eager to support mission-critical initiatives and grow into senior infrastructure roles.</p><p><strong>Qualifications:</strong></p><ul><li>2–5 years of experience in systems administration or infrastructure operations; helpdesk/NOC backgrounds with relevant exposure also considered.</li><li>Proven expertise in Linux system administration.</li><li>Familiarity with enterprise infrastructure, including storage, virtualization, and networking.</li><li>Hands-on experience with monitoring systems such as Zabbix, Grafana, or Prometheus.</li><li>Basic scripting skills (e.g., Bash, Python) and a strong interest in further developing automation capabilities.</li><li>Excellent written communication for documentation and process development.</li><li>Ability to respond quickly and decisively during on-call rotations and system incidents.</li><li>Comfortable leveraging AI tools for troubleshooting, documentation, and automation, with a disciplined approach to validating outputs.</li><li>Growth mindset: eagerness to learn, develop, and advance into senior infrastructure roles over time.</li></ul><p><br></p>
<p>Join our dynamic technology team as a Site Reliability Engineer (SRE) or Platform Engineer, where you’ll play a central role in building, automating, and maintaining our modern infrastructure across both on-premise and cloud environments.</p><p><strong>Qualifications:</strong></p><ul><li>Bachelor’s degree in Computer Science, Engineering, or a related technical field.</li><li>3–5+ years of experience in SRE, Platform Engineering, or Systems Administration within fast-paced environments.</li><li>Strong Python scripting skills.</li><li>Deep hands-on experience with Kubernetes (deployment, management, troubleshooting); OpenShift experience is a plus.</li><li>Proficiency with Docker/Podman and internal image management.</li><li>Solid experience with Ansible and Terraform; Puppet knowledge is helpful.</li><li>Familiarity with CI/CD workflows; experience with ArgoCD (preferred) or Flux for GitOps.</li><li>Proficiency with Grafana and Prometheus; exposure to Grafana Cloud/Alloy is desirable.</li><li>Experience with incident management and on-call tools such as Rootly, Opsgenie, or PagerDuty.</li><li>Security-first mindset with exposure to DevSecOps practices, including SonarQube, SAST, and CVE scanning.</li><li>Proven experience with both on-premise and cloud infrastructure:<ul><li><strong>On-Premise:</strong> Primary experience with Kubernetes clusters; familiarity with Proxmox is desirable.</li><li><strong>Cloud:</strong> AWS and GCP experience (with a growing footprint), managed via Terraform.</li></ul></li></ul><p>If you’re passionate about automation, reliability, and working at the forefront of scalable infrastructure, we invite you to apply.</p>
We are looking for a skilled DevOps Engineer to join our team in Westborough, Massachusetts. In this long-term contract role, you will play a pivotal part in enhancing the reliability and scalability of cloud infrastructure, CI/CD pipelines, and deployment strategies. The position requires a hybrid schedule, with in-office work on Tuesdays, Wednesdays, and Thursdays.<br><br>Responsibilities:<br>• Design, build, and maintain robust CI/CD pipelines using tools such as GitHub Actions, Jenkins, and Ansible.<br>• Develop and implement standardized deployment pipelines for applications, integration platforms like MuleSoft, and cloud infrastructure.<br>• Manage cloud environments using Infrastructure as Code (IaC) technologies, including Terraform and Helm.<br>• Support containerized platforms, including Docker and Kubernetes-based systems.<br>• Collaborate with development teams to improve automation processes, deployment frequency, and platform reliability.<br>• Apply best practices for version control, secrets management, artifact repositories, and consistency across environments.<br>• Troubleshoot and resolve issues across pipelines, applications, and infrastructure layers to ensure operational stability.<br>• Enhance monitoring, logging, and observability tools to optimize platform performance.<br>• Partner with cross-functional teams to streamline DevOps practices for custom applications and backend services.<br>• Maintain detailed technical documentation and consistently meet follow-up commitments and organizational timelines.
We are looking for an experienced Senior DevOps Engineer to lead the technical implementation and operational management of DevOps initiatives within our organization. This role is ideal for someone who thrives in a collaborative environment and is passionate about creating scalable, secure, and reliable infrastructure solutions. The position involves translating architectural designs into actionable plans, guiding team members, and driving automation to enhance efficiency.<br><br>Responsibilities:<br>• Develop and implement solution architecture designs that align with approved platform standards and project requirements.<br>• Lead the execution of DevOps initiatives by coordinating tasks, providing guidance to administrators, and ensuring successful delivery.<br>• Drive automation strategies to reduce manual interventions and improve operational resilience.<br>• Manage operational responsibilities for services and systems within assigned project scopes.<br>• Participate in incident response activities, ensuring swift resolution of platform-related issues.<br>• Coordinate with contractors when additional resources are needed, ensuring alignment with architectural standards.<br>• Mentor and guide team members to enhance their technical capabilities and promote shared ownership of operational tasks.<br>• Establish repeatable deployment patterns, provisioning methods, and monitoring configurations to streamline processes.<br>• Enable execution planning and delegate tasks effectively to optimize team performance.<br>• Collaborate with stakeholders to ensure implemented solutions meet organizational goals and technical standards.
<p><strong>DevOps Engineer</strong></p><p>We are seeking a motivated <strong>DevOps Engineer</strong> to enhance automation, streamline deployments, and support modern cloud-native infrastructure. This role is ideal for someone who enjoys improving system reliability, optimizing pipelines, and enabling faster development workflows.</p><p><strong>Responsibilities</strong></p><ul><li>Build, maintain, and optimize CI/CD pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins</li><li>Support containerized environments using Docker and Kubernetes</li><li>Manage infrastructure automation using Terraform, Helm, Ansible, or Bicep</li><li>Monitor application performance, system uptime, and deployment health</li><li>Troubleshoot build failures, pipeline issues, infrastructure drift, and deployment errors</li><li>Oversee configuration management across multiple environments</li><li>Collaborate with developers and cloud engineers during releases and application migrations</li><li>Implement logging, monitoring, and alerting solutions</li><li>Maintain documentation for deployments, pipelines, and CI/CD procedures</li></ul><p><br></p>
We are looking for an experienced DevOps Engineer to join our team in Orem, Utah. In this role, you will collaborate with cross-functional teams to build, deploy, and maintain scalable and reliable systems, ensuring seamless integration and automation of workflows. You will work with cutting-edge technologies to enhance the infrastructure and optimize the development lifecycle.<br><br>Responsibilities:<br>• Design, implement, and maintain infrastructure-as-code solutions using tools like Terraform or AWS CDK.<br>• Develop and optimize containerized deployments using Docker and Kubernetes.<br>• Collaborate with software developers to integrate DevOps practices into the software development lifecycle.<br>• Set up and manage CI/CD pipelines using Bitbucket Pipelines or similar tools.<br>• Monitor and troubleshoot system performance using AWS services such as CloudWatch.<br>• Build scalable backend systems using .NET Core and C#, ensuring efficient data handling with PostgreSQL and Entity Framework Core.<br>• Develop and maintain frontend systems with React, TypeScript, and state management libraries like Redux Toolkit.<br>• Implement authentication and authorization solutions using AWS Cognito.<br>• Enhance testing frameworks and tools, including Vitest, Playwright, and React Testing Library.<br>• Support microservices architecture and ensure seamless communication between components.
We are looking for an experienced Platform / DevOps Engineer to join our team in Los Angeles, California. This role focuses on enhancing developer workflows, maintaining platform operations, and ensuring system observability to support production environments. As part of a long-term contract, you will play a key role in optimizing cloud resources, managing access controls, and troubleshooting issues across various tools and environments.<br><br>Responsibilities:<br>• Manage user access across Atlassian tools, Azure DevOps, GitHub, and other platforms, ensuring secure and compliant permissions.<br>• Process and oversee access requests using ServiceNow and internal workflows to maintain least-privilege access.<br>• Design, maintain, and troubleshoot CI/CD pipelines within Azure DevOps and GitHub Actions.<br>• Provide support for containerized applications using Docker and Kubernetes, including environment configuration.<br>• Collaborate with Systems Engineering teams to manage cloud resources and optimize configurations in Azure.<br>• Analyze system logs and metrics using Elastic tools to identify and resolve backend service issues.<br>• Investigate and troubleshoot issues related to server environments, databases, and backend services.<br>• Partner with engineering teams to identify root causes of system failures and implement preventive measures.<br>• Participate in incident response efforts and contribute to post-incident reviews and improvements.
<p><strong>-- Client =</strong> Interactive Entertainment</p><p><strong>-- Location =</strong> Remote</p><p><strong>-- Comp =</strong> $150k-$200k annual base + benefits</p><p><strong>-- Work Authorization = </strong>our client is NOT able to sponsor or transfer visas at this time</p><p><strong>-- Focus =</strong> Design, automate, and maintain CI/CD pipelines and infrastructure across Linux, Windows, and macOS environments</p><p><strong>-- MUST HAVES =</strong> <strong><em>Last 5+ years w/ a focus on DevOps using AWS, EKS, and GitLab</em></strong></p><p><strong>-- Bonus Points =</strong> Okta Integration, Backstage.io (or similar dev portals), Mobile or Gaming CI/CD Pipelines, DataDog, OpenTofu, Ansible, CloudFormation, Docker, Python</p><p><br></p>
<p>We are looking for an experienced Azure Cloud Engineer to join our team in North Houston. In this role, you will leverage your expertise to manage cloud infrastructure, ensure system reliability, and collaborate with team members on key projects. This position requires a strong background in Azure administration and Infrastructure as Code (IaC) tools, along with a commitment to delivering high-quality solutions.</p><p><br></p><p>Responsibilities:</p><p>• Design, implement, and manage Azure cloud infrastructure to support business needs.</p><p>• Utilize tools such as Terraform and Ansible to develop and maintain Infrastructure as Code (IaC) solutions.</p><p>• Collaborate with team members to maintain Office 365, Exchange Online, Intune, and Active Directory systems.</p><p>• Ensure the scalability and reliability of cloud-based systems by implementing auto-scaling solutions.</p><p>• Regularly assess and optimize cloud environments to enhance performance and security.</p><p>• Provide on-site support five days a week, with half-day Fridays.</p><p>• Travel to Midland quarterly to participate in team collaborations and align on project objectives.</p><p>• Maintain documentation for cloud processes and configurations to ensure clarity and compliance.</p><p>• Work closely with stakeholders to identify and address technical challenges.</p><p>• Support and contribute to the development of cloud strategies aligned with organizational goals.</p>
<p><strong>Position Summary:</strong></p><ul><li>We are looking for a Data Operations Engineer to support and oversee the automated data‑pipeline environment built on AWS. This position bridges data engineering and customer operations, ensuring that incoming datasets are processed accurately, consistently, and securely within established ingestion and transformation frameworks.</li><li>Key responsibilities include monitoring automated workflows, troubleshooting processing failures, validating data quality, and helping onboard new customers by aligning their data formats to a standardized internal model.</li><li>The role requires strong proficiency in SQL and Python, practical experience with AWS services, and the ability to communicate effectively with external customers when data issues arise.</li></ul><p><strong>Responsibilities:</strong></p><p><strong>Data Pipeline Monitoring & Operations:</strong></p><ul><li>Monitor automated batch and streaming data pipelines in AWS</li><li>Identify, troubleshoot, and resolve data processing failures</li><li>Investigate file‑level errors, schema mismatches, and transformation issues</li><li>Perform root‑cause analysis and document resolutions</li><li>Ensure data integrity, completeness, and timeliness across environments</li><li>Escalate architectural or systemic issues to the Data Engineering team</li></ul><p><strong>Customer Data Onboarding & Implementation:</strong></p><ul><li>Collaborate directly with customers to understand their file formats and data structures</li><li>Create and maintain mapping templates to align customer data to a normalized data model</li><li>Validate sample files and run tests on ingestion workflows</li><li>Configure ingestion parameters within predefined frameworks</li><li>Support customer go‑live processes and initial data processing cycles</li></ul><p><strong>Data Quality & Continuous Improvement:</strong></p><ul><li>Write SQL queries to validate data accuracy and research 
anomalies</li><li>Develop lightweight Python scripts for validation, transformation checks, or automation tasks</li><li>Improve monitoring processes, internal documentation, and operational playbooks</li><li>Work with engineering teams to strengthen platform reliability and observability</li></ul><p><strong>Customer & Cross‑Functional Collaboration:</strong></p><ul><li>Communicate clearly with customers regarding file issues or data discrepancies</li><li>Partner with internal teams including Data Engineering, Product, and Support</li><li>Provide feedback to enhance scalability, resilience, and overall platform performance</li></ul>
We are looking for an experienced AWS/Databricks Engineer to join our team in Houston, Texas. This is a long-term contract position ideal for professionals with a strong background in data engineering and cloud technologies. The role will focus on leveraging Python and Databricks to optimize data processes and enhance system performance.<br><br>Responsibilities:<br>• Develop and implement scalable data engineering solutions using Python and Databricks.<br>• Collaborate with cross-functional teams to design and optimize data workflows.<br>• Migrate and enhance existing Python scripts to Databricks for improved functionality.<br>• Utilize cloud technologies to support data integration and analytics processes.<br>• Implement algorithms and data visualization methods to present actionable insights.<br>• Design and maintain APIs to streamline data interactions and integrations.<br>• Work with tools like Apache Kafka, Spark, and Hadoop to manage large-scale data systems.<br>• Perform data analysis and develop strategies to improve system efficiency.<br>• Ensure high-quality data pipelines and address performance bottlenecks.<br>• Stay updated on emerging trends in data engineering and recommend innovative solutions.
<p>We are seeking an experienced Senior Data Engineer to support and enhance enterprise business intelligence and analytics environments. This role focuses on designing, building, and maintaining scalable data pipelines and cloud‑based data platforms using AWS services. The ideal candidate brings deep hands‑on experience with AWS Glue, PySpark, Redshift, and serverless architectures, along with strong SQL and data analysis skills.</p><p>This role will collaborate closely with architecture, security, compliance, and development teams to ensure data solutions are performant, secure, and compliant with regulatory requirements.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain scalable ETL/ELT pipelines using AWS Glue with PySpark for large‑scale data processing</li><li>Develop and support serverless integrations using AWS Lambda for event‑driven workflows and system integrations</li><li>Design and optimize Amazon Redshift data warehouse solutions, including:<ul><li>Advanced SQL analytics</li><li>Stored procedures</li><li>Performance tuning</li></ul></li><li>Lead implementation of secure vendor file transfer and ingestion solutions using AWS Transfer Family</li><li>Design and implement database migration and replication pipelines using AWS Database Migration Service (DMS)</li><li>Build and manage workflow orchestration using Apache Airflow or similar orchestration tools</li><li>Analyze data quality, transformation logic, and pipeline performance using SQL and data analysis techniques</li><li>Troubleshoot and resolve production data pipeline and integration issues across AWS services</li><li>Provide technical guidance to development team members on:<ul><li>AWS best practices</li><li>Cost optimization</li><li>Performance optimization</li></ul></li><li>Partner with enterprise architecture, security, and compliance teams to ensure SOX and regulatory compliance</li></ul>
<p><strong>DevSecOps Security Engineer</strong></p><p> </p><p>Location: Camas, WA (Onsite with potential hybrid flexibility)</p><p> </p><p>We are seeking an experienced DevSecOps Security Engineer to join our technology team in Camas, Washington. This role focuses on strengthening application and infrastructure security while supporting the continued evolution of our engineering platforms. You will collaborate closely with development, infrastructure, and security partners to embed security best practices into modern CI/CD pipelines and cloud environments.</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Configure, maintain, and optimize DevSecOps security tooling across development pipelines</li><li>Partner with engineering teams to integrate security controls into CI/CD workflows</li><li>Identify, assess, and help remediate application and infrastructure vulnerabilities</li><li>Contribute to secure coding standards and architectural security guidelines</li><li>Support infrastructure‑as‑code initiatives and cloud security practices</li><li>Evaluate existing security controls and recommend improvements</li><li>Assist in standardizing DevSecOps processes and documentation</li><li>Communicate security risks and recommendations to technical and business stakeholders</li></ul><p><strong>Qualifications</strong></p><ul><li>5+ years of experience in technology or information security roles</li><li>2+ years of hands‑on experience with DevSecOps, CI/CD pipelines, or cloud security</li><li>Experience with infrastructure‑as‑code tools (Terraform or similar)</li><li>Familiarity with containerized environments (Kubernetes, AKS, or equivalent)</li><li>Exposure to Azure or comparable cloud platforms</li><li>Strong collaboration and communication skills</li></ul><p><strong>Compensation & Benefits</strong></p><ul><li>Salary Range: $130,000 – $165,000</li><li>Competitive medical, dental, and vision coverage</li><li>401(k) plan with employer contribution</li><li>Generous paid time off and paid 
holidays</li><li>Family‑friendly leave programs and wellness support</li><li>Professional development and learning opportunities</li></ul><p><br></p><p><strong>Work Environment</strong></p><ul><li>Primarily in‑office with potential for hybrid flexibility</li><li>Collaborative, engineering‑driven culture</li><li>Opportunity to influence security practices across modern platforms</li></ul>
We are looking for an experienced DevOps Engineer to join our team on a long-term contract basis in Mequon, Wisconsin. This role is focused on enhancing analytics governance by identifying and resolving inconsistencies in business intelligence tools, streamlining BI logic, and integrating governance workflows. You will collaborate with cross-functional teams to ensure high-quality and consistent reporting standards across the enterprise.<br><br>Responsibilities:<br>• Create and maintain Python-based scripts to extract and analyze metric definitions from various BI tools, including Power BI, Tableau, and Domo.<br>• Standardize BI logic to identify and address duplication and inconsistencies across analytics platforms.<br>• Manage and organize results by storing custom metadata, tags, and issue records within governance platforms such as Atlan.<br>• Configure and integrate steward workflows, saved views, and custom attributes into governance systems.<br>• Collaborate with reporting and BI teams to establish and enforce metric naming conventions, certification criteria, and deprecation policies.<br>• Align semantic layers across BI and analytics tools to ensure consistency in reporting.<br>• Develop and execute CI/CD checks and validation processes for new metrics and analytics data.<br>• Ensure adherence to security and governance policies related to analytics and reporting systems.<br>• Facilitate steward reviews for metric certification and deprecation workflows.<br>• Provide technical support and enablement for data governance analysts and stewards.
<p>Overview</p><p>We are seeking a <strong>DevOps Engineer II</strong> to support a DevOps/SRE function focused on <strong>media analysis pipeline components</strong> tied to large‑scale data and analytics workflows. This team is primarily centered around <strong>Terraform automation, GitOps, and infrastructure as code</strong> within a cloud‑based environment.</p><p>This role is ideal for a <strong>self‑starter</strong> who can work with minimal oversight, solve problems independently, and collaborate effectively with engineering and product teams. Prior experience with video engineering or media pipelines is a plus but not required.</p><p>The role will involve working in an <strong>AWS environment</strong>, supporting how applications are securely deployed, exposed, and maintained across environments.</p><p><br></p><p>Key Responsibilities</p><ul><li>Support cloud deployments for <strong>media analysis pipeline components</strong></li><li>Collaborate with software engineers, product managers, and business stakeholders to ensure reliable deployments and stable operations</li><li>Build, maintain, and improve <strong>CI/CD pipelines</strong> for provisioning and deployment across environments</li><li>Automate operational processes, monitoring, and reliability tooling</li><li>Troubleshoot and resolve issues across development, test, and production environments</li><li>Build and maintain tools for deployment, monitoring, and operational support</li><li>Communicate project status, risks, and issues clearly to internal teams</li><li>Help streamline DevOps and SRE processes through <strong>automation and best practices</strong></li></ul><p><br></p><p>Biggest Needs</p><ul><li><strong>Strong Terraform experience (must‑have)</strong></li><li>Strong automation background in a DevOps or SRE environment</li><li>Hands‑on experience with <strong>Infrastructure as Code</strong></li><li><strong>AWS experience required</strong></li><li>Experience with <strong>Ansible</strong> and 
<strong>GitOps</strong> highly preferred</li><li>Strong Linux/Unix background</li><li>Security‑minded engineer who understands how applications are securely exposed and protected in cloud environments</li><li>Video engineering or media pipeline experience is a plus</li></ul><p><br></p><p>Top Skills</p><ul><li>Terraform</li><li>Automation / Infrastructure as Code</li><li>Ansible</li><li>GitOps</li><li>AWS</li><li>Linux / Unix</li><li>CI/CD</li><li>Security‑focused cloud deployment experience</li></ul>
We are looking for a skilled Infrastructure & Cloud Engineer to oversee the design and management of our organization's cloud and IT systems. This role is essential in ensuring the scalability, security, and efficiency of both cloud-based and on-premises environments. The ideal candidate will bring extensive knowledge of Microsoft technologies, including Azure and Microsoft 365, with a focus on supporting a mission-driven organization.<br><br>Responsibilities:<br>• Design and manage Azure cloud environments to ensure scalability and security.<br>• Administer and optimize Microsoft 365 tools, including Office 365, Intune, Entra ID, and Endpoint Protection.<br>• Plan and execute migrations between on-premises systems and cloud platforms with minimal disruption.<br>• Configure and maintain Windows Server environments while adhering to security best practices.<br>• Implement and enforce device management and endpoint protection policies.<br>• Develop automated workflows using tools such as Power Automate to enhance efficiency.<br>• Collaborate with internal teams to identify technology needs and deliver tailored solutions.<br>• Provide training and support to staff on IT infrastructure and cloud technologies.<br>• Document system configurations and processes to ensure compliance with policies and regulations.<br>• Monitor and improve system performance and reliability across all platforms.
<p><strong>Senior Database Administrator</strong></p><p>We're looking for a hands-on Senior AWS DBA to own the operational excellence, reliability, performance, and security of our global database infrastructure across AWS. You'll manage mission-critical, multi-region production systems and collaborate closely with Data Engineering and application teams.</p><p><br></p><ul><li>Administer and optimize production databases including Couchbase, DynamoDB, DocumentDB, Cosmos DB, and Snowflake</li><li>Manage Couchbase clusters including XDCR replication, monitoring, and troubleshooting</li><li>Perform installations, upgrades, patching, and configuration management across cloud and on-prem environments</li><li>Optimize query performance through indexing strategies, query tuning, and execution plan analysis</li><li>Plan and manage scaling strategies including sharding and capacity planning</li><li>Implement database security controls including access management and encryption</li></ul><p><strong>Backup, DR & Resilience</strong></p><ul><li>Design and maintain backup strategies with defined retention policies</li><li>Implement and validate restore procedures to meet RTO/RPO objectives</li><li>Develop PITR capabilities and execute disaster recovery drills</li><li>Manage AWS Backup, native database backups, and cross-region replication</li></ul><p><strong>Incident Response & Support</strong></p><ul><li>Participate in 24/7 on-call rotation for mission-critical systems</li></ul><p><strong>Monitoring & Automation</strong></p><ul><li>Implement monitoring and alerting using Datadog, CloudWatch, Azure Monitor, and native tools</li><li>Automate routine tasks using Python, Bash, PowerShell, and cloud-native tooling</li></ul><p><strong>Migrations & Documentation</strong></p><ul><li>Lead database migration and upgrade initiatives with minimal downtime</li></ul><p><strong>Security & Compliance</strong></p><ul><li>Implement IAM roles, network isolation, secrets management, and encryption</li><li>Manage credentials via AWS Secrets Manager</li><li>Support security audits and compliance with data residency requirements</li></ul><p><strong>What You'll Need</strong></p><ul><li>5+ years as a DBA managing production systems at scale</li><li>Strong hands-on experience with AWS database services: RDS/PostgreSQL, DynamoDB, DocumentDB, Aurora, AWS Backup</li><li>Working experience with Couchbase cluster management, XDCR, and N1QL</li><li>Deep understanding of database internals, indexing, and query optimization</li><li>Experience designing and testing backup, restore, PITR, and DR strategies</li><li>Scripting experience in Python, Bash, or PowerShell</li><li>Willingness to participate in 24/7 on-call rotation</li></ul>
We are looking for an experienced Cloud Security Engineer to join our team in New York, New York. In this role, you will play a critical part in safeguarding cloud-based infrastructure by deploying, managing, and maintaining security tools and solutions. You will proactively monitor systems for threats, respond to incidents, and collaborate with stakeholders to enhance the overall security posture of cloud environments.<br><br>Responsibilities:<br>• Install, configure, and maintain advanced security solutions to protect cloud-based systems and networks.<br>• Monitor infrastructure to detect and respond to unusual activities, intrusions, or security breaches.<br>• Conduct thorough investigations of security alerts and incidents, ensuring timely and effective resolutions.<br>• Perform risk assessments and vulnerability scans, recommending strategies to mitigate potential threats.<br>• Collaborate with teams to implement and manage security tools tailored to cloud environments.<br>• Develop and enforce policies, procedures, and guidelines to ensure compliance with security standards.<br>• Stay updated on emerging threats and vulnerabilities, adapting security measures as needed.<br>• Create detailed reports and documentation on incidents, findings, and recommendations for stakeholders.<br>• Conduct audits and reviews of cloud infrastructure to identify and address security gaps.<br>• Support compliance initiatives and ensure adherence to industry regulations and standards.
Our client is an early-stage, high-growth startup building products that are actively used and loved by real users. They are looking for a Full Stack Engineer (3–6 years of experience) who is excited about building impactful products in a fast-paced startup environment and who has interest in or exposure to AI. This is a fully onsite role in San Francisco (candidates must already live in the San Francisco Bay Area to be considered).<br><br>About the Role:<br>As a Full Stack Engineer, you'll play a key role in designing, developing, and maintaining modern web applications. You'll work across the stack to build clean, scalable features and collaborate closely with a small, highly motivated team. This is an opportunity for someone who genuinely enjoys building things, especially products that people use every day.<br><br>What You'll Do:<br>• Design, develop, and maintain full stack applications<br>• Build user-facing features using React and Next.js<br>• Develop and integrate backend services using Python (Flask)<br>• Write clean, efficient, and maintainable TypeScript code<br>• Debug, test, and optimize application performance<br>• Collaborate closely with cross-functional teammates in a fast-moving startup environment<br>• Contribute to AI-powered features and generative AI initiatives
<p><strong>Azure Developer</strong></p><p>We are seeking a knowledgeable <strong>Azure Developer</strong> to build cloud-native applications and services using Microsoft Azure technologies. This role is ideal for someone who enjoys designing scalable solutions, working with modern cloud tools, and collaborating closely with software and cloud engineering teams. The ideal candidate will have strong development skills, a deep understanding of Azure services, and a passion for cloud innovation.</p><p><strong>Responsibilities</strong></p><ul><li>Develop cloud-based applications using Azure Functions, App Services, Logic Apps, and related services</li><li>Build APIs, microservices, and serverless workloads using .NET, C#, or other Azure-supported languages</li><li>Implement Azure integrations using Service Bus, Event Hub, API Management, or Durable Functions</li><li>Create and optimize Azure DevOps pipelines for CI/CD automation</li><li>Develop Infrastructure-as-Code templates using ARM, Bicep, or Terraform</li><li>Collaborate with architects and DevOps teams to ensure scalable cloud designs</li><li>Troubleshoot application issues, performance bottlenecks, and integration problems</li><li>Monitor cloud workloads, logs, costs, and performance metrics</li><li>Maintain documentation for Azure solutions, APIs, and deployment procedures</li><li>Participate in code reviews, design sessions, and architectural discussions</li></ul><p><br></p>
<p>We are seeking a senior cloud and platform engineer to help design, build, and operate scalable, secure, and resilient cloud environments. This role partners closely with engineering, security, and infrastructure teams to deliver cloud platforms and tooling that improve agility, consistency, and long-term reliability across the organization.</p><p><br></p><p>In this role, you will act as a senior individual contributor, owning the implementation and ongoing evolution of public cloud infrastructure. </p><p><br></p><p>Key responsibilities</p><ul><li>Implement and evolve public cloud infrastructure across environments</li><li>Deploy new workloads into public cloud platforms and modernize existing workloads</li><li>Align cloud solutions with business objectives and industry best practices</li><li>Design and build internal tooling, systems, and platforms for cloud consumption</li><li>Enable faster, safer, and more consistent access to cloud resources for engineering teams</li><li>Promote standardization, reuse, and automation to support growth and operational stability</li><li>Support Infrastructure as Code practices using Terraform for repeatable, governed deployments</li><li>Build and operate container orchestration platforms, including Kubernetes</li><li>Contribute to highly available, multi-zone, and multi-region architectures</li><li>Diagnose and resolve complex issues across cloud, automation, and distributed systems</li><li>Design and support infrastructure CI/CD pipelines using tools such as GitLab CI and Argo CD</li><li>Collaborate on API-driven secrets management and configuration tooling</li><li>Support cloud and hybrid networking designs, including on-prem integrations and BGP</li></ul><p>Technology scope</p><ul><li>Public cloud platforms such as AWS, Azure, or Google Cloud Platform</li><li>IaaS, PaaS, and SaaS-based cloud solutions</li><li>Self-service platforms that improve developer productivity</li><li>Infrastructure automation and platform 
engineering tooling</li></ul><p>Interested candidates should submit resumes to sally.lander@roberthalf (.com)</p>
<p>The Senior Software Engineer is a hands-on technical leadership position responsible for designing, building, and maintaining high-quality software solutions. This role emphasizes both individual development work and ownership of design decisions for features and subsystems. Modern tools, including AI-assisted development and architectural support, are leveraged to drive delivery while maintaining accountability for technical outcomes.</p><p><br></p><p><strong>Responsibilities:</strong></p><p><br></p><ul><li>Design, implement, test, and maintain scalable, secure, and reliable applications and services.</li><li>Act as a senior technical contributor, with responsibility for the design and implementation of features and subsystems.</li><li>Contribute actively to development tasks, applying advanced coding expertise in several programming languages and frameworks.</li><li>Participate in architectural discussions and support incremental evolution of systems with team leads.</li><li>Conduct code reviews and mentor engineering team members, fostering best practices and ongoing improvement.</li><li>Translate requirements from product owners, business analysts, and stakeholders into technical solutions.</li><li>Identify and mitigate technical risks in assigned systems and projects.</li><li>Support and enhance cloud-based applications (Azure, AWS) with emphasis on performance, reliability, and scalability.</li><li>Collaborate effectively with onshore and offshore teams to ensure successful project execution.</li><li>Keep abreast of industry trends and new technologies to encourage innovation.</li><li>Utilize AI-assisted tools to expedite design, documentation, and implementation, while ensuring technical quality.</li><li>Lead and support AI-related initiatives, drawing on prior experience with AI/ML technologies; recommend and implement suitable AI tools and frameworks.</li><li>Test and demonstrate emerging AI tools and platforms via proofs of concept (POCs) to highlight 
business value.</li><li>Guide customers in leveraging AI to optimize business processes; support teams working on business-facing AI efforts.</li><li>Collaborate with stakeholders to contribute to defining an AI roadmap aligned with organizational strategy and technology objectives.</li></ul>
<p><strong>Job Title:</strong> Full-Stack Software Developer</p><p><strong>Location:</strong> Bergen County, New Jersey (Hybrid or Onsite Options Available)</p><p><br></p><p>We are seeking a skilled <strong>Full-Stack Software Developer</strong> to help build and maintain high-performance web applications across both backend systems and user-facing interfaces. The ideal candidate will have a strong technical foundation, a proactive mindset, and a willingness to take ownership of projects from concept through deployment.</p><p>Responsibilities</p><ul><li>Design, develop, and maintain full-stack web applications using <strong>ASP.NET, .NET Core, C#, SQL, and Entity Framework</strong></li><li>Develop and maintain <strong>RESTful APIs</strong> and backend services that support internal and external applications</li><li>Build and enhance responsive front-end interfaces using <strong>Angular and NGRX</strong></li><li>Integrate front-end components with backend APIs and services to deliver seamless user experiences</li><li>Collaborate with cross-functional teams to define requirements, system architecture, and technical solutions</li><li>Write clean, maintainable, and well-documented code across both front-end and back-end components</li><li>Participate in <strong>code reviews</strong> and help improve development standards and best practices</li><li>Troubleshoot and resolve issues across the full application stack, including database, API, and UI layers</li><li>Contribute to <strong>Agile development processes</strong>, including sprint planning, standups, and retrospectives</li></ul><p><br></p><p><br></p>
We are looking for a skilled Data Platform Engineer to join our team on a long-term contract basis in Cleveland, Ohio. In this role, you will be responsible for managing and maintaining cloud-based analytics platforms, ensuring their stability, performance, and reliability. This is an excellent opportunity to work in a dynamic environment with cutting-edge technologies, including Kubernetes and containerized applications.<br><br>Responsibilities:<br>• Oversee the daily administration and operational support of cloud-based analytics platforms.<br>• Install, configure, monitor, and troubleshoot platform components and services to ensure optimal performance.<br>• Manage deployments within Kubernetes environments, addressing any related issues.<br>• Monitor system health and integrate tools for logging, alerting, and observability.<br>• Resolve performance, connectivity, and access issues to maintain system reliability.<br>• Configure and manage data source connections and platform integrations.<br>• Identify and mitigate potential capacity or performance risks by recommending improvements.<br>• Collaborate with internal teams, including data, engineering, and infrastructure, to meet organizational goals.<br>• Provide user support in a customer-facing or internal capacity, addressing technical concerns effectively.