<p>We are looking for a skilled Cloud Engineer to join our team in Raleigh, North Carolina. This role requires an individual with a strong background in cloud technologies, particularly in Azure environments. You will play a key part in supporting developers within the cloud infrastructure and collaborating with other specialized teams as needed.</p><p><br></p><p>Responsibilities:</p><p>• Provide robust support for the cloud environment utilized by the development team.</p><p>• Collaborate with security and M365 teams to ensure seamless integration and operations.</p><p>• Implement and manage automation tools such as Ansible to optimize workflows.</p><p>• Oversee scaling processes, including auto-scaling mechanisms, to maintain system efficiency.</p><p>• Manage and maintain Azure DevOps services and pipelines to streamline development processes.</p><p>• Utilize expertise in Azure Admin Center to optimize cloud operations.</p><p>• Monitor cloud environments to ensure high availability and performance.</p><p>• Troubleshoot and resolve issues within cloud systems in a timely manner.</p><p>• Recommend and implement improvements to cloud infrastructure based on industry best practices.</p><p>• Stay updated on emerging cloud technologies and certifications to enhance operational capabilities.</p>
Position: SENIOR DEVOPS ENGINEER - Build the future from the ground up<br>Location: REMOTE<br>Salary: UP TO $175K BASE + BONUS + EXCEPTIONAL BENEFITS<br><br>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***<br><br>Imagine joining a brand-new department at day one—where your ideas shape the foundation and your code powers the future.<br>A nationally recognized company with decades of success is launching a bold digital transformation. With the backing of a Fortune 500 parent and full executive support, this initiative is building a mobile-first product from scratch—a greenfield, 0-to-1 launch that’s poised to redefine how users interact with essential services.<br>We’re assembling a high-impact team of 20 innovators—engineers, designers, product leaders—who will architect and launch a modern digital experience. The first MVP is on the runway, and we’re looking for a Senior DevOps Engineer to help our client company take off.<br>Why You’ll Love This Role<br> • Start-up energy, enterprise stability: Move fast, innovate boldly, and build from scratch—with funding, executive support, and guidance from top-tier tech leadership.<br> • Impact from Day One: Join early, shape the infrastructure, and influence the DevOps culture for a product that will scale nationally.<br> • Greenfield Opportunity: No legacy systems for the new mobile build. No red tape. Just clean architecture and a blank canvas.<br>What You’ll Do<br> • Architect and maintain scalable, secure CI/CD pipelines for rapid deployment.<br> • Build infrastructure as code (IaC) using Terraform or CloudFormation.<br> • Optimize cloud environments for performance, scalability, and cost-efficiency.<br> • Implement robust monitoring, logging, and alerting systems.<br> • Automate operational tasks and enhance system resilience.<br> • Champion DevOps best practices across the development lifecycle.<br> • Ensure security and compliance through proactive guardrails and vulnerability management.<br> • Mentor junior engineers and help define DevOps standards.<br>What You Bring<br> • 5+ years in DevOps, SRE, or infrastructure engineering.<br> • Deep expertise in AWS (EC2, ECS, Lambda, S3, IAM, CloudWatch).<br> • Hands-on experience with CI/CD tools (GitHub Actions, Jenkins, AWS CodePipeline).<br> • Proficiency in containerization (Docker, Kubernetes).<br> • Strong scripting skills (Bash, Python, PowerShell).<br> • Familiarity with observability tools (Prometheus, Grafana, ELK stack).<br> • Bonus: Azure experience, DevSecOps knowledge, AWS certifications.<br><br>Reporting Structure<br> • Reports to: Director of Product Engineering<br> • Team size: You’ll be a foundational member of a 20-person launch team<br><br>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***
We are looking for a skilled Data Engineer to join our team in Ann Arbor, Michigan, and contribute to the development of a modern, scalable data platform. In this role, you will focus on building efficient data pipelines, ensuring data quality, and enabling seamless integration across systems to support business analytics and decision-making. This position offers an exciting opportunity to work with cutting-edge technologies and play a key role in the transformation of our data environment.<br><br>Responsibilities:<br>• Design and implement robust data pipelines on Azure using tools such as Databricks, Spark, Delta Lake, and Airflow.<br>• Develop workflows to ingest and integrate data from diverse sources into Azure Data Lake.<br>• Build and maintain data transformation layers following the medallion architecture principles.<br>• Apply data quality checks, validation processes, and deduplication techniques to ensure accuracy and reliability.<br>• Create reusable and parameterized notebooks to streamline batch and streaming data processes.<br>• Optimize merge and update logic in Delta Lake by leveraging efficient partitioning strategies.<br>• Collaborate with business and application teams to understand and fulfill data integration requirements.<br>• Enable downstream integrations with APIs, Power BI dashboards, and reporting systems.<br>• Establish monitoring, logging, and data lineage tracking using tools like Unity Catalog and Azure Monitor.<br>• Participate in code reviews, agile development practices, and team design discussions.
<p>Robert Half Marketing and Creative Atlanta is looking for a Traffic Manager to join a growing agency team in Midtown Atlanta. The Traffic Manager will facilitate workflow on retail and channel graphic projects. Duties include managing internal traffic systems, creating schedules and tracking project progress, routing work for review and approval, archiving files and assets, and opening and closing projects. This position partners with and builds strong relationships with internal clients and team members. This position is regarded by the client as a trusted advisor and works closely with designers and account/project managers to set project milestones, create project timelines, and track schedules to ensure all deliverables are completed on time and on brand. The Traffic Manager must have a thorough understanding of the creative/print production process.</p><p><br></p>
<p>We’re looking for a Front-End Developer with a strong marketing mindset to build high-performing websites, landing pages, and digital experiences. This role blends clean code with creative problem-solving to support campaigns, product launches, lead-generation funnels, and brand initiatives.</p><p> • Build and optimize marketing websites, microsites, and landing pages (HTML, CSS, JavaScript)</p><p> • Implement responsive, mobile-first layouts that align with brand and campaign goals</p><p> • Collaborate closely with marketing, design, and content teams to bring creative concepts to life</p><p> • Work inside CMS platforms (WordPress, Webflow, HubSpot, etc.) to update content and deploy new pages</p><p> • Translate design files (Figma, Adobe XD, or Canva exports) into pixel-perfect front-end builds</p><p> • Improve UX/UI for clarity, speed, accessibility, and conversion</p><p> • Optimize site performance, SEO structure, and page load speeds</p><p> • Troubleshoot front-end issues and ensure cross-browser compatibility</p><p> • Support marketing campaigns with interactive components, form integrations, and tracking setups</p><p> • Implement tracking, analytics, and A/B test variations (Google Analytics, GTM, Hotjar, etc.)</p>
<p>We are on the lookout for a Data Engineer in Basking Ridge, New Jersey (1-2 days a week on-site*). In this role, you will be required to develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also be expected to take a hands-on approach to application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS Technologies.</p><p>• Work extensively with the Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize Cloud Technologies and Data Visualization tools to enhance business intelligence.</p><p>• Work with various types of data including Clinical Trials Data, Genomics and Biomarker Data, Real World Data, and Discovery Data.</p><p>• Maintain familiarity with key industry best practices in a regulated “GxP” environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
<p><strong>Cloud Engineer</strong></p><p>We are seeking a talented <strong>Cloud Engineer</strong> to join our infrastructure team. This role is ideal for someone who enjoys building cloud-based solutions, optimizing deployments, and supporting scalable, secure environments. The ideal candidate will have strong problem-solving abilities, excellent communication skills, and a solid foundation in cloud architecture with room to grow into more advanced engineering responsibilities.</p><p><strong>Responsibilities</strong></p><ul><li>Deploy, configure, and manage cloud resources across Azure, AWS, and/or GCP</li><li>Implement cloud security controls including IAM/RBAC permissions, encryption, policies, and MFA</li><li>Build and maintain Infrastructure-as-Code templates using Terraform, ARM/Bicep, or CloudFormation</li><li>Support CI/CD pipelines and automated deployments for application and infrastructure releases</li><li>Monitor cloud performance, availability, cost usage, and alerts using native tools</li><li>Troubleshoot cloud networking issues including firewalls, VNETs/VPCs, routing, gateways, and load balancers</li><li>Support containerized workloads using Docker and Kubernetes</li><li>Collaborate with developers, DevOps teams, and systems administrators on cloud projects</li><li>Document cloud architectures, procedures, and operational guidelines</li><li>Assist with cloud migrations, modernization initiatives, and optimization efforts</li></ul><p><br></p>
<p>Robert Half is currently partnering with a well-established company in San Diego that is looking for a Senior Data Engineer experienced in BigQuery, DBT (Data Build Tool), and GCP. This is a full-time (permanent placement) position that is 100% onsite in San Diego. We are looking for a Senior Data Engineer who is passionate about optimizing systems with advanced techniques in partitioning, indexing, and Google Sequences for efficient data processing. Must have experience in DBT!</p><p>Responsibilities:</p><ul><li>Design and implement scalable, high-performance data solutions on GCP.</li><li>Develop data pipelines, data warehouses, and data lakes using GCP services (BigQuery, DBT, etc.).</li><li>Build and maintain ETL/ELT pipelines to ingest, transform, and load data from various sources.</li><li>Ensure data quality, integrity, and security throughout the data lifecycle.</li><li>Design, develop, and implement a new version of a big data tool tailored to client requirements.</li><li>Leverage advanced expertise in DBT (Data Build Tool) and Google BigQuery to model and transform data pipelines.</li><li>Optimize systems with advanced techniques in partitioning, indexing, and Google Sequences for efficient data processing.</li><li>Collaborate cross-functionally with product and technical teams to align project deliverables with client goals.</li><li>Monitor, debug, and refine the performance of the big data tool throughout the development lifecycle.</li></ul><p><strong>Minimum Qualifications:</strong></p><ul><li>5+ years of experience in a data engineering role in GCP.</li><li>Proven experience in designing, building, and deploying data solutions on GCP.</li><li>Strong expertise in SQL, data warehouse design, and data pipeline development.</li><li>Understanding of cloud architecture principles and best practices.</li><li>Proven experience with DBT, BigQuery, and other big data tools.</li><li>Advanced knowledge of partitioning, indexing, and Google Sequences strategies.</li><li>Strong problem-solving skills with the ability to manage and troubleshoot complex systems.</li><li>Excellent written and verbal communication skills, including the ability to explain technical concepts to non-technical stakeholders.</li><li>Experience with Looker or other data visualization tools.</li></ul>
<p>We are looking for an experienced DevOps Engineer to join our team in Alpharetta, GA. In this role, you will contribute to the development and optimization of complex CI/CD pipelines, ensuring efficient and secure deployment workflows across various environments and servers. This is a long-term contract position, offering the opportunity to work on innovative projects in a collaborative and dynamic environment.</p><p><br></p><p><strong>Location:</strong> Alpharetta, GA (Remote candidates considered)</p><p><strong>Duration:</strong> 1 year (Potential for extension)</p><p><strong>Pay:</strong> $56/hour with benefits (Health, Vision, Dental, 401K)</p><p><br></p><p><strong>Position Overview</strong></p><p>We are seeking a <strong>DevOps Engineer</strong> to manage and evolve a complex CI/CD pipeline across <strong>Octopus Deploy</strong>, <strong>GitHub</strong>, and <strong>AWS</strong>. </p><p>The role involves automating deployment workflows across <strong>50+ environments</strong> and <strong>400+ servers</strong>.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Customize and stabilize CI/CD pipelines using Octopus Deploy and GitHub Actions with a focus on security</li><li>Develop and maintain deployment scripts in PowerShell</li><li>Troubleshoot configurations involving Terraform and Ansible</li><li>Collaborate with development and SRE teams to reduce manual deployment steps</li><li>Manage secrets using AWS KMS</li><li>Create reusable scripts and templates for automation</li></ul>
We are looking for a skilled Data Engineer to join our team in Cleveland, Ohio. This long-term contract position offers the opportunity to contribute to the development and optimization of data platforms, with a primary focus on Snowflake and Apache Airflow technologies. You will play a key role in ensuring efficient data management and processing to support critical business needs.<br><br>Responsibilities:<br>• Design, develop, and maintain data pipelines using Snowflake and Apache Airflow.<br>• Collaborate with cross-functional teams to implement scalable data solutions.<br>• Optimize data processing workflows to ensure high performance and reliability.<br>• Monitor and troubleshoot issues within the Snowflake data platform.<br>• Develop ETL processes to support data integration and transformation.<br>• Work with tools such as Apache Spark, Hadoop, and Kafka to manage large-scale data operations.<br>• Implement robust data warehousing strategies to support business intelligence initiatives.<br>• Analyze and resolve data-related technical challenges promptly.<br>• Provide support and guidance during Snowflake deployments across subsidiaries.<br>• Document processes and ensure best practices for data engineering are followed.
<p>We are looking for a skilled Application Support Specialist to join our Information Technology team in Minneapolis, Minnesota. This role is ideal for someone who is detail-oriented, thrives in a dynamic environment, and enjoys providing technical assistance to employees across all levels. The position offers opportunities for growth and collaboration while ensuring the smooth operation of various applications and systems.</p><p><br></p><p>Responsibilities:</p><ul><li>Provide end-user support and training.</li><li>Maintain and document end-user support tickets along with other required applications.</li><li>Record, analyze, and resolve user issues, documenting actions taken and processes followed.</li><li>Identify ways to automate or improve user processes through analysis and technical insights.</li><li>Develop technical manuals to help resolve problems.</li><li>Test programs, resolve errors, and implement necessary updates or modifications.</li><li>Train users, address inquiries, and foster effective adoption of technology.</li><li>Approve, schedule, and oversee the installation and testing of new software products or updates.</li></ul><p><br></p>
<p>We are looking for a skilled Data Warehouse Engineer to join our team in Malvern, Pennsylvania. This Contract-to-Permanent position offers the opportunity to work with cutting-edge data technologies and contribute to the optimization of data processes. The ideal candidate will have a strong background in Azure and Snowflake, along with experience in data integration and production support. This role is onsite 4 days a week, non-negotiable. Please apply directly if you're interested.</p><p><br></p><p>Responsibilities:</p><p>• Develop, configure, and optimize Snowflake-based data solutions to meet business needs.</p><p>• Utilize Azure Data Factory to design and implement efficient ETL processes.</p><p>• Provide production support by monitoring and managing data workflows and tasks.</p><p>• Extract and analyze existing code from Talend to facilitate system migrations.</p><p>• Stand up and configure data repository processes to ensure seamless performance.</p><p>• Collaborate on the migration from Talend to Azure Data Factory, providing expertise on best practices.</p><p>• Leverage Python scripting to enhance data processing and automation capabilities.</p><p>• Apply critical thinking to solve complex data challenges and support transformation initiatives.</p><p>• Maintain and improve Azure Fabric-based solutions for data warehousing.</p><p>• Work within the context of financial services, ensuring compliance with industry standards.</p>
We are seeking a skilled Cloud Infrastructure Engineer to manage and optimize cloud-based environments across multiple platforms. This role involves maintaining high availability, performance, and security of cloud resources, while collaborating with internal teams and external partners to ensure seamless operations.<br><br>Key Responsibilities:<br>• Configure and maintain virtual machines, storage solutions, and application service plans across cloud platforms.<br>• Monitor system performance and recommend improvements for reliability and efficiency.<br>• Collaborate with DevOps and support teams to resolve infrastructure and user-facing issues.<br>• Stay current with cloud technologies and industry best practices to inform strategic decisions.<br>• Partner with vendors and consultants to support cloud implementation and ongoing operations.<br>• Review cloud billing and usage reports to identify cost-saving opportunities.<br>• Implement and maintain robust security protocols for cloud-based assets.<br>• Coordinate with third-party auditors to assess infrastructure security.<br>• Contribute to disaster recovery planning and execution.<br>• Maintain accurate diagrams and documentation of cloud architecture.<br>• Develop and update internal glossaries and resource catalogs.
We are looking for an experienced Systems Engineer to join our team in Springfield, New Jersey. In this role, you will design, implement, and support advanced Supervisory Control and Data Acquisition (SCADA) systems, contributing to the development of system standards and specifications. You will collaborate with various teams to enhance system functionality, provide technical expertise, and ensure seamless operations.<br><br>Responsibilities:<br>• Design and develop SCADA systems, ensuring functionality aligns with system standards and specifications.<br>• Generate and review Engineering Change Requests (ECRs) to evaluate technical and economic benefits.<br>• Provide technical support to marketing and sales teams for quotations and other activities.<br>• Deliver customer training sessions both onsite and in the field.<br>• Act as a technical liaison to address customer issues and perform field services when required.<br>• Conduct engineering studies to address system challenges and new requirements.<br>• Assist System Test technicians with technical project-related issues.<br>• Create detailed documentation, including parts lists, system layouts, and interconnection drawings.<br>• Develop and program RTUs, gateways, and databases for system integration.<br>• Travel as necessary to support customer needs and project requirements.
Responsible for being the Technical Lead for new product development of mechanical and electro-mechanical products, which includes driving and supporting the team through our phase gate product development process.<br><br>Lead design analysis (FEA, CFD), design reviews, DFMEAs, PFMEAs, and DFM reviews.<br><br>Act as product expert, working closely with the internal cross-functional team (supply chain, manufacturing engineering, sales, and quality) to develop new and support existing products.<br><br>Interact externally with both customers and suppliers on technical topics.<br><br>Work closely with the Program Manager to estimate product cost and development timeline.<br><br>Develop and write design validation test specifications, working closely with the test lab to make sure products are properly validated.<br><br>Mentor and train junior engineers and drafters to assist in developing a strong, consistent engineering organization.<br><br>Education & Experience<br><br>Minimum of five (5) years of relevant experience<br>Bachelor’s degree in Mechanical Engineering or related field<br>MS in Mechanical Engineering (preferred)<br><br>Requirements<br><br>Proficient in 3D mechanical design (SolidWorks preferred)<br>Proficient in one or more analysis tools (FEA, CFD, Tolerance stack)<br>Excellent oral and written communication skills<br>Attention to detail<br>Understand and apply GD&T standards<br>Ability to work in a fast-paced and challenging environment<br>Injection molding/die casting processes and design practices<br>Seal design<br>Knowledge of CSA Class 1, Div 2 and UL (preferred)
<p>Robert Half is recruiting for an experienced Windows Systems Engineer with Azure experience for our client in Green Bay, WI. This role will be critical to the monitoring, management, and maintenance of their Azure cloud environment, virtualization, storage, backups, and more.</p><p><br></p><p><strong>This is a Direct Hire role that will require a hybrid work schedule in Green Bay.</strong></p><p><br></p><p>Responsibilities:</p><ul><li>Facilitate effective communication and collaboration between IT teams.</li><li>Provide clear and professional recommendations to leadership to aid in decision-making processes.</li><li>Lead and direct team efforts on specific projects and technology direction when required.</li><li>Uphold security architecture standards, frameworks, and guidelines, with an emphasis on infrastructure security best practices.</li><li>Perform tasks related to troubleshooting, capacity planning, and performance management.</li><li>Utilize infrastructure strategies to guide business-oriented technology initiatives.</li><li>Conduct research on emerging technologies to assess potential business applications.</li><li>Translate business requirements into comprehensive written designs, adhering to industry standards.</li><li>Develop, evaluate, and refine project testing and implementation plans.</li><li>Assess development and testing strategies utilized by external vendors.</li><li>Produce and maintain documentation related to new or modified projects or technologies.</li><li>Ensure optimal system performance through consistent tuning, regular patching, and rigorous monitoring.</li><li>Oversee implementation of upgrades, patches, new applications, and infrastructure components.</li><li>Engage in proof-of-concept engineering initiatives to evaluate system additions or modifications.</li><li>Adhere to established change management processes.</li><li>Participate in the development and review of business continuity and disaster recovery strategies.</li><li>Perform additional duties as necessary or assigned.</li></ul>
We are looking for a skilled and collaborative Platform Engineer to join our team in Pleasant Prairie, Wisconsin. In this role, you will work closely with both the Application Development and Infrastructure teams to design, automate, and deploy robust platforms that support enterprise-level microservices. This is a long-term contract position requiring expertise in Kubernetes, Red Hat OpenShift, CI/CD pipelines, and automation tools.<br><br>Responsibilities:<br>• Collaborate with cross-functional teams to ensure seamless integration and operation of enterprise platforms.<br>• Design and implement scalable container orchestration solutions using open-source Kubernetes.<br>• Develop and automate deployment processes for applications, ensuring efficiency and reliability.<br>• Build and optimize secure CI/CD pipelines using tools such as Jenkins, GitLab CI, and ArgoCD.<br>• Create and maintain automated testing workflows to support continuous delivery.<br>• Manage platform transitions from traditional virtual machines to Kubernetes and Red Hat OpenShift environments.<br>• Utilize scripting languages like Bash and Python to streamline automation tasks.<br>• Troubleshoot and resolve platform-related issues to ensure optimal performance.<br>• Provide technical expertise and guidance to teams on Kubernetes and OpenShift usage.<br>• Document processes and best practices to support ongoing platform development.
<p>The DevOps Engineer will design, build, and maintain secure, scalable, and highly available cloud infrastructure while accelerating delivery of features through automation, CI/CD, and observability. You’ll be a force multiplier for engineering teams, ensuring systems are fast, resilient, and easy to operate.</p><p> </p><p>Key Responsibilities:</p><ul><li>Build, expand, and optimize cloud infrastructure (AWS, GCP, or Azure) using Infrastructure as Code (Terraform, Pulumi, CloudFormation, CDK)</li><li>Design and implement CI/CD pipelines that enable multiple daily deployments with zero downtime (GitHub Actions, GitLab CI, ArgoCD, Jenkins, CircleCI)</li><li>Automate everything: configuration management (Ansible, Chef, Puppet), application deployments, self-healing systems, and security hardening</li><li>Own production reliability: monitoring, alerting, log management, and on-call response (Datadog, Prometheus+Grafana, New Relic, PagerDuty, Opsgenie)</li><li>Drive progressive delivery practices: blue-green, canary, feature flags, and rollback strategies</li><li>Perform capacity planning, cost optimization, and performance tuning across compute, storage, and networking</li><li>Collaborate in project planning to evaluate technical feasibility, risks, and delivery trade-offs</li><li>Harden systems and pipelines for security and compliance (IAM, Vault, Trivy, Snyk, OPA/Gatekeeper)</li><li>Document architectures, runbooks, and processes while mentoring engineers on DevOps best practices</li></ul><p><br></p>
Position: Senior Field Information Technology Support Engineer<br><br>Job Description:<br>We are seeking a skilled and experienced Senior IT Engineer to join our team. The successful candidate will provide technical support and assistance to our clients on-site and remotely. The Senior Engineer will manage and design network environments, handle firewall and switch configurations, maintain phone systems, manage storage area networks, and support virtualized environments. Additionally, the candidate should possess expertise in cloud infrastructure management, Microsoft 365 administration, and supporting collaboration platforms such as Zoom Rooms and Teams Rooms.<br>Key Responsibilities:<br>• Travel to client sites regularly in MA and NH for technical support and assistance.<br>• Provide remote support for clients.<br>• Design, manage, and maintain diverse network environments.<br>• Configure and manage firewalls, primarily Meraki, Fortinet, and Palo Alto.<br>• Handle the configuration and management of switches, primarily HP/Aruba, Cisco, and Meraki.<br>• Manage wireless networks using Meraki and Ubiquiti technologies.<br>• Maintain and support phone systems, particularly 3CX and Avaya.<br>• Manage and maintain storage area networks, focusing on HPE and EqualLogic.<br>• Design, manage, and maintain VMware and Hyper-V environments.<br>• Oversee the management and administration of cloud infrastructure, primarily Azure and AWS.<br>• Manage and support Active Directory, both on-premises and Azure.<br>• Assist in migrating clients from on-premises to cloud infrastructure.<br>• Utilize PowerShell for automation and scripting tasks.<br>• Provide expertise in Windows Server administration.<br>• Deliver high-level support for Windows 10/11 environments.<br>• Offer essential support for Mac and Linux systems.<br>• Be familiar with Datto BCDR solutions.<br>• Perform Microsoft 365 administration.<br>• Support Zoom Rooms and Teams Rooms for collaboration purposes.<br>• Perform other job duties assigned by management.<br>Working Conditions:<br>Working conditions include office work and fieldwork as required. Candidates must regularly handle multiple tasks and queries at once, so the ability to manage a workload and prioritize effectively is key. The candidate will work out of the office, with one day from home.<br>Knowledge, Skills, and Abilities:<br>• Ability to travel to client sites regularly.<br>• Proficiency in providing remote technical support.<br>• Managed Services experience is preferred.<br>• Strong problem-solving skills and the ability to think creatively to find solutions.<br>• In-depth knowledge of managing and designing diverse network environments.<br>• Knowledge of Kaseya RMM/PSA for remote monitoring and management.
<p>Olivia from Robert Half is hiring an experienced DevOps Engineer to join our team in Albany, New York. This role involves managing complex infrastructure deployments and ensuring smooth operations of on-premises systems. The ideal candidate will have extensive hands-on expertise in automation tools, containerized environments, and CI/CD pipelines.</p>
<p>Are you an experienced Systems Engineer looking to take ownership of modern cloud infrastructure and drive technical excellence? Our client is seeking a <strong>Senior Systems Engineer with strong Azure experience</strong> to join their growing team and lead initiatives across cloud architecture, automation, security, and systems reliability.</p><p><br></p><p><strong>🚀 What You’ll Do</strong></p><p>As the Senior Systems Engineer, you will:</p><ul><li>Design, implement, and maintain scalable cloud infrastructure in <strong>Microsoft Azure</strong></li><li>Manage and optimize on-prem and cloud-based Windows server environments</li><li>Develop automation scripts and IaC using PowerShell, ARM/Bicep, or Terraform</li><li>Support identity & access management through Azure AD / Entra ID</li><li>Monitor system performance, security posture, and cost optimization</li><li>Lead projects around migrations, upgrades, and cloud modernization</li><li>Collaborate with cross-functional teams including Security, DevOps, and Networking</li><li>Troubleshoot complex issues across servers, networking, and cloud services</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Johnson City, Texas. In this role, you will design and optimize data solutions to enable seamless data transfer and management in Snowflake. You will work collaboratively with cross-functional teams to enhance data accessibility and support data-driven decision-making across the organization.<br><br>Responsibilities:<br>• Design, develop, and implement ETL solutions to facilitate data transfer between diverse sources and Snowflake.<br>• Optimize the performance of Snowflake databases by constructing efficient data structures and utilizing indexes.<br>• Develop and maintain automated, scalable data pipelines within the Snowflake environment.<br>• Deploy and configure monitoring tools to ensure optimal performance of the Snowflake platform.<br>• Collaborate with product managers and agile teams to refine requirements and deliver solutions.<br>• Create integrations to accommodate growing data volume and complexity.<br>• Enhance data models to improve accessibility for business intelligence tools.<br>• Implement systems to ensure data quality and availability for stakeholders.<br>• Write unit and integration tests while documenting technical work.<br>• Automate testing and deployment processes in Snowflake within Azure.
<p>We are looking for a Senior Systems Engineer to join our team in Springfield, Massachusetts. In this Contract to permanent position, you will play a pivotal role in designing, implementing, and enhancing technology solutions while leading critical IT projects. This opportunity is ideal for a skilled individual eager to contribute to strategic planning, mentor team members, and support the advancement of enterprise-level systems.</p><p><br></p><p>Responsibilities:</p><p>• Design, configure, and troubleshoot storage area networks (SANs), fiber channel infrastructure, and enterprise backup solutions across on-premises and cloud environments.</p><p>• Lead multiple projects focused on migrating and consolidating distributed systems into centralized enterprise models, including transitioning virtual servers to cloud-based platforms.</p><p>• Configure networks and firewalls while developing centralized logging and monitoring solutions to enhance system efficiency.</p><p>• Perform system upgrades and implement server configurations, including creating scripts and automating tasks across test, development, and production environments.</p><p>• Respond to system alerts and user-reported issues, applying independent judgment to resolve problems or escalate them when necessary.</p><p>• Collaborate with cross-functional teams to research and propose strategies for IT system improvements.</p><p>• Provide leadership and mentorship to IT engineers and administrators, fostering growth and knowledge sharing.</p><p>• Implement best practices for system security, including intrusion detection, virus management, and disaster recovery.</p><p>• Monitor and optimize system performance, ensuring the stability and reliability of critical infrastructure.</p>
<p>We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. The ideal candidate will play a key role in designing, implementing, and maintaining data applications while ensuring alignment with organizational data standards. This position requires expertise in handling large-scale data processing and a collaborative approach to problem-solving.</p><p><br></p><p>Responsibilities:</p><p>• Collaborate with teams to design and implement applications utilizing both established and emerging technology platforms.</p><p>• Ensure all applications adhere to organizational data management standards.</p><p>• Develop and optimize queries, stored procedures, and reports using SQL Server to address user requests.</p><p>• Work closely with team members to monitor application performance and ensure quality.</p><p>• Communicate effectively with users and management to resolve issues and provide updates.</p><p>• Create and maintain technical documentation and application procedures.</p><p>• Ensure compliance with change management and security protocols.</p>