<p>We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows.</p><p>This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ELT/ETL data pipelines and workflows</li><li>Develop and optimize Snowflake-based data warehouse solutions</li><li>Build and maintain data models and transformation logic to support analytics and reporting</li><li>Write efficient and high-quality Python and SQL code to support data engineering processes</li><li>Develop reusable data engineering frameworks and backend data services</li><li>Implement and maintain CI/CD pipelines using GitHub and related tooling</li><li>Build automated testing frameworks to ensure data quality and reliability</li><li>Create reporting and visualization solutions using tools such as Power BI</li><li>Monitor production data systems and resolve performance or reliability issues</li><li>Support continuous improvement of data architecture, processes, and standards</li></ul>
<p>We are looking for a Data Engineer to join a team focused on building reliable, scalable data solutions. In this role, you will create and enhance cloud-based data pipelines, organize data for analytics, and help ensure that business teams have access to trusted information. This position also partners closely with technical and non-technical stakeholders to turn reporting and data needs into practical engineering outcomes.</p><p><br></p><p>Responsibilities:</p><p>• Create and support scalable data ingestion and transformation workflows using Azure Data Factory, Databricks, and PySpark.</p><p>• Connect and consolidate data from enterprise platforms, operational databases, telematics feeds, APIs, and other internal or external sources.</p><p>• Structure and manage data within Azure Data Lake and lakehouse environments to support performance, accessibility, and long-term maintainability.</p><p>• Design curated datasets, data models, and schemas that improve usability for analytics, business intelligence, and downstream reporting.</p><p>• Apply governance and lineage practices through Unity Catalog while promoting strong data quality, consistency, and security standards.</p><p>• Work with business stakeholders and cross-functional teams to gather requirements, define technical specifications, and deliver data solutions aligned with operational needs.</p><p>• Improve pipeline stability and efficiency by troubleshooting failures, resolving performance issues, and refining storage and query strategies.</p><p>• Support Power BI reporting by preparing datasets, assisting with model improvements, and helping maintain reporting standards and governance practices.</p><p>• Use GitHub-based development practices for version control, peer review, CI/CD, and disciplined deployment processes.</p><p>• Mentor less-experienced engineers and contribute to a collaborative environment focused on continuous improvement and dependable delivery.</p>
We are looking for a hands-on Data Engineer to help build and expand an enterprise data platform in Mason, Ohio. This role will focus on creating a scalable Azure and Microsoft Fabric environment that brings together information from multiple business systems to support reliable reporting, analytics, and future data-driven innovation. The position is ideal for someone who enjoys designing core data architecture, improving data quality, and enabling business teams with trusted insights across manufacturing, service, supply chain, sales, and finance.<br><br>Responsibilities:<br>• Design and develop a modern enterprise data ecosystem using Azure and Microsoft Fabric, covering ingestion, storage, transformation, and delivery for analytics use cases.<br>• Create and support automated data pipelines that collect information from operational applications, external portals, and databases into a centralized environment.<br>• Structure raw, refined, and business-ready data layers so teams can access consistent data for dashboards, reporting, and self-service analysis.<br>• Consolidate and standardize data from platforms such as Epicor, JobBOSS, Salesforce, and other internal or third-party systems used across the organization.<br>• Build reusable data models and business logic that support reporting across order management, procurement, inventory, manufacturing operations, service, and finance.<br>• Introduce data validation, reconciliation, monitoring, and error-handling processes to strengthen data accuracy and reduce manual correction efforts.<br>• Partner with reporting teams by enabling governed semantic models, optimizing datasets, and supporting secure access to detailed transactional data in Power BI.<br>• Define and apply security controls, including role-based permissions and data access rules, in alignment with internal governance standards and privacy expectations.<br>• Maintain clear technical documentation for mappings, lineage, transformation rules, data 
definitions, and engineering standards.<br>• Assess existing integration methods, including manual and legacy approaches, and help implement more scalable and controlled data delivery patterns over time.
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
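The performance-optimization duties above (indexing, partitioning, query tuning) can be illustrated with a minimal, self-contained sketch. SQLite stands in for a production RDBMS here, and the table and column names are hypothetical; the point is only how an index changes the query plan from a full scan to an index search.

```python
import sqlite3

# Illustrative sketch (hypothetical table/columns): how adding an index
# changes a query plan. SQLite stands in for SQL Server/PostgreSQL/Oracle.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether a scan or an index search is used;
    # the human-readable detail is the 4th column of each plan row.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT SUM(total) FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # index search

print(before)
print(after)
```

The same diagnostic loop (inspect plan, add index or partition, re-inspect) applies on any of the platforms named above, though each engine has its own plan syntax.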
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>Experience as a Data Governance Analyst is required. The client has a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate will be a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Purview and advance the client's data governance practices. Administration experience with Microsoft Purview or a similar tool, such as Collibra, Informatica, or Databricks, is required. This role will assist in connecting Microsoft Fabric to Purview. Experience with Microsoft Purview is preferred. The Data Security layer of Purview is already implemented; this role will work with the Microsoft partner to implement the Data Governance layer (Unified Data Catalog, Data Quality, Data Lineage, Data Health Management). See attached overview. Excellent communication skills are essential: someone who will lead change, help advance the data governance practice, and get buy-in from stakeholders.</p>
<p>Robert Half is seeking a Data Engineer to build, scale, and lead high‑impact data solutions. This role combines hands‑on data engineering with team leadership, mentoring, and oversight of end‑to‑end analytics pipelines that turn raw data into actionable business insights.</p><p>This role is business-facing, working with departments across the organization to address data solutions.</p><p>This role is onsite in Albuquerque, New Mexico.</p><p><br></p><p>What You’ll Do</p><ul><li>Lead and mentor a team of data engineers and analysts; set standards, review work, and support professional growth</li><li>Design, build, and oversee scalable ETL pipelines using Python, SQL, SSIS, and Airflow</li><li>Develop dimensional data models using Kimball methodology</li><li>Create dashboards and reports using Power BI and SSRS</li><li>Partner with business and IT stakeholders on analytics, ad hoc reporting, and data initiatives</li><li>Ensure data quality, governance, and compliance with PCI, PII, and regulatory standards</li><li>Automate workflows and reporting using Python, PowerShell, and modern analytics tools</li><li>Other duties as needed</li></ul><p><br></p>
We are looking for a talented Data Engineer to join our team in Grand Rapids, Michigan. In this role, you will focus on designing, building, and optimizing robust data solutions using Snowflake and other cloud-based technologies. You will work closely with business intelligence and analytics teams to deliver scalable, high-performance data pipelines that support organizational goals.<br><br>Responsibilities:<br>• Design and implement scalable data models, schemas, and tables within Snowflake, including staging, integration, and presentation layers.<br>• Develop and optimize data pipelines using Snowflake tools such as Snowpipe, Streams, Tasks, and stored procedures.<br>• Ensure data security and access through role-based controls and best practices for data sharing.<br>• Build and maintain ETL pipelines leveraging tools like dbt, Matillion, Fivetran, Informatica, or Azure-native solutions.<br>• Integrate data from diverse sources such as APIs, IoT devices, and NoSQL databases to create unified datasets.<br>• Enhance performance by utilizing clustering, partitioning, caching, and efficient warehouse sizing strategies.<br>• Collaborate with cloud technologies such as AWS, Azure, or Google Cloud to support Snowflake infrastructure and operations.<br>• Implement automated workflows and CI/CD processes for seamless deployment of data solutions.<br>• Maintain high standards for data accuracy, completeness, and reliability while supporting governance and documentation.<br>• Work closely with analytics, reporting, and business teams to troubleshoot issues and deliver scalable solutions.
<p>Robert Half Technology is seeking a <strong>mid-to-senior level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (hybrid, 3 days onsite)</p><p><strong>Schedule:</strong> Monday-Friday (9AM-5PM PST)</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks </li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong> </li><li>Architect and maintain <strong>data marts and data warehouse structures</strong> </li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong> </li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries) </li><li>Support migration and consolidation of data into Databricks </li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability </li><li>Define best practices for <strong>data storage, organization, and access</strong> </li><li>Ensure alignment with existing compliance and data standards </li></ul><p><br></p>
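The Medallion Architecture mentioned above follows a simple pattern: raw data lands in a Bronze layer, is cleaned and typed into Silver, and is aggregated into business-ready Gold tables. A minimal, framework-free sketch (hypothetical records; a real Databricks build would use PySpark and Delta tables rather than plain Python):

```python
# Bronze: raw ingested records, kept as-is (including bad rows)
bronze = [
    {"order_id": "1", "amount": "19.99", "region": "west"},
    {"order_id": "2", "amount": "bad-value", "region": "west"},
    {"order_id": "3", "amount": "5.00", "region": "east"},
]

# Silver: cleaned and typed; rows that fail conversion are dropped
def to_silver(rows):
    out = []
    for r in rows:
        try:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "region": r["region"]})
        except ValueError:
            continue  # in practice, route bad rows to a quarantine table
    return out

# Gold: business-ready aggregate (revenue per region)
def to_gold(rows):
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)
```

The value of the pattern is that each layer is independently queryable and reproducible: Bronze preserves the source of truth, so Silver and Gold can always be rebuilt when transformation logic changes.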
<ul><li>Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).</li><li>Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).</li><li>Implement and manage data orchestration and dependency management using Dagster or similar tools.</li><li>Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.</li><li>Optimize data models and storage strategies for performance, scalability, and cost efficiency.</li><li>Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.</li><li>Support CI/CD pipelines and infrastructure-as-code practices for data platforms.</li><li>Enforce data security, governance, and compliance best practices within Azure.</li></ul>
<p>We are looking for an experienced Data Engineer to design and support data exchange solutions that connect external business partners with internal systems. This role is primarily remote, with occasional work from various office locations; we are looking for a candidate who lives in North Carolina, within 2 hours of Greensboro. This role focuses on building reliable integration processes, transforming structured files and API-based data, and ensuring critical information is available for reporting and operational use. The ideal candidate brings strong technical depth in data movement and troubleshooting, along with a practical understanding of manufacturing and supply chain workflows.</p><p><br></p><p>Responsibilities:</p><p>• Build and maintain business-to-business data interfaces that onboard new partner organizations and align incoming data with internal database structures.</p><p>• Develop automated workflows that ingest, transform, validate, and deliver data using file-based exchanges, APIs, and structured transaction formats such as EDI and X12.</p><p>• Configure and manage end-to-end integration processes across system interfaces, including flat-file handling, file sharing, and reporting-related data movement.</p><p>• Lead data transformation efforts through the full lifecycle by designing solutions, testing functionality, deploying processes, and stabilizing production performance.</p><p>• Investigate integration failures or data quality issues, identify root causes, and implement corrective actions to restore reliable processing.</p><p>• Partner with business intelligence and reporting teams to provide access to accurate, usable data sources that support analysis and operational decision-making.</p><p>• Apply manufacturing and supply chain process knowledge to structure data flows that support purchasing, components, orders, and assembly-related transactions.</p><p>• Use available tools and platforms to execute integration projects independently, including extracting data 
from enterprise applications and translating it into usable formats.</p><p>• Create scalable data pipelines that enable customer and order transactions to move through systems with minimal manual intervention.</p>
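The EDI/X12 transaction formats named above share a simple wire structure: segments terminated by `~` and elements separated by `*`. A hedged sketch of splitting a simplified X12-style message into segments (the sample message and delimiters are the common defaults, not from the posting; real EDI, with its fixed-width ISA envelope and qualifier rules, needs a proper translator):

```python
# Simplified X12-style purchase order fragment (hypothetical content)
raw = "ST*850*0001~BEG*00*SA*PO12345**20240101~PO1*1*10*EA*9.25~SE*4*0001~"

def parse_segments(message, seg_term="~", elem_sep="*"):
    """Split an X12-style message into {id, elements} records."""
    segments = []
    for seg in message.split(seg_term):
        if seg:  # skip the empty trailing piece after the last terminator
            parts = seg.split(elem_sep)
            segments.append({"id": parts[0], "elements": parts[1:]})
    return segments

segments = parse_segments(raw)
# Pull the purchase-order number from the BEG segment (3rd element)
po_number = next(s for s in segments if s["id"] == "BEG")["elements"][2]
print(po_number)
```

Mapping parsed elements like `po_number` onto internal database columns is the "align incoming data with internal structures" work the responsibilities describe.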
<p>Seeking a Data Engineer to build and maintain data pipelines and reporting systems.</p><p><strong>Responsibilities</strong></p><ul><li>Design and maintain ETL processes</li><li>Work with large datasets in SQL</li><li>Optimize database performance</li><li>Support BI/reporting teams</li></ul><p><br></p>
<p>We are seeking a skilled and motivated Data Engineer to join our team, with deep hands-on experience building and optimizing data pipelines and lakehouse solutions in Databricks. In this role, you will collaborate with cross-functional teams to design, develop, and operate scalable, reliable data products that drive business value.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain batch and streaming data pipelines using Databricks (Spark, Delta Lake, Jobs/Workflows).</li><li>Partner with data scientists, analysts, and application teams to deliver trusted, well-modeled data sets and features in the Databricks Lakehouse.</li><li>Optimize Spark jobs (partitioning, caching, join strategies) and Databricks cluster configurations for performance, scalability, and cost.</li><li>Implement data quality checks, observability, governance, and security controls (e.g., Unity Catalog, access policies) within Databricks.</li><li>Troubleshoot and resolve pipeline failures, data issues, and production incidents; perform root-cause analysis and implement preventative improvements.</li></ul><p><br></p>
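The data quality checks listed above typically take the form of per-column rules applied to incoming rows, with failures routed aside rather than silently dropped. A minimal illustration (hypothetical column rules; in a Databricks pipeline this logic would usually live in Delta Live Tables expectations or a similar mechanism rather than plain Python):

```python
# Rule set: column name -> predicate that valid values must satisfy
rules = {
    "order_id": lambda v: isinstance(v, int) and v > 0,
    "amount": lambda v: isinstance(v, float) and v >= 0,
}

def validate(rows, rules):
    """Split rows into valid records and rejects annotated with failed columns."""
    valid, rejected = [], []
    for row in rows:
        failures = [col for col, check in rules.items() if not check(row.get(col))]
        if failures:
            rejected.append((row, failures))
        else:
            valid.append(row)
    return valid, rejected

rows = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": -5, "amount": 3.5},    # fails the order_id rule
    {"order_id": 2, "amount": "oops"},  # fails the amount rule
]
valid, rejected = validate(rows, rules)
print(len(valid), len(rejected))
```

Keeping the rejects, with the reason each row failed, is what makes root-cause analysis of data incidents tractable later.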
We are looking for a Data Engineer to help shape and strengthen the organization’s data ecosystem in Spartanburg, South Carolina. This role focuses on building scalable data structures and reliable integration solutions that support analytics, operational reporting, and long-term business goals. The ideal candidate brings a strong background in modern data platforms and enjoys partnering with technical and business teams to deliver secure, high-quality data solutions.<br><br>Responsibilities:<br>• Develop and refine data models, storage frameworks, and analytical repositories that enable efficient access to trusted information.<br>• Create scalable architecture approaches that support enterprise objectives while improving performance, reliability, and long-term maintainability.<br>• Establish data design standards, architectural patterns, and governance practices that promote consistency, quality, and security across platforms.<br>• Partner with software developers, analytics teams, and business stakeholders to translate operational needs into practical data solutions.<br>• Build and enhance data pipelines and integration processes that move information accurately across systems and support reporting and analysis.<br>• Implement processes for master data, metadata, and data quality management to strengthen governance and regulatory compliance.<br>• Assess emerging tools, cloud technologies, and platform options to recommend solutions that balance cost, scalability, and functionality.<br>• Work closely with data engineering peers to encourage strong technical alignment, knowledge sharing, and continuous improvement across the team.
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines using modern frameworks and tools.<br>• Implement ETL processes to ensure accurate and efficient data transformation.<br>• Optimize data storage and retrieval systems for performance and scalability.<br>• Collaborate with cross-functional teams to understand data requirements and deliver solutions.<br>• Utilize Apache Spark and Hadoop for large-scale data processing.<br>• Work with Databricks to streamline data workflows and enhance analytics.<br>• Apply machine learning techniques using tools like scikit-learn and Pandas.<br>• Integrate Kafka for real-time data streaming and processing.<br>• Analyze and troubleshoot data-related issues to ensure system reliability.<br>• Document processes and workflows to support future development and maintenance.
<p>A manufacturing and distribution company is looking for a Data Engineer with 3+ years of experience to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will play a crucial part in designing and maintaining data infrastructure to support analytics and decision-making processes. You will be a key contributor in developing, optimizing, and maintaining the data infrastructure that supports analytics, business intelligence initiatives, and data-driven decision making using Snowflake, Matillion, and other tools. Position will be in-office to work closely with the team. No 3rd parties please.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization.</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
<p>Robert Half is seeking a <strong>Contract Data Engineer</strong> to support our client’s data and analytics initiatives. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that enable efficient data ingestion, transformation, and delivery. The ideal candidate has strong experience working with modern data platforms, cloud environments, and large-scale datasets.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li><strong>Data Pipeline Development:</strong> Design, build, and maintain scalable ETL / ELT pipelines to ingest, transform, and deliver data from multiple sources.</li><li><strong>Data Architecture:</strong> Develop and optimize data models, schemas, and warehouse structures to support analytics, reporting, and business intelligence needs.</li><li><strong>Cloud Data Platforms:</strong> Work within cloud environments such as <strong>AWS, Azure, or GCP</strong> to deploy and manage data solutions.</li><li><strong>Data Warehousing:</strong> Design and support enterprise data warehouses using platforms such as <strong>Snowflake, Redshift, BigQuery, or Azure Synapse</strong>.</li><li><strong>Big Data Processing:</strong> Develop solutions using big data technologies such as <strong>Spark, Databricks, Kafka, and Hadoop</strong> when required.</li><li><strong>Performance Optimization:</strong> Tune queries, pipelines, and storage solutions for performance, scalability, and cost efficiency.</li><li><strong>Data Quality & Reliability:</strong> Implement monitoring, validation, and alerting processes to ensure data accuracy, integrity, and availability.</li><li><strong>Collaboration:</strong> Work closely with Data Analysts, Data Scientists, Software Engineers, and business stakeholders to understand requirements and deliver data solutions.</li><li><strong>Documentation:</strong> Maintain detailed documentation for pipelines, data flows, and system 
architecture.</li></ul><p><br></p>
<p>The Senior Software Engineer is a hands-on technical leadership position responsible for designing, building, and maintaining high-quality software solutions. This role emphasizes both individual development work and ownership of design decisions for features and subsystems. Modern tools, including AI-assisted development and architectural support, are leveraged to drive delivery while maintaining accountability for technical outcomes.</p><p><br></p><p><strong>Responsibilities:</strong></p><p><br></p><ul><li>Design, implement, test, and maintain scalable, secure, and reliable applications and services.</li><li>Act as a senior technical contributor, with responsibility for the design and implementation of features and subsystems.</li><li>Contribute actively to development tasks, applying advanced coding expertise in several programming languages and frameworks.</li><li>Participate in architectural discussions and support incremental evolution of systems with team leads.</li><li>Conduct code reviews and mentor engineering team members, fostering best practices and ongoing improvement.</li><li>Translate requirements from product owners, business analysts, and stakeholders into technical solutions.</li><li>Identify and mitigate technical risks in assigned systems and projects.</li><li>Support and enhance cloud-based applications (Azure, AWS) with emphasis on performance, reliability, and scalability.</li><li>Collaborate effectively with onshore and offshore teams to ensure successful project execution.</li><li>Keep abreast of industry trends and new technologies to encourage innovation.</li><li>Utilize AI-assisted tools to expedite design, documentation, and implementation, while ensuring technical quality.</li><li>Lead and support AI-related initiatives, drawing on prior experience with AI/ML technologies; recommend and implement suitable AI tools and frameworks.</li><li>Test and demonstrate emerging AI tools and platforms via proofs of concept (POCs) to highlight 
business value.</li><li>Guide customers in leveraging AI to optimize business processes; support teams working on business-facing AI efforts.</li><li>Collaborate with stakeholders to contribute to defining an AI roadmap aligned with organizational strategy and technology objectives.</li></ul>
<p>Position Overview</p><p>We are seeking a delivery‑focused Data Automation Engineer to design and implement innovative automation solutions across a Microsoft Azure‑based data analytics platform. This role partners closely with engineering teams and stakeholders to translate business requirements into scalable data engineering and AI‑enabled solutions.</p><p>The ideal candidate is hands‑on with Azure Data Factory, Synapse Pipelines, Apache Spark, Python, and SQL, and brings experience building reliable ETL pipelines across SQL and NoSQL environments. This role emphasizes performance optimization, automation, and proactive data quality within Agile DevOps delivery models.</p><p><br></p><p>Key Responsibilities</p><p>Data Engineering & Automation</p><ul><li>Develop high‑performance data pipelines using Azure Data Factory, Synapse Pipelines, Spark Notebooks, Python, and SQL.</li><li>Design ETL workflows supporting advanced analytics, reporting, and AI/ML use cases.</li><li>Implement data migration, integrity, quality, metadata, and security controls across pipelines.</li><li>Monitor, troubleshoot, and optimize pipelines for availability, scalability, and performance.</li></ul><p>Performance Testing & Optimization</p><ul><li>Execute ETL performance testing and validate load performance against benchmarks.</li><li>Analyze pipeline runtime, throughput, latency, and resource utilization.</li><li>Support tuning activities (e.g., query optimization, partitioning, indexing).</li><li>Validate data completeness and consistency after high‑volume processing.</li></ul><p>Platform Collaboration & DevOps Support</p><ul><li>Collaborate with DevOps and infrastructure teams to optimize compute, memory, and scaling.</li><li>Maintain versioning and configuration control across environments.</li><li>Support production, testing, development, and integration environments.</li><li>Actively participate in Agile delivery processes including Program Increment planning.</li></ul>
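The performance-testing duties above (runtime, throughput, latency analysis) reduce to instrumenting a pipeline stage and comparing the measurements against a benchmark. A sketch with a synthetic workload (the stage, data volume, and any threshold are illustrative, not from the posting; in Azure these figures would come from pipeline run metrics rather than local timing):

```python
import time

def run_stage(records):
    # Stand-in transformation: keep even values and double them
    return [r * 2 for r in records if r % 2 == 0]

records = list(range(100_000))
start = time.perf_counter()
out = run_stage(records)
elapsed = time.perf_counter() - start

throughput = len(records) / elapsed  # records per second
print(f"processed {len(out)} rows in {elapsed:.4f}s ({throughput:,.0f} rec/s)")
```

Capturing the same measurement before and after a tuning change (query optimization, partitioning, indexing) is what turns "it feels faster" into a validated benchmark result.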
<p>Robert Half is proactively building a network of Cybersecurity Engineers and Security-focused Infrastructure professionals for upcoming opportunities across the Sacramento area.</p><p><br></p><p>This posting is part of an ongoing talent initiative focused on identifying individuals with experience in cybersecurity engineering, cloud security, infrastructure security, security operations, and enterprise risk mitigation. While this may not represent a specific open requisition today, experienced candidates will be considered for upcoming contract, contract-to-permanent, and permanent opportunities with our clients.</p><p><br></p><p>We regularly support organizations across healthcare, financial services, manufacturing, logistics, public sector, and professional services environments seeking individuals who can help secure modern infrastructure, support compliance initiatives, strengthen cloud environments, and improve overall security posture.</p><p><br></p><p>Typical Responsibilities May Include:</p><ul><li>Supporting enterprise cybersecurity initiatives and infrastructure hardening</li><li>Managing security tools such as firewalls, endpoint protection, SIEM, MFA, and vulnerability management platforms</li><li>Assisting with cloud security initiatives across Azure, AWS, or hybrid environments</li><li>Monitoring and responding to security incidents and alerts</li><li>Supporting compliance and audit efforts related to security frameworks and best practices</li><li>Partnering with infrastructure, networking, and leadership teams to improve security operations</li><li>Helping implement policies, procedures, and security controls across enterprise environments</li></ul><p>This is an excellent opportunity for individuals interested in staying connected to the local technology market and hearing about future cybersecurity and infrastructure security opportunities as they arise.</p>
<p>A rapidly growing software team is looking for a <strong>Backend Developer</strong> to help expand and scale a complex operational platform used by organizations with large, distributed locations. This role focuses on the core systems that power the application, including application logic, data architecture, integrations, and performance optimization.</p><p><br></p><p>You will work closely with a small, collaborative engineering team to enhance an established platform while helping design new capabilities as the product continues to expand into new industries and use cases. This is an opportunity to have meaningful input on architecture and help shape how the system evolves over time.</p><p><br></p><p><strong>What You’ll Work On</strong></p><p>The platform supports operational workflows such as asset tracking, service and maintenance management, inventory monitoring, and automated vendor ordering. As the platform grows and new organizations adopt it, the engineering team continuously builds new features, expands integrations, and improves system scalability.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design and build backend functionality using Ruby on Rails</li><li>Develop and maintain application logic within the model, controller, and database layers</li><li>Create and maintain RESTful APIs used by internal and external systems</li><li>Optimize database queries and data structures for performance and reliability</li><li>Implement integrations with third-party systems and vendor platforms</li><li>Support the scalability and reliability of a large operational application</li><li>Collaborate with engineers to refine architecture and improve system design</li><li>Participate in code reviews and contribute to engineering standards</li><li>Troubleshoot and resolve complex backend and data-related issues </li></ul><p><br></p>
<p>Robert Half is seeking an experienced <strong>Senior Java Backend Engineer</strong> to support a high-impact initiative with a consulting services firm based in Seattle, WA. In this role, you will deliver robust, end-to-end solutions that power critical business functionality. Apply today!</p><p> </p><p><strong>Duration: </strong>3 months with potential to extend</p><p><strong>Schedule: </strong>Monday–Friday, core business hours</p><p><strong>Location:</strong> Onsite in Seattle preferred; remote OK.</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, and maintain scalable backend services and APIs for the MDS project using Java and Spring Boot</li><li>Break down complex requirements into small, incremental deliverables to support rapid development and consistent sprint execution</li><li>Collaborate with backend, frontend, and EDO Operations teams to deliver seamless, end-to-end solutions</li><li>Write clean, efficient, and maintainable code aligned with best practices and coding standards</li><li>Troubleshoot and resolve complex technical issues, including performance bottlenecks and system inefficiencies</li><li>Participate in code reviews, architectural discussions, and knowledge-sharing initiatives</li><li>Ensure the security, scalability, and reliability of backend systems</li><li>Document technical designs, workflows, and system integrations</li></ul>
<p>Position: Mobile Full-Stack Software Engineer | Multiple Perm Opps | Mid–Senior level</p><p>Location: Remote</p><p>Salary: $150,000–$173,000 base + bonus + exceptional benefits</p><p><br></p><p>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***</p><p>Mobile Full Stack Software Engineer (Mid–Senior)</p><p>Build a 0→1 Mobile Product. Shape a New Digital Future.</p><p>A well-funded, nationally recognized enterprise, backed by one of the most respected names in the world, is launching a bold digital transformation initiative. After 20+ years of dominating its space, the organization is now investing heavily in a brand-new, mobile-first digital department built from the ground up.</p><p>This is greenfield, 0→1 product creation on a massive scale, with executive sponsorship, secured funding, and a mission to build a beautifully unified digital experience across dozens of business lines.</p><p>And we’re building the founding engineering team.</p><p>Why This Role Is Special</p><p>• Startup energy, zero startup risk: the pace, innovation, and creativity of a 0→1 build, fully funded by a Fortune-level parent.</p><p>• Brand-new digital organization: top-tier engineers, product leaders, designers, and DevOps experts.</p><p>• Executive-level backing: strong support and direct guidance from enterprise leadership.</p><p>• High impact: join at the ground floor and influence architecture, product decisions, engineering culture, and scale.</p><p>• A mobile-first platform that will unify experiences across dozens of companies.</p><p>If you want to build something meaningful, this is the place.</p><p>What You’ll Do</p><p>As a Mobile Full Stack Software Engineer, you’ll be a key contributor in building a brand-new mobile-first platform.
You’ll help define the foundation of the MVP, partner closely with product and design, and build high-quality, scalable services in an event-driven, microservices environment.</p><p>You will:</p><p>• Build and iterate on a Flutter/Dart mobile web app for the MVP launch.</p><p>• Develop backend microservices using NestJS, Node.js, Postgres, and modern API patterns.</p><p>• Design and scale cloud-native services in AWS with a strong focus on performance, security, and reliability.</p><p>• Contribute to architecture for a multi-domain, multi-subsidiary enterprise environment.</p><p>• Collaborate with product, UX, DevOps, QA, and business stakeholders to deliver in a fast, iterative, MVP-first culture.</p><p>• Bring both startup scrappiness and enterprise engineering discipline—knowing when to move fast and when to build for scale.</p><p>What You Bring</p><p>We’re looking for builders—engineers who thrive in ambiguity, love solving problems from scratch, and want to influence a product’s DNA from the earliest days.</p><p>Ideal experience includes:</p><p>• 2+ years of modern mobile development (Flutter/Dart preferred).</p><p>• Full stack engineering across mobile and backend ecosystems.</p><p>• Experience with:</p><p>○ Flutter / Dart</p><p>○ NestJS / Node.js</p><p>○ Postgres</p><p>○ Microservices</p><p>Hiring both Mid-Level and Senior-Level Engineers.</p><p><br></p>
<p>We are looking for a skilled Desktop Engineer to join our team in Fort Myers/Naples area in Florida. In this role, you will be responsible for managing software deployments, troubleshooting technical issues, and maintaining system security across enterprise environments. This is a fully on-site long-term contract position offering the opportunity to contribute to essential IT operations in the rental/leasing services industry.</p><p><br></p><p>Responsibilities:</p><p>• Create, test, and deploy software packages using Tanium and other platforms to ensure successful rollouts.</p><p>• Troubleshoot deployment failures and escalate critical issues when necessary to maintain business operations.</p><p>• Take ownership of support tickets, resolving issues independently and collaborating with IT teams and end users.</p><p>• Develop and test software packages using PowerShell App Deployment Toolkit for compatibility and functionality.</p><p>• Perform quarterly updates of endpoint protection tools and manage Java updates to enhance system security.</p><p>• Collaborate with the Vulnerability Management Team to address high-risk security vulnerabilities effectively.</p><p>• Document deployment processes, troubleshooting steps, and system configurations to support organizational knowledge.</p><p>• Monitor third-party automation tools and workflows to ensure consistent operation.</p><p>• Analyze and provide feedback on technical solutions to improve IT processes and align with industry standards.</p>
We are looking for an Oracle Integration Cloud Developer to support a growing enterprise integration landscape in Irvine, California. This Long-term Contract position will focus on building and enhancing cloud-based integrations across Oracle SaaS environments, with an emphasis on reliable data movement and scalable interface design. The role works closely with HR and Finance technology stakeholders to strengthen data ownership models and deliver efficient, well-structured integration solutions.<br><br>Responsibilities:<br>• Design, develop, and maintain integration solutions primarily within Oracle Integration Cloud to connect Oracle SaaS applications and related enterprise platforms.<br>• Partner with HR and Finance information systems teams to support data stewardship objectives and ensure integrations align with business ownership needs.<br>• Create and optimize interfaces that move data accurately between source and target systems while supporting performance, reliability, and maintainability.<br>• Contribute to the evolution of integration methods by reducing dependence on legacy extract and reporting-based approaches where appropriate.<br>• Support a high-volume integration environment by monitoring existing interfaces, troubleshooting issues, and implementing enhancements as business demands grow.<br>• Work across core Oracle Fusion Cloud modules and connected systems, including timekeeping-related interfaces that feed Oracle Time and Labor.<br>• Produce technical documentation, mapping details, and development standards to support consistent delivery and long-term supportability.<br>• Collaborate with cross-functional teams to test, deploy, and refine integrations while helping minimize reliance on older middleware tools such as Boomi.
We are looking for an experienced Cyber Security Engineer to join our team in North Charleston, South Carolina. In this Contract to permanent position, you will play a critical role in supporting mission-essential systems and ensuring the security of Department of Defense (DoD) intelligence and command-and-control operations. This opportunity requires a strong background in cybersecurity and the ability to work collaboratively with cross-functional teams to deliver secure, reliable, and high-performing solutions.<br><br>Responsibilities:<br>• Provide recurring security patch updates and application maintenance for military intelligence and command-and-control systems.<br>• Conduct integration, functional, and operational testing to validate system reliability and performance.<br>• Perform Quality Assurance (QA) and Quality Control (QC) activities to ensure compliance and mission readiness.<br>• Implement and maintain cybersecurity controls in accordance with DoD standards and best practices.<br>• Manage configuration management processes, including version control, change tracking, and baselining.<br>• Create and maintain detailed technical documentation for system users and stakeholders.<br>• Support the development and sustainment of secure and resilient systems for C5ISR, information operations, and enterprise IT environments.<br>• Collaborate with cross-functional teams to develop solutions that meet operational requirements and enhance mission capabilities.<br>• Enhance deployment and update processes to improve system efficiency and minimize downtime.