<p>We are looking for a Data Engineer to join a team focused on building reliable, scalable data solutions. In this role, you will create and enhance cloud-based data pipelines, organize data for analytics, and help ensure that business teams have access to trusted information. This position also partners closely with technical and non-technical stakeholders to turn reporting and data needs into practical engineering outcomes.</p><p><br></p><p>Responsibilities:</p><p>• Create and support scalable data ingestion and transformation workflows using Azure Data Factory, Databricks, and PySpark.</p><p>• Connect and consolidate data from enterprise platforms, operational databases, telematics feeds, APIs, and other internal or external sources.</p><p>• Structure and manage data within Azure Data Lake and lakehouse environments to support performance, accessibility, and long-term maintainability.</p><p>• Design curated datasets, data models, and schemas that improve usability for analytics, business intelligence, and downstream reporting.</p><p>• Apply governance and lineage practices through Unity Catalog while promoting strong data quality, consistency, and security standards.</p><p>• Work with business stakeholders and cross-functional teams to gather requirements, define technical specifications, and deliver data solutions aligned with operational needs.</p><p>• Improve pipeline stability and efficiency by troubleshooting failures, resolving performance issues, and refining storage and query strategies.</p><p>• Support Power BI reporting by preparing datasets, assisting with model improvements, and helping maintain reporting standards and governance practices.</p><p>• Use GitHub-based development practices for version control, peer review, CI/CD, and disciplined deployment processes.</p><p>• Mentor less-experienced engineers and contribute to a collaborative environment focused on continuous improvement and dependable delivery.</p>
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>Prior experience as a Data Governance Analyst is required. The client has a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate is a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Microsoft Purview and advance the organization's data governance practices. Administration experience with Microsoft Purview or a similar tool such as Collibra, Informatica, or Databricks is expected, and this role will help connect Microsoft Fabric to Purview.</p><p>Experience with Microsoft Purview is preferred. The Data Security layer of Purview is already implemented; this role will work with the Microsoft partner to implement the Data Governance layer (Unified Data Catalog, Data Quality, Data Lineage, and Data Health Management). See attached overview. The role requires excellent communication skills: someone who will lead change, help advance the data governance practice, and secure buy-in from stakeholders.</p>
<p>Robert Half is seeking a Data Engineer to build, scale, and lead high‑impact data solutions. This role combines hands‑on data engineering with team leadership, mentoring, and oversight of end‑to‑end analytics pipelines that turn raw data into actionable business insights.</p><p>This role is business-facing, working with departments across the organization to deliver data solutions.</p><p>This role is onsite in Albuquerque, New Mexico.</p><p><br></p><p>What You’ll Do</p><p>Lead and mentor a team of data engineers and analysts; set standards, review work, and support professional growth</p><p>Design, build, and oversee scalable ETL pipelines using Python, SQL, SSIS, and Airflow</p><p>Develop dimensional data models using Kimball methodology</p><p>Create dashboards and reports using Power BI and SSRS</p><p>Partner with business and IT stakeholders on analytics, ad hoc reporting, and data initiatives</p><p>Ensure data quality, governance, and compliance with PCI, PII, and regulatory standards</p><p>Automate workflows and reporting using Python, PowerShell, and modern analytics tools</p><p>Other duties as needed</p><p><br></p>
<p>Robert Half Technology is seeking a <strong>mid-to-senior level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (hybrid, 3 days onsite per week)</p><p><strong>Schedule:</strong> Monday-Friday (9AM-5PM PST)</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks</li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong></li><li>Architect and maintain <strong>data marts and data warehouse structures</strong></li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong></li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries)</li><li>Support migration and consolidation of data into Databricks</li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability</li><li>Define best practices for <strong>data storage, organization, and access</strong></li><li>Ensure alignment with existing compliance and data standards</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines using modern frameworks and tools.<br>• Implement ETL processes to ensure accurate and efficient data transformation.<br>• Optimize data storage and retrieval systems for performance and scalability.<br>• Collaborate with cross-functional teams to understand data requirements and deliver solutions.<br>• Utilize Apache Spark and Hadoop for large-scale data processing.<br>• Work with Databricks to streamline data workflows and enhance analytics.<br>• Apply machine learning techniques using tools like scikit-learn and Pandas.<br>• Integrate Kafka for real-time data streaming and processing.<br>• Analyze and troubleshoot data-related issues to ensure system reliability.<br>• Document processes and workflows to support future development and maintenance.
<p>We are looking for an experienced Data Engineer to design and support data exchange solutions that connect external business partners with internal systems. This role is primarily remote, supporting multiple office locations; candidates must live in North Carolina, within 2 hours of Greensboro. This role focuses on building reliable integration processes, transforming structured files and API-based data, and ensuring critical information is available for reporting and operational use. The ideal candidate brings strong technical depth in data movement and troubleshooting, along with a practical understanding of manufacturing and supply chain workflows.</p><p><br></p><p>Responsibilities:</p><p>• Build and maintain business-to-business data interfaces that onboard new partner organizations and align incoming data with internal database structures.</p><p>• Develop automated workflows that ingest, transform, validate, and deliver data using file-based exchanges, APIs, and structured transaction formats such as EDI and X12.</p><p>• Configure and manage end-to-end integration processes across system interfaces, including flat-file handling, file sharing, and reporting-related data movement.</p><p>• Lead data transformation efforts through the full lifecycle by designing solutions, testing functionality, deploying processes, and stabilizing production performance.</p><p>• Investigate integration failures or data quality issues, identify root causes, and implement corrective actions to restore reliable processing.</p><p>• Partner with business intelligence and reporting teams to provide access to accurate, usable data sources that support analysis and operational decision-making.</p><p>• Apply manufacturing and supply chain process knowledge to structure data flows that support purchasing, components, orders, and assembly-related transactions.</p><p>• Use available tools and platforms to execute integration projects independently, including extracting data
from enterprise applications and translating it into usable formats.</p><p>• Create scalable data pipelines that enable customer and order transactions to move through systems with minimal manual intervention.</p>
<ul><li>Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).</li><li>Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).</li><li>Implement and manage data orchestration and dependency management using Dagster or similar tools.</li><li>Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.</li><li>Optimize data models and storage strategies for performance, scalability, and cost efficiency.</li><li>Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.</li><li>Support CI/CD pipelines and infrastructure-as-code practices for data platforms.</li><li>Enforce data security, governance, and compliance best practices within Azure.</li></ul>
<p>We are supporting our client in hiring a Product Data Engineer who will take full ownership of their product information environment. This role centers on managing their PIM solution (Salsify), improving data structures, and building automated, API‑driven integrations that ensure product data is clean, scalable, and synchronized across platforms.</p><p>This position will be deeply involved in a major product‑data overhaul, including cleanup, restructuring, and long‑term system improvements. The ideal candidate is someone who enjoys solving data problems, building automated workflows, and improving the reliability of product information across systems.</p><p><br></p><p> Key Responsibilities</p><p>Product Data Platform Ownership</p><ul><li>Act as the primary administrator for the PIM platform</li><li>Define and maintain product attributes, hierarchies, and data relationships</li><li>Create validation rules, formulas, and workflows to enforce data standards</li><li>Manage permissions, governance, and platform configuration</li><li>Troubleshoot issues related to imports, exports, and publishing</li></ul><p>Integrations & Automation</p><ul><li>Manage integrations between the PIM and internal/external systems (eCommerce, retail, etc.)</li><li>Build and support API‑based data flows with a focus on reliability and scale</li><li>Develop automation using scripting (Python preferred)</li><li>Support event‑driven or automated pipelines to reduce manual work</li><li>Monitor integration performance and proactively resolve failures</li></ul><p>Product Data Improvements</p><ul><li>Contribute to a large‑scale product data cleanup and restructuring effort</li><li>Identify gaps in current data models and workflows</li><li>Partner with cross‑functional teams to define scalable data standards</li><li>Improve system design to support long‑term growth</li></ul><p>Channel Syndication</p><ul><li>Manage product data distribution to digital and retail channels</li><li>Ensure data meets 
channel‑specific requirements</li><li>Troubleshoot publishing issues and improve success rates</li><li>Support product launches and updates across channels</li></ul><p>Data Governance & Quality</p><ul><li>Establish naming conventions, validation rules, and governance standards</li><li>Define and track data quality KPIs (accuracy, completeness, timeliness)</li><li>Utilize or support data governance tools</li><li>Work with business teams to improve data accountability</li></ul><p>Reporting & Metrics</p><ul><li>Build dashboards and reports on data quality and system performance</li><li>Provide insights to leadership to support decision‑making</li><li>Track syndication outcomes and operational metrics</li></ul><p>Operational Support</p><ul><li>Handle day‑to‑day platform usage, enhancements, and issue resolution</li><li>Prioritize incoming requests and tickets</li><li>Ensure stability and reliability of product data operations</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This Contract to permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
<p>We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio. In this role, you will design, implement, and optimize data solutions that support business intelligence and analytics needs. If you have a passion for working with cutting-edge technologies and thrive in a fast-paced environment, this opportunity is for you.</p><p><br></p><p>Responsibilities:</p><p>• Develop and refine data models to ensure optimal performance and scalability.</p><p>• Design and implement data warehouse solutions for managing structured and unstructured data.</p><p>• Create and maintain data integration processes to support analytics and data-driven applications.</p><p>• Establish robust data quality and validation protocols to guarantee accuracy and consistency.</p><p>• Collaborate with business intelligence teams and stakeholders to gather requirements and deliver tailored solutions.</p><p>• Monitor and address issues within data pipelines, including performance bottlenecks and system errors.</p><p>• Research and adopt emerging technologies and best practices to enhance data engineering capabilities.</p>
<p>A manufacturing and distribution company is looking for a Data Engineer with 3+ years of experience to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will play a crucial part in designing and maintaining the data infrastructure that supports analytics, business intelligence, and data-driven decision-making using Snowflake, Matillion, and other tools. This position is in-office so you can work closely with the team. No 3rd parties please.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization.</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
<p>We are seeking a highly skilled Full Stack Data Engineer who thrives in building modern, scalable data platforms from the ground up. This is an opportunity to work on a cloud-native data stack, influence architecture decisions, and deliver solutions that directly power business insights and operations.</p><p>If you enjoy owning the full lifecycle—from data ingestion to application layer—this role will be a strong fit.</p><p><br></p><p><strong>What You’ll Do</strong></p><p>You will operate as a hands-on engineer across the full data stack:</p><ul><li>Design, build, and maintain scalable ELT pipelines and workflows</li><li>Develop and optimize data models and warehouse structures in Snowflake</li><li>Build full stack data applications and backend services</li><li>Write clean, efficient Python and SQL code</li><li>Develop reusable data frameworks and components</li><li>Implement automated testing for data quality and reliability</li><li>Build and maintain CI/CD pipelines (GitHub-based)</li><li>Create reporting and visualization solutions (Power BI or similar)</li><li>Monitor production systems and troubleshoot data issues proactively</li></ul><p><strong>Tech Stack</strong></p><ul><li>Data Platform: Snowflake</li><li>Languages: Python, SQL</li><li>Cloud: AWS / Azure / GCP (environment dependent)</li><li>DevOps: GitHub, CI/CD pipelines</li><li>Visualization: Power BI (or similar BI tools)</li></ul>
<p>Role Overview</p><p>This role is responsible for developing, enhancing, and maintaining enterprise analytics solutions, including dashboards, reporting assets, data models, and custom web applications used by operational and executive stakeholders. The position focuses on transforming complex data into actionable insights through scalable analytics solutions within an agile development environment.</p><p>The ideal candidate brings strong <strong>MicroStrategy expertise</strong>, modern <strong>data warehousing experience</strong>, and the ability to leverage <strong>AI‑assisted development tools</strong> to deliver high‑quality analytics products efficiently.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, and maintain <strong>MicroStrategy dashboards, reports, and semantic/data models</strong></li><li>Support and enhance <strong>custom analytics dashboards and web applications</strong> built with <strong>React</strong></li><li>Migrate select legacy reporting solutions to modern <strong>React‑based frameworks</strong></li><li>Perform <strong>data modeling, SQL development, and performance tuning</strong></li><li>Translate business and reporting requirements into technical specifications and analytics solutions</li><li>Use <strong>AI‑assisted coding tools</strong> (e.g., Cursor, Claude Code) to accelerate development, prototyping, and documentation while maintaining coding standards</li><li>Collaborate with technical leads and architects to adopt new tools, standards, and best practices</li><li>Optimize reporting performance across datasets, dashboards, and databases</li><li>Work effectively with <strong>remote and cross‑functional team members</strong></li></ul><p><br></p>
We are looking for an experienced SAP Developer to join our team in San Antonio, Texas. This role involves designing, implementing, and supporting SAP solutions tailored to meet business needs while ensuring high-quality development standards. The ideal candidate will possess strong technical expertise and a proactive approach to problem-solving within a collaborative environment.<br><br>Responsibilities:<br>• Develop and maintain SAP solutions, including SAP ERP and S/4HANA, to support business operations.<br>• Collaborate with functional teams to identify and propose technical solutions for new business requirements.<br>• Create, test, and implement integrations using SAP PI/PO and other SAP technologies.<br>• Write comprehensive technical specifications and provide training and support during go-live phases.<br>• Troubleshoot and resolve SAP-related issues, ensuring minimal disruption to business processes.<br>• Drive continuous improvement initiatives within the SAP environment, delivering system enhancements and projects.<br>• Assist in SAP template rollouts and adhere to established methodologies.<br>• Work in a global, multi-country SAP environment, contributing to international projects as required.<br>• Ensure compliance with configuration management practices and maintain documentation standards.<br>• Stay updated on emerging SAP technologies and contribute to their adoption within the organization.
A top-tier client of ours is seeking a Software Developer / Data Engineer to play a key role in supporting mission-critical data systems within a government intelligence environment. You’ll design and deliver high-performance data pipelines and architectures that drive advanced analytics and real-time insights.<br><br>Key Responsibilities:<br>• Design and implement scalable data pipelines and data architectures<br>• Develop and optimize data storage solutions (SQL, NoSQL, graph databases)<br>• Support ETL processes and ensure efficient data throughput and performance<br>• Work closely with stakeholders to translate data requirements into technical solutions<br>• Maintain and enhance data infrastructure using tools like Apache Airflow and Docker
<p><strong>About the Role:</strong></p><p>We are seeking a versatile and forward-thinking Full Stack Developer to join our dynamic team. The ideal candidate will be proficient across multiple programming languages and frameworks, with a strong foundation in AI integration, testing, and performance optimization. This role requires a developer who thrives in a fast-paced environment and is passionate about building secure, scalable, and innovative web applications.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and maintain full stack applications using Ruby, Ruby on Rails, Python, Django, HTML, CSS, and JavaScript.</li><li>Integrate AI frameworks, APIs, and plugins to enhance application capabilities.</li><li>Conduct thorough debugging, unit testing, and regression testing to ensure code quality.</li><li>Ensure applications meet security and compliance standards, including US and EU regulations.</li><li>Implement and manage payment systems integration.</li><li>Perform load testing to validate performance under high traffic conditions.</li><li>Collaborate with cross-functional teams to define and implement web architecture (preferred but not required).</li></ul><p><br></p><p><br></p>
<p><strong>Software Developer</strong></p><p>We are seeking a versatile <strong>Software Developer</strong> to build, optimize, and support applications across our technology stack. This role is ideal for someone who enjoys writing code, solving problems, and contributing to high-quality software solutions. The ideal candidate will be detail-oriented, collaborative, and eager to grow into more advanced development responsibilities.</p><p><strong>Responsibilities</strong></p><ul><li>Develop new software features using modern programming languages and frameworks</li><li>Maintain and enhance existing applications and services</li><li>Write clean, maintainable, well-structured code following best practices</li><li>Collaborate with other developers, QA teams, and product managers</li><li>Build APIs, integrations, and backend services</li><li>Debug application issues, performance problems, and system defects</li><li>Participate in sprint planning, technical discussions, and architecture reviews</li><li>Support deployments through CI/CD tools and version control</li><li>Maintain documentation for code, API endpoints, and application architecture</li></ul><p><br></p>
We are looking for a skilled Software Developer to contribute to the maintenance and enhancement of enterprise-level applications in Mechanicsburg, Pennsylvania. This role is ideal for candidates with expertise in COBOL development, Oracle databases, and UNIX environments who are ready to tackle challenges related to legacy systems. As this is a long-term contract position, you will have the opportunity to make a lasting impact on critical business operations.<br><br>Responsibilities:<br>• Develop, test, and maintain COBOL-based applications to ensure they meet business requirements.<br>• Execute and debug programs within a COBOL environment, addressing any issues efficiently.<br>• Analyze existing codebases to identify areas for enhancement or optimization.<br>• Create and refine database queries, stored procedures, and interactions with Oracle databases.<br>• Support job scheduling and batch processing activities to maintain system reliability.<br>• Diagnose and resolve production issues, ensuring minimal disruptions to operations.<br>• Collaborate with business analysts, QA teams, and infrastructure teams to align technical solutions with business needs.<br>• Prepare detailed documentation for code modifications, technical specifications, and system workflows.
<p>We are looking for a Software Developer to join our team in Toledo, Ohio and build reliable, scalable business applications. In this role, you will create new functionality, connect systems, and improve existing software used across the organization. The ideal candidate brings strong Java expertise, front-end development experience, and the ability to work closely with both technical and business teams to deliver practical solutions.</p><p><br></p><p>Responsibilities:</p><p>• Create, enhance, and support software applications and system integrations that meet business and technical needs.</p><p>• Partner with analysts and stakeholders to interpret requirements and convert them into effective application designs.</p><p>• Produce well-structured, efficient code using Java, Angular, and related technologies while maintaining high quality standards.</p><p>• Investigate production and application issues, identify root causes, and implement timely fixes.</p><p>• Build and refine database queries and data access logic to improve reliability and performance.</p><p>• Take part in peer reviews and help strengthen coding standards, development practices, and overall solution quality.</p><p>• Contribute to release, deployment, and DevOps-related activities to support smooth software delivery.</p><p>• Monitor application performance and recommend improvements that increase stability, speed, and long-term maintainability.</p>
We are looking for a Software Developer to join a technology-focused team supporting client initiatives in San Antonio, Texas. This position is suited for someone who can combine hands-on development expertise with strong project coordination, testing, and client support capabilities. The ideal candidate will contribute to software delivery, maintain clear communication across stakeholders, and help ensure solutions are implemented accurately, efficiently, and in line with client expectations.<br><br>Responsibilities:<br>• Build, update, and support software solutions using technologies such as .NET, C#, ASP.NET, JavaScript, and Symitar-related tools.<br>• Organize project activities by preparing timelines, coordinating deliverables, and helping drive work to completion within agreed deadlines and budget expectations.<br>• Partner with internal teams, clients, and external parties to align on project scope, monitor progress, and address issues that may affect delivery.<br>• Develop, execute, and refine testing approaches to confirm applications and enhancements perform as intended and satisfy business needs.<br>• Maintain thorough records of project activity, technical work, operating instructions, and other documentation needed for support and continuity.<br>• Serve as a reliable point of contact for customers by answering questions, resolving technical concerns, and providing guidance or training when needed.<br>• Track risks, milestones, and significant updates, then communicate status clearly to leadership and relevant stakeholders.<br>• Recommend practical improvements to documentation methods, workflow visibility, and overall project support processes.<br>• Contribute to broader team success by assisting with production support, participating in meetings, and taking on additional assignments as priorities evolve.
<p>We are looking for a skilled Software Developer to join our team on a contract basis. This role requires a proactive individual who can work independently while collaborating effectively with team members when needed. If you have expertise in Ruby on Rails and RESTful APIs, along with a strong background in software development, we invite you to apply.</p><p><br></p><p>Responsibilities:</p><p>• Develop, edit, and maintain code using Ruby on Rails to create robust and scalable applications.</p><p>• Access and integrate third-party services server-side through RESTful APIs.</p><p>• Design and implement new Ruby gems or enhance existing ones to optimize functionality.</p><p>• Utilize Backstage to manage and interact with databases effectively.</p><p>• Validate and debug JavaScript code to ensure quality and performance.</p><p>• Collaborate with team members remotely to troubleshoot and resolve technical challenges.</p><p>• Apply knowledge of .NET technologies, including C#, ASP.NET, and .NET Framework, in relevant scenarios.</p><p>• Conduct thorough testing and debugging to deliver high-quality solutions.</p><p>• Ensure timely completion of project deliverables while adhering to best practices in software development.</p>
We are looking for a skilled Software Developer to join our team in Washington, District of Columbia. In this role, you will focus on creating and optimizing solutions using the Microsoft Power Platform while collaborating with stakeholders to address business needs. This position offers an exciting opportunity to leverage your expertise in software development to drive innovative and efficient solutions.<br><br>Responsibilities:<br>• Collaborate with stakeholders, project managers, and technical teams to identify requirements and translate them into Power Platform solutions.<br>• Design and develop custom applications using Power Apps, ensuring smooth integration with existing systems.<br>• Create automated workflows using Power Automate to enhance business processes and minimize manual efforts.<br>• Develop and deploy conversational AI experiences with Copilot Studio to address organizational needs.<br>• Utilize programming languages such as JavaScript, Power Fx, and C# to build efficient and scalable solutions.<br>• Ensure that all solutions align with organizational goals and meet performance standards.<br>• Provide technical support and troubleshooting for Power Platform applications to maintain operational efficiency.<br>• Stay updated on emerging technologies and tools within the Microsoft ecosystem to continuously improve solutions.<br>• Collaborate with cross-functional teams to ensure successful implementation and delivery of projects.
<p>A growing product-focused technology team is seeking a <strong>Software Developer</strong> to help expand and improve a large operational software platform used by multi-location organizations. This role offers the opportunity to contribute to a mature application while helping shape new features and system capabilities as the platform scales to support new clients and use cases.</p><p><br></p><p>Developers on this team work closely together in an onsite, collaborative environment where ideas are encouraged and engineers are given ownership of their work. You will contribute to both the user-facing experience and the underlying application logic that powers complex operational workflows.</p><p><br></p><p><strong>What You’ll Work On</strong></p><p>The platform supports a wide range of operational functions for distributed organizations, including asset tracking, maintenance workflows, inventory management, and automated vendor ordering. As new clients adopt the platform, the engineering team continuously enhances features and builds new capabilities that can be leveraged across the broader user base.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, build, and maintain web applications using Ruby on Rails</li><li>Develop and enhance both front-end interfaces and back-end functionality</li><li>Create responsive, user-friendly interfaces using modern web technologies</li><li>Implement and maintain APIs that support integrations with external systems</li><li>Contribute to database design and performance optimization</li><li>Troubleshoot and resolve application issues across the full stack</li><li>Collaborate with other engineers to refine architecture and improve system scalability</li><li>Participate in code reviews and contribute to overall engineering best practices</li><li>Work with stakeholders to translate feature ideas into practical software solutions</li></ul><p><br></p>