<p>A growing product-focused technology team is seeking a <strong>Software Developer</strong> to help expand and improve a large operational software platform used by multi-location organizations. This role offers the opportunity to contribute to a mature application while helping shape new features and system capabilities as the platform scales to support new clients and use cases.</p><p><br></p><p>Developers on this team work closely together in an onsite, collaborative environment where ideas are encouraged and engineers are given ownership of their work. You will contribute to both the user-facing experience and the underlying application logic that powers complex operational workflows.</p><p><br></p><p><strong>What You’ll Work On</strong></p><p>The platform supports a wide range of operational functions for distributed organizations, including asset tracking, maintenance workflows, inventory management, and automated vendor ordering. As new clients adopt the platform, the engineering team continuously enhances features and builds new capabilities that can be leveraged across the broader user base.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, build, and maintain web applications using Ruby on Rails</li><li>Develop and enhance both front-end interfaces and back-end functionality</li><li>Create responsive, user-friendly interfaces using modern web technologies</li><li>Implement and maintain APIs that support integrations with external systems</li><li>Contribute to database design and performance optimization</li><li>Troubleshoot and resolve application issues across the full stack</li><li>Collaborate with other engineers to refine architecture and improve system scalability</li><li>Participate in code reviews and contribute to overall engineering best practices</li><li>Work with stakeholders to translate feature ideas into practical software solutions</li></ul><p><br></p>
<p>Overview</p><p>We are seeking an AI / Machine Learning Engineer to design, build, deploy, and govern scalable ML and AI solutions across the enterprise. This role combines hands‑on model development with strong ownership of AI governance, risk management, and responsible AI practices to ensure models are explainable, secure, compliant, and production‑ready.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, train, and deploy machine learning and AI models for structured and unstructured data use cases</li><li>Build end‑to‑end ML pipelines including data ingestion, feature engineering, training, evaluation, deployment, and monitoring</li><li>Implement MLOps practices for versioning, CI/CD, model lifecycle management, and automated retraining</li><li>Collaborate with data engineers, product managers, and business stakeholders to translate requirements into AI solutions</li><li>Monitor model performance, drift, bias, and data quality in production environments</li><li>Optimize model accuracy, scalability, latency, and cost efficiency</li><li>Develop reusable ML components, libraries, and frameworks to accelerate delivery</li></ul><p>AI Governance & Risk Responsibilities</p><ul><li>Embed AI governance controls across the model lifecycle (design, development, testing, deployment, decommissioning)</li><li>Ensure models meet enterprise standards for explainability, transparency, fairness, and auditability</li><li>Implement model documentation, lineage, and traceability (data sources, features, assumptions, limitations)</li><li>Perform model validation activities including bias testing, robustness testing, and performance benchmarking</li><li>Support regulatory, compliance, and legal requirements (e.g., model risk management, data privacy, internal audits)</li><li>Partner with security teams to ensure secure model development and protection of sensitive data</li><li>Contribute to responsible AI policies, standards, and best practices across the organization</li></ul>
<p>We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows.</p><p>This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ELT/ETL data pipelines and workflows</li><li>Develop and optimize Snowflake-based data warehouse solutions</li><li>Build and maintain data models and transformation logic to support analytics and reporting</li><li>Write efficient and high-quality Python and SQL code to support data engineering processes</li><li>Develop reusable data engineering frameworks and backend data services</li><li>Implement and maintain CI/CD pipelines using GitHub and related tooling</li><li>Build automated testing frameworks to ensure data quality and reliability</li><li>Create reporting and visualization solutions using tools such as Power BI</li><li>Monitor production data systems and resolve performance or reliability issues</li><li>Support continuous improvement of data architecture, processes, and standards</li></ul>
<p>SENIOR Salesforce Developer - PERM FTE. 100% REMOTE. MUST RESIDE IN IOWA, DALLAS, TX, OR AUSTIN, TX ONLY!</p><p>🔥 Ready to take your Salesforce career to the next level? Join a national company in acquisition mode and BE A PART OF EXPLOSIVE GROWTH IN THE UPCOMING YEAR! MULTIPLE HIRES ON A BRAND NEW TEAM!</p><p>POSITION: Senior Salesforce Developer - PERM FTE. 100% REMOTE. MUST RESIDE IN IOWA, DALLAS, TX, OR AUSTIN, TX ONLY!</p><p>- Sales Cloud, Service Cloud & Experience Cloud – MOSTLY CUSTOM Salesforce development work! We’re conducting interviews THIS WEEK!</p><p><br></p><p>Location: 🌍 100% Remote if you live in Iowa or Texas – you must reside at an address based in Iowa or Texas!</p><p><br></p><p>Type: 🏢 Direct Hire Permanent = EAD, Green Card, or US Citizen only. No OPT, F-1, or H-1B visa status. NO SPONSORSHIP CAN BE PROVIDED!</p><p>SALARY: 💰 Our TOTAL COMP package is simply unmatched! Earn $150K-$160K base, up to $175K including annual bonus!</p><p>You’ll have the opportunity to drive Salesforce innovation across large-scale projects while keeping your technical edge sharp with hands-on APEX coding!</p><p>** For IMMEDIATE & CONFIDENTIAL consideration, reach out today: Direct message Carrie Danger, SVP of Permanent Placement, on LinkedIn. Or reach out directly: Office: 515-259-6087 | Cell: 515-991-0863. Find Carrie’s email on her LinkedIn profile. **</p><p><br></p><p>• 15% Annual Bonus! 🎉</p><p>• EMPLOYEES LOVE THE people-focused culture! Work-life balance with the flexibility to recharge and thrive!</p><p>💡 As a SENIOR Salesforce Developer, you’ll:</p><p>• Develop CUSTOM Salesforce features on Salesforce-centric enterprise-level projects.</p><p>• MUST HAVES:</p><p>• CUSTOM APEX coding</p><p>• LWC</p><p>• ANY Salesforce Cloud will be considered, BUT Sales Cloud, Service Cloud & Experience Cloud are preferred</p><p>• Design and implement data-driven Salesforce engineering solutions, transforming user stories into impactful features.</p><p>• Be the go-to technical expert, spearheading code reviews and advising on best practices for complex multi-org environments.</p><p>• Make architecture recommendations, building robust, scalable solutions with every Salesforce delivery.</p><p>💾 Must-Have Tech:</p><p>✔ Hands-on Salesforce development: extensive experience in APEX coding, LWC, and declarative tools like workflows/flows.</p><p>✔ Expertise with Sales Cloud, Marketing Cloud, or Service Cloud.</p><p>✔ In-depth understanding of Salesforce data storage and API limitations.</p><p>✔ Strong knowledge of DevOps best practices and deployment processes.</p><p>✔ Skills in Platform Event Architecture (Pub/Sub frameworks), Data Cloud, and Agentforce AI.</p><p>✔ 10+ years of Salesforce development experience across complex, large-scale environments.</p><p>🎓 Bonus: Salesforce App Platform Builder, Developer, or Architect Certifications to show off your skills!</p><p>Full-time salaried position: $160K base PLUS bonus, up to $175K with bonus!</p><p>For immediate & confidential consideration, contact me directly, Carrie Danger, SVP Permanent Placement Team, Iowa Region, at my Direct Office #: 515-259-6087 or Cell: 515-991-0863, and email your resume CONFIDENTIALLY & directly to me. My DIRECT EMAIL address is on my LinkedIn profile. Or you can ONE CLICK APPLY.</p>
<p>DevOps Engineer</p><p>We’re looking for a DevOps Engineer who enjoys automating all the things and making the software development lifecycle run smoother, faster, and with fewer “why is this broken?” moments. You’ll support and improve CI/CD pipelines, development environments, and SDLC tooling across IT and Engineering.</p><p><br></p><p>What You’ll Do</p><ul><li>Build, improve, and maintain CI/CD pipelines and DevOps tooling</li><li>Collaborate with IT and Engineering to streamline SDLC processes (less friction, more shipping)</li><li>Administer development environments, automation, and build tools</li><li>Create clear documentation and training materials so others don’t have to guess</li></ul><p>What You Bring</p><ul><li>Hands-on experience designing and supporting CI/CD systems</li><li>Familiarity with tools like Git, containers, infrastructure-as-code, build, and test frameworks</li><li>Strong communication skills (you can explain complex things without heavy sighing)</li><li>Ability to work independently, learn quickly, and keep things running smoothly</li></ul><p>Education & Experience</p><ul><li>Bachelor’s degree in Computer Science, Software Engineering, or similar</li><li>4+ years in DevOps, software engineering, or related roles</li><li>2+ years rolling out CI/CD pipelines in real-world environments</li></ul><p><br></p>
<p><strong>Senior Data Engineer</strong></p><p><strong>Location:</strong> Philadelphia, PA (Hybrid/Onsite as required)</p><p><strong>Employment Type: </strong>39 Week Contract, Potential for Extension</p><p><strong>Position Overview</strong></p><p>We are seeking an experienced <strong>Data Engineer</strong> to support the development and ongoing operation of a large-scale, cloud-based IoT platform. This role focuses on building and supporting scalable, secure, and high‑performance infrastructure, tooling, and frameworks that enable engineering teams to efficiently develop, test, deploy, and operate modern microservices.</p><p>The ideal candidate brings strong cloud engineering experience, a passion for quality and security, and the ability to collaborate in a fast‑paced Agile environment.</p><p><strong>Key Responsibilities</strong></p><ul><li>Develop, operate, and support DevOps and platform engineering tools that enable cloud-based IoT services</li><li>Build and promote horizontal tools, frameworks, and best practices supporting microservices, CI/CD, security, monitoring, and performance</li><li>Collaborate with engineering teams to define development standards, workflows, and methodologies</li><li>Design and implement shared libraries and frameworks to support scalable and highly available systems</li><li>Support production platform operations, troubleshooting, and continuous improvement with focus on quality, performance, and security</li><li>Translate system architecture and product requirements into well-designed, tested software solutions</li><li>Work in an Agile environment delivering incremental, high-quality software</li><li>Provide technical guidance and promote modern engineering practices across teams</li></ul>
<p>Our company is seeking a talented Network Engineer to join our technology team in St. Louis, MO. In this role, you will design, implement, and support network infrastructure, ensuring optimal performance, security, and reliability for our organization.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Plan, configure, and maintain local and wide area networks, including routers, switches, firewalls, and wireless systems.</li><li>Monitor network performance and troubleshoot issues to ensure uptime and efficiency.</li><li>Collaborate with IT and security teams to implement best practices and maintain a secure network environment.</li><li>Conduct network upgrades and migrations, documenting changes and procedures.</li><li>Evaluate new networking technologies and provide recommendations for improvements.</li></ul>
<p><strong>About the role</strong></p><p>You will play a leading role in designing, building, and deploying software in client environments. You will work across infrastructure, integrations, data, workflows, and applications. You will move from problem definition to prototype to production, often in close partnership with users and stakeholders.</p><p>You will also contribute to the development of repeatable platform capabilities and reusable technical assets that can be applied across engagements.</p><p>By the time you have completed a few projects, you will have shipped software across a wide range of environments, business problems, and technical contexts, with more variety and end-to-end ownership than many traditional engineering roles offer.</p><p><strong>What you will do</strong></p><ul><li>Build and deploy leading-edge operational intelligence platforms and products that turn fragmented data and workflows into practical, usable systems that improve client operations and drive measurable business outcomes</li><li>Partner with world-class engineers and product leaders to shape technical solutions, improve delivery approaches, and strengthen what we build over time</li><li>Develop full-stack solutions, including services, APIs, applications, dashboards, and intelligent features that help users understand and act on business data</li><li>Integrate client systems and build the data pipelines required to unify information across operational and analytical environments</li><li>Design and implement workflow automations that coordinate actions across people, systems, and processes</li><li>Work directly with users and client stakeholders to understand needs, test ideas, and refine solutions based on feedback</li><li>Support systems in production by monitoring performance, resolving issues across the stack, and continuously improving reliability</li><li>Help create reusable engineering patterns, deployment approaches, and software assets that strengthen future client 
delivery</li><li>Document architectures, operational procedures, and handoff materials so client teams can own and extend what has been built</li></ul><p><strong>What we are looking for</strong></p><p>We are looking for engineers who care deeply about building.</p><p>You are likely a strong fit if:</p><ul><li>You write clean, production-ready code and care about the craft of software engineering</li><li>You like building from zero, not just maintaining existing systems</li><li>You are energized by hard, unfamiliar problems and can learn quickly when needed</li><li>You are comfortable working across different architectures, codebases, and business contexts</li><li>You can balance speed and quality without becoming rigid about either</li><li>You want to work closely with users and understand whether what you are building is actually useful</li><li>You are comfortable operating with ownership, ambiguity, and a high degree of trust</li><li>You want to help shape not just project outcomes, but how a team builds and delivers over time</li></ul><p><br></p>
<p>Are you passionate about next-generation data engineering, AI, and modern cloud technologies? Our company is seeking an innovative and driven Snowflake Solutions Engineer to join our IT team in a fully remote capacity. In this role, you will lead the design and implementation of advanced Snowflake-native applications and AI-powered data solutions, creating measurable business impact utilizing Snowflake’s latest platform features. This is an exceptional opportunity to work at the forefront of data, leveraging Streamlit, Cortex AI, and emerging Snowflake technologies.</p><p><strong>Key Responsibilities:</strong></p><p><strong>Snowflake Native Application Development (30%)</strong></p><ul><li>Design and build interactive data applications using Snowflake Streamlit to enable intuitive, self-service analytics and operational workflows for business users.</li><li>Develop reusable frameworks and component libraries for rapid application delivery.</li><li>Integrate Snowflake Native Apps and third-party marketplace applications to continuously extend platform capabilities.</li><li>Create custom UDFs and stored procedures to support advanced business logic.</li></ul><p><strong>Data Architecture and Modern Platform Design (30%)</strong></p><ul><li>Develop cutting-edge data architecture solutions spanning data warehousing, data lakes, and lakehouse approaches.</li><li>Implement medallion (bronze-silver-gold) patterns to maintain data quality and governance.</li><li>Recommend optimal architecture patterns for structured analytics, semi-structured data, and AI/ML workloads.</li><li>Establish best practices for data organization, storage optimization, and query performance.</li></ul><p><strong>AI & Advanced Analytics Collaboration (15%)</strong></p><ul><li>Partner with AI/data science teams to support and enhance Snowflake-based AI workloads.</li><li>Enable implementation of Snowflake Cortex AI features for practical business cases.</li><li>Guide data access and feature 
engineering for ML model requirements.</li><li>Contribute platform expertise to AI proof-of-concept initiatives.</li></ul><p><strong>Security, Governance, & Technical Leadership (15%)</strong></p><ul><li>Design and implement RBAC hierarchies, enforcing least privilege principles.</li><li>Define security best practices including network policies and encryption; implement row/column security and data masking.</li><li>Apply tag-based policies for advanced governance.</li><li>Monitor and optimize application performance, cost, and user experience.</li><li>Lead architectural discussions, create technical documentation, and share best practices.</li></ul><p><br></p>
We are looking for an experienced Security Network Engineer to join our team in Orlando, Florida. This long-term contract role focuses on optimizing server environments, enhancing infrastructure performance, and ensuring robust security measures. The ideal candidate will have a strong background in VMware technologies, documentation, and training delivery.<br><br>Responsibilities:<br>• Assess server infrastructure and VMware environments to identify and address performance gaps, configuration issues, and areas for improvement.<br>• Create and maintain detailed documentation, including topology diagrams, configuration guides, and operational procedures, ensuring compliance with security standards.<br>• Develop and deliver training materials, workshops, and knowledge-sharing sessions to enhance team expertise.<br>• Provide Tier 3 technical support for server-related incidents and collaborate with network, engineering, and application teams to ensure efficient operations.<br>• Lead capacity planning and participate in disaster recovery exercises to ensure system resilience.<br>• Manage and optimize VMware vSphere 8.x, vCenter, ESXi, vSAN, and related technologies.<br>• Implement and support vRealize Automation and vRealize Operations to improve automation and system management.<br>• Troubleshoot complex technical issues and develop solutions to enhance system performance and reliability.<br>• Collaborate with teams to ensure seamless connectivity and compliance with operational standards.
We are looking for an experienced Artificial Intelligence (AI) Engineer to join our team in Atlanta, Georgia. This is a long-term contract position where you will play a pivotal role in advancing AI initiatives across clinical and business operations. The ideal candidate will have a strong technical background, excellent communication skills, and the ability to collaborate across multiple departments to drive innovative solutions in healthcare.<br><br>Responsibilities:<br>• Partner with various departments to identify, design, and implement AI solutions that address clinical, financial, and operational needs.<br>• Evaluate and integrate third-party AI tools and platforms, with a focus on healthcare applications such as NexTech, call center automation, AI-powered scribing, and clinical trial identification.<br>• Develop and support AI applications to enhance patient identification for trials, automate documentation, and improve workflows.<br>• Build and maintain AI-driven dashboards and analytics using tools like Power BI to provide actionable insights for clinical and business teams.<br>• Ensure AI integrations meet scalability, security, and compliance requirements, adhering to healthcare data privacy standards.<br>• Serve as a strategic advisor by proactively identifying opportunities for organizational improvement through AI.<br>• Collaborate with stakeholders across IT and non-IT teams to foster innovation and streamline operations.<br>• Stay updated on industry trends, regulatory standards, and emerging AI technologies relevant to healthcare.<br>• Provide technical leadership and guidance on AI-related projects, ensuring alignment with organizational goals.
<p>We are seeking a Cloud / AI Engineer to be responsible for designing, building, and deploying production‑ready AI agents that support enterprise workflows. This is a hands‑on, execution‑focused role that works with modern agent platforms such as Microsoft Copilot Studio, AWS Agent Core services, and Google Vertex AI to deliver secure, reliable, and well‑governed AI solutions across the organization.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and deploy AI agents using Microsoft Copilot Studio, AWS Agent Core services, and Google Vertex AI</li><li>Develop agent workflows including intent handling, tool calling, memory management, and multi‑step task execution</li><li>Create and optimize prompts, system instructions, and grounding strategies to ensure consistent and predictable agent behavior</li><li>Implement Retrieval‑Augmented Generation (RAG) architectures using enterprise data sources, APIs, and document repositories</li><li>Integrate AI agents with enterprise systems and APIs such as ServiceNow, internal platforms, and cloud services</li><li>Deploy and manage agents across development, test, and production environments</li><li>Implement security controls including identity management, authorization, and data access boundaries for AI agents</li><li>Monitor agent performance through logging, usage analysis, and quality metrics to improve reliability and effectiveness</li><li>Troubleshoot agent behavior, tool failures, and system integration issues</li><li>Collaborate with platform, security, and application teams to deliver approved AI agent use cases</li></ul>
We are looking for a skilled Software Engineer to join our dynamic team in Lafayette, Louisiana. In this role, you will contribute to the design, development, and deployment of innovative software solutions, focusing either on ServiceNow module development or full-stack engineering. This position offers the flexibility of working onsite or remotely if based in Louisiana.<br><br>Responsibilities:<br>• Develop and customize applications and modules within the ServiceNow platform.<br>• Design and implement backend and full-stack features using Java, Python, and PostgreSQL.<br>• Ensure software scalability, reliability, and performance through clean coding practices and automated testing.<br>• Enhance development workflows by incorporating automation tools and DevOps methodologies.<br>• Collaborate with cross-functional teams, including product management and QA, to achieve roadmap goals.<br>• Foster coding, testing, and architectural best practices to maintain high engineering standards.<br>• Address performance issues and improve service reliability to meet customer expectations.<br>• Streamline development processes by introducing tools for onboarding and automation.<br>• Actively contribute to the integration of scalable architecture into enterprise-level solutions.<br>• Monitor and optimize software to reduce testing instability and system escalations.
<p><br></p><p>The Software Platform Engineer will design, build, and maintain a core Data & Machine Learning platform.</p><p><br></p><ul><li><strong>Platform Development:</strong> Design and implement new features for our AWS and Databricks-based platform, staying current with industry trends and advancements in AI.</li><li><strong>Core Component Implementation:</strong> Test and integrate central platform components that support our technology stack and serve tenants across the organization.</li><li><strong>Collaboration:</strong> Partner with other engineering teams to identify and deliver platform enhancements that solve specific business problems.</li><li><strong>Maintain Excellence:</strong> Uphold strict security protocols, compliance controls, and architectural principles in all aspects of your work.</li></ul><p><br></p>
We are looking for a skilled Software Engineer to join our team on a long-term contract basis in Rancho Cucamonga, California. In this role, you will collaborate with various teams, including engineering, inventory control, and planning, to streamline processes and create detailed documentation packages. Your expertise in software development and ability to work closely with machinists and assemblers will be essential in ensuring accurate and efficient workflows.<br><br>Responsibilities:<br>• Transform 3D models generated by engineers into comprehensive drawing packages, including individual drawings for each element in the assembly.<br>• Collaborate with inventory control and planning teams to maintain and organize online documentation, including job build materials, time elements, and work instructions.<br>• Coordinate and conduct meetings with engineers, machinists, and assemblers to discuss designs and develop clear, actionable instructions.<br>• Develop and implement software solutions using programming languages such as C#, .NET, and ASP.NET.<br>• Apply JavaScript and React.js to enhance functionality and usability of applications.<br>• Ensure accurate documentation and workflow processes for manufacturing operations.<br>• Troubleshoot and resolve technical issues related to software and documentation systems.<br>• Maintain effective communication across teams to ensure alignment on project goals and deliverables.<br>• Provide technical expertise and guidance to support ongoing improvement initiatives.
<p><strong>Service & Automation Engineer</strong></p><p>This position supports customers with technical service needs related to advanced manufacturing equipment and automated production systems. The role focuses on troubleshooting, upgrade support, and automation software commissioning to ensure equipment performance, uptime, and customer satisfaction.</p><p>You will travel to customer sites and work independently to diagnose and resolve technical issues. Candidates should be self-driven, organized, and comfortable handling service requests both on-site and remotely. Travel may be required on short notice, with service visits typically lasting 1–2 weeks and occasionally longer. When not traveling, you will provide remote support and contribute to continuous improvement of service operations.</p><p><br></p><p>Key Responsibilities</p><p>Customer Support & Service</p><ul><li>Serve as a primary technical contact for customer inquiries and service requests.</li><li>Provide remote troubleshooting support using phone, email, and secure remote access tools.</li><li>Diagnose issues and guide customers through corrective actions.</li><li>Escalate complex problems to specialized engineering teams when needed.</li><li>Perform on-site service work such as troubleshooting, commissioning, maintenance, and repairs when remote resolution is not possible.</li></ul><p>PLC Programming & Troubleshooting</p><ul><li>Modify, test, and debug PLC programs across modern and legacy control platforms (examples include Siemens and Allen-Bradley systems).</li><li>Troubleshoot ladder logic, function blocks, and structured text issues.</li><li>Optimize control logic for performance, safety, and efficiency.</li><li>Perform online/offline edits during commissioning or service activities.</li></ul><p>Drive Configuration & Diagnostics</p><ul><li>Configure and troubleshoot VFDs, servo drives, and motion controllers.</li><li>Set motor parameters, feedback devices, and motion profiles.</li><li>Diagnose 
drive faults such as overcurrent, encoder errors, or communication issues.</li><li>Integrate drives with PLC systems via industrial networks (e.g., EtherNet/IP, Profinet).</li></ul><p>HMI / SCADA Support</p><ul><li>Create or modify operator interface screens including alarms, trends, recipes, and controls.</li><li>Connect PLC tags to HMI objects and verify communications.</li><li>Adjust user interfaces based on operator feedback.</li><li>Troubleshoot display, scripting, or communication issues.</li></ul><p>Quality & Continuous Improvement</p><ul><li>Confirm resolution of service issues and ensure equipment reliability.</li><li>Provide feedback to engineering teams on recurring problems or improvement opportunities.</li><li>Document service activities and customer interactions in internal systems.</li></ul><p>Collaboration</p><ul><li>Support spare-parts identification and service quotation activities.</li><li>Share technical knowledge with internal teams and participate in project discussions.</li></ul>
<p><strong>Mid-Level Data Engineer (On-Site | Los Angeles, CA)</strong></p><p><em>Build systems that actually drive business decisions.</em></p><p><br></p><p>This is not a “maintain the pipeline and go home” kind of role.</p><p><br></p><p>We’re looking for a sharp, early-career Data Engineer who wants to operate close to the business, own meaningful projects end-to-end, and build systems that directly impact how decisions get made across an entire organization. You’ll join a small, high-performing team where your work won’t get buried—it will be seen, used, and relied on daily.</p><p><br></p><p>If you’re someone who enjoys solving messy problems, building from scratch, and working in a fast-paced, high-expectation environment, this is the kind of role where you’ll grow quickly.</p><p><br></p><p>What You’ll Do</p><ul><li>Design and build automated data systems (e.g., billing workflows, internal tools)</li><li>Create and maintain BI dashboards and reports using Python, Excel, and visualization tools</li><li>Write and optimize SQL queries and ETL pipelines for clean, reliable data flow</li><li>Analyze large datasets to uncover actionable insights and trends</li><li>Partner with stakeholders across the business to translate needs into technical solutions</li><li>Help improve data accessibility and usability across departments</li><li>Ensure data integrity and accuracy through audits and troubleshooting</li><li>Contribute to a growing data function with high visibility and ownership</li></ul><p>Why This Role Stands Out</p><ul><li>High ownership: You’ll build systems from the ground up, not just maintain them</li><li>Small team, big impact: Work directly with senior leadership and decision-makers</li><li>Growth opportunity: The team is expanding—this role can evolve quickly</li><li>Flexibility within intensity: While this is a high-performance environment, there’s trust and flexibility when needed</li></ul>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, develop, and maintain data pipelines and systems that support critical business operations within the manufacturing industry. Your expertise in data engineering technologies and frameworks will be key to ensuring efficient data processing and integration.<br><br>Responsibilities:<br>• Develop, optimize, and maintain scalable data pipelines to process large datasets efficiently.<br>• Implement ETL processes to extract, transform, and load data from various sources into centralized systems.<br>• Leverage Apache Spark, Hadoop, and Kafka to design solutions for real-time and batch data processing.<br>• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Document data workflows and processes to ensure clarity and maintainability.<br>• Conduct testing and validation of data systems to ensure accuracy and quality.<br>• Apply Python programming to automate data tasks and streamline workflows.<br>• Stay updated on industry trends and emerging technologies to propose innovative solutions.<br>• Ensure compliance with data security and privacy standards in all engineering efforts.
We are seeking a Senior Data Engineer to join a growing data engineering team responsible for building and scaling an enterprise data platform. This role will focus on developing cloud-based data pipelines within Google Cloud Platform (GCP) while also supporting elements of a legacy on-premise data warehouse environment during an ongoing cloud migration.<br><br>The ideal candidate will have strong experience building scalable data pipelines, event-driven data architectures, and cloud-native data services. This is a great opportunity to contribute to a rapidly expanding data ecosystem and help drive the transition to modern cloud data platforms.<br><br><strong>Key Responsibilities</strong><ul><li>Design, build, and maintain data pipelines within Google Cloud Platform (GCP)</li><li>Develop event-driven data streaming solutions using Pub/Sub</li><li>Build and maintain Python-based services using Cloud Run</li><li>Develop and optimize BigQuery datasets and queries</li><li>Integrate new data sources into the enterprise data platform</li><li>Maintain and support existing ETL processes within SQL Server</li><li>Work with SSIS and stored procedures in legacy data environments</li><li>Monitor, troubleshoot, and optimize data pipeline performance</li><li>Collaborate with engineering teams to support data-driven initiatives</li><li>Participate in on-call rotations for production systems</li></ul><strong>Required Qualifications</strong><ul><li>5+ years of experience in Data Engineering</li><li>Strong experience with Google Cloud Platform (GCP)</li><li>Experience building data pipelines and ETL processes</li><li>Experience with Pub/Sub or event-driven data streaming</li><li>Strong experience with BigQuery</li><li>Proficiency in Python</li><li>Experience with Cloud Run or similar serverless services</li><li>Strong SQL experience including SQL Server</li><li>Experience with SSIS or similar ETL tools</li></ul>
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
Our company is seeking a highly skilled and motivated Systems Engineer to join our technology team in New York City. This is an exciting opportunity for a proactive IT professional to play a critical role in maintaining and optimizing our IT infrastructure.<br><br>Job Summary:<br>As a Systems Engineer, you will be responsible for supporting, maintaining, and enhancing our IT systems to ensure seamless operation and exceptional reliability. Your knowledge and experience will help drive our technical initiatives and support our end users efficiently.<br><br>Responsibilities:<br>• Manage and maintain IT infrastructure, including servers, storage systems, network devices, and all related components.<br>• Monitor system performance, promptly troubleshoot issues, and ensure high system availability and reliability.<br>• Configure and administer Azure cloud services (virtual machines, storage, networking, security).<br>• Oversee Windows Server environments (2016/2019/2022): installation, configuration, and ongoing maintenance.<br>• Manage Active Directory, including user accounts, group policies, security permissions, and domain services.<br>• Perform virtualization tasks using VMware: provisioning servers, managing virtual machines, and resolving escalated issues.<br>• Administer the Barracuda backup appliance and support data backup and recovery processes.<br>• Maintain Nimble SAN storage systems to ensure performance and continuity.<br>• Collaborate with teams to implement and manage secure file transfer solutions (MFT, SFTP).<br>• Conduct system upgrades, patching schedules, and security updates based on best practices.<br>• Provide timely technical support to end users, resolving issues related to hardware, software, and network connectivity.<br>• Create and maintain robust documentation for system configurations, operational procedures, and troubleshooting processes.
<p>We are seeking a Genesys Cloud Architect / Sr. Telecom Engineer.</p><p><br></p><p>The Genesys Cloud Architect / Sr. Telecom Engineer is a senior technical leader responsible for the design, optimization, and long-term strategy of the Genesys Cloud CX platform. This role combines architectural ownership with hands-on engineering, ensuring a reliable, scalable, and high‑performing contact center environment.</p><p><strong>Key Responsibilities</strong></p><ul><li>Lead the end-to-end architecture of Genesys Cloud CX, including voice, digital channels, routing, integrations, and analytics.</li><li>Develop and maintain the roadmap, standards, and best practices for platform design and evolution.</li><li>Configure, administer, and optimize Genesys Cloud, including IVR, routing, and self-service workflows.</li><li>Build and manage API integrations with internal and third‑party systems.</li><li>Serve as the Tier 3 escalation point for complex incidents and participate in on-call rotations.</li><li>Govern platform changes to ensure security, compliance, and alignment with IT standards.</li><li>Partner with business and technical teams to translate customer experience goals into scalable solutions.</li><li>Maintain documentation and mentor engineering teams on platform best practices.</li><li>Perform other duties as needed.</li></ul><p><br></p>
<p>Robert Half is seeking a <strong>Contract Systems Engineer</strong> to join our client's IT infrastructure team. In this role, you will be responsible for the design, implementation, maintenance, and optimization of the organization’s systems and infrastructure. This contract position is ideal for a detail-oriented professional with a strong technical background in systems architecture and enterprise IT environments.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li><strong>System Design & Implementation:</strong> Design and implement scalable, secure, and reliable systems to support business operations and growth.</li><li><strong>Infrastructure Maintenance:</strong> Administer and maintain Windows and/or Linux servers, virtualization platforms (e.g., VMware, Hyper-V), and cloud-based services (e.g., Azure, AWS).</li><li><strong>Performance Monitoring:</strong> Monitor system performance and troubleshoot issues to ensure high availability and efficiency of infrastructure.</li><li><strong>Security & Compliance:</strong> Implement system security protocols, manage patching schedules, and ensure compliance with organizational policies and industry regulations.</li><li><strong>Backup & Recovery:</strong> Manage backup solutions and disaster recovery plans to ensure data integrity and business continuity.</li><li><strong>Automation & Scripting:</strong> Develop automation scripts and tools (e.g., PowerShell, Python) to streamline system administration tasks.</li><li><strong>Documentation:</strong> Maintain technical documentation for configurations, processes, and procedures.</li><li><strong>Collaboration:</strong> Work closely with network engineers, developers, and support staff to resolve complex issues and support IT projects.</li></ul><p><br></p>
<p>The LLM Programmer will be responsible for building and optimizing applications that leverage large language models (LLMs) to solve business problems, improve user experiences, and automate complex workflows. You will work closely with engineering, product, and data teams to bring AI-driven features from concept to production.</p><p><strong>Key Responsibilities</strong></p><ul><li>Design and develop applications using large language models (e.g., GPT-style systems)</li><li>Build and maintain RAG (Retrieval-Augmented Generation) pipelines to integrate enterprise data with LLMs</li><li>Develop prompt engineering strategies and reusable AI workflows</li><li>Fine-tune or adapt models for domain-specific use cases when needed</li><li>Integrate LLM APIs into production systems and applications</li><li>Optimize performance, latency, cost, and accuracy of AI solutions</li><li>Evaluate model outputs for quality, reliability, and safety</li><li>Collaborate with cross-functional teams to identify and implement AI opportunities</li><li>Stay current with advancements in generative AI and LLM tooling</li></ul>