<p>Fast-moving contract-to-permanent opening with a hybrid in-office/remote schedule. Expectations include working in our client's Tampa office 2 to 3 days per week and mentoring a junior Data Engineer.</p><p><br></p><p>If applying, please make sure you have at least 5 years of experience in Power BI, ETL development, Snowflake, and Azure. Experience with Sigma reporting will be a huge plus, as our client is going that direction with their reporting initiatives. A healthcare background is also strongly preferred in order to understand our client's technical workflows.</p><p><br></p><p>We are seeking an experienced Senior Data Engineer with 5+ years of hands-on experience to join our dynamic Data Engineering team. In this role, you will design, build, and optimize scalable data pipelines and analytics solutions in a fast-paced healthcare environment. You will play a pivotal role in enabling real-time insights for healthcare stakeholders, ensuring data integrity, compliance with HIPAA and other regulations, and seamless integration across multi-cloud ecosystems.</p><p><br></p><p>Key Responsibilities</p><p><br></p><p>Architect and implement end-to-end ETL/ELT pipelines using Azure Data Factory, Snowflake, and other tools to ingest, transform, and load healthcare data (e.g., EHR, claims, patient demographics) from diverse sources.</p><p>Design and maintain scalable data warehouses in Snowflake, optimizing for performance, cost, and healthcare-specific querying needs.</p><p>Develop interactive dashboards and reports in Power BI to visualize key healthcare metrics, such as patient outcomes, readmission rates, and resource utilization.</p><p>Collaborate with cross-functional teams (data scientists, analysts, clinicians) to translate business requirements into robust data solutions compliant with HIPAA, GDPR, and HITRUST standards.</p><p>Lead data modeling efforts, including dimensional modeling for healthcare datasets, ensuring data quality, governance, and lineage.</p><p>Integrate Azure services
(e.g., Synapse Analytics, Databricks, Blob Storage) to build secure, high-availability data platforms.</p><p>Mentor junior engineers, conduct code reviews, and drive best practices in CI/CD pipelines for data engineering workflows.</p><p>Troubleshoot and optimize data pipelines for performance in high-volume healthcare environments (e.g., processing millions of claims daily).</p><p>Stay ahead of industry trends in healthcare data analytics and contribute to strategic initiatives like AI/ML integration for predictive care models.</p><p><br></p>
<p>We are looking for a skilled and innovative Data Engineer to join our team in Grove City, Ohio. In this role, you will be responsible for designing and implementing advanced data pipelines, ensuring the seamless integration and accessibility of data across various systems. As a key player in our analytics and data infrastructure efforts, you will contribute to building a robust and scalable data ecosystem to support AI and machine learning initiatives.</p><p><br></p><p>Responsibilities:</p><p>• Design and develop scalable data pipelines to ingest, process, and transform data from multiple sources.</p><p>• Optimize data models to support analytics, forecasting, and AI/ML applications.</p><p>• Collaborate with internal teams and external partners to enhance data engineering capabilities.</p><p>• Implement and enforce data governance, security, and quality standards across hybrid cloud environments.</p><p>• Work closely with analytics and data science teams to ensure seamless data accessibility and integration.</p><p>• Develop and maintain data products and services to enable actionable insights.</p><p>• Troubleshoot and improve the performance of data workflows and storage systems.</p><p>• Align data systems across departments to create a unified and reliable data infrastructure.</p><p>• Support innovation by leveraging big data tools and frameworks such as Databricks and Spark.</p>
We are looking for a skilled Data Engineer to join our team in San Antonio, Texas. This role offers an opportunity to design, develop, and optimize data solutions that support business operations and strategic decision-making. The ideal candidate will possess a strong technical background, excellent problem-solving skills, and the ability to collaborate effectively across departments.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines using Azure Synapse Analytics, Microsoft Fabric, and Azure Data Factory.<br>• Implement advanced data modeling techniques and design scalable BI solutions that align with business objectives.<br>• Create and maintain dashboards and reports using Power BI, ensuring data accuracy and usability.<br>• Integrate data from various sources, including APIs and Dataverse, into Azure Data Lake Storage Gen2.<br>• Utilize tools like Delta Lake and Parquet to manage and structure data within a lakehouse architecture.<br>• Define and implement BI governance frameworks to ensure consistent data standards and practices.<br>• Collaborate with cross-functional teams such as Operations, Sales, Engineering, and Accounting to gather requirements and deliver actionable insights.<br>• Troubleshoot, document, and resolve data issues independently while driving continuous improvement initiatives.<br>• Lead or contribute to Agile/Scrum-based projects to deliver high-quality data solutions within deadlines.<br>• Stay updated on emerging technologies and trends to enhance data engineering practices.
Client based near Tucker, GA is looking for a Data Engineer to join a small, high-performing data team of four. This is a hands-on, highly visible role where you will play a critical part in shaping the organization’s data strategy and supporting core operational and analytics systems during a transformative period. You will have the opportunity to directly influence business decisions, design innovative solutions, and make a measurable impact from day one. <br> In this role, you will collaborate closely with Data Analysts, Product Owners, and business stakeholders to design, build, and maintain data pipelines, models, and infrastructure. The environment is fast-paced, collaborative, and tech-forward, with a strong focus on SQL, Tableau, and API-based data integrations. While each Data Engineer owns specific areas of responsibility, cross-training ensures a deep understanding of end-to-end data flows across the organization, giving you broad exposure and professional growth. <br> Key Responsibilities: <br>• Design, build, and maintain scalable data pipelines, models, and databases that support analytics and business operations<br>• Write, optimize, and maintain complex SQL queries, stored procedures, and triggers<br>• Manage data ingestion from APIs and multiple internal and external systems<br>• Support data visualization and analytics initiatives using Tableau (Power BI experience also valuable)<br>• Ensure data accuracy and troubleshoot issues in reporting, extraction, and warehousing<br>• Partner with business and operational teams to translate requirements into actionable data solutions<br>• Participate in meetings to drive process improvements and operational efficiencies<br>• Provide technical guidance and mentorship to Data Analysts on best practices and prioritization<br>• Assist with data governance, performance optimization, and analytics enablement<br>• Develop tools, datasets, and processes that streamline business operations<br>• Create and maintain clear technical and process documentation<br>• Support end users with troubleshooting related to data tools and analytics platforms
<p>We are on the lookout for a Data Engineer in New Jersey. (1-2 days a week on-site*) In this role, you will be required to develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also be expected to have a hands-on approach towards application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS Technologies.</p><p>• Work extensively with Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize Cloud Technologies and Data Visualization tools to enhance business intelligence.</p><p>• Work with various types of data including Clinical Trials Data, Genomics and Bio Marker Data, Real World Data, and Discovery Data.</p><p>• Maintain familiarity with key industry best practices in a regulated “GXP” environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
<p>We are seeking a SQL Server Data Engineer</p><p>Location: Albuquerque, NM (local preferred)</p><p>Work Type: Full-Time | Onsite 3+ days/week | Contract-to-Hire option</p><p><br></p><p>We’re looking for a SQL Server Data Engineer to support and optimize our legacy Operating Budget Management System (OBMS) environment. This role is ideal for someone experienced in stored‑procedure–driven systems, SQL performance tuning, and SSRS reporting.</p><p><br></p><p>Responsibilities include but are not limited to:</p><p>Maintain and optimize T‑SQL code, stored procedures, and functions.</p><p>Perform query tuning, indexing, and performance diagnostics.</p><p>Develop and deploy SSRS reports; troubleshoot reporting issues.</p><p>Translate business requirements into technical solutions.</p><p>Support database design and ETL/data integration efforts.</p><p>Document changes and follow change‑management best practices.</p><p><br></p>
<p><strong>Position: Data Engineer</strong></p><p><strong>Location: Des Moines, IA - HYBRID</strong></p><p><strong>Salary: up to $130K permanent position plus exceptional benefits</strong></p><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***</strong></p><p> </p><p>Our client is one of the best employers in town. Come join this successful organization with smart, talented, results-oriented team members. You will find that passion in your career again, working together with some of the best in the business. </p><p> </p><p>Are you an experienced Senior Data Engineer seeking a new adventure that entails enhancing data reliability and quality for an industry leader? Look no further! Our client has a robust data and reporting team and needs you to bolster their data warehouse and data solutions and facilitate data extraction, transformation, and reporting.</p><p> </p><p>Key Responsibilities:</p><ul><li>Create and maintain data architecture and data models for efficient information storage and retrieval.</li><li>Ensure rigorous data collection from various sources and storage in a centralized location, such as a data warehouse.</li><li>Design and implement data pipelines for ETL using tools like SSIS and Azure Data Factory.</li><li>Monitor data performance and troubleshoot any issues in the data pipeline.</li><li>Collaborate with development teams to track work progress and ensure timely completion of tasks.</li><li>Implement data validation and cleansing processes to ensure data quality and accuracy.</li><li>Optimize performance to ensure efficient execution of data queries and reports.</li><li>Uphold data security by storing data securely and restricting access to sensitive data to authorized users only.</li></ul><p>Qualifications:</p><ul><li>A 4-year degree related to computer science or equivalent work
experience.</li><li>At least 5 years of professional experience.</li><li>Strong SQL Server and relational database experience.</li><li>Proficiency in SSIS and SSRS.</li><li>.NET experience is a plus.</li></ul><p> </p><p><strong>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. *** </strong></p><p> </p>
We are looking for a skilled Data Engineer to join our team in Chicago, Illinois. This long-term contract role offers an exciting opportunity to contribute to the development and optimization of data systems while working with advanced technologies. The ideal candidate will bring expertise in building efficient data pipelines, managing large datasets, and integrating data processes to support business objectives.<br><br>Responsibilities:<br>• Develop, implement, and maintain scalable data pipelines using tools such as Apache Spark and Hadoop.<br>• Optimize ETL processes to ensure efficient extraction, transformation, and loading of data.<br>• Collaborate with cross-functional teams to understand data needs and design solutions that align with business goals.<br>• Manage large-scale datasets, ensuring data integrity and accuracy.<br>• Integrate data from various sources using Apache Kafka and other relevant technologies.<br>• Monitor and troubleshoot data systems to identify and resolve performance issues.<br>• Create documentation and provide training on data processes to support team operations.<br>• Conduct regular reviews of data architecture to enhance system functionality.<br>• Ensure compliance with data governance and security standards.<br>• Work closely with stakeholders to deliver actionable insights and reporting based on data analytics.
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
We are looking for a skilled Engineer to develop and enhance software solutions that address complex challenges in the real estate and property industry. This long-term contract position involves designing, coding, testing, and maintaining scalable and secure software systems. Based in Minneapolis, Minnesota, this role offers an opportunity to contribute to impactful engineering projects while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Design and implement software solutions that align with customer needs and organizational goals.<br>• Develop, test, debug, and document code to ensure reliability and performance.<br>• Collaborate with team members to solve technical challenges and remove roadblocks.<br>• Apply knowledge of frameworks and systems design to create stable and scalable software.<br>• Participate in product planning and provide input on technical strategies and solutions.<br>• Troubleshoot and analyze complex issues to identify and resolve defects.<br>• Mentor developers who are early in their careers and provide technical guidance to the team.<br>• Explore and adopt new technologies to enhance product performance and lifecycle.<br>• Contribute to DevOps processes, including support rotations and subsystem knowledge-building.<br>• Assist in recruiting efforts by participating in interviews and evaluating potential team members.
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in the internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
<p>We are looking for a skilled GenAI Data Automation Engineer to design and implement innovative, AI-driven automation solutions across AWS and Azure hybrid environments. You will be responsible for building intelligent, scalable data pipelines and automations that integrate cloud services, enterprise tools, and Generative AI to support mission-critical analytics, reporting, and customer engagement platforms. The ideal candidate is mission-focused and delivery-oriented, and applies critical thinking to create innovative solutions and resolve technical issues.</p><p><br></p><p>Location: <strong>REMOTE - EST or CST</strong></p><p><br></p><p>This position involves designing, developing, testing, and troubleshooting software programs to enhance existing systems and build new software products. The ideal candidate will apply software engineering principles and collaborate effectively with colleagues to tackle moderately complex technical challenges and deliver impactful solutions.</p><p><br></p><p>Responsibilities:</p><ul><li>Design and maintain data pipelines in AWS using S3, RDS/SQL Server, Glue, Lambda, EMR, DynamoDB, and Step Functions.</li><li>Develop ETL/ELT processes to move data from multiple data systems, including DynamoDB → SQL Server (AWS) and between AWS ↔ Azure SQL systems.</li><li>Integrate AWS Connect and NICE inContact CRM data into the enterprise data pipeline for analytics and operational reporting.</li><li>Engineer and enhance ingestion pipelines with Apache Spark, Flume, and Kafka for real-time and batch processing into Apache Solr and AWS OpenSearch platforms.</li><li>Leverage Generative AI services and frameworks (AWS Bedrock, Amazon Q, Azure OpenAI, Hugging Face, LangChain) to:<ul><li>Create automated processes for vector generation and embedding from unstructured data to support Generative AI models.</li><li>Automate data quality checks, metadata tagging, and lineage tracking.</li><li>Enhance ingestion/ETL with LLM-assisted transformation and anomaly detection.</li><li>Build conversational BI interfaces that allow natural language access to Solr and SQL data.</li><li>Develop AI-powered copilots for pipeline monitoring and automated troubleshooting.</li></ul></li><li>Implement SQL Server stored procedures, indexing, query optimization, profiling, and execution plan tuning to maximize performance.</li><li>Apply CI/CD best practices using GitHub, Jenkins, or Azure DevOps for both data pipelines and GenAI model integration.</li><li>Ensure security and compliance through IAM, KMS encryption, VPC isolation, RBAC, and firewalls.</li><li>Support Agile DevOps processes with sprint-based delivery of pipeline and AI-enabled features.</li></ul>
<p>We are looking for a skilled Data Analyst / Engineer to join our team on a contract basis remotely. This role focuses on financial data processing, automation, and reporting within a dynamic environment. The ideal candidate will excel at managing data workflows, automating manual processes, and delivering accurate insights to support business decisions.</p><p><br></p><p>Responsibilities:</p><p>• Extract and reconcile financial data from multiple databases, ensuring accuracy and consistency across accounts receivable, accounts payable, and the general ledger.</p><p>• Automate manual reporting processes by developing repeatable daily and month-end pipelines for reliable and auditable data.</p><p>• Design and oversee data workflows across development, production, and utility databases, ensuring secure and efficient access.</p><p>• Create and deliver advanced Excel-based reports using macros, formulas, and Power Query to enhance usability for finance teams.</p><p>• Implement data validation and snapshot techniques to support reconciliation and decision-making processes.</p><p>• Ensure the traceability and accuracy of financial data by establishing robust controls and audit mechanisms.</p><p>• Collaborate with stakeholders to understand reporting requirements and translate them into scalable solutions.</p><p>• Utilize expertise in SQL and Teradata Data Warehouse to optimize database objects and queries for performance.</p><p>• Develop and maintain documentation for automated processes and data workflows to ensure clarity and continuity.</p>
We are looking for a Senior Data Architect to join our team on a long-term contract basis in Marysville, Ohio. In this role, you will design and maintain advanced data solutions to support analytics and business intelligence initiatives within the automotive industry. This position offers an exciting opportunity to work with cutting-edge technologies and contribute to the optimization of data systems and processes.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines using cloud-based services such as Glue, S3, and Redshift.<br>• Build and orchestrate workflows with tools like Step Functions, EventBridge, and Lambda.<br>• Develop and integrate CI/CD pipelines using GitHub and other DevOps tools for automated deployments.<br>• Create conceptual, logical, and physical data models tailored for both operational and analytical systems.<br>• Optimize database queries, normalize datasets, and apply performance tuning strategies.<br>• Utilize Python and PySpark for data transformation, automation, and engineering tasks.<br>• Monitor and assess pipeline performance using tools like CloudWatch and Glue job logs.<br>• Troubleshoot and resolve issues related to data quality, system performance, and compliance.<br>• Implement metadata management practices and audit trails to ensure adherence to governance standards.<br>• Collaborate with cross-functional teams and stakeholders to align data architecture with business goals.
<p>**Please email Valerie Nielsen for immediate response**</p><p><br></p><p><strong>Data Analytics Manager (Retail experience is strongly preferred)</strong></p><p><strong>Onsite | Downtown Los Angeles</strong></p><p><strong>Base Salary: $160,000 - $180,000</strong></p><p>We are seeking a <strong>Data Analytics Manager</strong> to lead analytics and data engineering efforts that directly impact business strategy and operational decision-making. This role will partner closely with leadership and cross-functional teams across merchandising, retail, and supply chain to turn complex data into actionable insights.</p><p><strong>What You’ll Do</strong></p><ul><li>Lead the design, development, and optimization of scalable analytics and data engineering solutions</li><li>Translate business questions into analytical frameworks, models, and dashboards</li><li>Partner with Retail, Merchandise Planning/Buying, and Supply Chain teams to drive data-informed decisions</li><li>Build and productionize tools and systems that improve data reliability, accessibility, and performance</li><li>Mentor and guide analysts and engineers, setting best practices for analytics and data quality</li><li>Present insights and recommendations to senior stakeholders in a clear, compelling way</li></ul><p><br></p>
We are looking for an experienced Artificial Intelligence (AI) Engineer to join our dynamic team in Houston, Texas. This contract-to-permanent position offers a unique opportunity to work on cutting-edge agentic software development, where autonomous AI agents play a critical role in building and optimizing healthcare technology solutions. The ideal candidate will be eager to contribute to innovative projects, leveraging AI techniques to transform traditional processes and drive automation within the healthcare lifecycle.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines and integrations using AI-based techniques and tools such as Azure Data Factory and Event Hub.<br>• Work collaboratively with cross-functional teams to identify automation opportunities and develop intelligent workflows.<br>• Develop and deploy autonomous agents to interact with data systems, monitor processes, and generate actionable insights.<br>• Build and support data warehousing and analytics platforms utilizing Azure Synapse, Data Lake, and Power BI.<br>• Enhance data processing and decision-making capabilities by integrating machine learning models and AI techniques.<br>• Ensure all solutions adhere to security standards and compliance requirements while being optimized for deployment in Azure environments.<br>• Participate actively in Agile ceremonies and contribute to the continuous improvement of engineering practices.<br>• Explore innovative approaches to agentic software development, pushing the boundaries of automation and AI implementation.
Position: SENIOR CDP DATA ANALYST - Help Build a Smarter Connected Digital Experience<br>Location: REMOTE<br>Salary: UP TO $140K + EXCEPTIONAL BENEFITS<br><br>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***<br><br>A nationally recognized company with a long history of success is launching a bold new digital initiative—and this is your opportunity to help shape it from the ground up.<br>This newly formed department is building a mobile-first product from scratch. It’s a greenfield, 0-to-1 launch with the pace and creativity of a startup, backed by the resources and stability of a Fortune 500 parent. The first MVP is nearing launch, and we’re assembling a team of 20 innovators to bring it to life.<br>As a Senior CDP Data Analyst, you’ll be a key player in designing and evolving a custom-built Customer Data Platform. Your work will unify customer insights across systems and empower smarter, faster decision-making across the organization.<br>What You’ll Be Doing<br> • Collaborate with data engineers, architects, and business stakeholders to define data requirements and use cases.<br> • Design data models and integration logic to support a unified customer view.<br> • Analyze customer behavior across platforms to uncover insights and segmentation opportunities.<br> • Build dashboards and visualizations that drive strategic decisions.<br> • Ensure data quality, consistency, and governance across the Customer Data Platform.<br> • Translate business needs into technical specifications and support iterative development.<br> • Advocate for data best practices and help standardize customer metrics across teams.<br>What You Bring<br> • 5+ years of experience in data analysis, with a focus on customer data and cross-platform integration.<br> • Advanced skills in SQL and Python, R, or similar languages.<br> • Experience with data 
visualization tools like Power BI or Tableau.<br> • Familiarity with cloud data platforms (Azure, AWS, GCP) and modern data warehousing.<br> • Strong communication skills and ability to work across technical and non-technical teams.<br> • Bonus: Experience with customer journey analytics, segmentation modeling, personalization strategies, and data privacy frameworks (GDPR, CCPA).<br>Why Join Now?<br> • Be part of a ground-floor team shaping a transformative digital product.<br> • Work in a fast-paced, agile environment with full executive support.<br> • Influence how data drives decisions across a nationally recognized organization.<br> • Enjoy the freedom to innovate—without legacy constraints.<br><br>*** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654 or mobile: 515-771-8142. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***
<p><strong>Overview:</strong></p><p>We are seeking a Machine Learning/AI Engineer to join our growing team in St. Louis, MO. This role focuses on designing and developing scalable machine learning and artificial intelligence models that help organizations streamline operations, gain business insights, and innovate across services. You will collaborate closely with cross-functional partners to build, deploy, and maintain advanced ML/AI solutions that address real-world challenges.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and deploy machine learning and AI models for various business applications, from natural language processing to predictive analytics.</li><li>Collaborate with data engineers and business stakeholders to acquire data, define project requirements, and implement robust solutions.</li><li>Integrate ML models into cloud platforms and existing product infrastructures.</li><li>Monitor, evaluate, and improve model performance with a focus on scalability, reliability, and accuracy.</li><li>Assist in the creation of new data pipelines, automation systems, and APIs that enhance data-driven decision-making.</li><li>Document processes, models, and workflows and provide support and training to peers as needed.</li><li>Keep up to date with the latest advancements in AI/ML technologies and recommend best practices and tools.</li></ul><p><br></p>
<p><br></p><p><strong>Robert Half</strong> is actively partnering with an Austin-based client to identify an <strong>AI Engineer (contract).</strong> In this role, you will be responsible for creating, implementing, and optimizing AI and machine learning solutions that improve engineering, operations, and enterprise workflows. This role serves as a connection point between advanced technology and real-world application, ensuring AI initiatives deliver measurable business value. The ideal candidate brings strong technical depth in AI/ML along with experience working in data‑rich, process‑driven environments. <strong>This role is onsite in Austin, TX. </strong></p><p><br></p><p><strong>Key Responsibilities: </strong></p><ul><li>Develop and deploy AI/ML models for automation, predictive insights, and generative applications.</li><li>Transform experimental prototypes into scalable, production-ready systems using tools such as TensorFlow, PyTorch, or Scikit-learn.</li><li>Partner closely with data engineering teams to build reliable data pipelines and scalable architectures.</li><li>Apply ETL processes and big-data technologies to clean, structure, and prepare datasets for modeling.</li><li>Deploy AI solutions on cloud platforms (e.g., Azure Machine Learning, Databricks) and integrate them into existing digital environments.</li><li>Implement MLOps best practices across model lifecycle management, monitoring, and iteration.</li><li>Collaborate with cross-functional teams—including engineering, IT, and business operations—to ensure AI solutions align with organizational goals.</li><li>Translate complex AI concepts into clear, accessible explanations for non-technical audiences.</li><li>Ensure all AI systems comply with organizational standards for security, governance, and responsible use.</li><li>Maintain thorough documentation covering model assumptions, architecture, and decision-making processes.</li><li>Stay up to date with emerging AI innovations, 
including agent-based systems and multimodal models.</li><li>Contribute to internal innovation programs and AI-focused centers of excellence.</li></ul>
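<p>As a hedged illustration of the MLOps lifecycle-management practice named above, the sketch below records a model artifact as a registry entry keyed by a content hash of its parameters, so a deployment can be traced back to exact parameters and metrics. The registry layout is invented for illustration; a production stack would use a tool such as MLflow or Azure ML model registries.</p>

```python
import hashlib
import json

def register_model(name, version, params, metrics):
    """Build a registry record keyed by a content hash of the params."""
    blob = json.dumps(params, sort_keys=True).encode()
    return {
        "name": name,
        "version": version,
        "params_sha256": hashlib.sha256(blob).hexdigest(),
        "metrics": metrics,
    }

entry = register_model(
    "churn-model", "1.0.0",
    params={"weights": [0.4, -1.2], "bias": 0.1},
    metrics={"auc": 0.87},
)
print(entry["params_sha256"][:8])
```

<p>Because the hash covers only the parameters (sorted for determinism), two registry entries with identical weights are detectably the same model even across version labels.</p>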
<p><strong>Role Summary</strong></p><p>As a Technical Project Manager focused on data and AWS cloud, you will lead the planning, execution, and delivery of engineering efforts involving data infrastructure, data platforms, analytics, and cloud services. You will partner with data engineering, analytics, DevOps, product, security, and business stakeholders to deliver on key strategic initiatives. You are comfortable navigating ambiguity, managing dependencies across teams, and ensuring alignment between technical direction and business priorities.</p><p><strong>Key Responsibilities</strong></p><ul><li>Lead end-to-end technical projects pertaining to AWS cloud, data platforms, data pipelines, ETL/ELT, analytics, and reporting.</li><li>Define project scope, objectives, success criteria, deliverables, and timelines in collaboration with stakeholders.</li><li>Create and maintain detailed project plans, roadmaps, dependency maps, risk & mitigation plans, status reports, and communication plans.</li><li>Track and monitor project progress, managing changes to scope, schedule, and resources.</li><li>Facilitate agile ceremonies (e.g., sprint planning, standups, retrospectives) or hybrid methodologies as appropriate.</li><li>Serve as the bridge between technical teams (data engineering, DevOps, platform, security) and business stakeholders (product, analytics, operations).</li><li>Identify technical and organizational risks, escalate when needed, and propose mitigation or contingency plans.</li><li>Drive architectural and design discussions, ensuring technical feasibility, sound tradeoff assessments, and alignment with cloud best practices.</li><li>Oversee vendor, third-party, or external partner integrations and workstreams.</li><li>Ensure compliance, security, governance, and operational readiness (e.g., data privacy, logging, monitoring, SLA) are baked into deliverables.</li><li>Conduct post-implementation reviews, lessons learned, and process improvements.</li><li>Present regularly 
to senior leadership on project status, challenges, KPIs, and outcomes.</li></ul>
Position: IT Product Owner (Data Integrations)<br> Location: Remote<br> Salary: Up to $110,000 base + excellent benefits<br> <br> *** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. ***<br> Are you a Product Owner who thrives at the intersection of vision, strategy, and execution? <br> <br> Do you love transforming complex problems into elegant, buildable solutions? Are you motivated by the opportunity to help build brand‑new digital products from the ground up?<br> If so, we have an incredible Product Owner role, playing a critical part in a major digital transformation that is redefining how services are delivered.<br> This is a rare chance to join a newly built, remote‑first product and engineering team shaping the future of a multi‑industry ecosystem.<br> <br> About the Transformation<br> We're building a modern, connected, mobile‑responsive digital platform that unifies dozens of systems into one seamless experience. 
Think:<br> • MVP‑first mindset<br> • Scalable, flexible architecture<br> • Seamless data flows<br> • Consumer‑grade UX<br> • A team empowered to innovate quickly<br> And you’ll be at the center of it.<br> You’ll Own…<br> • Translating data, API, and integration requirements into clear, actionable user stories<br> • Partnering closely with the Sr Data Integration Engineer to shape data flows, CDP capabilities, and API frameworks<br> • Prioritizing a data‑heavy backlog that balances business value with technical complexity<br> • Ensuring delivered work is accurate, secure, and scalable<br> You Are…<br> • A Product Owner who understands how data moves through systems<br> • Comfortable working with data engineers, APIs, ETL/ELT workflows, and integration frameworks<br> • A strong communicator who can simplify complex technical concepts for non‑technical audiences<br> <br> *** For immediate and confidential consideration, please send a message to MEREDITH CARLE on LinkedIn or send an email to me with your resume. My email can be found on my LinkedIn page. Also, you may contact me by office: 515-303-4654. Or one click apply on our Robert Half website. No third party inquiries please. Our client cannot provide sponsorship and cannot hire C2C. ***
<p>We are seeking an experienced <strong>BI Developer</strong> to design, build, and maintain scalable analytics and reporting solutions. This role focuses on developing <strong>Power BI dashboards</strong>, optimizing <strong>SQL‑driven data models</strong>, and working with modern cloud data platforms, including <strong>Snowflake</strong> and <strong>Microsoft Fabric</strong>, to deliver actionable insights to business stakeholders.</p><p>You will collaborate closely with data engineers, analysts, and business partners to ensure high‑quality, performant, and reliable BI solutions that support enterprise‑level reporting and decision‑making.</p><p>Key Responsibilities</p><ul><li>Design, develop, and maintain <strong>Power BI dashboards and reports</strong> for enterprise users</li><li>Build and optimize <strong>semantic models</strong> and datasets for performance and usability</li><li>Write, optimize, and maintain complex <strong>SQL queries</strong> for analysis and reporting</li><li>Work with <strong>Snowflake</strong> data warehouses to support large‑scale analytical workloads</li><li>Leverage <strong>Microsoft Fabric</strong> components (Lakehouse, Data Warehouse, semantic models) to deliver end‑to‑end BI solutions</li><li>Partner with data engineering teams to ensure accurate, well‑modeled, and trusted data sources</li><li>Implement data validation, testing, and governance best practices</li><li>Translate business requirements into clear technical BI solutions</li><li>Support enhancements to existing dashboards and reporting environment</li></ul><p><br></p>
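<p>For a sense of the SQL work behind the dashboards described above, here is a minimal sketch of the kind of aggregation a semantic model exposes as a measure, run against an in-memory SQLite database. Table and column names are hypothetical; the production equivalent would target Snowflake or a Fabric warehouse.</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('East', 100.0), ('East', 250.0), ('West', 300.0);
""")

# Aggregate revenue per region -- the shape of measure a semantic
# model would surface to Power BI report authors.
rows = conn.execute("""
    SELECT region, SUM(amount) AS revenue
    FROM sales
    GROUP BY region
    ORDER BY revenue DESC
""").fetchall()
print(rows)   # [('East', 350.0), ('West', 300.0)]
```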
We are looking for an experienced Network Engineer to join our team in Austin, Texas, on a long-term contract basis. In this role, you will contribute to the design, maintenance, and optimization of network systems, with a focus on renewable energy technologies. Collaborating with a team of skilled engineers and project managers, you will play a key part in ensuring seamless network operations and supporting the development of cutting-edge solutions.<br><br>Responsibilities:<br>• Design and implement network systems tailored to renewable energy installations.<br>• Utilize AutoCAD to edit existing templates and create detailed drawings for fiber communication protocols.<br>• Develop XML configurations for point creation and data integration.<br>• Work with Supervisory Control and Data Acquisition (SCADA) systems to monitor, control, and optimize renewable energy machinery.<br>• Perform network module configurations, including IP protocols and real-time sensor data integration.<br>• Collaborate with project managers and engineers to ensure efficient system operations and troubleshooting.<br>• Provide technical expertise in electrical and mechanical aspects of network systems.<br>• Maintain documentation and ensure compliance with industry standards.<br>• Support potential fieldwork activities, including commissioning, wire termination, and system checks.<br>• Optimize network performance through continuous monitoring and improvements.
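<p>As a hedged sketch of the XML point-creation task listed above, the snippet below emits a configuration entry for a SCADA data point using only the standard library. The tag and attribute names (Point, Unit, Protocol) are invented for illustration; real point schemas are vendor-specific.</p>

```python
import xml.etree.ElementTree as ET

def make_point(name, address, unit):
    """Build one SCADA point entry (illustrative schema)."""
    point = ET.Element("Point", name=name, address=address)
    ET.SubElement(point, "Unit").text = unit
    ET.SubElement(point, "Protocol").text = "Modbus-TCP"
    return point

root = ET.Element("PointList")
root.append(make_point("InverterTemp01", "40001", "degC"))
print(ET.tostring(root, encoding="unicode"))
```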
<p>We are looking for a skilled Data Warehouse Engineer to join our team in Malvern, Pennsylvania. This Contract-to-Permanent position offers the opportunity to work with cutting-edge data technologies and contribute to the optimization of data processes. The ideal candidate will have a strong background in Azure and Snowflake, along with experience in data integration and production support. This role is onsite four days a week; this requirement is non-negotiable. Please apply directly if you're interested.</p><p><br></p><p>Responsibilities:</p><p>• Develop, configure, and optimize Snowflake-based data solutions to meet business needs.</p><p>• Utilize Azure Data Factory to design and implement efficient ETL processes.</p><p>• Provide production support by monitoring and managing data workflows and tasks.</p><p>• Extract and analyze existing code from Talend to facilitate system migrations.</p><p>• Stand up and configure data repository processes to ensure seamless performance.</p><p>• Collaborate on the migration from Talend to Azure Data Factory, providing expertise on best practices.</p><p>• Leverage Python scripting to enhance data processing and automation capabilities.</p><p>• Apply critical thinking to solve complex data challenges and support transformation initiatives.</p><p>• Maintain and improve Microsoft Fabric-based solutions for data warehousing.</p><p>• Work within the context of financial services, ensuring compliance with industry standards.</p>
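<p>To illustrate the production-support side of the role, here is a minimal sketch of a reconciliation check that flags tables whose loaded row counts have drifted from the source extract. The table names and counts are illustrative; a real run would pull both sides from Snowflake via its connector.</p>

```python
def reconcile(source_counts, target_counts):
    """Return tables whose loaded counts do not match the source."""
    return sorted(
        table for table, n in source_counts.items()
        if target_counts.get(table) != n
    )

# Illustrative nightly-load counts: one table is short ten rows.
source = {"claims": 1_000_000, "members": 52_000}
target = {"claims": 1_000_000, "members": 51_990}
print(reconcile(source, target))   # ['members']
```

<p>A check like this is cheap to run after every pipeline execution and gives an unambiguous signal for alerting before analysts see stale numbers.</p>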
<p>We are looking for an experienced Python Data/ML/AI Engineer to join our team in Tampa, Florida. This is a contract position with the potential for a long-term opportunity, offering the chance to work on challenging projects while contributing to the design and development of advanced data solutions. The ideal candidate will possess a strong background in data engineering, analytics, and database management, and will play a key role in transforming data into actionable insights.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement business intelligence solutions tailored to user requirements and organizational goals.</p><p>• Collaborate with stakeholders to gather technical and functional requirements, ensuring the successful creation of reporting solutions.</p><p>• Design, model, and maintain databases and data marts that support analytical and reporting needs.</p><p>• Build and manage ETL processes to efficiently load data into data repositories.</p><p>• Monitor and enhance data quality, recommending governance practices and controls for self-service analytics.</p><p>• Create automated validation and reconciliation checks to ensure data accuracy and integrity.</p><p>• Design data warehouses and implement dimensional modeling techniques such as star and snowflake schemas.</p><p>• Utilize programming languages like Python to clean, merge, and reshape data for analysis.</p><p>• Write optimized queries, functions, and stored procedures for databases such as SQL Server and PostgreSQL.</p><p>• Debug and troubleshoot issues across application and database layers to ensure seamless operations.</p>