<p>Data Engineer</p><p>On-site | Austin, TX | Contract</p><p><br></p><p>Robert Half is partnering with a financial services organization to hire a Data Engineer in Austin, TX. This contract opportunity is ideal for someone with 3 years of experience building and optimizing modern data pipelines and analytics environments. The role focuses on moving data across cloud-based platforms to support reliable reporting, stronger data visibility, and informed decision-making across the organization.</p><p><br></p><p><strong>Responsibilities:</strong></p><p>• Build and maintain data pipelines that move information from data lake environments into structured warehouse and reporting platforms.</p><p>• Develop, schedule, and optimize ETL and ELT workflows using Matillion to support dependable data delivery.</p><p>• Design and manage Snowflake data models that improve accessibility, performance, and scalability for business users.</p><p>• Partner with analytics and reporting stakeholders to prepare datasets that support Tableau dashboards and visual insights.</p><p>• Monitor data processing jobs, troubleshoot failures, and resolve quality issues to maintain trusted data assets.</p><p>• Work within AWS-based environments to support secure, efficient, and scalable data integration processes.</p><p>• Collaborate with cross-functional teams to understand data needs and translate them into practical engineering solutions.</p>
<p>We are currently seeking a Data Engineer for a contract opportunity supporting a growing data and analytics organization. This role is focused on building and maintaining modern cloud-based data infrastructure, including scalable ELT pipelines, Snowflake data solutions, and automated data workflows.</p><p>This is a hands-on engineering role where you will design, develop, and support end-to-end data systems that enable reliable reporting, analytics, and business decision-making.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain scalable ELT/ETL data pipelines and workflows</li><li>Develop and optimize Snowflake-based data warehouse solutions</li><li>Build and maintain data models and transformation logic to support analytics and reporting</li><li>Write efficient and high-quality Python and SQL code to support data engineering processes</li><li>Develop reusable data engineering frameworks and backend data services</li><li>Implement and maintain CI/CD pipelines using GitHub and related tooling</li><li>Build automated testing frameworks to ensure data quality and reliability</li><li>Create reporting and visualization solutions using tools such as Power BI</li><li>Monitor production data systems and resolve performance or reliability issues</li><li>Support continuous improvement of data architecture, processes, and standards</li></ul>
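<p>To make the "automated testing frameworks to ensure data quality" bullet above concrete, here is a minimal sketch of the kind of check such a framework might run against a Snowflake warehouse, using the snowflake-connector-python package. The environment variables, warehouse, database, and table names are hypothetical placeholders, not details of this client's environment.</p><pre>
# Minimal data-quality check sketch (all object names and credentials are hypothetical).
import os
import snowflake.connector

def check_no_null_keys(cursor, table: str, key_col: str) -> None:
    # Fail loudly if any rows are missing the business key.
    cursor.execute(f"SELECT COUNT(*) FROM {table} WHERE {key_col} IS NULL")
    null_count = cursor.fetchone()[0]
    assert null_count == 0, f"{table}.{key_col} has {null_count} NULL values"

if __name__ == "__main__":
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",
        database="ANALYTICS",
        schema="REPORTING",
    )
    try:
        check_no_null_keys(conn.cursor(), "DIM_CUSTOMER", "CUSTOMER_ID")
    finally:
        conn.close()
</pre>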
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
We are looking for a talented Data Engineer to join our team in Grand Rapids, Michigan. In this role, you will focus on designing, building, and optimizing robust data solutions using Snowflake and other cloud-based technologies. You will work closely with business intelligence and analytics teams to deliver scalable, high-performance data pipelines that support organizational goals.<br><br>Responsibilities:<br>• Design and implement scalable data models, schemas, and tables within Snowflake, including staging, integration, and presentation layers.<br>• Develop and optimize data pipelines using Snowflake tools such as Snowpipe, Streams, Tasks, and stored procedures.<br>• Ensure data security and access through role-based controls and best practices for data sharing.<br>• Build and maintain ETL pipelines leveraging tools like dbt, Matillion, Fivetran, Informatica, or Azure-native solutions.<br>• Integrate data from diverse sources such as APIs, IoT devices, and NoSQL databases to create unified datasets.<br>• Enhance performance by utilizing clustering, partitioning, caching, and efficient warehouse sizing strategies.<br>• Collaborate with cloud technologies such as AWS, Azure, or Google Cloud to support Snowflake infrastructure and operations.<br>• Implement automated workflows and CI/CD processes for seamless deployment of data solutions.<br>• Maintain high standards for data accuracy, completeness, and reliability while supporting governance and documentation.<br>• Work closely with analytics, reporting, and business teams to troubleshoot issues and deliver scalable solutions.
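<p>For readers unfamiliar with the Snowpipe, Streams, and Tasks pattern named in the posting above, the sketch below shows the general shape of a Streams-plus-Tasks incremental load, issued from Python through an existing snowflake-connector-python cursor. All object names, the warehouse, and the schedule are illustrative assumptions only, not this client's configuration.</p><pre>
# Streams + Tasks incremental-load sketch. Object names are hypothetical.
STATEMENTS = [
    # Capture row changes on the raw table.
    "CREATE OR REPLACE STREAM RAW.ORDERS_STREAM ON TABLE RAW.ORDERS",
    # Move new rows into the integration layer on a schedule,
    # but only when the stream actually has data.
    """
    CREATE OR REPLACE TASK INTEGRATION.LOAD_ORDERS
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('RAW.ORDERS_STREAM')
    AS
      INSERT INTO INTEGRATION.ORDERS
      SELECT ORDER_ID, CUSTOMER_ID, ORDER_TS, AMOUNT
      FROM RAW.ORDERS_STREAM
      WHERE METADATA$ACTION = 'INSERT'
    """,
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK INTEGRATION.LOAD_ORDERS RESUME",
]

def deploy(cursor) -> None:
    # Run each statement in order against an open Snowflake cursor.
    for stmt in STATEMENTS:
        cursor.execute(stmt)
</pre>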
<p>We are seeking a Data Engineer to support the design, development, and maintenance of scalable data solutions within a Microsoft-focused environment. This role will work closely with business and technical teams to support reporting, analytics, and data integration initiatives.</p><p>Responsibilities</p><ul><li>Develop and maintain data pipelines and ETL processes</li><li>Support data integration and transformation efforts</li><li>Build and optimize SQL queries, stored procedures, and database solutions</li><li>Assist with data warehousing and reporting initiatives</li><li>Monitor data quality, integrity, and system performance</li><li>Collaborate with analysts, developers, and business stakeholders</li><li>Support documentation and troubleshooting efforts</li></ul><p><br></p>
<p>We are looking for a Data Engineer to join a team focused on building reliable, scalable data solutions. In this role, you will create and enhance cloud-based data pipelines, organize data for analytics, and help ensure that business teams have access to trusted information. This position also partners closely with technical and non-technical stakeholders to turn reporting and data needs into practical engineering outcomes.</p><p><br></p><p>Responsibilities:</p><p>• Create and support scalable data ingestion and transformation workflows using Azure Data Factory, Databricks, and PySpark.</p><p>• Connect and consolidate data from enterprise platforms, operational databases, telematics feeds, APIs, and other internal or external sources.</p><p>• Structure and manage data within Azure Data Lake and lakehouse environments to support performance, accessibility, and long-term maintainability.</p><p>• Design curated datasets, data models, and schemas that improve usability for analytics, business intelligence, and downstream reporting.</p><p>• Apply governance and lineage practices through Unity Catalog while promoting strong data quality, consistency, and security standards.</p><p>• Work with business stakeholders and cross-functional teams to gather requirements, define technical specifications, and deliver data solutions aligned with operational needs.</p><p>• Improve pipeline stability and efficiency by troubleshooting failures, resolving performance issues, and refining storage and query strategies.</p><p>• Support Power BI reporting by preparing datasets, assisting with model improvements, and helping maintain reporting standards and governance practices.</p><p>• Use GitHub-based development practices for version control, peer review, CI/CD, and disciplined deployment processes.</p><p>• Mentor less-experienced engineers and contribute to a collaborative environment focused on continuous improvement and dependable delivery.</p>
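<p>As a small illustration of the Databricks/PySpark ingestion work described above, the sketch below reads a landed file set from Azure Data Lake and appends it to a Unity Catalog governed Delta table. It assumes a Databricks notebook where a SparkSession named spark is already available; the storage path and table name are hypothetical.</p><pre>
# PySpark ingestion sketch (assumes a Databricks notebook; names are hypothetical).
from pyspark.sql import functions as F

landing_path = "abfss://landing@examplelake.dfs.core.windows.net/telematics/"

raw = spark.read.format("json").load(landing_path)

curated = (
    raw
    .dropDuplicates(["device_id", "event_ts"])          # basic data-quality step
    .withColumn("ingested_at", F.current_timestamp())   # audit column for lineage
)

# Write to a Unity Catalog governed Delta table (catalog.schema.table).
curated.write.format("delta").mode("append").saveAsTable("main.curated.telematics_events")
</pre>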
<p>Robert Half Technology is seeking a <strong>mid-to-senior-level Data Engineer</strong> to support the modernization of an existing data environment for a client in Bellevue, Washington. This role will focus on <strong>rearchitecting data pipelines into Databricks</strong>, improving performance, and establishing scalable data architecture and governance. This is a hands-on role in a <strong>fast-paced, less structured environment</strong>, ideal for someone who takes ownership and can operate with autonomy.</p><p> </p><p><strong>Duration:</strong> Long-term contract with potential for extension or conversion</p><p><strong>Location:</strong> Bellevue, Washington (hybrid, 3 days onsite)</p><p><strong>Schedule:</strong> Monday-Friday, 9 AM-5 PM PST</p><p> </p><p><strong>Key Responsibilities</strong></p><ul><li>Rebuild and optimize existing <strong>Python-based ETL pipelines</strong> within Databricks</li><li>Design and implement scalable <strong>data ingestion and transformation processes</strong></li><li>Architect and maintain <strong>data marts and data warehouse structures</strong></li><li>Implement <strong>Medallion Architecture (Bronze, Silver, Gold layers)</strong></li><li>Improve performance of data processing workflows (reduce runtimes, optimize queries)</li><li>Support migration and consolidation of data into Databricks</li><li>Document <strong>data pipelines, tables, and architecture</strong> for governance and maintainability</li><li>Define best practices for <strong>data storage, organization, and access</strong></li><li>Ensure alignment with existing compliance and data standards</li></ul><p><br></p>
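<p>The Medallion Architecture bullet above refers to the common Bronze/Silver/Gold layering pattern in Databricks. The sketch below is one hedged illustration of that flow in PySpark, assuming a notebook with spark in scope; the table names, columns, and aggregations are placeholders rather than the client's actual model.</p><pre>
# Medallion-layer sketch (Bronze -> Silver -> Gold). Names are illustrative only.
from pyspark.sql import functions as F

# Bronze: raw ingested records, kept as loaded.
bronze = spark.table("bronze.sales_raw")

# Silver: cleaned, conformed records.
silver = (
    bronze
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sales")

# Gold: business-level aggregate for reporting and dashboards.
gold = (
    silver
    .groupBy("order_date")
    .agg(
        F.sum("amount").alias("daily_revenue"),
        F.countDistinct("order_id").alias("order_count"),
    )
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_sales")
</pre>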
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. The organization has a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate is a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Microsoft Purview and advance the organization's data governance practices.</p><p><br></p><p>Responsibilities:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li><li>Assist with connecting Microsoft Fabric to Purview.</li><li>Work with the Microsoft partner to implement the Data Governance layer of Purview (unified data catalog, data quality, data lineage, and data health management); the Data Security layer of Purview is already implemented. See attached overview.</li></ul><p><br></p><p>Qualified candidates bring experience as a Data Governance Analyst; administration experience with Microsoft Purview or a similar tool such as Collibra, Informatica, or Databricks (Purview experience preferred); excellent communication skills; and the ability to lead change, advance the data governance practice, and get buy-in from stakeholders.</p>
<p>Robert Half is seeking a Data Engineer to build, scale, and lead high‑impact data solutions. This role combines hands‑on data engineering with team leadership, mentoring, and oversight of end‑to‑end analytics pipelines that turn raw data into actionable business insights.</p><p>This role is business-facing, working with departments across the organization to deliver data solutions.</p><p>This role is onsite in Albuquerque, New Mexico.</p><p><br></p><p>What You’ll Do</p><p>Lead and mentor a team of data engineers and analysts; set standards, review work, and support professional growth</p><p>Design, build, and oversee scalable ETL pipelines using Python, SQL, SSIS, and Airflow</p><p>Develop dimensional data models using Kimball methodology</p><p>Create dashboards and reports using Power BI and SSRS</p><p>Partner with business and IT stakeholders on analytics, ad hoc reporting, and data initiatives</p><p>Ensure data quality, governance, and compliance with PCI, PII, and regulatory standards</p><p>Automate workflows and reporting using Python, PowerShell, and modern analytics tools</p><p>Other duties as needed</p><p><br></p>
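<p>Since the role above calls out Airflow for ETL orchestration, here is a minimal, hedged Airflow DAG sketch showing the general shape of a scheduled extract-and-load job. The DAG id, schedule, and task logic are hypothetical placeholders.</p><pre>
# Minimal Airflow DAG sketch (Airflow 2.4+ style schedule argument); task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull source rows from an operational system (placeholder).
    print("extracting source rows")

def load_warehouse():
    # Load transformed rows into the dimensional model (placeholder).
    print("loading warehouse tables")

with DAG(
    dag_id="nightly_reporting_etl",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # nightly at 2 AM
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)

    extract_task >> load_task
</pre>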
<ul><li>Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).</li><li>Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).</li><li>Implement and manage data orchestration and dependency management using Dagster or similar tools.</li><li>Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.</li><li>Optimize data models and storage strategies for performance, scalability, and cost efficiency.</li><li>Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.</li><li>Support CI/CD pipelines and infrastructure-as-code practices for data platforms.</li><li>Enforce data security, governance, and compliance best practices within Azure.</li></ul>
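<p>Because the list above names Dagster for orchestration and dependency management, the following is a small sketch of how two dependent Dagster assets are declared; the asset names and data are invented for illustration.</p><pre>
# Dagster asset sketch; names and data are illustrative only.
from dagster import Definitions, asset

@asset
def raw_orders():
    # Placeholder for an extract landed by Azure Data Factory / Databricks.
    return [{"order_id": 1, "amount": 42.0}, {"order_id": 2, "amount": 17.5}]

@asset
def daily_revenue(raw_orders):
    # Dagster infers the dependency on raw_orders from the parameter name.
    return sum(row["amount"] for row in raw_orders)

defs = Definitions(assets=[raw_orders, daily_revenue])
</pre>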
We are looking for a Data Engineer to help shape and strengthen the organization’s data ecosystem in Spartanburg, South Carolina. This role focuses on building scalable data structures and reliable integration solutions that support analytics, operational reporting, and long-term business goals. The ideal candidate brings a strong background in modern data platforms and enjoys partnering with technical and business teams to deliver secure, high-quality data solutions.<br><br>Responsibilities:<br>• Develop and refine data models, storage frameworks, and analytical repositories that enable efficient access to trusted information.<br>• Create scalable architecture approaches that support enterprise objectives while improving performance, reliability, and long-term maintainability.<br>• Establish data design standards, architectural patterns, and governance practices that promote consistency, quality, and security across platforms.<br>• Partner with software developers, analytics teams, and business stakeholders to translate operational needs into practical data solutions.<br>• Build and enhance data pipelines and integration processes that move information accurately across systems and support reporting and analysis.<br>• Implement processes for master data, metadata, and data quality management to strengthen governance and regulatory compliance.<br>• Assess emerging tools, cloud technologies, and platform options to recommend solutions that balance cost, scalability, and functionality.<br>• Work closely with data engineering peers to encourage strong technical alignment, knowledge sharing, and continuous improvement across the team.
<p>We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio. In this role, you will design, implement, and optimize data solutions that support business intelligence and analytics needs. If you have a passion for working with cutting-edge technologies and thrive in a fast-paced environment, this opportunity is for you.</p><p><br></p><p>Responsibilities:</p><p>• Develop and refine data models to ensure optimal performance and scalability.</p><p>• Design and implement data warehouse solutions for managing structured and unstructured data.</p><p>• Create and maintain data integration processes to support analytics and data-driven applications.</p><p>• Establish robust data quality and validation protocols to guarantee accuracy and consistency.</p><p>• Collaborate with business intelligence teams and stakeholders to gather requirements and deliver tailored solutions.</p><p>• Monitor and address issues within data pipelines, including performance bottlenecks and system errors.</p><p>• Research and adopt emerging technologies and best practices to enhance data engineering capabilities.</p>
<p>We are looking for an experienced Data Engineer to design and support data exchange solutions that connect external business partners with internal systems. This role is primarily remote, supporting multiple office locations; candidates should live in North Carolina, within two hours of Greensboro. This role focuses on building reliable integration processes, transforming structured files and API-based data, and ensuring critical information is available for reporting and operational use. The ideal candidate brings strong technical depth in data movement and troubleshooting, along with a practical understanding of manufacturing and supply chain workflows.</p><p><br></p><p>Responsibilities:</p><p>• Build and maintain business-to-business data interfaces that onboard new partner organizations and align incoming data with internal database structures.</p><p>• Develop automated workflows that ingest, transform, validate, and deliver data using file-based exchanges, APIs, and structured transaction formats such as EDI and X12.</p><p>• Configure and manage end-to-end integration processes across system interfaces, including flat-file handling, file sharing, and reporting-related data movement.</p><p>• Lead data transformation efforts through the full lifecycle by designing solutions, testing functionality, deploying processes, and stabilizing production performance.</p><p>• Investigate integration failures or data quality issues, identify root causes, and implement corrective actions to restore reliable processing.</p><p>• Partner with business intelligence and reporting teams to provide access to accurate, usable data sources that support analysis and operational decision-making.</p><p>• Apply manufacturing and supply chain process knowledge to structure data flows that support purchasing, components, orders, and assembly-related transactions.</p><p>• Use available tools and platforms to execute integration projects independently, including extracting data from enterprise applications and translating it into usable formats.</p><p>• Create scalable data pipelines that enable customer and order transactions to move through systems with minimal manual intervention.</p>
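<p>To illustrate the flat-file validation step in the partner onboarding work described above, here is a small Python sketch using only the standard library. The column names, rules, and file path are hypothetical examples of the pattern, not the actual partner specification.</p><pre>
# Flat-file validation sketch for an inbound partner feed (names are hypothetical).
import csv
from pathlib import Path

REQUIRED_COLUMNS = {"po_number", "part_number", "quantity", "ship_date"}

def validate_partner_file(path: Path) -> list:
    errors = []
    with path.open(newline="") as handle:
        reader = csv.DictReader(handle)
        missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
        if missing:
            return [f"missing columns: {sorted(missing)}"]
        for line_no, row in enumerate(reader, start=2):
            if not row["po_number"].strip():
                errors.append(f"line {line_no}: blank PO number")
            if not row["quantity"].isdigit():
                errors.append(f"line {line_no}: non-numeric quantity {row['quantity']!r}")
    return errors

if __name__ == "__main__":
    problems = validate_partner_file(Path("inbound/partner_orders.csv"))
    print("OK" if not problems else "\n".join(problems))
</pre>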
<p>Robert Half is seeking a <strong>Contract Data Engineer</strong> to support our client’s data and analytics initiatives. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that enable efficient data ingestion, transformation, and delivery. The ideal candidate has strong experience working with modern data platforms, cloud environments, and large-scale datasets.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li><strong>Data Pipeline Development:</strong> Design, build, and maintain scalable ETL / ELT pipelines to ingest, transform, and deliver data from multiple sources.</li><li><strong>Data Architecture:</strong> Develop and optimize data models, schemas, and warehouse structures to support analytics, reporting, and business intelligence needs.</li><li><strong>Cloud Data Platforms:</strong> Work within cloud environments such as <strong>AWS, Azure, or GCP</strong> to deploy and manage data solutions.</li><li><strong>Data Warehousing:</strong> Design and support enterprise data warehouses using platforms such as <strong>Snowflake, Redshift, BigQuery, or Azure Synapse</strong>.</li><li><strong>Big Data Processing:</strong> Develop solutions using big data technologies such as <strong>Spark, Databricks, Kafka, and Hadoop</strong> when required.</li><li><strong>Performance Optimization:</strong> Tune queries, pipelines, and storage solutions for performance, scalability, and cost efficiency.</li><li><strong>Data Quality & Reliability:</strong> Implement monitoring, validation, and alerting processes to ensure data accuracy, integrity, and availability.</li><li><strong>Collaboration:</strong> Work closely with Data Analysts, Data Scientists, Software Engineers, and business stakeholders to understand requirements and deliver data solutions.</li><li><strong>Documentation:</strong> Maintain detailed documentation for pipelines, data flows, and system architecture.</li></ul><p><br></p>
<p>A manufacturing and distribution company is looking for a Data Engineer with 3+ years of experience to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will play a crucial part in developing, optimizing, and maintaining the data infrastructure that supports analytics, business intelligence initiatives, and data-driven decision-making using Snowflake, Matillion, and other tools. The position is in-office so you can work closely with the team. No third parties, please.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization.</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines using modern frameworks and tools.<br>• Implement ETL processes to ensure accurate and efficient data transformation.<br>• Optimize data storage and retrieval systems for performance and scalability.<br>• Collaborate with cross-functional teams to understand data requirements and deliver solutions.<br>• Utilize Apache Spark and Hadoop for large-scale data processing.<br>• Work with Databricks to streamline data workflows and enhance analytics.<br>• Apply machine learning techniques using tools like scikit-learn and Pandas.<br>• Integrate Kafka for real-time data streaming and processing.<br>• Analyze and troubleshoot data-related issues to ensure system reliability.<br>• Document processes and workflows to support future development and maintenance.
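<p>Given the Kafka and Spark items in the list above, the sketch below shows one common way these fit together: a Spark Structured Streaming job reading a Kafka topic. The broker address, topic, and checkpoint path are hypothetical, and the Spark-Kafka connector package is assumed to be available on the cluster.</p><pre>
# Spark Structured Streaming from Kafka; connection details are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers bytes; cast the payload to a string for downstream parsing.
decoded = stream.select(F.col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream
    .format("console")                       # swap for a Delta/Parquet sink in practice
    .option("checkpointLocation", "/tmp/chk/orders")
    .start()
)
query.awaitTermination()
</pre>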
<p>Seeking a Data Engineer to build and maintain data pipelines and reporting systems.</p><p><strong>Responsibilities</strong></p><ul><li>Design and maintain ETL processes</li><li>Work with large datasets in SQL</li><li>Optimize database performance</li><li>Support BI/reporting teams</li></ul><p><br></p>
We are looking for a Data Engineer to help transform business data into reliable, accessible insights that support decision-making across the organization. This role partners with teams such as asset management, acquisitions, accounting, and HR to build reporting solutions, improve data quality, and streamline access to critical information. Based in Los Angeles, California, the position is well suited for someone who enjoys combining technical expertise with business collaboration in a fast-moving environment.<br><br>Responsibilities:<br>• Build and enhance dashboards, reports, and automated data workflows using tools such as Python, Excel, and Power BI.<br>• Translate business questions into scalable reporting and analytics solutions by working closely with stakeholders across multiple departments.<br>• Examine large and complex datasets to uncover trends, exceptions, and actionable insights that support operational and strategic decisions.<br>• Design and maintain data extraction, transformation, and loading processes, including query development and performance optimization.<br>• Monitor data accuracy through regular validation, issue resolution, and ongoing improvements to data governance practices.<br>• Support and guide entry-level BI team members by reviewing work, sharing best practices, and encouraging career growth.<br>• Explain technical findings in a clear way to non-technical audiences to promote understanding and adoption of data solutions.<br>• Lead or contribute to cross-functional initiatives that improve data accessibility, usability, and reporting effectiveness across the business.<br>• Administer BI platforms to maintain performance, reliability, and appropriate security controls.<br>• Deliver user support and training to help employees make effective use of reporting tools and interpret data confidently.
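<p>As a hedged example of the Python-based reporting automation mentioned above, this sketch pulls rows from a SQL source into pandas and writes an Excel extract for business users. The connection string, schema, columns, and output file are invented placeholders, and the openpyxl package is assumed for the Excel write.</p><pre>
# Reporting automation sketch; connection details and names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://user:password@host:5432/portfolio")

query = """
    SELECT property_id, period, occupancy_rate, net_operating_income
    FROM reporting.asset_performance
    WHERE period >= '2024-01-01'
"""

df = pd.read_sql(query, engine)

# Simple derived metric before publishing the file to stakeholders.
df["noi_per_occupancy_point"] = df["net_operating_income"] / df["occupancy_rate"]

df.to_excel("asset_performance_summary.xlsx", index=False)
</pre>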
We are looking for a Data Governance Manager to lead enterprise data governance efforts in Greenville, South Carolina. This role will shape policies, accountability models, and quality standards that strengthen how data is managed, protected, and used across the organization. The ideal candidate brings strong leadership skills, hands-on experience with governance tooling and Python, and the ability to partner with technical and business teams to advance a data-driven culture.<br><br>Responsibilities:<br>• Direct the development and execution of companywide data governance practices, ensuring policies and controls support business objectives.<br>• Lead and mentor data-focused team members while coordinating governance-related initiatives, priorities, and deliverables.<br>• Partner with leaders across business, technology, legal, and compliance functions to define governance needs and implement practical solutions.<br>• Create and maintain governance standards for data quality, stewardship, ownership, and lifecycle management from intake through archival or disposal.<br>• Oversee controls for data classification, access permissions, sharing protocols, and reference data to safeguard sensitive information.<br>• Establish processes for metadata, lineage, and asset documentation within Atlan to improve transparency and usability of enterprise data.<br>• Drive data quality improvement efforts through profiling, validation, and remediation strategies that increase consistency and trust in reporting and operations.<br>• Promote organization-wide understanding of data governance by delivering training, guidance, and clear communication on governance value and responsibilities.<br>• Ensure adherence to corporate policies and applicable privacy expectations through consistent oversight and enforcement of governance practices.
<p>Robert Half is seeking an experienced Data Architect to design and lead scalable, secure, and high-performing enterprise data solutions. This role will focus on building next-generation cloud data platforms, driving adoption of modern analytics technologies, and ensuring alignment with governance and security standards.</p><p><br></p><p>You’ll serve as a hands-on technical leader, partnering closely with engineering, analytics, and business teams to architect data platforms that enable advanced analytics and AI/ML initiatives. This position blends deep technical expertise with strategic thinking to help unlock the value of data across the organization.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and implement end-to-end data architecture for big data and advanced analytics platforms.</li><li>Architect and build Delta Lake–based lakehouse environments from the ground up, including DLT pipelines, PySpark jobs, workflows, Unity Catalog, and Medallion architecture.</li><li>Develop scalable data models that meet performance, security, and governance requirements.</li><li>Configure and optimize clusters, notebooks, and workflows to support ETL/ELT pipelines.</li><li>Integrate cloud data platforms with supporting services such as data storage, orchestration, secrets management, and analytics tools.</li><li>Establish and enforce best practices for data governance, security, and cost optimization.</li><li>Collaborate with data engineers, analysts, and stakeholders to translate business requirements into technical solutions.</li><li>Provide technical leadership and mentorship to team members.</li><li>Monitor, troubleshoot, and optimize data pipelines to ensure reliability and efficiency.</li><li>Ensure compliance with organizational and regulatory standards related to data privacy and security.</li><li>Create and maintain documentation for architecture, processes, and governance standards.</li></ul><p><br></p>
<p>We are looking for an experienced Data Architect to join our team on a long-term contract basis in Cleveland, Ohio. This role involves designing scalable enterprise data platforms, ensuring data quality, and implementing robust data governance frameworks. You will play a pivotal role in leveraging Azure services and AI-driven analytics to optimize data architecture and enhance operational insights.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement enterprise-wide data architectures and canonical data models.</p><p>• Establish data ownership protocols, governance standards, and quality benchmarks.</p><p>• Analyze and stabilize data pipelines across distributed systems and platforms.</p><p>• Perform detailed data analysis and reconciliation to identify and resolve integrity issues.</p><p>• Design and implement monitoring tools to validate and improve data quality.</p><p>• Enhance data observability and lineage tracking to streamline governance processes.</p><p>• Utilize AI-driven analytics and automation to detect anomalies and accelerate decision-making.</p><p>• Collaborate with engineering teams to align data architecture with integration services and platform requirements.</p><p>• Optimize event-driven and distributed data systems for scalability and reliability.</p><p>• Conduct hands-on work with Azure services, such as Azure Data Factory and Synapse, to implement solutions.</p>
<p>Data Architect (Product & Commerce Data)</p><p><strong>Overview</strong></p><p>We are seeking a <strong>Data Architect</strong> to lead the design, structure, and governance of product and consumer-facing data across a modern digital commerce ecosystem. This individual will play a critical role in shaping how product data is modeled, integrated, and delivered to create a seamless and personalized customer experience.</p><p>This is a highly cross-functional role, partnering with digital, product, marketing, and data governance teams to ensure data consistency, scalability, and usability across platforms.</p><p><br></p><p><strong>What You’ll Do</strong></p><ul><li>Architect and optimize <strong>product and consumer data models</strong> to support a dynamic B2C digital experience</li><li>Serve as the <strong>technical owner of product content and commerce data architecture</strong>, ensuring stability and scalability</li><li>Design and manage integrations between <strong>product data systems and digital commerce platforms</strong></li><li>Partner with Product Owners, content teams, and developers to bring product data to life across digital channels</li><li>Identify gaps and inefficiencies in current data structures and recommend <strong>enhancements vs. full rebuilds</strong></li><li>Establish and enforce <strong>data governance, data quality, and stewardship best practices</strong></li><li>Collaborate with internal teams to align <strong>product data strategy with customer experience goals</strong></li><li>Support the evolution of a multi-phase digital transformation, including ongoing enhancements to web and commerce platforms</li></ul><p><br></p><p><strong>Day-to-Day</strong></p><ul><li>Evaluate and refine product data structures to ensure consistency across platforms</li><li>Work hands-on with architecture and integration challenges between systems</li><li>Collaborate across business and technical teams to translate product complexity into scalable data models</li><li>Support ongoing content migration and platform enhancements</li><li>Act as a key voice in bridging <strong>business needs and technical execution</strong></li></ul><p><br></p><p><br></p>
We are looking for a skilled Content Manager to join our team in New York, New York. In this long-term contract position, you will play a pivotal role in shaping and delivering impactful content strategies that support product marketing efforts. This is an excellent opportunity for a detail-oriented individual to collaborate across diverse teams and create compelling narratives that drive engagement and success.<br><br>Responsibilities:<br>• Develop and manage content strategies for new and existing products, ensuring alignment with product marketing objectives.<br>• Create and expand content programming tailored to partner audiences, addressing their unique needs and interests.<br>• Collaborate with cross-functional teams, including Product Marketing, Editorial, Creative, Communications, and Education, to craft clear and impactful external narratives.<br>• Oversee the end-to-end workflow for product marketing content, including intake, prioritization, production, approvals, and distribution.<br>• Contribute to the creation of case studies that highlight client success stories and support commercial initiatives.<br>• Write and edit a variety of content formats, such as articles, launch narratives, explainers, sales enablement assets, and multimedia materials.<br>• Utilize performance data and analytics to refine content strategies and enhance their effectiveness over time.<br>• Ensure clarity and quality in storytelling, simplifying complex concepts for diverse audiences.
<p>We are looking for a Content Manager to support digital content operations for a manufacturing organization based in Parsippany, New Jersey. This Long-term Contract position will focus on maintaining accurate, engaging, and well-organized product content across multiple brand websites while partnering with cross-functional teams to deliver a strong customer experience. The ideal candidate brings hands-on expertise in content publishing platforms, digital asset coordination, and website quality assurance within a fast-paced environment.</p><p><br></p><p>Responsibilities:</p><p>• Oversee product onboarding and ongoing content maintenance across several brand websites, ensuring information, imagery, and supporting assets remain current and consistent.</p><p>• Create, edit, and publish web content using platforms such as Adobe Experience Manager, Shopify, and Klaviyo while applying user experience best practices.</p><p>• Coordinate with product, marketing, and global stakeholders to gather pricing, documents, creative assets, and other materials needed for accurate product launches.</p><p>• Lead assigned digital initiatives by tracking milestones, communicating status updates, addressing stakeholder questions, and keeping deliverables aligned with expectations.</p><p>• Monitor project risks and operational challenges, develop practical solutions, and take early action to prevent delays or quality issues.</p><p>• Execute quality checks for landing pages, promotional offers, site copy, and functional site elements to confirm content accuracy and site performance before and after publishing.</p><p>• Maintain an organized library of digital content and creative assets, and share newly available materials with internal teams to support ongoing campaigns and site updates.</p><p>• Investigate and resolve publishing or production problems by partnering with internal technical teams and external development resources to restore timely site operations.</p><p>• Work with cross-functional partners to translate business needs into clear digital requirements and implement content updates that support customer-facing goals.</p><p><br></p><p>02720-0013424624</p><p><br></p>
<p>We are looking for an experienced Oracle Database Administrator to join our team in Albuquerque, New Mexico. In this Contract-to-permanent position, you will play a crucial role in designing, maintaining, and optimizing database systems to ensure efficient data management and access across the organization. This opportunity is ideal for someone with a strong technical background and a passion for improving database performance and reliability.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement robust Database Management Systems (DBMS) to support organizational data needs.</p><p>• Plan, coordinate, and monitor database-related projects and routine maintenance activities.</p><p>• Develop strategies to minimize data redundancy and optimize single-source data utilization.</p><p>• Support development teams by translating logical database designs into physical models and creating database objects using Data Definition Language (DDL).</p><p>• Implement and manage database backup and recovery procedures, ensuring data restoration capabilities.</p><p>• Provide 24/7 on-call support to resolve database issues and maintain system reliability.</p><p>• Monitor and fine-tune databases to ensure optimal performance and response times.</p><p>• Collaborate with systems development teams to improve application performance using efficient coding techniques.</p><p>• Participate in DBMS upgrades, including testing, data conversion, and implementation.</p><p>• Enforce database standards and procedures while maintaining knowledge of emerging technologies and business systems.</p><p>Other duties as needed </p>