<p>Architect and lead the modern data platform using <strong>Microsoft Fabric</strong>. You’ll define data models, pipelines, governance, and performance patterns that power analytics at scale.</p><p><strong>What You’ll Do</strong></p><ul><li>Design end‑to‑end architectures leveraging Fabric (OneLake, Lakehouses, Warehouses)</li><li>Define medallion/layered models, dimensional designs, and semantic layers (see the sketch below)</li><li>Lead ingestion and transformation with Dataflows Gen2 / Data Factory / Notebooks</li><li>Establish governance (data quality, lineage, security, RLS/OLS)</li><li>Optimize performance and cost; standardize reusable patterns</li><li>Mentor data engineers/analysts; review solutions and set best practices</li><li>Partner with business stakeholders to translate use cases into scalable models</li></ul><p><br></p>
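<p>For illustration, a minimal PySpark sketch of the kind of bronze-to-silver medallion step referenced above, assuming a Fabric notebook attached to a Lakehouse (where a spark session is provided) and hypothetical table names:</p>
<pre><code># Minimal sketch of a bronze-to-silver step in a Microsoft Fabric notebook.
# Assumes the notebook is attached to a Lakehouse, so `spark` is predefined;
# table names are hypothetical.
from pyspark.sql import functions as F

# Read raw (bronze) records already landed in the Lakehouse
bronze = spark.read.table("bronze_orders")

# Cleanse and conform into the silver layer
silver = (
    bronze
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("order_total") >= 0)
)

# Persist as a managed Delta table for downstream semantic models
silver.write.format("delta").mode("overwrite").saveAsTable("silver_orders")
</code></pre>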
<ul><li>Develop data models, database schemas, and integration patterns that support digital credentials, transcripts, apostilles, and vital records.</li><li>Create and enforce data governance policies ensuring compliance with FERPA, state regulations, and government data security standards.</li><li>Define data standards and metadata management strategies across both platforms.</li><li>Lead the technical strategy for data integration between legacy systems and modern cloud-based solutions.</li></ul><p><strong>Security & Compliance</strong></p><ul><li>Ensure data architecture meets SOC 2 Type II requirements and government security standards.</li><li>Design and implement data encryption, access controls, and audit mechanisms for sensitive credential and vital record data.</li><li>Collaborate with security teams to conduct threat modeling and implement data protection measures.</li><li>Maintain compliance with privacy regulations, including FERPA, state data protection laws, and records retention requirements.</li></ul><p><strong>Platform Integration & Development</strong></p><ul><li>Architect data pipelines and ETL processes for credential issuance, verification, and digital record management.</li><li>Design APIs and integration layers that enable secure data exchange with educational institutions and government agencies.</li><li>Work with development teams to implement data solutions in cloud environments (AWS, Azure, or similar).</li><li>Optimize database performance and scalability to handle high-volume credential processing and verification requests.</li></ul><p><strong>Analytics & Reporting</strong></p><ul><li>Design data warehousing and business intelligence solutions supporting operational and strategic decision-making.</li><li>Create data models that enable reporting on credential issuance, verification trends, and system usage across both platforms.</li><li>Develop KPIs and metrics frameworks to measure platform performance and customer satisfaction.</li></ul><p><strong>Collaboration & Leadership</strong></p><ul><li>Mentor development teams on data best practices, modeling techniques, and architectural patterns.</li><li>Document data architecture, create technical specifications, and maintain architecture decision records.</li><li>Participate in vendor evaluations and technology selection for data-related tools and platforms.</li></ul><p><strong>Qualifications</strong></p><p><br></p>
We are looking for an experienced Data Architect to design and implement cutting-edge data solutions that meet the evolving needs of our enterprise. This role involves building secure, scalable, and high-performing data platforms while leveraging modern technologies and aligning with organizational goals. The ideal candidate will have expertise in cloud-based architecture, data governance, and advanced analytics, driving innovation across diverse business functions.<br><br>Responsibilities:<br>• Develop comprehensive data architecture strategies for advanced analytics and big data solutions using Azure Databricks.<br>• Design and implement Databricks Delta Lake-based Lakehouse architecture, utilizing PySpark Jobs, Databricks Workflows, Unity Catalog, and Medallion architecture.<br>• Optimize and configure Databricks clusters, notebooks, and workflows to ensure efficiency and scalability.<br>• Integrate Databricks with Azure services such as Azure Data Lake Storage, Azure Data Factory, Azure Key Vault, and Microsoft Fabric.<br>• Establish and enforce best practices for data governance, security, and cost management.<br>• Collaborate with data engineers, analysts, and business stakeholders to translate functional requirements into robust technical solutions.<br>• Provide technical mentoring and leadership to team members focused on Databricks and Azure technologies.<br>• Monitor, troubleshoot, and enhance data pipelines and workflows to maintain reliability and performance.<br>• Ensure compliance with organizational and regulatory standards regarding data security and privacy.<br>• Document configurations, processes, and governance standards to support long-term scalability and usability.
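<p>As a hedged illustration of the Lakehouse patterns listed above, a minimal PySpark sketch of a bronze-to-silver upsert on Databricks using Delta Lake and Unity Catalog three-part names; the catalog, schema, and column names are hypothetical:</p>
<pre><code># Sketch only: incremental upsert into a silver Delta table via MERGE.
# Assumes a Databricks cluster (spark provided) with Unity Catalog;
# all object names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# New and changed rows from the bronze layer
updates = (
    spark.read.table("main.bronze.customers_raw")
    .withColumn("updated_at", F.to_timestamp("updated_at"))
)

silver = DeltaTable.forName(spark, "main.silver.customers")

# Upsert: update matching customers, insert new ones
(
    silver.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
</code></pre>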
<p>Robert Half is seeking an experienced Data Architect to design and lead scalable, secure, and high-performing enterprise data solutions. This role will focus on building next-generation cloud data platforms, driving adoption of modern analytics technologies, and ensuring alignment with governance and security standards.</p><p><br></p><p>You’ll serve as a hands-on technical leader, partnering closely with engineering, analytics, and business teams to architect data platforms that enable advanced analytics and AI/ML initiatives. This position blends deep technical expertise with strategic thinking to help unlock the value of data across the organization.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and implement end-to-end data architecture for big data and advanced analytics platforms.</li><li>Architect and build Delta Lake–based lakehouse environments from the ground up, including DLT pipelines, PySpark jobs, workflows, Unity Catalog, and Medallion architecture.</li><li>Develop scalable data models that meet performance, security, and governance requirements.</li><li>Configure and optimize clusters, notebooks, and workflows to support ETL/ELT pipelines.</li><li>Integrate cloud data platforms with supporting services such as data storage, orchestration, secrets management, and analytics tools.</li><li>Establish and enforce best practices for data governance, security, and cost optimization.</li><li>Collaborate with data engineers, analysts, and stakeholders to translate business requirements into technical solutions.</li><li>Provide technical leadership and mentorship to team members.</li><li>Monitor, troubleshoot, and optimize data pipelines to ensure reliability and efficiency.</li><li>Ensure compliance with organizational and regulatory standards related to data privacy and security.</li><li>Create and maintain documentation for architecture, processes, and governance standards.</li></ul>
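<p>For context, a short sketch of what a Delta Live Tables (DLT) pipeline definition for a medallion flow can look like; it assumes a Databricks DLT pipeline runtime (which provides spark), and the source path and table names are hypothetical:</p>
<pre><code># Hedged sketch of a DLT pipeline: bronze ingestion plus a validated silver table.
# Runs inside a Databricks DLT pipeline; path and names are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events ingested from cloud storage (bronze).")
def bronze_events():
    return (
        spark.readStream.format("cloudFiles")      # Auto Loader
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/raw/events")
    )

@dlt.table(comment="Validated, typed events (silver).")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")   # data-quality expectation
def silver_events():
    return (
        dlt.read_stream("bronze_events")
        .withColumn("event_ts", F.to_timestamp("event_ts"))
    )
</code></pre>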
We are looking for an experienced Financial Data Analyst to join our dynamic team in Houston, Texas. This role involves analyzing complex financial data, creating models, and providing actionable insights to support strategic decision-making. The ideal candidate will have a strong background in financial analysis and modeling, coupled with the ability to present findings in a clear and impactful manner.<br><br>Responsibilities:<br>• Develop and maintain advanced financial models in Excel to support budgeting, forecasting, and long-term planning.<br>• Perform scenario, sensitivity, and what-if analyses to evaluate potential outcomes and risks.<br>• Conduct in-depth ad hoc analyses to address strategic business questions and opportunities.<br>• Prepare detailed reports on financial performance, emphasizing key trends and actionable recommendations.<br>• Track and evaluate critical financial metrics to monitor performance against goals.<br>• Analyze profit and loss statements, balance sheets, and cash flow data to identify trends and variances.<br>• Collaborate with the business intelligence team to design and prototype dashboards and management reports using Power BI.<br>• Assist in the preparation of annual budgets and forecasts by working closely with management.<br>• Conduct market research to understand industry trends and assess competitive positioning.<br>• Support financial due diligence efforts by creating pro forma financials for acquisitions and partnerships.
<p>We are seeking detail-oriented and analytical Entry Level Financial Data Analysts to support large-scale financial data and communications review initiatives. This role involves analyzing, annotating, and interpreting structured and unstructured financial data—including written and voice-based communications—to help build and enhance advanced data models and monitoring systems.</p><p><br></p><p>This position is ideal for recent graduates with strong academic backgrounds in finance-related disciplines who are interested in financial markets, data analysis, and emerging financial technologies.</p><p><br></p><p>Key Responsibilities</p><ul><li>Review, classify, and annotate financial communications to accurately identify transaction intent and key financial details.</li><li>Analyze written and voice-based financial content to extract structured insights from unstructured data.</li><li>Transcribe and annotate recorded communications with a high degree of accuracy.</li><li>Evaluate financial transactions and instruments, including options, swaps, forwards, and other derivatives.</li><li>Identify financial, trading, or compliance-related elements within communications and datasets.</li><li>Support the development and validation of high-quality training datasets for automated data extraction and monitoring systems.</li><li>Perform detailed quality checks to ensure data integrity, consistency, and accuracy.</li><li>Utilize Microsoft Excel (pivot tables, VLOOKUP/XLOOKUP, formulas) to organize, analyze, and report findings.</li><li>Maintain productivity and precision in a deadline-driven, remote work environment.</li></ul>
<p><strong>Model Development & Maintenance</strong></p><ul><li>Develop and maintain actuarial models and data-driven processes using Python, R, and SQL to support insurance pricing, reserving, and risk management.</li><li>Implement and enhance month-end processes, rate change calculations, and ad-hoc analyses with a focus on completeness, accuracy, and consistency to ensure data is of the highest quality.</li><li>Work with the Actuarial and Financial Planning and Analysis (FP&A) teams to automate and improve model performance using Python-based scripting and automation.</li><li>Ensure accuracy, consistency, and efficiency of actuarial models and methodologies.</li></ul><p><strong>Traditional Actuarial Tasks</strong></p><ul><li>Support reserving analysis to estimate unpaid claim liabilities, primarily in partnership with internal and external actuaries.</li><li>Develop and maintain loss development triangles and incurred but not reported (IBNR) calculations based on both financial and operational data (e.g., claims closing ratios); a worked sketch follows below.</li><li>Support the development and validation of actuarial assumptions for pricing, reserving, and forecasting.</li><li>Develop and regularly report on rate change calculations, including bifurcation of exposure changes from pure rate by line of business.</li></ul><p><strong>Financial Modeling & Risk Assessment</strong></p><ul><li>Conduct stress testing and scenario analysis to assess financial impacts.</li><li>Develop, update, and maintain models for predictive analytics, profitability analysis, and business planning.</li><li>Assist in forecasting financial performance and evaluating risk exposure.</li></ul>
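<p>As a worked illustration of the reserving tasks above, a minimal chain-ladder sketch in Python (pandas): it computes volume-weighted age-to-age development factors from a small cumulative loss triangle and projects unpaid amounts. All figures are made up:</p>
<pre><code># Hedged illustration: a basic chain-ladder estimate from a loss development
# triangle, the kind of reserving calculation described above. Figures are made up.
import pandas as pd

# Cumulative paid losses by accident year (rows) and development age in months (columns)
triangle = pd.DataFrame(
    {12: [1000.0, 1100.0, 1200.0],
     24: [1500.0, 1650.0, None],
     36: [1800.0, None, None]},
    index=[2021, 2022, 2023],
)

# Age-to-age development factors: volume-weighted averages over observed pairs
ages = list(triangle.columns)
factors = {}
for a, b in zip(ages, ages[1:]):
    pairs = triangle[[a, b]].dropna()
    factors[(a, b)] = pairs[b].sum() / pairs[a].sum()

# Project each accident year to ultimate, then back out unpaid (IBNR-style) amounts
projected = triangle.copy()
for a, b in zip(ages, ages[1:]):
    missing = projected[b].isna()
    projected.loc[missing, b] = projected.loc[missing, a] * factors[(a, b)]

ultimate = projected[ages[-1]]
latest_paid = triangle.ffill(axis=1)[ages[-1]]
print("Unpaid estimate by accident year:")
print(ultimate - latest_paid)
</code></pre>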
We are looking for a Data Visualization Specialist to join our team in Cincinnati, Ohio. In this role, you will leverage your expertise in business systems analysis to transform complex data into actionable insights through effective visualization techniques. The ideal candidate will have a passion for understanding business processes and applying technical skills to create impactful data solutions.<br><br>Responsibilities:<br>• Collaborate with stakeholders to analyze business processes and identify data visualization needs.<br>• Design and develop interactive dashboards and reports using tools such as Power BI, Tableau, and Qlik.<br>• Apply best practices in data visualization to ensure clear and accurate representation of data.<br>• Utilize data modeling techniques to enhance reporting capabilities and support business decision-making.<br>• Perform data analysis to identify opportunities for enrichment and aggregation, utilizing tools like Excel.<br>• Work with on-premises or cloud-based data platforms to ensure seamless integration and accessibility of data.<br>• Provide technical expertise and guidance on visualization tools and techniques to team members and clients.<br>• Stay updated on industry trends and advancements in data visualization technologies.
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p><br></p>
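<p>To make the indexing and query-optimization duties concrete, a self-contained Python sketch using the standard-library sqlite3 module (purely illustrative; a production system would use a full RDBMS, and the schema here is hypothetical):</p>
<pre><code># Self-contained sketch of the indexing/query-optimization work described above,
# using Python's built-in sqlite3 so it runs anywhere. Schema is illustrative only.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
cur.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.1) for i in range(200_000)],
)
conn.commit()

def timed_lookup():
    start = time.perf_counter()
    cur.execute("SELECT COUNT(*), SUM(total) FROM orders WHERE customer_id = ?", (42,))
    cur.fetchone()
    return time.perf_counter() - start

before = timed_lookup()                       # full table scan
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = timed_lookup()                        # index seek
print(f"scan: {before:.4f}s  indexed: {after:.4f}s")
</code></pre>
<p>On realistic data volumes the indexed lookup typically runs orders of magnitude faster than the full scan, which is the effect the performance-tuning responsibilities above aim for.</p>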
We are looking for a skilled Data Engineer to join our logistics team in Lithonia, Georgia. In this role, you will design, construct, and maintain data pipelines and infrastructure to support analytics and operational systems. You will play a key role in enabling data visualization tools, optimizing data processes, and ensuring the accuracy and availability of critical information.<br><br>Responsibilities:<br>• Design and implement data pipelines to efficiently extract, transform, and load data from multiple sources.<br>• Develop and maintain data models and storage solutions to support analytics and reporting needs.<br>• Collaborate with stakeholders to troubleshoot data inconsistencies and resolve technical issues.<br>• Utilize Tableau or Power BI to create meaningful data visualizations that drive business insights.<br>• Write and optimize database procedures, triggers, and other SQL-based functionalities.<br>• Manage and monitor databases to ensure their performance and reliability.<br>• Provide technical guidance to analysts on best practices in data governance and performance optimization.<br>• Participate in cross-functional projects to enhance data accessibility and quality across departments.<br>• Explore and integrate Python-based solutions to enhance data engineering processes.<br>• Assist in training and development related to data availability and analytics tools.
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming (see the sketch below).</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
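<p>A hedged sketch of the streaming pattern above: reading a Kafka topic (or an Event Hubs Kafka endpoint) with Spark Structured Streaming and landing it in a Delta table. The broker, topic, schema, and checkpoint path are hypothetical, and the spark session is assumed to be provided by Databricks:</p>
<pre><code># Sketch only: Kafka -> Structured Streaming -> Delta. Names are hypothetical;
# `spark` is assumed to be provided by the Databricks runtime.
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

schema = (
    StructType()
    .add("event_id", StringType())
    .add("amount", DoubleType())
)

# Parse the Kafka value payload as JSON into typed columns
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append continuously into a bronze Delta table with checkpointing
(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .outputMode("append")
    .toTable("bronze_events")
)
</code></pre>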
We are looking for an experienced Data Engineer to join our team in Cincinnati, Ohio. This long-term contract position offers the opportunity to work on cutting-edge data engineering projects while collaborating with multidisciplinary teams to deliver high-quality solutions. The ideal candidate will have a strong background in Databricks and big data technologies, along with a passion for optimizing data processes and systems.<br><br>Responsibilities:<br>• Design, build, and enhance data pipelines using Databricks Runtime, Delta Lake, Autoloader, and Structured Streaming.<br>• Implement secure and governed data access protocols utilizing Unity Catalog, workspace controls, and audit configurations.<br>• Manage and integrate structured and unstructured data from diverse sources, including APIs and cloud storage.<br>• Develop and maintain notebook-based workflows and manage jobs using Databricks Workflows and Jobs.<br>• Apply best practices for performance tuning, scalability, and cost optimization in Databricks environments.<br>• Collaborate with data scientists, analysts, and business stakeholders to deliver clean and reliable datasets.<br>• Support continuous integration and deployment processes for Databricks jobs and system configurations.<br>• Ensure high standards of data quality and security across all engineering tasks.<br>• Troubleshoot and resolve issues to maintain operational efficiency in data pipelines.
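<p>For illustration, a minimal Auto Loader sketch of the ingestion pattern named above: incremental file discovery with schema tracking, written to a Delta table. All storage paths and table names are hypothetical, and spark is assumed to come from the Databricks runtime:</p>
<pre><code># Hedged sketch of Databricks Auto Loader (cloudFiles) ingestion with
# schema tracking and evolution. Paths and table names are hypothetical.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/claims")   # inferred schema store
    .load("abfss://raw@storageacct.dfs.core.windows.net/claims/")
)

(
    stream.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/claims")
    .option("mergeSchema", "true")       # allow new columns to evolve the table
    .trigger(availableNow=True)          # process all new files, then stop
    .toTable("bronze.claims")
)
</code></pre>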
<p>As a Data Engineer at Robert Half, you will be the backbone of our data-driven decision-making process. You aren't just "moving data"; you are architecting the flow of information that powers our localized market analytics and global recruitment engines. In the DC market, this often involves handling high-compliance data environments and integrating cutting-edge AI frameworks into traditional ETL workflows.</p>
We are looking for a skilled Data Engineer to join our team in Philadelphia, Pennsylvania. In this long-term contract position, you will play a key role in managing and optimizing large-scale data pipelines and systems within the healthcare industry. Your expertise will contribute to the development of robust solutions for data processing, analysis, and integration.<br><br>Responsibilities:<br>• Design, develop, and maintain large-scale data pipelines to support business needs.<br>• Optimize data workflows using tools such as Apache Spark and Python.<br>• Implement and manage ETL processes for seamless data transformation and integration.<br>• Collaborate with cross-functional teams to ensure data solutions align with organizational goals.<br>• Monitor and troubleshoot data systems to ensure consistent performance and reliability.<br>• Work with Apache Hadoop and Apache Kafka to enhance data storage and streaming capabilities.<br>• Ensure compliance with data security and privacy standards.<br>• Analyze and interpret complex datasets to provide actionable insights.<br>• Document processes and solutions to support future scalability and maintenance.
<p>**** For a faster response on this position, please send a message to Jimmy Escobar on LinkedIn or send an email to Jimmy.Escobar@roberthalf(.com) with your resume. You can also call my office number at 424-270-9193****</p><p><br></p><p>We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Los Angeles, California. In this role, you will design, build, and maintain robust data infrastructure to support business operations and analytics. This position offers an opportunity to work with cutting-edge technologies and contribute to impactful projects. This is a hybrid position: three days a week on-site and two days remote.</p><p><br></p><p>Responsibilities:</p><p>• Develop and implement scalable data pipelines using Apache Spark, Hadoop, and other big data technologies.</p><p>• Collaborate with cross-functional teams to understand and translate business requirements into technical solutions.</p><p>• Create and maintain ETL processes to ensure data integrity and accessibility.</p><p>• Manage and monitor large-scale data processing systems, ensuring seamless operation.</p><p>• Design and deploy solutions for real-time data streaming using Apache Kafka.</p><p>• Perform advanced data analytics to support business decision-making.</p><p>• Troubleshoot and resolve issues related to data infrastructure and applications.</p><p><br></p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This position offers the opportunity to work remotely while contributing to critical data management and integration efforts. The ideal candidate will have hands-on experience with customer master data in ECC6, and the ability to create, maintain, and manage data effectively.<br><br>Responsibilities:<br>• Develop and maintain customer master data within ECC6, ensuring data accuracy and consistency.<br>• Create new customer profiles and manage existing ones, maintaining high standards of data integrity.<br>• Support the integration process by working with custom tables related to customer data.<br>• Collaborate with cross-functional teams to ensure seamless data flow and effective data management.<br>• Utilize tools such as Apache Spark, Python, and ETL processes to extract, transform, and load data efficiently.<br>• Leverage Apache Hadoop for scalable data storage and processing solutions.<br>• Implement Apache Kafka to enable real-time data streaming and integration.<br>• Troubleshoot and resolve data-related issues, ensuring system reliability.<br>• Provide documentation and training to stakeholders on data management processes.<br>• Stay updated on industry best practices and emerging technologies to enhance data engineering workflows.
<p>Robert Half has a brand-new opening for a Data Engineer with a reputable client here in Tampa.</p><p>Full-time position, HYBRID schedule out of their Tampa office.</p><p>Compensation ranges from $100K to $115K depending on experience.</p><p>*Medical benefits are also 100% covered after the onboarding period*</p><p><br></p><p>This is a Data Engineer (BI/ETL) role focused on building and optimizing ETL/ELT pipelines, migrating and cleaning data between internal, vendor, and legacy systems, and improving data quality. SQL is absolutely required, and this role leans heavily into backend data movement, not dashboarding.</p><p><br></p><p><strong>Top Skills Looking For:</strong></p><ul><li>Strong <strong>SQL</strong> (non-negotiable)</li><li>Experience designing and maintaining <strong>ETL/ELT pipelines</strong> using frameworks such as <strong>Apache Airflow, dbt (Data Build Tool), or equivalent orchestration systems</strong>, with the ability to schedule, monitor, and recover complex multi-stage jobs (see the sketch below).</li><li><strong>Experience moving data across multiple systems</strong></li></ul><p>Description:</p><p>Build and maintain business intelligence solutions covering law enforcement, detention, human resources, and finance, including integration of data from agency criminal justice partners.</p><ul><li>Design and develop BI solutions.</li><li>Gather user requirements, develop technical and functional requirements, produce reporting solutions, and document the design and development process, metadata, and business rules.</li><li>Model, implement, and maintain databases and data marts to support BI reporting.</li><li>Develop extract, transform, load (ETL) processes to support the loading of data into data marts.</li><li>Monitor the data quality of existing databases and data marts, and recommend governance and controls around self-service BI/analytics considering the evolution of the BI industry's best practices.</li><li>Perform other related duties as required.</li></ul>
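<p>As a hedged example of the orchestration skills above, a small Apache Airflow DAG sketch with a multi-stage extract-transform-load flow, retries, and a nightly schedule; the task bodies are placeholders and all names are hypothetical:</p>
<pre><code># Sketch of a multi-stage Airflow DAG with scheduling and retry/recovery settings.
# Task bodies are placeholders; DAG and connection names are hypothetical.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull rows from the source system")

def transform(**_):
    print("clean and conform records")

def load(**_):
    print("load into the data mart")

with DAG(
    dag_id="nightly_data_mart_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",               # nightly at 02:00
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 &gt;&gt; t2 &gt;&gt; t3                    # run the stages in order
</code></pre>
<p>In a real deployment the placeholder callables would move data between the systems named above, with failures surfaced through Airflow's monitoring and retry machinery rather than print statements.</p>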
We are looking for a skilled Data Engineer to join our team in Tampa, Florida. This is a Contract to permanent position, offering an excellent opportunity to contribute to innovative business intelligence solutions while advancing your career. The ideal candidate will have a strong background in data engineering, database design, and analytics, with the ability to solve complex problems and deliver high-quality results.<br><br>Responsibilities:<br>• Design and implement robust business intelligence solutions tailored to meet organizational needs.<br>• Collaborate with stakeholders to gather user requirements and translate them into technical and functional specifications.<br>• Create and maintain databases and data marts that support analytics and reporting activities.<br>• Develop and optimize ETL processes to efficiently load data into data marts.<br>• Monitor and ensure the accuracy, consistency, and quality of data within databases and reporting systems.<br>• Recommend and implement governance practices to improve self-service BI and analytics capabilities.<br>• Develop automated data validation checks to maintain data integrity and accuracy.<br>• Utilize dimensional modeling and star/snowflake schemas to design effective data warehouses.<br>• Troubleshoot and debug issues across application and database layers to ensure smooth operations.<br>• Perform exploratory data analysis to identify trends, anomalies, and areas for improvement.
We are looking for a talented Data Engineer to join our team in Grand Rapids, Michigan. In this role, you will focus on designing, building, and optimizing robust data solutions using Snowflake and other cloud-based technologies. You will work closely with business intelligence and analytics teams to deliver scalable, high-performance data pipelines that support organizational goals.<br><br>Responsibilities:<br>• Design and implement scalable data models, schemas, and tables within Snowflake, including staging, integration, and presentation layers.<br>• Develop and optimize data pipelines using Snowflake tools such as Snowpipe, Streams, Tasks, and stored procedures.<br>• Ensure data security and access through role-based controls and best practices for data sharing.<br>• Build and maintain ETL pipelines leveraging tools like dbt, Matillion, Fivetran, Informatica, or Azure-native solutions.<br>• Integrate data from diverse sources such as APIs, IoT devices, and NoSQL databases to create unified datasets.<br>• Enhance performance by utilizing clustering, partitioning, caching, and efficient warehouse sizing strategies.<br>• Collaborate with cloud technologies such as AWS, Azure, or Google Cloud to support Snowflake infrastructure and operations.<br>• Implement automated workflows and CI/CD processes for seamless deployment of data solutions.<br>• Maintain high standards for data accuracy, completeness, and reliability while supporting governance and documentation.<br>• Work closely with analytics, reporting, and business teams to troubleshoot issues and deliver scalable solutions.
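<p>For illustration, a hedged Python sketch of the Snowflake Streams and Tasks pattern mentioned above, issued through the Snowflake Python connector: a stream tracks changes on a staging table and a scheduled task merges them downstream. Credentials and all object names are hypothetical:</p>
<pre><code># Sketch only: change tracking (Stream) plus a scheduled incremental MERGE (Task).
# Connection parameters and object names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Track inserts/updates/deletes on the staging table
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE stg_orders")

# A scheduled task that merges pending changes into the presentation table
cur.execute("""
CREATE OR REPLACE TASK merge_orders
  WAREHOUSE = ETL_WH
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  MERGE INTO dim_orders d USING orders_stream s ON d.order_id = s.order_id
  WHEN MATCHED THEN UPDATE SET d.total = s.total
  WHEN NOT MATCHED THEN INSERT (order_id, total) VALUES (s.order_id, s.total)
""")
cur.execute("ALTER TASK merge_orders RESUME")   # tasks start suspended
</code></pre>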
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
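<p>As a minimal illustration of the SQL-plus-Python pipeline style described above: extract with a query, transform in pandas, and load back into a warehouse table via SQLAlchemy. The connection URL, query, and table names are hypothetical:</p>
<pre><code># Hedged ETL sketch: query -> pandas transform -> warehouse load.
# Connection URL and table names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://etl_user:***@dbhost:5432/analytics")

# Extract: pull yesterday's raw rows
df = pd.read_sql(
    text("SELECT order_id, customer_id, total FROM raw_orders "
         "WHERE order_date = CURRENT_DATE - 1"),
    engine,
)

# Transform: deduplicate and add a derived column
df = df.drop_duplicates("order_id")
df["total_bucket"] = pd.cut(
    df["total"], bins=[0, 50, 200, float("inf")], labels=["low", "mid", "high"]
)

# Load: append into the curated table
df.to_sql("fct_orders_daily", engine, if_exists="append", index=False)
</code></pre>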
We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania, on a contract to permanent basis. This role offers an exciting opportunity to design, implement, and optimize data pipelines while integrating applications with various digital marketplaces. The ideal candidate will bring strong technical expertise and a collaborative mindset to support business insights and analytics effectively.<br><br>Responsibilities:<br>• Develop and maintain data pipelines and ensure seamless application connectivity with digital marketplaces such as TikTok Shop, Shopify, and Amazon.<br>• Collaborate closely with business teams to understand requirements and provide actionable analytics.<br>• Lead the creation of scalable and efficient data solutions tailored to business needs.<br>• Apply expertise in Python, Snowflake, and other relevant technologies to deliver high-quality results.<br>• Facilitate and support integrations with e-commerce platforms, leveraging previous experience where applicable.<br>• Build robust APIs and ensure their effective implementation.<br>• Utilize Microsoft SQL for database management and optimization.<br>• Provide technical guidance and mentorship to ensure project success.<br>• Troubleshoot and resolve issues related to data workflows and integrations.<br>• Continuously evaluate and improve processes to enhance efficiency and performance.
We are looking for an experienced Data Engineer to join our team in New York, New York. In this role, you will design, build, and maintain data infrastructure to support business intelligence and analytics needs. The ideal candidate will have a strong technical background, a passion for working with complex datasets, and expertise in cloud-based data platforms.<br><br>Responsibilities:<br>• Develop, implement, and optimize ETL pipelines to ensure efficient data processing and integration.<br>• Design and maintain scalable data solutions, including data warehouses and data lakes.<br>• Collaborate with cross-functional teams to identify data requirements and deliver actionable insights.<br>• Utilize Snowflake, AWS, and other cloud-based platforms to manage data infrastructure and ensure performance optimization.<br>• Leverage Python and SQL to build robust data workflows and automate processes.<br>• Employ orchestration tools like Airflow and dbt to streamline data operations.<br>• Support data analytics and visualization efforts by enabling the creation of impactful dashboards using tools such as Tableau.<br>• Work with marketing and product data sources, including platforms like Google Analytics, to extract and integrate valuable insights.<br>• Implement CI/CD pipelines and DevOps practices to enhance data engineering processes.<br>• Ensure data security and compliance across all systems and tools.
<p>The Senior Data Engineer plays a key role in architecting, developing, and operating reliable, production-ready data solutions that enable analytics, automation, and operational processes across our client’s organization.</p><p><br></p><p>Operating within a modern, cloud-based data ecosystem, this role is responsible for bringing together data from internal platforms and external partners, transforming it into trusted, high-quality assets, and delivering it consistently to downstream users and systems. The work spans the full data lifecycle—ingestion, orchestration, transformation, and delivery—and blends advanced SQL development with Python-based pipeline and workflow automation.</p><p><br></p><p>This role sits at the intersection of data and systems engineering and works closely with Business Intelligence, Business Technology, and operational teams to ensure data solutions are scalable, dependable, and aligned with real business outcomes.</p><p><br></p>
We are looking for an experienced Data Engineer to join our dynamic team in Mayville, Wisconsin. In this role, you will play a key part in developing and enhancing reporting and analytics solutions within a modern data environment. The ideal candidate is passionate about transforming complex data into actionable insights, improving processes, and creating reliable reporting systems. This is a long-term contract position offering the opportunity to make a meaningful impact within a collaborative and forward-thinking team.<br><br>Responsibilities:<br>• Design, develop, and maintain scalable data pipelines to support reporting and analytics needs.<br>• Create and optimize Power BI dashboards and reports to deliver accessible and trustworthy insights.<br>• Automate workflows using Power Automate to improve operational efficiency.<br>• Develop scripts using languages such as PowerShell or Python to streamline data processing tasks.<br>• Integrate and manage data sources including Oracle, Snowflake (hosted within Azure), and other enterprise systems.<br>• Collaborate with stakeholders to gather requirements and deliver customized solutions.<br>• Support the transition to cloud-based data environments, including Azure Data Warehouse and Fabric.<br>• Troubleshoot and resolve data-related issues, ensuring data integrity and reliability.<br>• Document processes and workflows to ensure clarity and maintainability.<br>• Stay updated on industry trends to recommend and implement innovative data solutions.