

328 results for Big Data Engineer

Sr. Data Engineer
  • Atlanta, GA
  • onsite
  • Permanent
  • 125000.00 - 135000.00 USD / Yearly
  • We are looking for an experienced Senior Data Engineer to join our team in Atlanta, Georgia. This role is ideal for someone with a strong background in data architecture, cloud platforms, and analytics tools. You will play a key role in designing, building, and optimizing data systems to support business operations and decision-making.

    Responsibilities:
    • Develop and maintain scalable data models and database designs to support business needs.
    • Implement and manage data integration workflows using ETL processes and tools.
    • Build and optimize data lakes and LakeHouse architectures on Azure platforms.
    • Utilize Microsoft Fabric and Azure Databricks to create advanced data solutions.
    • Design and develop dashboards and reports using Power BI to provide actionable insights.
    • Ensure data governance by establishing policies, procedures, and standards for data use.
    • Collaborate with cross-functional teams to align data strategies with organizational goals.
    • Leverage Python and SQL for data analysis, transformation, and automation.
    • Work with middleware solutions like MuleSoft for efficient data communication and integration.
    • Stay updated on emerging technologies to continuously improve data engineering practices.
  • 2026-02-17T20:28:42Z
Senior Data Engineer
  • Woodbury, MN
  • onsite
  • Permanent
  • 140000.00 - 155000.00 USD / Yearly
  • We are looking for an experienced Senior Data Engineer to join our team in Woodbury, Minnesota. In this role, you will play a key part in designing and optimizing data systems, ensuring scalability and reliability for business-critical operations. The ideal candidate will have a strong background in data engineering and a passion for leveraging technology to drive impactful solutions.

    Responsibilities:
    • Redesign and optimize complex business logic embedded in Postgres functions to improve functionality.
    • Develop scalable database schemas and create data models that are optimized for analytics and AI applications.
    • Implement database partitioning, indexing, and performance tuning to ensure data growth is supported efficiently.
    • Build and maintain production-grade data pipelines from data ingestion to end-user consumption.
    • Establish robust processes for data quality assurance, monitoring, and operational reliability within pipelines.
    • Troubleshoot and resolve data-related and performance issues directly in production environments.
    • Collaborate with cross-functional teams to ensure seamless integration of data systems into business processes.
  • 2026-03-11T14:58:42Z
Data Engineer
  • Savannah, GA
  • remote
  • Contract / Temporary to Hire
  • 35.63 - 41.25 USD / Hourly
  • We are looking for a skilled Data Engineer to support our organization's data initiatives in Savannah, Georgia. This contract-to-permanent role focuses on managing, optimizing, and securing data systems to drive strategic decision-making and improve overall performance. The ideal candidate will work closely with technology teams, analytics departments, and business stakeholders to ensure seamless data integration, accuracy, and scalability.

    Responsibilities:
    • Design and implement robust data lake and warehouse architectures to support organizational needs.
    • Develop efficient ETL pipelines to process and integrate data from multiple sources.
    • Collaborate with analytics teams to create and refine data models for reporting and visualization.
    • Monitor and maintain data systems to ensure quality, security, and availability.
    • Troubleshoot data-related issues and perform in-depth analyses to identify solutions.
    • Define and manage organizational data assets, including SaaS tools and platforms.
    • Partner with IT and security teams to meet compliance and governance standards.
    • Document workflows, pipelines, and architecture for knowledge sharing and long-term use.
    • Translate business requirements into technical solutions that meet reporting and analytics needs.
    • Provide guidance and mentorship to team members on data usage and best practices.
  • 2026-03-09T14:04:42Z
Senior Data Engineer
  • Edgewood, NY
  • onsite
  • Permanent
  • 110000.00 - 140000.00 USD / Yearly
  • We are looking for a highly skilled Senior Data Engineer to join our team in Edgewood, New York. This role is ideal for someone who is detail-oriented and has expertise in developing scalable data pipelines, modeling data structures, and optimizing data infrastructure for performance and reliability. The right candidate will play a key role in shaping our data engineering function and collaborating with cross-functional teams to deliver impactful solutions.

    Responsibilities:
    • Design and maintain efficient and scalable data pipelines to support various operational and commercial systems.
    • Develop and manage modern data warehouse infrastructure using tools such as BigQuery and dbt.
    • Integrate, transform, and organize data from multiple sources into structured, queryable formats.
    • Create and manage logical and physical data models to enhance analytics and reporting capabilities.
    • Collaborate with stakeholders to enable self-service reporting and build dashboards using platforms like Looker and Looker Studio.
    • Implement best practices for data engineering, including testing, monitoring, and ensuring pipeline reliability.
    • Optimize the performance, scalability, and cost-efficiency of data pipelines and warehouses.
    • Partner with engineering, operations, and business teams to translate data needs into scalable solutions.
    • Contribute to the improvement of engineering processes, coding standards, and documentation.
    • Mentor team members and support onboarding as the team grows.
  • 2026-02-26T15:38:45Z
Data Engineer
  • Wayne, PA
  • onsite
  • Contract / Temporary to Hire
  • 71.25 - 82.50 USD / Hourly
  • We are looking for a skilled Data Engineer to join our team in Wayne, Pennsylvania, on a contract-to-permanent basis. This role offers an exciting opportunity to design, implement, and optimize data pipelines while integrating applications with various digital marketplaces. The ideal candidate will bring strong technical expertise and a collaborative mindset to support business insights and analytics effectively.

    Responsibilities:
    • Develop and maintain data pipelines and ensure seamless application connectivity with digital marketplaces such as TikTok Shop, Shopify, and Amazon.
    • Collaborate closely with business teams to understand requirements and provide actionable analytics.
    • Lead the creation of scalable and efficient data solutions tailored to business needs.
    • Apply expertise in Python, Snowflake, and other relevant technologies to deliver high-quality results.
    • Facilitate and support integrations with e-commerce platforms, leveraging previous experience where applicable.
    • Build robust APIs and ensure their effective implementation.
    • Utilize Microsoft SQL for database management and optimization.
    • Provide technical guidance and mentorship to ensure project success.
    • Troubleshoot and resolve issues related to data workflows and integrations.
    • Continuously evaluate and improve processes to enhance efficiency and performance.
  • 2026-02-23T21:43:51Z
Data Engineer
  • Mayville, WI
  • remote
  • Temporary
  • 66.50 - 77.00 USD / Hourly
  • We are looking for an experienced Data Engineer to join our dynamic team in Mayville, Wisconsin. In this role, you will play a key part in developing and enhancing reporting and analytics solutions within a modern data environment. The ideal candidate is passionate about transforming complex data into actionable insights, improving processes, and creating reliable reporting systems. This is a long-term contract position offering the opportunity to make a meaningful impact within a collaborative and forward-thinking team.

    Responsibilities:
    • Design, develop, and maintain scalable data pipelines to support reporting and analytics needs.
    • Create and optimize Power BI dashboards and reports to deliver accessible and trustworthy insights.
    • Automate workflows using Power Automate to improve operational efficiency.
    • Develop scripts using languages such as PowerShell or Python to streamline data processing tasks.
    • Integrate and manage data sources including Oracle, Snowflake (hosted within Azure), and other enterprise systems.
    • Collaborate with stakeholders to gather requirements and deliver customized solutions.
    • Support the transition to cloud-based data environments, including Azure Data Warehouse and Fabric.
    • Troubleshoot and resolve data-related issues, ensuring data integrity and reliability.
    • Document processes and workflows to ensure clarity and maintainability.
    • Stay updated on industry trends to recommend and implement innovative data solutions.
  • 2026-03-05T20:34:06Z
Sr. Python Data Engineer
  • Boston, MA
  • remote
  • Permanent
  • 160000.00 - 180000.00 USD / Yearly
  • Data Engineer (Python / AWS)
    Location: Remote (Northeast / Greater Boston area preferred)
    Type: Full-Time
    Level: Mid-to-Senior Individual Contributor

    About the Role
    We are looking for a strong individual contributor who excels in the Python data ecosystem and enjoys building reliable, scalable data pipelines. This role sits within a data engineering group responsible for integrating large volumes of data from external partners and transforming it into usable datasets for internal teams. You’ll work with modern cloud tools while also helping our team gradually transition away from a legacy platform.

    This position is ideal for someone who wants to stay hands-on, focus on technical execution, and remain in an IC role for the next several years. We’re not looking for someone who is aiming to move immediately into architecture or leadership.

    This team is fully distributed, and although candidates in the Boston area can go into the office, the rest of the group is remote. Anyone local may occasionally sit with other teams when on site.

    What You’ll Do
    • Build and maintain ETL pipelines that ingest, clean, and aggregate data received from external vendors and large enterprise partners.
    • Develop Python-based data processing workflows deployed on AWS cloud services.
    • Work with tools such as AWS Glue, Airflow, dbt, and PySpark to support data transformations and pipeline orchestration.
    • Help modernize existing workflows and assist in the gradual migration away from a legacy data system.
    • Collaborate with internal stakeholders to understand data needs, define requirements, and ensure reliable integration of partner data feeds.
    • Troubleshoot pipeline issues, optimize performance, and improve overall system stability.
    • Contribute to best practices around code quality, testing, documentation, and data governance.
  • 2026-03-10T19:18:46Z
Data Engineer
  • Grand Rapids, MI
  • onsite
  • Permanent
  • 110000.00 - 140000.00 USD / Yearly
  • We are looking for a talented Data Engineer to join our team in Grand Rapids, Michigan. In this role, you will focus on designing, building, and optimizing robust data solutions using Snowflake and other cloud-based technologies. You will work closely with business intelligence and analytics teams to deliver scalable, high-performance data pipelines that support organizational goals.

    Responsibilities:
    • Design and implement scalable data models, schemas, and tables within Snowflake, including staging, integration, and presentation layers.
    • Develop and optimize data pipelines using Snowflake tools such as Snowpipe, Streams, Tasks, and stored procedures.
    • Ensure data security and access through role-based controls and best practices for data sharing.
    • Build and maintain ETL pipelines leveraging tools like dbt, Matillion, Fivetran, Informatica, or Azure-native solutions.
    • Integrate data from diverse sources such as APIs, IoT devices, and NoSQL databases to create unified datasets.
    • Enhance performance by utilizing clustering, partitioning, caching, and efficient warehouse sizing strategies.
    • Collaborate with cloud technologies such as AWS, Azure, or Google Cloud to support Snowflake infrastructure and operations.
    • Implement automated workflows and CI/CD processes for seamless deployment of data solutions.
    • Maintain high standards for data accuracy, completeness, and reliability while supporting governance and documentation.
    • Work closely with analytics, reporting, and business teams to troubleshoot issues and deliver scalable solutions.
  • 2026-03-03T14:48:42Z
Data Architect
  • Bloomington, MN
  • remote
  • Permanent
  • 130000.00 - 165000.00 USD / Yearly
  • We are looking for an experienced Data Architect to design and implement cutting-edge data solutions that meet the evolving needs of our enterprise. This role involves building secure, scalable, and high-performing data platforms while leveraging modern technologies and aligning with organizational goals. The ideal candidate will have expertise in cloud-based architecture, data governance, and advanced analytics, driving innovation across diverse business functions.

    Responsibilities:
    • Develop comprehensive data architecture strategies for advanced analytics and big data solutions using Azure Databricks.
    • Design and implement Databricks Delta Lake-based Lakehouse architecture, utilizing PySpark Jobs, Databricks Workflows, Unity Catalog, and Medallion architecture.
    • Optimize and configure Databricks clusters, notebooks, and workflows to ensure efficiency and scalability.
    • Integrate Databricks with Azure services such as Azure Data Lake Storage, Azure Data Factory, Azure Key Vault, and Microsoft Fabric.
    • Establish and enforce best practices for data governance, security, and cost management.
    • Collaborate with data engineers, analysts, and business stakeholders to translate functional requirements into robust technical solutions.
    • Provide technical mentoring and leadership to team members focused on Databricks and Azure technologies.
    • Monitor, troubleshoot, and enhance data pipelines and workflows to maintain reliability and performance.
    • Ensure compliance with organizational and regulatory standards regarding data security and privacy.
    • Document configurations, processes, and governance standards to support long-term scalability and usability.
  • 2026-02-13T21:28:42Z
Data Architect
  • Green Bay, WI
  • onsite
  • Permanent
  • 140000.00 - 160000.00 USD / Yearly
  • Robert Half is seeking an experienced Data Architect to design and lead scalable, secure, and high-performing enterprise data solutions. This role will focus on building next-generation cloud data platforms, driving adoption of modern analytics technologies, and ensuring alignment with governance and security standards.

    You’ll serve as a hands-on technical leader, partnering closely with engineering, analytics, and business teams to architect data platforms that enable advanced analytics and AI/ML initiatives. This position blends deep technical expertise with strategic thinking to help unlock the value of data across the organization.

    Key Responsibilities:
    • Design and implement end-to-end data architecture for big data and advanced analytics platforms.
    • Architect and build Delta Lake–based lakehouse environments from the ground up, including DLT pipelines, PySpark jobs, workflows, Unity Catalog, and Medallion architecture.
    • Develop scalable data models that meet performance, security, and governance requirements.
    • Configure and optimize clusters, notebooks, and workflows to support ETL/ELT pipelines.
    • Integrate cloud data platforms with supporting services such as data storage, orchestration, secrets management, and analytics tools.
    • Establish and enforce best practices for data governance, security, and cost optimization.
    • Collaborate with data engineers, analysts, and stakeholders to translate business requirements into technical solutions.
    • Provide technical leadership and mentorship to team members.
    • Monitor, troubleshoot, and optimize data pipelines to ensure reliability and efficiency.
    • Ensure compliance with organizational and regulatory standards related to data privacy and security.
    • Create and maintain documentation for architecture, processes, and governance standards.
  • 2026-02-17T05:43:41Z
Data Engineer
  • Tampa, FL
  • remote
  • Permanent
  • 90000.00 - 130000.00 USD / Yearly
  • IMMEDIATE HIRE NEEDED. Interviews to begin the first week of February.

    We are looking for a skilled Snowflake Marketing Data Engineer to join our team in Tampa, Florida, on a hybrid in-office schedule (2 to 3 days remote per week); fully remote candidates may be considered depending on the strength of the match.

    In this role, you will be responsible for designing, implementing, and maintaining data solutions that support critical business operations. Your expertise will play a key part in driving data-driven decisions and optimizing performance across various platforms.

    Responsibilities:
    • Develop and maintain ETL processes to efficiently extract, transform, and load data from multiple sources.
    • Analyze marketing data to uncover insights and support strategic decision-making.
    • Create and manage dashboards and reports using Power BI to visualize data effectively.
    • Integrate and leverage tools like Braze and Google Analytics to enhance data tracking and reporting capabilities.
    • Collaborate with cross-functional teams to ensure the accuracy and reliability of data systems.
    • Optimize database performance and troubleshoot any issues related to data pipelines.
    • Document data workflows and provide training to stakeholders on best practices.
    • Work with cloud-based platforms, such as Snowflake, to store and manage large datasets.
    • Ensure data security and compliance with company policies and standards.
  • 2026-03-12T17:59:05Z
Data Engineer
  • Greenville, SC
  • onsite
  • Permanent
  • - USD / Yearly
  • Robert Half is hiring! We are looking for an experienced Data Engineer to join our team in Greenville, South Carolina. This role offers an exciting opportunity to work with modern data technologies, ensuring the efficient operation and optimization of data pipelines and systems. The ideal candidate will bring a strong technical background, leadership skills, and a proactive approach to maintaining and improving data infrastructure.

    Responsibilities:
    • Oversee daily data loads and ensure the smooth operation of data pipelines and related systems.
    • Troubleshoot and resolve issues such as pipeline failures, performance bottlenecks, schema mismatches, and cloud resource disruptions.
    • Conduct root-cause analyses and implement permanent solutions to prevent recurring issues.
    • Maintain and optimize existing data processes, refactoring or retiring outdated workflows as necessary.
    • Design and build scalable data ingestion pipelines using technologies such as Azure Data Factory, Databricks, and Synapse Pipelines.
    • Collaborate with teams to create and improve operational runbooks, monitoring dashboards, and incident response workflows.
    • Develop reusable ingestion patterns for platforms like Guidewire DataHub, InfoCenter, and other business data sources.
    • Lead the implementation of real-time and event-driven data engineering solutions to enable operational insights and automation.
    • Partner with architects to modernize data workloads using advanced frameworks like Delta Lake and Medallion Architecture.
    • Mentor entry-level engineers, enforce coding best practices, and review code to ensure quality and compliance.
  • 2026-03-11T12:23:45Z
Senior Data Engineer - Python
  • West Des Moines, IA
  • remote
  • Temporary
  • - USD / Hourly
  • We are looking for an experienced Senior Data Engineer with a strong background in Python and modern data engineering tools to join our team in West Des Moines, Iowa. This is a long-term contract position that requires expertise in designing, building, and optimizing data pipelines and working with cloud-based data warehouses. If you thrive in a collaborative environment and have a passion for transforming raw data into actionable insights, we encourage you to apply.

    Responsibilities:
    • Develop, debug, and optimize Python-based data pipelines using frameworks such as Flask, Django, or FastAPI.
    • Design and implement data transformations in a data warehouse using tools like dbt, ensuring high-quality analytics-ready datasets.
    • Utilize Amazon Redshift and Snowflake for managing large-scale data storage and performing advanced querying and optimization.
    • Automate data integration processes using platforms like Fivetran and orchestration tools such as Prefect or Airflow.
    • Build reusable and maintainable data models to improve performance and scalability for analytics and reporting.
    • Conduct data analysis and visualization leveraging Python libraries such as NumPy, Pandas, TensorFlow, and PyTorch.
    • Manage version control for data engineering projects using Git and GitHub.
    • Ensure data quality through automated testing and validation processes.
    • Document workflows, code, and data transformations following best practices for readability and maintainability.
    • Optimize cloud-based data warehouse and lake platforms for performance and integration of new data sources.
  • 2026-03-06T14:58:40Z
Senior / Lead Database Engineer
  • Radnor, PA
  • onsite
  • Permanent
  • 140000.00 - 150000.00 USD / Yearly
  • The Opportunity: Be part of a dynamic team that designs, develops, and optimizes data solutions supporting enterprise-level products across diverse industries. This role provides a clear track to higher-level positions, including Lead Data Engineer and Data Architect, for those who demonstrate vision, initiative, and impact.

    Key Responsibilities:
    • Design, develop, and optimize relational database objects and data models using Microsoft SQL Server and Snowflake.
    • Build and maintain scalable ETL/ELT pipelines for batch and streaming data using SSIS and cloud-native solutions.
    • Integrate and utilize Redis for caching, session management, and real-time analytics.
    • Develop and maintain data visualizations and reporting solutions using Sigma Computing, SSRS, and other BI tools.
    • Collaborate across engineering, analytics, and product teams to deliver impactful data solutions.
    • Ensure data security, governance, and compliance across all platforms.
    • Participate in Agile Scrum ceremonies and contribute to continuous improvement within the data engineering process.
    • Support database deployments using DevOps practices, including version control (Git) and CI/CD pipelines (Azure DevOps, Flyway, Octopus, SonarQube).
    • Troubleshoot and resolve performance, reliability, and scalability issues across the data platform.
    • Mentor entry-level team members and participate in design/code reviews.
  • 2026-03-05T20:23:48Z
Data Engineer
  • Tampa, FL
  • onsite
  • Contract / Temporary to Hire
  • - USD / Hourly
  • We are looking for a skilled Data Engineer to join our team in Tampa, Florida. This is a contract-to-permanent position, offering an excellent opportunity to contribute to innovative business intelligence solutions while advancing your career. The ideal candidate will have a strong background in data engineering, database design, and analytics, with the ability to solve complex problems and deliver high-quality results.

    Responsibilities:
    • Design and implement robust business intelligence solutions tailored to meet organizational needs.
    • Collaborate with stakeholders to gather user requirements and translate them into technical and functional specifications.
    • Create and maintain databases and data marts that support analytics and reporting activities.
    • Develop and optimize ETL processes to efficiently load data into data marts.
    • Monitor and ensure the accuracy, consistency, and quality of data within databases and reporting systems.
    • Recommend and implement governance practices to improve self-service BI and analytics capabilities.
    • Develop automated data validation checks to maintain data integrity and accuracy.
    • Utilize dimensional modeling and star/snowflake schemas to design effective data warehouses.
    • Troubleshoot and debug issues across application and database layers to ensure smooth operations.
    • Perform exploratory data analysis to identify trends, anomalies, and areas for improvement.
  • 2026-03-04T17:04:09Z
Senior Database Engineer
  • Salt Lake City, UT
  • remote
  • Contract / Temporary to Hire
  • 65.00 - 73.00 USD / Hourly
  • We are looking for a Senior Database Engineer to provide expert technical leadership for our global, cloud-based data infrastructure. This role involves designing, operating, and optimizing scalable, secure, and resilient database systems to support enterprise-scale workloads across AWS and Azure. As this is a contract position with the possibility of becoming permanent, it offers an excellent opportunity to contribute to the development of cutting-edge database solutions while collaborating with cross-functional teams.

    Responsibilities:
    • Design and manage multi-region database architectures across AWS and Azure to support geo-distributed workloads.
    • Architect and maintain relational, NoSQL, and document databases such as Snowflake, PostgreSQL, DynamoDB, Cosmos DB, and MongoDB.
    • Lead hands-on database migrations between cloud platforms and legacy systems with a focus on scalability and reliability.
    • Implement indexing strategies, optimize queries, and establish scaling patterns for handling large datasets and real-time applications.
    • Enhance database performance to ensure high availability, low latency, and cost efficiency at an enterprise level.
    • Support and refine data ingestion workflows and pipeline integrations using tools like AWS Glue, Step Functions, Lambda, and Azure Data Factory.
    • Collaborate with Data Engineering teams to develop streaming solutions using Kafka, Kinesis, and AWS services.
    • Apply robust security measures, including encryption, access controls, and secrets management, to protect database systems.
    • Develop disaster recovery strategies and maintain backup solutions to ensure data integrity and availability.
    • Monitor database systems using tools like CloudWatch, Azure Monitor, and Datadog, ensuring optimal reliability and performance.
  • 2026-03-12T21:23:42Z
AWS/Databricks Engineer
  • Houston, TX
  • onsite
  • Temporary
  • - USD / Hourly
  • We are looking for an experienced AWS/Databricks Engineer to join our team in Houston, Texas. This is a long-term contract position ideal for professionals with a strong background in data engineering and cloud technologies. The role will focus on leveraging Python and Databricks to optimize data processes and enhance system performance.

    Responsibilities:
    • Develop and implement scalable data engineering solutions using Python and Databricks.
    • Collaborate with cross-functional teams to design and optimize data workflows.
    • Migrate and enhance existing Python scripts to Databricks for improved functionality.
    • Utilize cloud technologies to support data integration and analytics processes.
    • Implement algorithms and data visualization methods to present actionable insights.
    • Design and maintain APIs to streamline data interactions and integrations.
    • Work with tools like Apache Kafka, Spark, and Hadoop to manage large-scale data systems.
    • Perform data analysis and develop strategies to improve system efficiency.
    • Ensure high-quality data pipelines and address performance bottlenecks.
    • Stay updated on emerging trends in data engineering and recommend innovative solutions.
  • 2026-03-09T13:33:41Z
Data Analyst / Engineer
  • Woburn, MA
  • remote
  • Temporary
  • 55.41 - 64.16 USD / Hourly
  • We are looking for a skilled Data Analyst / Engineer to join our team on a contract basis remotely. This role focuses on financial data processing, automation, and reporting within a dynamic environment. The ideal candidate will excel at managing data workflows, automating manual processes, and delivering accurate insights to support business decisions.

    Responsibilities:
    • Extract and reconcile financial data from multiple databases, ensuring accuracy and consistency across accounts receivable, accounts payable, and general ledger lanes.
    • Automate manual reporting processes by developing repeatable daily and month-end pipelines for reliable and auditable data.
    • Design and oversee data workflows across development, production, and utility databases, ensuring secure and efficient access.
    • Create and deliver advanced Excel-based reports using macros, formulas, and Power Query to enhance usability for finance teams.
    • Implement data validation and snapshot techniques to support reconciliation and decision-making processes.
    • Ensure the traceability and accuracy of financial data by establishing robust controls and audit mechanisms.
    • Collaborate with stakeholders to understand reporting requirements and translate them into scalable solutions.
    • Utilize expertise in SQL and Teradata Data Warehouse to optimize database objects and queries for performance.
    • Develop and maintain documentation for automated processes and data workflows to ensure clarity and continuity.
  • 2026-03-13T12:28:40Z
Informatica Cloud Data Governance Catalog Specialist
  • Torrance, CA
  • remote
  • Temporary
  • 53.00 - 55.00 USD / Hourly
  • We are looking for an experienced Informatica Cloud Data Governance Catalog Specialist to join our team in Southern California. This position involves working on-site four days per week and offers a long-term contract opportunity. The ideal candidate will have a strong background in data governance, analytics, and business intelligence tools, coupled with a proactive approach to problem-solving and collaboration.

    Responsibilities:
    • Create catalog quality reports to monitor and enhance data governance metrics across domains and sub-domains.
    • Develop and showcase data governance dashboards tailored to different user roles, including Data Owners, Stewards, Engineers, and Privacy Officers.
    • Collaborate with business and IT teams, including data stewards, catalog architects, and platform owners, to implement governance solutions.
    • Execute profiling, sampling, and scanner setups using Informatica tools to ensure data quality.
    • Apply expertise in metadata management, data modeling, and large-scale data analysis to support governance initiatives.
    • Design and implement both traditional relational and modern big-data architectures based on organizational requirements.
    • Utilize business intelligence tools such as Power BI and Tableau to create actionable insights and reports.
    • Define compliance procedures and produce audit reports to meet regulatory requirements.
    • Establish and support governance councils and operational frameworks using data catalog tools.
    • Facilitate metadata ingestion and ensure adherence to data security and quality standards.
  • 2026-02-12T17:58:41Z
Senior Software Engineer
  • Atlanta, GA
  • remote
  • Permanent
  • 120000.00 - 140000.00 USD / Yearly
  • We are seeking a Senior Software Engineer – AI Solutions to help design and implement AI-driven capabilities within an established enterprise SaaS environment. This is a hands-on role for a strong full-stack engineer with practical LLM experience who can contribute at both the architectural and implementation levels. <br> In this position, you will support the evolution of AI-enabled product features, integrate large language models into existing systems, and help move initiatives from concept through production deployment. You’ll contribute to system design discussions, apply sound AI development practices, and build scalable, maintainable services that align with established enterprise standards. <br> You will collaborate with a small, focused engineering team in a highly interactive environment and report directly to engineering leadership.
  • 2026-03-03T19:28:45Z
Software Engineer
  • Hanover, NH
  • onsite
  • Permanent
  • - USD / Yearly
  • <p><strong>Location:</strong> Hybrid — <em>2 days per month on-site in New Hampshire</em></p><p><strong>Employment Type:</strong> Full-Time</p><p><strong>About the Role</strong></p><p>We’re seeking a talented <strong>Software Engineer</strong> with deep experience in <strong>Oracle APEX</strong> and <strong>PL/SQL</strong>. You should also have a strong background integrating third-party applications like <strong>Salesforce</strong>. This role is ideal for someone who enjoys collaborating with cross-functional teams, designing scalable solutions, and enhancing business systems through thoughtful engineering and integrations.</p><p><br></p><p>As part of our team, you’ll play a key role in building and maintaining applications that drive critical business workflows. You’ll leverage your Oracle APEX expertise to architect solutions and your integration experience to ensure smooth data flows between platforms.</p><p>This is a <strong>hybrid position</strong>, requiring <strong>two days per month on-site in New Hampshire</strong> for team collaboration, planning, or project workshops.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, and maintain applications using <strong>Oracle Application Express (APEX)</strong>.</li><li>Build, optimize, and troubleshoot <strong>integrations with third-party systems</strong>, including Salesforce and other enterprise platforms.</li><li>Develop APIs, data pipelines, and middleware solutions to support seamless cross-system communication.</li><li>Collaborate with business stakeholders to gather requirements and translate them into technical specifications.</li><li>Ensure application performance, security, and reliability through best practices.</li><li>Participate in code reviews, testing, deployment, and documentation of software solutions.</li><li>Support ongoing enhancements, bug fixes, and system improvements.</li></ul><p><strong>Required Qualifications</strong></p><ul><li><strong>Hands-on experience with Oracle APEX</strong> development.</li><li>Proven experience designing and implementing <strong>Salesforce integrations</strong> (REST/SOAP APIs, middleware tools, or direct platform integration).</li><li>Strong proficiency with <strong>SQL, PL/SQL</strong>, and Oracle database structures.</li><li>Experience working with APIs, integration frameworks, and data transformation workflows.</li><li>Solid understanding of software development best practices, including version control, testing, and documentation.</li><li>Excellent analytical, troubleshooting, and communication skills.</li><li>Ability to work in a hybrid environment and be on-site in New Hampshire <strong>twice per month</strong>.</li></ul><p><strong>Preferred Qualifications</strong></p><ul><li>Experience with additional integration platforms (e.g., MuleSoft, Boomi, Workato).</li><li>Background working in enterprise environments or supporting mission-critical systems.</li><li>Familiarity with Agile methodologies.</li><li>Knowledge of secure coding practices and data governance.</li></ul>
  • 2026-03-09T18:08:44Z
Full Stack Software Engineer (React / Node)
  • Englewood Cliffs, NJ
  • onsite
  • Permanent
  • 110000.00 - 140000.00 USD / Yearly
  • <p>We are seeking a reliable, business-aligned Full Stack Delivery Engineer to take ownership of critical systems and ensure consistent, predictable delivery across our technology stack.</p><p><br></p><p>This role is ideal for a developer who values shipping working software, collaborating across disciplines, and operating within real-world constraints. You will work on production systems that directly impact operations, revenue, and customer experience.</p><p><br></p><p>We are not looking for a “10x developer”. We are looking for someone who finishes what they start, documents what they build, and treats deadlines seriously.</p><p><br></p><p><strong>What Success Looks Like in This Role</strong></p><ul><li>Features ship when expected</li><li>Progress is visible weekly</li><li>Estimates are conservative and reliable</li><li>Systems are understandable by others</li><li>No single person becomes a bottleneck</li></ul><p><strong>Key Responsibilities</strong></p><p><strong>Delivery & Execution</strong></p><ul><li>Deliver production-ready features tied to clear milestones and acceptance criteria</li><li>Break work into small, testable, demoable increments</li><li>Communicate risks early and adjust plans based on reality—not optimism</li></ul><p><strong>Front-End Development</strong></p><ul><li>Build and maintain responsive front-end applications using modern frameworks such as React or Next.js</li><li>Integrate front-end components cleanly with backend APIs</li><li>Prioritize usability and operational clarity over visual perfection</li></ul><p><strong>Back-End & Systems Integration</strong></p><ul><li>Build and maintain backend services and APIs (Node.js / TypeScript; Rust familiarity is a plus)</li><li>Integrate with third-party services including Stripe, shipping providers, and messaging systems</li><li>Work with inventory, asset tracking, and operational data systems</li></ul><p><strong>Data & Infrastructure</strong></p><ul><li>Design and work with relational databases (PostgreSQL / MySQL preferred)</li><li>Write safe, understandable data migrations</li><li>Support logging, monitoring, and observability for production systems</li></ul><p><strong>Collaboration & Accountability</strong></p><ul><li>Work closely with operations, product, and leadership—not just engineers</li><li>Document APIs, assumptions, and system behavior</li><li>Participate in code reviews with an emphasis on clarity and maintainability</li><li>Accept review and feedback without defensiveness</li></ul><p><br></p>
  • 2026-02-25T08:03:49Z
Full Stack Engineer
  • New York, NY
  • onsite
  • Temporary
  • 61.75 - 71.50 USD / Hourly
  • Must have skills: <br>• 3–6 years of professional software engineering experience, with a strong portfolio of full stack development work. <br>• Proficiency in Python, including experience with web frameworks such as Flask or Dash. <br>• Experience integrating frontend applications with RESTful APIs and backend services. <br>• Experience with relational and non-relational databases (SQL, MongoDB, and/or Snowflake) using Python. <br>• Designing data models for effective data storage and retrieval (preferably SQL, MongoDB, Snowflake). <br>• Debugging, issue resolution, and troubleshooting. <br>• Developing systems integrated with cloud services, such as storage or secrets management (preferably AWS). <br>• Designing and troubleshooting ETL pipelines. <br>• Developing REST APIs using Python frameworks (preferably Flask). <br>• Publishing and maintaining Python packages, and building Python CLI tools. <br>• Deploying REST APIs in containerized environments (Kubernetes) and working with other developers on the team to integrate those APIs with web applications. <br> <br>Nice to have skills: <br>• Exposure to financial systems, SEC API, and/or corporate credit modeling is strongly preferred. <br>• Familiarity with UX design tools (Figma) and a solid understanding of the design-engineering hand-off process. <br>• Familiarity with deployment pipelines and CI/CD tools (preferably GitLab). <br>• Configuring observability and alerting services (preferably Datadog and Opsgenie). <br>• Containerized development and deployment (e.g., Docker, Kubernetes). <br>• Writing infrastructure as code (preferably Terraform). <br>• Integrating managed authentication services (preferably Auth0). <br>• Familiarity with LLM document parsing and data framework services (preferably LlamaParse and LlamaIndex). <br>• Familiarity with LLM observability tooling (preferably Weave). <br>• Experience with the OpenAI SDK. <br>• Experience with vector databases (preferably MongoDB).
  • 2026-02-18T13:53:44Z
IT, Systems Engineer 2
  • New Haven, CT
  • onsite
  • Permanent
  • 70000.00 - 105000.00 USD / Yearly
  • We are looking for a dedicated Systems Engineer to manage and maintain a multi-node Linux server environment, supporting instructional and research activities. This role involves ensuring the reliability and performance of IT infrastructure, providing technical expertise for Linux systems, and collaborating with stakeholders to meet specialized computing needs. The ideal candidate will play a key role in optimizing and securing IT solutions while documenting workflows and procedures to uphold operational excellence.<br><br>Responsibilities:<br>• Administer and maintain a multi-node Linux server environment, including associated workstations used for teaching and research.<br>• Troubleshoot and resolve complex Linux server and workstation issues, utilizing tools like Ansible for automation and configuration management.<br>• Oversee the operation of a small data center, ensuring uninterrupted support for engineering courses and research activities.<br>• Perform system performance tuning, security hardening, and monitoring to ensure optimal operation and reliability.<br>• Implement and document workflows, procedures, and technical standards to enhance system continuity and reliability.<br>• Collaborate with faculty, researchers, and technical staff to address specialized computing requirements.<br>• Build, configure, and document IT infrastructure to align with best practices and service level objectives.<br>• Monitor and analyze performance metrics, identifying areas for improvement and ensuring system efficiency.<br>• Serve as a technical liaison, providing support and maintaining communication with internal and external stakeholders.<br>• Develop and implement robust and secure IT solutions tailored to the needs of the organization.
  • 2026-02-17T12:28:42Z
Sr. Software Engineer - Integrations (iPaaS)
  • Atlanta, GA
  • remote
  • Temporary
  • 65.00 - 72.00 USD / Hourly
  • <p><strong>Overview</strong></p><p>Our client is seeking a Senior Software Engineer to add to their team as they continue building out integrations on a weekly basis. This is a 90-day contract-to-hire position and is 100% remote. Client can only hire in these approved states: Florida, Georgia, Iowa, Kentucky, Maryland, Michigan, Missouri, North Carolina, Nebraska, New York, Ohio, Pennsylvania, South Carolina, South Dakota, Tennessee, Texas, Virginia, Washington, Wisconsin, or West Virginia.</p><p><br></p><p><strong>Key Responsibilities </strong> </p><ul><li>Architect and Implement Integrations Framework: Develop a scalable and resilient integrations framework that prioritizes ETL techniques and data pipeline efficiency. </li><li>Technical Leadership & Mentorship: Lead and mentor a team of 3 engineers, promoting a culture of extreme ownership, accountability, and clear, effective communication. </li><li>Develop Data Integrations: Design and develop robust integrations with third-party systems, emphasizing data extraction, transformation, and loading combined with API-driven approaches. </li><li>Establish Best Practices: Define and enforce best practices for integration design, development, documentation, and open team communication. </li><li>Collaborate with Stakeholders: Work closely with product managers, engineering teams, and other stakeholders, ensuring alignment with business objectives through transparent and proactive communication. </li><li>Oversee Project Delivery: Manage end-to-end delivery of integration projects, ensuring timely completion and accountability at every stage. </li><li>Drive Innovation: Lead initiatives to innovate our integration strategies and technologies, continuously improving our data handling and ETL processes.</li></ul>
  • 2026-03-10T13:08:42Z