<p>Position Summary</p><p>We are seeking a Data Engineer to design, build, and optimize data pipelines that support analytics, reporting, and AI initiatives.</p><p>Key Responsibilities</p><ul><li>Build ETL/ELT pipelines using Databricks, Spark, or Azure Data Factory.</li><li>Design and manage data warehouses or data lakes.</li><li>Ensure data quality, reliability, and scalability.</li><li>Collaborate with analysts and data scientists to provide clean, structured data.</li><li>Monitor and optimize performance of data pipelines.</li></ul><p><br></p>
We are looking for a driven Data Engineer to join our team in Middleton, Wisconsin. This position offers an exciting opportunity to work on cutting-edge data solutions, collaborating with an agile team to design, implement, and maintain robust data pipelines and reporting tools. The ideal candidate will have hands-on experience with modern data engineering technologies and a strong commitment to delivering high-quality results.<br><br>Responsibilities:<br>• Develop and maintain data pipelines using tools such as Azure Data Factory and Databricks.<br>• Create and optimize Power BI dashboards to visualize and report key business metrics.<br>• Collaborate with business analysts to translate requirements into actionable data solutions.<br>• Support the integration and management of data lakes using Azure Data Lake.<br>• Participate in daily standups and agile team activities to ensure project alignment.<br>• Implement ETL processes to extract, transform, and load data effectively.<br>• Work with Apache Spark and other frameworks to process large datasets efficiently.<br>• Troubleshoot and resolve data-related issues to ensure seamless operations.<br>• Provide technical expertise in DAX to enhance Power BI reporting capabilities.<br>• Contribute to the customization of client-specific data solutions as needed.
<p>We are looking for an experienced Data Engineer to join our dynamic team in Raleigh, North Carolina. In this role, you will lead the design and implementation of data solutions, focusing on creating robust data pipelines and integrating sales data into Snowflake. This position offers a chance to collaborate closely with executive leadership and stakeholders while shaping the future of our data architecture. This is a hybrid employee position (3 days per week remote).</p><p><br></p><p><strong>Responsibilities:</strong></p><p>• Design and implement scalable data solutions to integrate sales data into Snowflake.</p><p>• Develop and optimize data ingestion pipelines to ensure efficient data processing.</p><p>• Provide technical leadership to define and execute the architectural vision for data systems.</p><p>• Advocate for best practices in data engineering, security, and architecture.</p><p>• Mentor and support team members at entry and mid-level positions to enhance their technical skills.</p><p>• Collaborate with product managers, software developers, and other stakeholders to deliver impactful data solutions.</p><p>• Stay current with emerging technologies and trends to drive innovation within the team.</p><p>• Ensure data access controls and security measures are effectively implemented.</p>
<p>Position Title: Data Engineer</p><p>Location: Onsite – Houston Area</p><p>Compensation:</p><ul><li>Base Salary: $120K–$130K</li><li>Bonus: ~10% </li></ul><p>Overview:</p><p>We’re hiring a Data Engineer to lead the development and optimization of enterprise-grade data pipelines and infrastructure. This role is essential to enabling high-quality analytics, reporting, and business intelligence across the organization. The ideal candidate will bring deep expertise in Azure-based data tools, strong SQL and BI capabilities, and a collaborative mindset to support cross-functional data initiatives.</p><p>Key Responsibilities:</p><ul><li>Design, build, and maintain scalable data pipelines using Azure Data Factory, Microsoft Fabric, PySpark, and Spark SQL</li><li>Develop ETL processes to extract, transform, and load data from diverse sources</li><li>Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality, actionable data solutions</li><li>Ensure data integrity, security, and compliance with governance standards</li><li>Optimize pipeline performance and troubleshoot data infrastructure issues</li><li>Manage the data platform roadmap, including capacity planning and vendor coordination</li><li>Support reporting and analytics needs using Power BI and SQL</li><li>Drive continuous improvement in data quality, accessibility, and literacy across the organization</li><li>Monitor usage, deprecate unused datasets, and implement data cleansing processes</li><li>Lead initiatives to enhance data modeling, visualization standards, and reporting frameworks</li></ul><p><br></p>
We are looking for a Senior Data Engineer to join our agile data engineering team in Philadelphia, Pennsylvania. This role is vital in creating, optimizing, and deploying high-quality data solutions that support strategic business objectives. The ideal candidate will collaborate with cross-functional teams to ensure efficient data processes, robust governance, and innovative technical solutions.<br><br>Responsibilities:<br>• Design and implement secure and scalable data pipelines and products.<br>• Troubleshoot and enhance existing data workflows and queries for optimal performance.<br>• Develop and enforce data governance, security, and privacy standards.<br>• Translate complex business requirements into clear and actionable technical specifications.<br>• Participate in project planning, identifying key milestones and resource requirements.<br>• Collaborate with stakeholders to evaluate business needs and prioritize data solutions.<br>• Conduct technical peer reviews to ensure the quality of data engineering deliverables.<br>• Support production operations and resolve issues efficiently.<br>• Contribute to architectural improvements and innovation within data systems.
Job Summary: We are seeking an experienced Data Engineer with 8+ years of experience to architect, build, and maintain scalable data infrastructure and pipelines. This role is pivotal in enabling advanced analytics and data-driven decision-making across the organization. The ideal candidate will have deep expertise in data architecture, cloud platforms, and modern data engineering tools.<br><br>Key Responsibilities:<br>• Design, develop, and maintain scalable and efficient data pipelines and ETL processes.<br>• Architect data solutions that support business intelligence, machine learning, and operational reporting.<br>• Collaborate with cross-functional teams to gather requirements and deliver data solutions aligned with business goals.<br>• Ensure data quality, integrity, and security across all systems and platforms.<br>• Optimize data workflows and troubleshoot performance issues.<br>• Integrate structured and unstructured data from various internal and external sources.<br>• Implement and enforce data governance policies and best practices.<br><br>Preferred Skills:<br>• Experience with real-time data streaming technologies (e.g., Kafka, Spark Streaming).<br>• Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).<br>• Knowledge of CI/CD pipelines and version control systems (e.g., Git).<br>• Relevant certifications in cloud or data engineering technologies.
<p><strong>Job Title: Data Engineer</strong></p><p><strong>Location: Sherman Oaks, CA (On-site)</strong></p><p><strong>Salary: Up to $160,000 annually</strong></p><p><strong>Company Overview:</strong> We are seeking a talented Data Engineer to join our team in Sherman Oaks. As a Data Engineer, you will play a crucial role in maintaining and optimizing our data platform, with a primary focus on ensuring high performance and accurate data delivery. You will also have opportunities to contribute to the development and maintenance of our web platforms and to tackle various backend and DevOps tasks related to general web engineering.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and maintain scalable data pipelines and processes.</li><li>Architect and optimize data warehouse structures to support analytics and reporting needs.</li><li>Develop and implement algorithms to transform raw data into actionable insights.</li><li>Collaborate with the marketing team to understand and address data-driven business needs.</li><li>Automate manual data workflows to improve efficiency and reduce errors.</li><li>Manage ETL/ELT processes, ensuring seamless data integration and transformation.</li><li>Work with tools such as Snowflake and Segment to enhance data infrastructure.</li><li>Utilize a variety of programming languages and environments to support data engineering tasks.</li></ul><p>For immediate consideration, direct message Reid Gormly on LinkedIn and Apply Now</p><p><br></p>
<p>We are seeking a highly skilled <strong>Senior Data Engineer</strong> to join our growing team. In this role, you’ll design, build, and optimize data pipelines and architectures that support analytics, reporting, and business intelligence across the organization. You’ll play a critical role in enabling data-driven decisions by ensuring data is accessible, reliable, and scalable.</p><p>As a senior member of the team, you’ll also mentor junior engineers, collaborate with business stakeholders, and contribute to the overall data strategy and architecture.</p><p><strong>What You’ll Do</strong></p><ul><li>Design, develop, and maintain <strong>scalable data pipelines and ETL processes</strong>.</li><li>Build and optimize <strong>data models</strong> to support analytics and reporting (Power BI, Fabric, or similar).</li><li>Work with stakeholders to <strong>understand business requirements</strong> and translate them into technical solutions.</li><li>Ensure <strong>data quality, governance, and security</strong> across platforms.</li><li>Collaborate with data scientists, analysts, and business units to deliver <strong>insightful, actionable data</strong>.</li><li>Leverage cloud technologies (Azure, AWS, or GCP) to support data infrastructure.</li><li>Troubleshoot and optimize <strong>SQL queries, data processes, and integrations</strong>.</li><li>Stay current with emerging <strong>data engineering tools and best practices</strong>.</li></ul><p><br></p>
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable and performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. In this role, you will contribute to the development and optimization of data pipelines, ensuring the seamless integration of platforms and tools. Based in Jericho, New York, this position offers an exciting opportunity to work with advanced technologies in the non-profit sector.<br><br>Responsibilities:<br>• Design and implement scalable data pipelines to support organizational goals.<br>• Develop and maintain data integration processes using tools such as Apache Spark and Python.<br>• Collaborate with cross-functional teams to leverage Tableau for data visualization and reporting.<br>• Work extensively with Salesforce and NetSuite to optimize data flow and system functionality.<br>• Utilize ETL processes to transform and prepare data for analysis and decision-making.<br>• Apply expertise in Apache Hadoop and Apache Kafka to enhance data processing capabilities.<br>• Troubleshoot and resolve issues within cloud-based and on-premise data systems.<br>• Ensure the security and integrity of all data management practices.<br>• Provide technical support and recommendations for system improvements.
<p>We are seeking a <strong>Senior Data Engineer</strong> with deep expertise in <strong>Microsoft’s data ecosystem</strong> to design, build, and optimize enterprise data solutions. This role is ideal for someone passionate about turning complex data into actionable insights by leveraging <strong>Azure Data Services, Power BI, DAX, and Microsoft Fabric</strong>.</p><p><strong>What You’ll Do</strong></p><ul><li>Design and maintain <strong>scalable data pipelines</strong> and ETL/ELT processes within <strong>Azure Data Factory</strong> and <strong>Synapse Analytics</strong>.</li><li>Architect and optimize <strong>data models</strong> to support reporting and self-service analytics.</li><li>Develop advanced <strong>Power BI dashboards and reports</strong>, using <strong>DAX</strong> to create calculated measures and complex business logic.</li><li>Leverage <strong>Microsoft Fabric</strong> to unify data sources, streamline analytics, and support business intelligence initiatives.</li><li>Ensure data quality, governance, and security across all pipelines and reporting layers.</li><li>Collaborate with analysts, business stakeholders, and cross-functional teams to deliver clean, actionable datasets.</li><li>Monitor and optimize performance of data workflows to ensure scalability and reliability.</li><li>Provide mentorship on BI best practices, data modeling, and efficient DAX usage.</li></ul><p><br></p>
<p>We are looking for an experienced Data Engineer to lead the development and management of scalable data systems and analytics frameworks. This role is ideal for someone passionate about transforming data into actionable insights that drive business decisions. Based in Nutley, New Jersey, you will play a key role in supporting product, marketing, and operational strategies through robust data solutions.</p><p><br></p><p><strong>Responsibilities:</strong></p><p>• Design and implement scalable data pipelines and storage solutions to meet organizational needs.</p><p>• Monitor and analyze platform user behavior to uncover insights and identify opportunities for optimization.</p><p>• Build and maintain analytics dashboards and reporting frameworks for internal teams.</p><p>• Develop schemas, models, and data definitions to support core business operations.</p><p>• Oversee instrumentation of data collection across both backend and frontend systems.</p><p>• Provide ad hoc reporting and data support to enhance product and growth initiatives.</p><p>• Ensure data integrity, accuracy, and performance by implementing industry best practices.</p>
<p><strong>Data Engineer Opportunity – Build Impactful Solutions in a Mission-Driven Environment</strong></p><p><strong>Location:</strong> Des Moines, IA (On-site hybrid with flexible scheduling)</p><p><strong>Type:</strong> Full-Time | Direct Hire | Competitive Benefits</p><p><strong>Salary Range:</strong> $70K–$85K </p><p><br></p><p>Are you driven by innovation, collaboration, and the chance to make a meaningful impact through data engineering? This hands-on role puts you at the forefront of designing and implementing data pipelines, integrations, and dashboards that empower nonprofit organizations to achieve their goals—not just through data analysis, but by creating actionable solutions from scratch.</p><p><br></p><p>For immediate and confidential consideration, send a current resume to Kristen Lee on LinkedIn or apply directly to this posting today! </p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li><strong>Data Pipeline Integration:</strong> Develop and optimize workflows using tools such as Domo and Power BI.</li><li><strong>Dashboard Creation:</strong> Craft visually compelling dashboards tailored for actionable insights.</li><li><strong>Development & Automation:</strong> Implement intelligent solutions with SQL, Python, or other programming tools to automate and streamline data processes.</li><li><strong>Team Collaboration:</strong> Partner with data analytics experts to ensure efficient, behind-the-scenes delivery of high-quality data solutions.</li></ul><p><br></p><p><strong>Why You’ll Love This Role:</strong></p><ul><li><strong>Mission-Driven Impact:</strong> Your work will directly contribute to nonprofit success and meaningful causes.</li><li><strong>Room to Grow:</strong> Join a forward-thinking team that values innovation and offers opportunities to expand technical skills and scale processes.</li><li><strong>Collaborative Environment:</strong> Work alongside experienced professionals committed to creating and delivering actionable insights while cultivating your craft.</li></ul><p><br></p>
<p>We are looking for an experienced Senior Data Engineer to join our team in Oxford, Massachusetts. In this role, you will design and maintain data platforms, leveraging cutting-edge technologies to optimize processes and drive analytical insights. This position requires a strong background in Python development, cloud technologies, and big data tools. This role is hybrid, onsite 3 days a week. Candidates must be green card holders or U.S. citizens.</p><p><br></p><p>Responsibilities:</p><p>• Develop, implement, and maintain scalable data platforms to support business needs.</p><p>• Utilize Python and PySpark to design and optimize data workflows.</p><p>• Collaborate with cross-functional teams to integrate data solutions with existing systems.</p><p>• Leverage Snowflake and other cloud technologies to manage and store large datasets.</p><p>• Implement and refine algorithms for data processing and analytics.</p><p>• Work with Apache Spark and Hadoop to build robust data pipelines.</p><p>• Create APIs to enhance data accessibility and integration.</p><p>• Monitor and troubleshoot data platforms to ensure optimal performance.</p><p>• Stay updated on emerging trends in big data and cloud technologies to continuously improve solutions.</p><p>• Participate in technical discussions and provide expertise during team reviews.</p>
<p><strong>Robert Half </strong>is actively partnering with an Austin-based client to identify a<strong> Data Engineer (contract) </strong>with 5+ years of experience. In this role, you'll build and maintain scalable data pipelines and integrations that support analytics, applications, and machine learning. <strong>This is an on-site role in Austin, TX.</strong></p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and maintain batch and real-time data pipelines</li><li>Clean, validate, and standardize data across multiple sources</li><li>Build APIs and data integrations for internal systems</li><li>Collaborate with product, analytics, and engineering teams</li><li>Implement monitoring and automated testing for data quality</li><li>Support predictive model deployment and data streaming</li><li>Document processes and mentor entry-level engineers</li><li>Architect and manage cloud-based data infrastructure</li></ul>
We are in search of a Data Engineer to join our team in the Waste, Refuse & Environmental Waste Management industry, located in King of Prussia, Pennsylvania. As a Data Engineer, your central role will be to develop and implement data quality frameworks and manage Azure PaaS components. This is a long-term contract employment opportunity.<br><br>Responsibilities<br>• Develop and implement data quality frameworks using custom or SaaS tools.<br>• Operate with Microsoft SQL Server and Azure PaaS components such as Azure Data Factory, Databricks, and Azure Data Lake Analytics.<br>• Set up a business glossary within the overall data governance program.<br>• Create data lineage in Purview or similar tools.<br>• Manage Azure Purview, registering resources, setting up scans, adding glossary terms, lineages, and more.<br>• Maintain a strong knowledge of Role-Based Access Control (RBAC), Managed Identities, Purview, and Data Sensitivity Classification to ensure data security and compliance.<br>• Coordinate data pipelines utilizing tools like Airflow to ensure smooth data flow and effective processing from multiple sources to specific destinations.
We are looking for a skilled Sr. Software Engineer to join our dynamic team in New York, New York. This Contract-to-Permanent position offers an exciting opportunity to design and implement robust, cloud-based technology solutions that drive business success. Ideal candidates will have a passion for solving complex problems, optimizing data pipelines, and collaborating across departments to deliver impactful results.<br><br>Responsibilities:<br>• Design, develop, and manage scalable data engineering solutions using Agile methodologies.<br>• Facilitate seamless data exchange with external vendors and organizations, including integration of relevant systems.<br>• Collaborate closely with Data Analysts, Data Scientists, DBAs, and cross-functional teams to meet business objectives.<br>• Lead technical projects with a focus on efficiency, quality, scalability, and security.<br>• Continuously refine and enhance data pipelines to improve performance and reliability.<br>• Provide guidance to entry-level team members through code reviews and by promoting best practices.<br>• Implement data orchestration pipelines using tools like Argo or Airflow.<br>• Utilize analytic tools such as Python Pandas, R, Tableau, and Plotly to support data-driven decision-making.
<p>We're seeking a Data Engineer to take ownership of backend data processes and cloud integrations. This position plays a key role in designing and maintaining data pipelines using SQL Server, SSIS, and Azure tools to support analytics and reporting across the business.</p>
<p>Hands-On Technical SENIOR Microsoft Stack Data Engineer / On-Prem to Cloud Senior ETL Engineer - WEEKLY HYBRID position with major flexibility! FULL Microsoft on-prem stack.</p><p><br></p><p>LOCATION: HYBRID WEEKLY in Des Moines. You must reside in the Des Moines area for weekly onsite work. No travel back and forth, and not a remote position! If you live in Des Moines, you can eventually work MOSTLY remote! This position has upside with training in Azure.</p><p><br></p><p>IMMEDIATE HIRE! Solve real business problems.</p><p><br></p><p>We are seeking a hands-on technical SENIOR Microsoft Stack Data Engineer / Senior Data Warehouse Engineer / Senior ETL Developer / Azure Data Engineer (Direct Hire) who is looking to help modernize and build out a data warehouse, then lead and build out a data lake in the CLOUD - but FIRST, rebuild an on-prem data warehouse, working with disparate data to structure it for consumable reporting.</p><p><br></p><p>You will be doing all aspects of data engineering. You must have data warehouse and data lake skills, ETL skills like SSIS, and experience working with disparate data. SSAS is a plus! Fact and dimension data warehouse experience is required. You will be in the technical weeds day to day, but you could grow into the technical leader of this team.</p><p><br></p><p>This is a permanent, Direct Hire, hands-on Technical Manager of Data Engineering position with one of our clients in Des Moines, up to $155K plus bonus.</p><p><br></p><p>PERKS: Bonus, 2 1/2-day weekends!</p>
<p>We are looking for a skilled Data Engineer to join our team in Toms River, New Jersey. In this role, you will design, manage, and optimize data pipelines and repositories, ensuring reliable and accurate data flow across various platforms. You will also collaborate with cross-functional teams to deliver actionable insights and support business intelligence initiatives.</p><p><br></p><p><strong>Responsibilities:</strong></p><p>• Architect and manage data systems, including data lakes and repositories, ensuring optimal performance and reliability.</p><p>• Monitor and maintain daily data pipelines in Azure, addressing any issues to ensure seamless operations.</p><p>• Develop and enhance data pipelines using Python, Azure Functions, Logic Apps, and Synapse, while integrating new platforms when necessary.</p><p>• Oversee the accuracy and timeliness of data loads into BI models, reconciling discrepancies between Power BI and source reports.</p><p>• Create and refine Power BI dashboards and models to support operational and strategic reporting needs.</p><p>• Administer Power BI workspaces, managing licensing, access permissions, and user security.</p><p>• Provide technical support to Power BI users, assisting with troubleshooting and new dashboard development.</p><p>• Partner with stakeholders to scope and deliver analyses, addressing both recurring and ad hoc business needs.</p><p>• Support financial reporting and performance analysis for Finance and Accounting teams.</p><p>• Build a comprehensive understanding of business processes to deliver valuable strategic insights.</p>
<p><strong>Skills and Knowledge:</strong></p><ul><li>Excellent understanding of Relational Database Design</li><li>Strong technical experience in database development, performing DDL operations, writing queries and stored procedures, and optimizing database objects</li><li>Working knowledge of ETL using SSIS or a comparable tool</li><li>Solid reporting skills, preferably using SSRS</li><li>Establishment and implementation of reporting tools to create reports and dashboards using SSRS, Power BI, Tableau, or a similar analytics tool</li><li>Experience with data management standards such as data governance is a plus</li><li>Effective analyst able to work closely with non-technical users</li><li>Knowledge of MS Access, MS Visual Studio, Crystal Reports, or Datawatch Monarch is a plus</li><li>Proficiency in interpersonal communication, presentation, and problem solving</li></ul><p><br></p>
<p>We are looking for a highly skilled Data Engineering and Software Engineering professional to design, build, and optimize our Data Lake and Data Processing platform on AWS. This role requires deep expertise in data architecture, cloud computing, and software development, as well as the ability to define and implement strategies for deployment, testing, and production workflows.</p><p><br></p><p>Key Responsibilities:</p><ul><li>Design and develop a scalable Data Lake and data processing platform from the ground up on AWS.</li><li>Lead decision-making and provide guidance on code deployment, testing strategies, and production environment workflows.</li><li>Define the roadmap for Data Lake development, ensuring efficient data storage and processing.</li><li>Oversee S3 data storage, Delta.io for change data capture, and AWS data processing services.</li><li>Work with Python and PySpark to process large-scale data efficiently.</li><li>Implement and manage Lambda, Glue, Kafka, and Firehose for seamless data integration and processing.</li><li>Collaborate with stakeholders to align technical strategies with business objectives, while maintaining a hands-on engineering focus.</li><li>Drive innovation and cost optimization in data architecture and cloud infrastructure.</li><li>Provide expertise in data warehousing and transitioning into modern AWS-based data processing practices.</li></ul>
Responsibilities<br>• Develop and maintain high-performance SQL queries and stored procedures in SQL Server, DB2, and Azure SQL<br>• Design, optimize, and automate ETL processes for operational and logistics data<br>• Build integrations using RESTful APIs and structured data transfers between systems<br>• Create and manage Tableau dashboards to support operational insights and executive reporting<br>• Support TruckMate TMS data extractions and integrations across departments<br>• Manage and monitor automated workflows via GoAnywhere MFT<br>• Work with Salesforce data for syncing, reporting, and operational use<br>• Collaborate with operations, dispatch, and analytics teams to deliver data solutions<br>• Proactively improve SQL performance, eliminate bottlenecks, and ensure data accuracy<br>• Document technical processes, schema changes, and data flow logic<br><br>Requirements<br>• 4–5+ years of hands-on experience in a Data Engineer or similar data-heavy role<br>• Expert-level SQL skills, including fluency with:<br>o Window functions<br>o CTEs<br>o Nested subqueries<br>o Index tuning<br>o Performance optimization<br>o Complex joins and aggregations<br>• Direct experience with Microsoft SQL Server<br>• Experience with RESTful API development/integration (JSON, tokens, authentication, etc.)<br>• Proficiency in Tableau for building dashboards and visualizing operational data<br>• Experience working with TruckMate or similar transportation management systems<br>• Familiarity with GoAnywhere or other MFT platforms<br>• Working knowledge of Salesforce data structures and integration points<br>• Strong communication and documentation skills<br>• Ability to work independently and manage priorities in a production environment<br><br>Preferred Qualifications<br>• Background in logistics, transportation, or supply chain<br>• Experience building data pipelines that support real-time operations<br>• Comfortable troubleshooting across multiple database platforms
We are looking for an experienced Data Engineer to join our dynamic team in Wyoming, Michigan, for a Contract-to-Permanent position. In this role, you will play a key part in designing and managing data systems, developing data pipelines, and ensuring optimal data governance practices across multi-cloud environments. This position offers an exciting opportunity to contribute to cutting-edge healthcare data solutions while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Design and implement robust data architecture frameworks, including modeling, metadata management, and database security.<br>• Create and maintain scalable data models that support both operational and analytical needs.<br>• Develop and manage data pipelines to extract, transform, and load data from diverse sources into a centralized data warehouse.<br>• Collaborate with various departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, ensuring reliability and efficiency.<br>• Implement and enforce data governance policies, including data retention, backup, and security protocols.<br>• Stay updated on emerging technologies in data engineering, such as AI tools and cloud-based solutions, and integrate them into existing systems.<br>• Establish and track key performance indicators (KPIs) to measure the effectiveness of data systems.<br>• Provide mentorship and technical guidance to team members to foster a collaborative work environment.<br>• Evaluate and adopt new tools and technologies to enhance data capabilities and streamline processes.
We are seeking a Data Engineer to join our team based in Bethesda, Maryland. As part of our Investment Management team, you will play a crucial role in designing and maintaining data pipelines in our Azure Data Lake, implementing data warehousing strategies, and collaborating with various teams to address data engineering needs.<br><br>Responsibilities:<br><br>• Design robust data pipelines within Azure Data Lake to support our investment management operations.<br>• Implement effective data warehousing strategies that ensure efficient storage and retrieval of data.<br>• Collaborate with Power BI developers to integrate data reporting seamlessly and effectively.<br>• Conduct data validation and audits to uphold the accuracy and quality of our data pipelines.<br>• Troubleshoot pipeline processes and optimize them for improved performance.<br>• Work cross-functionally with different teams to address and fulfill data engineering needs with a focus on scalability and reliability.<br>• Utilize Apache Kafka, Apache Pig, Apache Spark, and other big data technologies for efficient data processing and algorithm implementation.<br>• Develop APIs and use AWS technologies to ensure seamless data flow and analytics.<br>• Leverage Apache Hadoop for effective data management and analytics.