We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.
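For candidates less familiar with the Medallion pattern named above, a minimal illustration of Bronze/Silver/Gold layering, with plain Python standing in for Spark/Databricks; all record and function names here are hypothetical:

```python
# Illustrative sketch of Medallion (Bronze/Silver/Gold) layering in plain
# Python; production pipelines would use Spark/Databricks. Names hypothetical.

raw_events = [  # Bronze: raw records as ingested, warts and all
    {"order_id": "1", "amount": "19.99", "region": "TX"},
    {"order_id": "2", "amount": "bad-value", "region": "TX"},
    {"order_id": "3", "amount": "5.00", "region": "OK"},
]

def to_silver(bronze):
    """Silver: validate and conform types, dropping unparsable rows."""
    silver = []
    for row in bronze:
        try:
            silver.append({"order_id": int(row["order_id"]),
                           "amount": float(row["amount"]),
                           "region": row["region"]})
        except ValueError:
            continue  # in practice, route bad rows to a quarantine table
    return silver

def to_gold(silver):
    """Gold: business-level aggregate, e.g. revenue per region."""
    totals = {}
    for row in silver:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

gold = to_gold(to_silver(raw_events))
print(gold)  # {'TX': 19.99, 'OK': 5.0}
```

The point of the layering is that each stage has one job: Bronze preserves the source, Silver enforces types and quality, Gold serves the business question.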
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
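As background for the indexing and tuning duties above, a small sketch of how adding an index changes a query plan; SQLite is used purely for illustration, and the table and index names are made up:

```python
# Hedged sketch: how an index changes a query plan, shown with SQLite's
# EXPLAIN QUERY PLAN. Table and index names are illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(1000)])

# Without an index, the filter forces a full table scan.
scan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
print(scan[0][-1])  # e.g. "SCAN orders"

# With an index on the filtered column, the planner switches to an index search.
con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
search = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 42").fetchall()
print(search[0][-1])  # e.g. "SEARCH orders USING INDEX idx_orders_customer ..."
```

The same scan-versus-seek reasoning applies in SQL Server, PostgreSQL, and Oracle, though each exposes its plan through different tooling.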
<p>We are looking for a Data Engineer to join a team focused on building reliable, scalable data solutions. In this role, you will create and enhance cloud-based data pipelines, organize data for analytics, and help ensure that business teams have access to trusted information. This position also partners closely with technical and non-technical stakeholders to turn reporting and data needs into practical engineering outcomes.</p><p><br></p><p>Responsibilities:</p><p>• Create and support scalable data ingestion and transformation workflows using Azure Data Factory, Databricks, and PySpark.</p><p>• Connect and consolidate data from enterprise platforms, operational databases, telematics feeds, APIs, and other internal or external sources.</p><p>• Structure and manage data within Azure Data Lake and lakehouse environments to support performance, accessibility, and long-term maintainability.</p><p>• Design curated datasets, data models, and schemas that improve usability for analytics, business intelligence, and downstream reporting.</p><p>• Apply governance and lineage practices through Unity Catalog while promoting strong data quality, consistency, and security standards.</p><p>• Work with business stakeholders and cross-functional teams to gather requirements, define technical specifications, and deliver data solutions aligned with operational needs.</p><p>• Improve pipeline stability and efficiency by troubleshooting failures, resolving performance issues, and refining storage and query strategies.</p><p>• Support Power BI reporting by preparing datasets, assisting with model improvements, and helping maintain reporting standards and governance practices.</p><p>• Use GitHub-based development practices for version control, peer review, CI/CD, and disciplined deployment processes.</p><p>• Mentor less-experienced engineers and contribute to a collaborative environment focused on continuous improvement and dependable delivery.</p>
We are looking for a skilled Data Engineer to join our team in Carmel, Indiana. In this long-term contract role, you will design, build, and optimize data pipelines and systems to support business needs. The ideal candidate will bring expertise in data engineering tools and frameworks, along with a passion for solving complex challenges.<br><br>Responsibilities:<br>• Develop and maintain robust data pipelines using modern frameworks and tools.<br>• Implement ETL processes to ensure accurate and efficient data transformation.<br>• Optimize data storage and retrieval systems for performance and scalability.<br>• Collaborate with cross-functional teams to understand data requirements and deliver solutions.<br>• Utilize Apache Spark and Hadoop for large-scale data processing.<br>• Work with Databricks to streamline data workflows and enhance analytics.<br>• Apply machine learning techniques using tools like scikit-learn and Pandas.<br>• Integrate Kafka for real-time data streaming and processing.<br>• Analyze and troubleshoot data-related issues to ensure system reliability.<br>• Document processes and workflows to support future development and maintenance.
<p><strong>Data Scientist (Big Data) III – Contractor</strong></p><p><strong>Employment Type:</strong> 27-Week Contract, Potential for Extension or Conversion</p><p><strong>Location:</strong> Must currently reside in the Philadelphia region</p><p><strong>Pay:</strong> Available on W2</p><p><strong>Position Overview</strong></p><p>The Senior Data Scientist (Big Data) will support large‑scale data science initiatives by designing, developing, and deploying advanced analytical and machine learning solutions. This role collaborates closely with data engineers, analysts, software developers, and business stakeholders to deliver scalable, production‑ready data products that drive data‑informed decision making.</p><p>The successful candidate will apply statistical modeling, machine learning, and big data technologies to solve complex business problems, while also providing technical guidance and mentorship across project teams.</p><p><strong>Key Responsibilities</strong></p><ul><li>Lead complex, cross‑functional data science initiatives delivering solutions across multiple technologies and platforms.</li><li>Design, develop, and deploy data mining, statistical, machine learning, and graph‑based algorithms for large‑scale data sets.</li><li>Partner with data engineering teams to ensure proper implementation, performance, and operational use of analytical solutions.</li><li>Review and assess data science programs and models at an enterprise level to evaluate suitability, performance, and scalability.</li><li>Build and maintain scalable big‑data analytics solutions supporting accurate targeting, forecasting, and advanced insights.</li><li>Develop and support end‑to‑end machine learning pipelines, including data preparation, training, testing, validation, and deployment.</li><li>Establish performance metrics, monitoring, and evaluation procedures for models in production.</li><li>Translate complex analytical findings 
into clear, actionable insights for technical and non‑technical stakeholders.</li><li>Provide mentorship and technical guidance to junior team members.</li><li>Contribute to data strategy, methodology selection, and continuous improvement of analytics capabilities.</li><li>Support testing, validation, and user acceptance activities to ensure alignment with business requirements.</li><li>Perform additional related duties as needed to support analytics and data initiatives.</li></ul>
<p>We are looking for an experienced Data Engineer to design and support data exchange solutions that connect external business partners with internal systems. This role is primarily remote, supporting multiple office locations; candidates must live in North Carolina, within two hours of Greensboro. The role focuses on building reliable integration processes, transforming structured files and API-based data, and ensuring critical information is available for reporting and operational use. The ideal candidate brings strong technical depth in data movement and troubleshooting, along with a practical understanding of manufacturing and supply chain workflows.</p><p><br></p><p>Responsibilities:</p><p>• Build and maintain business-to-business data interfaces that onboard new partner organizations and align incoming data with internal database structures.</p><p>• Develop automated workflows that ingest, transform, validate, and deliver data using file-based exchanges, APIs, and structured transaction formats such as EDI and X12.</p><p>• Configure and manage end-to-end integration processes across system interfaces, including flat-file handling, file sharing, and reporting-related data movement.</p><p>• Lead data transformation efforts through the full lifecycle by designing solutions, testing functionality, deploying processes, and stabilizing production performance.</p><p>• Investigate integration failures or data quality issues, identify root causes, and implement corrective actions to restore reliable processing.</p><p>• Partner with business intelligence and reporting teams to provide access to accurate, usable data sources that support analysis and operational decision-making.</p><p>• Apply manufacturing and supply chain process knowledge to structure data flows that support purchasing, components, orders, and assembly-related transactions.</p><p>• Use available tools and platforms to execute integration projects independently, including extracting data 
from enterprise applications and translating it into usable formats.</p><p>• Create scalable data pipelines that enable customer and order transactions to move through systems with minimal manual intervention.</p>
<p>We are looking for an experienced Data Engineer to join our team in Cleveland, Ohio. In this role, you will design, implement, and optimize data solutions that support business intelligence and analytics needs. If you have a passion for working with cutting-edge technologies and thrive in a fast-paced environment, this opportunity is for you.</p><p><br></p><p>Responsibilities:</p><p>• Develop and refine data models to ensure optimal performance and scalability.</p><p>• Design and implement data warehouse solutions for managing structured and unstructured data.</p><p>• Create and maintain data integration processes to support analytics and data-driven applications.</p><p>• Establish robust data quality and validation protocols to guarantee accuracy and consistency.</p><p>• Collaborate with business intelligence teams and stakeholders to gather requirements and deliver tailored solutions.</p><p>• Monitor and address issues within data pipelines, including performance bottlenecks and system errors.</p><p>• Research and adopt emerging technologies and best practices to enhance data engineering capabilities.</p>
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This Contract to permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
We are looking for a Data Engineer to strengthen our data and analytics capabilities in West Chester, Pennsylvania. This role will shape reliable data architecture, support enterprise reporting, and help turn complex information into practical business insight. The position is ideal for someone who enjoys building scalable data solutions, improving performance, and working across Microsoft-based data technologies.<br><br>Responsibilities:<br>• Design and support enterprise data solutions that enable dependable analytics, reporting, and operational decision-making.<br>• Build, optimize, and maintain database structures and data processing workflows using SQL Server, Azure SQL Database, and T-SQL.<br>• Develop and enhance SSIS packages and related data pipelines to ensure accurate, timely, and efficient movement of information across systems.<br>• Create scalable datasets and reporting foundations that support Power BI dashboards and broader business intelligence needs.<br>• Monitor data platform performance, troubleshoot issues, and implement improvements that increase stability, security, and efficiency.<br>• Partner with business and technical stakeholders to translate reporting and analytics goals into practical data engineering solutions.<br>• Lead efforts to move legacy SQL Server workloads into Azure-based services while maintaining data integrity and minimizing disruption.<br>• Establish standards and best practices for data quality, documentation, and ongoing platform maintenance.
<ul><li>Design, develop, and optimize data pipelines using Azure Data Services (Azure Data Factory, Azure Data Lake Storage, Azure Synapse).</li><li>Build and maintain scalable ETL/ELT workflows using Databricks (Spark, PySpark, Delta Lake).</li><li>Implement and manage data orchestration and dependency management using Dagster or similar tools.</li><li>Partner with analytics, data science, and product teams to ensure reliable, high-quality data availability.</li><li>Optimize data models and storage strategies for performance, scalability, and cost efficiency.</li><li>Ensure data quality, observability, and reliability through monitoring, logging, and automated validation.</li><li>Support CI/CD pipelines and infrastructure-as-code practices for data platforms.</li><li>Enforce data security, governance, and compliance best practices within Azure.</li></ul>
<p><strong>Data Engineer (Python / AWS)</strong></p><p><strong>Location:</strong> Remote (Northeast / Greater Boston area preferred)</p><p><strong>Type:</strong> Full-Time</p><p><strong>Level:</strong> Mid-to-Senior Individual Contributor</p><p><strong>About the Role</strong></p><p>We are looking for a strong individual contributor who excels in the Python data ecosystem and enjoys building reliable, scalable data pipelines. This role sits within a data engineering group responsible for integrating large volumes of data from external partners and transforming it into usable datasets for internal teams. You’ll work with modern cloud tools while also helping our team gradually transition away from a legacy platform.</p><p>This position is ideal for someone who wants to stay hands-on, focus on technical execution, and remain in an IC role for the next several years. We’re not looking for someone who is aiming to move immediately into architecture or leadership.</p><p>This team is fully distributed, and although candidates in the Boston area can go into the office, the rest of the group is remote. 
Anyone local may occasionally sit with other teams when on site.</p><p><br></p><p><strong>What You’ll Do</strong></p><ul><li>Build and maintain ETL pipelines that ingest, clean, and aggregate data received from external vendors and large enterprise partners.</li><li>Develop Python‑based data processing workflows deployed on AWS cloud services.</li><li>Work with tools such as AWS Glue, Airflow, dbt, and PySpark to support data transformations and pipeline orchestration.</li><li>Help modernize existing workflows and assist in the gradual migration away from a legacy data system.</li><li>Collaborate with internal stakeholders to understand data needs, define requirements, and ensure reliable integration of partner data feeds.</li><li>Troubleshoot pipeline issues, optimize performance, and improve overall system stability.</li><li>Contribute to best practices around code quality, testing, documentation, and data governance.</li></ul><p><br></p>
<p>We are supporting our client in hiring a Product Data Engineer who will take full ownership of their product information environment. This role centers on managing their PIM solution (Salsify), improving data structures, and building automated, API‑driven integrations that ensure product data is clean, scalable, and synchronized across platforms.</p><p>This position will be deeply involved in a major product‑data overhaul, including cleanup, restructuring, and long‑term system improvements. The ideal candidate is someone who enjoys solving data problems, building automated workflows, and improving the reliability of product information across systems.</p><p><br></p><p> Key Responsibilities</p><p>Product Data Platform Ownership</p><ul><li>Act as the primary administrator for the PIM platform</li><li>Define and maintain product attributes, hierarchies, and data relationships</li><li>Create validation rules, formulas, and workflows to enforce data standards</li><li>Manage permissions, governance, and platform configuration</li><li>Troubleshoot issues related to imports, exports, and publishing</li></ul><p>Integrations & Automation</p><ul><li>Manage integrations between the PIM and internal/external systems (eCommerce, retail, etc.)</li><li>Build and support API‑based data flows with a focus on reliability and scale</li><li>Develop automation using scripting (Python preferred)</li><li>Support event‑driven or automated pipelines to reduce manual work</li><li>Monitor integration performance and proactively resolve failures</li></ul><p>Product Data Improvements</p><ul><li>Contribute to a large‑scale product data cleanup and restructuring effort</li><li>Identify gaps in current data models and workflows</li><li>Partner with cross‑functional teams to define scalable data standards</li><li>Improve system design to support long‑term growth</li></ul><p>Channel Syndication</p><ul><li>Manage product data distribution to digital and retail channels</li><li>Ensure data meets 
channel‑specific requirements</li><li>Troubleshoot publishing issues and improve success rates</li><li>Support product launches and updates across channels</li></ul><p>Data Governance & Quality</p><ul><li>Establish naming conventions, validation rules, and governance standards</li><li>Define and track data quality KPIs (accuracy, completeness, timeliness)</li><li>Utilize or support data governance tools</li><li>Work with business teams to improve data accountability</li></ul><p>Reporting & Metrics</p><ul><li>Build dashboards and reports on data quality and system performance</li><li>Provide insights to leadership to support decision‑making</li><li>Track syndication outcomes and operational metrics</li></ul><p>Operational Support</p><ul><li>Handle day‑to‑day platform usage, enhancements, and issue resolution</li><li>Prioritize incoming requests and tickets</li><li>Ensure stability and reliability of product data operations</li></ul><p><br></p>
<p>A manufacturing and distribution company is looking for a Data Engineer with 3+ years of experience to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will play a crucial part in designing, developing, optimizing, and maintaining the data infrastructure that supports analytics, business intelligence, and data-driven decision making using Snowflake, Matillion, and other tools. The position will be in-office to work closely with the team. No 3rd parties please.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization.</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
<p>Our client is looking for an experienced Data Governance Analyst to join their growing team. They need someone who can:</p><ul><li>Lead the development and implementation of data governance frameworks to support academic, administrative, and research data needs across the university system.</li><li>Establish data stewardship roles and clarify data ownership for key institutional domains such as student information, financial aid, HR, research compliance, and finance.</li><li>Create and enforce data policies, standards, and procedures to improve data quality, accuracy, accessibility, and security across campuses and departments.</li><li>Ensure compliance with higher-ed regulatory and reporting requirements (e.g., FERPA, IPEDS, NCAA, state reporting), and coordinate with Legal, IT Security, and Institutional Compliance teams.</li><li>Implement and optimize governance technology (data catalog, lineage, and quality tools) to support system-wide reporting, analytics, and decision support.</li><li>Promote data literacy and provide training to faculty, staff, and administrators to enhance responsible and effective data use.</li><li>Facilitate collaboration across academic units, administrative offices, and central IT to align governance efforts with institutional priorities and operational needs.</li><li>Monitor data quality and governance KPIs, report progress to leadership, and drive continuous improvement to support strategic planning, accreditation, and institutional research initiatives.</li></ul><p>Experience as a Data Governance Analyst is required. The client has a fragmented data governance framework in place, and the goal is for this person to unify it across the enterprise. The ideal candidate will be a Data Governance Analyst looking for a more challenging opportunity to lead the implementation of Purview and advance the organization's data governance practices. Administration experience with Microsoft Purview or a similar tool such as Collibra, Informatica, or Databricks is expected, and this role will assist in connecting Microsoft Fabric to Purview. Experience with Microsoft Purview is preferred. The Data Security layer of Purview is already implemented; this role will work with the Microsoft partner to implement the Data Governance layer (Unified Data Catalog, Data Quality, Data Lineage, Data Health Management). See attached overview. Excellent communication skills are essential, along with the ability to lead change, advance the data governance practice, and get buy-in from stakeholders.</p>
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
<p>We are looking for an experienced and detail-oriented Senior Data Engineer to join our team on a long-term contract basis. In this role, you will focus on identifying and resolving data integrity issues across enterprise systems to ensure the accuracy and reliability of critical data. This position is based in Cleveland, Ohio, and requires hands-on expertise with data analysis, remediation, and automation tools.</p><p><br></p><p>Responsibilities:</p><p>• Investigate and analyze data inconsistencies and errors across enterprise systems.</p><p>• Perform root cause analysis to identify the source of data integrity issues.</p><p>• Develop and execute scripts to remediate corrupted, missing, or misaligned data.</p><p>• Collaborate with integration and platform teams to implement preventative measures for recurring data problems.</p><p>• Design and implement data validation processes, monitoring systems, and quality controls.</p><p>• Utilize AI-assisted analytics to enhance anomaly detection and streamline remediation workflows.</p><p>• Write and optimize complex SQL queries to support data reconciliation efforts.</p><p>• Contribute to long-term improvements in enterprise data quality and processes.</p><p>• Work directly with production data to ensure accuracy and reliability.</p>
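By way of illustration, the reconciliation queries mentioned above often take the shape of a left-join comparison like this sketch; SQLite stands in for the enterprise systems here, and the schemas are hypothetical:

```python
# Illustrative reconciliation query: find records present in a source system
# but missing or drifted in a target system. Schemas are hypothetical;
# SQLite stands in for the enterprise databases.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source_accounts (id INTEGER PRIMARY KEY, balance REAL);
    CREATE TABLE target_accounts (id INTEGER PRIMARY KEY, balance REAL);
    INSERT INTO source_accounts VALUES (1, 100.0), (2, 250.0), (3, 75.0);
    INSERT INTO target_accounts VALUES (1, 100.0), (2, 999.0);  -- 2 drifted, 3 missing
""")

discrepancies = con.execute("""
    SELECT s.id,
           CASE WHEN t.id IS NULL THEN 'missing' ELSE 'mismatch' END AS issue
    FROM source_accounts s
    LEFT JOIN target_accounts t ON t.id = s.id
    WHERE t.id IS NULL OR t.balance <> s.balance
    ORDER BY s.id
""").fetchall()
print(discrepancies)  # [(2, 'mismatch'), (3, 'missing')]
```

The same anti-join pattern scales up with indexing on the join keys, and the discrepancy output typically feeds the remediation scripts described above.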
<p>The Database Engineer will design, develop, and maintain database solutions that meet the needs of our business and clients. You will be responsible for ensuring the performance, availability, and security of our database systems while collaborating with software engineers, data analysts, and IT teams.</p><p> </p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, implement, and maintain highly available and scalable database systems (e.g., SQL, NoSQL).</li><li>Optimize database performance through indexing, query optimization, and capacity planning.</li><li>Create and manage database schemas, tables, stored procedures, and triggers.</li><li>Develop and maintain ETL (Extract, Transform, Load) processes for data integration.</li><li>Ensure data integrity and consistency across distributed systems.</li><li>Monitor database performance and troubleshoot issues to ensure minimal downtime.</li><li>Collaborate with software development teams to design database architectures that align with application requirements.</li><li>Implement data security best practices, including encryption, backups, and access controls.</li><li>Stay updated on emerging database technologies and recommend solutions to enhance efficiency.</li><li>Document database configurations, processes, and best practices for internal knowledge sharing.</li></ul><p> </p>
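As a small illustration of the schema-level integrity work mentioned above, a trigger can reject bad writes at the database layer; this sketch uses SQLite syntax, and the table and trigger names are illustrative:

```python
# Hedged sketch: enforcing integrity with a BEFORE UPDATE trigger that
# rejects negative quantities. SQLite syntax; names are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE inventory (sku TEXT PRIMARY KEY, qty INTEGER NOT NULL);
    CREATE TRIGGER trg_inventory_qty
    BEFORE UPDATE ON inventory
    FOR EACH ROW WHEN NEW.qty < 0
    BEGIN
        SELECT RAISE(ABORT, 'qty cannot go negative');
    END;
""")
con.execute("INSERT INTO inventory VALUES ('ABC-1', 10)")

try:
    # This would drive qty to -15, so the trigger aborts the statement.
    con.execute("UPDATE inventory SET qty = qty - 25 WHERE sku = 'ABC-1'")
except sqlite3.IntegrityError as exc:
    print(exc)  # qty cannot go negative

qty = con.execute("SELECT qty FROM inventory WHERE sku = 'ABC-1'").fetchone()[0]
print(qty)  # 10 -- the bad update never took effect
```

Server-side engines like SQL Server or PostgreSQL express the same guard with their own trigger or CHECK-constraint syntax.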
We are looking for a skilled Data Engineer to join our team in Tampa, Florida. This is a Contract to permanent position, offering an excellent opportunity to contribute to innovative business intelligence solutions while advancing your career. The ideal candidate will have a strong background in data engineering, database design, and analytics, with the ability to solve complex problems and deliver high-quality results.<br><br>Responsibilities:<br>• Design and implement robust business intelligence solutions tailored to meet organizational needs.<br>• Collaborate with stakeholders to gather user requirements and translate them into technical and functional specifications.<br>• Create and maintain databases and data marts that support analytics and reporting activities.<br>• Develop and optimize ETL processes to efficiently load data into data marts.<br>• Monitor and ensure the accuracy, consistency, and quality of data within databases and reporting systems.<br>• Recommend and implement governance practices to improve self-service BI and analytics capabilities.<br>• Develop automated data validation checks to maintain data integrity and accuracy.<br>• Utilize dimensional modeling and star/snowflake schemas to design effective data warehouses.<br>• Troubleshoot and debug issues across application and database layers to ensure smooth operations.<br>• Perform exploratory data analysis to identify trends, anomalies, and areas for improvement.
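To illustrate the dimensional modeling described above, a minimal star-schema sketch: one fact table keyed to dimension tables, queried with the typical fact-to-dimension joins. SQLite stands in for the warehouse, and all table and column names are hypothetical:

```python
# Minimal star-schema sketch: a fact table joined to its dimensions.
# SQLite stands in for the data warehouse; names are illustrative.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_date (date_key INT PRIMARY KEY, year INT, month INT);
    CREATE TABLE dim_product (product_key INT PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (
        date_key INT REFERENCES dim_date(date_key),
        product_key INT REFERENCES dim_product(product_key),
        revenue REAL
    );
    INSERT INTO dim_date VALUES (20240101, 2024, 1), (20240201, 2024, 2);
    INSERT INTO dim_product VALUES (1, 'widgets'), (2, 'gadgets');
    INSERT INTO fact_sales VALUES
        (20240101, 1, 100.0), (20240101, 2, 50.0), (20240201, 1, 75.0);
""")

# The canonical star-schema query: filter/group by dimension attributes,
# aggregate the fact measure.
rows = con.execute("""
    SELECT d.month, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.month, p.category
    ORDER BY d.month, p.category
""").fetchall()
print(rows)  # [(1, 'gadgets', 50.0), (1, 'widgets', 100.0), (2, 'widgets', 75.0)]
```

A snowflake variant would further normalize the dimensions (for example, splitting product category into its own table) at the cost of extra joins.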
A top-tier client of ours is seeking a Software Developer / Data Engineer to play a key role in supporting mission-critical data systems within a government intelligence environment. You’ll design and deliver high-performance data pipelines and architectures that drive advanced analytics and real-time insights.<br><br>Key Responsibilities:<br>• Design and implement scalable data pipelines and data architectures.<br>• Develop and optimize data storage solutions (SQL, NoSQL, graph databases).<br>• Support ETL processes and ensure efficient data throughput and performance.<br>• Work closely with stakeholders to translate data requirements into technical solutions.<br>• Maintain and enhance data infrastructure using tools like Apache Airflow and Docker.
We are looking for an experienced Principal Software Engineer to design, develop, and optimize large-scale systems while ensuring high availability and performance. This role requires expertise in cloud-based platforms and distributed architectures, along with a commitment to secure coding practices and innovative problem-solving. Based in Bowie, Maryland, this position offers an exciting opportunity to contribute to cutting-edge software solutions.<br><br>Responsibilities:<br>• Develop and maintain large-scale, always-on data systems using Kotlin/Java, C#, and JavaScript.<br>• Design and implement distributed systems and high-availability architectures on cloud-based platforms.<br>• Utilize Infrastructure as Code to manage both managed and unmanaged services effectively.<br>• Optimize performance, conduct profiling, and execute tuning for complex systems to ensure efficiency.<br>• Build and maintain large data warehouse systems such as Snowflake or BigQuery.<br>• Implement DevOps practices, including the development and management of CI/CD pipelines.<br>• Ensure adherence to security best practices and secure coding standards across projects.<br>• Engineer software solutions capable of processing and managing extensive volumes of data.<br>• Collaborate with cross-functional teams to understand and adapt to new problem spaces.<br>• Communicate technical concepts effectively to diverse audiences, both in writing and verbally.
<p>We are seeking a highly skilled Full Stack Data Engineer who thrives in building modern, scalable data platforms from the ground up. This is an opportunity to work on a cloud-native data stack, influence architecture decisions, and deliver solutions that directly power business insights and operations.</p><p>If you enjoy owning the full lifecycle—from data ingestion to application layer—this role will be a strong fit.</p><p><br></p><p><strong>What You’ll Do</strong></p><p>You will operate as a hands-on engineer across the full data stack:</p><ul><li>Design, build, and maintain scalable ELT pipelines and workflows</li><li>Develop and optimize data models and warehouse structures in Snowflake</li><li>Build full stack data applications and backend services</li><li>Write clean, efficient Python and SQL code</li><li>Develop reusable data frameworks and components</li><li>Implement automated testing for data quality and reliability</li><li>Build and maintain CI/CD pipelines (GitHub-based)</li><li>Create reporting and visualization solutions (Power BI or similar)</li><li>Monitor production systems and troubleshoot data issues proactively</li></ul><p><strong>Tech Stack</strong></p><ul><li>Data Platform: Snowflake</li><li>Languages: Python, SQL</li><li>Cloud: AWS / Azure / GCP (environment dependent)</li><li>DevOps: GitHub, CI/CD pipelines</li><li>Visualization: Power BI (or similar BI tools)</li></ul>
<p><strong>Software Engineer (Databricks/Data Platform)</strong></p><p><strong>Hybrid 3-4 days onsite in Alpharetta, GA</strong></p><p><strong>Duration through 10/30/26</strong></p><p><br></p><p>We are looking for an experienced Software Engineer III to join our team in Alpharetta, GA. In this role, you will play a critical part in supporting and developing a Databricks-based data platform, focusing on creating scalable and efficient solutions during the development phase. This is a long-term contract position, requiring in-office work three to four days per week.</p><p><br></p><p>Responsibilities:</p><ul><li>Develop and support Databricks notebooks, jobs, and workflows</li><li>Write, optimize, and maintain PySpark and Python code for data processing</li><li>Help design scalable, reliable, and efficient data pipelines</li><li>Apply Spark best practices (partitioning, caching, joins, file sizing)</li><li>Work with Delta Lake tables and data models</li><li>Perform data validation and quality checks during development</li><li>Support cluster configuration and sizing for development workloads</li><li>Identify performance bottlenecks early and recommend improvements</li><li>Collaborate with Data Engineers to ensure solutions are production-ready</li><li>Document development standards, patterns, and best practices</li></ul>
<p><strong>AWS Infrastructure Engineer</strong></p><p><strong>13 Week Contract to Hire</strong></p><p><strong>Onsite Hybrid: </strong>Columbus, OH or Dallas, TX or Minneapolis, MN</p><p><strong>Pay: </strong>Available on W2</p><p><strong>Job Summary</strong></p><p>We are seeking an experienced <strong>Platform Engineer</strong> to join a growing Platform Engineering team responsible for supporting and evolving a modern <strong>Data Science platform</strong>. This role focuses on building, managing, and securing cloud-based infrastructure that enables Data Science and AI/ML teams to operate efficiently at scale. The ideal candidate brings strong AWS expertise, hands-on infrastructure automation experience, and the ability to collaborate across technical and business teams.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Support and maintain ongoing <strong>Data Science infrastructure operations</strong></li><li>Design, build, and deploy <strong>AWS environments</strong> using automated <strong>CI/CD pipelines</strong></li><li>Manage and scale large, secure cloud environments to support current and future Data Science initiatives</li><li>Implement, own, and improve the <strong>image management lifecycle process</strong></li><li>Assist with the setup and ongoing management of <strong>AWS accounts</strong> dedicated to the Data Science platform</li><li>Develop and maintain infrastructure pipelines using <strong>CI/CD tools</strong> (e.g., Azure DevOps)</li><li>Build and manage environments using <strong>Infrastructure as Code (IaC)</strong> tools such as <strong>Terraform</strong></li><li>Develop scripts and applications using programming languages such as <strong>Python</strong></li><li>Manage and support database technologies including <strong>Athena, Oracle, MySQL, and PostgreSQL</strong></li><li>Leverage AWS services to enable <strong>Data Lake, Data Science, and AI/ML workloads</strong></li><li>Respond to requests from development and business users, removing technical roadblocks</li><li>Manage secured infrastructure environments, applying security controls and guardrails</li><li>Identify, remediate, and track infrastructure vulnerabilities within defined SLAs</li><li>Maintain audit logs and support compliance-related needs</li><li>Perform system upgrades, patching, and provide <strong>on-call support</strong> as required</li><li>Conduct root cause analysis and knowledge transfer sessions with internal teams</li><li>Collaborate closely with <strong>Network, Database, Infrastructure, and Architecture teams</strong> to align on platform strategy and delivery</li></ul><p><br></p>
We are looking for a skilled Sr. Software Engineer to join a dynamic team within the real estate and property industry. In this contract-to-permanent position, you will play a key role in building and maintaining custom web applications that drive operational efficiency across the organization. This role is based in Chicago, Illinois, and offers a hybrid work environment with three days onsite per week.<br><br>Responsibilities:<br>• Design, develop, test, and deploy full stack web applications using React and .NET technologies.<br>• Own the architecture, scalability, and maintainability of internal applications to ensure long-term performance.<br>• Build and integrate APIs, connecting front-end, back-end, and database layers seamlessly.<br>• Troubleshoot and enhance existing applications to improve functionality and user experience.<br>• Partner with data engineering and analytics teams to align applications with the organization's data platform.<br>• Write clean, secure, and well-documented code that adheres to industry best practices.<br>• Conduct code reviews and participate in deployment processes to maintain high-quality standards.<br>• Provide production support and resolve technical issues in a timely manner.<br>• Contribute to data-related tasks such as SQL queries, basic data modeling, and collaborating on analytics projects.
<p><strong>Platform Engineer – Data Science Platform</strong></p><p><strong>13 Week Contract to Hire</strong></p><p><strong>Onsite Hybrid: </strong>Columbus, OH or Dallas, TX or Minneapolis, MN</p><p><strong>Pay: </strong>Available on W2</p><p><strong>Job Summary</strong></p><p>We are seeking an experienced <strong>Platform Engineer</strong> to join a growing Platform Engineering team responsible for supporting and evolving a modern <strong>Data Science platform</strong>. This role focuses on building, managing, and securing cloud-based infrastructure that enables Data Science and AI/ML teams to operate efficiently at scale. The ideal candidate brings strong AWS expertise, hands-on infrastructure automation experience, and the ability to collaborate across technical and business teams.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Support and maintain ongoing <strong>Data Science infrastructure operations</strong></li><li>Design, build, and deploy <strong>AWS environments</strong> using automated <strong>CI/CD pipelines</strong></li><li>Manage and scale large, secure cloud environments to support current and future Data Science initiatives</li><li>Implement, own, and improve the <strong>image management lifecycle process</strong></li><li>Assist with the setup and ongoing management of <strong>AWS accounts</strong> dedicated to the Data Science platform</li><li>Develop and maintain infrastructure pipelines using <strong>CI/CD tools</strong> (e.g., Azure DevOps)</li><li>Build and manage environments using <strong>Infrastructure as Code (IaC)</strong> tools such as <strong>Terraform</strong></li><li>Develop scripts and applications using programming languages such as <strong>Python</strong></li><li>Manage and support database technologies including <strong>Athena, Oracle, MySQL, and PostgreSQL</strong></li><li>Leverage AWS services to enable <strong>Data Lake, Data Science, and AI/ML workloads</strong></li><li>Respond to requests from development and business users, removing technical roadblocks</li><li>Manage secured infrastructure environments, applying security controls and guardrails</li><li>Identify, remediate, and track infrastructure vulnerabilities within defined SLAs</li><li>Maintain audit logs and support compliance-related needs</li><li>Perform system upgrades, patching, and provide <strong>on-call support</strong> as required</li><li>Conduct root cause analysis and knowledge transfer sessions with internal teams</li><li>Collaborate closely with <strong>Network, Database, Infrastructure, and Architecture teams</strong> to align on platform strategy and delivery</li></ul>