We are looking for an experienced AWS/Databricks Engineer to join our team in Houston, Texas. This is a long-term contract position ideal for professionals with a strong background in data engineering and cloud technologies. The role will focus on leveraging Python and Databricks to optimize data processes and enhance system performance.<br><br>Responsibilities:<br>• Develop and implement scalable data engineering solutions using Python and Databricks.<br>• Collaborate with cross-functional teams to design and optimize data workflows.<br>• Migrate and enhance existing Python scripts to Databricks for improved functionality.<br>• Utilize cloud technologies to support data integration and analytics processes.<br>• Implement algorithms and data visualization methods to present actionable insights.<br>• Design and maintain APIs to streamline data interactions and integrations.<br>• Work with tools like Apache Kafka, Spark, and Hadoop to manage large-scale data systems.<br>• Perform data analysis and develop strategies to improve system efficiency.<br>• Ensure high-quality data pipelines and address performance bottlenecks.<br>• Stay updated on emerging trends in data engineering and recommend innovative solutions.
<p><br></p><p>The Software Platform Engineer will design, build, and maintain a core Data & Machine Learning platform.</p><p><br></p><ul><li>Platform Development: Design and implement new features for our AWS and Databricks-based platform, staying current with industry trends and advancements in AI.</li><li>Core Component Implementation: Test and integrate central platform components that support our technology stack and serve tenants across the organization.</li><li>Collaboration: Partner with other engineering teams to identify and deliver platform enhancements that solve specific business problems.</li><li>Maintain Excellence: Uphold strict security protocols, compliance controls, and architectural principles in all aspects of your work.</li></ul><p><br></p><p><br></p>
<p>Architect and deliver modern data platform solutions with a strong emphasis on Databricks and contemporary cloud data technologies.</p><p>Build secure, scalable, and high-performing data environments that enable analytics, reporting, and enterprise-wide data initiatives.</p><p>Oversee and execute migrations from legacy relational databases into Databricks-based ecosystems.</p><p>Design and structure scalable data pipelines and foundational data infrastructure aligned with organizational goals.</p><p>Create and maintain ETL/ELT processes within Databricks to ensure efficient ingestion, transformation, and delivery of data.</p><p>Continuously refine and optimize data workflows to improve performance, stability, and data quality across all processes.</p><p>Manage end-to-end data transitions to ensure operational continuity with minimal business disruption.</p><p>Monitor Databricks workloads and optimize performance, scalability, and cost efficiency across compute and storage layers.</p><p>Partner with data engineers, scientists, analysts, and product stakeholders to gather requirements and build fit-for-purpose data solutions.</p><p>Establish and enforce data engineering best practices, development standards, and architectural guidelines.</p><p>Assess emerging tools and technologies to enhance pipeline efficiency, reliability, and automation capabilities.</p><p>Provide technical direction, guidance, and mentorship to junior engineers and team members.</p><p>Collaborate closely with DevOps and infrastructure teams to deploy, manage, and support data systems in production.</p><p>Ensure all data solutions meet compliance standards, organizational security policies, and regulatory obligations.</p><p>Work with enterprise architects and IT leadership to align data architecture with broader technology strategies and long-term roadmaps.</p>
<p>As our portfolio of AI-driven solutions continues to expand, we’re looking for an experienced <strong>Machine Learning Engineer</strong> to join our high-impact data science team. This role offers the opportunity to work across trading, operations, and support functions—delivering production-grade machine learning systems that solve real business problems.</p><p>You’ll collaborate with data scientists, software engineers, and commercial stakeholders to design, build, and deploy models that drive decision-making and innovation. From project scoping to model deployment, you’ll have visibility and influence across the full ML lifecycle.</p><p>🔧 Core Responsibilities</p><ul><li>Act as a thought partner to commercial teams, identifying high-value opportunities for AI/ML applications</li><li>Lead the design, development, and deployment of machine learning systems, with a focus on <strong>NLP</strong>, <strong>LLMs</strong>, and <strong>Generative AI</strong></li><li>Prioritize projects based on business impact and evolving market conditions</li><li>Collaborate with cross-functional teams to gather requirements and align solutions with strategic goals</li><li>Integrate ML solutions—including GenAI—into existing platforms to ensure seamless user experiences and scalable adoption</li><li>Participate in code reviews, experiment design, and tooling decisions to maintain high engineering standards</li><li>Share knowledge and mentor colleagues to build machine learning fluency across the organization</li></ul><p><br></p>
We are looking for an entry-level Software Engineer to join our dynamic team in Spring, Texas. This role offers an exciting opportunity to contribute to the development and enhancement of cutting-edge software applications. You will work closely with a collaborative team to design, implement, and maintain innovative solutions.<br><br>Responsibilities:<br>• Develop and maintain software applications using C# and .NET frameworks.<br>• Collaborate with team members to design and implement user-friendly features using React.js and JavaScript.<br>• Troubleshoot and resolve software issues to ensure optimal functionality.<br>• Participate in code reviews to uphold high-quality standards.<br>• Assist in testing and debugging applications to ensure reliability and performance.<br>• Create and update technical documentation for software projects.<br>• Implement improvements to existing systems based on user feedback.<br>• Stay updated on emerging technologies and industry trends to continuously enhance skills.
<p>Position Overview</p><p>We are seeking a Data Governance & Data Quality Platform Engineer to own the technical administration, integration, and optimization of enterprise data governance and data quality platforms (e.g., Atlan, Monte Carlo). This role ensures governance and quality tools are scalable, securely integrated into the enterprise data ecosystem, and maintained for high availability and performance.</p><p>The ideal candidate brings strong platform engineering skills, experience automating data quality and metadata workflows, and a solid understanding of governance, compliance, and modern data architectures.</p><p>Key Responsibilities</p><p><br></p><p>1. Platform Engineering & Administration</p><ul><li>Configure and maintain data governance platforms for metadata management, data lineage, and governance workflows</li><li>Configure data quality tools for profiling, rule creation, and monitoring dashboards</li><li>Manage platform security, including user roles, authentication, SSO, RBAC, and access controls</li></ul><p>2. Integration & Automation</p><ul><li>Develop and maintain integrations across data sources, databases, data lakes, and BI tools</li><li>Automate metadata ingestion and data quality checks using APIs, Python scripts, or ETL frameworks</li><li>Configure and maintain connectors for analytics and reporting platforms</li></ul><p>3. Performance, Reliability & Monitoring</p><ul><li>Monitor platform health and optimize performance and scalability</li><li>Apply upgrades, patches, and troubleshoot technical issues</li><li>Implement logging, alerting, and proactive monitoring for governance and data quality environments</li></ul><p>4. Technical Support & Issue Resolution</p><ul><li>Provide Tier 3 support for platform-related incidents and escalations</li><li>Debug integration failures and resolve configuration conflicts</li><li>Collaborate with vendors for advanced troubleshooting and roadmap alignment</li></ul><p>5. 
Security, Compliance & Risk Management</p><ul><li>Ensure platforms comply with data privacy and security standards (e.g., GDPR, CCPA)</li><li>Implement encryption, audit logging, and access controls</li><li>Support compliance reporting and risk assessments using governance and data quality metrics</li></ul>
<p><strong>Full Stack Python Developer</strong></p><p>We are looking for a talented <strong>Full Stack Python Developer</strong> to join our team in <strong>Houston, Texas</strong>. In this role, you will collaborate with technical and non-technical stakeholders to design and develop innovative applications that support global commercial and operational functions. This position offers an exciting opportunity to create impactful solutions that enhance decision-making and optimize processes.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, and maintain full-stack applications using <strong>Python</strong>, <strong>React</strong>, and <strong>C#</strong>.</li><li>Rapidly prototype and iterate on solutions based on user feedback.</li><li>Analyze business requirements and manage project lifecycles independently.</li><li>Collaborate with cross-functional teams to identify opportunities for innovation.</li><li>Optimize and maintain relational databases such as <strong>PostgreSQL</strong> or <strong>Oracle</strong>.</li><li>Develop APIs and integrate existing tools and platforms to enhance system capabilities.</li><li>Stay current with advancements in Python, React, and related technologies.</li><li>Provide technical expertise and support to commercial teams.</li><li>Troubleshoot and resolve issues within existing applications.</li></ul><p><br></p><p><br></p>
We are looking for a skilled Data Engineer with expertise in AI/ML technologies and prior experience in the oil and gas industry to join our team in Houston, Texas. In this contract-to-permanent position, you will play a key role in transforming data into actionable insights through advanced analytics and innovative solutions. This opportunity is ideal for professionals who thrive in data-driven environments and excel at leveraging tools like Power BI and PowerApps.<br><br>Responsibilities:<br>• Develop and manage Power BI dashboards and reports to deliver meaningful insights from raw data.<br>• Utilize PowerApps to create and maintain applications that support business intelligence initiatives.<br>• Collaborate with cross-functional teams to understand data requirements and implement solutions.<br>• Analyze complex datasets to identify trends and patterns that inform decision-making.<br>• Ensure the accuracy, reliability, and security of data within BI systems.<br>• Optimize data pipelines and workflows for improved performance and scalability.<br>• Provide technical expertise to support AI/ML integration into existing data processes.<br>• Stay updated on emerging technologies and best practices in data engineering and AI/ML.<br>• Troubleshoot and resolve issues related to data tools and processes.<br>• Document processes, workflows, and methodologies for future reference.
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
<p>We are seeking a skilled <strong>Azure Data Engineer</strong> to design, build, and maintain scalable data solutions on the Microsoft Azure platform. The ideal candidate will have strong experience developing data pipelines, optimizing data architectures, and supporting analytics and business intelligence initiatives. This role will work closely with data analysts, data scientists, and business stakeholders to ensure reliable, high-quality data is available for reporting and advanced analytics.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, and maintain <strong>scalable data pipelines and ETL/ELT processes</strong> using Azure data services.</li><li>Build and manage data solutions using tools such as <strong>Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, and Azure Databricks</strong>.</li><li>Develop and optimize <strong>data models, transformations, and storage strategies</strong> for large-scale structured and unstructured datasets.</li><li>Ensure <strong>data quality, integrity, and security</strong> across the data platform.</li><li>Monitor and troubleshoot data workflows, pipeline failures, and performance issues.</li><li>Collaborate with data analysts, BI developers, and data scientists to deliver reliable datasets for reporting and analytics.</li><li>Implement <strong>data governance and best practices</strong> for data management and documentation.</li><li>Automate data processes and deployments using <strong>CI/CD pipelines and infrastructure-as-code practices</strong>.</li><li>Optimize cost and performance of Azure data services.</li><li>Stay current with new Azure features, tools, and industry best practices.</li></ul><p><br></p>
<p>Position Overview</p><p>We are seeking a talented <strong>Data Engineer</strong> with strong experience in <strong>Python, AWS, and Databricks</strong> to design and build scalable data pipelines and modern data platforms. The ideal candidate will help develop and maintain data infrastructure that supports analytics, machine learning, and business intelligence initiatives. This role requires hands-on experience working with large datasets, cloud-native architectures, and distributed data processing frameworks.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain <strong>scalable data pipelines and ETL/ELT workflows</strong> using Python and cloud technologies.</li><li>Develop and optimize data solutions using <strong>AWS services and Databricks</strong>.</li><li>Build and manage <strong>data lakes and data warehouses</strong> for structured and unstructured data.</li><li>Implement <strong>data transformation and processing pipelines</strong> using Apache Spark within Databricks.</li><li>Integrate data from multiple sources including APIs, databases, and streaming systems.</li><li>Ensure <strong>data quality, governance, security, and compliance</strong> across the data platform.</li><li>Monitor pipeline performance and troubleshoot <strong>data pipeline failures or latency issues</strong>.</li><li>Collaborate with <strong>data analysts, data scientists, and business stakeholders</strong> to deliver reliable datasets.</li><li>Optimize storage and compute costs within the AWS ecosystem.</li></ul><p><br></p>