We are looking for a skilled Software Developer to join our team on a contract basis in The Woodlands, Texas. In this role, you will design, develop, and maintain software applications while ensuring high performance and scalability. This position requires proficiency in modern programming languages and frameworks, as well as the ability to collaborate effectively with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain software applications using .NET, C#, and ASP.NET.<br>• Write clean, efficient, and scalable code to meet project requirements.<br>• Collaborate with team members to define project scope and technical specifications.<br>• Troubleshoot and resolve software bugs and performance issues.<br>• Implement user interfaces using JavaScript and ensure seamless functionality.<br>• Conduct code reviews and provide constructive feedback to peers.<br>• Stay updated on industry trends and emerging technologies.<br>• Test applications thoroughly to ensure optimal performance and reliability.<br>• Document technical processes and application development details.<br>• Support system enhancements and integrations as needed.
<p>The Software Platform Engineer will design, build, and maintain a core Data & Machine Learning platform.</p><p><br></p><ul><li>Platform Development: Design and implement new features for our AWS- and Databricks-based platform, staying current with industry trends and advancements in AI.</li><li>Core Component Implementation: Test and integrate central platform components that support our technology stack and serve tenants across the organization.</li><li>Collaboration: Partner with other engineering teams to identify and deliver platform enhancements that solve specific business problems.</li><li>Maintain Excellence: Uphold strict security protocols, compliance controls, and architectural principles in all aspects of your work.</li></ul>
<p>We are looking for an experienced Sr. Software Engineer to join our team in northwest Houston. In this contract-to-permanent position, you will play a key role in supporting, configuring, and optimizing the Manhattan Active Warehouse Management System (WMS) within a dynamic enterprise IT environment. This role demands a blend of technical expertise and functional knowledge to enhance warehouse operations and streamline processes.</p><p><br></p><p>Responsibilities:</p><p>• Configure, support, and enhance the Manhattan Active Warehouse Management System to meet business needs and improve operational efficiency.</p><p>• Develop and manage system extensions and execute integrations using RESTful APIs and the Manhattan integration framework.</p><p>• Troubleshoot and resolve technical issues, providing post-implementation support and performance tuning.</p><p>• Automate processes to optimize warehouse operations, including inventory management, labor optimization, and shipping.</p><p>• Design and generate ad hoc reports and dashboards using Manhattan tools, with a focus on actionable insights.</p><p>• Collaborate with cross-functional teams to implement solutions that align with warehouse management goals and strategies.</p><p>• Utilize tools such as Postman for scripting and ProActive for system configurations.</p><p>• Create Jasper reports and design labels using JMagic to meet specific operational requirements.</p><p>• Facilitate meetings and communicate technical concepts effectively to stakeholders at all organizational levels.</p><p>• Travel as needed to project sites and distribution facilities to support implementation and troubleshooting efforts.</p>
We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this contract-to-permanent position, you will play a key role in designing, developing, and optimizing data solutions while collaborating with cross-functional teams to deliver impactful results. This role offers an excellent opportunity to contribute to innovative projects and mentor other developers.<br><br>Responsibilities:<br>• Design and implement scalable data solutions using tools such as Apache Spark, Hadoop, and Kafka.<br>• Build and maintain efficient ETL processes to ensure seamless data transformation and integration.<br>• Collaborate with product owners, business analysts, and stakeholders to gather requirements and translate them into technical solutions.<br>• Optimize and troubleshoot complex data workflows to enhance performance and reliability.<br>• Lead technical discussions and provide architectural guidance for best practices and development standards.<br>• Mentor entry-level developers and conduct code reviews to ensure high-quality deliverables.<br>• Integrate data solutions with existing systems and third-party tools using APIs and cloud platforms.<br>• Stay updated with the latest data engineering technologies and proactively recommend improvements.<br>• Work within Agile/Scrum teams to deliver solutions aligned with user stories and project goals.<br>• Ensure compliance with security and quality standards through thorough documentation and testing.
We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. In this role, you will design, build, and manage data pipelines and systems to support business operations and decision-making processes. This position offers an exciting opportunity to work with cutting-edge technologies within the energy and natural resources sector.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines to efficiently process large volumes of data.<br>• Collaborate with cross-functional teams to gather requirements and design data solutions that meet business needs.<br>• Implement and optimize ETL processes to ensure the accuracy and reliability of data flows.<br>• Utilize technologies such as Apache Spark, Hadoop, and Kafka to manage and process data streams.<br>• Monitor and troubleshoot data systems to ensure optimal performance and reliability.<br>• Perform data integration from multiple sources to create unified datasets for analysis.<br>• Ensure data security and compliance with organizational and industry standards.<br>• Continuously evaluate and adopt new tools and technologies to enhance data engineering practices.<br>• Provide technical guidance and mentorship to entry-level team members as needed.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. This contract position offers an exciting opportunity to leverage your expertise in data processing and analytics within the dynamic energy and natural resources industry. You will play a pivotal role in designing, implementing, and optimizing data solutions to support critical business operations.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines using Apache Spark, Python, and ETL processes.<br>• Design and implement data storage solutions utilizing Apache Hadoop for efficient data management.<br>• Build real-time data streaming architectures with Apache Kafka to support operational needs.<br>• Optimize data workflows to ensure high performance and reliability across systems.<br>• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.<br>• Perform data quality checks and validation to ensure accuracy and consistency of datasets.<br>• Troubleshoot and resolve technical issues related to data processing and integration.<br>• Document processes and workflows to ensure knowledge sharing and operational transparency.<br>• Monitor and improve system performance, ensuring the infrastructure meets business demands.
<p>As the Software Engineer, you will design and develop automated solutions and system integrations to optimize our business operations. You will be a key player in gathering requirements from non-technical stakeholders, translating them into technical specifications, and ensuring that the delivered solutions meet their needs. You will also foster and maintain strong relationships with stakeholders, giving them confidence in the technology solutions that support their business processes. Your advanced skills in solution design, AWS, and programming languages will be critical to delivering scalable, reliable, and impactful solutions.</p><p><br></p><p>Automation Development: Design, develop, and oversee the maintenance of automation scripts and tools to streamline and optimize business processes.</p><p>Cloud Integration: Architect and manage integrations between various systems and AWS services, ensuring seamless data flow and system interoperability.</p><p>Solution Design: Architect scalable and reliable integration solutions that align with business requirements and technical constraints.</p><p>Testing & Validation: Oversee and participate in the testing of automation and integration solutions to ensure functionality, reliability, and security.</p><p>Documentation: Maintain detailed documentation of automation processes, integration workflows, and system configurations.</p><p>Continuous Improvement: Lead efforts to identify opportunities for process improvements, proposing and implementing innovative automation solutions across the organization.</p><p>Support & Troubleshooting: Provide high-level support for existing automation and integration solutions, troubleshooting issues, and implementing fixes as necessary.</p>
We are looking for an experienced Applications Architect to lead the design, development, and integration of enterprise applications for our organization. This long-term contract position is based in The Woodlands, Texas, and requires a forward-thinking, detail-oriented individual to create scalable, secure, and resilient application architectures that meet complex business needs. The ideal candidate will have deep expertise in cloud-native technologies, application integration strategies, and modern development practices.<br><br>Responsibilities:<br>• Define and own the architecture for enterprise applications, ensuring alignment with business strategies and operational goals.<br>• Develop technical roadmaps and integration strategies for line-of-business systems and customer-facing platforms.<br>• Establish and enforce standards, patterns, and best practices for application design and development.<br>• Guide development teams through complex system design decisions, ensuring scalability, security, and maintainability.<br>• Collaborate with data platform architects to design integration strategies and define clear data contracts.<br>• Lead the implementation of multi-tenant SaaS architectures on Microsoft Azure, including tenant isolation and identity management.<br>• Oversee DevOps practices, CI/CD pipelines, and application deployment for production environments.<br>• Conduct technical reviews and create reference implementations to elevate development standards.<br>• Partner with stakeholders to translate business requirements into robust technical solutions.<br>• Ensure operational excellence by optimizing application performance and maintaining resilience in production environments.
<p><strong>Full Stack Python Developer</strong></p><p>We are looking for a talented <strong>Full Stack Python Developer</strong> to join our team in <strong>Houston, Texas</strong>. In this role, you will collaborate with technical and non-technical stakeholders to design and develop innovative applications that support global commercial and operational functions. This position offers an exciting opportunity to create impactful solutions that enhance decision-making and optimize processes.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, and maintain full-stack applications using <strong>Python</strong>, <strong>React</strong>, and <strong>C#</strong>.</li><li>Rapidly prototype and iterate on solutions based on user feedback.</li><li>Analyze business requirements and manage project lifecycles independently.</li><li>Collaborate with cross-functional teams to identify opportunities for innovation.</li><li>Optimize and maintain relational databases such as <strong>PostgreSQL</strong> or <strong>Oracle</strong>.</li><li>Develop APIs and integrate existing tools and platforms to enhance system capabilities.</li><li>Stay current with advancements in Python, React, and related technologies.</li><li>Provide technical expertise and support to commercial teams.</li><li>Troubleshoot and resolve issues within existing applications.</li></ul><p><br></p>
<p>As our portfolio of AI-driven solutions continues to expand, we’re looking for an experienced <strong>Machine Learning Engineer</strong> to join our high-impact data science team. This role offers the opportunity to work across trading, operations, and support functions—delivering production-grade machine learning systems that solve real business problems.</p><p>You’ll collaborate with data scientists, software engineers, and commercial stakeholders to design, build, and deploy models that drive decision-making and innovation. From project scoping to model deployment, you’ll have visibility and influence across the full ML lifecycle.</p><p>🔧 Core Responsibilities</p><ul><li>Act as a thought partner to commercial teams, identifying high-value opportunities for AI/ML applications</li><li>Lead the design, development, and deployment of machine learning systems, with a focus on <strong>NLP</strong>, <strong>LLMs</strong>, and <strong>Generative AI</strong></li><li>Prioritize projects based on business impact and evolving market conditions</li><li>Collaborate with cross-functional teams to gather requirements and align solutions with strategic goals</li><li>Integrate ML solutions—including GenAI—into existing platforms to ensure seamless user experiences and scalable adoption</li><li>Participate in code reviews, experiment design, and tooling decisions to maintain high engineering standards</li><li>Share knowledge and mentor colleagues to build machine learning fluency across the organization</li></ul><p><br></p>
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in the internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
<p><strong><u>Essential Duties and Responsibilities:</u></strong></p><ul><li>Design and deploy F5 BIG-IP solutions, including LTM (Local Traffic Manager), DNS, and APM (Access Policy Manager).</li><li>Design and deploy Security Assertion Markup Language (SAML) and OpenID Connect (OIDC) authentication methodologies.</li><li>Configure and manage advanced F5 iRules and policies to support business-critical applications.</li><li>Optimize application performance by implementing load balancing, SSL offloading, and traffic routing solutions.</li><li>Troubleshoot and resolve issues related to F5 devices, ensuring high availability and performance.</li><li>Collaborate with cross-functional teams to integrate F5 solutions into existing network infrastructure.</li><li>Monitor F5 devices and applications using analytics tools to detect and mitigate potential risks.</li><li>Implement F5 WAF (Web Application Firewall) configurations to protect against web-based threats.</li><li>Automate routine F5 tasks using APIs, Ansible, or other automation frameworks.</li><li>Maintain and update F5OS, system documentation, policies, and procedures.</li><li>Stay updated on the latest F5 technologies and industry best practices.</li></ul><p><br></p>
<p>Position Overview</p><p>We are seeking a delivery-focused Data Automation Engineer to design and implement innovative automation solutions across a Microsoft Azure-based data analytics platform. This role partners closely with engineering teams and stakeholders to translate business requirements into scalable data engineering and AI-enabled solutions.</p><p>The ideal candidate is hands-on with Azure Data Factory, Synapse Pipelines, Apache Spark, Python, and SQL, and brings experience building reliable ETL pipelines across SQL and NoSQL environments. This role emphasizes performance optimization, automation, and proactive data quality within Agile DevOps delivery models.</p><p><br></p><p>Key Responsibilities</p><p>Data Engineering & Automation</p><ul><li>Develop high-performance data pipelines using Azure Data Factory, Synapse Pipelines, Spark Notebooks, Python, and SQL.</li><li>Design ETL workflows supporting advanced analytics, reporting, and AI/ML use cases.</li><li>Implement data migration, integrity, quality, metadata, and security controls across pipelines.</li><li>Monitor, troubleshoot, and optimize pipelines for availability, scalability, and performance.</li></ul><p>Performance Testing & Optimization</p><ul><li>Execute ETL performance testing and validate load performance against benchmarks.</li><li>Analyze pipeline runtime, throughput, latency, and resource utilization.</li><li>Support tuning activities (e.g., query optimization, partitioning, indexing).</li><li>Validate data completeness and consistency after high-volume processing.</li></ul><p>Platform Collaboration & DevOps Support</p><ul><li>Collaborate with DevOps and infrastructure teams to optimize compute, memory, and scaling.</li><li>Maintain versioning and configuration control across environments.</li><li>Support production, testing, development, and integration environments.</li><li>Actively participate in Agile delivery processes including Program Increment planning.</li></ul>
We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.