<p>As our portfolio of AI-driven solutions continues to expand, we’re looking for an experienced <strong>Machine Learning Engineer</strong> to join our high-impact data science team. This role offers the opportunity to work across trading, operations, and support functions—delivering production-grade machine learning systems that solve real business problems.</p><p>You’ll collaborate with data scientists, software engineers, and commercial stakeholders to design, build, and deploy models that drive decision-making and innovation. From project scoping to model deployment, you’ll have visibility and influence across the full ML lifecycle.</p><p>🔧 Core Responsibilities</p><ul><li>Act as a thought partner to commercial teams, identifying high-value opportunities for AI/ML applications</li><li>Lead the design, development, and deployment of machine learning systems, with a focus on <strong>NLP</strong>, <strong>LLMs</strong>, and <strong>Generative AI</strong></li><li>Prioritize projects based on business impact and evolving market conditions</li><li>Collaborate with cross-functional teams to gather requirements and align solutions with strategic goals</li><li>Integrate ML solutions—including GenAI—into existing platforms to ensure seamless user experiences and scalable adoption</li><li>Participate in code reviews, experiment design, and tooling decisions to maintain high engineering standards</li><li>Share knowledge and mentor colleagues to build machine learning fluency across the organization</li></ul><p><br></p>
<p>The Software Platform Engineer will design, build, and maintain a core Data & Machine Learning platform.</p><ul><li><strong>Platform Development:</strong> Design and implement new features for our AWS and Databricks-based platform, staying current with industry trends and advancements in AI.</li><li><strong>Core Component Implementation:</strong> Test and integrate central platform components that support our technology stack and serve tenants across the organization.</li><li><strong>Collaboration:</strong> Partner with other engineering teams to identify and deliver platform enhancements that solve specific business problems.</li><li><strong>Maintain Excellence:</strong> Uphold strict security protocols, compliance controls, and architectural principles in all aspects of your work.</li></ul><p><br></p>
<p><strong><u>Essential Duties and Responsibilities:</u></strong></p><ul><li>Design and deploy F5 BIG-IP solutions, including LTM (Local Traffic Manager), DNS, and APM (Access Policy Manager).</li><li>Design and deploy Security Assertion Markup Language (SAML) and OpenID Connect (OIDC) authentication methodologies.</li><li>Configure and manage advanced F5 iRules and policies to support business-critical applications.</li><li>Optimize application performance by implementing load balancing, SSL offloading, and traffic routing solutions.</li><li>Troubleshoot and resolve issues related to F5 devices, ensuring high availability and performance.</li><li>Collaborate with cross-functional teams to integrate F5 solutions into existing network infrastructure.</li><li>Monitor F5 devices and applications using analytics tools to detect and mitigate potential risks.</li><li>Implement F5 WAF (Web Application Firewall) configurations to protect against web-based threats.</li><li>Automate routine F5 tasks using APIs, Ansible, or other automation frameworks.</li><li>Maintain and update F5OS, system documentation, policies, and procedures.</li><li>Stay current on the latest F5 technologies and industry best practices.</li></ul><p><br></p>
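<p>As a flavor of the API-driven automation the duties above describe, here is a minimal, hedged sketch of scripting a routine BIG-IP task via the iControl REST API. The hostname, pool name, and member address are hypothetical placeholders; the sketch only builds the request URL and body, leaving the actual authenticated call (e.g. an HTTP PATCH with token auth) to the caller.</p>

```python
import json

# Hypothetical BIG-IP management endpoint (placeholder hostname)
BASE = "https://bigip.example.com/mgmt/tm"

def disable_pool_member(pool: str, member: str) -> tuple[str, str]:
    """Build the iControl REST URL and JSON body that disable a pool member.

    Setting session to "user-disabled" (with state "user-up") stops new
    connections while letting existing ones drain. Sending the request is
    left to the caller so this sketch stays side-effect free.
    """
    # iControl REST encodes the /Common partition prefix as ~Common~
    url = f"{BASE}/ltm/pool/~Common~{pool}/members/~Common~{member}"
    body = json.dumps({"session": "user-disabled", "state": "user-up"})
    return url, body

url, body = disable_pool_member("web_pool", "10.0.0.10:443")
```

<p>Wrapping calls like this in functions is what makes them reusable from Ansible playbooks or scheduled jobs rather than one-off scripts.</p>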
<p>Position Overview</p><p>We are seeking a delivery‑focused Data Automation Engineer to design and implement innovative automation solutions across a Microsoft Azure‑based data analytics platform. This role partners closely with engineering teams and stakeholders to translate business requirements into scalable data engineering and AI‑enabled solutions.</p><p>The ideal candidate is hands‑on with Azure Data Factory, Synapse Pipelines, Apache Spark, Python, and SQL, and brings experience building reliable ETL pipelines across SQL and NoSQL environments. This role emphasizes performance optimization, automation, and proactive data quality within Agile DevOps delivery models.</p><p><br></p><p>Key Responsibilities</p><p>Data Engineering & Automation</p><ul><li>Develop high‑performance data pipelines using Azure Data Factory, Synapse Pipelines, Spark Notebooks, Python, and SQL.</li><li>Design ETL workflows supporting advanced analytics, reporting, and AI/ML use cases.</li><li>Implement data migration, integrity, quality, metadata, and security controls across pipelines.</li><li>Monitor, troubleshoot, and optimize pipelines for availability, scalability, and performance.</li></ul><p>Performance Testing & Optimization</p><ul><li>Execute ETL performance testing and validate load performance against benchmarks.</li><li>Analyze pipeline runtime, throughput, latency, and resource utilization.</li><li>Support tuning activities (e.g., query optimization, partitioning, indexing).</li><li>Validate data completeness and consistency after high‑volume processing.</li></ul><p>Platform Collaboration & DevOps Support</p><ul><li>Collaborate with DevOps and infrastructure teams to optimize compute, memory, and scaling.</li><li>Maintain versioning and configuration control across environments.</li><li>Support production, testing, development, and integration environments.</li><li>Actively participate in Agile delivery processes including Program Increment planning.</li></ul>
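<p>To make the performance-testing duties above concrete, here is a small illustrative sketch of validating pipeline throughput against a benchmark. The run records and the SLA figure are invented for illustration; on the real platform these numbers would come from Azure Data Factory or Synapse pipeline run telemetry.</p>

```python
from statistics import mean

# Hypothetical pipeline run records: (rows processed, runtime in seconds)
runs = [(1_200_000, 300), (1_150_000, 290), (1_300_000, 340)]

def throughput_rows_per_sec(rows: int, seconds: float) -> float:
    """Simple throughput metric for one pipeline run."""
    return rows / seconds

rates = [throughput_rows_per_sec(r, s) for r, s in runs]
avg_rate = mean(rates)

# Flag runs falling below 90% of an assumed benchmark throughput (SLA)
BENCHMARK = 4_000  # rows/sec, hypothetical target
slow_runs = [i for i, rate in enumerate(rates) if rate < 0.9 * BENCHMARK]
```

<p>A check like this can gate a release: runs landing in <code>slow_runs</code> would trigger tuning work such as query optimization or repartitioning.</p>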
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in the internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
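<p>The "modular and reusable components" pattern described above can be sketched as a small connector interface: every data source implements the same contract, so one ETL entry point serves them all. Everything here is a toy illustration, not the in-house toolkit's actual API.</p>

```python
from abc import ABC, abstractmethod
from typing import Iterable

class DataSource(ABC):
    """Minimal connector contract (hypothetical) so new feeds plug into
    the same ETL entry point."""

    @abstractmethod
    def fetch(self) -> Iterable[dict]:
        ...

class DelimitedTextSource(DataSource):
    """Toy source parsing in-memory delimited rows; a real connector
    would read from a file, API, or database instead."""

    def __init__(self, raw: str):
        self.raw = raw

    def fetch(self) -> Iterable[dict]:
        header, *rows = self.raw.strip().splitlines()
        cols = header.split(",")
        for row in rows:
            yield dict(zip(cols, row.split(",")))

def load(source: DataSource) -> list[dict]:
    # Single entry point: any conforming source is loaded the same way
    return list(source.fetch())

records = load(DelimitedTextSource("ticker,price\nABC,10\nXYZ,12"))
```

<p>The payoff of the shared interface is that adding a new feed means writing one class, not touching the load path.</p>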
<p>Overview</p><p>Our client is seeking a hands-on <strong>Audio-Visual Technician / AV Engineer</strong> to support multiple facilities undergoing significant AV upgrades. This role will focus on supporting new and existing AV environments that leverage <strong>Q‑SYS and Crestron</strong>, with an emphasis on system stability, live event preparedness, and vendor coordination.</p><p>The ideal candidate has strong troubleshooting skills, experience supporting conference rooms and auditoriums, and is comfortable working closely with AV vendors and internal IT teams.</p>
We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.
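The Bronze/Silver/Gold layering named above can be sketched in miniature: raw records land untouched in Bronze, Silver validates and types them, and Gold holds a business-level aggregate. This toy uses plain Python for readability; on the actual platform each layer would be a Delta table transformed with Spark on Databricks.

```python
# Bronze: raw ingested events, kept as-is (including one bad record)
bronze = [
    {"ts": "2024-01-01", "symbol": "ABC", "qty": "5"},
    {"ts": "2024-01-01", "symbol": "ABC", "qty": "3"},
    {"ts": None, "symbol": "XYZ", "qty": "oops"},
]

def to_silver(rows: list[dict]) -> list[dict]:
    """Silver: validate and type the raw rows, dropping unparseable ones."""
    out = []
    for r in rows:
        if r["ts"] is None or not r["qty"].isdigit():
            continue  # quarantine/log in a real pipeline
        out.append({**r, "qty": int(r["qty"])})
    return out

def to_gold(rows: list[dict]) -> dict:
    """Gold: business-level aggregate, here total quantity per symbol."""
    totals: dict = {}
    for r in rows:
        totals[r["symbol"]] = totals.get(r["symbol"], 0) + r["qty"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
```

Keeping Bronze immutable is the key design choice: Silver and Gold can always be rebuilt from it when validation rules or business logic change.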