<p>As the Software Engineer, you will design and develop automated solutions and system integrations to optimize our business operations. You will play a key role in gathering requirements from non-technical stakeholders, translating them into technical specifications, and ensuring that the delivered solutions meet their needs. You will also foster and maintain strong relationships with stakeholders, giving them confidence in the technology solutions that support their business processes. Your advanced skills in solution design, AWS, and programming languages will be critical to delivering scalable, reliable, and impactful solutions.</p><p><br></p><p>Responsibilities:</p><p>Automation Development: Design, develop, and oversee the maintenance of automation scripts and tools to streamline and optimize business processes.</p><p>Cloud Integration: Architect and manage integrations between various systems and AWS services, ensuring seamless data flow and system interoperability.</p><p>Solution Design: Architect scalable and reliable integration solutions that align with business requirements and technical constraints.</p><p>Testing & Validation: Oversee and participate in the testing of automation and integration solutions to ensure functionality, reliability, and security.</p><p>Documentation: Maintain detailed documentation of automation processes, integration workflows, and system configurations.</p><p>Continuous Improvement: Lead efforts to identify process improvement opportunities, proposing and implementing innovative automation solutions across the organization.</p><p>Support & Troubleshooting: Provide high-level support for existing automation and integration solutions, troubleshooting issues and implementing fixes as necessary.</p><p><br></p>
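<p>The kind of system-to-AWS integration described above can be illustrated with a minimal Lambda-style handler that maps an event from one system into a payload for another. The event shape, field names, and downstream target are hypothetical, not taken from the posting.</p>

```python
# Hypothetical sketch of an AWS Lambda-style integration handler: translate
# an upstream order event into a payload for a downstream billing system.
# All field names here (order_id, total, invoice_id, ...) are invented.

def handler(event: dict, context=None) -> dict:
    """Map an upstream order event to a downstream billing payload."""
    order = event["detail"]
    payload = {
        "invoice_id": f"INV-{order['order_id']}",
        "amount_cents": int(round(order["total"] * 100)),
        "currency": order.get("currency", "USD"),
    }
    # In a real deployment this payload would be forwarded onward (for
    # example to a queue or a partner API); here we simply return it.
    return {"status": "ok", "payload": payload}


event = {"detail": {"order_id": "42", "total": 19.99}}
print(handler(event))
```

<p>Keeping the translation logic in a pure function like this makes the integration easy to unit-test independently of any AWS infrastructure.</p>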
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in the internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
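<p>A "modular and reusable Python component" connecting an external source to an internal system, as the first responsibility describes, might be sketched as follows. All class and function names (<code>CsvSource</code>, <code>InMemoryStore</code>, <code>run_pipeline</code>) are hypothetical, not part of any real in-house toolkit.</p>

```python
import csv
import io
from typing import Iterable, Protocol


class DataSource(Protocol):
    """Interface that every external data-source adapter implements."""
    def records(self) -> Iterable[dict]: ...


class CsvSource:
    """Adapter that parses CSV text, e.g. fetched from a vendor feed."""
    def __init__(self, text: str):
        self.text = text

    def records(self) -> Iterable[dict]:
        yield from csv.DictReader(io.StringIO(self.text))


class InMemoryStore:
    """Stand-in for an internal database; collects loaded rows."""
    def __init__(self):
        self.rows: list[dict] = []

    def load(self, records: Iterable[dict]) -> int:
        count = 0
        for rec in records:
            self.rows.append(rec)
            count += 1
        return count


def run_pipeline(source: DataSource, store: InMemoryStore) -> int:
    """Reusable glue: any adapter satisfying DataSource can feed any store."""
    return store.load(source.records())


store = InMemoryStore()
loaded = run_pipeline(CsvSource("symbol,price\nABC,101.5\nXYZ,99.0\n"), store)
print(loaded)  # 2
```

<p>Coding against a small protocol like <code>DataSource</code> is what makes components reusable: adding a new vendor feed means writing one adapter, not touching the pipeline.</p>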
<p>We are looking for a highly skilled Remote Oracle Cloud Database Administrator to join our team in Texas; candidates must be able to work onsite in the Houston or Dallas market. In this role, you will be responsible for managing, securing, and optimizing Oracle Exadata systems across on-premises and cloud environments. Your expertise in database architecture, advanced security measures, and performance tuning will ensure the seamless operation and protection of mission-critical systems.</p><p><br></p><p>Responsibilities:</p><p>• Manage and maintain Oracle Exadata systems, including storage, compute nodes, and networks across both on-premises and cloud environments.</p><p>• Implement advanced security protocols such as encryption, access controls, and auditing to safeguard database environments.</p><p>• Perform installations, configurations, patching, and upgrades of Exadata software and firmware with minimal disruption.</p><p>• Monitor database performance, identify and resolve issues proactively, and optimize systems for peak efficiency.</p><p>• Develop and execute comprehensive backup and recovery plans to support business continuity and disaster recovery.</p><p>• Collaborate with cross-functional teams to address database-related requirements while ensuring compliance and security standards.</p><p>• Automate routine database administration and security processes using scripting languages to enhance efficiency.</p><p>• Participate in capacity planning and disaster recovery exercises, focusing on secure architecture and compliance.</p><p>• Provide 24/7 production support, including handling urgent incidents related to database performance or security.</p>
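<p>One routine administration task of the kind the automation responsibility describes — flagging tablespaces above a usage threshold — could be scripted along these lines. This is a plain-Python sketch with made-up sample numbers; a real script would query Oracle's dictionary views (e.g. <code>DBA_TABLESPACE_USAGE_METRICS</code>) rather than a hard-coded list.</p>

```python
# Hypothetical sketch: flag tablespaces whose usage exceeds a threshold.
# The metrics list stands in for rows a real script would fetch from Oracle.

def over_threshold(metrics: list[dict], threshold_pct: float = 85.0) -> list[str]:
    """Return names of tablespaces whose used percentage exceeds the threshold."""
    alerts = []
    for m in metrics:
        used_pct = 100.0 * m["used_mb"] / m["total_mb"]
        if used_pct > threshold_pct:
            alerts.append(m["name"])
    return alerts


sample = [
    {"name": "USERS", "used_mb": 900, "total_mb": 1000},   # 90% -> alert
    {"name": "SYSAUX", "used_mb": 400, "total_mb": 1000},  # 40% -> ok
]
print(over_threshold(sample))  # ['USERS']
```

<p>Wiring a check like this into a scheduler and an alerting channel is what turns a manual daily review into an automated one.</p>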
We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.
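The Bronze/Silver/Gold flow of the Medallion pattern mentioned above can be illustrated with a plain-Python stand-in for the Spark/Databricks jobs the posting describes; the record fields and cleaning rules here are hypothetical.

```python
# Bronze layer: raw records as ingested, including duplicates and bad rows.
bronze = [
    {"id": "1", "amount": "10.5", "region": "tx"},
    {"id": "1", "amount": "10.5", "region": "tx"},   # duplicate of id 1
    {"id": "2", "amount": "bad", "region": "TX"},    # unparseable amount
    {"id": "3", "amount": "4.0", "region": "TX"},
]

def to_silver(rows: list[dict]) -> list[dict]:
    """Silver layer: deduplicate, validate, and standardize types and casing."""
    seen, silver = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # drop rows that fail validation
        if r["id"] in seen:
            continue  # drop duplicates by business key
        seen.add(r["id"])
        silver.append({"id": r["id"], "amount": amount,
                       "region": r["region"].upper()})
    return silver

def to_gold(rows: list[dict]) -> dict:
    """Gold layer: business-level aggregate (total amount per region)."""
    totals: dict = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

print(to_gold(to_silver(bronze)))  # {'TX': 14.5}
```

The same layering holds at scale: Bronze preserves raw history, Silver enforces quality rules once, and Gold serves consumers with curated aggregates.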
<p><br></p><p>The Software Platform Engineer will design, build, and maintain a core Data & Machine Learning platform.</p><p><br></p><p>Platform Development: Design and implement new features for our AWS and Databricks-based platform, staying current with industry trends and advancements in AI.</p><p>Core Component Implementation: Test and integrate central platform components that support our technology stack and serve tenants across the organization.</p><p>Collaboration: Partner with other engineering teams to identify and deliver platform enhancements that solve specific business problems.</p><p>Maintain Excellence: Uphold strict security protocols, compliance controls, and architectural principles in all aspects of your work.</p><p><br></p>