<p>We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this long-term contract role, you will design and implement data solutions, ensuring efficient data processing and management. The ideal candidate will have expertise in handling large-scale data systems and a passion for optimizing workflows.</p><p><strong>Responsibilities:</strong></p><ul><li>Develop and maintain scalable data pipelines using modern tools and frameworks.</li><li>Implement data transformation processes to ensure efficient storage and retrieval.</li><li>Collaborate with cross-functional teams to design and optimize data architecture.</li><li>Utilize Apache Spark and Python to process and analyze large datasets.</li><li>Manage and monitor data workflows, ensuring high performance and reliability.</li><li>Integrate and maintain ETL processes to streamline data operations.</li><li>Work with Apache Kafka and Hadoop to enhance system capabilities.</li><li>Troubleshoot and resolve issues related to data systems and workflows.</li><li>Ensure data security and compliance with industry standards.</li><li>Document processes and provide technical support to stakeholders.</li></ul>
<p><strong>About the Role:</strong></p><p>We’re seeking a <strong>Data Engineer</strong> with hands-on experience in <strong>SAP Joule</strong> to help design, build, and optimize data pipelines that power advanced analytics and AI-driven insights. You will collaborate closely with business and technology teams to leverage SAP Joule for intelligent data processing and automation.</p>
<p><strong>About the Role:</strong> We’re seeking a <strong>Senior Data Engineer</strong> with deep expertise in <strong>Python</strong> to design and implement scalable data solutions for complex, high-volume environments. This role involves building robust data pipelines, optimizing workflows, and collaborating with analytics teams to deliver actionable insights.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Develop and maintain advanced <strong>Python-based ETL pipelines</strong> for large-scale data processing.</li><li>Integrate data from multiple sources into secure, high-performance data platforms.</li><li>Collaborate with data scientists and business analysts to enable predictive analytics and reporting.</li><li>Implement best practices for data governance, security, and compliance.</li><li>Optimize data workflows for speed, reliability, and scalability.</li></ul>
<p>We are looking for a skilled DevOps Engineer to join our team in Houston, Texas. In this long-term contract role, you will play a critical part in designing, implementing, and maintaining cutting-edge infrastructure solutions. If you're passionate about cloud technologies, automation, and optimizing system performance, we want to hear from you.</p><p><strong>Responsibilities:</strong></p><ul><li>Build and maintain scalable infrastructure using cloud platforms such as Amazon Web Services (AWS).</li><li>Develop and manage automation scripts and tools with technologies like Ansible and Terraform.</li><li>Deploy and manage container orchestration platforms, including Kubernetes, to support application delivery.</li><li>Monitor and enhance system performance by implementing robust CI/CD pipelines.</li><li>Collaborate with development and operations teams to streamline workflows and improve deployment efficiency.</li><li>Ensure system security and compliance through regular audits and updates.</li><li>Troubleshoot and resolve infrastructure-related issues to ensure maximum reliability and uptime.</li><li>Evaluate and implement new tools or processes to improve operational efficiency.</li><li>Document processes, configurations, and best practices to support team knowledge sharing.</li></ul>
<p>We are looking for a skilled Data Engineer to join our team on a long-term contract basis in Houston, Texas. The ideal candidate will play a key role in designing, implementing, and maintaining data applications while ensuring alignment with organizational data standards. This position requires expertise in handling large-scale data processing and a collaborative approach to problem-solving.</p><p><strong>Responsibilities:</strong></p><ul><li>Collaborate with teams to design and implement applications utilizing both established and emerging technology platforms.</li><li>Ensure all applications adhere to organizational data management standards.</li><li>Develop and optimize queries, stored procedures, and reports using SQL Server to address user requests.</li><li>Work closely with team members to monitor application performance and ensure quality.</li><li>Communicate effectively with users and management to resolve issues and provide updates.</li><li>Create and maintain technical documentation and application procedures.</li><li>Ensure compliance with change management and security protocols.</li></ul>
<p><strong>About the Role:</strong> We’re seeking a <strong>Data Engineer</strong> with strong experience in <strong>Microsoft Fabric</strong> to design and optimize data pipelines that power analytics and business intelligence. This role is ideal for someone passionate about building scalable data solutions and leveraging modern cloud technologies to drive insights.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, build, and maintain data pipelines using <strong>Microsoft Fabric components</strong> (Lakehouse, Data Factory, Data Warehouses, Notebooks, Dataflows).</li><li>Develop ETL processes for structured and unstructured data across multiple sources.</li><li>Collaborate with data scientists and analysts to deliver high-quality, reliable data solutions.</li><li>Implement best practices for data governance, security, and compliance.</li><li>Monitor and optimize data infrastructure for performance and scalability.</li></ul>