<p><strong>Principal Data Scientist (AI/ML Focus)</strong></p><p><strong>Service Type:</strong> 42-Week Contract</p><p><strong>Worksite:</strong> Onsite, Monday–Thursday, Houston, TX</p><p><strong>Pay:</strong> Available on W2</p><p><strong>Position Overview</strong></p><p>We are seeking a <strong>Principal Scientist, Data</strong> with deep expertise in <strong>AI, Machine Learning, Natural Language Processing (NLP), Computer Vision (CV), and Generative AI</strong>. This role requires a strong technical foundation, excellent communication skills, and the ability to translate complex methodologies into meaningful business outcomes.</p><p>The ideal candidate is proactive, innovative, and passionate about developing advanced AI-driven solutions using modern architectures, including <strong>LLMs, deep learning models, multi-agent systems, and generative AI techniques</strong>.</p><p><strong>Requirements</strong></p><ul><li>Strong background in <strong>NLP, Computer Vision, and Generative AI</strong>.</li><li>Broad background in <strong>Artificial Intelligence</strong>.</li><li>Excellent verbal and written communication skills.</li></ul><p><strong>Key Responsibilities</strong></p><ul><li>Develop, train, and optimize <strong>machine learning and deep learning models</strong>.</li><li>Build advanced AI solutions using <strong>LLMs, multi-agent systems, fine-tuning techniques, and inference optimization</strong>.</li><li>Transform complex data science methodologies into actionable insights.</li><li>Collaborate closely with stakeholders to develop high-value, data-driven solutions.</li><li>Create clear, compelling presentations, dashboards, and deliverables for non-technical audiences.</li><li>Drive full-lifecycle AI/ML projects from ideation through deployment.</li></ul>
<p><strong>Position Overview</strong></p><p>We are seeking a <strong>Data Governance & Data Quality Platform Engineer</strong> to own the technical administration, integration, and optimization of enterprise data governance and data quality platforms (e.g., Atlan, Monte Carlo). This role ensures governance and quality tools are scalable, securely integrated into the enterprise data ecosystem, and maintained for high availability and performance.</p><p>The ideal candidate brings strong platform engineering skills, experience automating data quality and metadata workflows, and a solid understanding of governance, compliance, and modern data architectures.</p><p><strong>Key Responsibilities</strong></p><p><br></p><p>1. Platform Engineering & Administration</p><ul><li>Configure and maintain data governance platforms for metadata management, data lineage, and governance workflows</li><li>Configure data quality tools for profiling, rule creation, and monitoring dashboards</li><li>Manage platform security, including user roles, authentication, SSO, RBAC, and access controls</li></ul><p>2. Integration & Automation</p><ul><li>Develop and maintain integrations across data sources, databases, data lakes, and BI tools</li><li>Automate metadata ingestion and data quality checks using APIs, Python scripts, or ETL frameworks</li><li>Configure and maintain connectors for analytics and reporting platforms</li></ul><p>3. Performance, Reliability & Monitoring</p><ul><li>Monitor platform health and optimize performance and scalability</li><li>Apply upgrades and patches, and troubleshoot technical issues</li><li>Implement logging, alerting, and proactive monitoring for governance and data quality environments</li></ul><p>4. Technical Support & Issue Resolution</p><ul><li>Provide Tier 3 support for platform-related incidents and escalations</li><li>Debug integration failures and resolve configuration conflicts</li><li>Collaborate with vendors for advanced troubleshooting and roadmap alignment</li></ul><p>5. Security, Compliance & Risk Management</p><ul><li>Ensure platforms comply with data privacy and security standards (e.g., GDPR, CCPA)</li><li>Implement encryption, audit logging, and access controls</li><li>Support compliance reporting and risk assessments using governance and data quality metrics</li></ul>
We are looking for an experienced AWS/Databricks Engineer to join our team in Houston, Texas. This is a long-term contract position ideal for professionals with a strong background in data engineering and cloud technologies. The role will focus on leveraging Python and Databricks to optimize data processes and enhance system performance.<br><br>Responsibilities:<br>• Develop and implement scalable data engineering solutions using Python and Databricks.<br>• Collaborate with cross-functional teams to design and optimize data workflows.<br>• Migrate and enhance existing Python scripts to Databricks for improved functionality.<br>• Utilize cloud technologies to support data integration and analytics processes.<br>• Implement algorithms and data visualization methods to present actionable insights.<br>• Design and maintain APIs to streamline data interactions and integrations.<br>• Work with tools like Apache Kafka, Spark, and Hadoop to manage large-scale data systems.<br>• Perform data analysis and develop strategies to improve system efficiency.<br>• Ensure high-quality data pipelines and address performance bottlenecks.<br>• Stay updated on emerging trends in data engineering and recommend innovative solutions.
We are looking for an experienced Data Engineer to join our team on a long-term contract basis. Based in Houston, Texas, this role offers an exciting opportunity to work with cutting-edge data technologies, design scalable solutions, and contribute to data-driven decision-making processes. If you are passionate about optimizing data systems and driving innovation, we encourage you to apply.<br><br>Responsibilities:<br>• Develop, maintain, and optimize scalable data pipelines using Apache Spark and Python.<br>• Implement ETL processes to ensure seamless extraction, transformation, and loading of data across systems.<br>• Collaborate with cross-functional teams to integrate Apache Hadoop and Apache Kafka into the data architecture.<br>• Monitor and troubleshoot data systems to ensure reliability and performance.<br>• Design and maintain data models, ensuring alignment with business requirements.<br>• Conduct thorough testing and validation of data processes to guarantee accuracy.<br>• Document data workflows and processes for future reference and team collaboration.<br>• Provide technical guidance and support to team members on data engineering best practices.<br>• Stay current on emerging technologies and trends in big data and analytics.<br>• Contribute to improving data governance and security protocols.
We are looking for a highly skilled Data Engineer to join our team in Houston, Texas. This contract-to-permanent position offers an exciting opportunity to work on cutting-edge data solutions and collaborate with cross-functional teams to deliver impactful results. The ideal candidate will possess strong technical expertise and a passion for creating efficient and scalable data systems.<br><br>Responsibilities:<br>• Design and implement scalable data architectures to support business needs and analytics requirements.<br>• Develop and optimize ETL pipelines for data extraction, transformation, and loading across diverse data sources.<br>• Collaborate with stakeholders to gather requirements and translate them into technical solutions.<br>• Utilize tools such as Apache Spark, Hadoop, and Kafka to manage large-scale data processing and real-time streaming.<br>• Ensure data quality and security by implementing best practices and conducting thorough testing.<br>• Develop and maintain technical documentation related to system design, development processes, and operational workflows.<br>• Work with Agile teams to deliver solutions efficiently while actively participating in sprints and ceremonies.<br>• Troubleshoot and resolve issues in existing data systems to maintain optimal performance.<br>• Provide guidance and conduct code reviews for entry-level team members.<br>• Stay updated on emerging technologies and recommend improvements to enhance data engineering practices.
We are looking for a skilled Data Engineer to join our team in Houston, Texas. In this contract-to-permanent position, you will play a key role in designing, developing, and optimizing data solutions while collaborating with cross-functional teams to deliver impactful results. This role offers an excellent opportunity to contribute to innovative projects and mentor other developers.<br><br>Responsibilities:<br>• Design and implement scalable data solutions using tools such as Apache Spark, Hadoop, and Kafka.<br>• Build and maintain efficient ETL processes to ensure seamless data transformation and integration.<br>• Collaborate with product owners, business analysts, and stakeholders to gather requirements and translate them into technical solutions.<br>• Optimize and troubleshoot complex data workflows to enhance performance and reliability.<br>• Lead technical discussions and provide architectural guidance on best practices and development standards.<br>• Mentor entry-level developers and conduct code reviews to ensure high-quality deliverables.<br>• Integrate data solutions with existing systems and third-party tools using APIs and cloud platforms.<br>• Stay updated with the latest data engineering technologies and proactively recommend improvements.<br>• Work within Agile/Scrum teams to deliver solutions aligned with user stories and project goals.<br>• Ensure compliance with security and quality standards through thorough documentation and testing.
<p>Position Overview</p><p>We are seeking a talented <strong>Data Engineer</strong> with strong experience in <strong>Python, AWS, and Databricks</strong> to design and build scalable data pipelines and modern data platforms. The ideal candidate will help develop and maintain data infrastructure that supports analytics, machine learning, and business intelligence initiatives. This role requires hands-on experience working with large datasets, cloud-native architectures, and distributed data processing frameworks.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain <strong>scalable data pipelines and ETL/ELT workflows</strong> using Python and cloud technologies.</li><li>Develop and optimize data solutions using <strong>AWS services and Databricks</strong>.</li><li>Build and manage <strong>data lakes and data warehouses</strong> for structured and unstructured data.</li><li>Implement <strong>data transformation and processing pipelines</strong> using Apache Spark within Databricks.</li><li>Integrate data from multiple sources including APIs, databases, and streaming systems.</li><li>Ensure <strong>data quality, governance, security, and compliance</strong> across the data platform.</li><li>Monitor pipeline performance and troubleshoot <strong>data pipeline failures or latency issues</strong>.</li><li>Collaborate with <strong>data analysts, data scientists, and business stakeholders</strong> to deliver reliable datasets.</li><li>Optimize storage and compute costs within the AWS ecosystem.</li></ul><p><br></p>
<p>We are seeking a skilled <strong>Azure Data Engineer</strong> to design, build, and maintain scalable data solutions on the Microsoft Azure platform. The ideal candidate will have strong experience developing data pipelines, optimizing data architectures, and supporting analytics and business intelligence initiatives. This role will work closely with data analysts, data scientists, and business stakeholders to ensure reliable, high-quality data is available for reporting and advanced analytics.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, and maintain <strong>scalable data pipelines and ETL/ELT processes</strong> using Azure data services.</li><li>Build and manage data solutions using tools such as <strong>Azure Data Factory, Azure Synapse Analytics, Azure Data Lake, and Azure Databricks</strong>.</li><li>Develop and optimize <strong>data models, transformations, and storage strategies</strong> for large-scale structured and unstructured datasets.</li><li>Ensure <strong>data quality, integrity, and security</strong> across the data platform.</li><li>Monitor and troubleshoot data workflows, pipeline failures, and performance issues.</li><li>Collaborate with data analysts, BI developers, and data scientists to deliver reliable datasets for reporting and analytics.</li><li>Implement <strong>data governance and best practices</strong> for data management and documentation.</li><li>Automate data processes and deployments using <strong>CI/CD pipelines and infrastructure-as-code practices</strong>.</li><li>Optimize cost and performance of Azure data services.</li><li>Stay current with new Azure features, tools, and industry best practices.</li></ul><p><br></p>