We are looking for an experienced Senior Data Engineer with a strong background in Python and modern data engineering tools to join our team in West Des Moines, Iowa. This is a long-term contract position that requires expertise in designing, building, and optimizing data pipelines and working with cloud-based data warehouses. If you thrive in a collaborative environment and have a passion for transforming raw data into actionable insights, we encourage you to apply.<br><br>Responsibilities:<br>• Develop, debug, and optimize Python-based data pipelines and supporting services built with frameworks such as Flask, Django, or FastAPI.<br>• Design and implement data transformations in a data warehouse using tools like dbt, ensuring high-quality analytics-ready datasets.<br>• Utilize Amazon Redshift and Snowflake for managing large-scale data storage and performing advanced querying and optimization.<br>• Automate data integration processes using platforms like Fivetran and orchestration tools such as Prefect or Airflow.<br>• Build reusable and maintainable data models to improve performance and scalability for analytics and reporting.<br>• Conduct data analysis and visualization leveraging Python libraries such as NumPy and Pandas, along with machine learning frameworks such as TensorFlow and PyTorch.<br>• Manage version control for data engineering projects using Git and GitHub.<br>• Ensure data quality through automated testing and validation processes.<br>• Document workflows, code, and data transformations following best practices for readability and maintainability.<br>• Optimize cloud-based data warehouse and lake platforms for performance and integration of new data sources.
Our client is an early-stage, high-growth startup building products that are actively used and loved by real users. They are looking for a Full Stack Engineer (3–6 years of experience) who is excited about building impactful products in a fast-paced startup environment and who has an interest in or exposure to AI. This is a fully onsite role in San Francisco (candidates must already live in the San Francisco Bay Area to be considered).<br><br>About the Role:<br>As a Full Stack Engineer, you’ll play a key role in designing, developing, and maintaining modern web applications. You’ll work across the stack to build clean, scalable features and collaborate closely with a small, highly motivated team. This is an opportunity for someone who genuinely enjoys building things, especially products that people use every day.<br><br>What You’ll Do:<br>• Design, develop, and maintain full stack applications<br>• Build user-facing features using React and Next.js<br>• Develop and integrate backend services using Python (Flask)<br>• Write clean, efficient, and maintainable TypeScript code<br>• Debug, test, and optimize application performance<br>• Collaborate closely with cross-functional teammates in a fast-moving startup environment<br>• Contribute to AI-powered features and generative AI initiatives
We are looking for a highly skilled Data Scientist to contribute to a long-term contract position within the healthcare industry. This role focuses on supporting the enterprise-wide launch of Power BI by creating and delivering engaging, high-quality learning materials. The ideal candidate will work remotely, collaborating closely with leadership and subject matter experts to empower analytics and non-analytics professionals to efficiently use Power BI in their daily tasks.<br><br>Responsibilities:<br>• Develop scalable learning experiences tailored to diverse user personas and varying levels of technical expertise.<br>• Collaborate with the data literacy program team and Power BI specialists to ensure instructional content aligns with program objectives.<br>• Translate complex concepts related to Power BI and business intelligence into accessible and engaging educational materials.<br>• Design and deliver training programs using instructional design best practices and tools such as Camtasia, Adobe Creative Suite, or Articulate.<br>• Conduct user interviews to understand learning challenges and tailor content to meet specific needs.<br>• Enhance or create new data literacy resources, such as courses, modules, and curricula, to address emerging needs and best practices.<br>• Evaluate and adapt existing educational materials to make them sustainable and applicable across the organization.<br>• Participate in marketing efforts for the Data Literacy Program, including speaking engagements, blog posts, and other creative channels.<br>• Identify opportunities for new program initiatives that support analytics tools and data literacy.<br>• Serve as a subject matter expert in data literacy on national platforms through networking and conference participation.
<p><strong>Responsibilities:</strong></p><ul><li>Collect, process, and analyze large structured and unstructured datasets to identify meaningful trends, patterns, and opportunities for business improvement</li><li>Develop, test, and deploy predictive models, machine learning algorithms, and statistical analyses to address key business challenges </li><li>Collaborate with cross-functional teams, including business analysts, engineers, and stakeholders, to identify analytics solutions and align deliverables with strategic goals </li><li>Communicate complex findings and recommendations clearly to technical and non-technical audiences through reports, dashboards, and visualizations</li><li>Automate repetitive tasks, streamline data flows, and ensure data quality and governance throughout the analytics lifecycle</li><li>Stay updated on industry trends, emerging technologies, and best practices in data science and AI to continuously enhance solutions</li></ul><p><br></p>
<p>We are looking for a highly skilled Angular/AI Engineer to join our team. The ideal candidate will have extensive experience in Angular development and a strong background in integrating AI-driven solutions into frontend applications.</p><p><br></p><p>Responsibilities:</p><p>• Design and develop dynamic Single Page Applications (SPAs) using Angular, TypeScript, HTML5, and CSS3.</p><p>• Optimize application performance by identifying and addressing bottlenecks, ensuring scalability and responsiveness.</p><p>• Maintain high code quality standards through effective code reviews and adherence to Angular best practices.</p><p>• Provide technical leadership by guiding architectural decisions and mentoring entry-level developers.</p><p>• Collaborate with backend teams to integrate RESTful APIs and implement robust testing strategies using tools such as Jasmine, Karma, or Cypress.</p><p>• Participate actively in Agile Scrum processes, including sprint planning, daily stand-ups, and retrospectives.</p><p>• Utilize AI-assisted development tools to enhance efficiency and streamline workflows.</p><p>• Implement CI/CD pipelines and manage version control systems such as Git.</p><p>• Integrate large language models (LLMs), including Claude, into Angular applications to enhance functionality.</p>
<p><strong>Mechanical Engineer</strong></p><p>Onsite | Austin, TX | Contract-to-Hire</p><p><br></p><p>Robert Half is partnering with a rapidly growing global manufacturing company to hire a Mechanical Engineer (contract-to-hire). You'll be working with a team of Server System Engineers to design and develop advanced server and PC chassis systems, in partnership with clients across the Austin-metro region. This position offers great advancement opportunities and the chance to learn from pioneers in the industry!</p><p><br></p><p><strong>Responsibilities: </strong></p><ul><li>Partner directly with clients to gather specs and requirements for machine design</li><li>Design and develop server and PC chassis systems using Creo 3D-CAD</li><li>Provide CAD designs and specifications to senior leadership for prototyping</li><li>Collaborate with cross-functional teams to ensure seamless project/product delivery</li><li>Support project management for machine design and R&D projects</li></ul>
We are looking for a skilled Data Engineer with expertise in AI/ML technologies and prior experience in the oil and gas industry to join our team in Houston, Texas. In this Contract to permanent position, you will play a key role in transforming data into actionable insights through advanced analytics and innovative solutions. This opportunity is ideal for professionals who thrive in data-driven environments and excel at leveraging tools like Power BI and PowerApps.<br><br>Responsibilities:<br>• Develop and manage Power BI dashboards and reports to deliver meaningful insights from raw data.<br>• Utilize PowerApps to create and maintain applications that support business intelligence initiatives.<br>• Collaborate with cross-functional teams to understand data requirements and implement solutions.<br>• Analyze complex datasets to identify trends and patterns that inform decision-making.<br>• Ensure the accuracy, reliability, and security of data within BI systems.<br>• Optimize data pipelines and workflows for improved performance and scalability.<br>• Provide technical expertise to support AI/ML integration into existing data processes.<br>• Stay updated on emerging technologies and best practices in data engineering and AI/ML.<br>• Troubleshoot and resolve issues related to data tools and processes.<br>• Document processes, workflows, and methodologies for future reference.
<p>Robert Half is seeking a <strong>Senior Software Engineer</strong> to support a <strong>financial services organization</strong> based in <strong>Bellevue, WA</strong>. This role involves building secure, scalable, cloud-native microservices in AWS while contributing to architectural decisions and engineering best practices across the team. The position is <strong>remote</strong>, and is a <strong>contract opportunity with potential to extend</strong>. Apply today!</p><p><br></p><p>Job Details:</p><p><strong>Schedule:</strong> Monday–Friday, standard business hours (PST)</p><p> <strong>Duration:</strong> 6+ month contract (potential to extend)</p><p> <strong>Location:</strong> Remote (WA preferred)</p><p><br></p><p>Job Responsibilities:</p><ul><li>Design, develop, and maintain secure, scalable backend services using Python.</li><li>Build and enhance cloud-native microservices and event-driven applications in AWS.</li><li>Develop reusable components and shared libraries to improve engineering efficiency.</li><li>Conduct peer code reviews and ensure adherence to engineering standards.</li><li>Implement unit, integration, and performance testing strategies to ensure production readiness.</li><li>Configure and maintain CI/CD pipelines to support automated builds and deployments.</li><li>Participate in architectural design discussions to drive scalable and secure solutions.</li><li>Support root cause analysis and resolution of production issues.</li><li>Apply best practices for observability, monitoring, disaster recovery, and performance tuning.</li><li>Mentor engineers and promote collaboration, knowledge sharing, and continuous improvement.</li></ul><p><br></p>
<p>We are looking for a skilled DevOps Cloud Engineer to join our team on a consulting basis.</p><p>This is a Remote role. </p><p>In this role, you will play a key part in managing and optimizing cloud infrastructure while implementing DevOps practices to enhance operational efficiency. The ideal candidate will have strong experience in cloud environments, automation tools, and collaboration across teams to drive strategic improvements.</p><p><br></p><p>Responsibilities:</p><p>• Manage and maintain CI/CD pipelines, ensuring seamless deployment processes using GitLab.</p><p>• Oversee the cloud infrastructure, focusing on services, resources, and their integration with data pipelines.</p><p>• Implement and maintain infrastructure-as-code solutions using Terraform to streamline cloud operations.</p><p>• Handle all aspects of infrastructure management, including deployments, upgrades, patching, and proactive security measures.</p><p>• Respond to security vulnerabilities and implement strategies to strengthen system protection.</p><p>• Advocate for a DevOps mindset by providing technical leadership and fostering collaboration between development and operations teams.</p><p>• Analyze and optimize cloud architecture to ensure scalability and reliability.</p><p>• Utilize Google Cloud Platform tools and applications to enhance system performance and functionality.</p><p>• Collaborate with teams to identify and resolve technical challenges effectively.</p>
We are looking for an experienced Lead Data Engineer to oversee the design, implementation, and management of advanced data infrastructure in Houston, Texas. This role requires expertise in architecting scalable solutions, optimizing data pipelines, and ensuring data quality to support analytics, machine learning, and real-time processing. The ideal candidate will have a deep understanding of Lakehouse architecture and Medallion design principles to deliver robust and governed data solutions.<br><br>Responsibilities:<br>• Develop and implement scalable data pipelines to ingest, process, and store large datasets using tools such as Apache Spark, Hadoop, and Kafka.<br>• Utilize cloud platforms like AWS or Azure to manage data storage and processing, leveraging services such as S3, Lambda, and Azure Data Lake.<br>• Design and operationalize data architecture following Medallion patterns to ensure data usability and quality across Bronze, Silver, and Gold layers.<br>• Build and optimize data models and storage solutions, including Databricks Lakehouses, to support analytical and operational needs.<br>• Automate data workflows using tools like Apache Airflow and Fivetran to streamline integration and improve efficiency.<br>• Lead initiatives to establish best practices in data management, facilitating knowledge sharing and collaboration across technical and business teams.<br>• Collaborate with data scientists to provide infrastructure and tools for complex analytical models, using programming languages like Python or R.<br>• Implement and enforce data governance policies, including encryption, masking, and access controls, within cloud environments.<br>• Monitor and troubleshoot data pipelines for performance issues, applying tuning techniques to enhance throughput and reliability.<br>• Stay updated with emerging technologies in data engineering and advocate for improvements to the organization's data systems.
We are looking for an experienced Platform / DevOps Engineer to join our team in Los Angeles, California. This role focuses on enhancing developer workflows, maintaining platform operations, and ensuring system observability to support production environments. As part of a long-term contract, you will play a key role in optimizing cloud resources, managing access controls, and troubleshooting issues across various tools and environments.<br><br>Responsibilities:<br>• Manage user access across Atlassian tools, Azure DevOps, GitHub, and other platforms, ensuring secure and compliant permissions.<br>• Process and oversee access requests using ServiceNow and internal workflows to maintain least-privilege access.<br>• Design, maintain, and troubleshoot CI/CD pipelines within Azure DevOps and GitHub Actions.<br>• Provide support for containerized applications using Docker and Kubernetes, including environment configuration.<br>• Collaborate with Systems Engineering teams to manage cloud resources and optimize configurations in Azure.<br>• Analyze system logs and metrics using Elastic tools to identify and resolve backend service issues.<br>• Investigate and troubleshoot issues related to server environments, databases, and backend services.<br>• Partner with engineering teams to identify root causes of system failures and implement preventive measures.<br>• Participate in incident response efforts and contribute to post-incident reviews and improvements.
<p><strong>Overview</strong></p><p>We are seeking a Senior Data Engineer to support a major Salesforce Phase 2 data migration initiative. This role will focus heavily on building and optimizing data pipelines, developing ETL workflows, and moving CRM data from Salesforce into Databricks.</p><p>The engineer will work closely with a senior team member, contribute to Scrum ceremonies, and play a key role in developing the core CRM data environment used by the advertising organization.</p><p><br></p><p><strong>Key Responsibilities</strong></p><p><strong>Data Engineering & Migration</strong></p><ul><li>Develop ETL jobs that move and transform Salesforce data into Databricks.</li><li>Build, test, and maintain high‑volume data pipelines across AWS + Databricks.</li><li>Perform data migration, data integration, and pipeline development (including Mulesoft-related work).</li><li>Ensure all pipelines are reliable, scalable, and optimized for production.</li></ul><p><strong>Development & Infrastructure</strong></p><ul><li>Use Python and PySpark to build ETL components and transformation logic.</li><li>Leverage Spark/PySpark for distributed processing at scale (must‑have).</li><li>Use Terraform to provision and manage cloud infrastructure.</li><li>Set up CI/CD pipelines using Concourse or GitHub Actions for automated deployments.</li></ul><p><strong>Quality, Documentation & Support</strong></p><ul><li>Document ETL processes, pipelines, and data flows.</li><li>Participate in testing, QA, and validation of migrated datasets.</li><li>Provide post‑delivery support and proactively mitigate project risks or single points of failure (SPOF).</li><li>Troubleshoot production issues and implement long‑term fixes to maintain pipeline stability.</li></ul><p><strong>Collaboration</strong></p><ul><li>Work closely with engineering teammates to translate business requirements into working pipelines.</li><li>Participate in weekly Scrum ceremonies.</li><li>Contribute to shared best practices and 
continuous improvement across the data engineering team.</li></ul><p><br></p>
<p><strong>For immediate response please message Valerie Nielsen on LinkedIn or email!</strong></p><p><br></p><p><strong>Job Title:</strong> Senior Data Engineer</p><p> <strong>Location:</strong> Hybrid – Westwood (Los Angeles, CA) near University of California, Los Angeles</p><p> <strong>Compensation:</strong> $175,000 – $185,000 base salary + 10% annual bonus</p><p> <strong>Employment Type:</strong> Full-Time</p><p><br></p><p>Overview</p><p>We are seeking a <strong>Senior Data Engineer</strong> to join a growing data team in <strong>Westwood, CA</strong>. This role will focus on designing and building scalable data pipelines, supporting analytics and reporting initiatives, and improving data infrastructure across the organization.</p><p>The ideal candidate is highly experienced with <strong>Snowflake, dbt, Python</strong>, and modern data pipeline architecture, and enjoys working closely with analytics and business teams to deliver reliable, high-quality data. Experience integrating data from CRM platforms such as <strong>Salesforce</strong> is a strong plus.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, and maintain <strong>scalable data pipelines</strong> supporting analytics, reporting, and operational data needs</li><li>Build and optimize data models and transformations using <strong>dbt</strong> within a <strong>Snowflake</strong> data warehouse environment</li><li>Develop robust ETL/ELT workflows using <strong>Python</strong> and modern data engineering best practices</li><li>Collaborate with analytics teams to deliver clean, reliable datasets used in <strong>Power BI</strong> dashboards and reporting</li><li>Ensure data quality, reliability, and performance across the data platform</li><li>Optimize Snowflake warehouse performance and manage cost-efficient data storage and compute usage</li><li>Integrate data from internal and external systems, including CRM and SaaS platforms</li><li>Partner with stakeholders across engineering, 
product, and business teams to define data requirements and solutions</li><li>Maintain documentation and promote data engineering standards and best practices</li></ul><p><br></p><p><br></p>
We are looking for a highly skilled Senior Data Engineer to join our team in Edgewood, New York. This role is ideal for someone who is detail-oriented and has expertise in developing scalable data pipelines, modeling data structures, and optimizing data infrastructure for performance and reliability. The right candidate will play a key role in shaping our data engineering function and collaborating with cross-functional teams to deliver impactful solutions.<br><br>Responsibilities:<br>• Design and maintain efficient and scalable data pipelines to support various operational and commercial systems.<br>• Develop and manage modern data warehouse infrastructure using tools such as BigQuery and dbt.<br>• Integrate, transform, and organize data from multiple sources into structured, queryable formats.<br>• Create and manage logical and physical data models to enhance analytics and reporting capabilities.<br>• Collaborate with stakeholders to enable self-service reporting and build dashboards using platforms like Looker and Looker Studio.<br>• Implement best practices for data engineering, including testing, monitoring, and ensuring pipeline reliability.<br>• Optimize the performance, scalability, and cost-efficiency of data pipelines and warehouses.<br>• Partner with engineering, operations, and business teams to translate data needs into scalable solutions.<br>• Contribute to the improvement of engineering processes, coding standards, and documentation.<br>• Mentor team members and support onboarding as the team grows.
<p>We are seeking a Senior Data Engineer – Ingest to help transform data into meaningful insights and power innovation across the organization. In this role, you will work with a collaborative team of technologists to build scalable data solutions, integrate diverse data sources, and strengthen the core data platform. Your engineering expertise will directly support analytics, data science, operations, and key business stakeholders.</p><p>If you’re passionate about building high‑quality data systems that make a measurable impact, this role offers the opportunity to shape the future of a large, data‑driven organization.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Maintain, update, and expand configuration‑driven data pipelines within the core data platform.</li><li>Build tools and services supporting data discovery, lineage, governance, and privacy.</li><li>Partner with software engineers, data engineers, architects, and product managers to deliver reliable and scalable data solutions.</li><li>Help define and document data standards, naming conventions, pipeline best practices, and system guidelines.</li><li>Ensure the reliability, accuracy, and operational efficiency of datasets to meet SLAs.</li><li>Participate in Agile/Scrum ceremonies and contribute to ongoing process improvements.</li><li>Collaborate closely with users and stakeholders to understand needs and prioritize enhancements.</li><li>Maintain detailed technical documentation to support data quality, governance, and compliance requirements.</li></ul><p><br></p>
<p><strong>Responsibilities:</strong></p><ul><li>Architect and deliver modern data platform solutions with a strong emphasis on Databricks and contemporary cloud data technologies.</li><li>Build secure, scalable, and high‑performing data environments that enable analytics, reporting, and enterprise‑wide data initiatives.</li><li>Oversee and execute migrations from legacy relational databases into Databricks-based ecosystems.</li><li>Design and structure scalable data pipelines and foundational data infrastructure aligned with organizational goals.</li><li>Create and maintain ETL/ELT processes within Databricks to ensure efficient ingestion, transformation, and delivery of data.</li><li>Continuously refine and optimize data workflows to improve performance, stability, and data quality across all processes.</li><li>Manage end-to-end data transitions to ensure operational continuity with minimal business disruption.</li><li>Monitor Databricks workloads and optimize performance, scalability, and cost efficiency across compute and storage layers.</li><li>Partner with data engineers, scientists, analysts, and product stakeholders to gather requirements and build fit‑for‑purpose data solutions.</li><li>Establish and enforce data engineering best practices, development standards, and architectural guidelines.</li><li>Assess emerging tools and technologies to enhance pipeline efficiency, reliability, and automation capabilities.</li><li>Provide technical direction, guidance, and mentorship to junior engineers and team members.</li><li>Collaborate closely with DevOps and infrastructure teams to deploy, manage, and support data systems in production.</li><li>Ensure all data solutions meet compliance standards, organizational security policies, and regulatory obligations.</li><li>Work with enterprise architects and IT leadership to align data architecture with broader technology strategies and long-term roadmaps.</li></ul>
We are looking for an experienced Senior Data Engineer to join our team in Woodbury, Minnesota. In this role, you will play a key part in designing and optimizing data systems, ensuring scalability and reliability for business-critical operations. The ideal candidate will have a strong background in data engineering and a passion for leveraging technology to drive impactful solutions.<br><br>Responsibilities:<br>• Redesign and optimize complex business logic embedded in Postgres functions to improve functionality.<br>• Develop scalable database schemas and create data models that are optimized for analytics and AI applications.<br>• Implement database partitioning, indexing, and performance tuning to ensure data growth is supported efficiently.<br>• Build and maintain production-grade data pipelines from data ingestion to end-user consumption.<br>• Establish robust processes for data quality assurance, monitoring, and operational reliability within pipelines.<br>• Troubleshoot and resolve data-related and performance issues directly in production environments.<br>• Collaborate with cross-functional teams to ensure seamless integration of data systems into business processes.
<p>We are looking for an experienced Senior Data Engineer to join our team in Boston, Massachusetts. In this role, you will be responsible for designing and building a robust data platform from the ground up, playing a pivotal part in shaping the data strategy and supporting AI-driven initiatives. This is a unique opportunity to contribute to the creation of a new data engineering function within a dynamic financial services environment. This role is hybrid, onsite in Boston 3 days a week. </p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and implement a scalable data platform using Microsoft Fabric and other technologies within the Microsoft ecosystem.</p><p>• Collaborate with stakeholders to define the data strategy and implement solutions that align with business goals.</p><p>• Oversee and manage external consultants assisting with the development of the data platform.</p><p>• Support AI enablement initiatives by ensuring the data architecture meets analytical and operational needs.</p><p>• Create and maintain ETL processes to ensure efficient data extraction, transformation, and loading.</p><p>• Optimize database performance across SQL, NoSQL, and other database systems.</p><p>• Utilize Python for data engineering tasks, including scripting and automation.</p><p>• Work closely with IT and analytics teams to ensure seamless integration of the data platform into existing systems.</p><p>• Provide technical leadership and guidance while exploring future opportunities to build and expand the data engineering function.</p><p>• Ensure compliance with industry standards and best practices in data security and management.</p>
<p><strong>Overview</strong></p><p> We are seeking a <strong>Power BI Developer</strong> to serve as the technical owner of the organization’s Power BI and analytics environment. This role focuses on platform ownership, data modeling, governance, and maintaining a stable analytics ecosystem rather than primarily building reports. The ideal candidate is a hands-on technical leader with strong experience in <strong>Power BI, SQL, and ETL processes</strong>, who can support the analytics platform while collaborating with data engineers, business stakeholders, and cross-functional teams.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Serve as the technical lead and primary owner of the Power BI analytics environment.</li><li>Act as the first point of contact for platform issues and ensure ongoing system stability and performance.</li><li>Manage and maintain security, access controls, and governance across Power BI workspaces and datasets.</li><li>Monitor and support development and production analytics environments.</li><li>Partner with data engineering and IT teams to understand upstream data pipelines and ensure data is structured for reporting and analytics.</li><li>Design, build, and maintain semantic and data models within Power BI.</li><li>Develop and optimize complex <strong>SQL queries</strong> and <strong>ETL processes</strong> to support analytics and reporting needs.</li><li>Build scalable Power BI data models using <strong>star schema and dimensional modeling</strong> best practices.</li><li>Ensure consistent documentation, deployment standards, and version control across analytics assets.</li><li>Communicate effectively with business users and stakeholders to support reporting and data needs.</li></ul>
<p><strong>M365 Implementation Engineer</strong></p><p>Location: Remote</p><p>Department: Professional Services</p><p>Type: Full-Time</p><p><br></p><p><strong>About the Role</strong></p><p>We are seeking an experienced M365 Implementation Engineer to join our Professional Services team. This position combines hands-on engineering work with frequent client interaction. You will design, deploy, and optimize Microsoft 365 solutions while collaborating directly with customers through daily video calls, workshops, and project updates.</p><p>This role is ideal for someone who enjoys both the technical side of M365 and the client-facing aspects of consulting.</p><p><br></p><p><strong>What You’ll Do</strong></p><ul><li>Lead the deployment, configuration, and migration of Microsoft 365 services in client environments.</li><li>Deliver solutions across collaboration, communication, and cloud productivity platforms within the M365 ecosystem.</li><li>Meet with clients regularly through video calls to gather requirements, present progress, and provide technical guidance.</li><li>Develop automated workflows, apps, and dashboards using the Power Platform to streamline business processes.</li><li>Implement best practices around identity, governance, compliance, and security within the M365 tenant.</li><li>Troubleshoot escalated issues and support clients throughout project delivery.</li><li>Work closely with project managers and stakeholders to translate requirements into effective technical solutions.</li></ul><p><br></p>
<p>We’re looking for an <strong>Identity & Access Management Engineer</strong> to design, implement, and maintain secure authentication and authorization solutions across enterprise applications and cloud platforms. This role focuses on <strong>SSO, MFA, identity federation, and access governance</strong>, partnering closely with security, infrastructure, and application teams.</p><p><strong>Key Responsibilities</strong></p><ul><li>Design and support <strong>SSO and MFA</strong> solutions across internal and external applications</li><li>Implement and manage <strong>identity federation</strong> (SAML, OAuth2, OIDC)</li><li>Integrate IAM platforms with SaaS, cloud, and on‑prem applications</li><li>Manage <strong>user lifecycle provisioning/deprovisioning</strong></li><li>Conduct <strong>access reviews</strong>, entitlement audits, and policy enforcement</li><li>Support compliance initiatives (SOC 2, HIPAA, SOX, etc.)</li><li>Troubleshoot authentication, authorization, and access issues</li><li>Collaborate with security teams on zero‑trust and least‑privilege initiatives</li></ul><p><br></p>
We are looking for an AI Engineer to lead the integration of artificial intelligence and automation into cutting-edge systems. Based in Dallas, Texas, this role is an exciting opportunity to design, develop, and deploy AI-powered applications while leveraging cloud-native technologies and modern engineering practices. The ideal candidate will be passionate about utilizing AI tools to enhance workflows, streamline processes, and drive innovation.<br><br>Responsibilities:<br>• Develop and maintain user interfaces and backend services using modern programming frameworks.<br>• Design and optimize relational database schemas, queries, and stored procedures.<br>• Integrate AI models and services into applications to improve functionality and workflows.<br>• Build and deploy automation workflows using tools such as n8n and Zapier.<br>• Utilize AI-assisted development tools like Copilot and ChatGPT to enhance engineering efficiency.<br>• Identify and implement opportunities to streamline processes through AI and automation.<br>• Develop and deploy solutions within Microsoft Azure, ensuring scalability and security.<br>• Collaborate on cloud platform identity, integration, and security best practices.<br>• Work within a Windows-based environment while leveraging Microsoft 365 tools.<br>• Ensure high performance, scalability, and security across all application layers.
<p>Position Overview</p><p>We are seeking a Data Governance & Data Quality Platform Engineer to own the technical administration, integration, and optimization of enterprise data governance and data quality platforms (e.g., Atlan, Monte Carlo). This role ensures governance and quality tools are scalable, securely integrated into the enterprise data ecosystem, and maintained for high availability and performance.</p><p>The ideal candidate brings strong platform engineering skills, experience automating data quality and metadata workflows, and a solid understanding of governance, compliance, and modern data architectures.</p><p>Key Responsibilities</p><p><br></p><p>1. Platform Engineering & Administration</p><ul><li>Configure and maintain data governance platforms for metadata management, data lineage, and governance workflows</li><li>Configure data quality tools for profiling, rule creation, and monitoring dashboards</li><li>Manage platform security, including user roles, authentication, SSO, RBAC, and access controls</li></ul><p>2. Integration & Automation</p><ul><li>Develop and maintain integrations across data sources, databases, data lakes, and BI tools</li><li>Automate metadata ingestion and data quality checks using APIs, Python scripts, or ETL frameworks</li><li>Configure and maintain connectors for analytics and reporting platforms</li></ul><p>3. Performance, Reliability & Monitoring</p><ul><li>Monitor platform health and optimize performance and scalability</li><li>Apply upgrades and patches, and troubleshoot technical issues</li><li>Implement logging, alerting, and proactive monitoring for governance and data quality environments</li></ul><p>4. Technical Support & Issue Resolution</p><ul><li>Provide Tier 3 support for platform‑related incidents and escalations</li><li>Debug integration failures and resolve configuration conflicts</li><li>Collaborate with vendors for advanced troubleshooting and roadmap alignment</li></ul><p>5. 
Security, Compliance & Risk Management</p><ul><li>Ensure platforms comply with data privacy and security standards (e.g., GDPR, CCPA)</li><li>Implement encryption, audit logging, and access controls</li><li>Support compliance reporting and risk assessments using governance and data quality metrics</li></ul>
<p>Join a product‑focused team building scalable web applications on the Microsoft stack. You’ll ship features across the full stack using <strong>.NET/C#</strong> on the backend and <strong>React (or similar)</strong> on the frontend, deploy to <strong>Azure</strong>, and improve performance, reliability, and developer experience.</p><p><strong>What You’ll Do</strong></p><ul><li>Build RESTful APIs, services, and backend components using .NET/C#</li><li>Develop modern frontends (React preferred) with clean, reusable components</li><li>Deploy and run apps on Azure (App Service, Functions, ACR/AKS, Storage, Key Vault)</li><li>Work with relational databases (SQL Server/Azure SQL), including query tuning and optimization</li><li>Implement CI/CD (Azure DevOps/GitHub Actions), testing, and observability</li><li>Collaborate with Product/Design to deliver user‑centric features</li><li>Write clean, maintainable code and documentation</li></ul><p><br></p>
Must have skills: <br>• 3–6 years of professional software engineering experience, with a strong portfolio of full stack development work. <br>• Proficiency in Python, including experience with web frameworks such as Flask or Dash. <br>• Experience integrating frontend applications with RESTful APIs and backend services. <br>• Working with relational and non-relational databases (SQL, MongoDB, and/or Snowflake) from Python. <br>• Designing data models for effective data storage and retrieval (preferably SQL, MongoDB, Snowflake). <br>• Debugging, issue resolution, and troubleshooting. <br>• Developing systems integrated with cloud services, such as storage or secrets management (preferably AWS). <br>• Designing and troubleshooting ETL pipelines. <br>• Developing REST APIs using Python frameworks (preferably Flask). <br>• Publishing and maintaining Python packages, and building Python CLI tools. <br>• Deploying REST APIs in containerized environments (Kubernetes) and collaborating with other developers on the team to integrate those APIs with web applications. <br> <br>Nice to have skills: <br>• Exposure to financial systems, SEC APIs, and/or corporate credit modeling is strongly preferred. <br>• Familiarity with UX design tools (Figma) and a solid understanding of the design-engineering hand-off process. <br>• Familiarity with deployment pipelines and CI/CD tools (preferably GitLab). <br>• Configuring observability and alerting services (preferably Datadog and Opsgenie). <br>• Containerized development and deployment (e.g., Docker, Kubernetes). <br>• Writing infrastructure as code (preferably Terraform). <br>• Integrating managed authentication services (preferably Auth0). <br>• Familiarity with LLM document parsing and data framework services (preferably LlamaParse and LlamaIndex). <br>• Familiarity with LLM observability tooling (preferably Weave). <br>• Experience with the OpenAI SDK. <br>• Experience with vector databases (preferably MongoDB).