We are looking for a talented Data Engineer to join our team in Glendale, California. In this long-term contract role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and platforms that support critical business operations. Through collaboration with cross-functional teams, you will contribute to innovative data solutions that enhance decision-making processes and drive operational excellence.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support the Core Data platform.<br>• Create tools and services to enhance data discovery, governance, and privacy.<br>• Collaborate with product managers, architects, and software engineers to ensure the success of data platforms.<br>• Apply technologies such as Airflow, Spark, Databricks, Delta Lake, and Kubernetes to build advanced data solutions.<br>• Establish and document best practices for pipeline configurations, naming conventions, and operational standards.<br>• Monitor and ensure the accuracy, reliability, and efficiency of datasets to meet service level agreements (SLAs).<br>• Participate in Agile and Scrum ceremonies to improve collaboration and team processes.<br>• Foster relationships with stakeholders to understand their needs and prioritize platform enhancements.<br>• Maintain detailed documentation to support data governance and quality initiatives.
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable, performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and maintain real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full-Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
<p><strong>Our client is seeking a Senior AWS Data Engineer for a long-term, multi-year assignment.</strong></p><p><br></p><p><strong>This role is onsite 4 days/week in Torrance, CA.</strong></p><p><br></p><p>This role supports and enhances enterprise business intelligence and analytics environments, focusing on designing, building, and maintaining scalable data pipelines and cloud‑based data platforms using AWS services. The ideal candidate brings deep hands‑on experience with AWS Glue, PySpark, Redshift, and serverless architectures, along with strong SQL and data analysis skills.</p><p>This role will collaborate closely with architecture, security, compliance, and development teams to ensure data solutions are performant, secure, and compliant with regulatory requirements.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, build, and maintain scalable ETL/ELT pipelines using AWS Glue with PySpark for large‑scale data processing</li><li>Develop and support serverless integrations using AWS Lambda for event‑driven workflows and system integrations</li><li>Design and optimize Amazon Redshift data warehouse solutions, including:<ul><li>Advanced SQL analytics</li><li>Stored procedures</li><li>Performance tuning</li></ul></li><li>Lead implementation of secure vendor file transfer and ingestion solutions using AWS Transfer Family</li><li>Design and implement database migration and replication pipelines using AWS Database Migration Service (DMS)</li><li>Build and manage workflow orchestration using Apache Airflow or similar orchestration tools</li><li>Analyze data quality, transformation logic, and pipeline performance using SQL and data analysis techniques</li><li>Troubleshoot and resolve production data pipeline and integration issues across AWS services</li><li>Provide technical guidance to development team members on:<ul><li>AWS best practices</li><li>Cost optimization</li><li>Performance optimization</li></ul></li><li>Partner with enterprise architecture, security, and compliance teams to ensure SOX and regulatory compliance</li></ul>
We are looking for a Database Technology Product Manager to lead the strategy, development, and launch of advanced data-driven products within a hospitality environment in California. This contract-to-permanent opportunity is ideal for an experienced product leader who can connect executive priorities with technical execution, while shaping new capabilities that turn complex customer data into meaningful business insight. The right candidate will bring a strong command of machine learning, data science, and hospitality consumer behavior, along with the presence and judgment to communicate effectively at the board level.<br><br>Responsibilities:<br>• Lead the full product lifecycle for database and data science initiatives, from early concept development through deployment, measurement, and ongoing enhancement.<br>• Act as the central point of coordination between executive leadership, business stakeholders, and engineering teams to keep priorities aligned and delivery on track.<br>• Establish and grow a technical product management function focused on research and development initiatives within the organization.<br>• Define product strategy, investment priorities, and roadmap decisions for proprietary data products that support customer insight and business growth.<br>• Develop a data science inference capability that converts large and complex datasets into recommendations, decision support, and behavioral signals across hospitality experiences.<br>• Deliver updates to senior executives and board-level audiences, clearly outlining progress, risks, outcomes, and future plans.<br>• Guide prioritization and governance discussions by evaluating trade-offs, setting benchmarks, and supporting funding-related decision processes.<br>• Partner with technical teams to translate machine learning, data modeling, and analytics concepts into practical product direction and measurable value.<br>• Help bring specialized products to market responsibly while serving as a 
strategic interface between developers and customer-facing partners.
<p>We are seeking an experienced Senior Business Intelligence Developer to design, develop, and support enterprise‑level reporting and analytics solutions. This role focuses on building high‑performance dashboards, semantic layers, and analytical datasets using Power BI, SAP BusinessObjects, and complex SQL across multiple database platforms.</p><p>The ideal candidate has deep BI development experience, strong data modeling skills, and the ability to partner closely with business stakeholders, data engineering, and architecture teams to deliver accurate, scalable reporting solutions.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, develop, and maintain enterprise BI reports, dashboards, and analytical solutions</li><li>Build advanced dashboards and data models using Power BI, including DAX and performance optimization</li><li>Design and support enterprise reporting solutions using SAP BusinessObjects</li><li>Partner with business stakeholders to gather, analyze, and document reporting and analytics requirements</li><li>Develop complex SQL queries, stored procedures, and optimized data retrieval logic</li><li>Perform backend data analysis across SQL Server, DB2, and Amazon Redshift databases</li><li>Design optimized datasets and semantic layers to support BI and analytics tools</li><li>Troubleshoot data discrepancies, reporting inconsistencies, and performance issues</li><li>Implement data validation and reconciliation logic to ensure reporting accuracy</li><li>Optimize database queries and stored procedures for high‑performance analytics workloads</li><li>Collaborate closely with Data Engineering and Architecture teams to align BI solutions with enterprise data platforms</li><li>Support production BI environments and resolve reporting‑related incidents</li></ul>
We are looking for an Artificial Intelligence (AI) Engineer to join a machinery manufacturing organization in Irvine, California on a contract basis with the potential for a permanent role. In this role, you will work closely with business stakeholders to turn practical needs into scalable AI-driven solutions, with a strong focus on Microsoft Copilot, Power Platform, and workflow automation. This opportunity is ideal for someone who combines hands-on development skills with solution architecture experience and can move ideas quickly from proof of concept into production-ready delivery.<br><br>Responsibilities:<br>• Collaborate with departments such as HR, Finance, and Marketing to uncover high-value AI opportunities and define solution goals.<br>• Convert business needs into technical designs for agent-based applications and intelligent workflow solutions.<br>• Build and enhance AI agents using Copilot Studio along with Microsoft Power Platform technologies.<br>• Develop automated, multi-step processes through Power Automate and connected enterprise integrations.<br>• Produce architecture documentation and implementation plans that support secure, scalable deployment.<br>• Provide technical direction and day-to-day guidance to distributed engineering resources, including offshore team members.<br>• Deliver working solutions within short engagement cycles while maintaining quality and business alignment.<br>• Lead initiatives from early prototype stages through enterprise deployment and operational readiness.<br>• Present proposed designs and completed solutions to architecture, security, and cloud governance stakeholders.
<p>We are looking for a Software Developer to create modern internal applications that simplify complex business operations and improve how teams work together. This position blends full-stack development, user-centered design, and systems integration to deliver practical tools that turn manual processes into efficient digital workflows. The role also supports the incorporation of AI-driven functionality into existing platforms through intuitive, reliable web experiences.</p><p><br></p><p>Responsibilities:</p><p>• Build and enhance internal web applications and external-facing tools that improve usability, security, and operational efficiency.</p><p>• Work with business partners and technical stakeholders to translate manual or fragmented processes into structured, database-backed solutions.</p><p>• Develop workflow features such as routing, approvals, progress tracking, alerts, and exception handling to support accountability across teams.</p><p>• Create responsive user interfaces, including forms, dashboards, tables, and status views that help users act quickly on important information.</p><p>• Design and maintain back-end services in C# and .NET to support application logic, integrations, and scalable data exchange.</p><p>• Develop SQL Server databases and data models that centralize information previously managed through disconnected files or email-based processes.</p><p>• Integrate third-party and internal APIs, including AI services, to support automated inputs, outputs, and user-facing results within business applications.</p><p>• Write Python scripts and automation components for data processing, workflow support, and AI-related integration tasks.</p><p>• Produce maintainable, well-documented code and integration patterns that support long-term stability and ease of future development.</p>