<p><strong>Network Engineer (Network Systems Engineer)</strong></p><p><strong>Location:</strong> Ventura, CA or Santa Barbara/Goleta, CA</p><p><strong>Salary:</strong> $110,000 – $125,000</p><p><br></p><p>A growing, high-performing technology services team is seeking an experienced Network Engineer to design, implement, and support modern infrastructure environments across diverse client landscapes. This role offers the opportunity to work directly with senior leadership and contribute to projects ranging from on-premises systems to large-scale cloud deployments.</p><p><br></p><p><strong>What You’ll Do</strong></p><ul><li>Design and implement technical solutions spanning on-premises infrastructure and cloud environments</li><li>Troubleshoot complex network and systems issues across multiple client environments</li><li>Manage and enforce network security measures to ensure system reliability and protection</li><li>Create and maintain clear technical documentation, procedures, and user guides</li><li>Mentor and support junior engineers, contributing to a collaborative team culture</li></ul><p><br></p><p><strong>Work Environment</strong></p><ul><li>Hybrid flexibility: work remotely when projects allow, with on-site presence required at the office or client locations as needed</li><li>Opportunities available in both Ventura and Santa Barbara/Goleta areas</li></ul><p><br></p><p><strong>Compensation & Benefits</strong></p><ul><li>Competitive salary range of $110,000–$125,000</li><li>401(k) with employer match (50% up to 6%)</li><li>Generous PTO accrual (6.25 hours per pay period, 26 pay periods annually)</li><li>Comprehensive medical, dental, and vision coverage</li><li>$5,000 annual continuing education allowance to support your professional growth</li></ul><p>This is an opportunity to take ownership of impactful infrastructure projects while continuing to grow your technical expertise in a supportive and forward-thinking environment.</p>
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 – $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable, performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
<p><strong>Job / Position Description</strong></p><p><strong>Prototyping & Innovation</strong></p><ul><li>Develop rapid prototypes, demos, and proofs of concept using in‑market generative AI models and tools</li><li>Support business decision‑making, project scoping, and adoption of emerging AI technology</li><li>Primary focus on <strong>AI tools for visual storytelling</strong>, including imagery and video</li></ul><p><strong>Testing & Evaluation</strong></p><ul><li>Build tooling and methodologies to evaluate emerging AI models, APIs, and platforms</li><li>Assess prototype quality, controllability, performance, and production readiness</li><li>Debug, refine, and iterate on prototypes with minimal technical supervision</li></ul><p><strong>Optimization & Adaptation</strong></p><ul><li>Optimize and adapt existing state‑of‑the‑art AI models for quality, efficiency, and usability</li><li>Perform <strong>model fine‑tuning</strong> (e.g., LoRA training) where appropriate</li><li>The role does <strong>not</strong> require training models from scratch</li></ul><p><strong>Cross‑Functional Collaboration</strong></p><ul><li>Partner closely with creative, production, and technical teams</li><li>Iterate on solutions based on stakeholder feedback and evolving creative needs</li><li>Provide technical requirements and direction to vendors or external partners when applicable</li></ul><p><strong>Documentation & Communication</strong></p><ul><li>Document technical learnings, workflows, and best practices</li><li>Communicate complex technical concepts clearly to non‑technical stakeholders</li><li>Present work and outcomes to senior leadership when required</li></ul><p><strong>Core Expectations</strong></p><ul><li><strong>Hands‑on technical role</strong> (not a project manager or business analyst position)</li><li>Ability to:<ul><li>Work deeply with <strong>generative AI video models</strong></li><li>Use tools such as <strong>ComfyUI</strong></li><li>Write and modify code</li><li>Perform model fine‑tuning (e.g., LoRA)</li></ul></li><li>Strong communication skills are critical, including executive‑level presentations</li></ul>
<p><strong>Data Engineer – CRM Integration (Hybrid in San Fernando Valley)</strong></p><p><strong>Location:</strong> San Fernando Valley (Hybrid – 3x per week onsite)</p><p><strong>Compensation:</strong> $140K–$170K annual base salary</p><p><strong>Job Type:</strong> Full Time, Permanent</p><p><strong>Overview:</strong></p><p>Join our growing technology team as a Data Engineer with a focus on CRM data integration. This permanent role will play a key part in supporting analytics and business intelligence across our organization. The position offers a collaborative hybrid environment and highly competitive compensation.</p><p><strong>Responsibilities:</strong></p><ul><li>Design, develop, and optimize data pipelines and workflows integrating multiple CRM systems (Salesforce, Dynamics, HubSpot, NetSuite, or similar).</li><li>Build and maintain scalable data architectures for analytics and reporting.</li><li>Manage and advance CRM data integrations, including real-time and batch processing solutions.</li><li>Deploy ML models, automate workflows, and support model serving using Azure Databricks (MLflow experience preferred).</li><li>Utilize Azure Synapse Analytics & Pipelines for high-volume data management.</li><li>Write advanced Python and Spark SQL code for ETL, transformation, and analytics.</li><li>Collaborate with BI and analytics teams to deliver actionable insights using Power BI.</li><li>Support streaming solutions with technologies like Kafka, Event Hubs, and Spark Streaming.</li></ul><p><br></p>
We are looking for a talented Data Engineer to join our team in Glendale, California. In this long-term contract role, you will be instrumental in designing, developing, and maintaining scalable data pipelines and platforms that support critical business operations. Through collaboration with cross-functional teams, you will contribute to innovative data solutions that enhance decision-making processes and drive operational excellence.<br><br>Responsibilities:<br>• Develop, maintain, and optimize data pipelines to support the Core Data platform.<br>• Create tools and services to enhance data discovery, governance, and privacy.<br>• Collaborate with product managers, architects, and software engineers to ensure the success of data platforms.<br>• Apply technologies such as Airflow, Spark, Databricks, Delta Lake, and Kubernetes to build advanced data solutions.<br>• Establish and document best practices for pipeline configurations, naming conventions, and operational standards.<br>• Monitor and ensure the accuracy, reliability, and efficiency of datasets to meet service level agreements (SLAs).<br>• Participate in agile and scrum ceremonies to improve collaboration and team processes.<br>• Foster relationships with stakeholders to understand their needs and prioritize platform enhancements.<br>• Maintain detailed documentation to support data governance and quality initiatives.