<p><strong>AWS Data Engineer</strong></p><p><br></p><p><strong>Position Overview</strong></p><p>We are seeking a skilled IT Data Integration Engineer / AWS Data Engineer to join our team and lead the development and optimization of data integration processes. This role is critical to ensuring seamless data flow across systems, enabling high-quality, consistent, and accessible data to support business intelligence and analytics initiatives. This is a long-term contract role in Southern California.</p><p><strong>Key Responsibilities</strong></p><p>Develop and Maintain Data Integration Solutions</p><ul><li>Design and implement data workflows using AWS Glue, EMR, Lambda, and Redshift.</li><li>Utilize PySpark, Apache Spark, and Python to process large datasets.</li><li>Ensure accurate and efficient ETL (Extract, Transform, Load) operations.</li></ul><p>Ensure Data Quality and Integrity</p><ul><li>Validate and cleanse data to maintain high standards of quality.</li><li>Implement monitoring, validation, and error-handling mechanisms.</li></ul><p>Optimize Data Integration Processes</p><ul><li>Enhance performance and scalability of data workflows on AWS infrastructure.</li><li>Apply data warehousing concepts including star/snowflake schema design and dimensional modeling.</li><li>Fine-tune queries and optimize Redshift performance.</li></ul><p>Support Business Intelligence and Analytics</p><ul><li>Translate business requirements into technical specifications and data pipelines.</li><li>Collaborate with analysts and stakeholders to deliver timely, integrated data.</li></ul><p>Maintain Documentation and Compliance</p><ul><li>Document workflows, processes, and technical specifications.</li><li>Ensure adherence to data governance policies and regulatory standards.</li></ul>
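<p>For illustration only, the sketch below shows the kind of PySpark ETL and data-quality work described above; the bucket paths, column names, and rejection threshold are hypothetical placeholders rather than requirements of this posting. A job like this would typically run on AWS Glue or EMR, with the curated output loaded into Redshift.</p>
<pre><code>
# Illustrative PySpark ETL job: raw CSV landing zone -> validated, partitioned Parquet.
# Paths, columns, and thresholds below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw files from the S3 landing zone
raw = spark.read.csv("s3://example-landing/orders/", header=True, inferSchema=True)

# Transform: cleanse and validate before loading downstream
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
)

# Simple monitoring/error-handling hook: fail fast if too many rows were rejected
rejected = raw.count() - clean.count()
if rejected > raw.count() * 0.05:
    raise ValueError(f"Rejected {rejected} rows; investigate upstream data quality")

# Load: write partitioned Parquet to the curated zone (picked up by Redshift COPY or Spectrum)
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-curated/orders/")
</code></pre>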
<p>We are looking for a skilled and driven Data Engineer to join our team in San Juan Capistrano, California. This role centers on building and optimizing data pipelines, integrating systems, and implementing cloud-based data solutions. The ideal candidate will have a strong grasp of modern data engineering tools and techniques and a passion for delivering high-quality, scalable solutions.</p><p>Responsibilities:</p><ul><li>Design, build, and optimize scalable data pipelines using tools such as Databricks, Apache Spark, and PySpark.</li><li>Develop integrations between internal systems and external platforms, including CRMs like Salesforce and HubSpot, utilizing APIs.</li><li>Implement cloud-based data architectures aligned with data mesh principles.</li><li>Collaborate with cross-functional teams to model, transform, and ensure the quality of data for analytics and reporting.</li><li>Create and maintain APIs while testing and documenting them using tools like Postman and Swagger.</li><li>Write efficient and modular Python code to support data processing workflows.</li><li>Apply best practices such as version control, CI/CD, and code reviews to ensure robust and maintainable solutions.</li><li>Uphold data security, integrity, and governance throughout the data lifecycle.</li><li>Utilize cloud services for compute, storage, and orchestration, including platforms like AWS or Azure.</li><li>Work within Agile and Scrum methodologies to deliver solutions efficiently and collaboratively.</li></ul>
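<p>As an illustration of the API-integration side of this role, here is a minimal hedged sketch of pulling records from a CRM-style REST API and handing them to a DataFrame for downstream processing; the endpoint, auth header, and field names are hypothetical, and real Salesforce or HubSpot integrations would use their documented auth flows and SDKs.</p>
<pre><code>
# Illustrative sketch: page through a CRM-style REST API and return a DataFrame.
# The endpoint, auth token, and response shape are hypothetical placeholders.
import requests
import pandas as pd

BASE_URL = "https://api.example-crm.com/v1/contacts"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}        # in practice, pulled from a secrets manager

def fetch_contacts(page_size: int = 100) -> pd.DataFrame:
    """Fetch all contact records across pages and return them as a DataFrame."""
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers=HEADERS,
            params={"page": page, "limit": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return pd.DataFrame(records)

if __name__ == "__main__":
    contacts = fetch_contacts()
    print(contacts.head())
</code></pre>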
<p>Overview</p><p>We are seeking a Financial Systems Analyst with a strong focus on <strong>Kinetic ERP</strong> to support, maintain, and enhance financial systems. This role plays a critical part in managing financial operations, including month-end AP close, AP reconciliation, and financial reporting, while ensuring system stability and efficiency. The analyst will also provide support for billing systems (Zuora, A2Z, TMM) and collaborate with finance teams to optimize workflows and automation.</p><p><br></p>
<p><strong>Job Posting: Senior DevOps Engineer</strong></p><p><strong>Salary: </strong>$150K - $185K</p><p><strong>Location:</strong> Hybrid – Onsite Monday-Thursday (Greater Los Angeles Area, with flexibility based on events), Remote on Friday</p><p><strong>Employment Type:</strong> Full-Time</p><p><strong>Schedule:</strong> Highly flexible and adapted to the event calendar; some evenings and weekends are required</p><p> </p><p><strong>About the Role</strong></p><p>Are you ready to make an impact in a dynamic, fast-paced environment powered by cutting-edge technology? We are hiring a <strong>Senior DevOps Engineer</strong> for a unique opportunity to support high-profile events. In this role, you’ll oversee and optimize mission-critical infrastructure that ensures exceptional experiences for guests, staff, and stakeholders.</p><p>This hybrid position offers a flexible work schedule tailored to the event calendar of games, concerts, and special programming, with quieter offseasons resembling a more traditional schedule. If you thrive in adaptable environments and enjoy solving technical challenges, this role is for you.</p>
<p><strong>Data Engineer (Hybrid, Los Angeles)</strong></p><p><strong>Location:</strong> Los Angeles, California</p><p><strong>Compensation:</strong> $140,000 - $175,000 per year</p><p><strong>Work Environment:</strong> Hybrid, with onsite requirements</p><p>Are you passionate about crafting highly scalable, performant data systems? Do you have expertise in Azure Databricks, Spark SQL, and real-time data pipelines? We are searching for a talented and motivated <strong>Data Engineer</strong> to join our team in Los Angeles. You'll work in a hybrid environment that combines onsite collaboration with the flexibility of remote work.</p><p><strong>Key Responsibilities:</strong></p><ul><li>Design, develop, and implement data pipelines and ETL workflows using cutting-edge Azure technologies (e.g., Databricks, Synapse Analytics, Synapse Pipelines).</li><li>Manage and optimize big data processes, ensuring scalability, efficiency, and data accuracy.</li><li>Build and work with real-time data pipelines leveraging technologies such as Kafka, Event Hubs, and Spark Streaming.</li><li>Apply advanced skills in Python and Spark SQL to build data solutions for analytics and machine learning.</li><li>Collaborate with business analysts and stakeholders to implement impactful dashboards using Power BI.</li><li>Architect and support the seamless integration of diverse data sources into a central platform for analytics, reporting, and model serving via MLflow.</li></ul><p><br></p>
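<p>For illustration, a minimal Spark Structured Streaming sketch of the kind of real-time pipeline described above, reading from a Kafka-compatible source (Event Hubs exposes a Kafka endpoint) and appending to a Delta table on Databricks; the broker address, topic, schema, and paths are hypothetical placeholders.</p>
<pre><code>
# Illustrative streaming job: Kafka/Event Hubs -> Delta table for analytics and Power BI.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Hypothetical event schema
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw stream from a Kafka-compatible endpoint
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")
    .option("subscribe", "events")
    .load()
)

# Parse the JSON payload and keep only well-formed events
events = (
    raw.selectExpr("CAST(value AS STRING) AS json")
       .select(F.from_json("json", event_schema).alias("e"))
       .select("e.*")
       .filter(F.col("event_id").isNotNull())
)

# Append to a Delta table that downstream dashboards and models can read
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .outputMode("append")
    .start("/mnt/delta/events")
)
</code></pre>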
<p>Are you a talented Assistant Controller? You might be the candidate Robert Half is looking for to work with a successful construction company that will recognize your efforts. You might be a good fit for this position if you can lead daily operations, such as preparing and/or reviewing appropriate ledger entries and reconciliations, maintaining the general ledger system, preparing monthly, quarterly, and annual financial statements, and assisting with regulatory reporting as applicable. Candidates with the ability to create and monitor the company's accounting and finance operations are the best fits for this position. This Assistant Controller role is based in the Long Beach area. Join a cutting-edge company in the Construction/Contractor industry by applying via Robert Half today.</p><p><br></p><p>Your responsibilities in this role</p><p><br></p><p>- Approve implementation of and verify adherence to accounting policies and procedures</p><p><br></p><p>- Promote accountability and the meeting of deliverables</p><p><br></p><p>- Assist the accounting team during the closing process to ensure deadlines are met</p><p><br></p><p>- Identify and expand process improvements to streamline reporting and improve team efficiency</p><p><br></p><p>- Establish relevant and timely reports on financial data analytics, such as the monthly flash report, key financial metrics, and actual spend against budgets/outlook</p><p><br></p><p>- Aid with the preparation of GAAP financial statements, including budgeting and forecasting</p><p><br></p><p>- Help ensure a competent, trained staff through goal setting, development, and regular assessment</p><p><br></p><p>- Test and produce ad hoc financial reports</p><p><br></p><p>- Lead the preparation and coordination of fiscal year-end audits</p><p><br></p><p>- Backfill for the Controller when necessary</p><p><br></p><p>- Take on additional assignments as needed</p><p><br></p><p>- Assemble distinct technical accounting analyses, policies, and procedures</p><p><br></p><p>- Deliver regular account reconciliations to completion</p><p><br></p><p>Must have a Sage Timberline background.</p><p><br></p><p>For confidential consideration, please email your recruiter with Robert Half. If you're not currently working with anyone at Robert Half, please click "Apply" or call 310-719-1400 and ask for David Bizub. Please reference job order number 00460-0012878536 and email your resume to [email protected]</p>
<p>We are looking for an experienced Informatica and AWS Data Engineer to join our team in Southern California. In this long-term, multi-year position, you will play a pivotal role in configuring and managing Informatica Cloud Catalog, Governance, and Marketplace systems, ensuring seamless integration with various platforms and tools. This opportunity is ideal for professionals with a strong background in data governance, security, and compliance, as well as expertise in cloud technologies and database systems.</p><p><br></p><p>Responsibilities:</p><p>• Configure and implement role-based and policy-based access controls within Informatica Cloud Catalog and Governance systems.</p><p>• Develop and set up connections for diverse platforms, including mainframe databases, cloud services, S3, Athena, and Redshift.</p><p>• Troubleshoot and resolve issues encountered during connection creation and data profiling.</p><p>• Optimize performance by identifying and addressing bottlenecks in profiling workflows.</p><p>• Configure and manage Cloud Marketplace integrations to enforce policy-based data protections.</p><p>• Review and communicate Informatica upgrade schedules, exploring new features and coordinating timelines with business and technical teams.</p><p>• Collaborate with infrastructure teams to establish clusters for managing profiling workloads efficiently.</p><p>• Support governance initiatives by classifying and safeguarding sensitive financial and customer data.</p><p>• Create and manage metadata, glossaries, and data quality rules across regions to ensure compliance with governance policies.</p><p>• Set up user groups, certificates, and IP whitelisting to maintain secure access and operations.</p>
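<p>Connection setup itself happens in the Informatica administration console, but when a new S3 or Athena connection fails it is common to rule out AWS-side permission problems first. The sketch below is illustrative only; the bucket, database, and results location are hypothetical placeholders.</p>
<pre><code>
# Illustrative pre-flight check with boto3: confirm the S3 and Athena access that an
# Informatica Cloud connection will rely on. All names are hypothetical placeholders.
import time
import boto3

s3 = boto3.client("s3")
athena = boto3.client("athena")

# 1. Verify the bucket used for profiling/results is reachable with the current credentials
s3.list_objects_v2(Bucket="example-governance-bucket", MaxKeys=1)

# 2. Run a trivial Athena query to confirm catalog and output-location permissions
run = athena.start_query_execution(
    QueryString="SELECT 1",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-governance-bucket/athena-results/"},
)

# 3. Poll until the query finishes and report its final state
while True:
    state = athena.get_query_execution(QueryExecutionId=run["QueryExecutionId"])
    status = state["QueryExecution"]["Status"]["State"]
    if status in ("SUCCEEDED", "FAILED", "CANCELLED"):
        print("Athena query finished with state:", status)
        break
    time.sleep(2)
</code></pre>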