We are looking for a skilled Software Engineer to join our team on a long-term contract basis in Rancho Cucamonga, California. In this role, you will collaborate with various teams, including engineering, inventory control, and planning, to streamline processes and create detailed documentation packages. Your expertise in software development and your ability to work closely with machinists and assemblers will be essential in ensuring accurate and efficient workflows.<br><br>Responsibilities:<br>• Transform 3D models generated by engineers into comprehensive drawing packages, including individual drawings for each element in the assembly.<br>• Collaborate with inventory control and planning teams to maintain and organize online documentation, including job build materials, time elements, and work instructions.<br>• Coordinate and conduct meetings with engineers, machinists, and assemblers to discuss designs and develop clear, actionable instructions.<br>• Develop and implement software solutions using programming languages and frameworks such as C#, .NET, and ASP.NET.<br>• Apply JavaScript and React.js to enhance the functionality and usability of applications.<br>• Ensure accurate documentation and workflow processes for manufacturing operations.<br>• Troubleshoot and resolve technical issues related to software and documentation systems.<br>• Maintain effective communication across teams to ensure alignment on project goals and deliverables.<br>• Provide technical expertise and guidance to support ongoing improvement initiatives.
<p><strong>Our client is seeking a Senior AWS Data Engineer for a long-term, multi-year assignment.</strong></p><p><br></p><p><strong>This role is onsite 4 days/week in Torrance, CA.</strong></p><p><br></p><p>This role supports and enhances enterprise business intelligence and analytics environments, focusing on designing, building, and maintaining scalable data pipelines and cloud-based data platforms using AWS services. The ideal candidate brings deep hands-on experience with AWS Glue, PySpark, Redshift, and serverless architectures, along with strong SQL and data analysis skills.</p><p>This role will collaborate closely with architecture, security, compliance, and development teams to ensure data solutions are performant, secure, and compliant with regulatory requirements.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain scalable ETL/ELT pipelines using AWS Glue with PySpark for large-scale data processing</li><li>Develop and support serverless integrations using AWS Lambda for event-driven workflows and system integrations</li><li>Design and optimize Amazon Redshift data warehouse solutions, including:<ul><li>Advanced SQL analytics</li><li>Stored procedures</li><li>Performance tuning</li></ul></li><li>Lead implementation of secure vendor file transfer and ingestion solutions using AWS Transfer Family</li><li>Design and implement database migration and replication pipelines using AWS Database Migration Service (DMS)</li><li>Build and manage workflow orchestration using Apache Airflow or similar orchestration tools</li><li>Analyze data quality, transformation logic, and pipeline performance using SQL and data analysis techniques</li><li>Troubleshoot and resolve production data pipeline and integration issues across AWS services</li><li>Provide technical guidance to development team members on:<ul><li>AWS best practices</li><li>Cost optimization</li><li>Performance optimization</li></ul></li><li>Partner with enterprise architecture, security, and compliance teams to ensure SOX and regulatory compliance</li></ul>