<p><strong>AWS Big Data Architect (with Hadoop) </strong></p><p><strong>Location:</strong> Hybrid 4x Onsite – Philadelphia, PA</p><p><strong>Contract Duration:</strong> April 6, 2026 – December 31, 2026</p><p><strong>Employment Type:</strong> W2 Contract</p><p><strong>Overview</strong></p><p>We are seeking a highly skilled <strong>AWS Big Data Architect / Senior Data Engineer</strong> to design, develop, and deliver scalable Big Data Warehouse solutions. This is a hands-on role suited for someone who is passionate about technology, thrives in a collaborative environment, and can work effectively with both technical and non-technical stakeholders. The ideal candidate excels in fast-paced settings and is committed to producing high-quality, impactful results.</p><p>This role offers the opportunity to collaborate with engineering teams across the enterprise and influence broader data and technology strategies.</p><p><strong>Key Responsibilities</strong></p><ul><li>Design and develop scalable Big Data Warehouse solutions across the full data supply chain.</li><li>Build and implement metadata management solutions.</li><li>Create and maintain technical documentation, user documentation, data models, data dictionaries, glossaries, process flows, and architecture diagrams.</li><li>Enhance and expand the enterprise Data Lake environment.</li><li>Solve complex data integration challenges across multiple systems.</li><li>Design and execute strategies for real-time data analysis and decision-making.</li><li>Collaborate with business partners, analysts, developers, architects, and engineers to support ongoing data quality initiatives.</li><li>Work closely with Data Science teams to improve actionable insights.</li><li>Continuously expand knowledge of new tools, platforms, and technologies.</li></ul>
<p><strong>Data Engineer</strong></p><p>The Data Engineer role focuses on designing, building, and optimizing scalable data solutions that support diverse business needs. This position requires the ability to work independently while collaborating effectively in a fast-paced, agile environment. The individual in this role partners with cross-functional teams to gather data requirements, recommend enhancements to existing data pipelines and architectures, and ensure the reliability, performance, and efficiency of data processes.</p><p><strong>Responsibilities</strong></p><ul><li>Support the team’s adoption and continued evolution of the Databricks platform, leveraging features such as Delta Live Tables, workflows, and related tooling.</li><li>Design, develop, and maintain data pipelines that extract data from relational sources, load it into a data lake, transform it as needed, and publish it to a Databricks-based lakehouse environment.</li><li>Optimize data pipelines and processing workflows to improve performance, scalability, and overall efficiency.</li><li>Implement data quality checks and validation logic to ensure data accuracy, consistency, and completeness.</li><li>Create and maintain documentation, including data mappings, data definitions, architectural diagrams, and data flow diagrams.</li><li>Develop proof-of-concepts to evaluate and validate new technologies, tools, or data processes.</li><li>Deploy, manage, and support code across non-production and production environments.</li><li>Investigate, troubleshoot, and resolve data-related issues, including identifying root causes and implementing fixes.</li><li>Identify performance bottlenecks and recommend optimization strategies, including database tuning and query performance improvements.</li></ul>
<p><strong>Business Intelligence Software Engineer</strong></p><p>Join our team as a Business Intelligence Software Engineer and help design, build, and maintain innovative reporting and data-driven applications that power field operations, business units, and customer solutions. This is a hands-on coding role that requires strong technical judgment and collaboration with cross-functional teams. You’ll manage the entire development lifecycle, ensuring solutions are scalable, reliable, and aligned with business priorities.</p><p><strong>Key Responsibilities</strong></p><ul><li><strong>Lead the Software Development Lifecycle (SDLC):</strong> Oversee all phases of BI application development, from concept through deployment and support.</li><li><strong>Hands-on Development:</strong> Build and maintain applications using Python (PySpark), SQL, and TypeScript/JavaScript.</li><li><strong>Technical Strategy &amp; Architecture:</strong> Apply best practices for design, performance, and scalability.</li><li><strong>Quality Assurance:</strong> Establish testing frameworks, conduct code reviews, and maintain bug-tracking processes.</li><li><strong>Continuous Improvement:</strong> Identify and implement tools and methodologies to streamline development and increase system reliability.</li><li><strong>Collaboration:</strong> Work with internal stakeholders, data scientists, analysts, and operations teams to translate business needs into software solutions.</li><li><strong>Support &amp; Maintenance:</strong> Provide ongoing support for newly developed applications, ensuring smooth integration with existing systems.</li></ul>