<p>We are looking for a skilled Data Warehouse Analyst to join our team in New Jersey. In this role, you will transform logistics challenges into actionable insights through advanced data analysis and reporting. By collaborating with cross-functional teams, you will play a pivotal role in enhancing operational efficiency and driving key business decisions.</p><p><br></p><p>Responsibilities:</p><p>• Collaborate with Operations, Transportation, and Finance teams to establish and refine KPIs that drive logistics and fulfillment performance.</p><p>• Develop and optimize labor planning and forecasting models for warehouse and delivery operations, partnering closely with recruitment teams.</p><p>• Analyze distribution and fulfillment data to uncover performance trends and identify cost-saving opportunities.</p><p>• Design and maintain dashboards and reports to provide real-time insights into logistics metrics, including delivery times, warehouse productivity, and route optimization.</p><p>• Automate reporting processes to improve accuracy and timeliness of operational data.</p><p>• Continuously enhance data integrity and streamline workflows to optimize logistics operations.</p><p>• Work on data modeling and warehousing projects to support scalable analytics and reporting solutions.</p><p>• Partner with stakeholders to deliver clear and actionable insights to improve decision-making processes.</p><p>• Investigate and implement tools and techniques to improve overall business intelligence capabilities.</p>
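<p>The labor planning and forecasting responsibility above can be sketched with a minimal example; this is an illustrative model only, and all figures, function names, and the orders-per-worker ratio are hypothetical assumptions, not the team's actual method.</p>

```python
# Minimal labor-forecast sketch: a trailing moving average of weekly order
# volume, converted to required headcount. All figures are hypothetical.

def moving_average_forecast(history, window=4):
    """Forecast the next period as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def headcount_needed(forecast_orders, orders_per_worker=250):
    """Translate forecast volume into whole workers, rounding up."""
    return -(-forecast_orders // orders_per_worker)  # ceiling division

weekly_orders = [4800, 5100, 4950, 5300, 5200]
forecast = moving_average_forecast(weekly_orders)
print(forecast)                   # mean of the last 4 weeks: 5137.5
print(headcount_needed(forecast)) # 21.0 workers
```

<p>A production model would layer in seasonality, delivery routes, and attrition; the moving average is only the simplest baseline such a model would be benchmarked against.</p>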
We are looking for a Desktop Support Analyst to deliver hands-on technical support for employees in New York, New York. This long-term contract position is ideal for someone who enjoys resolving user issues, maintaining reliable workstation performance, and providing responsive service across a fast-paced work environment. The role will support day-to-day desktop operations, assist remote and international teams, and contribute to a consistent, high-quality end-user experience.<br><br>Responsibilities:<br>• Deliver first- and second-line technical assistance for hardware, software, and infrastructure-related incidents and service requests across the organization.<br>• Provide in-person floor support on a rotating schedule, assisting employees directly and ensuring all requests are properly recorded in the service management system.<br>• Take full ownership of assigned tickets from initial intake through final resolution, including user updates, troubleshooting, and timely closure.<br>• Support colleagues in international offices by providing remote assistance that aligns with established service standards and response expectations.<br>• Follow defined escalation procedures to route complex issues appropriately and maintain dependable support delivery.<br>• Investigate recurring technical problems, identify underlying causes, and create clear knowledge documentation for both engineers and end users.<br>• Administer user lifecycle activities such as onboarding, offboarding, account support, and related end-user access tasks.<br>• Configure, maintain, and troubleshoot laptops, desktop hardware, mobile devices, remote access tools, and Windows 10 workstation environments.<br>• Assist with event technology support and coordinate Zoom-based meeting and interview connections with domestic and international participants.<br>• Participate in after-hours on-call coverage and contribute to time-sensitive projects and organization-wide IT communications as needed.
<p>We are seeking a highly skilled Data Engineer to design, build, and manage our data infrastructure. The ideal candidate is an expert in writing complex SQL queries, designing efficient database schemas, and developing ETL/ELT pipelines. This role ensures data accuracy, accessibility, and performance optimization to support business intelligence, analytics, and reporting initiatives.</p><p><br></p><p><strong><em><u>Key Responsibilities</u></em></strong></p><p><br></p><p><strong>Database Design & Management</strong></p><ul><li>Design, develop, and maintain relational databases, including SQL Server, PostgreSQL, and Oracle, as well as cloud-based data warehouses.</li></ul><p><strong>Strategic SQL & Data Engineering</strong></p><ul><li>Develop advanced, optimized SQL queries, stored procedures, and functions to process and analyze large, complex datasets and deliver actionable business insights.</li></ul><p><strong>Data Pipeline Automation & Orchestration</strong></p><ul><li>Build, automate, and orchestrate ETL/ELT workflows using SQL, Python, and cloud-native tools to integrate and transform data from diverse, distributed sources.</li></ul><p><strong>Performance Optimization</strong></p><ul><li>Tune SQL queries and optimize database schemas through indexing, partitioning, and normalization to improve data retrieval and processing performance.</li></ul><p><strong>Data Integrity & Security</strong></p><ul><li>Ensure data quality, consistency, and integrity across systems.</li><li>Implement data masking, encryption, and role-based access control (RBAC).</li></ul><p><strong>Documentation</strong></p><ul><li>Maintain comprehensive technical documentation, including database schemas, data dictionaries, and ETL workflows.</li></ul>
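<p>The performance-optimization responsibility above (indexing and query tuning) can be sketched with a minimal, engine-agnostic example; SQLite stands in for the production engines named in the posting, and the table and column names are hypothetical.</p>

```python
import sqlite3

# In-memory database stands in for a production warehouse (hypothetical schema).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
cur.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("east" if i % 2 else "west", float(i)) for i in range(10_000)],
)

# Without an index, a filter on `region` forces a full table scan.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE region = 'east'"
).fetchall()

# An index on the filtered column lets the engine seek instead of scan.
cur.execute("CREATE INDEX idx_orders_region ON orders (region)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM orders WHERE region = 'east'"
).fetchall()

print(plan_before)  # plan shows a SCAN of the table
print(plan_after)   # plan now uses idx_orders_region (SEARCH ... USING INDEX)
```

<p>The same before-and-after query-plan comparison applies to SQL Server, PostgreSQL, and Oracle, each with its own plan-inspection syntax; partitioning and normalization extend the idea from single queries to schema-level design.</p>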
<p>We are seeking a skilled and motivated Data Engineer to join our team, with deep hands-on experience building and optimizing data pipelines and lakehouse solutions in Databricks. In this role, you will collaborate with cross-functional teams to design, develop, and operate scalable, reliable data products that drive business value.</p><p><br></p><p>Key Responsibilities</p><ul><li>Design, build, and maintain batch and streaming data pipelines using Databricks (Spark, Delta Lake, Jobs/Workflows).</li><li>Partner with data scientists, analysts, and application teams to deliver trusted, well-modeled data sets and features in the Databricks Lakehouse.</li><li>Optimize Spark jobs (partitioning, caching, join strategies) and Databricks cluster configurations for performance, scalability, and cost.</li><li>Implement data quality checks, observability, governance, and security controls (e.g., Unity Catalog, access policies) within Databricks.</li><li>Troubleshoot and resolve pipeline failures, data issues, and production incidents; perform root-cause analysis and implement preventative improvements.</li></ul><p><br></p>
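<p>The data quality checks mentioned above can be sketched in plain Python; this is an illustrative, engine-agnostic version in which the record shape and rule names are hypothetical, not Databricks-specific code or any particular framework's API.</p>

```python
# Minimal data-quality sketch: each rule returns the rows that violate it,
# and a batch passes only when no rule reports violations.

def check_not_null(rows, field):
    """Rows where a required field is missing or None."""
    return [r for r in rows if r.get(field) is None]

def check_unique(rows, field):
    """Rows whose key value has already appeared earlier in the batch."""
    seen, dupes = set(), []
    for r in rows:
        if r[field] in seen:
            dupes.append(r)
        seen.add(r[field])
    return dupes

def run_checks(rows):
    """Aggregate violations per rule; an empty dict means the batch passes."""
    violations = {
        "order_id_not_null": check_not_null(rows, "order_id"),
        "order_id_unique": check_unique(rows, "order_id"),
    }
    return {name: v for name, v in violations.items() if v}

batch = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 1, "amount": 12.5},    # duplicate key
    {"order_id": None, "amount": 3.0},  # missing key
]
report = run_checks(batch)
print(sorted(report))  # ['order_id_not_null', 'order_id_unique']
```

<p>In a Databricks pipeline the same pattern would typically run as Spark expressions over a Delta table, with failed batches quarantined rather than written through.</p>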