We are looking for a skilled Business Intelligence (BI) Engineer to join our team in Jenkintown, Pennsylvania. As a key contributor, you will design, develop, and implement advanced data solutions to support business analytics and decision-making processes. This role requires a strong technical background and a collaborative mindset to drive innovation and optimize data platforms.<br><br>Responsibilities:<br>• Design and implement dimensional and semantic data models to enhance business analytics and reporting capabilities.<br>• Develop and optimize data pipelines using modern orchestration tools like Apache Airflow or Azure Data Factory.<br>• Create and manage interactive dashboards and visualizations using Power BI, ensuring accuracy and usability.<br>• Leverage Microsoft Fabric architecture to integrate centralized semantic models and external data platforms.<br>• Administer cloud-based data warehouses like Azure Synapse or Snowflake, ensuring performance and scalability.<br>• Collaborate with cross-functional teams to address data governance, quality frameworks, and compliance standards.<br>• Utilize Python and data science libraries to process, analyze, and develop machine learning workflows.<br>• Implement DataOps methodologies, including CI/CD practices and version control for data solutions.<br>• Conduct advanced statistical analysis and predictive modeling to inform business strategies.<br>• Partner with stakeholders to translate technical concepts into actionable insights for diverse audiences.
<p>We are looking for a skilled Data Engineer to join our team in Toms River, New Jersey. In this role, you will design, manage, and optimize data pipelines and repositories, ensuring reliable and accurate data flow across various platforms. You will also collaborate with cross-functional teams to deliver actionable insights and support business intelligence initiatives.</p><p><br></p><p><strong>Responsibilities:</strong></p><p>• Architect and manage data systems, including data lakes and repositories, ensuring optimal performance and reliability.</p><p>• Monitor and maintain daily data pipelines in Azure, addressing any issues to ensure seamless operations.</p><p>• Develop and enhance data pipelines using Python, Azure Functions, Logic Apps, and Synapse, while integrating new platforms when necessary.</p><p>• Oversee the accuracy and timeliness of data loads into BI models, reconciling discrepancies between Power BI and source reports.</p><p>• Create and refine Power BI dashboards and models to support operational and strategic reporting needs.</p><p>• Administer Power BI workspaces, managing licensing, access permissions, and user security.</p><p>• Provide technical support to Power BI users, assisting with troubleshooting and new dashboard development.</p><p>• Partner with stakeholders to scope and deliver analyses, addressing both recurring and ad hoc business needs.</p><p>• Support financial reporting and performance analysis for Finance and Accounting teams.</p><p>• Build a comprehensive understanding of business processes to deliver valuable strategic insights.</p>
<p>The Data Operations & Automation Specialist is responsible for optimizing the use of internal and external data across the organization’s core financial systems. This role manages data exchanges, ensures seamless integration with third-party platforms, and supports the development of business intelligence (BI) solutions. The Specialist leads automation initiatives, oversees accurate and timely reporting, and provides technical expertise to enable data-driven decision-making across the organization.</p><p><br></p><p><strong>Essential Duties & Responsibilities</strong></p><ul><li>Oversee and support the organization’s core financial system, including monitoring performance, resolving issues, and ensuring compliance with security standards.</li><li>Serve as the primary point of contact for users, providing guidance, troubleshooting, and technical expertise.</li><li>Lead the implementation of Robotic Process Automation (RPA) and Digital Process Automation (DPA), including process discovery, scripting, testing, deployment, and support.</li><li>Collaborate with cross-functional teams to analyze data and processes, supporting data-driven decision-making.</li><li>Develop, automate, and maintain BI reporting solutions to provide actionable insights.</li><li>Identify opportunities for automation and streamline manual processes through scripting and system enhancements.</li><li>Manage data transfers and integrations between core systems, third-party applications, and internal platforms.</li><li>Support and maintain SQL databases for server-based applications.</li><li>Test and maintain disaster recovery processes to ensure business continuity.</li><li>Assist in evaluating, testing, and deploying new hardware and software systems.</li><li>Partner with departments to design and deploy reports and dashboards that enhance operational efficiency.</li></ul><p><br></p>
<p>We are looking for a Data Engineer in Basking Ridge, New Jersey (1-2 days a week on-site). In this role, you will develop and maintain business intelligence and analytics solutions, integrating complex data sources for decision support systems. You will also take a hands-on approach to application development, particularly with the Microsoft Azure suite.</p><p><br></p><p>Responsibilities:</p><p><br></p><p>• Develop and maintain advanced analytics solutions using tools such as Apache Kafka, Apache Pig, Apache Spark, and AWS technologies.</p><p>• Work extensively with the Microsoft Azure suite for application development.</p><p>• Implement algorithms and develop APIs.</p><p>• Handle integration of complex data sources for decision support systems in the enterprise data warehouse.</p><p>• Utilize cloud technologies and data visualization tools to enhance business intelligence.</p><p>• Work with various types of data, including clinical trials data, genomics and biomarker data, real-world data, and discovery data.</p><p>• Maintain familiarity with key industry best practices in a regulated GxP environment.</p><p>• Work with commercial pharmaceutical/business information, Supply Chain, Finance, and HR data.</p><p>• Leverage Apache Hadoop for handling large datasets.</p>
We are looking for an experienced ETL Developer to join our team on a Contract-to-Permanent basis in Malvern, Pennsylvania. In this role, you will leverage your expertise with Snowflake and ETL processes to design and implement efficient data solutions. This position also involves working with Salesforce data integration and supporting financial services operations.<br><br>Responsibilities:<br>• Design, develop, and maintain ETL processes to ensure efficient data flow and transformation.<br>• Utilize Snowflake to source, manage, and optimize data pipelines.<br>• Collaborate with Salesforce teams to analyze functionality and determine the best approaches for data accessibility.<br>• Implement data engineering solutions to support business needs within financial services.<br>• Troubleshoot and resolve data-related issues to maintain system performance.<br>• Develop and maintain documentation for ETL workflows and processes.<br>• Ensure data integrity and security throughout the development lifecycle.<br>• Provide technical expertise and guidance to team members on ETL and Snowflake usage.
We are looking for an experienced Technical Business Analyst III to join our team in Philadelphia, Pennsylvania. This long-term contract position involves collaborating with cross-functional teams to define and deliver technical specifications that enhance product performance and customer experience. The ideal candidate will serve as a subject matter expert, driving the successful implementation of solutions across various systems and platforms.<br><br>Responsibilities:<br>• Collaborate with architects, product owners, UX designers, and vendor partners to gather and refine business and technical requirements.<br>• Develop comprehensive system documentation, including user stories, data mappings, sequence diagrams, and API specifications.<br>• Create and maintain Swagger specifications for APIs, detailing endpoints, parameters, error codes, and authentication methods.<br>• Draft sample payloads to illustrate communication between front-end and back-end systems.<br>• Analyze third-party API specifications to ensure seamless integration with existing systems.<br>• Write detailed user stories in Jira, covering front-end UI designs, system integrations, acceptance criteria, and error handling.<br>• Facilitate grooming sessions to clarify requirements and ensure development readiness.<br>• Define test data requirements and create tickets to support quality assurance processes.<br>• Develop end-to-end test plans aligned with sprint schedules and project timelines.<br>• Actively participate in Agile ceremonies, including sprint planning, daily stand-ups, and retrospectives.
<ul><li>Product Development: Partner with the data science team to conceptualize, develop, and refine cutting-edge data analytics products tailored to specific engineering domains.</li><li>Sales Enablement: Collaborate with the sales team to articulate product value, address customer needs, and drive sales to advanced technical SMEs/PhDs.</li><li>Customer Success: Provide comprehensive technical support, training, and onboarding to ensure customer satisfaction and product adoption.</li><li>Market Intelligence: Conduct market research, analyze competitive landscapes, and identify opportunities for product enhancement and expansion.</li><li>Product Documentation: Develop clear and concise product documentation, including user guides, process flows, and use cases.</li></ul><p><br></p>
<p><strong>Description:</strong></p><p>We are seeking a talented and driven AI Engineer to join our cross-functional AI innovation team within a leading pharmaceutical organization. You will work on designing, developing, and deploying GenAI-driven solutions, retrieval-augmented generation (RAG) systems, and scalable ML-powered applications across cloud-native environments to support drug development, regulatory, manufacturing, and commercial operations. The ideal candidate combines a strong academic foundation in machine learning with hands-on experience building real-world AI systems and full-stack applications that comply with the high standards of a regulated industry.</p><p><br></p><p><strong>Essential Functions:</strong></p><ul><li>Design and implement GenAI applications including LLM-integrated tools, RAG systems, and intelligent chatbots tailored for pharmaceutical functions such as R&D knowledge mining, regulatory intelligence, and medical information services.</li><li>Build and optimize data ingestion, transformation, and semantic search pipelines using domain-specific embedding models (e.g., BioBERT, Ada-002).</li><li>Develop and containerize AI services using FastAPI, Docker, and Kubernetes, deployed on AWS or GCP, ensuring adherence to data privacy and compliance standards (e.g., HIPAA, GxP).</li><li>Collaborate with product, data, and platform engineering teams to integrate AI features into validated systems supporting clinical, quality, and manufacturing workflows.</li><li>Contribute to end-to-end MLOps: model experimentation, deployment pipelines, audit readiness, monitoring, and performance tuning.</li><li>Engineer full-stack applications (React + REST/GraphQL backends) with role-based access controls for internal and external scientific and business users.</li><li>Support internal research and POCs on advanced topics like LLM reasoning, adverse event detection, pharmacovigilance, and digital biomarker analysis.</li></ul><p><br></p>