<p><strong>Role Summary</strong></p><p>As a Technical Project Manager focused on data and AWS cloud, you will lead the planning, execution, and delivery of engineering efforts involving data infrastructure, data platforms, analytics, and cloud services. You will partner with data engineering, analytics, DevOps, product, security, and business stakeholders to deliver on key strategic initiatives. You are comfortable navigating ambiguity, managing dependencies across teams, and ensuring alignment between technical direction and business priorities.</p><p><strong>Key Responsibilities</strong></p><ul><li>Lead end-to-end technical projects pertaining to AWS cloud, data platforms, data pipelines, ETL/ELT, analytics, and reporting.</li><li>Define project scope, objectives, success criteria, deliverables, and timelines in collaboration with stakeholders.</li><li>Create and maintain detailed project plans, roadmaps, dependency maps, risk & mitigation plans, status reports, and communication plans.</li><li>Track and monitor project progress, managing changes to scope, schedule, and resources.</li><li>Facilitate agile ceremonies (e.g., sprint planning, standups, retrospectives) or hybrid methodologies as appropriate.</li><li>Serve as the bridge between technical teams (data engineering, DevOps, platform, security) and business stakeholders (product, analytics, operations).</li><li>Identify technical and organizational risks, escalate when needed, propose mitigation or contingency plans.</li><li>Drive architectural and design discussions, ensure technical feasibility, tradeoff assessments, and alignment with cloud best practices.</li><li>Oversee vendor, third-party, or external partner integrations and workstreams.</li><li>Ensure compliance, security, governance, and operational readiness (e.g., data privacy, logging, monitoring, SLA) are baked into deliverables.</li><li>Conduct post-implementation reviews, lessons learned, and process improvements.</li><li>Present regularly 
to senior leadership on project status, challenges, KPIs, and outcomes.</li></ul>
<p><strong>AWS Big Data Architect (with Hadoop)</strong></p><p><strong>Location:</strong> Hybrid (4 days per week onsite) – Philadelphia, PA</p><p><strong>Contract Duration:</strong> April 6, 2026 – December 31, 2026</p><p><strong>Employment Type:</strong> W2 Contract</p><p><strong>Overview</strong></p><p>We are seeking a highly skilled <strong>AWS Big Data Architect / Senior Data Engineer</strong> to design, develop, and deliver scalable Big Data Warehouse solutions. This is a hands-on role suited for someone who is passionate about technology, thrives in a collaborative environment, and can work effectively with both technical and non-technical stakeholders. The ideal candidate excels in fast-paced settings and is committed to producing high-quality, impactful results.</p><p>This role offers the opportunity to collaborate with engineering teams across the enterprise and influence broader data and technology strategies.</p><p><strong>Key Responsibilities</strong></p><ul><li>Design and develop scalable Big Data Warehouse solutions across the full data supply chain.</li><li>Build and implement metadata management solutions.</li><li>Create and maintain technical documentation, user documentation, data models, data dictionaries, glossaries, process flows, and architecture diagrams.</li><li>Enhance and expand the enterprise Data Lake environment.</li><li>Solve complex data integration challenges across multiple systems.</li><li>Design and execute strategies for real-time data analysis and decision-making.</li><li>Collaborate with business partners, analysts, developers, architects, and engineers to support ongoing data quality initiatives.</li><li>Work closely with Data Science teams to improve actionable insights.</li><li>Continuously expand knowledge of new tools, platforms, and technologies.</li></ul>
<p>Our client is seeking an AI Developer to design, build, and deploy production‑grade AI solutions within a complex enterprise environment. This role focuses on integrating Azure‑based AI capabilities, developing machine learning workflows, and supporting teams as they adopt intelligent automation and advanced analytics. The ideal candidate has hands‑on experience with Azure Machine Learning, LLMs, and real‑world model deployment.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design and implement AI/ML models using Azure Machine Learning, Azure OpenAI, and Cognitive Services.</li><li>Develop and maintain solutions such as chatbots, document intelligence pipelines, predictive analytics, and NLP workloads.</li><li>Partner with data engineers, analysts, and technical stakeholders to identify and evaluate opportunities for AI integration.</li><li>Deploy, monitor, and maintain models using Azure ML pipelines, Azure Functions, and Azure Kubernetes Service (AKS).</li><li>Ensure all AI solutions meet organizational standards related to security, compliance, performance, and governance.</li><li>Document models, workflows, APIs, and deployment processes for scalability and internal knowledge sharing.</li><li>Stay current with emerging AI trends, tools, and best practices relevant to enterprise environments.</li></ul><p><br></p>
<p><strong>Overview</strong></p><p> We are seeking a <strong>Power BI Developer</strong> to serve as the technical owner of the organization’s Power BI and analytics environment. This role focuses on platform ownership, data modeling, governance, and maintaining a stable analytics ecosystem rather than primarily building reports. The ideal candidate is a hands-on technical leader with strong experience in <strong>Power BI, SQL, and ETL processes</strong>, who can support the analytics platform while collaborating with data engineers, business stakeholders, and cross-functional teams.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Serve as the technical lead and primary owner of the Power BI analytics environment.</li><li>Act as the first point of contact for platform issues and ensure ongoing system stability and performance.</li><li>Manage and maintain security, access controls, and governance across Power BI workspaces and datasets.</li><li>Monitor and support development and production analytics environments.</li><li>Partner with data engineering and IT teams to understand upstream data pipelines and ensure data is structured for reporting and analytics.</li><li>Design, build, and maintain semantic and data models within Power BI.</li><li>Develop and optimize complex <strong>SQL queries</strong> and <strong>ETL processes</strong> to support analytics and reporting needs.</li><li>Build scalable Power BI data models using <strong>star schema and dimensional modeling</strong> best practices.</li><li>Ensure consistent documentation, deployment standards, and version control across analytics assets.</li><li>Communicate effectively with business users and stakeholders to support reporting and data needs.</li></ul>
<p><strong>Data Engineer / Java Developer IV (AWS, Microservices, Spring Boot)</strong></p><p>46-Week Contract</p><p>Hybrid | Philadelphia, PA</p><p><strong>Job Summary</strong></p><p>The Senior Java Developer will design, build, and support cloud‑based microservices using Java and AWS. This role focuses on developing scalable, secure solutions, supporting DevOps and CI/CD practices, and collaborating with cross‑functional teams to deliver high‑quality software in an Agile environment.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, test, and maintain <strong>Java‑based microservices</strong> using <strong>Spring Boot</strong> and AWS.</li><li>Build and support <strong>cloud‑native solutions</strong> with an emphasis on scalability, performance, and security.</li><li>Contribute to <strong>DevOps and CI/CD pipelines</strong>, including source control, automation, monitoring, and deployment practices.</li><li>Troubleshoot production issues and drive continuous improvements across platform reliability and performance.</li><li>Collaborate with architects, product managers, and engineering teams to translate requirements into technical solutions.</li><li>Promote and apply <strong>software engineering best practices</strong> within an Agile development environment.</li></ul><p><br></p>
We are looking for a dedicated Systems Engineer to manage and maintain a multi-node Linux server environment, supporting instructional and research activities. This role involves ensuring the reliability and performance of IT infrastructure, providing technical expertise for Linux systems, and collaborating with stakeholders to meet specialized computing needs. The ideal candidate will play a key role in optimizing and securing IT solutions while documenting workflows and procedures to uphold operational excellence.<br><br>Responsibilities:<br>• Administer and maintain a multi-node Linux server environment, including associated workstations used for teaching and research.<br>• Troubleshoot and resolve complex Linux server and workstation issues, utilizing tools like Ansible for automation and configuration management.<br>• Oversee the operation of a small data center, ensuring uninterrupted support for engineering courses and research activities.<br>• Perform system performance tuning, security hardening, and monitoring to ensure optimal operation and reliability.<br>• Implement and document workflows, procedures, and technical standards to enhance system continuity and reliability.<br>• Collaborate with faculty, researchers, and technical staff to address specialized computing requirements.<br>• Build, configure, and document IT infrastructure to align with best practices and service level objectives.<br>• Monitor and analyze performance metrics, identifying areas for improvement and ensuring system efficiency.<br>• Serve as a technical liaison, providing support and maintaining communication with internal and external stakeholders.<br>• Develop and implement robust and secure IT solutions tailored to the needs of the organization.
<p><strong>Overview</strong></p><p>Our client is seeking a Senior Software Engineer to add to their team as they continue building out integrations on a weekly basis. This is a 90-day contract-to-hire position and is 100% remote. The client can only hire in these approved states: Florida, Georgia, Iowa, Kentucky, Maryland, Michigan, Missouri, North Carolina, Nebraska, New York, Ohio, Pennsylvania, South Carolina, South Dakota, Tennessee, Texas, Virginia, Washington, Wisconsin, or West Virginia.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Architect and Implement Integrations Framework: Develop a scalable and resilient integrations framework that prioritizes ETL techniques and data pipeline efficiency. </li><li>Technical Leadership & Mentorship: Lead and mentor a team of 3 engineers, promoting a culture of extreme ownership, accountability, and clear, effective communication. </li><li>Develop Data Integrations: Design and develop robust integrations with third-party systems, emphasizing data extraction, transformation, and loading combined with API-driven approaches. </li><li>Establish Best Practices: Define and enforce best practices for integration design, development, documentation, and open team communication. </li><li>Collaborate with Stakeholders: Work closely with product managers, engineering teams, and other stakeholders, ensuring alignment with business objectives through transparent and proactive communication. </li><li>Oversee Project Delivery: Manage end-to-end delivery of integration projects, ensuring timely completion and accountability at every stage. </li><li>Drive Innovation: Lead initiatives to innovate our integration strategies and technologies, continuously improving our data handling and ETL processes.</li></ul>
Lead Power BI Developer<br>Overview<br>We are seeking a Lead Power BI Developer to serve as the technical owner of our Power BI and analytics environment during a period of divestiture and transformation. This role will focus on platform ownership, security, data modeling, and operational stability rather than immediate report development. The ideal candidate is hands-on, highly technical, and comfortable acting as the first line of support for the Power BI/Fabric ecosystem while partnering closely with data engineers, business users, and offshore teams.<br>________________________________________<br>Key Responsibilities<br>• Serve as the technical lead and primary owner of the Microsoft Power BI environment.<br>• Act as the first line of support for platform issues, including after-hours support when required.<br>• Maintain and enforce security, access controls, and governance across workspaces and datasets.<br>• Manage and monitor environments across Health and Production.<br>• Partner with data engineers to understand upstream data pipelines and ensure analytics-ready data.<br>• Design, build, and maintain semantic models, including complex DAX calculations.<br>• Create and manage Power BI data models (star schema, dimensional modeling, performance optimization).<br>• Oversee platform stability during divestiture and organizational changes.<br>• Communicate effectively with business users, stakeholders, and offshore resources.<br>• Drive best practices around documentation, deployment, and version control.<br>________________________________________<br>Required Qualifications<br>• 5–6+ years of experience working with Power BI in an enterprise environment.<br>• Mandatory experience with Microsoft Fabric.<br>• Strong data modeling and DAX expertise.<br>• Solid understanding of data engineering concepts and how data flows from source to analytics.<br>• Experience managing security, row-level security (RLS), and workspace governance.<br>• Proven ability to work hands-on 
while also providing technical leadership.<br>• Excellent communication and stakeholder management skills.<br>________________________________________<br>Preferred Qualifications<br>• Experience supporting large-scale analytics platforms during mergers, divestitures, or system separations.<br>• Familiarity with CI/CD for Power BI or Fabric.<br>• Background in cloud-based data platforms and modern data stacks.
<p>We are looking for a skilled Data Warehouse Engineer to join our team in Malvern, Pennsylvania. This Contract-to-Permanent position offers the opportunity to work with cutting-edge data technologies and contribute to the optimization of data processes. The ideal candidate will have a strong background in Azure and Snowflake, along with experience in data integration and production support. This role requires four days onsite per week, non-negotiable. Please apply directly if you're interested.</p><p><br></p><p>Responsibilities:</p><p>• Develop, configure, and optimize Snowflake-based data solutions to meet business needs.</p><p>• Utilize Azure Data Factory to design and implement efficient ETL processes.</p><p>• Provide production support by monitoring and managing data workflows and tasks.</p><p>• Extract and analyze existing code from Talend to facilitate system migrations.</p><p>• Stand up and configure data repository processes to ensure seamless performance.</p><p>• Collaborate on the migration from Talend to Azure Data Factory, providing expertise on best practices.</p><p>• Leverage Python scripting to enhance data processing and automation capabilities.</p><p>• Apply critical thinking to solve complex data challenges and support transformation initiatives.</p><p>• Maintain and improve Microsoft Fabric-based solutions for data warehousing.</p><p>• Work within the context of financial services, ensuring compliance with industry standards.</p>
We are looking for a skilled and experienced Senior Software Engineer to join our team in Atlanta, Georgia. In this role, you will leverage your technical expertise to design, develop, and implement innovative solutions using advanced technologies. You will collaborate with cross-functional teams to integrate artificial intelligence concepts into enterprise applications, ensuring high-quality deliverables that meet business objectives.<br><br>Responsibilities:<br>• Design and develop full-stack software solutions using .NET, React.js, TypeScript, and Python.<br>• Implement AI technologies such as LLMs, Azure OpenAI, LangChain, and vector databases in application development.<br>• Collaborate with stakeholders to translate complex AI concepts into actionable solutions for both technical and non-technical teams.<br>• Lead and mentor team members, providing guidance on development best practices and AI implementation strategies.<br>• Manage cloud-based systems with a focus on Azure, ensuring scalability and reliability.<br>• Develop CI/CD pipelines and optimize DevOps processes to improve deployment efficiency.<br>• Conduct prompt engineering and integrate AI frameworks to enhance functionality.<br>• Ensure code quality and maintainability through rigorous testing and review processes.<br>• Stay updated on emerging technologies and recommend advancements to improve product offerings.<br>• Drive architectural decisions for cloud-native applications to align with business goals.
We are seeking a Senior Software Engineer – AI Solutions to help design and implement AI-driven capabilities within an established enterprise SaaS environment. This is a hands-on role for a strong full-stack engineer with practical LLM experience who can contribute at both the architectural and implementation levels. <br> In this position, you will support the evolution of AI-enabled product features, integrate large language models into existing systems, and help move initiatives from concept through production deployment. You’ll contribute to system design discussions, apply sound AI development practices, and build scalable, maintainable services that align with established enterprise standards. <br> You will collaborate with a small, focused engineering team in a highly interactive environment and report directly to engineering leadership.
We are looking for an experienced Principal Advanced Analytics Engineer to join our team in Reston, Virginia. In this role, you will collaborate with business leaders and technical experts to design and implement cutting-edge AI and machine learning solutions that drive impactful insights and decision-making. This position offers the opportunity to lead advanced analytics initiatives, establish engineering standards, and mentor team members while staying at the forefront of emerging technologies.<br><br>Responsibilities:<br>• Collaborate with stakeholders to design and implement AI/ML solutions tailored to business objectives.<br>• Develop and enforce engineering standards, design patterns, and quality controls for analytics projects.<br>• Build and manage CI/CD pipelines for machine learning models, incorporating automated testing, monitoring, and retraining processes.<br>• Integrate Responsible AI practices to ensure fairness, bias detection, explainability, and compliance with regulations.<br>• Create and maintain enterprise data models, including dimensional and star schemas, to support analytics and business intelligence.<br>• Work closely with analytics teams to deliver scalable and efficient data solutions.<br>• Implement robust data security measures, access controls, and lineage tracking in line with corporate policies and privacy laws.<br>• Stay updated on advancements in AI, ML, and BI technologies, bringing innovative solutions to the team.<br>• Provide mentorship to Data Analytics Engineers, fostering attention to detail and promoting best practices.
<p>We are looking for an experienced Application Support Engineer to join our client's team in North Florida. In this role, you will be responsible for maintaining and optimizing system integrations between administrative platforms and third-party tools, ensuring seamless data exchanges and functionality. This is a contract-to-permanent position, offering an exciting opportunity to collaborate with technical teams and stakeholders to enhance system performance.</p><p><br></p><p>Responsibilities:</p><p>• Diagnose and resolve integration issues across administrative systems, including student and departmental platforms.</p><p>• Provide advanced technical support for integrations involving both cloud-based and on-premise systems, such as Salesforce and Oracle Integration Cloud.</p><p>• Monitor and optimize data exchanges and system interoperability, escalating complex challenges when necessary.</p><p>• Configure and document middleware, APIs, and data connectors to ensure secure and reliable integrations.</p><p>• Plan and oversee upgrades and enhancements to integrated platforms, addressing dependencies and downstream effects.</p><p>• Develop and troubleshoot automation workflows to meet evolving business and academic requirements.</p><p>• Coordinate and support Salesforce platform updates, ensuring compatibility with customizations and integrations.</p><p>• Create and maintain workflows within Salesforce to enhance institutional processes and resolve issues.</p><p>• Communicate the status of system enhancements, upgrades, and open issues to campus stakeholders.</p><p>• Perform additional duties as assigned by leadership, contributing to the overall success of the technology team.</p>
<p><strong>Location:</strong> Hybrid — <em>2 days per month on-site in New Hampshire</em></p><p><strong>Employment Type:</strong> Full-Time</p><p><strong>About the Role</strong></p><p>We’re seeking a talented <strong>Software Engineer</strong> with deep experience in <strong>Oracle APEX</strong> and <strong>PL/SQL</strong>. You should also have a strong background integrating third-party applications like <strong>Salesforce</strong>. This role is ideal for someone who enjoys collaborating with cross-functional teams, designing scalable solutions, and enhancing business systems through thoughtful engineering and integrations.</p><p><br></p><p>As part of our team, you’ll play a key role in building and maintaining applications that drive critical business workflows. You’ll leverage your Oracle APEX expertise to architect solutions and your integration experience to ensure smooth data flows between platforms.</p><p>This is a <strong>hybrid position</strong>, requiring <strong>two days per month on-site in New Hampshire</strong> for team collaboration, planning, or project workshops.</p><p><br></p><p><strong>Key Responsibilities</strong></p><ul><li>Design, develop, and maintain applications using <strong>Oracle Application Express (APEX)</strong>.</li><li>Build, optimize, and troubleshoot <strong>integrations with third-party systems</strong>, including Salesforce and other enterprise platforms.</li><li>Develop APIs, data pipelines, and middleware solutions to support seamless cross-system communication.</li><li>Collaborate with business stakeholders to gather requirements and translate them into technical specifications.</li><li>Ensure application performance, security, and reliability through best practices.</li><li>Participate in code reviews, testing, deployment, and documentation of software solutions.</li><li>Support ongoing enhancements, bug fixes, and system improvements.</li></ul><p><strong>Required Qualifications</strong></p><ul><li><strong>Hands-on 
experience with Oracle APEX</strong> development.</li><li>Proven experience designing and implementing <strong>Salesforce integrations</strong> (REST/SOAP APIs, middleware tools, or direct platform integration).</li><li>Strong proficiency with <strong>SQL, PL/SQL</strong>, and Oracle database structures.</li><li>Experience working with APIs, integration frameworks, and data transformation workflows.</li><li>Solid understanding of software development best practices, including version control, testing, and documentation.</li><li>Excellent analytical, troubleshooting, and communication skills.</li><li>Ability to work in a hybrid environment and be on-site in New Hampshire <strong>twice per month</strong>.</li></ul><p><strong>Preferred Qualifications</strong></p><ul><li>Experience with additional integration platforms (e.g., MuleSoft, Boomi, Workato).</li><li>Background working in enterprise environments or supporting mission-critical systems.</li><li>Familiarity with Agile methodologies.</li><li>Knowledge of secure coding practices and data governance.</li></ul>
<ul><li>Develop and maintain backend services and applications using C# and the .NET platform.</li><li>Design, write, and optimize complex SQL queries, stored procedures, and database schemas to support high-performance, data-intensive workflows.</li><li>Build and maintain APIs and service layers that integrate with relational databases and downstream systems.</li><li>Troubleshoot, analyze, and resolve application and data-related issues, including query performance and data integrity concerns.</li><li>Apply best practices for testing, version control, and deployment to ensure stability and reliability.</li><li>Collaborate with product, business, and technical stakeholders to translate data and system requirements into scalable technical solutions.</li><li>Contribute to architectural decisions, code reviews, and documentation to improve system quality and maintainability.</li></ul>
We are looking for a Salesforce Application Support Engineer to join our team on a long-term contract basis in Palm Beach Gardens, Florida. In this role, you will be responsible for ensuring the optimal functionality of Salesforce solutions while collaborating with cross-functional teams to address technical challenges and streamline processes. The ideal candidate will bring expertise in Salesforce development, problem-solving capabilities, and a proactive approach to maintaining data accuracy across systems.<br><br>Responsibilities:<br>• Develop and implement scalable solutions using JavaScript and Apex within the Salesforce platform.<br>• Collaborate with stakeholders to translate business requirements, particularly in the legal domain, into technical designs.<br>• Configure and modify Salesforce settings, including permissions and record updates, to meet organizational needs.<br>• Manage escalations and resolve issues using multiple ticketing systems.<br>• Ensure the accuracy and integrity of data across all Salesforce deliverables.<br>• Build and maintain integrations between Salesforce and third-party systems to enhance functionality.<br>• Provide technical support for production issues and troubleshoot problems efficiently.<br>• Utilize Salesforce DevOps tools to streamline development and deployment processes.<br>• Contribute to the ongoing evolution of analytics and reporting by leveraging Salesforce data.
<p>We are seeking a reliable, business-aligned Full Stack Delivery Engineer to take ownership of critical systems and ensure consistent, predictable delivery across our technology stack.</p><p><br></p><p>This role is ideal for a developer who values shipping working software, collaborating across disciplines, and operating within real-world constraints. You will work on production systems that directly impact operations, revenue, and customer experience.</p><p><br></p><p>We are not looking for a “10x developer”. We are looking for someone who finishes what they start, documents what they build, and treats deadlines seriously.</p><p><br></p><p><strong>What Success Looks Like in This Role</strong></p><ul><li>Features ship when expected</li><li>Progress is visible weekly</li><li>Estimates are conservative and reliable</li><li>Systems are understandable by others</li><li>No single person becomes a bottleneck</li></ul><p><strong>Key Responsibilities</strong></p><p><strong>Delivery & Execution</strong></p><ul><li>Deliver production-ready features tied to clear milestones and acceptance criteria</li><li>Break work into small, testable, demoable increments</li><li>Communicate risks early and adjust plans based on reality—not optimism</li></ul><p><strong>Front-End Development</strong></p><ul><li>Build and maintain responsive front-end applications using modern frameworks such as React or Next.js</li><li>Integrate front-end components cleanly with backend APIs</li><li>Prioritize usability and operational clarity over visual perfection</li></ul><p><strong>Back-End & Systems Integration</strong></p><ul><li>Build and maintain backend services and APIs (Node.js / TypeScript; Rust familiarity is a plus)</li><li>Integrate with third-party services including Stripe, shipping providers, and messaging systems</li><li>Work with inventory, asset tracking, and operational data systems</li></ul><p><strong>Data & Infrastructure</strong></p><ul><li>Design and work with relational databases 
(PostgreSQL / MySQL preferred)</li><li>Write safe, understandable data migrations</li><li>Support logging, monitoring, and observability for production systems</li></ul><p><strong>Collaboration & Accountability</strong></p><ul><li>Work closely with operations, product, and leadership—not just engineers</li><li>Document APIs, assumptions, and system behavior</li><li>Participate in code reviews with an emphasis on clarity and maintainability</li><li>Accept review and feedback without defensiveness</li></ul><p><br></p>
We are looking for a highly experienced Senior Machine Learning Engineer to join our team in Boston, Massachusetts. In this role, you will design, develop, and deploy cutting-edge machine learning systems that solve complex problems and scale effectively in production environments. This position offers an exciting opportunity to contribute to impactful projects, leveraging your expertise in machine learning, cloud infrastructure, and data engineering.<br><br>Responsibilities:<br>• Build and deploy machine learning models and solutions for production environments, ensuring they meet scalability and performance standards.<br>• Design and implement comprehensive ML pipelines, including data ingestion, feature engineering, model training, evaluation, and serving.<br>• Write clean, efficient code in Python and leverage its ML ecosystem, such as TensorFlow, PyTorch, and scikit-learn.<br>• Work with large datasets to extract meaningful insights and develop complex queries using modern data processing tools.<br>• Utilize containerization technologies like Docker and cloud platforms such as AWS to ensure robust and scalable deployment.<br>• Apply MLOps best practices, including CI/CD pipelines, automated testing, and performance monitoring, to maintain reliable machine learning systems.<br>• Conduct research and apply deep machine learning and AI techniques, including statistical modeling and large language models.<br>• Solve complex analytical problems with pragmatic engineering approaches while maintaining scientific rigor.<br>• Collaborate with cross-functional teams to align machine learning solutions with business goals and mission-driven objectives.<br>• Monitor and address issues like data drift and model performance to ensure continuous improvement and reliability.
<p>We’re looking for a skilled <strong>BI Developer</strong> with strong experience in <strong>Power BI</strong> and <strong>Microsoft Fabric</strong> to support high‑visibility analytics initiatives. You’ll build dashboards, data models, and reporting solutions that drive real‑time insights across the organization.</p><p><strong>What You’ll Do</strong></p><ul><li>Develop and enhance <strong>Power BI dashboards</strong>, reports, and visualizations</li><li>Build and maintain data models using <strong>Microsoft Fabric</strong> (Lakehouse, pipelines, etc.)</li><li>Work with real‑time or frequently refreshed datasets</li><li>Collaborate with data engineers, analysts, and business stakeholders</li><li>Tune DAX, queries, and models for performance and scalability</li><li>Ensure data accuracy, consistency, and reliability across all reporting assets</li></ul><p><br></p>
<p>We are looking for a skilled Principal Systems Engineer to join our team in Chicago, Illinois. This role requires a strong understanding of AI platforms, cloud technologies, and modern infrastructure tools. The ideal candidate will play a crucial role in designing, implementing, and maintaining systems to support cutting-edge financial services.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement AI platforms and models to optimize business processes.</p><p>• Manage cloud environments including Microsoft Azure, AWS, and Google Cloud Platform.</p><p>• Develop and maintain Infrastructure as Code solutions using tools like Terraform and Ansible.</p><p>• Oversee containerization efforts with technologies such as Docker and Kubernetes.</p><p>• Implement and maintain data platforms such as Azure Synapse and Databricks.</p><p>• Configure and monitor backup solutions, including Commvault.</p><p>• Manage virtual environments using hypervisors like VMware vSphere.</p><p>• Utilize monitoring and logging tools such as CloudWatch and Prometheus for system analysis.</p><p>• Write and maintain scripts using languages like PowerShell, Python, and Bash.</p><p>• Collaborate across teams to ensure seamless integration of SaaS solutions.</p>
Must have skills: <br>• 3–6 years of professional software engineering experience, with a strong portfolio of full stack development work. <br>• Proficiency in Python, including experience with web frameworks such as Flask or Dash. <br>• Experience integrating frontend applications with RESTful APIs and backend services. <br>• Experience working with relational and non-relational databases (SQL, MongoDB, and/or Snowflake) from Python. <br>• Designing data models for effective data storage and retrieval (preferably SQL, MongoDB, Snowflake). <br>• Debugging, issue resolution, and troubleshooting. <br>• Developing systems integrated with cloud services, such as storage or secrets management (preferably AWS). <br>• Designing and troubleshooting ETL pipelines. <br>• Developing REST APIs using Python frameworks (preferably Flask). <br>• Publishing and maintaining Python packages and building Python CLI tools. <br>• Deploying REST APIs in containerized environments (Kubernetes) and working with other developers on the team to integrate those APIs with web applications. <br> <br>Nice to have skills: <br>• Exposure to financial systems, SEC API, and/or corporate credit modeling is strongly preferred. <br>• Familiarity with UX design tools (Figma) and a solid understanding of the design-engineering hand-off process. <br>• Familiarity with deployment pipelines and CI/CD tools (preferably GitLab). <br>• Configuring observability and alerting services (preferably Datadog and Opsgenie). <br>• Containerized development and deployment (e.g., Docker, Kubernetes). <br>• Writing infrastructure as code (preferably Terraform). <br>• Integrating managed authentication services (preferably Auth0). <br>• Familiarity with LLM document parsing and data framework services (preferably LlamaParse and LlamaIndex). <br>• Familiarity with LLM observability tooling (preferably Weave). <br>• Experience with the OpenAI SDK. <br>• Experience with vector databases (preferably MongoDB).
<p>We are looking for a hands‑on AI/ML Engineer to join our Automation and Analytics team and help design, build, and scale AI‑driven solutions that boost decision‑making and operational efficiency across the organization. In this role, you’ll take the lead in developing generative AI applications, RAG (Retrieval-Augmented Generation) architectures, intelligent agents, and traditional machine learning models—transforming complex business challenges into scalable, production‑ready systems.</p><p>This is a highly collaborative position where you’ll partner with data engineers, analysts, and key business stakeholders to rapidly prototype solutions, establish best practices, and help upskill the broader team. We’re seeking someone with strong experience in LLM orchestration, prompt engineering, vector databases, model evaluation, MLOps, and secure enterprise deployment patterns.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><p>· Design, build, and deploy generative AI applications, RAG pipelines, and intelligent agents.</p><p>· Develop and maintain machine learning models that address real business problems.</p><p>· Translate business requirements into scalable, production-ready AI systems.</p><p>· Collaborate with data engineers, analysts, and business stakeholders to prototype and refine solutions.</p><p>· Implement best practices for model evaluation, monitoring, governance, and compliance.</p><p>· Support and improve MLOps processes for reliable deployment and lifecycle management.</p>
We are looking for a skilled Artificial Intelligence (AI) Engineer to join our team in Tampa, Florida. This role offers the opportunity to design and implement innovative AI solutions while collaborating with cross-functional teams to drive impactful results. As a contract-to-permanent position, this role provides a pathway to long-term growth and development within our organization.<br><br>Responsibilities:<br>• Build, train, and refine machine learning models using frameworks such as TensorFlow, PyTorch, Scikit-Learn, and Keras.<br>• Integrate AI-driven solutions into existing on-premises applications to enhance functionality and performance.<br>• Explore and experiment with large language models (LLMs) and agent-based coding tools to optimize internal automation and analytics workflows.<br>• Process, engineer, and evaluate data from diverse internal sources, including structured and unstructured datasets.<br>• Collaborate with teams across departments to ensure compliance with Criminal Justice Information Systems (CJIS) and Personally Identifiable Information (PII) standards.<br>• Partner with analysts, investigators, and IT staff to identify opportunities where AI can provide operational improvements.<br>• Participate in code reviews and testing processes to ensure high-quality deliverables.<br>• Stay updated on emerging AI technologies and prototype new tools while adhering to data governance standards.<br>• Contribute to the continuous improvement of AI systems and processes by identifying areas for innovation and optimization.
We are looking for a highly skilled Senior Software Engineer to join our team in Edgewood, New York. In this role, you will play a pivotal part in designing and developing backend systems, managing infrastructure, and ensuring the reliability of production operations. Your expertise will contribute to building scalable solutions and improving engineering practices while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain backend services and APIs using programming languages such as Python, Go, or TypeScript.<br>• Design and optimize infrastructure using Infrastructure-as-Code tools like Terraform.<br>• Create and manage CI/CD pipelines with tools such as GitHub Actions to streamline deployment processes.<br>• Operate and enhance cloud infrastructure on Google Cloud Platform to improve system reliability and efficiency.<br>• Monitor production systems, troubleshoot issues, and conduct root-cause analysis to ensure operational stability.<br>• Design and manage database schemas and queries using PostgreSQL or similar technologies to support data integration.<br>• Collaborate with product, operations, and engineering teams to refine technical designs and enhance platform capabilities.<br>• Implement modern development tools, including AI-assisted solutions, to improve engineering workflows.<br>• Contribute to improving system scalability, reliability, and performance across services.<br>• Participate in the development of engineering standards and best practices to drive excellence.
<p>We are looking for an experienced Systems Engineer to oversee and maintain a complex Linux server environment in New Haven, Connecticut. This role involves ensuring the reliability and performance of IT infrastructure supporting engineering courses and research activities. </p><p><br></p><p>Responsibilities:</p><p>• Administer and manage a multi-node Linux server environment, ensuring its seamless operation.</p><p>• Diagnose and resolve advanced technical issues related to Linux servers and workstations, including automation and configuration management using Ansible.</p><p>• Monitor and maintain the performance of a small data center, ensuring uninterrupted service for research and educational activities.</p><p>• Provide expert support for Linux systems, focusing on performance optimization, security enhancements, and system monitoring.</p><p>• Develop, document, and implement workflows, procedures, and technical standards to uphold operational excellence.</p><p>• Collaborate with faculty and researchers to understand and fulfill specific computing requirements.</p><p>• Configure and support IT infrastructure, ensuring compliance with established standards and objectives.</p><p>• Analyze performance metrics and provide actionable insights to improve system efficiency.</p><p>• Implement secure IT solutions and ensure their reliable integration into existing infrastructure.</p><p>• Act as a technical liaison, fostering collaboration with internal and external stakeholders.</p>