<p>The Senior Data Engineer plays a key role in architecting, developing, and operating reliable, production-ready data solutions that enable analytics, automation, and operational processes across our client’s organization.</p><p><br></p><p>Operating within a modern, cloud-based data ecosystem, this role is responsible for bringing together data from internal platforms and external partners, transforming it into trusted, high-quality assets, and delivering it consistently to downstream users and systems. The work spans the full data lifecycle, from ingestion and orchestration through transformation and delivery, and blends advanced SQL development with Python-based pipeline and workflow automation.</p><p><br></p><p>This role sits at the intersection of data and systems engineering and works closely with Business Intelligence, Business Technology, and operational teams to ensure data solutions are scalable, dependable, and aligned with real business outcomes.</p>
We are looking for a talented Data Engineer to join our team in Grand Rapids, Michigan. In this role, you will focus on designing, building, and optimizing robust data solutions using Snowflake and other cloud-based technologies. You will work closely with business intelligence and analytics teams to deliver scalable, high-performance data pipelines that support organizational goals.<br><br>Responsibilities:<br>• Design and implement scalable data models, schemas, and tables within Snowflake, including staging, integration, and presentation layers.<br>• Develop and optimize data pipelines using Snowflake tools such as Snowpipe, Streams, Tasks, and stored procedures.<br>• Ensure data security and controlled access through role-based access controls and best practices for data sharing.<br>• Build and maintain ETL pipelines leveraging tools like dbt, Matillion, Fivetran, Informatica, or Azure-native solutions.<br>• Integrate data from diverse sources such as APIs, IoT devices, and NoSQL databases to create unified datasets.<br>• Enhance performance by utilizing clustering, partitioning, caching, and efficient warehouse sizing strategies.<br>• Work with cloud platforms such as AWS, Azure, or Google Cloud to support Snowflake infrastructure and operations.<br>• Implement automated workflows and CI/CD processes for seamless deployment of data solutions.<br>• Maintain high standards for data accuracy, completeness, and reliability while supporting governance and documentation.<br>• Work closely with analytics, reporting, and business teams to troubleshoot issues and deliver scalable solutions.
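<p>To make the Snowpipe/Streams/Tasks pattern above concrete, here is a minimal sketch of an incremental load driven from Python with the snowflake-connector-python package. All object names (STG_ORDERS, PRES_ORDERS, the warehouse and task names) and connection settings are illustrative placeholders, not a prescribed design.</p><pre><code># Sketch: incremental load with a Snowflake Stream + Task, driven from Python.
# Connection parameters and object names are placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

cur = conn.cursor()
# Capture row-level changes on the staging table.
cur.execute("CREATE STREAM IF NOT EXISTS STG_ORDERS_STREAM ON TABLE STG_ORDERS")
# A task merges new rows into the presentation layer every 15 minutes,
# but only when the stream actually has data.
cur.execute("""
    CREATE TASK IF NOT EXISTS LOAD_PRES_ORDERS
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '15 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('STG_ORDERS_STREAM')
    AS
      MERGE INTO PRESENTATION.PRES_ORDERS t
      USING STG_ORDERS_STREAM s ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.updated_at = s.updated_at
      WHEN NOT MATCHED THEN INSERT (order_id, amount, updated_at)
        VALUES (s.order_id, s.amount, s.updated_at)
""")
cur.execute("ALTER TASK LOAD_PRES_ORDERS RESUME")  # tasks are created suspended
conn.close()
</code></pre>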
<p>Robert Half has a brand-new opening for a Data Engineer with a reputable client here in Tampa.</p><p>Full-time position, HYBRID schedule out of their Tampa office.</p><p>Compensation ranges from $100K to $115K depending on experience.</p><p>*Medical benefits are also 100% covered after the onboarding period*</p><p><br></p><p>This is a Data Engineer (BI/ETL) role focused on building and optimizing ETL/ELT pipelines, migrating and cleaning data between internal, vendor, and legacy systems, and improving data quality. SQL is absolutely required, and the role leans heavily into backend data movement, not dashboarding.</p><p><br></p><p><strong>Top Skills We're Looking For:</strong></p><ul><li>Strong <strong>SQL</strong> (non-negotiable)</li><li>Experience designing and maintaining <strong>ETL/ELT pipelines</strong> using frameworks such as <strong>Apache Airflow, dbt (Data Build Tool), or equivalent orchestration systems</strong>, with the ability to schedule, monitor, and recover complex multi-stage jobs.</li><li><strong>Experience moving data across multiple systems</strong></li></ul><p>Description:</p><p>Build and maintain business intelligence solutions covering law enforcement, detention, human resources, and finance, and integrate data from agency criminal justice partners.</p><p>• Design and develop BI solutions.</p><p>• Gather user requirements, develop technical and functional requirements, produce reporting solutions, and document the design and development process, metadata, and business rules.</p><p>• Model, implement, and maintain databases and data marts to support BI reporting.</p><p>• Develop extract, transform, load (ETL) processes to support the loading of data into data marts.</p><p>• Monitor the data quality of existing databases and data marts, and recommend governance and controls around self-service BI/analytics in line with evolving industry best practices.</p><p>• Perform other related duties as required.</p>
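<p>As a rough illustration of the orchestration experience called out above, the following is a minimal Apache Airflow (2.x) sketch of a multi-stage ETL job with a schedule and automatic retries. The DAG id, callables, and schedule are hypothetical, not taken from the client's environment.</p><pre><code># Sketch: a minimal multi-stage Airflow DAG with scheduling and retry/recovery.
# All names are illustrative only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    ...  # pull rows from the vendor/legacy system

def transform(**context):
    ...  # clean and conform the extracted data

def load(**context):
    ...  # upsert into the warehouse

with DAG(
    dag_id="vendor_to_warehouse",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t
</code></pre>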
We are looking for an experienced Senior Data Engineer with a strong background in Python and modern data engineering tools to join our team in West Des Moines, Iowa. This is a long-term contract position that requires expertise in designing, building, and optimizing data pipelines and working with cloud-based data warehouses. If you thrive in a collaborative environment and have a passion for transforming raw data into actionable insights, we encourage you to apply.<br><br>Responsibilities:<br>• Develop, debug, and optimize Python-based data pipelines and services, using web frameworks such as Flask, Django, or FastAPI where APIs are required.<br>• Design and implement data transformations in a data warehouse using tools like dbt, ensuring high-quality analytics-ready datasets.<br>• Utilize Amazon Redshift and Snowflake for managing large-scale data storage and performing advanced querying and optimization.<br>• Automate data integration processes using platforms like Fivetran and orchestration tools such as Prefect or Airflow.<br>• Build reusable and maintainable data models to improve performance and scalability for analytics and reporting.<br>• Conduct data analysis and modeling using Python libraries such as NumPy, Pandas, TensorFlow, and PyTorch.<br>• Manage version control for data engineering projects using Git and GitHub.<br>• Ensure data quality through automated testing and validation processes.<br>• Document workflows, code, and data transformations following best practices for readability and maintainability.<br>• Optimize cloud-based data warehouse and lake platforms for performance and integration of new data sources.
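<p>For the Prefect-style orchestration mentioned above, a minimal extract-transform-load flow might look like the sketch below (Prefect 2.x API). The source path, cleanup rules, and output target are placeholders, not a real pipeline.</p><pre><code># Sketch: a Prefect flow orchestrating extract -> transform -> load with retries.
import pandas as pd
from prefect import flow, task

@task(retries=2, retry_delay_seconds=300)
def extract() -> pd.DataFrame:
    # Placeholder path; reading from S3 would also require the s3fs package.
    return pd.read_csv("s3://example-bucket/raw/orders.csv")

@task
def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["order_id"]).drop_duplicates("order_id")
    df["amount"] = df["amount"].astype(float)
    return df

@task
def load(df: pd.DataFrame) -> None:
    # In practice this step would COPY into Redshift or write to Snowflake.
    df.to_parquet("orders_clean.parquet")

@flow(log_prints=True)
def daily_orders():
    load(transform(extract()))

if __name__ == "__main__":
    daily_orders()
</code></pre>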
We are looking for a skilled Data Engineer with expertise in AI/ML technologies and prior experience in the oil and gas industry to join our team in Houston, Texas. In this Contract to permanent position, you will play a key role in transforming data into actionable insights through advanced analytics and innovative solutions. This opportunity is ideal for professionals who thrive in data-driven environments and excel at leveraging tools like Power BI and PowerApps.<br><br>Responsibilities:<br>• Develop and manage Power BI dashboards and reports to deliver meaningful insights from raw data.<br>• Utilize PowerApps to create and maintain applications that support business intelligence initiatives.<br>• Collaborate with cross-functional teams to understand data requirements and implement solutions.<br>• Analyze complex datasets to identify trends and patterns that inform decision-making.<br>• Ensure the accuracy, reliability, and security of data within BI systems.<br>• Optimize data pipelines and workflows for improved performance and scalability.<br>• Provide technical expertise to support AI/ML integration into existing data processes.<br>• Stay updated on emerging technologies and best practices in data engineering and AI/ML.<br>• Troubleshoot and resolve issues related to data tools and processes.<br>• Document processes, workflows, and methodologies for future reference.
<p>We are looking for a talented Data Engineer to join our team in Fort Lauderdale, Florida. This long-term contract position offers the opportunity to work on cutting-edge technologies and contribute to the development of efficient data pipelines and processes. The ideal candidate will have a strong background in data engineering and a passion for delivering high-quality solutions that drive business success.</p><p><br></p><p>Responsibilities:</p><p>• Design and implement scalable data pipelines using Snowflake, Python, and other relevant tools.</p><p>• Collaborate with stakeholders to gather and refine data requirements, ensuring alignment with business needs.</p><p>• Develop and maintain data models to support analytics, reporting, and operational processes.</p><p>• Optimize data warehouse performance by tuning queries and managing resources effectively.</p><p>• Ensure data quality through rigorous testing and governance protocols.</p><p>• Implement security and compliance measures to protect sensitive data.</p><p>• Research and integrate emerging technologies to enhance system capabilities.</p><p>• Support ETL processes for data extraction, transformation, and loading.</p><p>• Work with technologies such as Apache Spark, Hadoop, and Kafka to manage and process large datasets.</p><p>• Provide technical guidance and support to team members and stakeholders.</p>
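<p>As one hedged example of the Spark and Kafka work described above, the sketch below consumes a Kafka topic with Spark Structured Streaming and lands it in a staging area. The broker address, topic, schema, and paths are assumptions for illustration, and the job requires the Spark-Kafka connector package at deploy time.</p><pre><code># Sketch: Kafka -> Spark Structured Streaming -> staging files.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("occurred_at", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "app-events")                 # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/staging/app_events")
    .option("checkpointLocation", "/data/checkpoints/app_events")
    .start()
)
query.awaitTermination()
</code></pre>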
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This position offers the opportunity to work remotely while contributing to critical data management and integration efforts. The ideal candidate will have hands-on experience with customer master data in ECC6, and the ability to create, maintain, and manage data effectively.<br><br>Responsibilities:<br>• Develop and maintain customer master data within ECC6, ensuring data accuracy and consistency.<br>• Create new customer profiles and manage existing ones, maintaining high standards of data integrity.<br>• Support the integration process by working with custom tables related to customer data.<br>• Collaborate with cross-functional teams to ensure seamless data flow and effective data management.<br>• Utilize tools such as Apache Spark, Python, and ETL processes to extract, transform, and load data efficiently.<br>• Leverage Apache Hadoop for scalable data storage and processing solutions.<br>• Implement Apache Kafka to enable real-time data streaming and integration.<br>• Troubleshoot and resolve data-related issues, ensuring system reliability.<br>• Provide documentation and training to stakeholders on data management processes.<br>• Stay updated on industry best practices and emerging technologies to enhance data engineering workflows.
We are looking for a skilled Data Engineer to join our team in Wyoming, Michigan. This Contract to permanent role offers an exciting opportunity to design, manage, and optimize data architecture and engineering solutions across a dynamic healthcare organization. The ideal candidate will play a key role in ensuring efficient data governance and infrastructure performance while collaborating with cross-functional teams.<br><br>Responsibilities:<br>• Develop and maintain robust data architectures and frameworks, including relational and graph databases, to meet business objectives.<br>• Create and manage data pipelines to extract, transform, and load data from various sources into data warehouses.<br>• Ensure data governance policies are implemented and monitored, including retention and backup protocols.<br>• Collaborate with teams across departments to translate business requirements into technical specifications.<br>• Monitor and optimize the performance of data assets, identifying opportunities for improvement.<br>• Design scalable and secure data solutions using cloud-based platforms like AWS and Microsoft Azure.<br>• Implement advanced tools and technologies, such as AI, to enhance data analytics and processing capabilities.<br>• Mentor and support team members by sharing technical expertise and providing guidance.<br>• Establish key performance indicators (KPIs) to measure database performance and drive continuous improvement.<br>• Stay up to date with emerging trends and advancements in data engineering and architecture.
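<p>Since the role above spans relational and graph databases, here is a small illustrative query using the official neo4j Python driver. The patient/provider data model, connection details, and credentials are hypothetical.</p><pre><code># Sketch: querying patient-provider relationships in a graph database.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def providers_for_patient(patient_id: str) -> list[str]:
    with driver.session() as session:
        result = session.run(
            "MATCH (p:Patient {id: $pid})-[:TREATED_BY]->(d:Provider) "
            "RETURN d.name AS name",
            pid=patient_id,
        )
        return [record["name"] for record in result]

print(providers_for_patient("P-1001"))  # placeholder patient id
driver.close()
</code></pre>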
We are looking for an experienced Data Engineer to join our team in Houston, Texas. In this role, you will design, implement, and optimize data systems that support critical business operations. The ideal candidate will have a strong technical background and a passion for creating efficient, scalable solutions.<br><br>Responsibilities:<br>• Develop and maintain scalable data pipelines to ensure efficient processing of large datasets.<br>• Design and implement data architectures that support high-performance data integration and analysis.<br>• Collaborate with cross-functional teams to gather requirements and deliver tailored data solutions.<br>• Build and manage ETL workflows to support data transformation and integration processes.<br>• Optimize data storage and processing techniques using tools such as Apache Hadoop and Apache Spark.<br>• Implement real-time data streaming solutions using Apache Kafka.<br>• Troubleshoot and resolve issues within the data infrastructure to maintain system reliability.<br>• Monitor system performance and suggest improvements to enhance data processing efficiency.<br>• Document processes and workflows to ensure clarity and consistency in data operations.<br>• Stay current with emerging technologies and industry trends to continually improve data engineering practices.
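<p>As a small sketch of the Kafka-based streaming responsibility above, the snippet below publishes JSON events with the kafka-python package; the broker address, topic, and payload are placeholders.</p><pre><code># Sketch: publishing pipeline events to Kafka for real-time consumers.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",  # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("pipeline-metrics", {"job": "orders_load", "rows": 15230, "status": "ok"})
producer.flush()  # block until the message is actually delivered
producer.close()
</code></pre>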
The Opportunity: Be part of a dynamic team that designs, develops, and optimizes data solutions supporting enterprise-level products across diverse industries. This role provides a clear track to higher-level positions, including Lead Data Engineer and Data Architect, for those who demonstrate vision, initiative, and impact.<br><br>Key Responsibilities:<br>• Design, develop, and optimize relational database objects and data models using Microsoft SQL Server and Snowflake.<br>• Build and maintain scalable ETL/ELT pipelines for batch and streaming data using SSIS and cloud-native solutions.<br>• Integrate and utilize Redis for caching, session management, and real-time analytics.<br>• Develop and maintain data visualizations and reporting solutions using Sigma Computing, SSRS, and other BI tools.<br>• Collaborate across engineering, analytics, and product teams to deliver impactful data solutions.<br>• Ensure data security, governance, and compliance across all platforms.<br>• Participate in Agile Scrum ceremonies and contribute to continuous improvement within the data engineering process.<br>• Support database deployments using DevOps practices, including version control (Git) and CI/CD pipelines (Azure DevOps, Flyway, Octopus, SonarQube).<br>• Troubleshoot and resolve performance, reliability, and scalability issues across the data platform.<br>• Mentor entry-level team members and participate in design/code reviews.
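<p>To illustrate the Redis caching responsibility above, here is a minimal read-through cache sketch using the redis-py client. The key scheme, TTL, and the stubbed warehouse query are assumptions for illustration only.</p><pre><code># Sketch: caching an expensive SQL Server/Snowflake lookup in Redis with a TTL.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def run_warehouse_query(day: str) -> dict:
    # Stand-in for the real warehouse query so the sketch is self-contained.
    return {"day": day, "orders": 0}

def get_daily_summary(day: str) -> dict:
    key = f"summary:{day}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                 # cache hit
    summary = run_warehouse_query(day)            # cache miss: compute
    cache.setex(key, 3600, json.dumps(summary))   # expire after one hour
    return summary

# Usage: get_daily_summary("2024-01-15")
</code></pre>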
We are looking for a skilled Data Engineer to join our team in Tampa, Florida. This is a Contract to permanent position, offering an excellent opportunity to contribute to innovative business intelligence solutions while advancing your career. The ideal candidate will have a strong background in data engineering, database design, and analytics, with the ability to solve complex problems and deliver high-quality results.<br><br>Responsibilities:<br>• Design and implement robust business intelligence solutions tailored to meet organizational needs.<br>• Collaborate with stakeholders to gather user requirements and translate them into technical and functional specifications.<br>• Create and maintain databases and data marts that support analytics and reporting activities.<br>• Develop and optimize ETL processes to efficiently load data into data marts.<br>• Monitor and ensure the accuracy, consistency, and quality of data within databases and reporting systems.<br>• Recommend and implement governance practices to improve self-service BI and analytics capabilities.<br>• Develop automated data validation checks to maintain data integrity and accuracy.<br>• Utilize dimensional modeling and star/snowflake schemas to design effective data warehouses.<br>• Troubleshoot and debug issues across application and database layers to ensure smooth operations.<br>• Perform exploratory data analysis to identify trends, anomalies, and areas for improvement.
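<p>For the automated data validation checks mentioned above, a lightweight sketch in pandas might look like the following; the thresholds and column names are illustrative, not a mandated standard.</p><pre><code># Sketch: automated validation checks run after each data mart load.
import pandas as pd

def validate_fact_table(df: pd.DataFrame) -> list[str]:
    errors = []
    if df["order_id"].duplicated().any():
        errors.append("duplicate surrogate keys in fact table")
    if df["amount"].lt(0).any():
        errors.append("negative amounts found")
    null_rate = df["customer_key"].isna().mean()
    if null_rate > 0.01:  # allow at most 1% unmatched dimension keys
        errors.append(f"customer_key null rate too high: {null_rate:.2%}")
    return errors

# Tiny self-check with sample data; real runs would read from the mart.
df = pd.DataFrame({"order_id": [1, 2], "amount": [10.0, 5.5], "customer_key": [7, 9]})
problems = validate_fact_table(df)
assert not problems, problems
</code></pre>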
We are looking for a skilled Data Engineer to join our team in Philadelphia, Pennsylvania. In this long-term contract position, you will play a key role in managing and optimizing large-scale data pipelines and systems within the healthcare industry. Your expertise will contribute to the development of robust solutions for data processing, analysis, and integration.<br><br>Responsibilities:<br>• Design, develop, and maintain large-scale data pipelines to support business needs.<br>• Optimize data workflows using tools such as Apache Spark and Python.<br>• Implement and manage ETL processes for seamless data transformation and integration.<br>• Collaborate with cross-functional teams to ensure data solutions align with organizational goals.<br>• Monitor and troubleshoot data systems to ensure consistent performance and reliability.<br>• Work with Apache Hadoop and Apache Kafka to enhance data storage and streaming capabilities.<br>• Ensure compliance with data security and privacy standards.<br>• Analyze and interpret complex datasets to provide actionable insights.<br>• Document processes and solutions to support future scalability and maintenance.
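<p>As a hedged illustration of the privacy-conscious Spark work described above, this batch sketch hashes direct identifiers before data leaves a restricted zone. The paths, column names, and the choice of one-way hashing are assumptions, not the organization's actual compliance controls.</p><pre><code># Sketch: a batch Spark job that de-identifies claims data for analytics use.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, sha2

spark = SparkSession.builder.appName("claims-etl").getOrCreate()

claims = spark.read.parquet("/restricted/claims/raw")  # placeholder path

deidentified = (
    claims
    .withColumn("member_id", sha2(col("member_id"), 256))  # one-way hash of PHI
    .drop("member_name", "member_ssn")                     # drop direct identifiers
)

deidentified.write.mode("overwrite").parquet("/analytics/claims/deidentified")
</code></pre>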
<p>We are looking for an experienced Data Engineer to join a dynamic team in Oklahoma City, Oklahoma. In this role, you will be a key contributor in developing, optimizing, and maintaining the data infrastructure that supports analytics, business intelligence, and data-driven decision-making, using Snowflake, Matillion, and other tools. The position is in-office to allow close collaboration with the team. No third parties, please.</p><p><br></p><p>Responsibilities:</p><p>• Design, develop, and maintain scalable data pipelines to support data integration and real-time processing.</p><p>• Implement and manage data warehouse solutions, with a strong focus on Snowflake architecture and optimization.</p><p>• Write efficient and effective scripts and tools using Python to automate workflows and enhance data processing capabilities.</p><p>• Work with SQL Server to design, query, and optimize relational databases in support of analytics and reporting needs.</p><p>• Monitor and troubleshoot data pipelines, resolving any performance or reliability issues.</p><p>• Ensure data quality, governance, and integrity by implementing and enforcing best practices.</p>
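<p>As one small example of the Python automation described above, this sketch loads a local extract into Snowflake with write_pandas from snowflake-connector-python. The file, table, and connection settings are placeholders.</p><pre><code># Sketch: automating a small file-to-Snowflake load in Python.
import os
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

df = pd.read_csv("daily_extract.csv")  # placeholder source file

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

# auto_create_table requires a recent connector version.
success, nchunks, nrows, _ = write_pandas(conn, df, "DAILY_EXTRACT", auto_create_table=True)
print(f"loaded {nrows} rows in {nchunks} chunk(s): {success}")
conn.close()
</code></pre>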
We are looking for an experienced AWS/Databricks Engineer to join our team in Houston, Texas. This is a long-term contract position ideal for professionals with a strong background in data engineering and cloud technologies. The role will focus on leveraging Python and Databricks to optimize data processes and enhance system performance.<br><br>Responsibilities:<br>• Develop and implement scalable data engineering solutions using Python and Databricks.<br>• Collaborate with cross-functional teams to design and optimize data workflows.<br>• Migrate and enhance existing Python scripts to Databricks for improved functionality.<br>• Utilize cloud technologies to support data integration and analytics processes.<br>• Implement algorithms and data visualization methods to present actionable insights.<br>• Design and maintain APIs to streamline data interactions and integrations.<br>• Work with tools like Apache Kafka, Spark, and Hadoop to manage large-scale data systems.<br>• Perform data analysis and develop strategies to improve system efficiency.<br>• Ensure high-quality data pipelines and address performance bottlenecks.<br>• Stay updated on emerging trends in data engineering and recommend innovative solutions.
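<p>To illustrate the script-migration work above, here is the general shape of rewriting a single-machine pandas transform against the Spark DataFrame API on Databricks so it scales. It assumes the notebook-provided spark session; paths and table names are hypothetical.</p><pre><code># Sketch: pandas-to-Databricks migration, assuming a notebook `spark` session.
from pyspark.sql import functions as F

# pandas original (illustrative):
#   df = pd.read_csv("sensors.csv")
#   hourly = df.groupby(["site", "hour"])["reading"].mean().reset_index()

sensors = spark.read.option("header", True).csv("dbfs:/raw/sensors/")

hourly = (
    sensors
    .withColumn("reading", F.col("reading").cast("double"))
    .groupBy("site", "hour")
    .agg(F.avg("reading").alias("avg_reading"))
)

hourly.write.format("delta").mode("overwrite").saveAsTable("analytics.sensor_hourly")
</code></pre>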
We are looking for an experienced Data Architect to design and implement cutting-edge data solutions that meet the evolving needs of our enterprise. This role involves building secure, scalable, and high-performing data platforms while leveraging modern technologies and aligning with organizational goals. The ideal candidate will have expertise in cloud-based architecture, data governance, and advanced analytics, driving innovation across diverse business functions.<br><br>Responsibilities:<br>• Develop comprehensive data architecture strategies for advanced analytics and big data solutions using Azure Databricks.<br>• Design and implement Databricks Delta Lake-based Lakehouse architecture, utilizing PySpark Jobs, Databricks Workflows, Unity Catalog, and Medallion architecture.<br>• Optimize and configure Databricks clusters, notebooks, and workflows to ensure efficiency and scalability.<br>• Integrate Databricks with Azure services such as Azure Data Lake Storage, Azure Data Factory, Azure Key Vault, and Microsoft Fabric.<br>• Establish and enforce best practices for data governance, security, and cost management.<br>• Collaborate with data engineers, analysts, and business stakeholders to translate functional requirements into robust technical solutions.<br>• Provide technical mentoring and leadership to team members focused on Databricks and Azure technologies.<br>• Monitor, troubleshoot, and enhance data pipelines and workflows to maintain reliability and performance.<br>• Ensure compliance with organizational and regulatory standards regarding data security and privacy.<br>• Document configurations, processes, and governance standards to support long-term scalability and usability.
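<p>As a minimal sketch of one Medallion hop in the Delta Lake architecture described above, the code below promotes a bronze table to silver. The Unity Catalog three-part names and cleansing rules are assumptions, and it relies on the Databricks notebook spark session.</p><pre><code># Sketch: one bronze-to-silver hop in a Medallion lakehouse.
from pyspark.sql import functions as F

bronze = spark.read.table("main.bronze.orders_raw")  # placeholder catalog/table

silver = (
    bronze
    .dropDuplicates(["order_id"])
    .filter(F.col("order_ts").isNotNull())
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("_processed_at", F.current_timestamp())
)

silver.write.format("delta").mode("overwrite").saveAsTable("main.silver.orders")
</code></pre>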
<p>Robert Half is seeking an experienced Data Architect to design and lead scalable, secure, and high-performing enterprise data solutions. This role will focus on building next-generation cloud data platforms, driving adoption of modern analytics technologies, and ensuring alignment with governance and security standards.</p><p><br></p><p>You’ll serve as a hands-on technical leader, partnering closely with engineering, analytics, and business teams to architect data platforms that enable advanced analytics and AI/ML initiatives. This position blends deep technical expertise with strategic thinking to help unlock the value of data across the organization.</p><p><br></p><p><strong>Key Responsibilities:</strong></p><ul><li>Design and implement end-to-end data architecture for big data and advanced analytics platforms.</li><li>Architect and build Delta Lake–based lakehouse environments from the ground up, including DLT pipelines, PySpark jobs, workflows, Unity Catalog, and Medallion architecture.</li><li>Develop scalable data models that meet performance, security, and governance requirements.</li><li>Configure and optimize clusters, notebooks, and workflows to support ETL/ELT pipelines.</li><li>Integrate cloud data platforms with supporting services such as data storage, orchestration, secrets management, and analytics tools.</li><li>Establish and enforce best practices for data governance, security, and cost optimization.</li><li>Collaborate with data engineers, analysts, and stakeholders to translate business requirements into technical solutions.</li><li>Provide technical leadership and mentorship to team members.</li><li>Monitor, troubleshoot, and optimize data pipelines to ensure reliability and efficiency.</li><li>Ensure compliance with organizational and regulatory standards related to data privacy and security.</li><li>Create and maintain documentation for architecture, processes, and governance standards.</li></ul>
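<p>For the DLT pipelines noted above, a Delta Live Tables definition with a data-quality expectation might look like the following sketch; the source path, table names, and expectation rule are illustrative.</p><pre><code># Sketch: a Delta Live Tables (DLT) pipeline with a quality expectation.
# Runs inside a Databricks DLT pipeline, where `spark` and `dlt` are available.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events landed from cloud storage")
def bronze_events():
    return spark.read.format("json").load("/Volumes/raw/events/")  # placeholder

@dlt.table(comment="Validated events for downstream marts")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def silver_events():
    return dlt.read("bronze_events").withColumn("_ingested_at", F.current_timestamp())
</code></pre>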
<p>We are seeking a talented and motivated Python Data Engineer to join our global team. In this role, you will be instrumental in expanding and optimizing our data assets to enhance analytical capabilities across the organization. You will collaborate closely with traders, analysts, researchers, and data scientists to gather requirements and deliver scalable data solutions that support critical business functions.</p><p><br></p><p>Responsibilities</p><ul><li>Develop modular and reusable Python components to connect external data sources with internal systems and databases.</li><li>Work directly with business stakeholders to translate analytical requirements into technical implementations.</li><li>Ensure the integrity and maintainability of the central Python codebase by adhering to existing design standards and best practices.</li><li>Maintain and improve the in-house Python ETL toolkit, contributing to the standardization and consolidation of data engineering workflows.</li><li>Partner with global team members to ensure efficient coordination and delivery.</li><li>Actively participate in the internal Python development community and support ongoing business development initiatives with technical expertise.</li></ul>
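<p>As a rough sketch of the modular, reusable components described above (not the team's actual toolkit), a shared extractor interface could look like this; the base class and REST source are hypothetical illustrations.</p><pre><code># Sketch: a small reusable connector component with a swappable interface.
from abc import ABC, abstractmethod
from typing import Optional

import pandas as pd
import requests

class Extractor(ABC):
    """Common interface so pipelines can swap data sources."""

    @abstractmethod
    def fetch(self) -> pd.DataFrame: ...

class RestApiExtractor(Extractor):
    def __init__(self, url: str, params: Optional[dict] = None):
        self.url = url
        self.params = params or {}

    def fetch(self) -> pd.DataFrame:
        resp = requests.get(self.url, params=self.params, timeout=30)
        resp.raise_for_status()
        return pd.DataFrame(resp.json())

# Usage: prices = RestApiExtractor("https://example.com/api/prices").fetch()
</code></pre>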
<p>Position Overview</p><p>We are seeking a Data Governance & Data Quality Platform Engineer to own the technical administration, integration, and optimization of enterprise data governance and data quality platforms (e.g., Atlan, Monte Carlo). This role ensures governance and quality tools are scalable, securely integrated into the enterprise data ecosystem, and maintained for high availability and performance.</p><p>The ideal candidate brings strong platform engineering skills, experience automating data quality and metadata workflows, and a solid understanding of governance, compliance, and modern data architectures.</p><p>Key Responsibilities</p><p><br></p><p>1. Platform Engineering & Administration</p><ul><li>Configure and maintain data governance platforms for metadata management, data lineage, and governance workflows</li><li>Configure data quality tools for profiling, rule creation, and monitoring dashboards</li><li>Manage platform security, including user roles, authentication, SSO, RBAC, and access controls</li></ul><p>2. Integration & Automation</p><ul><li>Develop and maintain integrations across data sources, databases, data lakes, and BI tools</li><li>Automate metadata ingestion and data quality checks using APIs, Python scripts, or ETL frameworks</li><li>Configure and maintain connectors for analytics and reporting platforms</li></ul><p>3. Performance, Reliability & Monitoring</p><ul><li>Monitor platform health and optimize performance and scalability</li><li>Apply upgrades, patches, and troubleshoot technical issues</li><li>Implement logging, alerting, and proactive monitoring for governance and data quality environments</li></ul><p>4. Technical Support & Issue Resolution</p><ul><li>Provide Tier 3 support for platform‑related incidents and escalations</li><li>Debug integration failures and resolve configuration conflicts</li><li>Collaborate with vendors for advanced troubleshooting and roadmap alignment</li></ul><p>5. Security, Compliance & Risk Management</p><ul><li>Ensure platforms comply with data privacy and security standards (e.g., GDPR, CCPA)</li><li>Implement encryption, audit logging, and access controls</li><li>Support compliance reporting and risk assessments using governance and data quality metrics</li></ul>
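<p>To illustrate the metadata-automation responsibility above, here is a generic sketch of pushing asset metadata to a governance platform over REST. The endpoint, token variable, and payload schema are entirely hypothetical; real platforms such as Atlan and Monte Carlo each expose their own SDKs and schemas.</p><pre><code># Sketch: registering a table with a governance platform via a generic REST API.
import os
import requests

GOVERNANCE_API = "https://governance.example.com/api/v1/assets"  # placeholder

def register_table(database: str, table: str, owner: str) -> None:
    payload = {
        "qualified_name": f"{database}.{table}",
        "owner": owner,
        "tags": ["pii-reviewed"],  # illustrative tag
    }
    resp = requests.post(
        GOVERNANCE_API,
        json=payload,
        headers={"Authorization": f"Bearer {os.environ['GOV_API_TOKEN']}"},
        timeout=30,
    )
    resp.raise_for_status()

register_table("analytics", "orders", owner="data-platform@example.com")
</code></pre>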
<p>We’re seeking a <strong>BI Engineer</strong> who can design and optimize end‑to‑end analytics solutions leveraging <strong>Power BI</strong> and <strong>Microsoft Fabric</strong>. This role blends engineering, modeling, and performance optimization to support scalable reporting environments.</p><p><strong>What You’ll Do</strong></p><ul><li>Architect and maintain scalable BI solutions across Power BI and Fabric</li><li>Create and optimize semantic models, dataflows, and pipelines</li><li>Implement real‑time or near‑real‑time reporting frameworks</li><li>Build workspace structures, governance standards, and deployment processes</li><li>Partner with data engineering teams to ensure structured, high‑quality data</li><li>Troubleshoot data refresh, gateway, and performance issues</li></ul><p><br></p>
We are looking for an experienced Power BI Business Intelligence Engineer to join our team in Niceville, Florida. In this role, you will play a vital part in managing and enhancing our reporting and business intelligence platforms to provide actionable insights. Your expertise will drive data analysis, dashboard creation, and the development of solutions that support key business decisions.<br><br>Responsibilities:<br>• Oversee the company's reporting and business intelligence systems to ensure optimal performance and accuracy.<br>• Develop a deep understanding of the organization's business models, operations, and decision-making processes.<br>• Analyze data architecture and gather requirements from stakeholders to create tailored solutions.<br>• Build and manage data sources, models, and integrations for reporting and analytics purposes.<br>• Design and maintain dashboards and reports using enterprise business intelligence tools.<br>• Facilitate seamless data integration processes to retrieve, transform, and analyze datasets.<br>• Support leadership in creating management information and KPIs to drive data-driven decision-making.<br>• Ensure data quality and integrity across all business intelligence deliverables.<br>• Stay updated with the latest advancements in BI technologies, tools, and practices to recommend improvements.<br>• Document systems and processes comprehensively while adhering to governance, security, and privacy standards.
<p>We are looking for a Director of Artificial Intelligence (AI) to drive innovation and implementation efforts within our organization. This role blends hands-on technical expertise with leadership responsibilities, offering the opportunity to shape and develop cutting-edge AI capabilities. You will collaborate with talented engineers and developers to create impactful AI solutions.</p><p><br></p><p>Responsibilities:</p><p>• Lead AI initiatives by setting roadmaps and strategic plans for the organization's AI platform.</p><p>• Manage and mentor a team of AI Engineers, supporting their growth and ensuring alignment with organizational goals.</p><p>• Design, develop, and implement advanced AI and machine learning models to address business needs.</p><p>• Collaborate closely with engineers and developers to integrate AI solutions into production systems.</p><p>• Stay updated on the latest advancements in AI technology and tools, applying them to enhance organizational capabilities.</p><p>• Provide hands-on expertise in Python programming and other relevant technologies.</p><p>• Build and maintain robust AI/ML infrastructures that support scalable solutions.</p><p>• Ensure the successful deployment and implementation of AI tools and frameworks across various projects.</p><p>• Leverage your background in data engineering, architecture, or science to optimize AI systems.</p><p>• Drive technical excellence and innovation in the development of AI-driven solutions.</p>
Join our team as a Business Intelligence Software Engineer and help design, build, and maintain innovative reporting and data-driven applications that power field operations, business units, and customer solutions. This is a hands-on coding role that requires strong technical judgment and collaboration with cross-functional teams. You’ll manage the entire development lifecycle, ensuring solutions are scalable, reliable, and aligned with business priorities.<br><br>Key Responsibilities:<br>• Lead the Software Development Lifecycle (SDLC): Oversee all phases of BI application development, from concept through deployment and support.<br>• Hands-on Development: Build and maintain applications using Python (PySpark), SQL, and TypeScript/JavaScript.<br>• Technical Strategy & Architecture: Apply best practices for design, performance, and scalability.<br>• Quality Assurance: Establish testing frameworks, conduct code reviews, and maintain bug-tracking processes.<br>• Continuous Improvement: Identify and implement tools and methodologies to streamline development and increase system reliability.<br>• Collaboration: Work with internal stakeholders, data scientists, analysts, and operations teams to translate business needs into software solutions.<br>• Support & Maintenance: Provide ongoing support for newly developed applications, ensuring smooth integration with existing systems.
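<p>As a small example of the quality-assurance responsibility above, a pytest-style unit test around a BI transformation might look like this; the transform and data are illustrative only.</p><pre><code># Sketch: a pytest unit test guarding a BI aggregation.
import pandas as pd

def rollup_by_region(df: pd.DataFrame) -> pd.DataFrame:
    return df.groupby("region", as_index=False)["sales"].sum()

def test_rollup_by_region_sums_per_region():
    df = pd.DataFrame(
        {"region": ["east", "east", "west"], "sales": [100.0, 50.0, 25.0]}
    )
    out = rollup_by_region(df).set_index("region")["sales"]
    assert out["east"] == 150.0
    assert out["west"] == 25.0
</code></pre>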
We are looking for an experienced Business Intelligence (BI) Engineer to join our team in Englewood, Colorado. In this role, you will focus on developing and optimizing dashboards using Amazon QuickSight to support analytics initiatives across multiple projects. This position requires strong technical expertise, excellent communication skills, and a creative approach to dashboard design and data storytelling. This is a long-term contract opportunity within the telecom services industry.<br><br>Responsibilities:<br>• Develop enterprise-level dashboards using Amazon QuickSight to deliver actionable insights.<br>• Collaborate with stakeholders to gather and translate business requirements into reporting specifications and prototypes.<br>• Design and prototype interactive dashboards, ensuring clarity, usability, and impactful storytelling.<br>• Work alongside Data Engineering teams to define required datasets and support semantic layer needs.<br>• Utilize QuickSight Q and AI features to enhance data interpretation and guide strategic decision-making.<br>• Create calculated fields, parameters, and variables to optimize reporting functionalities.<br>• Ensure dashboards integrate both qualitative and quantitative data for comprehensive analysis.<br>• Conduct user experience (UX) design for dashboards, focusing on layout and accessibility.<br>• Support government projects requiring eligibility for federal security clearance and strict vetting processes.<br>• Communicate effectively with cross-functional teams to ensure alignment and successful project execution.
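<p>For the QuickSight work described above, a short boto3 sketch such as the following can inventory existing dashboards before prototyping new ones; the account ID and region are placeholders.</p><pre><code># Sketch: listing QuickSight dashboards with boto3 for an audit pass.
import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")

paginator = quicksight.get_paginator("list_dashboards")
for page in paginator.paginate(AwsAccountId="123456789012"):  # placeholder account
    for dash in page["DashboardSummaryList"]:
        print(dash["Name"], dash["DashboardId"])
</code></pre>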
<p>Architect and lead the modern data platform using <strong>Microsoft Fabric</strong>. You’ll define data models, pipelines, governance, and performance patterns that power analytics at scale.</p><p><strong>What You’ll Do</strong></p><ul><li>Design end‑to‑end architectures leveraging Fabric (OneLake, Lakehouses, Warehouses)</li><li>Define medallion/layered models, dimensional designs, and semantic layers</li><li>Lead ingestion and transformation with Dataflows Gen2 / Data Factory / Notebooks</li><li>Establish governance (data quality, lineage, security, RLS/OLS)</li><li>Optimize performance and cost; standardize reusable patterns</li><li>Mentor data engineers/analysts; review solutions and set best practices</li><li>Partner with business to translate use cases into scalable models</li></ul><p><br></p>
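<p>As one hedged example of the notebook-based ingestion mentioned above, a Fabric notebook cell might land a file into a Lakehouse Delta table as sketched below. It assumes the notebook's built-in spark session and an attached Lakehouse; the file path and table name are placeholders.</p><pre><code># Sketch: Fabric notebook cell loading a landed file into a Lakehouse table.
from pyspark.sql import functions as F

# "Files/" resolves against the Lakehouse attached to the notebook.
raw = spark.read.option("header", True).csv("Files/landing/customers.csv")

bronze = raw.withColumn("_loaded_at", F.current_timestamp())

# saveAsTable writes a Delta table into the Lakehouse "Tables" area.
bronze.write.mode("overwrite").format("delta").saveAsTable("bronze_customers")
</code></pre>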