We are looking for an experienced Data Engineer to join our team in Cincinnati, Ohio. This long-term contract position offers the opportunity to work on cutting-edge data engineering projects while collaborating with multidisciplinary teams to deliver high-quality solutions. The ideal candidate will have a strong background in Databricks and big data technologies, along with a passion for optimizing data processes and systems.<br><br>Responsibilities:<br>• Design, build, and enhance data pipelines using Databricks Runtime, Delta Lake, Auto Loader, and Structured Streaming.<br>• Implement secure, governed data access using Unity Catalog, workspace controls, and audit configurations.<br>• Manage and integrate structured and unstructured data from diverse sources, including APIs and cloud storage.<br>• Develop and maintain notebook-based workflows and orchestrate jobs using Databricks Workflows.<br>• Apply best practices for performance tuning, scalability, and cost optimization in Databricks environments.<br>• Collaborate with data scientists, analysts, and business stakeholders to deliver clean, reliable datasets.<br>• Support continuous integration and deployment (CI/CD) processes for Databricks jobs and system configurations.<br>• Ensure high standards of data quality and security across all engineering tasks.<br>• Troubleshoot and resolve pipeline issues to maintain operational efficiency.
We are looking for a skilled AI/Data Governance Analyst to join our team on a long-term contract basis. In this role, you will play a pivotal part in designing and optimizing data pipelines, implementing governance frameworks, and supporting secure data access. This position is based in Cincinnati, Ohio, and is an excellent opportunity for professionals seeking to work with cutting-edge data technologies.<br><br>Responsibilities:<br>• Design and develop efficient data pipelines using Databricks Runtime, Delta Lake, and Structured Streaming.<br>• Establish secure data access and governance protocols using Unity Catalog and workspace controls.<br>• Integrate and manage data from diverse sources, including APIs and cloud storage platforms.<br>• Create and maintain workflows using Databricks Workflows and Jobs to streamline processing.<br>• Implement cost-effective solutions that deliver high performance and scalability on Databricks.<br>• Collaborate with data scientists and analysts to deliver clean, reliable datasets.<br>• Support continuous integration and deployment (CI/CD) processes for Databricks configurations.<br>• Optimize big data performance and scalability through techniques such as data partitioning, caching, and query tuning.<br>• Provide technical guidance and troubleshoot issues in Databricks environments.
We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This position offers the opportunity to work remotely while contributing to critical data management and integration efforts. The ideal candidate will have hands-on experience with customer master data in ECC6 and the ability to create, maintain, and manage that data effectively.<br><br>Responsibilities:<br>• Develop and maintain customer master data within ECC6, ensuring data accuracy and consistency.<br>• Create new customer profiles and manage existing ones, maintaining high standards of data integrity.<br>• Support the integration process by working with custom tables related to customer data.<br>• Collaborate with cross-functional teams to ensure seamless data flow and effective data management.<br>• Use Apache Spark and Python to build ETL processes that extract, transform, and load data efficiently.<br>• Leverage Apache Hadoop for scalable data storage and processing.<br>• Implement Apache Kafka to enable real-time data streaming and integration.<br>• Troubleshoot and resolve data-related issues to ensure system reliability.<br>• Provide documentation and training to stakeholders on data management processes.<br>• Stay current with industry best practices and emerging technologies to enhance data engineering workflows.