We are looking for a skilled Data Engineer to join our team on a long-term contract basis. This role is based in West Des Moines, Iowa, and offers the opportunity to work on advanced data solutions that support organizational decision-making and efficiency. The ideal candidate will have expertise in relational databases, data cleansing, and modern data warehousing technologies.

Responsibilities:
• Develop, maintain, and optimize data pipelines to support business operations and analytics (a brief sketch of this kind of work follows the list).
• Perform data extraction, transformation, and cleansing to ensure accuracy and reliability.
• Collaborate with teams to design and implement data warehouses and data lakes.
• Utilize Microsoft SQL Server to build and manage relational database structures.
• Analyze data sources and recommend improvements to data quality and accessibility.
• Create and maintain documentation for data processes, pipelines, and system architecture.
• Implement best practices for data storage and retrieval to maximize efficiency.
• Troubleshoot and resolve issues related to data processing and integration.
• Stay current on industry trends and emerging technologies to enhance data engineering solutions.
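By way of illustration only, here is a minimal Python ETL sketch of the pipeline and cleansing work described above: extract from a SQL Server staging table, cleanse with pandas, and load the result back. The driver string, database, and table names (staging.customers, dbo.customers_clean) are hypothetical placeholders, and pyodbc and pandas are assumed to be available; this is a sketch under those assumptions, not a prescribed implementation.

# Minimal ETL sketch: extract from SQL Server, cleanse, reload.
# Assumptions (hypothetical): the "ODBC Driver 17 for SQL Server" driver,
# a database named "analytics", and tables staging.customers / dbo.customers_clean.
import pandas as pd
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=analytics;Trusted_Connection=yes;"
)

# Extract: pull raw rows from a staging table.
df = pd.read_sql(
    "SELECT customer_id, email, signup_date FROM staging.customers", conn
)

# Cleanse: normalize emails, drop rows missing the key, remove duplicates.
df["email"] = df["email"].str.strip().str.lower()
df = df.dropna(subset=["customer_id"]).drop_duplicates(subset=["customer_id"])

# Load: batch-insert the cleansed rows back into the warehouse table.
cursor = conn.cursor()
cursor.fast_executemany = True  # pyodbc batch mode for faster inserts
cursor.executemany(
    "INSERT INTO dbo.customers_clean (customer_id, email, signup_date) "
    "VALUES (?, ?, ?)",
    list(df.itertuples(index=False, name=None)),
)
conn.commit()
conn.close()

The fast_executemany flag is one common pyodbc optimization for bulk loads; for very large volumes a BULK INSERT or staged file load would typically replace row-wise inserts.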
We are looking for an experienced Senior Data Engineer with a strong background in Python and modern data engineering tools to join our team in West Des Moines, Iowa. This is a long-term contract position that requires expertise in designing, building, and optimizing data pipelines and working with cloud-based data warehouses. If you thrive in a collaborative environment and have a passion for transforming raw data into actionable insights, we encourage you to apply.

Responsibilities:
• Develop, debug, and optimize Python-based data pipelines and the services around them, including APIs built with frameworks such as Flask, Django, or FastAPI.
• Design and implement data transformations in a data warehouse using tools like dbt, ensuring high-quality, analytics-ready datasets.
• Utilize Amazon Redshift and Snowflake for large-scale data storage, advanced querying, and query optimization.
• Automate data integration using platforms like Fivetran and orchestration tools such as Prefect or Airflow (see the sketch after this list).
• Build reusable, maintainable data models to improve performance and scalability for analytics and reporting.
• Conduct data analysis with Python libraries such as NumPy and Pandas, and support machine learning workloads with TensorFlow and PyTorch.
• Manage version control for data engineering projects using Git and GitHub.
• Ensure data quality through automated testing and validation.
• Document workflows, code, and data transformations following best practices for readability and maintainability.
• Optimize cloud-based data warehouse and data lake platforms for performance and for the integration of new data sources.
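As a hedged illustration of the orchestration work described in the list above, here is a minimal Prefect flow, assuming Prefect 2.x and pandas are installed; every task name and the inline sample data are hypothetical placeholders standing in for a Fivetran-landed extract and a Redshift or Snowflake load.

# Minimal orchestration sketch with Prefect (assumed: Prefect 2.x).
# All names and data below are hypothetical placeholders.
import pandas as pd
from prefect import flow, task

@task(retries=2, retry_delay_seconds=60)
def extract_orders() -> pd.DataFrame:
    # Placeholder for a Fivetran-landed table or API extract.
    return pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, None, 20.0]})

@task
def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Deduplicate and drop rows missing an amount: analytics-ready output.
    return df.drop_duplicates(subset=["order_id"]).dropna(subset=["amount"])

@task
def load(df: pd.DataFrame) -> None:
    # Placeholder for a Redshift/Snowflake load (e.g., COPY from staged files).
    print(f"loading {len(df)} rows")

@flow(log_prints=True)
def orders_pipeline():
    load(transform(extract_orders()))

if __name__ == "__main__":
    orders_pipeline()

Running the module executes the flow locally; in practice the same flow could be scheduled through a Prefect deployment, with retries and logging handled by the task decorators rather than hand-rolled error handling.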