Data Engineer
                    
					
**Location:** Tokyo
**JO:** 06940-0013315515
**Annual Salary:** JPY 7,904,000

#### Department Overview:

The Data Lakehouse Section is responsible for providing scalable Lakehouse platform solutions to support the business and operations of the organization. Our mission is to accelerate the use of data by providing flexible platforms that can host a wide range of data solutions.

#### Position Overview:

This position focuses on developing and managing data warehouse solutions on an Azure-based Lakehouse platform. The role involves understanding business requirements, implementing data solutions, and supporting daily operations of the platform.

#### Key Responsibilities:

- Develop and manage data warehouse solutions for business operations using Databricks and other relevant tools.
- Implement business requirements on the platform and ensure smooth daily operations.
- Design, implement, and optimize data pipelines and workflows.
- Collaborate with internal teams to troubleshoot and enhance platform performance.
- Support continuous improvement initiatives for the data infrastructure.

#### Required Skills:

- 3+ years of development experience, preferably with Python.
- Experience with public cloud platforms (e.g., GCP, AWS, Azure).
- Proficiency in Databricks or similar big data processing platforms.
- Ability to write clean, maintainable, production-level code that is easy to understand.
- Experience with common machine learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., scikit-learn, NumPy, pandas).

#### Preferred Skills:

- Experience deploying and training machine learning models, and with MLOps tools (e.g., MLflow, Seldon).
- Experience with RESTful APIs and batch processing applications.
- Familiarity with the full software development lifecycle, including agile development practices and task management tools such as JIRA.
- Experience with microservice development and operations using Docker and Kubernetes.
- Knowledge of big data technologies such as Spark, Kafka, and Delta Lake.
- Experience with data modeling and with observability of data and models.
- Experience with production engineering practices such as CI/CD.
- Performance profiling and tuning expertise.
- Experience developing distributed, scalable, high-availability applications.
- Understanding of GPU-based applications, machine learning, and deep learning.
- Experience with orchestration tools such as Airflow and Kubeflow.

#### Personal Qualities:

- Strong problem-solving skills and analytical thinking.
- Ability to collaborate effectively in a team environment.
- Eagerness to learn and apply new technologies.
- Goal-oriented and results-driven, with a focus on achieving success.

*By clicking 'apply', you give your express consent that Robert Half may use your personal information to process your job application and to contact you from time to time about future employment opportunities. For further information on how Robert Half processes your personal information and how to access and correct your information, please read the Robert Half privacy notice at https://www.roberthalf.com/jp/en/privacy. Please do not include any sensitive personal data in your resume (such as race, beliefs, social status, medical history, or criminal record), as we do not collect sensitive personal data at this time.*

---

By clicking the 'Apply Now' button, you expressly consent to Robert Half ("we") using your personal information to process your application and to contact you from time to time about job opportunities. For details on how we process personal information and on how to access and correct your own information, please read our privacy notice at https://www.roberthalf.com/jp/ja/privacy. As we do not collect sensitive personal information, please do not include information requiring special care, such as race, beliefs, social status, medical history, or criminal record, in the work history or resume you submit.
- Tokyo
- Remote
- Contract
- 7.9M JPY / Yearly
- 2025-10-31T03:07:20Z