Description

We are seeking an experienced ETL Engineer to join our team. The ideal candidate will have 8 to 10 years of experience designing, developing, and optimizing ETL processes, with strong expertise in Hadoop, Spark (including PySpark), and Dask. The role involves setting up and managing data workflows, ensuring data integrity and efficiency, and collaborating with cross-functional teams to meet business needs. If you are passionate about data engineering and have a track record of delivering high-quality ETL solutions, we encourage you to apply.

Good-to-Have Skills: 
• Cloud Platforms: Familiarity with cloud-based data platforms for deploying and managing big data solutions. 
• Data Visualization: Experience with data visualization tools (e.g., Tableau, Power BI) for creating insightful visualizations and reports. 
• Data Engineering Tools: Knowledge of additional data engineering tools and frameworks, including ETL and data integration technologies. 
• Agile Methodologies: Experience with Agile development practices and methodologies for managing data projects and tasks.

Qualifications: 
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, Software Engineering, or a related field. 
• 8 to 10 years of experience in data engineering, with strong expertise in ETL processes, Hadoop, Spark (including PySpark), and Dask.
• Proven experience setting up and managing JupyterHub environments.

Education

Bachelor’s degree in Computer Science