Description

  • Develop and maintain Python scripts to automate data ingestion and processing tasks for the Analytical Data Warehouse, leveraging Snowflake and Databricks.
  • Collaborate with stakeholders to understand data requirements and design database schemas accordingly.
  • Use SQL to write efficient queries, perform data manipulation, and optimize database performance.
  • Implement ETL processes to load data into the data warehouse, ensuring data quality and integrity.
  • Troubleshoot and resolve issues related to data pipelines, ETL processes, and database performance.
  • Monitor and optimize data pipelines for efficiency and scalability.
  • Collaborate with cross-functional teams to integrate data from various sources into the data warehouse.
  • Provide technical expertise and guidance to junior members of the team.
  • Stay up-to-date with industry best practices and emerging technologies in data engineering.
  • Participate in Agile development processes, including sprint planning, daily stand-ups, and retrospectives.
  • Contribute to the design and architecture of the data warehouse infrastructure.

Qualifications

  • At least 1 year of hands-on experience with Scala.
  • Ability to work within deadlines and effectively prioritize and execute tasks.
  • Strong communication skills (verbal and written), with the ability to communicate across teams, internal and external, at all levels.
  • Experience in driving automation.
  • DevOps knowledge is an added advantage.

Education

Any graduate degree