Job Description
Mandatory Skills: Microsoft Azure, PySpark, Databricks, Hadoop, Spark, Airflow, Kafka

Requirements:

Experience working with distributed technology tools for developing batch and streaming pipelines, including:

SQL, Spark, Python

Airflow

Scala

Kafka

Experience with cloud computing platforms such as AWS, GCP, or Azure.

Able to quickly pick up new programming languages, technologies, and frameworks.

Strong skills in building positive relationships across Product and Engineering.

Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders.

Experience creating and configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.

Working knowledge of data warehousing, data modeling, governance, and data architecture.

Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components).

Experience working in Agile and Scrum development processes.

Experience with EMR, EC2, Databricks, etc.

Education

Any Graduate