Description

The ideal candidate will have a strong technical background in Hadoop, Spark, cloud technologies, Scala, and streaming frameworks such as Kafka. You will develop, enhance, and maintain scalable data pipelines and infrastructure for high-volume data processing.

Responsibilities:

Design and implement large-scale data processing systems using Hadoop and Spark.

Develop and maintain real-time streaming applications with Kafka.

Write high-quality, maintainable code in Scala.

Collaborate with cross-functional teams to design and deploy cloud-based solutions.

Monitor and optimize the performance of data pipelines and systems.

Ensure data security and compliance with company standards.

Required Skills:

Expertise in Hadoop and Spark for data engineering tasks.

Strong proficiency in Scala programming.

Hands-on experience with Kafka and streaming data processing.

Experience with cloud platforms (AWS, GCP, Azure, etc.).

Excellent problem-solving and communication skills.

Education

Any graduate degree.