Description

Required Skills & Qualifications:

  • 5+ years of experience working with Apache Spark (including Structured Streaming).
  • Strong proficiency in Scala, with the ability to write clean, performant code.
  • 3+ years of experience with Apache Kafka for real-time streaming and messaging.
  • Solid understanding of data modeling, ETL, and data pipeline orchestration.
  • Familiarity with the Hadoop ecosystem (Hive, Parquet, or similar).
  • Experience with CI/CD, Docker, and Kubernetes is a plus.
  • Exposure to cloud platforms like AWS (EMR, MSK), Azure, or GCP is preferred.
  • Excellent problem-solving, communication, and analytical skills.

Education

Any Graduate