Description

Key Skills: Scala, Spark, Kafka

Roles and Responsibilities:

  • Design, develop, and maintain scalable data processing applications using Spark and Scala
  • Collaborate with data engineers and analysts to understand data requirements and deliver solutions
  • Implement data ingestion and processing pipelines using Kafka
  • Optimize existing data processing workflows for performance and efficiency
  • Troubleshoot and resolve issues in data processing applications
  • Stay updated with the latest industry trends and technologies in Big Data

Skills Required:

  • Strong experience in Scala programming
  • Expertise in Apache Spark for distributed data processing
  • Working knowledge of Kafka for real-time data streaming (nice to have)
  • Familiarity with Big Data ecosystems and tools
  • Ability to optimize and troubleshoot complex data pipelines
  • Strong problem-solving and debugging skills
  • Willingness to relocate to Pune or join within 30 days

Education: Bachelor's degree in Computer Science, Engineering, or a related field
