Description

Key Skills: Hadoop, Spark, Python, SQL, Scala, HDFS, Hive, Kafka, HBase

Roles & Responsibilities:

  • Design, develop, and maintain large-scale data processing systems using Hadoop and Spark.
  • Write optimized Spark jobs using Scala to ensure efficient data processing (an illustrative sketch follows this list).
  • Implement and manage data pipelines and workflows using HDFS, Hive, Kafka, and HBase.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions.
  • Ensure high performance and responsiveness of applications by optimizing data processing tasks.
  • Stay updated with the latest trends and technologies in big data and distributed computing.
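
For illustration only, a minimal sketch of the kind of Spark job mentioned above, written in Scala. The input and output paths, column names, and aggregation are hypothetical and assume event data has already landed in HDFS as Parquet; this is not a prescribed implementation for the role.

    import org.apache.spark.sql.SparkSession

    // Minimal Spark job: read raw events from HDFS, aggregate per day and type,
    // and write the result back as partitioned Parquet.
    object DailyEventCounts {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-event-counts")
          .enableHiveSupport()          // optional: lets the job read/write Hive tables
          .getOrCreate()

        import spark.implicits._

        // Hypothetical input path for raw event data.
        val events = spark.read.parquet("hdfs:///data/raw/events")

        val counts = events
          .groupBy($"event_date", $"event_type")
          .count()

        counts.write
          .mode("overwrite")
          .partitionBy("event_date")    // partition pruning keeps downstream reads efficient
          .parquet("hdfs:///data/curated/event_counts")   // hypothetical output path

        spark.stop()
      }
    }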

Experience Requirement:

  • 5-8 years of experience working with Hadoop and Spark for data engineering solutions.
  • Strong expertise in developing Spark jobs using Scala and optimizing performance in distributed environments.
  • Experience in building and maintaining data pipelines involving HDFS, Hive, Kafka, and HBase.
  • Proficient in implementing best practices for big data application development and system scalability.
  • Ability to work collaboratively with data engineers, analysts, and product teams to meet business goals.

Education: Any Graduate
