Key Skills: Spark, Scala, Flink, Big Data, Structured Streaming, Data Architecture, Data Modeling, NoSQL, AWS, Azure, GCP, JVM tuning, Performance Optimization.
Roles & Responsibilities:
- Design and build robust data architectures for large-scale data processing.
- Develop and maintain data models and database designs.
- Build and operate streaming pipelines on engines such as Spark Structured Streaming and Flink (a minimal sketch follows this list).
- Perform analytical processing on Big Data using Spark.
- Administer, configure, monitor, and tune performance of Spark workloads and distributed JVM-based systems.
- Lead and support cloud deployments across AWS, Azure, or Google Cloud Platform.
- Manage and deploy Big Data technologies such as business data lakes and NoSQL databases.
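For context on the Structured Streaming responsibility above, here is a minimal Scala sketch of a streaming word count on Spark Structured Streaming. It is illustrative only, not part of the role description; the local master, socket source, and host/port values are assumptions chosen to keep the example self-contained.

```scala
import org.apache.spark.sql.SparkSession

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; a production job would run on a cluster.
    val spark = SparkSession.builder()
      .appName("StreamingWordCount")
      .master("local[*]")
      .getOrCreate()

    import spark.implicits._

    // Read a text stream from a socket (host/port are placeholder values).
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Split each line into words and maintain a running count per word.
    val wordCounts = lines.as[String]
      .flatMap(_.split("\\s+"))
      .groupBy("value")
      .count()

    // Emit the running counts to the console in complete output mode.
    val query = wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

The same word-count logic can be pointed at a Kafka source and a durable sink in production; the socket source and console sink here simply keep the sketch runnable on a laptop.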
Experience Requirements:
- Extensive experience working with large data sets and Big Data technologies.
- 4-6 years of hands-on experience with the Spark/Big Data tech stack.
- At least 4 years of experience in Scala.
- At least 2 years of experience in cloud deployment (AWS, Azure, or GCP).
- Successfully completed at least 2 product deployments involving Big Data technologies.
Education: B.Tech, or dual-degree B.Tech + M.Tech.