Description

Qualifications :

· 8+ years of software development experience in Big Data technologies (Spark/Hive/Hadoop)

· Experience working with Hadoop distributions, with a good understanding of core concepts and best practices

· Strong experience building and tuning Spark pipelines in Scala/Python

· Strong experience writing complex Hive queries to derive business-critical insights

· Good programming experience with Java/Python/Scala

· Experience with AWS Cloud; exposure to Lambda/EMR/Kinesis is a plus

· Experience with NoSQL technologies - MongoDB, DynamoDB

 

Roles and Responsibilities :

· Design and implement solutions to problems arising from large-scale data processing

· Attend and drive architectural, design, and status calls with multiple stakeholders

· Take end-to-end ownership of all assigned tasks

· Design, build, and maintain efficient, reusable, and reliable code

· Test implementations, troubleshoot, and correct problems

· Work effectively both as an individual contributor and within a team

· Ensure high-quality software development with complete documentation and traceability

· Fulfil organizational responsibilities, such as sharing knowledge and experience with other teams and groups

· Conduct technical trainings and sessions; write whitepapers, case studies, blogs, etc.

 

Education

Any Graduate