Key Skills: Hadoop, Scala, PySpark, Functional Programming, SQL, sbt (Scala Build Tool), Linux
Roles & Responsibilities:
- Develop and maintain scalable data processing applications using Hadoop and Scala.
- Apply functional programming techniques to improve code correctness, efficiency, and maintainability.
- Collaborate with cross-functional teams to design and optimize data architectures.
- Utilize strong SQL skills to manage and query databases effectively.
- Apply analytical and debugging skills to troubleshoot and resolve issues in data processing workflows.
- Leverage sbt (Scala Build Tool) and Linux for application builds, deployment, and management.
- Stay updated with industry trends and best practices in big data technologies.
Experience Requirement:
- 5-8 years of experience developing on the Hadoop ecosystem using Scala.
- Deep understanding of functional programming principles and Scala-based application design.
- Strong experience in multithreading, memory management, and data structure optimization.
- Experience working in hybrid (on-premises and cloud) environments and collaborating with cross-functional teams to deliver scalable data solutions.
- Skilled in using Linux-based systems for development and deployment.
Education: Any graduate degree