Seeking a talented and experienced Hadoop and Spark Developer with strong Java expertise to join our data engineering team.
The ideal candidate will have a solid understanding of big data technologies, hands-on experience with the Hadoop ecosystem, and the ability to build and optimize data pipelines and processing systems using Spark and Java.
Key Responsibilities:
Develop, test, and deploy scalable big data solutions using Hadoop and Spark.
Write efficient and optimized code in Java to process large datasets.
Design and implement batch and real-time data processing pipelines using Spark.
Monitor, troubleshoot, and enhance the performance of Spark jobs.
Work closely with cross-functional teams to integrate big data solutions into existing systems.
Debug and resolve complex technical issues related to distributed computing.
Collaborate on system architecture and contribute to technical design discussions.
Required Skills:
Strong expertise in Java, with experience writing optimized, high-performance code.