Description

Key Skills: Java, Spark, Core Java

Roles and Responsibilities:

  • Lead the design and development of highly scalable, performant software platforms for data computation and processing.
  • Develop and implement solutions using core Java technologies, including Spring Boot and Microservices, ensuring alignment with object-oriented programming (OOP) concepts and design patterns.
  • Implement and optimize multithreading and multiprocessing techniques to process large-scale data efficiently.
  • Utilize Apache Spark in conjunction with Java to work with the Big Data ecosystem, applying best practices in data processing and transformation.
  • Contribute to the creation of data processing pipelines, utilizing Hadoop, YARN, Hive, Spark, and Spark SQL to manage high-volume data processing.
  • Write and maintain Unix shell scripts, and automate processes through Python/shell scripting.
  • Design and implement high-performance, fault-tolerant data systems while focusing on scalability, efficiency, and security.
  • Collaborate with cross-functional teams, ensuring the integration of new features with existing systems and maintaining high coding standards.
  • Perform thorough code reviews, mentoring junior developers to ensure adherence to coding standards and development best practices.
  • Manage source code using tools like Bitbucket and Git to maintain version control and streamline team workflows.
  • Troubleshoot complex issues in production systems, providing timely resolutions to ensure system uptime and reliability.

Skills Required:

  • Highly experienced and skilled Java technical lead with 5+ years of experience in software development and platform engineering.
  • Extensive development expertise in building highly scalable, performant software platforms for data computation and processing.
  • Expert-level knowledge of core Java concepts and frameworks such as Spring Boot and microservices; well versed in OOP concepts and design patterns.
  • Java expert with advanced skills in multithreading and multiprocessing, along with extensive experience in efficiently processing large-scale data.
  • Expertise and hands-on experience working with Apache Spark using Java, along with an understanding of the Big Data ecosystem and its design principles.
  • Hands-on experience with Unix and Python/shell scripting.
  • Good knowledge of Hadoop, YARN, Hive, Spark, and Spark SQL, with extensive experience developing high-volume data processing pipelines.
  • Strong computer science fundamentals in data structures, algorithms, databases, and operating systems.
  • Highly experienced with Unix-based operating systems and shell scripting.
  • Strong analytical and logical skills.
  • Hands-on experience in writing SQL queries.
  • Experience with source code management tools such as Bitbucket and Git.

Education: Bachelor's degree in a related field
