Description

Responsibilities

  • Provide application software development services, typically within a defined project.
  • Develop program logic for new applications, or analyze and modify logic in existing applications, based on the technical design and coding standards to deliver error-free output.
  • Code, test, debug, document, implement and maintain software applications.
  • Define comprehensive unit test cases based on the technical/functional design that verify the accuracy of developed features.
  • Execute the defined unit test cases to ensure error-free execution both on the desktop and after deployment to the development environment.

Required Skills

  • Extensive knowledge of ETL and Teradata/Cloudera.
  • Good exposure to the Hadoop ecosystem.
  • Advanced knowledge of Python.
  • Experience with job scheduling tools (e.g., Autosys) and version control tools such as Git.
  • Basic knowledge of mainframe systems; able to navigate jobs and code.
  • Quick learner and self-starter who requires minimal supervision to excel in a dynamic environment.
  • Strong verbal and written communication skills.
  • Strong analytical skills with high attention to detail and accuracy.
  • Strong organizational, multi-tasking, and prioritizing skills.

Required Experience

  • 5+ years in Hadoop ecosystem development.
  • 2+ years of development experience with Spark, Scala, and Java/Python.
  • 2+ years of UNIX development experience.
  • Minimum 1 year of hands-on experience with PySpark.
  • Prior experience working with globally distributed teams.

Education Requirements

  • Bachelor’s Degree in Computer Science, Computer Engineering or a closely related field.