Job Description

  • Experience level – 6 to 9 years
  • Experience with Apache Spark / Scala, Spark SQL, and related Spark ecosystem tools and libraries.
  • Hands-on development experience building Spark applications using Scala.
  • Knowledge of big data technologies such as Hadoop, HDFS, and distributed computing frameworks for large-scale data processing.
  • Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
  • Knowledge of or experience with Git/Bitbucket, Gradle, Jenkins, Jira, Confluence, or similar tools for building Continuous Integration/Continuous Delivery (CI/CD) pipelines.
  • Hands-on technical experience working in an agile environment.

Education

Any Graduate