Description

About the job


1. Experience with Apache Spark / Scala, Spark SQL, and related Spark ecosystem tools and libraries.

2. Knowledge of big data technologies such as Hadoop, HDFS, HBase, and distributed computing frameworks for large-scale data processing.

3. Hands-on Linux scripting experience.

4. Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.

5. Knowledge of or experience with Git/Bitbucket, Gradle, Jenkins, Jira, Confluence, or similar tools for building Continuous Integration/Continuous Delivery (CI/CD) pipelines.

6. Technical work experience in an agile environment.


Education

Any Graduate