Job Description

  1. Proficiency in Java, with a good understanding of its ecosystem.
  2. Experience in data engineering using HDFS, Spark, and Java; good knowledge of Hadoop architecture.
  3. Transforming and aggregating data from multiple sources.
  4. Good knowledge of Spark architecture, including Spark Core, Spark SQL, RDDs, Datasets, and DataFrames.
  5. Performance tuning using optimization techniques such as caching data in memory and broadcast hints for SQL queries.
  6. Familiarity with Azure/cloud DevOps concepts and CI/CD pipelines.
  7. Good knowledge of the architecture of a Spark application.
  8. Understanding of deployment pipeline concepts.
  9. Hands-on experience with Git Bash.
  10. Experience working with Scrum methodology; familiarity with its ceremonies and ways of working.
  11. Able to communicate with end clients by phone and via email, with good email etiquette.
  12. Understanding business needs, identifying the right data sources, and developing scalable and reliable data pipelines to ensure a smooth process.

Education

Any Graduate