Description

Responsibilities

  • Design and implement “Big Data” infrastructure for batch and real-time analytics.
  • Ensure highly interactive response times and prevent performance bottlenecks from creeping into the system.
  • Translate business use cases and feature requests into technical designs and development tasks.
  • Be an active player in system architecture and design discussions.

Required Skills

  • Knowledge of algorithms, data structures, computational complexity.
  • Proficient in Kafka Streams, PostgreSQL, Spark, Scala, and Java.
  • Familiar with project management methodologies such as Waterfall and Agile.
  • Effective communication, presentation, and organizational skills.

Required Experience

  • 4+ years of experience with Spark, Scala, and Java tech stacks.
  • 3+ years of experience with UNIX/Linux shell scripting.
  • 1+ years of experience with Groovy/Gradle is a plus.
  • Extensive experience architecting and engineering Big Data processing applications using Spark and Hive.

Education Requirements

  • Bachelor’s Degree in Computer Science, Computer Engineering, or a closely related field.
