Description

Requirements

  • At least 5 years of experience working with high-volume data infrastructure.
  • Experience with AWS and/or Databricks, ETL, and job orchestration tooling.
  • Extensive programming experience in Python or Java.
  • Experience with data modeling, SQL query optimization, and system performance tuning.
  • Proficiency with current open-source data frameworks and modern data platform stacks and tools.
  • Proficiency with SQL, AWS, databases, Apache Spark, Spark Streaming, EMR, and Kinesis/Kafka.
  • You enjoy wrangling messy, unstructured data into clean, usable, high-quality datasets.
  • You keep learning and stay up to speed with the fast-moving data world.
  • You have strong communication skills and can work independently.
  • BS in Computer Science, Software Engineering, Mathematics, or equivalent experience.

Education

BS in Computer Science, Software Engineering, Mathematics, or equivalent experience.