Description

Responsibilities:

  • Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions using full-stack development tools and technologies
  • Work with a team of developers with deep experience in machine learning, distributed microservices, and full stack systems
  • Use programming languages such as Java, Scala, and Python; open-source RDBMS and NoSQL databases; and cloud-based data warehousing services such as Redshift and Snowflake
  • Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal & external technology communities, and mentoring other members of the engineering community
  • Collaborate with digital product managers to deliver robust cloud-based solutions that drive powerful experiences and help millions of Americans achieve financial empowerment
  • Perform unit testing and conduct code reviews with other team members to make sure your code is rigorously designed, elegantly coded, and effectively tuned for performance


Requirements:

  • Bachelor’s Degree
  • At least 3 years of experience in application development (Internship experience does not apply)
  • At least 1 year of experience in big data technologies
  • 5+ years of experience in application development including Python, SQL, Scala, or Java
  • 2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud)
  • 3+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL)
  • 2+ years of experience working on real-time data and streaming applications
  • 2+ years of experience with NoSQL implementations (MongoDB, Cassandra)
  • 2+ years of data warehousing experience (Redshift or Snowflake)
  • 3+ years of experience with UNIX/Linux including basic commands and shell scripting
  • 2+ years of experience with Agile engineering practices

Project Details:

  • People Tech team, handling HR-related data
  • Migration of data sets from S3 to One Lake
  • Ingesting data from various vendors and loading it into One Lake / Snowflake
  • Moving external vendor applications into the Client's environment
  • Hands-on Sr. Data Engineer with strong Python, PySpark, and AWS (EMR & Glue) skills
  • Building data pipelines and data movement
  • 8+ years of experience is a must
  • Python experience is a must
  • PySpark experience is a must
  • AWS (EMR & Glue) experience is a must
  • Nice to have: prior experience with the Client is a plus

Education

Any Graduate