Description

Responsibilities

  • Design, deploy, manage, and operate scalable, highly available, and fault-tolerant systems on AWS.
  • Implement and control the flow of data to and from AWS.
  • Select the appropriate AWS service based on compute, data, and security requirements.
  • Identify appropriate use of AWS operational best practices.
  • Estimate AWS usage costs and identify operational cost control mechanisms.
  • Migrate on-premises workloads to AWS.
  • Troubleshoot and performance-tune AWS services.
  • Harden the security of AWS infrastructure.

Required Skills

  • AWS Solutions Architect Professional certification (preferred).
  • Must be able to code in either Java or Python.
  • Good communication skills and the ability to elicit requirements from business users.
  • Ability to translate requirements into an architecture and explain it to the development team.
  • Ability to set up connectivity between on-premises environments and AWS with appropriate security configurations.

Required Experience

  • Professional with 10+ years of overall Big Data experience.
  • 4+ years of AWS experience.
  • Hands-on experience implementing and performance-tuning Hadoop/Spark deployments.
  • Experience with one or more SQL-on-Hadoop technologies (Hive, Impala, Spark SQL, Presto, etc.).
  • Experience with one or more data warehouse technologies (SQL Server, Oracle, Teradata, Greenplum, etc.).
  • Experience developing software in one or more programming languages (Java, Python, etc.).
  • Experience with Apache Hadoop and the Hadoop ecosystem.

Education Requirements

  • Bachelor’s degree in Computer Science, Computer Engineering, or a closely related field.
