Job Description:

· Experience:
Proven experience (5+ years) in building and managing big data solutions on AWS.
Hands-on experience with AWS Big Data services such as Amazon S3, AWS Glue, Amazon Redshift, Amazon EMR, AWS Lambda, Amazon Kinesis, and AWS Data Pipeline.
Experience with Hadoop, Spark, Flink, and other big data processing technologies in the cloud.
Familiarity with data warehousing concepts and designing ETL pipelines.
Experience with data lake architectures and schema-on-read approaches.

· Technical Skills:
Strong proficiency in SQL and NoSQL databases.
Expertise in data processing and streaming frameworks such as Apache Hadoop, Spark, Flink, and Kafka.
Experience with scripting and programming languages such as Python, Shell, or Scala.
Familiarity with containerization technologies like Docker and Kubernetes.
Experience with AWS CloudFormation, Terraform, or similar Infrastructure as Code (IaC) tools.

Education

Any Graduate