Description

Key Responsibilities
* Design, develop, and maintain scalable, reliable, and secure data pipelines and ETL processes on AWS (see the sketch after this list).
* Collaborate with cross-functional teams to gather requirements, architect solutions, and implement data workflows.
* Work with big data technologies (e.g., Spark, Hadoop, EMR) to process large datasets efficiently.
* Leverage AWS services such as S3, Glue, Redshift, Lambda, and DynamoDB for data storage, processing, and transformation.
* Implement best practices for data modeling, security, governance, and compliance.
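To give a flavor of the day-to-day work, the sketch below shows a minimal PySpark ETL job of the kind these responsibilities describe: read raw data from S3, clean and aggregate it, and write curated Parquet back to S3. All bucket names, paths, and column names are hypothetical placeholders, not details of the actual environment.

```python
# Minimal PySpark ETL sketch: S3 -> transform -> S3 (Parquet).
# All bucket names, paths, and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV files landed in an S3 prefix.
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: deduplicate, type the columns, drop invalid rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Aggregate: daily revenue, one row per calendar date.
daily = (
    clean.groupBy(F.to_date("order_ts").alias("order_date"))
         .agg(F.sum("amount").alias("revenue"))
)

# Load: partitioned Parquet for downstream consumers
# (e.g., a Glue crawler or a Redshift Spectrum external table).
(
    daily.write.mode("overwrite")
         .partitionBy("order_date")
         .parquet("s3://example-curated-bucket/orders_daily/")
)

spark.stop()
```

This kind of job typically runs on EMR or as an AWS Glue Spark job, with the curated output exposed to analysts via Redshift or Athena.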

Required Skills
* Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
* 10+ years of experience in data engineering with a focus on cloud environments, preferably AWS.
* Proficiency in AWS services, including S3, Glue, Redshift, EMR, Lambda, and DynamoDB.
* Expertise in programming languages such as Python, SQL, or Scala.
