Description

Must-Have Skills

  • PySpark – advanced performance optimization
  • AWS Services – Redshift, Glue, Athena, Lambda, DMS, RDS, CloudFormation
  • SQL – strong query-writing and optimization
  • ETL Development – pipelines for structured & unstructured data
  • Python – robust coding skills
  • Data Modeling – scalable and performance-driven design

Key Responsibilities

  • Design, build, and optimize ETL pipelines on AWS
  • Develop and optimize data models for performance & efficiency
  • Write complex SQL queries for analytics and reporting
  • Collaborate with stakeholders to define and deliver data-driven solutions
  • Implement and maintain AWS-based data architectures
  • Ensure best practices in performance tuning, cost optimization, and scalability
  • Work in Agile teams to deliver business-critical projects


Ideal Candidate

  • 7–14 years of experience in AWS Data Engineering
  • Hands-on experience with big data & cloud-native ecosystems
  • Strong analytical and problem-solving skills
  • Comfortable working in a hybrid environment

Education

Any graduate degree