Description

✅ Build scalable data pipelines using AWS (Glue, EMR, Lambda).
✅ Design & optimize data lakes (S3, Delta Lake) and warehouses (Redshift, Athena).
✅ Work with Aurora, RDS, and DynamoDB for database solutions.
✅ Develop serverless workflows (Step Functions) & write clean code in Python, PySpark, and SQL.
✅ Ensure data security, quality & compliance.

What We’re Looking For:
✔ Strong AWS expertise in Glue, EMR, Lambda, Redshift, S3, and DynamoDB
✔ Hands-on experience with big data technologies (Hadoop, Spark, Delta Lake)
✔ Proficiency in Python, PySpark, and SQL/PostgreSQL
✔ Experience in ETL, data warehousing & cloud architecture

Education

Any Graduate