Description

The ideal candidate should have a strong background in data processing, cloud-based solutions, and big data technologies.
6+ years of professional experience in Data Engineering or related roles.
Proficiency in PySpark and Python for data processing and manipulation.
Strong expertise in SQL for complex query development and optimization.
Hands-on experience with AWS Glue, Step Functions, and Lambda for serverless computing and orchestration.
Proficiency in Amazon Redshift, including data modelling and query tuning.
Deep understanding of data pipeline design, distributed systems, and cloud infrastructure.

Education

Any Graduate