Description

Must-Have Experience:

  • 6+ years of hands-on data engineering experience
  • Expertise with AWS services: S3, Redshift, EMR, Glue, Kinesis, DynamoDB
  • Building batch and real-time data pipelines
  • Proficiency in Python and SQL for data processing and analysis
  • Data modeling on cloud-based data platforms such as Redshift, Snowflake, and Databricks
  • Designing and developing ETL frameworks

Nice-to-Have Experience:

  • ETL development with tools such as Informatica, Talend, or Fivetran
  • Creating reusable data sources and dashboards for self-service analytics
  • Using Databricks for Spark workloads, or Snowflake
  • Working knowledge of big data processing
  • Setting up CI/CD pipelines
  • Implementing infrastructure as code
  • Any AWS Professional certification

Education

Bachelor's degree in any discipline