Must-Have:
· 8+ years of relevant experience in Data Engineering and delivery.
· 8+ years of relevant work experience with Big Data concepts, including work on cloud implementations.
· Strong experience with SQL, Python, and PySpark
· Good understanding of data ingestion and data processing frameworks
· Good experience with Snowflake, SQL, and AWS (Glue, EMR, S3, Aurora, RDS, and overall AWS architecture)
· Good aptitude, strong problem-solving abilities, analytical skills, and ability to take ownership as appropriate.
· Able to code, debug, performance-tune, and deploy applications to the production environment.
· Experience working in an Agile methodology
· Ability to learn and help the team learn new technologies quickly.
· Excellent communication and coordination skills
Good to have:
· Experience with DevOps tools (Jenkins, Git, etc.) and practices, including continuous integration and delivery (CI/CD) pipelines; experience with cloud implementations and data migration.
Key Skills:
· Knowledge of implementing end-to-end ETL/ELT data solutions
· Ingesting data from REST APIs into an AWS data lake (S3) and into relational databases such as Amazon RDS, Aurora, and Redshift, as well as lakehouse platforms (a minimal sketch follows this list)
· Understanding requirements and designing data solutions (ingestion, storage, integration, processing, access) on AWS
· Knowledge of analyzing data using SQL stored procedures
· Building automated data pipelines to ingest data from relational database systems and file systems (a second sketch follows this list)
· Conducting end-to-end verification and validation of the entire application
· Using Git/Bitbucket for efficient remote collaboration, storing the framework, and developing test scripts
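For illustration only, a minimal sketch of the REST-API-to-S3 ingestion mentioned above, written in Python with requests and boto3. The endpoint, bucket, and key names are hypothetical placeholders and are not part of this role description.

```python
# Minimal sketch: pull records from a (hypothetical) REST endpoint and land them
# as a JSON object in an S3 data lake bucket. Endpoint, bucket, and key names
# are placeholders only.
import json
from datetime import datetime, timezone

import boto3
import requests

API_URL = "https://api.example.com/v1/orders"   # hypothetical source API
BUCKET = "example-data-lake-raw"                # hypothetical S3 bucket

def ingest_to_s3() -> str:
    # Fetch a page of records from the REST API
    response = requests.get(API_URL, timeout=30)
    response.raise_for_status()
    records = response.json()

    # Partition the landing path by load date, a common raw-zone layout
    load_date = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    key = f"raw/orders/load_date={load_date}/orders.json"

    # Write the payload to the data lake bucket
    s3 = boto3.client("s3")
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(records).encode("utf-8"))
    return key

if __name__ == "__main__":
    print(f"Landed data at s3://{BUCKET}/{ingest_to_s3()}")
```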
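Likewise, a minimal PySpark sketch of an automated pipeline that reads a table from a relational database over JDBC and lands it in S3 as Parquet. The connection details, table name, and output path are hypothetical placeholders, and a suitable JDBC driver is assumed to be on the Spark classpath.

```python
# Minimal PySpark sketch: read a table from a relational database over JDBC and
# write it to S3 as Parquet. All connection details and paths are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rds_to_s3_ingest")        # hypothetical job name
    .getOrCreate()
)

# Read the source table (e.g., an Amazon RDS PostgreSQL instance)
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-rds:5432/sales")  # placeholder
    .option("dbtable", "public.orders")                         # placeholder
    .option("user", "etl_user")                                 # placeholder
    .option("password", "***")                                  # placeholder
    .load()
)

# Land the data in the lake, partitioned by order date for downstream processing
(
    orders.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-data-lake-curated/orders/")          # placeholder
)

spark.stop()
```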
Bachelor's degree in Computer Science