Description

Primary Skill Set:
8+ years’ experience with data warehouse and data lake ETL platforms.
8+ years’ experience with data modeling frameworks such as Data Vault 2.0, star schema, and snowflake schema.
8+ years’ experience with SQL scripting, including SCD Type 1, Type 2, and Type 3 (slowly changing dimension) logic; a minimal SCD Type 2 sketch appears after this list.
Expertise in distributed data processing frameworks such as Spark (Core, Streaming, SQL), Storm, and Flink; see the PySpark sketch after this list.
3+ years’ experience with Infrastructure as Code (IaC) using Terraform or CloudFormation.
5+ years’ experience with AWS cloud services, e.g., AWS Glue, Athena, EMR, Kinesis Data Firehose, Kinesis Data Streams, Redshift, RDS, DMS, S3, AppFlow, SQS, Lambda, Airflow, EventBridge, etc.
8+ years’ experience working with relational databases (Oracle, SQL Server, DB2, PostgreSQL, MySQL, SAP HANA) on AWS and/or on-premises infrastructure.
5+ years’ experience with NoSQL solutions such as DynamoDB, Bigtable, MongoDB, and Cassandra.
5+ years’ experience with programming languages such as Python, Java, and Scala.
5+ years’ experience with business intelligence technologies such as Power BI, Tableau, and QuickSight.
5+ years’ experience with CI/CD pipelines on platforms such as GitHub and Azure DevOps.
Experience with the Hadoop ecosystem on an AWS cloud distribution (e.g., EMR).
Experience with big data ingestion tools such as Sqoop, Flume, and NiFi, and with distributed messaging and ingestion frameworks such as Theobald, Kafka, Pulsar, and Pub/Sub.
Agile methodology skills, including experience with Scrum ceremonies and work management tools (e.g., Jira, Confluence, ADO).
Ability to work as a cross-functional team member supporting multiple work streams and products.
Excellent communication skills.
Experience with foundational AWS account setup and AWS Organizations.
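
As a reference point for the SCD requirement above, the following is a minimal, illustrative sketch of SCD Type 2 logic in Python using the standard-library sqlite3 module. The table and column names (dim_customer, effective_from, effective_to, is_current) are hypothetical, chosen only to show the pattern; a production implementation would typically target a warehouse engine such as Redshift or Snowflake and use MERGE where available.

    # SCD Type 2 sketch: expire the current dimension row and insert a new
    # version when a tracked attribute changes. All names are hypothetical.
    import sqlite3
    from datetime import date

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    cur.execute("""
        CREATE TABLE dim_customer (
            customer_id    INTEGER,
            city           TEXT,
            effective_from TEXT,
            effective_to   TEXT,
            is_current     INTEGER
        )
    """)

    # Seed the dimension with one current row (open-ended effective_to).
    cur.execute(
        "INSERT INTO dim_customer VALUES (?, ?, ?, ?, ?)",
        (1, "Austin", "2020-01-01", "9999-12-31", 1),
    )

    def apply_scd2_change(customer_id, new_city, change_date):
        """Close out the current row and open a new one (SCD Type 2)."""
        # Step 1: expire the existing current row if the attribute changed.
        cur.execute(
            """
            UPDATE dim_customer
            SET effective_to = ?, is_current = 0
            WHERE customer_id = ? AND is_current = 1 AND city <> ?
            """,
            (change_date, customer_id, new_city),
        )
        # Step 2: insert the new version only if a row was actually expired;
        # an unchanged attribute is a no-op under SCD Type 2.
        if cur.rowcount:
            cur.execute(
                "INSERT INTO dim_customer VALUES (?, ?, ?, '9999-12-31', 1)",
                (customer_id, new_city, change_date),
            )
        conn.commit()

    apply_scd2_change(1, "Denver", str(date(2023, 6, 1)))

    for row in cur.execute("SELECT * FROM dim_customer ORDER BY effective_from"):
        print(row)
    # (1, 'Austin', '2020-01-01', '2023-06-01', 0)
    # (1, 'Denver', '2023-06-01', '9999-12-31', 1)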
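
For the Spark requirement, here is a minimal PySpark sketch of a batch ETL step of the kind described above, assuming pyspark is installed and an S3-compatible filesystem is configured; the bucket paths and column names (order_ts, region, amount) are hypothetical placeholders.

    # PySpark sketch: read raw JSON from the lake, aggregate, and write
    # partitioned Parquet for downstream Athena/Glue consumption.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

    # Read raw JSON events (e.g., as landed by Kinesis Data Firehose).
    orders = spark.read.json("s3://example-bucket/raw/orders/")

    # Derive a partition column and aggregate with Spark SQL functions.
    daily = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date", "region")
        .agg(F.sum("amount").alias("total_amount"))
    )

    # Write partitioned Parquet back to the curated zone of the lake.
    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/daily_orders/"
    )

    spark.stop()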

Education

Any Graduate