Description

Responsibilities:

·         Implement monitoring and alerting solutions to provide insight into overall platform health.

·         As an integral part of the Data Platform team, onboard various data sources by creating data pipelines.

·         Provide resolutions and/or workarounds for data pipeline related queries and issues, as appropriate.

·         Ensure that the ingestion pipelines feeding the data lake and data warehouses are up and running.

·         Assist end users of the Data Platform with query debugging and optimisation.

·         Collaborate with other teams to understand and resolve data availability and consistency issues.

·         Be involved in knowledge sharing (knowledge base articles, documentation, forums, blogs, etc.).

·         Continuously improve technical knowledge and problem-resolution skills, and strive for excellence.


Requirements

What are we looking for?

·         2+ years of technical/application support experience building and supporting data pipelines.

·         Ability to read and write SQL, and an understanding of at least one relational database such as MySQL, Oracle, Postgres, or SQL Server.

·         Good knowledge of Java and Python programming.

·         Comfortable with Linux, with the ability to write small scripts in Bash/Python and to work with log files and Unix processes.

·         Prior experience working with cloud services, preferably AWS.

·         Ability to learn complex new things quickly

·         Willingness to provide occasional support over weekends, early mornings, or late nights.

·         A team player with the ability to work under pressure and good time-management skills.

·         Excellent written and verbal communication skills

·         Troubleshooting skills and the ability to perform root cause analysis (RCA).

·         Prior experience with big data technologies such as Spark, Hadoop, Kafka, Airflow, etc. is preferred.



Education

Any Graduate