Description

Job Description:
Skills: ADA, Spark, SQL, and shell scripting
1. Create Scala/Spark/PySpark jobs for data transformation and aggregation. Produce unit tests for Spark transformations and helper methods. Use Spark and Spark SQL to read Parquet data and create tables in Hive via the Scala API.
2. Work closely with the Business Analyst team to review test results and obtain sign-off.
3. Prepare necessary design/operations documentation for future use.
4. Perform peer code quality reviews and serve as gatekeeper for quality checks. Hands-on coding, usually in a pair-programming environment.
5. Work in highly collaborative teams and build quality code.
6. The candidate must exhibit a good understanding of data structures, data manipulation, distributed processing, application development, and automation.
7. Familiarity with Oracle, Spark Streaming, Kafka, and ML.
8. Develop applications using the Hadoop tech stack and deliver them effectively, efficiently, on time, to specification, and in a cost-effective manner.
9. Ensure smooth production deployments as per plan, with post-production deployment verification.
10. This Hadoop Developer will play a hands-on role, developing quality applications within the desired timeframes and resolving team queries.
Requirements

Hadoop Data Engineer with 3-6 years of total experience, including strong experience in Hadoop, Spark, Scala, Java, Hive, Impala, CI/CD, Git, Jenkins, Agile methodologies, DevOps, and Cloudera Distribution. Strong knowledge of data warehousing methodology.

Education

Any Graduate