Mandatory Skills - Spark, Scala, Kafka, streaming, technical architecture, Databricks, data engineering, data quality, data governance, presales, client relationship management, presentations.
Years of Experience: 15 to 20 years
Job Description
- Hands-on experience with Spark (DataFrame API), Spark SQL, Databricks, AWS Glue, Scala/Spark or PySpark, and Kafka or another streaming technology (see the sketch after this list). Strong learning and cross-skilling ability.
- Design and architecture of big data systems. Experience with ETL, data governance, and data quality; good understanding of data operations. Experience with AWS or Azure cloud.
- Experience with solutioning and consulting (MUST HAVE).
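A minimal sketch of the kind of Spark DataFrame and Spark SQL work listed above, assuming a local Spark session and a hypothetical orders.csv input file; the file name, columns, and app name are illustrative, not part of the role.

```scala
// Minimal sketch, assuming a local Spark session and a hypothetical
// orders.csv file with customer_id and amount columns.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersReport {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("OrdersReport")
      .master("local[*]") // local mode for illustration only
      .getOrCreate()

    // Read a CSV into a DataFrame, inferring the schema.
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("orders.csv") // hypothetical input path

    // DataFrame API: aggregate revenue per customer.
    val revenue = orders
      .groupBy("customer_id")
      .agg(sum("amount").as("total_amount"))

    // Equivalent Spark SQL over a temporary view.
    orders.createOrReplaceTempView("orders")
    val topCustomers = spark.sql(
      """SELECT customer_id, SUM(amount) AS total_amount
        |FROM orders
        |GROUP BY customer_id
        |ORDER BY total_amount DESC
        |LIMIT 10""".stripMargin)

    revenue.show()
    topCustomers.show()
    spark.stop()
  }
}
```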
Roles & Responsibilities
- Design and develop big data pipelines using Spark (batch and streaming) and related technology stacks, as illustrated in the sketch after this list.
- MUST HAVE experience in solutioning big data systems and consulting as an SME on big data technologies.
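A minimal sketch of a Spark Structured Streaming pipeline reading from Kafka, of the kind described above; the broker address, topic name, and checkpoint path are assumptions for illustration, and the job requires the spark-sql-kafka connector dependency.

```scala
// Minimal sketch, assuming a Kafka broker at localhost:9092 and a
// hypothetical "events" topic.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("EventPipeline")
      .master("local[*]") // local mode for illustration only
      .getOrCreate()

    // Read a stream of raw records from Kafka.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092") // assumed broker
      .option("subscribe", "events")                        // assumed topic
      .load()

    // Kafka keys arrive as bytes; cast to string and count by key
    // over 1-minute event-time windows with a 2-minute watermark.
    val counts = raw
      .selectExpr("CAST(key AS STRING) AS key", "timestamp")
      .withWatermark("timestamp", "2 minutes")
      .groupBy(window(col("timestamp"), "1 minute"), col("key"))
      .count()

    // Write windowed counts to the console; a production job would
    // target Delta, Parquet, or another durable sink instead.
    val query = counts.writeStream
      .outputMode("update")
      .format("console")
      .option("checkpointLocation", "/tmp/event-pipeline-ckpt") // assumed path
      .start()

    query.awaitTermination()
  }
}
```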