Job Description:

Responsibilities:

  • Experience in building Spark Streaming processes.
  • Proficient understanding of distributed computing principles.
  • Experience in managing Hadoop clusters and all associated services.
  • Experience with NoSQL databases and messaging systems such as Kafka.
  • Designing, building, installing, configuring, and supporting Hadoop; performing analysis of vast data stores.
  • Good understanding of cloud technologies.
  • Must have strong technical experience in design and mapping specifications, HLD, and LLD.
  • Must have the ability to relate to both business and technical members of the team and possess excellent communication skills.
  • Leverage internal tools and SDKs, utilize AWS services such as S3, Athena, and Glue, and integrate with our internal Archival Service Platform for efficient data purging.
  • Lead the integration efforts with the internal Archival Service Platform for seamless data purging and lifecycle management.
  • Collaborate with the data engineering team to continuously improve data integration pipelines, ensuring adaptability to evolving business needs.

Education

Any Graduate