Job Description:

- Strong programming experience in Java (8 or higher).
- Solid understanding of Big Data ecosystems, including:
  - Apache Hadoop (HDFS, MapReduce, YARN)
  - Apache Spark (RDDs, DataFrames, Spark Streaming)
  - Apache Kafka
  - Hive, HBase, Pig, or similar tools
- Experience with SQL and working with large-scale structured and unstructured data.
- Familiarity with distributed systems and cloud platforms (AWS, Azure, or GCP).
- Knowledge of containerization and orchestration tools such as Docker and Kubernetes is a plus.
- Experience with CI/CD tools and practices.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Education

Any Graduate