Description

Mandatory Skills: Hadoop, Spark, Hive, GCP/Azure/AWS

Job Description
- Proven experience with Hadoop, Spark (batch and streaming), and Hive.
- Proficiency in shell scripting and programming languages such as Java and/or Scala.
- Strong hands-on experience with GCP/Azure/AWS and a deep understanding of its services and tools.
- Ability to design, develop, and deploy big data solutions in a GCP/Azure/AWS environment.
- Experience with migrating data systems to GCP/Azure/AWS.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
- Strong communication skills to collaborate effectively with team members and stakeholders.
Responsibilities:
- Development: Design and develop scalable big data solutions using Hadoop, Spark, Hive, and GCP/Azure/AWS services.
- Design: Architect and implement big data pipelines and workflows optimized for GCP/Azure/AWS, ensuring efficiency, security, and reliability.
- Deployment: Deploy big data solutions on GCP/Azure/AWS, following best practices for cloud-based environments.
- Migration: Lead the migration of existing data systems to GCP/Azure/AWS, ensuring a smooth transition with minimal disruption and optimal performance.
- Collaboration: Work closely with cross-functional teams to integrate big data solutions with other cloud-based services and align them with business goals.
- Optimization: Continuously optimize big data solutions on GCP/Azure/AWS to improve performance, scalability, and cost-efficiency.

Education

Any Graduate