Description

Requirements:

Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field.
At least 10 years of experience as a Data Engineer, working with Hadoop, Spark, and data processing technologies in large-scale environments.
Strong expertise in designing and developing data infrastructure using Hadoop, Spark, and related tools (HDFS, Hive, Ranger, etc.).
Experience with containerization platforms such as OpenShift Container Platform (OCP) and container orchestration using Kubernetes.
Proficiency in programming languages commonly used in data engineering, such as Python, Scala, or Java, including their Spark APIs.
Knowledge of DevOps practices, CI/CD pipelines, and infrastructure automation tools (e.g., Docker, Jenkins, Ansible, Bitbucket).
Experience with Grafana, Prometheus, or Splunk is an added benefit.
Strong problem-solving and troubleshooting skills with a proactive approach to resolving technical challenges.
Excellent collaboration and communication skills to work effectively with cross-functional teams.
Ability to manage multiple priorities, meet deadlines, and deliver high-quality results in a fast-paced environment.
Experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services is a plus.

Education

Bachelor's Degree