Description

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 5-7 years of experience in data engineering, with a proven track record of designing and implementing scalable data solutions.
  • Proficiency in programming languages such as Python, Java, or Scala, along with experience in SQL and scripting languages.
  • Strong expertise in data warehousing concepts, ETL processes, and database technologies (e.g., SQL, NoSQL, columnar databases).
  • Hands-on experience with big data processing frameworks and tools such as Apache Hadoop, Apache Spark, Apache Kafka, and Apache Flink; familiarity with distributed computing concepts for handling large-scale datasets.
  • Experience with tools such as Amazon Redshift, Google BigQuery, or Snowflake for building and managing data warehouses.
  • Hands-on experience with cloud platforms and services (e.g., AWS, Azure, GCP). Certification in cloud technologies is a plus.
  • Knowledge of containerization technologies such as Docker and Kubernetes, and of DevOps practices such as continuous integration/continuous deployment (CI/CD) and infrastructure as code (IaC), enabling automated testing, deployment, and management of data engineering pipelines.
  • Excellent problem-solving skills, attention to detail, and ability to thrive in a fast-paced environment.
  • Strong communication and leadership skills, with the ability to collaborate effectively with cross-functional teams and stakeholders.

Education

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.