Job Description

Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL processes using Apache Spark, Databricks, Python, and SQL.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
  • Optimize and improve existing data workflows for performance, reliability, and scalability.
  • Implement data quality checks and ensure data integrity across all data pipelines.
  • Monitor and troubleshoot data pipeline issues, ensuring timely resolution.
  • Stay up-to-date with the latest industry trends and best practices in data engineering and incorporate them into our processes.

What's on offer

  • Competitive salary and benefits package.
  • Opportunities for professional growth and development.
  • A collaborative and inclusive work environment.
  • The chance to work on impactful projects with a talented team.

Education

Any Graduate