Description

  • Strong experience with Databricks (including notebooks, clusters, and Delta Lake).
  • Proficient in Azure Data Factory (ADF) — building pipelines, data flows, and orchestrations.
  • Advanced skills in PySpark for big data transformation and processing.
  • Able to work end-to-end on a piece of work — from requirements gathering to implementation and validation.
  • Proactive and self-driven, with the ability to chase stakeholders for inputs, approvals, and clarifications.
  • Experience in data analysis and mapping — understanding source data, documenting mappings, and identifying data quality issues.

Education

Any Graduate