Description

Responsibilities:

  • Design and develop ETL pipelines using Azure Data Factory (ADF) for data ingestion and transformation.
  • Work with Azure stack components such as Data Lake Storage and SQL Data Warehouse (Synapse) to build robust data solutions.
  • Write SQL, Python, and PySpark code for efficient data processing and transformation (see the sketch after this list).
  • Understand and translate business requirements into technical designs.
  • Develop mapping documents and transformation rules as per project scope.
  • Communicate project status to stakeholders to ensure smooth project execution.
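
As an illustration of the PySpark work described above, here is a minimal ETL sketch. It is an example only: the file paths, column names, and the insurance-flavored "policies" dataset are assumptions made for illustration, not details from this posting.

    # Minimal PySpark ETL sketch. Paths, columns, and dataset names are
    # illustrative assumptions, not details from this job posting.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("policy-etl").getOrCreate()

    # Ingest: read raw CSV data from a hypothetical landing path.
    raw = spark.read.option("header", True).csv("/mnt/raw/policies.csv")

    # Transform: cast types, derive a column, drop incomplete rows.
    clean = (
        raw.withColumn("premium", F.col("premium").cast("double"))
           .withColumn("policy_year", F.year(F.to_date("start_date")))
           .dropna(subset=["policy_id", "premium"])
    )

    # Load: write curated Parquet for downstream analytics.
    clean.write.mode("overwrite").parquet("/mnt/curated/policies/")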


Requirements

Must have:

  • 10-12 years of experience building data ingestion, data processing, and analytical pipelines for big data and relational databases.
  • Hands-on experience with Azure services: ADLS, Azure Databricks, Data Factory, Synapse, and Azure SQL DB (see the connection sketch below).
  • Experience in SQL, Python, and PySpark for data transformation and processing.
  • Familiarity with DevOps and CI/CD deployments.
  • Strong communication skills and attention to detail in high-pressure situations.

Preferred:

  • Experience in the insurance or financial industry.
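
For the Azure services requirement above, the following is a minimal sketch of reading data from ADLS Gen2 inside Azure Databricks. It assumes account-key authentication for brevity; the storage account, container, key, and paths are placeholders, and a real deployment would typically pull the key from a Databricks secret scope instead.

    # Sketch: read Parquet from ADLS Gen2 in Azure Databricks.
    # Storage account, container, key, and paths are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # supplied by the Databricks runtime

    # Grant Spark access to the storage account (account-key auth for brevity;
    # in practice the key would come from a secret scope, not a literal).
    spark.conf.set(
        "fs.azure.account.key.<storage-account>.dfs.core.windows.net",
        "<storage-account-key>",
    )

    # Read curated Parquet data via the abfss:// scheme.
    df = spark.read.parquet(
        "abfss://<container>@<storage-account>.dfs.core.windows.net/curated/policies/"
    )
    df.show(5)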

Education

Any Graduate