Description

Key Skills: Apache Spark, Azure Databricks, Python, Delta Lake, Kafka, Snowflake, Qlik, CI/CD, Azure

Roles & Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Azure Databricks and Apache Spark (see the pipeline sketch after this list).
  • Implement data processing solutions with Delta Lake and manage multiple file formats.
  • Collaborate with cross-functional teams to ensure data quality and integrity.
  • Utilize CI/CD practices for efficient deployment and management of data solutions.
  • Integrate data with Snowflake and deliver visualization and reporting through Qlik.
  • Leverage Kafka or similar streaming technologies for real-time data processing.
  • Ensure compliance with data governance and security standards in Azure.
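
To make the pipeline responsibility above concrete, here is a minimal PySpark sketch of a batch ingest into a Delta Lake table. It is an illustration, not a prescribed implementation: the storage paths, container names, column names, and job name are hypothetical placeholders, and on Databricks a `spark` session is already provided.

```python
# Minimal sketch: batch-ingest CSV into a Delta Lake table.
# All paths, names, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On Databricks this session already exists; shown here for completeness.
spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Read raw CSV files landed in the lake (placeholder ADLS path).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://landing@example.dfs.core.windows.net/orders/")
)

# Light cleanup: deduplicate and stamp ingestion time for auditability.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("ingested_at", F.current_timestamp())
)

# Write as Delta; the transaction log enables versioned reads (time travel).
curated_path = "abfss://curated@example.dfs.core.windows.net/orders_delta/"
cleaned.write.format("delta").mode("append").save(curated_path)

# Example of a versioned read against the same table:
v0 = spark.read.format("delta").option("versionAsOf", 0).load(curated_path)
```
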

Experience Requirements:

  • 5-8 years of experience developing and optimizing Spark jobs in Azure Databricks.
  • Proficient in building scalable ETL pipelines and working with Delta Lake for versioned data management.
  • Experienced in implementing CI/CD pipelines and managing deployments in cloud-based environments.
  • Knowledgeable in real-time data streaming using Kafka or similar platforms (see the streaming sketch after this list).
  • Experience with Snowflake integration and reporting via Qlik or other BI tools.
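
For the streaming requirement, the sketch below shows a minimal Structured Streaming job reading from Kafka into a Delta table, the common pattern on Databricks where the Kafka source is built in. The broker address, topic name, message schema, and paths are all assumed placeholders.

```python
# Minimal sketch: Kafka topic -> Delta table via Structured Streaming.
# Broker, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Assumed JSON payload carried in each Kafka message value.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Kafka delivers the value as bytes: cast to string, then parse the JSON.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Delta sink with a checkpoint so the stream resumes cleanly after restarts.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder
    .outputMode("append")
    .start("/tmp/delta/events")                               # placeholder
)
query.awaitTermination()
```
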

Education: Any Graduate
