Description

Key Skills: Data Engineer, Databricks, PySpark, Azure, AWS, GCP, Azure SQL Database, Synapse Analytics, Azure Data Factory, CI/CD, Azure DevOps.

Roles & Responsibilities:

  • Design, develop, and maintain data pipelines using Databricks and PySpark.
  • Implement and manage data solutions on cloud platforms such as Azure, AWS, or GCP.
  • Optimize SQL queries and manage Azure SQL Database, Synapse Analytics, and Azure Data Factory.
  • Collaborate with cross-functional teams to understand data requirements and deliver high-quality solutions.
  • Develop CI/CD pipelines using Azure DevOps to streamline deployment processes.
  • Ensure data integrity and accuracy through rigorous testing and validation.
  • Stay updated with industry trends and best practices in data engineering.

Experience Required:

  • 3-12 years of experience with Databricks and PySpark in enterprise data environments.
  • Strong experience managing cloud data platforms such as Azure, AWS, or GCP.
  • Experience working with large-scale ETL processes, data lakes, and data warehouses.
  • Familiarity with data governance, metadata management, and data security practices.
  • Exposure to Agile methodologies and DevOps practices for continuous delivery.

Education: Any Graduate