Description

You must have strong hands-on experience with Databricks, SQL, and Python, and practical expertise in Azure Cloud services. You will play a key role in designing, developing, and optimizing scalable data pipelines and solutions to support enterprise-wide data initiatives.

Responsibilities

Develop and maintain scalable ETL data pipelines using Databricks, SQL, and Python on Azure Cloud

Design and implement data integration workflows across structured and unstructured sources within the AMDM domain

Collaborate with data architects and business teams to translate requirements into efficient data solutions

Ensure data quality, consistency, and integrity across systems

Monitor, troubleshoot, and optimize data workflows in Azure-based environments

Leverage Azure services like Azure Data Factory, Azure Data Lake, Azure Synapse, and Azure Key Vault as part of solution delivery

Qualifications

6 years of hands-on experience in Data Engineering or ETL development

Strong proficiency in Databricks for distributed data processing and transformation

Advanced skills in SQL and Python for building and automating data workflows

Solid working experience with core Azure Data Services

Experience with relational and non-relational database technologies (e.g., SQL Server, PostgreSQL, Oracle)

Familiarity with Stibo or similar Master Data Management (MDM) tools

Education

Any Graduate