Description

This role requires strong hands-on experience with Databricks, SQL, and Python, along with practical expertise in Azure Cloud services. You will play a key role in designing, developing, and optimizing scalable data pipelines and solutions to support enterprise-wide data initiatives.

Responsibilities

Develop and maintain scalable ETL/data pipelines using Databricks, SQL, and Python on Azure Cloud

Design and implement data integration workflows across structured and unstructured sources within the AMDM domain

Collaborate with data architects and business teams to translate requirements into efficient data solutions

Ensure data quality, consistency, and integrity across systems

Monitor, troubleshoot, and optimize data workflows in Azure-based environments

Leverage Azure services such as Azure Data Factory, Azure Data Lake, Azure Synapse, and Azure Key Vault as part of solution delivery

Qualifications

Hands-on experience in Data Engineering or ETL development

Strong proficiency in Databricks for distributed data processing and transformation

Advanced skills in SQL and Python for building and automating data workflows

Solid working experience with core Azure Data Services

Experience with relational and non-relational database technologies (e.g., SQL Server, PostgreSQL, Oracle)

Familiarity with Stibo or similar Master Data Management (MDM) tools

Skills

Mandatory Skills: Microsoft Fabric, Azure Analysis Services, Power BI, Azure Cosmos DB, Azure Data Factory, Azure DevOps, Azure Event Hubs, Azure Log Analytics, Azure Logic Apps, Azure Monitor, Azure SQL, Azure Synapse Analytics, Azure Key Vault, Azure Data Lake, Azure Purview, Databricks, Dimensional Data Modeling, Apache Airflow, Azure Blob Storage, HDInsight

Education

Any Graduate