Description

We are seeking a highly skilled Microsoft Fabric Data Engineer to
design, develop, and optimize data solutions using Microsoft Fabric and
related Azure services. The ideal candidate will have experience in data
integration, transformation, and analytics using Microsoft Fabric, Azure
Synapse Analytics, Data Factory, Power BI, and Delta Lake. This role
requires strong expertise in PySpark, SQL, and data modeling to deliver
scalable and efficient data solutions.



Key Responsibilities:
Design & Develop Data Pipelines: Build scalable and efficient ETL/ELT
pipelines using Microsoft Fabric, Dataflows, and Data Factory.
Delta Lake & Medallion Architecture: Implement bronze, silver, and gold
layer architecture in Microsoft Fabric OneLake for efficient data
processing.
Work with Notebooks: Develop PySpark and SQL-based Fabric notebooks for
data transformations and analysis.
Security & Governance: Implement data security, compliance, and
governance policies using Microsoft Purview.
BI & Reporting: Collaborate with analysts and business users to create
Power BI reports and enable self-service analytics.
Automation & Monitoring: Implement CI/CD pipelines, monitoring, and
alerting for data pipelines and workflows.
Must-Have Skills & Qualifications:
Minimum of 8 years of overall experience in the IT industry.
8+ years of experience in data engineering, big data, or cloud
analytics.
Strong hands-on experience with Microsoft Fabric, Azure Synapse, Data
Factory, and Delta Lake.
Proficiency in SQL, PySpark, and Python for data processing and
transformation.
Experience with Microsoft Fabric Lakehouse & Warehouse architecture.
Knowledge of Power BI, DAX, and data visualization best practices.
Familiarity with Azure Data Lake, Blob Storage, and Parquet formats.
Experience with data security, RBAC, and governance in Azure.
Understanding of ETL/ELT best practices and data pipeline automation.
Good-to-Have Skills:
Microsoft certifications such as DP-600 (Fabric Analytics Engineer Associate)

Education

Any Graduate