Job Description:
Key Responsibilities:
Schema Design & Data Modeling:
Design normalized, denormalized, star, and snowflake schemas based on business requirements.
Optimize database structures to enhance performance and scalability.
Implement partitioning, indexing, and clustering strategies for efficient data retrieval.
Data Engineering & ETL/ELT Pipelines:
Develop and maintain ETL/ELT processes using Azure Data Factory (ADF), Databricks, and Synapse Analytics.
Ensure efficient data ingestion, transformation, and storage in Azure Data Lake, SQL Database, and Synapse.
Work with structured and unstructured data, optimizing data flows for analytical processing.
Cloud Data Architecture & Optimization:
Design and implement scalable Azure-based data architectures.
Utilize Azure Data Lake, Cosmos DB, Synapse Analytics, and SQL Server for data storage and processing.
Implement data partitioning, indexing, and caching strategies to optimize query performance.
Good communication skills.
Required Skills & Experience:
8+ years of experience in data engineering, with a focus on schema design and data modeling.
Strong expertise in Azure Data Services (Azure Data Lake, Azure Synapse Analytics, Azure SQL Database, Cosmos DB, Databricks, Azure Data Factory).
Hands-on experience in schema design principles (OLTP vs. OLAP, star/snowflake schema, indexing, partitioning).
Proficiency in SQL, Python, or Scala for data processing and transformation.
Experience with big data processing using Apache Spark, Delta Lake, and the Parquet format.
Knowledge of data governance, security, and compliance best practices is a plus.
Familiarity with CI/CD pipelines for data workflows using Azure DevOps.
Experience in performance tuning, query optimization, and cost management on Azure.
Education: Any graduate.