Description

Key Responsibilities:

  • Design and implement data migration strategies from AWS to Azure.
  • Develop and optimize data pipelines and ETL workflows using Azure Data Factory (ADF), Databricks, and Azure Synapse.
  • Manage Azure Data Lake storage for structured and unstructured data.
  • Work with Hadoop environments on AWS and transition workloads to Azure.
  • Develop and maintain Python-based ETL scripts for data transformation.
  • Collaborate with data architects and business teams to ensure data integrity and governance.
  • Monitor and troubleshoot data workflows, performance, and security.
  • Implement best practices in data engineering, cloud security, and compliance.

Required Skills & Experience:

  • 7 to 9 years of experience in data engineering.
  • Expertise in Azure Data Lake, Databricks, Azure Data Factory (ADF), and Azure Synapse Analytics.
  • Experience with Hadoop on AWS and with data migration from AWS to Azure.
  • Strong proficiency in ETL development, Python, and SQL.
  • Hands-on experience with data modeling, warehousing, and processing frameworks.
  • Knowledge of cloud security, role-based access control (RBAC), and compliance requirements.
  • Excellent problem-solving skills and ability to work in high-performance data environments.

Preferred Skills:

  • Experience with Infrastructure-as-Code (Terraform, Bicep) for Azure deployments.
  • Familiarity with Power BI, AI-driven analytics, and advanced data governance.
  • Previous experience in large-scale enterprise data migration projects.

Education

Any Graduate