Description

Key Responsibilities:

  • Design and implement data pipelines for global and regional data synchronization (Azure SQL, Data Lake, etc.)
  • Develop solutions for secure handling of PII and non-PII data, ensuring compliance with GDPR and other regulations
  • Build and optimize ETL processes for anonymization, transformation, global synchronization and distribution of data
  • Collaborate with software architects and DevOps to integrate data flows with application logic and deployment pipelines
  • Set up monitoring, alerting, and documentation for data processes within the existing frameworks
  • Advise on best practices for data partitioning, replication, and schema evolution

Requirements:

  • 4–7 years of experience as a Data Engineer in cloud environments (preferably Microsoft Azure)
  • Strong knowledge of Azure SQL, Data Lake, Data Factory, and related services
  • Experience with distributed data architectures and data synchronization across regions
  • Familiarity with data privacy, security, and compliance (GDPR, etc.)
  • Proficiency in Python, SQL, and ETL tools
  • Excellent problem-solving and communication skills
  • Hands-on, self-directed contributor

Preferred:

  • Experience with MS-SQL, Cosmos DB, Databricks, and event-driven architectures
  • Knowledge of CI/CD and infrastructure-as-code (Azure DevOps, ARM/Bicep, Terraform)
  • Prior work in global/multi-region data solutions

Education

Bachelor's degree