Description

Key Responsibilities:

  • Architect, develop, and maintain ETL/ELT pipelines and data workflows.
  • Build and optimize data models for analytics and reporting.
  • Integrate data from diverse sources (databases, APIs, streaming platforms).
  • Implement best practices for data quality, governance, and security.
  • Work closely with analysts, data scientists, and business teams to deliver data solutions.
  • Optimize performance of data storage and processing systems.

Required Skills & Experience:

  • 9+ years of experience in data engineering or a related field.
  • Strong expertise in SQL and at least one programming language (Python, Java, or Scala).
  • Hands-on experience with cloud data platforms (AWS Redshift, Snowflake, GCP BigQuery, or Azure Synapse).
  • Proficiency in distributed data processing (Apache Spark, Kafka, Flink).
  • Solid understanding of data warehousing concepts and dimensional modeling.
  • Familiarity with CI/CD, Git, and containerization (Docker/Kubernetes).

Preferred:

  • Experience with orchestration and transformation tools (Airflow, dbt).
  • Knowledge of data security and compliance regulations (GDPR, CCPA).

Education

Any Graduate