Job Description

About the Role
We are looking for a skilled Data Engineer with expertise in SQL and dbt (Data Build Tool) to join our data team. You will be responsible for designing, building, and optimizing data pipelines that support analytics and business intelligence, working closely with data analysts, engineers, and stakeholders to deliver high-quality, reliable, and scalable data infrastructure.
Key Responsibilities

  • Design, develop, and maintain ETL/ELT pipelines using SQL and dbt.
  • Transform raw data into clean, structured datasets for analytics and reporting.
  • Optimize database performance, ensuring efficient data storage and retrieval.
  • Work with cloud-based data warehouses (e.g., Azure Databricks).
  • Collaborate with data analysts and stakeholders to understand data requirements.
  • Implement best practices in data modeling, governance, and quality control.
  • Monitor and troubleshoot data pipeline failures and performance issues.
  • Document data transformations, schemas, and processes for transparency.
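
To illustrate the kind of transformation work described above, here is a minimal sketch of turning raw data into a clean, deduplicated dataset. The table and column names (raw.events, event_id, loaded_at) are hypothetical placeholders, not references to any actual schema for this role:

```sql
-- Hypothetical example: keep the most recently loaded row per event_id,
-- trim and normalize the event_type, and cast the timestamp for analytics use.
with ranked as (
    select
        event_id,
        cast(event_timestamp as timestamp) as event_ts,
        lower(trim(event_type))            as event_type,
        row_number() over (
            partition by event_id
            order by loaded_at desc
        ) as rn
    from raw.events
)
select
    event_id,
    event_ts,
    event_type
from ranked
where rn = 1
```

In a dbt project, a query like this would typically live in a staging model so it can be versioned, tested, and documented alongside the rest of the pipeline.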

Required Skills & Qualifications

  • 8+ years of experience in data engineering or a related field.
  • Strong proficiency in SQL (e.g., complex queries, optimization, indexing).
  • Hands-on experience with dbt (writing models, macros, tests, and documentation).
  • Experience with cloud data warehouses (e.g., Azure Databricks).
  • Knowledge of data modeling techniques (e.g., star schema, normalization).
  • Experience with version control systems (e.g., Git) and CI/CD workflows.
  • Understanding of ETL/ELT processes and data pipeline orchestration.
  • Strong problem-solving skills and ability to work in a collaborative team.
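
For candidates less familiar with dbt, the following sketch shows what "writing models" looks like in practice. The model name stg_orders and the source jaffle_shop are illustrative only (jaffle_shop is dbt's public tutorial project, not this team's codebase):

```sql
-- models/staging/stg_orders.sql (hypothetical dbt model)
-- dbt compiles {{ source(...) }} to the configured warehouse table
-- and, by default, materializes the result as a view.
select
    order_id,
    customer_id,
    cast(order_date as date) as order_date,
    status
from {{ source('jaffle_shop', 'orders') }}
```

Data quality checks such as unique and not_null tests on order_id would then be declared in an accompanying YAML properties file and run with `dbt test`.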

Preferred Qualifications

  • Experience with Python for data engineering tasks.
  • Familiarity with orchestration tools (e.g., Airflow, Prefect, Dagster).
  • Exposure to BI tools (e.g., Looker, Tableau, Power BI).
  • Knowledge of data security and governance best practices.

Education

Any graduate (bachelor's degree in any discipline).