Description

Key Responsibilities

  • Architect, develop, and test large-scale, highly efficient data solutions.
  • Design scalable methods to consume and process data from diverse, often unpredictable sources.
  • Create self-service data products that promote predictability and ease of use across teams.
  • Build reusable libraries and frameworks that enhance team productivity.
  • Optimize and maintain data pipelines and systems for performance, quality, and operational excellence.
  • Collaborate cross-functionally with data analysts, product teams, and engineering stakeholders to align on data strategy and execution.

Required Qualifications

  • 6+ years of software/data engineering experience with a strong focus on SQL and data systems.
  • Proficiency in programming languages such as Python, Java, or Scala.
  • Hands-on experience with data orchestration and processing tools such as Airflow, Spark, Trino, and Kafka.
  • Proven ability to analyze complex datasets and implement high-quality, efficient solutions.
  • Familiarity with SDLC best practices, version control systems, and CI/CD pipelines.

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
  • Experience with cloud platforms such as AWS, Google Cloud Platform (GCP), or Azure for data infrastructure and storage.
  • Familiarity with Infrastructure as Code (IaC) tools like Terraform.
  • Understanding of container orchestration tools such as Kubernetes.

Education

Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.