Proven Data Engineering Experience: 5+ years of hands-on experience in a data engineering or analytics engineering role, with a track record of building and managing complex data pipelines.
Strong Cloud & Big Data Expertise:
Proficiency with the AWS data ecosystem (S3, Glue, Lambda, Redshift, etc.).
Hands-on experience with Databricks, including Delta Lake and Spark (PySpark or Scala).
Expert-Level SQL & Data Modeling: Exceptional SQL skills and a deep understanding of data modeling concepts (e.g., dimensional modeling, star schemas) and data warehousing principles.
Exceptional Communicator: You can clearly and effectively communicate technical concepts to both technical and non-technical audiences, and you are comfortable leading discussions and building consensus.
Collaborative & Product-Focused: A team player who is passionate about understanding the "why" behind the data and is dedicated to building solutions that drive business and product success.
Problem-Solving Mindset: You are intellectually curious, enjoy tackling complex challenges, and are comfortable with ambiguity.
Fintech Experience (Bonus): Previous experience working in the fintech or financial services industry is a strong plus.