Responsibilities:
Design, develop, and maintain a data platform that is accurate, secure, available, and fast.
Engineer efficient, adaptable, and scalable data pipelines.
Integrate and maintain a variety of data sources: databases, APIs, SaaS applications, files, logs, events, etc.
Create standardized datasets to serve a wide variety of use cases.
Develop subject-matter expertise in the platform's tables, systems, and processes.
Partner with Product and Engineering to ensure product changes integrate well with the data platform.
Collaborate with diverse stakeholder teams to understand their challenges and provide data solutions to meet their goals.
Perform data quality checks on source data and automate quality controls.
Qualifications:
5+ years of experience as a Data Engineer.
Hands-on experience with Star/Snowflake schema design, data modeling, data pipelining, and MLOps.
Expertise in Data Warehouse technologies (e.g., Snowflake, AWS Redshift).
Proficiency with AWS data pipeline services (Lambda, Glue, Step Functions).
Experience in the Fintech or Financial Services industry.
Proficiency in SQL and at least one major programming language (e.g., Python or Java).
Experience with data analysis tools such as Looker or Tableau.
Familiarity with pandas, NumPy, scikit-learn, and Jupyter notebooks.
Experience with Git, GitHub, and JIRA.
Strong problem-solving skills and attention to detail.
Ability to simplify processes, automate tasks, and build reusable components.
Familiarity with agile methodologies.
Eagerness to learn about the Private Equity/Venture Capital ecosystem and the associated secondary market.
Bachelor's degree in any discipline.