Designing and implementing scalable data pipelines
Building and managing data warehouses and data lakes
Ensuring data quality and implementing data management best practices
Optimizing data storage and retrieval processes
Experience with CI/CD orchestration and automation tools such as Jenkins and GitHub.
Monitor and tune Snowflake query performance, warehouse usage, and credit consumption (see the monitoring sketch after this list).
Collaborate closely with data scientists, analysts, and product teams to support analytics and machine learning initiatives.
Design and enforce row-level access policies and dynamic masking in Snowflake for sensitive data fields (PII, financials); a policy sketch follows this list.
Enable data sharing with external teams using secure shares and reader accounts while maintaining strict RBAC controls (see the sharing sketch after this list).
Experience with ETL and scheduler/orchestration tools.
Strong interpersonal, written, and verbal communication skills to interact effectively with teams and stakeholders.
Designing semantic layers, aggregate tables, and data models (Star/Snowflake) to support scalable, governed, and business-friendly analytics architecture; a star-schema sketch follows this list.
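
As a rough illustration of the monitoring responsibility above, the sketch below pulls slow queries and per-warehouse credit consumption from the SNOWFLAKE.ACCOUNT_USAGE views using snowflake-connector-python. The connection parameters, user names, and thresholds are placeholders, not details taken from this role.

import snowflake.connector

# Placeholder connection details; real credentials should come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="monitoring_user",
    password="change-me",
    warehouse="ADMIN_WH",
)

# Queries slower than 5 minutes over the last day, worst first.
LONG_RUNNING_SQL = """
    SELECT query_id, warehouse_name, total_elapsed_time / 1000 AS elapsed_s,
           bytes_spilled_to_local_storage, query_text
    FROM snowflake.account_usage.query_history
    WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
      AND total_elapsed_time > 300000
    ORDER BY total_elapsed_time DESC
    LIMIT 20
"""

# Credit consumption per warehouse over the last 7 days.
CREDIT_USAGE_SQL = """
    SELECT warehouse_name, SUM(credits_used) AS credits_last_7d
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    GROUP BY warehouse_name
    ORDER BY credits_last_7d DESC
"""

with conn.cursor() as cur:
    for row in cur.execute(LONG_RUNNING_SQL):
        print("slow query:", row)
    for row in cur.execute(CREDIT_USAGE_SQL):
        print("warehouse credits:", row)
conn.close()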
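
The row-level security and dynamic masking bullet could look roughly like the following; the database, schema, table, column, and policy names are hypothetical, and the DDL follows Snowflake's documented ROW ACCESS POLICY and MASKING POLICY syntax.

import snowflake.connector

# All object names below (finance.*, role_region_map, FINANCE_ADMIN) are hypothetical.
STATEMENTS = [
    # Row access policy: admins see everything; other roles see only the regions
    # mapped to them in a (placeholder) role-to-region mapping table.
    """
    CREATE OR REPLACE ROW ACCESS POLICY finance.policies.region_rls
      AS (region_value VARCHAR) RETURNS BOOLEAN ->
        CURRENT_ROLE() = 'FINANCE_ADMIN'
        OR EXISTS (
          SELECT 1 FROM finance.policies.role_region_map m
          WHERE m.role_name = CURRENT_ROLE() AND m.region = region_value
        )
    """,
    "ALTER TABLE finance.core.transactions "
    "ADD ROW ACCESS POLICY finance.policies.region_rls ON (region)",
    # Dynamic masking policy: unmask the column only for a privileged role.
    """
    CREATE OR REPLACE MASKING POLICY finance.policies.mask_ssn
      AS (val STRING) RETURNS STRING ->
        CASE WHEN CURRENT_ROLE() IN ('FINANCE_ADMIN') THEN val
             ELSE '***MASKED***' END
    """,
    "ALTER TABLE finance.core.customers "
    "MODIFY COLUMN ssn SET MASKING POLICY finance.policies.mask_ssn",
]

conn = snowflake.connector.connect(account="my_account", user="dba_user", password="change-me")
with conn.cursor() as cur:
    for stmt in STATEMENTS:
        cur.execute(stmt)
conn.close()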
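
For the secure data sharing bullet, a minimal sketch might look like the statements below; the share, database, table, and reader-account names are placeholders, and in practice the consumer account identifier and admin credentials would be supplied securely rather than hard-coded.

import snowflake.connector

# Placeholder object names; the consumer account identifier and admin password
# below are illustrative and must be replaced with real values.
SHARE_STATEMENTS = [
    "CREATE SHARE IF NOT EXISTS partner_share",
    "GRANT USAGE ON DATABASE analytics TO SHARE partner_share",
    "GRANT USAGE ON SCHEMA analytics.marts TO SHARE partner_share",
    "GRANT SELECT ON TABLE analytics.marts.shared_metrics TO SHARE partner_share",
    # Reader account for a consumer that has no Snowflake account of its own.
    "CREATE MANAGED ACCOUNT partner_reader "
    "ADMIN_NAME = 'partner_admin', ADMIN_PASSWORD = 'ChangeMe123!', TYPE = READER",
    # Attach the consumer account (placeholder identifier) to the share.
    "ALTER SHARE partner_share ADD ACCOUNTS = MYORG.PARTNER_ACCT",
]

conn = snowflake.connector.connect(account="my_account", user="admin_user", password="change-me")
with conn.cursor() as cur:
    for stmt in SHARE_STATEMENTS:
        cur.execute(stmt)
conn.close()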
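
For the semantic-layer and data-modeling bullet, one possible star-schema layout with a pre-computed aggregate is sketched below; all database, schema, and table names are illustrative and the column set is deliberately minimal.

import snowflake.connector

# Illustrative object names only; a real model would carry many more attributes.
DDL = [
    # Dimension table keyed by a surrogate key.
    """
    CREATE TABLE IF NOT EXISTS analytics.marts.dim_customer (
        customer_sk NUMBER AUTOINCREMENT,
        customer_id VARCHAR,
        segment     VARCHAR,
        region      VARCHAR
    )
    """,
    # Fact table referencing the dimension through the surrogate key.
    """
    CREATE TABLE IF NOT EXISTS analytics.marts.fct_orders (
        order_id    VARCHAR,
        customer_sk NUMBER,
        order_date  DATE,
        amount      NUMBER(12, 2)
    )
    """,
    # Aggregate table a semantic layer or BI tool can query directly.
    """
    CREATE OR REPLACE TABLE analytics.marts.agg_daily_sales AS
    SELECT d.region, f.order_date,
           SUM(f.amount) AS total_amount, COUNT(*) AS order_count
    FROM analytics.marts.fct_orders f
    JOIN analytics.marts.dim_customer d ON d.customer_sk = f.customer_sk
    GROUP BY d.region, f.order_date
    """,
]

conn = snowflake.connector.connect(account="my_account", user="etl_user", password="change-me")
with conn.cursor() as cur:
    for stmt in DDL:
        cur.execute(stmt)
conn.close()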
Good to have:
Machine learning and AI/LLM model training and implementation.
Background in data observability, lineage tracking, or metadata management tools.