Snowflake Data Warehouse:
· Design, implement, and optimize data warehouses on the Snowflake platform.
· Make effective use of Snowflake features such as virtual warehouses, micro-partitioning, and clustering keys for scalable, high-performance data storage (see the illustrative sketch below).
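For illustration, a minimal sketch of this kind of setup using the snowflake-connector-python package; the account, credentials, warehouse (ANALYTICS_WH), and table (SALES_FACT) are placeholders, not part of the role description:

import snowflake.connector

# All connection parameters and object names below are placeholders.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    database="SALES",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Right-sized virtual warehouse that suspends when idle to control cost.
    cur.execute("""
        CREATE WAREHOUSE IF NOT EXISTS ANALYTICS_WH
          WAREHOUSE_SIZE = 'XSMALL'
          AUTO_SUSPEND = 60
          AUTO_RESUME = TRUE
    """)
    # Clustering key on the most-filtered column to improve partition pruning.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS SALES_FACT (
            sale_id   NUMBER,
            sale_date DATE,
            amount    NUMBER(12, 2)
        )
        CLUSTER BY (sale_date)
    """)
finally:
    cur.close()
    conn.close()

Auto-suspend/auto-resume and clustering keys are examples of the Snowflake features referred to above; actual sizing and clustering choices depend on the workload.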
Data Pipeline Development:
· Build and optimize end-to-end data pipelines on Snowflake.
· Create and maintain ETL workflows for reliable, repeatable data processing (see the illustrative sketch below).
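Purely as a sketch of one common load step, again with snowflake-connector-python and placeholder names (a hypothetical local extract /tmp/sales.csv, an internal stage SALES_STAGE, and the SALES_FACT table above):

import snowflake.connector

# Placeholder credentials, file path, stage, and table names.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Internal stage that holds raw extracts before loading.
    cur.execute("CREATE STAGE IF NOT EXISTS SALES_STAGE")
    # Upload the local extract; the connector compresses it on upload.
    cur.execute("PUT file:///tmp/sales.csv @SALES_STAGE AUTO_COMPRESS = TRUE")
    # Bulk-load into the target table, skipping the header row.
    cur.execute("""
        COPY INTO SALES_FACT
        FROM @SALES_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
finally:
    cur.close()
    conn.close()

In practice the PUT/COPY INTO pair would be wrapped in an orchestrated, idempotent job; the commands shown are standard Snowflake loading primitives.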
Data Transformation with PySpark:
· Leverage PySpark for advanced data transformations within the Snowflake environment.
· Implement data cleansing, enrichment, and validation processes using PySpark (see the illustrative sketch below).
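One possible shape of such a transformation, as a minimal PySpark sketch; it assumes the Snowflake Spark connector (net.snowflake.spark.snowflake) and JDBC driver are on the Spark classpath, and the connection options, RAW_CUSTOMERS / CURATED_CUSTOMERS tables, and cleansing rules are all placeholders:

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lower, trim

spark = SparkSession.builder.appName("snowflake-cleansing").getOrCreate()

# Placeholder Snowflake connection options.
sf_options = {
    "sfURL": "my_account.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",
    "sfDatabase": "SALES",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ANALYTICS_WH",
}
SNOWFLAKE_SOURCE = "net.snowflake.spark.snowflake"

# Read the raw table from Snowflake.
raw = (
    spark.read.format(SNOWFLAKE_SOURCE)
    .options(**sf_options)
    .option("dbtable", "RAW_CUSTOMERS")
    .load()
)

# Cleansing: drop duplicates, normalise emails, discard rows missing a key.
cleansed = (
    raw.dropDuplicates(["CUSTOMER_ID"])
    .withColumn("EMAIL", lower(trim(col("EMAIL"))))
    .filter(col("CUSTOMER_ID").isNotNull())
)

# Validation: keep only rows with a plausible email address.
validated = cleansed.filter(col("EMAIL").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"))

# Write the curated result back to Snowflake.
(
    validated.write.format(SNOWFLAKE_SOURCE)
    .options(**sf_options)
    .option("dbtable", "CURATED_CUSTOMERS")
    .mode("overwrite")
    .save()
)

Reading and writing through the connector keeps the transformation work in Spark while Snowflake remains the system of record; the specific cleansing and validation rules would come from the actual data requirements.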
Requirements:
· Proven experience as a Data Engineer, with a strong emphasis on Snowflake.
· Proficiency in Snowflake features and capabilities for data warehousing.
· Expertise in PySpark for data processing and analytics is a must.
· Experience with data modeling, ETL processes, and efficient data storage.
· Proficiency in programming languages such as Python, SQL, or Scala for data processing.
Skills:
Snowflake, PySpark
Education:
Any Graduate