Key Responsibilities:
Design and manage scalable Redshift data warehouses for analytical workloads.
Implement and maintain Star Schema-based dimensional models to support data marts and business reporting.
Develop complex SQL queries and scripts for data transformation, aggregation, and cleansing.
Optimize Redshift clusters, queries, and data distribution strategies for performance and cost.
Integrate data from multiple sources (including S3, Postgres, APIs, and third-party feeds) into Redshift using ETL/ELT tools.
Collaborate with Data Architects and Modelers to ensure data models are efficient, scalable, and align with business requirements.
Ensure data integrity, availability, security, and governance across the Redshift environment.
Monitor system performance, perform tuning, and troubleshoot operational issues.
Participate in Agile/Scrum ceremonies and work collaboratively with cross-functional teams.
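For candidates unfamiliar with the dimensional-modeling work described above, the following is a minimal illustrative sketch of a star-schema rollup. It uses SQLite from the Python standard library purely for portability; all table and column names are hypothetical, and Redshift-specific choices such as DISTKEY/SORTKEY (which SQLite does not support) are omitted.

```python
# Minimal star-schema sketch (illustrative only; hypothetical names).
# A real Redshift deployment would also choose DISTKEY/SORTKEY per table.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: one row per product.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT)")
# Fact table: one row per sale, keyed to the dimension.
cur.execute("CREATE TABLE fact_sales (sale_id INTEGER PRIMARY KEY, product_id INTEGER, amount REAL)")

cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "books"), (2, "games")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(10, 1, 25.0), (11, 1, 15.0), (12, 2, 60.0)])

# Typical data-mart query: aggregate facts by a dimension attribute.
rows = cur.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d ON d.product_id = f.product_id
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('books', 40.0), ('games', 60.0)]
```

The same fact/dimension separation underpins the data marts and business reporting referenced in this role.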
Required Qualifications:
5+ years of experience in data engineering or warehousing with a focus on Amazon Redshift.
Strong proficiency in SQL, with the ability to write and optimize complex queries over large datasets.
Solid understanding of dimensional modeling, Star Schema, and the differences between OLAP and OLTP data structures.
Experience in designing analytical data marts and transforming raw/transactional data into structured analytical formats.
Hands-on experience with ETL tools (e.g., AWS Glue).
Familiarity with Amazon Redshift Spectrum, RA3 nodes, and best practices for distribution and sort keys.
Comfortable working in cloud-native environments, particularly AWS (S3, Lambda, CloudWatch, IAM, etc.).
Preferred Qualifications:
Exposure to data lake integration, external tables, and Redshift UNLOAD/COPY operations.
Experience in BI tools (e.g., Tableau, QuickSight) to validate and test data integration.
Familiarity with Python or PySpark for data transformation scripting.
Understanding of CI/CD for data pipelines and version control using Git.
Knowledge of data security, encryption, and compliance in a cloud environment.
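As a concrete (and entirely hypothetical) example of the Python transformation scripting mentioned above, the sketch below reshapes raw transactional records into a per-day aggregate of the kind that might be loaded into an analytical table; it uses only the standard library and invented field names.

```python
# Hypothetical transformation step: raw transactional rows -> daily rollup.
from collections import defaultdict

def daily_revenue(transactions):
    """Aggregate raw {"date", "amount"} records into sorted per-day totals."""
    totals = defaultdict(float)
    for txn in transactions:
        totals[txn["date"]] += txn["amount"]
    return sorted(totals.items())

raw = [
    {"date": "2024-01-01", "amount": 10.0},
    {"date": "2024-01-01", "amount": 5.5},
    {"date": "2024-01-02", "amount": 7.0},
]
print(daily_revenue(raw))  # [('2024-01-01', 15.5), ('2024-01-02', 7.0)]
```

In practice this kind of step would typically run inside an ETL/ELT tool (e.g., AWS Glue or PySpark) rather than plain Python, but the raw-to-analytical reshaping is the same idea.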
Any Graduate.