Job Description:

Responsibilities:
Design and manage scalable and high-performance data warehouses using Amazon Redshift.
Develop and maintain ETL/ELT pipelines to ingest, transform, and load data from various sources.
Write efficient and optimized SQL queries for data extraction, transformation, and reporting.
Collaborate with data analysts, data scientists, and business teams to ensure data accessibility and integrity.
Monitor and tune Redshift performance, including VACUUM maintenance, distribution keys, sort keys, and query optimization.
Ensure data quality, consistency, and governance across all pipelines and sources.
Integrate Redshift with other AWS services such as S3, Lambda, Glue, Athena, and Kinesis.
Create and maintain data models, schemas, and documentation for reporting and analytics.
Troubleshoot and resolve issues related to data loads, transformations, and performance.
Requirements:

Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
3+ years of hands-on experience with Amazon Redshift and data warehousing solutions.
Proficiency in SQL, including performance tuning and complex queries.
Experience with ETL tools such as AWS Glue, Apache Airflow, Talend, or Informatica.
Solid understanding of data modeling (star/snowflake schema) and data partitioning.
Familiarity with scripting languages such as Python or shell for automation tasks.
Experience working with cloud-based environments, especially AWS.