Description

Key Responsibilities:
•  Design, develop, and maintain ETL pipelines using best practices and enterprise data architecture standards.
•  Write advanced SQL queries for data extraction, transformation, and analysis from structured and semi-structured data sources.
•  Work with Rhine-based pipelines to enable dynamic, metadata-driven data workflows (a generic illustrative sketch follows this list).
•  Collaborate with data architects, analysts, and business stakeholders to understand data requirements and implement robust solutions.
•  Ensure data quality, consistency, and integrity across systems.
•  Participate in performance tuning, optimization, and documentation of data processes.
•  Troubleshoot and resolve issues in data pipelines and workflows.
•  Support deployment and monitoring of data jobs in production environments.
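
Note (illustrative only): as a rough sketch of the metadata-driven style referenced above, the Python snippet below shows one way a custom framework might derive an extract query from pipeline metadata. It does not reflect Rhine's actual API, and every table, column, and function name in it is hypothetical.

# Hypothetical sketch of a metadata-driven ETL step; this is NOT Rhine's API.
# All table, column, and function names below are illustrative only.
PIPELINE_METADATA = {
    "source_table": "raw.orders",          # hypothetical source
    "target_table": "dw.fact_orders",      # hypothetical target
    "columns": ["order_id", "customer_id", "order_total"],
    "filters": {"order_status": "COMPLETED"},
}

def build_extract_query(meta):
    """Render a SELECT statement from pipeline metadata."""
    cols = ", ".join(meta["columns"])
    where = " AND ".join("{} = '{}'".format(k, v) for k, v in meta["filters"].items())
    return "SELECT {} FROM {} WHERE {}".format(cols, meta["source_table"], where)

if __name__ == "__main__":
    # Driving extraction from metadata keeps onboarding a new source a
    # configuration change rather than a code change.
    print(build_extract_query(PIPELINE_METADATA))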

Required Qualifications:
•  Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
•  Strong hands-on experience with SQL (complex joins, window functions, CTEs, performance tuning); an illustrative query appears after this list.
•  Proven experience in ETL development using tools like Informatica, Talend, DataStage, or custom Python/Scala frameworks.
•  Familiarity with, or experience using, Rhine for metadata-driven pipeline orchestration.
•  Working knowledge of data warehousing concepts and dimensional modeling.
•  Exposure to cloud platforms (AWS, Azure, or GCP) and tools such as Snowflake, Redshift, or BigQuery is a plus.
•  Experience with version control (e.g., Git) and CI/CD for data jobs.
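
Note (illustrative only): the short Python script below demonstrates the kind of SQL skills listed above, combining a CTE with a window function against an in-memory SQLite database. The table and column names are made up, and it assumes a Python build whose bundled SQLite supports window functions (3.25 or later).

# Self-contained example of a CTE plus a window function (illustrative data).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, order_total REAL);
INSERT INTO orders VALUES
    (1, 101, 50.0), (2, 101, 75.0), (3, 102, 20.0), (4, 102, 90.0);
""")

# The CTE aggregates spend per customer; RANK() then orders customers by spend.
query = """
WITH customer_totals AS (
    SELECT customer_id, SUM(order_total) AS total_spend
    FROM orders
    GROUP BY customer_id
)
SELECT customer_id,
       total_spend,
       RANK() OVER (ORDER BY total_spend DESC) AS spend_rank
FROM customer_totals;
"""

for row in conn.execute(query):
    print(row)  # (customer_id, total_spend, spend_rank)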

Education

Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.