Mandatory Skills: Strong background in DataStage (minimum 5 years, recent experience), Java or Python, external/internal APIs, PowerShell & SQL, migration project experience
Job Overview:
We are seeking a skilled Data Engineer with strong expertise in SQL, ETL development, and hands-on experience with Rhine (or similar metadata-driven orchestration frameworks). The ideal candidate will play a key role in building scalable data pipelines, managing data transformation workflows, and supporting analytics initiatives across the enterprise.
Key Responsibilities:
- Design, develop, and maintain ETL pipelines using best practices and enterprise data architecture standards.
- Write advanced SQL queries for data extraction, transformation, and analysis from structured and semi-structured data sources.
- Work with Rhine-based pipelines to enable dynamic, metadata-driven data workflows (see the sketch after this list).
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements and implement robust solutions.
- Ensure data quality, consistency, and integrity across systems.
- Participate in performance tuning, optimization, and documentation of data processes.
- Troubleshoot and resolve issues in data pipelines and workflows.
- Support deployment and monitoring of data jobs in production environments.
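To make "metadata-driven" concrete: pipeline steps are described as configuration and resolved at run time rather than hard-coded. The short Python sketch below shows that generic pattern only; it does not use Rhine's actual API, and the transform names, `PipelineSpec`, and runner are illustrative assumptions.

```python
# Generic sketch of a metadata-driven pipeline step (illustrative pattern,
# not Rhine's actual API): each step is named in a config record, and a
# small runner dispatches registered transforms by that name.
from dataclasses import dataclass, field
from typing import Callable

# Registry of named transforms; real frameworks resolve these from metadata.
TRANSFORMS: dict[str, Callable[[list[dict]], list[dict]]] = {}

def transform(name: str):
    """Register a transform under the name used in pipeline metadata."""
    def register(fn):
        TRANSFORMS[name] = fn
        return fn
    return register

@transform("drop_nulls")
def drop_nulls(rows: list[dict]) -> list[dict]:
    # Remove rows containing any NULL-like value.
    return [r for r in rows if all(v is not None for v in r.values())]

@transform("uppercase_keys")
def uppercase_keys(rows: list[dict]) -> list[dict]:
    # Normalize column names to upper case.
    return [{k.upper(): v for k, v in r.items()} for r in rows]

@dataclass
class PipelineSpec:
    """Metadata describing a pipeline: an ordered list of transform names."""
    steps: list[str] = field(default_factory=list)

def run(spec: PipelineSpec, rows: list[dict]) -> list[dict]:
    for step in spec.steps:
        rows = TRANSFORMS[step](rows)  # dispatch driven by metadata, not code
    return rows

if __name__ == "__main__":
    data = [{"id": 1, "name": "a"}, {"id": 2, "name": None}]
    print(run(PipelineSpec(steps=["drop_nulls", "uppercase_keys"]), data))
```

The design point is that adding or reordering steps changes only the metadata (`PipelineSpec`), not the pipeline code, which is the property frameworks like Rhine provide at enterprise scale.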
Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Systems, or related field.
- Strong hands-on experience with SQL (complex joins, window functions, CTEs, performance tuning); see the example after this list.
- Proven experience in ETL development using tools like Informatica, Talend, DataStage, or custom Python/Scala frameworks.
- Familiarity or hands-on experience with Rhine for metadata-driven pipeline orchestration.
- Working knowledge of data warehousing concepts and dimensional modeling.
- Exposure to cloud platforms (AWS, Azure, or GCP) and tools such as Snowflake, Redshift, or BigQuery is a plus.
- Experience with version control (e.g., Git) and CI/CD for data jobs.
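To illustrate the SQL bar above, here is a self-contained Python/SQLite sketch combining a CTE with a window function; the orders table, column names, and data are hypothetical, and window functions require SQLite 3.25+ (bundled with most modern Python builds).

```python
# Hypothetical example of the SQL skills listed above (CTE + window function),
# run against an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, ordered_at TEXT);
    INSERT INTO orders VALUES
        (1, 'acme',   120.0, '2024-01-05'),
        (2, 'acme',    80.0, '2024-01-20'),
        (3, 'globex',  50.0, '2024-01-11');
""")

# The CTE aggregates spend per customer; the window function then ranks
# customers by total spend without a second GROUP BY pass.
query = """
WITH customer_totals AS (
    SELECT customer, SUM(amount) AS total_spend
    FROM orders
    GROUP BY customer
)
SELECT customer,
       total_spend,
       RANK() OVER (ORDER BY total_spend DESC) AS spend_rank
FROM customer_totals;
"""
for row in conn.execute(query):
    print(row)  # ('acme', 200.0, 1), ('globex', 50.0, 2)
```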