Description

Responsible for designing, building, and maintaining data pipelines on AWS to extract, transform, and load (ETL) data from various sources into a data warehouse or data lake. Requires expertise in AWS technologies such as Glue, S3, and Redshift.

• Proficiency in Python and PySpark/Spark to ensure efficient data processing and analysis within the cloud environment.

• Architect and implement robust ETL pipelines with AWS Glue, defining data extraction methods, transformation logic, and data loading procedures across different data sources (a minimal Glue job sketch follows this list).

• Develop scripts to extract data from diverse sources such as databases, APIs, flat files, and mainframe applications using AWS services like S3, RDS, and Kinesis.

• AWS ETL experience with Oracle data migration (specific to Exadata).

• AWS DMS for delta (ongoing change) workloads (see the DMS sketch following this list).

• AWS EFS, S3

• AWS RDS Oracle

• Experience in handling compressed tables on Oracle Exadata.

• Oracle data loader
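
A minimal sketch of the kind of Glue ETL job referenced above is shown below. The catalog database (sales_db), table (orders), column mappings, and S3 output path are hypothetical placeholders, not details of this role's actual environment.

```python
# Minimal AWS Glue ETL job sketch: extract from the Glue Data Catalog,
# apply a column mapping, and load the result to S3 as Parquet.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a source table registered in the Glue Data Catalog
# ("sales_db"/"orders" are placeholder names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Transform: keep and retype only the columns needed downstream.
mapped = ApplyMapping.apply(
    frame=source,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_ts", "string", "order_ts", "timestamp"),
        ("amount", "double", "amount", "double"),
    ],
)

# Load: write to S3 as Parquet (a Redshift target would use a JDBC connection instead).
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)

job.commit()
```

Glue DynamicFrames wrap Spark DataFrames, so the same job can drop into plain PySpark transformations where the built-in Glue transforms are not enough.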

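For the DMS delta-workload item, one way to set up change-only replication is a CDC replication task created with boto3, sketched below. All ARNs, identifiers, and the table mapping are placeholder assumptions.

```python
import json
import boto3

# Sketch: create an AWS DMS replication task that replicates only ongoing
# changes (MigrationType="cdc"), i.e. a delta workload.
dms = boto3.client("dms")

# Placeholder table mapping: include a single schema/table.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders",
            "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
            "rule-action": "include",
        }
    ]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier="orders-cdc-task",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="cdc",  # replicate ongoing changes only
    TableMappings=json.dumps(table_mappings),
)
print(response["ReplicationTask"]["ReplicationTaskArn"])
```
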
Education

Any Graduate