Key Responsibilities
Design and implement AWS cloud-native solutions for Data Mesh, specifically the Ingestion as a Service module.
Build and enhance Spark-based frameworks to support scalable and reusable data ingestion pipelines.
Apply functional programming concepts to develop modular, maintainable microservices for the data mesh domain.
Develop, build, and test data engineering pipelines using Python and PySpark.
Collaborate with business and technology stakeholders to deliver modern, consumer-focused data platforms.
Contribute to the development of technology standards, tools, and processes in support of the overall data strategy.
Assist with conceptual data modeling and ensure alignment with data domain principles.
Communicate and apply data management concepts such as Master Data Management, Critical Data Elements, Granularity, and Normalization.
Operate and manage CI/CD pipelines using tools such as Jenkins and CloudFormation.
Work with AWS data services including Glue, Lambda, Step Functions, DynamoDB, RDS (PostgreSQL), event triggers, and Lake Formation.
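To illustrate the functional-programming approach to modular pipelines mentioned above, here is a minimal, hypothetical Python sketch (names and structure are illustrative, not the actual framework): each transform is a pure function over records, and a pipeline is just a composition of transforms, which keeps ingestion steps reusable and easy to test.

```python
from functools import reduce
from typing import Callable, Dict, List

# Hypothetical types for illustration: a row is a dict of strings,
# and a transform is a pure function from rows to rows.
Row = Dict[str, str]
Transform = Callable[[List[Row]], List[Row]]

def normalize_keys(rows: List[Row]) -> List[Row]:
    # Lowercase all column names for consistent downstream access.
    return [{k.lower(): v for k, v in r.items()} for r in rows]

def drop_empty(rows: List[Row]) -> List[Row]:
    # Filter out rows containing any empty value.
    return [r for r in rows if all(v for v in r.values())]

def compose(*steps: Transform) -> Transform:
    # Left-to-right composition, so steps read in execution order.
    return lambda rows: reduce(lambda acc, step: step(acc), steps, rows)

# A reusable pipeline assembled from small, pure transforms.
pipeline = compose(normalize_keys, drop_empty)

data = [{"Name": "alice", "ID": "1"}, {"Name": "", "ID": "2"}]
print(pipeline(data))  # → [{'name': 'alice', 'id': '1'}]
```

The same composition pattern carries over to PySpark, where each step would take and return a DataFrame instead of a list of dicts.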
Qualifications
Any graduate