Description

  • Design and implement backend data processing workflows using AWS Glue for serverless compute, AWS Glue crawlers for data cataloging, and Amazon Athena for querying data lakes.
  • Write efficient, maintainable code in Python, R, and SQL to support data ingestion, transformation, and analysis.
  • Orchestrate and automate complex data pipelines using Azure Data Factory to ensure seamless data flow across systems.
  • Leverage Databricks for big data processing, implementing Medallion Architecture to ensure data quality and scalability.
  • Collaborate with cross-functional teams to translate business requirements into technical solutions.
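The Medallion Architecture mentioned above layers data as bronze (raw ingested records), silver (cleaned and validated records), and gold (business-level aggregates). A minimal pure-Python sketch of that layering is below; the record fields and values are hypothetical, and in a real Databricks pipeline each layer would be a Delta table processed with Spark rather than an in-memory list:

```python
# Simplified illustration of Medallion layering:
# bronze (raw) -> silver (validated/typed) -> gold (aggregated).
# Records and field names are hypothetical examples.

bronze = [  # raw ingested rows, possibly malformed
    {"order_id": "1", "amount": "19.99", "region": "us"},
    {"order_id": "2", "amount": "bad",   "region": "eu"},
    {"order_id": "3", "amount": "5.00",  "region": "us"},
]

def to_silver(records):
    """Silver layer: keep only rows whose fields parse, casting to proper types."""
    silver = []
    for r in records:
        try:
            silver.append({
                "order_id": int(r["order_id"]),
                "amount": float(r["amount"]),
                "region": r["region"],
            })
        except ValueError:
            continue  # drop (or in practice, quarantine) malformed rows
    return silver

def to_gold(records):
    """Gold layer: aggregate validated rows into per-region totals."""
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # per-region revenue computed from valid rows only
```

The point of the pattern is that each layer only ever reads from the layer below it, so data-quality rules are applied once, in one place, before any analytics run.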

Education

Any Graduate