Job Description
1. Python & PySpark
2. Azure cloud services such as Synapse, Databricks, and Data Factory (Databricks is mandatory)
3. OOP (object-oriented programming) concepts
4. Data modelling with scalability in mind
Key Responsibilities:
• Design, develop, and maintain scalable and robust data pipelines to support data processing and analysis.
• Collaborate with cross-functional teams to understand data requirements and implement effective solutions.
• Perform data modeling and design to ensure the integrity, availability, and performance of data systems.
• Implement and optimize ETL processes for extracting, transforming, and loading data from various sources into our data warehouse.
• Identify and troubleshoot data-related issues, ensuring data quality and integrity throughout the entire data lifecycle.
• Stay abreast of industry best practices and emerging technologies to continuously improve data engineering processes.
Qualification: Any graduate