Job Details:
- Extensive experience in designing, implementing, and supporting Data Warehousing, ETL, and Business Intelligence solutions built on Microsoft Fabric data pipelines.
- Design and implement scalable and efficient data pipelines using Azure Data Factory, PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading (a minimal sketch follows this list).
- Create and optimize data models to support business intelligence and analytics requirements.
- Develop complex SQL scripts and procedures for data extraction, transformation, and loading.
- Collaborate with business teams to understand data requirements and translate them into technical specifications.
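
For illustration only, here is a minimal PySpark notebook sketch of the ingest-transform-load pattern described above. The paths, table names, and column names are hypothetical placeholders, not part of any actual pipeline in this role.

```python
# Minimal PySpark ETL sketch: ingest raw CSV, transform, load to a Delta table.
# All paths, table names, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Ingestion: read raw files from a landing zone (path is a placeholder).
raw = (spark.read
       .option("header", True)
       .csv("Files/landing/orders/"))

# Transformation: type casting, deduplication, and a derived column.
clean = (raw
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("double"))
         .dropDuplicates(["order_id"])
         .withColumn("order_date", F.to_date("order_ts")))

# Loading: overwrite a curated Delta table (name is a placeholder).
clean.write.format("delta").mode("overwrite").saveAsTable("curated.orders")
```

The same transformation could equally be expressed in Spark SQL against a temporary view; the choice between the DataFrame API and SQL usually comes down to team convention.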
Implementation and Maintenance
- Implement data integration solutions, ensuring data quality, consistency, and security (a sample quality gate follows this list).
- Monitor and troubleshoot ETL processes, identifying and resolving issues in a timely manner.
- Maintain and optimize ETL pipelines for performance and scalability.
- Ensure data compliance and governance standards are met throughout the ETL process.
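
As a sketch of the kind of data-quality gate mentioned above, assuming a PySpark pipeline; the column names and rules are hypothetical examples, not this team's actual standards.

```python
# Sketch of a simple data-quality gate for an ETL batch; the column names
# and rules are hypothetical examples. Failing fast keeps bad batches out
# of the curated layer.
from pyspark.sql import DataFrame, functions as F

def check_quality(df: DataFrame) -> None:
    """Raise if the batch violates basic quality rules."""
    total = df.count()
    null_ids = df.filter(F.col("order_id").isNull()).count()
    negative = df.filter(F.col("amount") < 0).count()
    if total == 0 or null_ids > 0 or negative > 0:
        raise ValueError(
            f"Quality gate failed: rows={total}, "
            f"null order_ids={null_ids}, negative amounts={negative}"
        )
```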
Collaboration and Communication
- Work closely with cross-functional teams, including data analysts and business stakeholders.
- Document ETL processes, data flows, and technical specifications for future reference and knowledge sharing.
- Communicate effectively with stakeholders to ensure alignment on project goals and deliverables.
Education, Experience, and Skills
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 10+ years of experience in ETL development, data integration, or related roles.
- Proven experience with Microsoft Fabric and other Microsoft data integration tools.
- Strong knowledge of SQL, data modeling, and data warehousing concepts.
- Proficient in Microsoft Fabric tools and environments.
- Advanced SQL scripting and database management skills.
- Strong problem-solving and analytical skills.
- Detail-oriented with a focus on data quality and accuracy.
- Familiarity with Agile development methodologies.