Required Skills and Experience:
PySpark & Databricks: Strong hands-on experience with PySpark and Databricks.
Data Modeling: Solid understanding of data warehousing concepts and data modeling principles.
ETL: Experience with ETL processes and tools.
SQL: Proficiency in SQL for data manipulation and querying.
Cloud Platforms: Experience with cloud platforms such as AWS.
Big Data Concepts: Understanding of big data concepts, data lakes, and data warehouses.
Communication & Collaboration: Excellent communication and teamwork skills.
Problem Solving & Debugging: Strong problem-solving and debugging skills.
Additional Skills (depending on the role):
FHIR: Knowledge of HL7 FHIR healthcare interoperability standards.
DevOps: Familiarity with DevOps for CI/CD and automation.
Data Governance: Knowledge of data governance principles and tools.
Agile/Scrum: Working experience in Agile/Scrum development methodologies.
CI/CD: Experience with Continuous Integration and Continuous Delivery practices.
Education: Any Graduate.