Description

* Develop and implement a strategic data analytics roadmap for the healthcare payer business, aligned with overall business objectives.
* Design and execute complex data analysis projects focused on areas like risk rating, claims adjudication, and enrollment optimization.
* Conduct statistical analysis and modeling to identify trends, patterns, and key insights from healthcare payer data.
* Minimum 5 years of experience in healthcare payer analytics, with a proven track record of success in leading and delivering impactful projects.
* Strong understanding of risk adjustment methodologies (e.g., Hierarchical Condition Category (HCC) coding) and their impact on healthcare payer reimbursement (a toy illustration follows this list).
* In-depth knowledge of healthcare claims and enrollment data structures and processes.
* Proven experience using big data technologies such as Hadoop and Spark on cloud platforms such as AWS.
* Proficiency in programming languages like Scala, Python, or R for data manipulation and analysis.
* Excellent communication, presentation, and interpersonal skills, with the ability to translate technical findings for a non-technical audience.
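
For context, HCC-style risk adjustment maps a member's diagnosis codes to condition categories, each carrying a coefficient that sums with a demographic base factor into a risk score that scales reimbursement. The sketch below is a toy illustration only; the category names and coefficients are hypothetical placeholders, not actual CMS-HCC model values.

```python
# Toy HCC-style risk score aggregation.
# Category names and coefficients are HYPOTHETICAL placeholders,
# not actual CMS-HCC model values.

DEMOGRAPHIC_BASE = 0.35  # hypothetical factor for one age/sex cell

HCC_COEFFICIENTS = {
    "diabetes_with_complications": 0.30,   # placeholder
    "congestive_heart_failure": 0.33,      # placeholder
    "copd": 0.28,                          # placeholder
}

def risk_score(member_hccs: set[str]) -> float:
    """Sum the demographic base factor plus one coefficient per
    distinct condition category the member codes into."""
    return DEMOGRAPHIC_BASE + sum(
        HCC_COEFFICIENTS.get(hcc, 0.0) for hcc in member_hccs
    )

# A member coding into two categories: 0.35 + 0.30 + 0.33 = 0.98
print(risk_score({"diabetes_with_complications",
                  "congestive_heart_failure"}))
```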

Key Duties & Responsibilities: 
* Design, develop, and maintain robust data pipelines using Python and PySpark to process large volumes of healthcare data efficiently in a multitenant analytics platform (a minimal pipeline sketch follows this list).
* Collaborate with cross-functional teams to understand data requirements, implement data models, and ensure data integrity throughout the pipeline.
* Optimize data workflows for performance and scalability, considering factors such as data volume, velocity, and variety.
* Implement best practices for data ingestion, transformation, and storage in AWS services such as S3, Glue, EMR, and Redshift.
* Model data in relational databases (e.g., PostgreSQL, MySQL) and file-based data stores to support data processing requirements.
* Design and implement ETL processes using Python and PySpark to extract, transform, and load data from various sources into target databases.
* Troubleshoot and enhance existing ETL jobs and processing scripts to improve the efficiency and reliability of data pipelines.
* Develop monitoring and alerting mechanisms to proactively identify and address data quality issues and performance bottlenecks (a data-quality check sketch also follows this list).
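
A minimal sketch of the kind of PySpark ETL pipeline these duties describe, assuming hypothetical S3 paths and a raw claims feed with member_id, claim_amount, and service_date columns; every path and column name here is illustrative, not taken from an actual system.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical S3 locations; substitute real bucket paths.
SOURCE_PATH = "s3://example-bucket/raw/claims/"
TARGET_PATH = "s3://example-bucket/curated/claims_monthly/"

spark = SparkSession.builder.appName("claims-etl-sketch").getOrCreate()

# Extract: read the raw claims feed (assumed CSV with a header row).
claims = spark.read.option("header", True).csv(SOURCE_PATH)

# Transform: cast types, drop malformed rows, and aggregate to
# per-member monthly claim totals.
curated = (
    claims
    .withColumn("claim_amount", F.col("claim_amount").cast("double"))
    .withColumn("service_date", F.to_date("service_date"))
    .dropna(subset=["member_id", "claim_amount", "service_date"])
    .withColumn("service_month", F.date_trunc("month", "service_date"))
    .groupBy("member_id", "service_month")
    .agg(F.sum("claim_amount").alias("total_claims"))
)

# Load: write partitioned Parquet for downstream consumers
# (e.g., a Redshift COPY or a Spectrum external table).
curated.write.mode("overwrite").partitionBy("service_month").parquet(TARGET_PATH)
```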
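
And a minimal data-quality check of the sort the monitoring bullet calls for, reusing the hypothetical curated table from the sketch above; the thresholds and the print-based alert hook are placeholders for whatever alerting service the platform uses.

```python
from pyspark.sql import functions as F

# Hypothetical thresholds; tune them against the dataset's real profile.
MAX_NULL_RATIO = 0.01
MIN_ROW_COUNT = 1_000

def check_claims_quality(df) -> list[str]:
    """Return human-readable alerts for basic quality issues."""
    alerts = []
    total = df.count()
    if total < MIN_ROW_COUNT:
        alerts.append(f"row count {total} below minimum {MIN_ROW_COUNT}")
    nulls = df.filter(F.col("total_claims").isNull()).count()
    if total and nulls / total > MAX_NULL_RATIO:
        alerts.append(f"null ratio {nulls / total:.2%} in total_claims")
    negatives = df.filter(F.col("total_claims") < 0).count()
    if negatives:
        alerts.append(f"{negatives} rows with negative total_claims")
    return alerts

for alert in check_claims_quality(curated):
    # Placeholder alert hook: route to SNS, Slack, PagerDuty, etc.
    print(f"ALERT: {alert}")
```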

Education & Experience:
* Minimum of 5 years of experience in data engineering, with a focus on building and optimizing data pipelines.
* Expertise in Python programming and hands-on experience with PySpark for data processing and analysis.

Education

Any Graduate