Key Skills: Data Engineer, PySpark.
Roles & Responsibilities:
- Design and implement scalable data pipelines to support exposure aggregation across geographies, perils, and lines of business (a minimal PySpark sketch of this pattern follows this list).
- Integrate structured and unstructured data from policy systems, claims platforms, reinsurance databases, and third-party catastrophe models (e.g., RMS, AIR).
- Build and manage data lake architectures using Microsoft Fabric, including Lakehouses and OneLake.
- Collaborate with risk analysts, brokers, and BI developers to deliver clean, reliable, and timely data for reporting and analytics.
- Develop and maintain data models optimized for Power BI, exposure dashboards, and geospatial risk analysis.
- Ensure data quality, lineage, and governance across the exposure data ecosystem.
- Support regulatory and compliance reporting related to exposure and accumulation.
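
To illustrate the aggregation work described above, the sketch below shows a minimal PySpark exposure roll-up across geography, peril, and line of business, with a basic quality gate and a Delta write for downstream Power BI consumption. The table names (`bronze.policy_exposures`, `gold.exposure_by_geo_peril_lob`) and column names (`geography`, `peril`, `line_of_business`, `tiv`, `policy_id`) are illustrative assumptions, not an actual schema.

```python
# A minimal sketch of an exposure-aggregation step, assuming a policy-level
# exposure table already landed in the Lakehouse. All table and column names
# below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("exposure-aggregation").getOrCreate()

# Assumed source table (hypothetical name and schema).
exposures = spark.read.table("bronze.policy_exposures")

# Basic quality gate: drop records missing the keys the roll-up depends on.
clean = exposures.dropna(subset=["geography", "peril", "line_of_business", "tiv"])

# Aggregate total insured value (TIV) and distinct policy counts across the
# three dimensions called out in the responsibilities above.
aggregated = (
    clean.groupBy("geography", "peril", "line_of_business")
         .agg(
             F.sum("tiv").alias("total_insured_value"),
             F.countDistinct("policy_id").alias("policy_count"),
         )
)

# Persist as a Delta table for Power BI / exposure dashboards to consume.
aggregated.write.format("delta").mode("overwrite").saveAsTable(
    "gold.exposure_by_geo_peril_lob"
)
```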
Experience Requirement:
- 6-8 years of experience in data engineering with a focus on exposure aggregation, preferably in insurance or financial services.
- Extensive hands-on experience building large-scale data pipelines and implementing data lake solutions with Microsoft Fabric.
- Strong domain knowledge in catastrophe modeling and risk accumulation analysis.
- Proficient in designing data models for reporting and analytical tools such as Power BI.
- Experience collaborating cross-functionally on data with risk, analytics, and compliance teams.