Responsibilities:
Lead the design and implementation of scalable data solutions using AWS and Snowflake.
Build and maintain robust data pipelines ensuring data quality and security.
Optimize data warehousing and analytics solutions.
Provide mentorship to junior engineers.
Collaborate with stakeholders to gather requirements and deliver data-driven insights.
Apply insurance domain knowledge (preferably in claims and loss) to data solutions.
Must-Have Qualifications:
10+ years of experience in Data Engineering and Big Data.
Strong hands-on experience with SQL, Python, and PySpark.
Expertise in data ingestion, processing frameworks, and cloud implementations.
Solid experience with AWS tools and architecture (Glue, EMR, S3, Aurora, RDS).
Skilled in debugging, performance tuning, and production deployments.
Experience in Agile methodologies.
Good-to-Have Skills:
Experience with CI/CD and version control tools (e.g., Jenkins, Git)
Familiarity with Data Vault 2.0 and data migration strategies
Experience applying DevOps practices to data engineering
Additional Requirements:
Bachelor's or Master's degree in Computer Science or a related field
Excellent communication and problem-solving skills
Insurance industry experience is a plus
AWS or Snowflake certification is a plus
Strong understanding of data modeling, governance, and security best practices