Position Details
About this role
Design, build, and scale analytical data processes that power actuarial pricing, reserving, and claims analytics. Lead best practices for end-to-end automation and ensure data governance through quality checks, reconciliation, and validation rules.
Key Responsibilities
- Design and develop internal actuarial tools and scalable applications for dataset access
- Lead end-to-end automation of analytical process flow with best practices
- Embed data quality checks, reconciliation logic, and validation rules for data governance
- Define standards for automation, tool development, and quality while mentoring teammates
- Partner with actuaries, data science, and finance teams to enable pricing, reserving, and forecasting analytics
Technical Overview
Responsible for actuarial data pipeline engineering using SQL, Python, or R, modern data warehousing, and ETL tools. Will develop reusable frameworks and scalable applications that standardize access to actuarial datasets and integrate tools with cloud data platforms, while enabling responsible, AI-assisted automation.
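To make the data governance patterns named above concrete, here is a minimal sketch of a validation rule and a reconciliation check over claim records. The field names ("claim_id", "paid_amount") and the tolerance value are illustrative assumptions, not a real schema or standard from this role.

```python
# Illustrative sketch of two data-governance controls: validation rules
# and a source-vs-pipeline reconciliation check. Field names are
# hypothetical, not an actual actuarial schema.

def validate_claims(records):
    """Return (record, reason) pairs for records failing basic rules."""
    failures = []
    for r in records:
        if r.get("claim_id") is None:
            failures.append((r, "missing claim_id"))
        elif r.get("paid_amount", 0) < 0:
            failures.append((r, "negative paid_amount"))
    return failures

def reconcile_totals(source_total, pipeline_records, tolerance=0.01):
    """Check that the pipeline's paid total matches the source system."""
    pipeline_total = sum(r.get("paid_amount", 0) for r in pipeline_records)
    return abs(source_total - pipeline_total) <= tolerance

records = [
    {"claim_id": "C1", "paid_amount": 1200.50},
    {"claim_id": "C2", "paid_amount": -15.00},   # fails validation
    {"claim_id": None, "paid_amount": 300.00},   # fails validation
]
print(len(validate_claims(records)))       # → 2 failing records
print(reconcile_totals(1485.50, records))  # → True: totals agree
```

In practice these checks would run inside the ETL pipeline and feed an exceptions report, rather than print to the console.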
Ideal Candidate
The ideal candidate is a senior data engineer with 5+ years of experience building analytical data processes in cloud-based environments. They are strong in SQL, Python, or R, and have delivered automation and data governance patterns (data quality checks, reconciliation logic, validation rules) for actuarial workflows supporting pricing, reserving, and claims analytics.
Deal Breakers
- Must be proficient in SQL, Python, or R
- Must have experience with modern data warehousing and ETL tools
- Must be able to implement automation and data governance controls (data quality checks, reconciliation logic, validation rules)