Position Details
About this role
This role involves designing and optimizing data pipelines for insurance loss modeling, with a focus on integrating telematics and third-party data using modern data architectures and cloud platforms.
Key Responsibilities
- Design ETL pipelines
- Mentor junior team members
- Partner with stakeholders
- Ingest large datasets
- Publish data products
Technical Overview
The technical environment includes Spark, Snowflake, Python, SQL, AWS services, Data Lake, Data Mesh, and modern architecture patterns such as Lakehouse, with an emphasis on DevOps/DataOps pipelines and Agile practices.
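To make the extract/transform/load work described above concrete, here is a minimal, hypothetical sketch of the ETL pattern this role centers on. The record shape, field names, and validation rule are invented for illustration; a production pipeline in this environment would run on Spark and load into Snowflake rather than plain Python lists.

```python
# Hypothetical telematics trip records; in production these would be
# ingested from a third-party feed landed in a data lake (e.g. S3),
# not a hard-coded list.
RAW_RECORDS = [
    {"trip_id": "t1", "miles": 12.4, "hard_brakes": 2},
    {"trip_id": "t2", "miles": -1.0, "hard_brakes": 0},  # invalid mileage
    {"trip_id": "t3", "miles": 30.1, "hard_brakes": 5},
]

def extract(records):
    """Extract step: yield raw rows from the source as-is."""
    yield from records

def transform(rows):
    """Transform step: drop invalid rows and derive a simple risk signal."""
    for row in rows:
        if row["miles"] <= 0:
            continue  # discard records with impossible mileage
        yield {**row, "brakes_per_mile": row["hard_brakes"] / row["miles"]}

def load(rows):
    """Load step: materialize into a target 'table' (a list standing in
    for a warehouse table written via a Snowflake connector)."""
    return list(rows)

pipeline_output = load(transform(extract(RAW_RECORDS)))
```

The same three-stage shape maps directly onto a Spark job: `extract` becomes a DataFrame read, `transform` a chain of DataFrame operations, and `load` a write to a Snowflake table.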
Ideal Candidate
The ideal candidate is a mid-level data engineer with expertise in Spark, Snowflake, Python, and SQL, experienced in building scalable data pipelines and working with cloud platforms like AWS. They should be proactive, collaborative, and familiar with modern data architectures such as Data Mesh and Lakehouse.
Deal Breakers
- Lack of experience with Spark or Snowflake
- No cloud platform experience
- No experience with ETL/ELT pipelines
- Unwillingness to work in a hybrid environment