Position Details
About this role
This role involves developing and maintaining data pipelines and applications within a big data platform for an insurance organization, supporting data analysis and visualization.
Key Responsibilities
- Build scalable data pipelines
- Perform data analysis
- Integrate data sources
- Automate data workflows
- Ensure data quality
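The "Ensure data quality" responsibility above typically means validating records before they move downstream. As a minimal sketch (the field names and rules here are hypothetical, not from the posting), a check might flag rows with missing required fields:

```python
# Illustrative data-quality check. Column names ("policy_id", "premium")
# are hypothetical examples of insurance data fields.

def check_quality(rows, required_fields):
    """Return (row_index, problem) pairs for rows that fail basic checks."""
    problems = []
    for i, row in enumerate(rows):
        for field in required_fields:
            value = row.get(field)
            if value is None or value == "":
                problems.append((i, f"missing {field}"))
    return problems

records = [
    {"policy_id": "P-100", "premium": 250.0},
    {"policy_id": "", "premium": 310.5},   # empty id
    {"policy_id": "P-102"},                # premium absent
]

issues = check_quality(records, ["policy_id", "premium"])
# issues -> [(1, 'missing policy_id'), (2, 'missing premium')]
```

In practice this kind of rule would run inside the pipeline and route failing rows to a quarantine table or alert rather than silently dropping them.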
Technical Overview
The technical environment includes SQL, Python, and ETL workflows; cloud platforms such as AWS, GCP, and Azure; and tools such as Terraform, CloudFormation, and Docker for data processing and automation.
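The SQL + Python ETL workflow named above can be sketched in a few lines. This is a minimal, illustrative example using only the standard library (`sqlite3`); the `claims` table and its columns are hypothetical, not part of the posting:

```python
# Minimal extract-transform-load sketch: clean raw rows in Python,
# load them into a SQL table, query the result. Table/column names
# are hypothetical.
import sqlite3

def run_etl(raw_rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE claims (claim_id TEXT, amount REAL)")
    # Transform: keep only rows with an id and a positive amount.
    cleaned = [(r["claim_id"], float(r["amount"]))
               for r in raw_rows
               if r.get("claim_id") and float(r["amount"]) > 0]
    # Load into the target table, then query it.
    conn.executemany("INSERT INTO claims VALUES (?, ?)", cleaned)
    total, = conn.execute("SELECT SUM(amount) FROM claims").fetchone()
    return total

total = run_etl([
    {"claim_id": "C1", "amount": "120.50"},
    {"claim_id": "",   "amount": "99.00"},  # dropped: missing id
    {"claim_id": "C2", "amount": "-5.00"},  # dropped: non-positive
])
# total -> 120.5
```

A production version of this pattern would swap `sqlite3` for a warehouse client and add scheduling, logging, and retries, which is where the Terraform/Docker tooling listed above comes in.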
Ideal Candidate
The ideal candidate is a mid-level data engineer with 3+ years' experience building scalable data pipelines using SQL and Python, and familiarity with cloud platforms such as AWS, GCP, and Azure. They should be skilled in data analysis, data ingestion, and visualization tools.
Deal Breakers
- Lack of experience with SQL or Python
- No cloud platform experience
- Unwilling to work in a hybrid role in Madison