Position Details
About this role
This role involves designing and building analytical data stores and integration processes to meet internal and external data needs. The focus is on creating scalable, efficient data solutions in a fast-growing environment.
Key Responsibilities
- Build ETL processes
- Design analytic data stores
- Collaborate with teams
- Research analytical solutions
- Optimize source data acquisition
Technical Overview
The technical environment includes SQL, Python, PySpark, Apache Airflow, and AWS cloud services. The stack emphasizes data pipelines, ETL workflows, and data warehousing.
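To ground the extract-transform-load pattern this stack centers on, here is a minimal, hedged sketch of an ETL step in plain Python. It is illustrative only: the record shape, field names, and SQLite sink are invented stand-ins for whatever upstream sources and warehouse tables a real pipeline here would use (in practice, PySpark jobs orchestrated by Airflow would fill these roles).

```python
import sqlite3

# Hypothetical source records standing in for rows pulled from an upstream system.
source_rows = [
    {"user_id": 1, "country": "US", "amount": "19.99"},
    {"user_id": 2, "country": "", "amount": "5.00"},    # missing country: dropped
    {"user_id": 3, "country": "DE", "amount": "12.50"},
]

def transform(rows):
    """Drop rows with a missing country and cast amount to float."""
    return [
        (r["user_id"], r["country"], float(r["amount"]))
        for r in rows
        if r["country"]
    ]

def load(rows, conn):
    """Write cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (user_id INTEGER, country TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(source_rows), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

The same extract/transform/load split scales up directly: swap the list of dicts for a Spark DataFrame read, the list comprehension for DataFrame transformations, and the SQLite insert for a warehouse write, with Airflow scheduling each stage as a task.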
Ideal Candidate
The ideal candidate is a mid-level data engineer with 2+ years of experience in building data pipelines, ETL processes, and data warehousing solutions using SQL, Python, and AWS technologies. They are proactive, collaborative, and eager to learn new analytical approaches.
Deal Breakers
- No experience with Apache Airflow
- No AWS or cloud experience
- No SQL or Python skills
- Less than 2 years of relevant experience