Position Details
About this role
This role involves designing, building, and maintaining scalable data pipelines and data assets to support analytics, machine learning, and business intelligence initiatives.
Key Responsibilities
- Develop data pipelines
- Optimize ETL/ELT workflows
- Design data models
- Implement data quality frameworks
- Support ML feature pipelines
Technical Overview
The position requires expertise in data engineering tools such as Python, SQL, Dataflow, BigQuery, and cloud ecosystems like GCP, with a focus on data quality, governance, and scalable processing.
Ideal Candidate
The ideal candidate is a data engineer with at least 2 years of experience designing and maintaining scalable data pipelines using Python, SQL, and cloud platforms like GCP. They are skilled in data modeling, governance, and working with big data tools to support analytics and AI initiatives.
Deal Breakers
- Less than 2 years of experience
- No experience with Python or SQL
- Lack of cloud data ecosystem knowledge
- No experience with data pipelines