Position Details
About this role
This role involves designing and maintaining scalable data pipelines and ETL processes to support analytics and reporting. The candidate will collaborate with cross-functional teams to optimize data architecture and ensure data quality.
Key Responsibilities
- Design data pipelines
- Develop ETL processes
- Collaborate with teams
- Optimize data storage
- Ensure data quality
Technical Overview
The technical environment includes big data technologies such as Apache Spark and Kafka, cloud storage solutions, and data modeling techniques. The role emphasizes data governance and real-time data processing.
Ideal Candidate
The ideal candidate is a mid-level data engineer with at least 3 years of experience in building scalable data pipelines and ETL processes. They should be proficient in SQL, Python, and big data technologies like Apache Spark and Kafka, with a strong understanding of data quality and governance.
Deal Breakers
- Lack of experience in data engineering
- No proficiency in SQL or Python
- Unfamiliarity with Apache Spark or Kafka
- No knowledge of data governance or data quality