Position Details
About this role
This role involves developing and maintaining cloud-based data pipelines on AWS and Databricks, with a focus on ETL and big-data processing.
Key Responsibilities
- Build data pipelines
- Manage cloud data workflows
- Optimize ETL processes
- Support scalable data infrastructure
- Collaborate with data teams
Technical Overview
The work centers on cloud data engineering: ETL workflows built with PySpark on big-data platforms such as Databricks, deployed on AWS, and backed by strong Python skills.
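To make the ETL workflow mentioned above concrete, here is a minimal, framework-free sketch of the extract-transform-load pattern. In this role the same steps would run as PySpark jobs on Databricks reading from AWS storage; the data, function names, and in-memory "sink" below are illustrative stand-ins only.

```python
# Minimal ETL sketch. In production this would be PySpark on Databricks
# with AWS (e.g. S3) as source and sink; plain Python is used here so the
# example is self-contained.

def extract(rows):
    """Extract: read raw records (an in-memory list stands in for a cloud source)."""
    return list(rows)

def transform(records):
    """Transform: drop incomplete rows, normalize names, cast amounts."""
    return [
        {"user": r["user"].strip().lower(), "amount": float(r["amount"])}
        for r in records
        if r.get("user") and r.get("amount") is not None
    ]

def load(records, sink):
    """Load: aggregate transformed records into a target store (a dict keyed by user)."""
    for r in records:
        sink[r["user"]] = sink.get(r["user"], 0.0) + r["amount"]
    return sink

raw = [
    {"user": " Alice ", "amount": "10.5"},
    {"user": "bob", "amount": "2"},
    {"user": "", "amount": "99"},  # incomplete row, dropped by transform
]
warehouse = load(transform(extract(raw)), {})
print(warehouse)  # {'alice': 10.5, 'bob': 2.0}
```

The three-stage split mirrors how pipeline code is typically organized so each stage can be tested and scaled independently.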
Ideal Candidate
The ideal candidate is a data engineer with over three years of experience who is proficient in AWS, Python, and Databricks, has strong ETL skills and cloud-platform expertise, and has supported enterprise data projects.
Deal Breakers
- Less than 3 years in a data engineer role
- No experience with PySpark or AWS
- Lack of ETL expertise