Position Details
About this role
This role focuses on designing, building, and deploying scalable data architectures with Databricks and Apache Spark, supporting clients on cloud-based data projects.
Key Responsibilities
- Designing data architectures
- Building scalable solutions
- Supporting cloud integrations
- Managing technical projects
- Collaborating with clients
Technical Overview
The technical environment includes Databricks platform, Apache Spark, cloud providers (AWS, Azure, GCP), with emphasis on data architecture, MLOps, and distributed processing.
Ideal Candidate
The ideal candidate is a senior data engineer with at least 6 years of experience in data platforms and analytics, skilled in Python or Scala, with strong knowledge of Apache Spark and cloud platforms such as AWS, Azure, or GCP. They should be able to design end-to-end data architectures and manage technical projects.
Deal Breakers
- Less than 6 years of experience
- No experience with Apache Spark
- Lack of cloud platform knowledge
- No programming in Python or Scala
- No project delivery experience