Position Details
About this role
This role involves working on big data projects using the Databricks platform, providing data engineering, data science, and cloud technology solutions to clients.
Key Responsibilities
- Designing reference architectures
- Building customer use cases
- Integrating with client systems
- Providing technical support
- Guiding project delivery
Technical Overview
The technical environment includes Apache Spark, cloud ecosystems (AWS, Azure, GCP), Databricks, and MLOps practices, focusing on scalable data architecture and distributed computing.
Ideal Candidate
The ideal candidate is a data engineer with more than 6 years of experience in data platforms, proficient in Python or Scala, and experienced with cloud ecosystems such as AWS, Azure, or GCP. They should have strong skills in distributed computing and data architecture design, with a focus on delivering impactful big data solutions.
Deal Breakers
- Less than 6 years of experience in data engineering
- No experience with Apache Spark
- No cloud ecosystem experience
- No experience in data architecture design