Position Details
About this role
This role involves working with clients on big data projects built on the Databricks platform, delivering data engineering, data science, and cloud solutions that help them maximize the value of their data.
Key Responsibilities
- Designing reference architectures
- Building customer use cases
- Implementing big data solutions
- Supporting operational issues
- Collaborating with technical teams
Technical Overview
Technical scope includes data architecture design, distributed computing with Apache Spark, cloud integrations (AWS, Azure, GCP), and MLOps, primarily on the Databricks platform.
Ideal Candidate
The ideal candidate is a senior data engineer with more than 6 years of experience in data platforms and analytics, proficient in Python or Scala, with deep expertise in Apache Spark and cloud ecosystems such as AWS, Azure, or GCP. They should have strong project delivery skills and experience working directly with clients on big data solutions.
Deal Breakers
- Fewer than 6 years of experience in data engineering
- No experience with Apache Spark or Databricks
- Lack of cloud ecosystem knowledge
- Inability to work with clients or manage conflicts