Position Details
About this role
This role involves working on big data projects, designing data architectures, and deploying AI and analytics solutions using Databricks and Apache Spark.
Key Responsibilities
- Work on customer data projects
- Design and build reference architectures
- Implement big data and AI applications
- Provide technical support and feedback
- Manage project scope and timelines
Technical Overview
Expertise in data engineering, distributed computing with Apache Spark, cloud platforms (AWS, Azure, GCP), and MLOps for deploying scalable data solutions.
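The distributed-computing expertise above centers on Spark's map/reduce-style transformations. As a rough, hypothetical sketch of that pattern (plain Python standard library only, no Spark cluster assumed; the partition data and function names are illustrative):

```python
from collections import Counter
from functools import reduce

def map_partition(lines):
    # "map" step: tokenize each line of a partition into word counts
    return Counter(word for line in lines for word in line.split())

def merge(a, b):
    # "reduce" step: combine partial counts from two partitions
    return a + b

# Two toy "partitions" standing in for data spread across a cluster
partitions = [
    ["spark makes big data simple"],
    ["big data needs distributed computing"],
]

totals = reduce(merge, (map_partition(p) for p in partitions))
```

In Spark itself the equivalent work would run as a `flatMap`/`reduceByKey` pipeline distributed across executors; this local sketch only shows the shape of the computation.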
Ideal Candidate
The ideal candidate is a senior data engineer with over 6 years of experience in data platforms, proficient in Python or Scala, and experienced with distributed computing frameworks like Apache Spark. They should have strong knowledge of cloud ecosystems (AWS, Azure, GCP) and a background in technical project delivery and customer engagement.
Deal Breakers
- Less than 6 years of experience in data engineering
- No experience with Apache Spark or distributed computing
- Lack of cloud ecosystem knowledge (AWS, Azure, GCP)
- No experience with CI/CD or MLOps
- Inability to work on customer-facing projects