Position Details
About this role
A role focused on designing and deploying scalable machine learning data pipelines and models in a hybrid cloud environment, primarily using AWS and Spark technologies.
Key Responsibilities
- Design distributed data pipelines
- Develop ML models for real-time inference
- Construct CI/CD pipelines for ML infrastructure
- Integrate workflows with Kafka and AWS services
- Automate model deployment and lifecycle
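The pipeline and Kafka-integration responsibilities above can be sketched with Spark Structured Streaming. This is a minimal illustrative sketch, not code from the role itself: the broker address and topic name are hypothetical, and it assumes the spark-sql-kafka connector is available on the Spark classpath.

```python
def build_kafka_stream(spark, bootstrap_servers, topic):
    """Sketch: define a Spark Structured Streaming source that reads
    raw events from a Kafka topic. Server and topic names are
    illustrative placeholders, not values from the posting."""
    return (
        spark.readStream
        .format("kafka")  # requires the spark-sql-kafka connector
        .option("kafka.bootstrap.servers", bootstrap_servers)
        .option("subscribe", topic)
        .load()  # returns a streaming DataFrame of Kafka records
    )
```

In a real deployment the returned streaming DataFrame would be parsed and fed to a model-scoring stage or written out with `writeStream`.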
Technical Overview
The environment includes Apache Spark, Databricks, AWS SageMaker, Kafka, Lambda, and Step Functions, with an emphasis on cloud-based ML deployment, real-time inference, and CI/CD automation.
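The real-time inference piece of this stack typically pairs Lambda with a SageMaker endpoint. Below is a minimal sketch of that pattern, assuming a hypothetical JSON-in/JSON-out endpoint; the endpoint name and payload shape are illustrative assumptions, and the boto3 client is injectable so the handler can be exercised without AWS credentials.

```python
import json

def handler(event, context, client=None):
    """Lambda handler sketch: forward a feature payload to a
    hypothetical SageMaker endpoint for real-time inference."""
    if client is None:
        import boto3  # created lazily so the handler is unit-testable
        client = boto3.client("sagemaker-runtime")
    payload = json.dumps({"features": event["features"]})
    response = client.invoke_endpoint(
        EndpointName="example-model-prod",  # hypothetical endpoint name
        ContentType="application/json",
        Body=payload,
    )
    # SageMaker returns the model output as a streaming body
    return json.loads(response["Body"].read())
```

The injectable `client` parameter is a common design choice for ML infrastructure code, since it lets CI/CD pipelines test handler logic without live endpoints.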
Ideal Candidate
The ideal candidate is a mid-level machine learning engineer with at least 2 years of experience designing distributed data pipelines and deploying ML models in cloud environments, particularly AWS. Strong skills in Spark, Databricks, and real-time data processing are essential.
Deal Breakers
- No experience with Apache Spark or Databricks
- No cloud ML deployment experience
- Bachelor's degree not in a relevant field