Position Details
About this role
This role involves designing and maintaining scalable data pipelines, managing large-scale data ingestion, and deploying data-driven models using Databricks, Python, SQL, and AWS cloud services. The candidate will work across the full data lifecycle to enable predictive insights.
Key Responsibilities
- Design and maintain data pipelines
- Handle large-scale data ingestion
- Develop workflows with Databricks
- Perform feature engineering and data transformations
- Deploy ML models and monitor data quality
Technical Overview
Hands-on data engineering with Databricks, PySpark, Python, SQL, AWS, and cloud infrastructure tools. Focus on data pipelines, data modeling, and machine learning workflows.
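As a rough illustration of the transformation and data-quality work described above, here is a minimal ETL-style sketch in plain Python (the dataset, field names, and quality rule are hypothetical, not from the posting):

```python
# Minimal ETL sketch (hypothetical fields): ingest raw records,
# engineer one feature, and apply a simple data-quality filter.

def transform(records):
    """Derive a 'revenue_per_unit' feature and drop incomplete rows."""
    out = []
    for row in records:
        # Data-quality check: skip rows missing required fields.
        if row.get("revenue") is None or not row.get("units"):
            continue
        out.append({
            **row,
            # Engineered feature: revenue divided by units sold.
            "revenue_per_unit": row["revenue"] / row["units"],
        })
    return out

raw = [
    {"sku": "A1", "revenue": 120.0, "units": 4},
    {"sku": "B2", "revenue": None, "units": 3},  # dropped by quality check
    {"sku": "C3", "revenue": 50.0, "units": 2},
]

clean = transform(raw)
print(clean)
```

In practice this logic would run as a PySpark job on Databricks over much larger datasets, but the shape of the work (ingest, transform, validate) is the same.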
Ideal Candidate
The ideal candidate is a mid-level data engineer with hands-on experience building scalable data pipelines and proficiency in Python, SQL, and AWS cloud services. They are passionate about data science and analytics, and about deploying machine learning models in a collaborative environment.
Deal Breakers
- Lack of experience with data pipelines or ETL/ELT
- No cloud platform experience
- Limited programming skills in Python or SQL
- No familiarity with Databricks or AWS