Position Details
About this role
This role involves developing and optimizing data pipelines for analytics and machine learning workloads, ensuring platform reliability, and collaborating with data scientists and analysts.
Key Responsibilities
- Develop data pipelines
- Maintain data workflows
- Collaborate with data scientists
- Optimize data platform performance
- Ensure data quality
Technical Overview
The technical environment includes Python, SQL, Databricks, Spark, cloud platforms (AWS, Azure, GCP), and data warehousing tools, with a focus on building scalable data workflows.
Ideal Candidate
The ideal candidate is a mid-level data engineer with 2+ years of experience in building and maintaining data pipelines, proficient in Python, SQL, and cloud platforms like AWS, Azure, or GCP. They should have hands-on experience with Databricks or Spark and a strong understanding of data modeling and warehousing.
Deal Breakers
- No experience with data pipelines or ETL workflows
- No experience with cloud platforms (AWS, Azure, GCP)
- No proficiency in Python or SQL
- No experience with Databricks or Spark