Position Details
About this role
This role involves designing and maintaining scalable data pipelines using Python, Spark, and Databricks, supporting healthcare data integration and automation in a cloud environment.
Key Responsibilities
- Build data pipelines
- Optimize ETL processes
- Implement data governance
- Collaborate with cross-functional teams
- Automate workflows
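The responsibilities above center on the extract-transform-load (ETL) pattern. As a rough illustration only, the sketch below shows that pattern in plain Python; in the actual role the transform and load steps would run on Spark/Databricks, and every name here (`RAW_RECORDS`, `clean_record`, `run_pipeline`) is hypothetical, not part of any real codebase.

```python
# Minimal ETL sketch: extract raw rows, transform (normalize fields),
# load (here, simply collect into a list). Illustrative only -- a
# production pipeline would use Spark DataFrames, not Python lists.

RAW_RECORDS = [
    {"patient_id": " 001 ", "visit_date": "2024-01-05", "charge": "120.50"},
    {"patient_id": "002", "visit_date": "2024-01-06", "charge": "89.00"},
]

def clean_record(rec):
    """Transform step: trim identifiers and cast charges to float."""
    return {
        "patient_id": rec["patient_id"].strip(),
        "visit_date": rec["visit_date"],
        "charge": float(rec["charge"]),
    }

def run_pipeline(records):
    """Extract -> transform -> load, each record cleaned independently."""
    return [clean_record(r) for r in records]

cleaned = run_pipeline(RAW_RECORDS)
```

The same per-record cleaning logic maps directly onto a Spark UDF or DataFrame expression once the data no longer fits on one machine.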
Technical Overview
Focus on Python, Spark, Databricks, Airflow, SQL, cloud platforms like AWS, and data governance tools like Unity Catalog.
Ideal Candidate
The ideal candidate is a mid-level data engineer with expertise in Python, Spark, and Databricks, experience building scalable ETL pipelines on cloud platforms such as AWS, and a track record of optimizing data workflows.
Deal Breakers
- No experience with Spark or Databricks
- No knowledge of cloud platforms such as AWS
- No experience with ETL tools