Position Details
About this role
This role involves developing and maintaining scalable data pipelines using Databricks and PySpark, with a focus on real-time processing and data quality.
Key Responsibilities
- Build scalable data pipelines
- Monitor and troubleshoot data workflows
- Collaborate with data teams
- Optimize data transformation processes
- Support real-time data solutions
Technical Overview
The technical environment includes Databricks, PySpark, Python, SQL, and OpenSearch, emphasizing big data processing, ETL/ELT pipelines, and troubleshooting.
Ideal Candidate
The ideal candidate is a senior data engineer with extensive experience in Databricks, PySpark, and Python, capable of designing scalable data pipelines and troubleshooting issues in large-scale data environments.
Deal Breakers
- Lack of experience with Databricks or PySpark
- No experience with ETL/ELT pipelines
- Unwillingness to work in a hybrid or flexible environment
- No experience with data pipeline monitoring