Position Details
About this role
This role involves designing, developing, and optimizing scalable data pipelines on cloud platforms using Databricks and Spark, supporting enterprise data needs in the financial sector.
Key Responsibilities
- Design and develop scalable data pipelines using Databricks and Spark
- Implement the medallion architecture (bronze, silver, and gold layers)
- Ensure data quality across pipeline stages
- Optimize pipeline performance on cloud platforms (AWS, Azure)
- Collaborate with cross-functional teams
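The medallion architecture named above layers data as bronze (raw ingested records), silver (validated and cleaned), and gold (aggregated for consumption). A minimal pure-Python sketch of that layering, assuming hypothetical record shapes; a real pipeline would use Spark DataFrames on Databricks rather than lists of dicts:

```python
# Toy illustration of medallion layering: bronze (raw) -> silver (validated)
# -> gold (aggregated). Field names ("account_id", "amount") are hypothetical.
from collections import defaultdict


def to_silver(bronze_records):
    """Keep only records that pass basic quality checks (a non-empty
    account_id and a numeric amount), normalizing types on the way."""
    silver = []
    for rec in bronze_records:
        if rec.get("account_id") and isinstance(rec.get("amount"), (int, float)):
            silver.append({"account_id": rec["account_id"],
                           "amount": float(rec["amount"])})
    return silver


def to_gold(silver_records):
    """Aggregate validated records into per-account totals."""
    totals = defaultdict(float)
    for rec in silver_records:
        totals[rec["account_id"]] += rec["amount"]
    return dict(totals)


bronze = [
    {"account_id": "A1", "amount": 100},
    {"account_id": "A1", "amount": "bad"},   # fails validation, dropped
    {"account_id": "A2", "amount": 50.5},
]
silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'A1': 100.0, 'A2': 50.5}
```

In a Spark setting, each layer would typically be a Delta table, with the quality filter expressed as DataFrame operations or table constraints instead of Python-level checks.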
Technical Overview
The technical environment includes Databricks, Spark, PySpark, SQL, cloud platforms (AWS, Azure), CI/CD tools, and enterprise job scheduling systems, with a focus on performance and data quality.
Ideal Candidate
The ideal candidate is a lead data engineer with at least 5 years of experience designing scalable data pipelines using Databricks and Spark. Strong skills in Python, SQL, and a cloud platform (AWS or Azure) are required; a background in financial services is preferred.
Deal Breakers
- Lack of experience with Databricks or Spark
- No cloud platform experience
- Insufficient years of experience
- No background in financial services