Position Details
About this role
In this role, you will design and develop scalable data pipelines to support business analytics and reporting. You will use Databricks with Spark Scala and Azure Data Factory to build ETL workflows, ensure data quality and governance, and contribute to architecture decisions.
Key Responsibilities
- Design and develop scalable data pipelines using Databricks, Spark Scala, and Azure Data Factory
- Develop and implement robust ETL processes for data ingestion, transformation, and preparation
- Monitor, troubleshoot, and resolve data pipeline issues
- Ensure data quality, consistency, and governance across sources and environments
- Document workflows, pipeline configurations, and technical specifications
Technical Overview
Hands-on development of ETL pipelines for data ingestion, transformation, and preparation using Databricks, Spark Scala, and Azure Data Factory (ADF). The role also covers pipeline monitoring and troubleshooting, recommendations for data architecture improvements, documentation of workflows, and implementation of data governance, security compliance, and data modeling standards.
Ideal Candidate
The ideal candidate is a senior data engineering professional with strong experience building scalable data pipelines using Databricks and Spark Scala. They have hands-on experience with Azure Data Factory (ADF) for ETL, including pipeline monitoring and troubleshooting, and have implemented standards for data governance, data quality, and data modeling.
Deal Breakers
- Must have experience with Databricks
- Must have experience with Spark Scala
- Must have experience with Azure Data Factory (ADF) for ETL/data pipelines