Position Details
About this role
This role involves designing, developing, and optimizing large-scale data pipelines using Databricks, Snowflake, and Azure Data Engineering to support enterprise analytics and reporting.
Key Responsibilities
- Develop and maintain Databricks notebooks and workflows
- Build scalable PySpark applications
- Develop Snowflake pipelines and SQL
- Design data pipelines with Azure Data Factory
- Troubleshoot pipeline issues
Technical Overview
The position requires hands-on experience with Databricks, PySpark, Snowflake, Delta Lake, and Azure Data Factory, focusing on scalable data pipeline development, performance tuning, and data governance.
Ideal Candidate
The ideal candidate is a mid-level data engineer with 2+ years of experience working with Databricks, Snowflake, and Azure Data Engineering. They excel in designing scalable data pipelines, optimizing ETL/ELT workflows, and collaborating with cross-functional teams.
Deal Breakers
- Lack of experience with Databricks or Snowflake
- No background in data engineering
- Unfamiliarity with Azure Data Factory
- No experience with ETL/ELT processes