Position Details
About this role
A senior data engineering role focused on designing, developing, and optimizing large-scale data pipelines using Databricks, Snowflake, and cloud data tools to support analytics and enterprise data initiatives.
Key Responsibilities
- Develop and optimize data pipelines
- Implement ELT/ETL workflows
- Manage Snowflake and Databricks environments
- Ensure data quality and performance
- Collaborate with stakeholders on data solutions
Technical Overview
The role involves working with Databricks, PySpark, Snowflake SQL, ELT/ETL workflows, and cloud data platforms such as Azure Data Factory and Azure Synapse to deliver scalable data solutions.
Ideal Candidate
The ideal candidate is a senior data engineer with at least 4 years of experience in big data development, proficient in Databricks, PySpark, Snowflake, and cloud data solutions. They excel at designing scalable data pipelines and collaborating across technical teams to support enterprise analytics.
Deal Breakers
- Less than 4 years of relevant experience
- Lack of hands-on experience with Databricks or Snowflake
- No experience with data pipeline development