Position Details
About this role
This role involves building and managing large-scale data pipelines, ensuring data quality, security, and governance across enterprise data systems.
Key Responsibilities
- Build and operationalize data pipelines
- Ensure data quality and security
- Implement data governance
- Manage data systems and teams
- Optimize data architectures
Technical Overview
The technical environment includes AWS, Databricks, PySpark, Snowflake, and data lake/warehouse architectures for data engineering and analytics.
Ideal Candidate
The ideal candidate is a data engineer with at least 8 years of experience, proficient in AWS, Databricks, and PySpark, with strong skills in data governance, security, and ETL development, and prior experience managing data teams.
Deal Breakers
- Fewer than 8 years of experience
- No experience with AWS or Databricks
- No experience with ETL or data warehousing
- No management experience