Position Details
About this role
This role involves designing and maintaining large-scale data pipelines and architectures with Spark, PySpark, and AWS cloud services to support data analytics and reporting initiatives.
Key Responsibilities
- Partner with stakeholders
- Design data architectures
- Build ETL pipelines
- Manage cloud workflows
- Ensure data security
Technical Overview
The environment includes Spark, PySpark, Databricks, AWS services, SQL, and Unix/Linux systems, focusing on scalable data lake and data warehouse solutions.
Ideal Candidate
The ideal candidate is a senior data engineer with over 8 years of experience in Spark, PySpark, and AWS cloud services, capable of designing scalable data architectures and managing complex ETL pipelines in a collaborative environment.
Deal Breakers
- Less than 8 years of experience in relevant technologies
- Lack of experience with Databricks or AWS services
- No experience with SQL query optimization
- Unfamiliarity with CI/CD pipelines