Position Details
About this role
This role leads data engineering within a financial services organization, building scalable data pipelines and architectures on cloud platforms and big data technologies.
Key Responsibilities
- Develop data pipelines
- Implement ETL processes
- Design data architectures
- Collaborate with stakeholders
- Ensure data quality and governance
Technical Overview
The technical environment includes Python, Apache Spark, Hadoop, Snowflake, Databricks, AWS, and Azure, focusing on ETL, data modeling, and scalable data infrastructure.
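To make the ETL focus concrete, here is a minimal sketch of the extract-transform-load pattern in plain Python. This is illustrative only: a production pipeline in this stack would use PySpark DataFrames on Databricks or Snowflake, and the record schema and field names below are invented for the example.

```python
# Toy ETL pass. Assumptions: records are "account_id,amount" strings;
# the "warehouse" is a plain list standing in for a warehouse table.

def extract(rows):
    """Extract: parse raw comma-separated records into dicts."""
    return [dict(zip(("account_id", "amount"), r.split(","))) for r in rows]

def transform(records):
    """Transform: cast amounts to integer cents, dropping malformed rows."""
    out = []
    for rec in records:
        try:
            out.append({"account_id": rec["account_id"],
                        "amount_cents": round(float(rec["amount"]) * 100)})
        except (KeyError, ValueError):
            continue  # data-quality rule: skip unparseable records
    return out

def load(records, sink):
    """Load: append validated records to the destination; return row count."""
    sink.extend(records)
    return len(records)

raw = ["A-1,12.50", "A-2,not-a-number", "A-3,3.10"]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
```

The same extract/transform/load stages map onto Spark jobs at scale; the data-quality step in `transform` reflects the governance responsibility above.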
Ideal Candidate
The ideal candidate is a senior data engineer with over 8 years of experience in building scalable data pipelines, proficient in Python, Spark, Hadoop, and cloud data platforms like Snowflake and Databricks, with strong knowledge of data architecture.
Deal Breakers
- Less than 8 years of data engineering experience
- No experience with Spark or Hadoop
- Lack of proficiency in Python
- No knowledge of cloud data platforms