Position Details
About this role
A mid-level data engineer role focused on building and maintaining large-scale data pipelines and analytics infrastructure within a financial services environment.
Key Responsibilities
- Design and develop data pipelines
- Optimize data processing performance
- Support real-time streaming applications
- Collaborate with data science teams
- Maintain data warehouse solutions
Technical Overview
The environment involves big data processing with Hadoop, Spark, and Kafka; data warehousing with Redshift and Snowflake; and scripting on Unix/Linux systems, supporting both real-time streaming and batch data workflows.
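To illustrate the kind of work this stack implies, here is a minimal, hedged sketch of a tumbling-window aggregation, the core pattern a Spark Structured Streaming job consuming from Kafka would apply to real-time events. It is written in plain Python (no Spark/Kafka dependency) purely to show the concept; the event shape and the `symbol` field are hypothetical examples, not taken from the posting.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Bucket (unix_timestamp, key) events into fixed-size tumbling
    windows and count occurrences per key within each window --
    conceptually what a streaming aggregation over a Kafka topic does.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Align the timestamp down to the start of its window.
        window_start = int(ts) - (int(ts) % window_seconds)
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical sample events: (unix_timestamp, ticker_symbol)
events = [
    (100, "AAPL"), (130, "AAPL"), (200, "MSFT"), (215, "AAPL"),
]
result = tumbling_window_counts(events, 60)
print(result)
# 60s windows: ts=100 falls in [60,120), ts=130 in [120,180),
# ts=200 and ts=215 both in [180,240)
```

In a production pipeline the same logic would be expressed declaratively (e.g. a windowed `groupBy` in Spark), with the framework handling late data, state, and checkpointing.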
Ideal Candidate
The ideal candidate is a mid-level data engineer with at least 3 years of experience in application development and big data technologies, proficient in Python, Scala, and Java, and experienced with distributed data tools such as Hadoop and Spark. They should be familiar with data warehousing solutions such as Redshift or Snowflake and comfortable working in cloud environments.
Deal Breakers
- Less than 3 years of experience
- No experience with big data tools
- Lack of proficiency in Python, Scala, Java
- No experience with data warehousing or NoSQL