Position Details
About this role
This role involves developing and supporting large-scale, real-time data processing systems using modern distributed technologies within a financial services environment.
Key Responsibilities
- Design and implement real-time data pipelines
- Collaborate with cross-functional teams
- Optimize distributed data processing
- Support cloud-based data warehouses
- Ensure system reliability and scalability
Technical Overview
The role uses Java, Python, Snowflake, Apache Flink, and cloud services to build scalable streaming data pipelines and distributed data systems.
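To illustrate the kind of work this stack involves: a tumbling-window aggregation is a core pattern in streaming frameworks such as Apache Flink. The sketch below shows the idea in plain Python rather than the Flink API; the event data, ticker symbols, and window size are purely illustrative.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed-size tumbling windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event falls into exactly one non-overlapping window.
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical trade events: (epoch seconds, ticker symbol)
events = [(0, "AAPL"), (3, "AAPL"), (7, "MSFT"), (12, "AAPL")]
print(tumbling_window_counts(events, 10))
# → {0: {'AAPL': 2, 'MSFT': 1}, 10: {'AAPL': 1}}
```

In a production Flink pipeline the same grouping would be expressed with keyed streams and window operators, with the framework handling event-time semantics, state, and fault tolerance.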
Ideal Candidate
The ideal candidate is a mid-level data engineer with experience in Java, Python, Snowflake, and streaming/distributed data systems, capable of designing scalable data pipelines and working with cloud-based big data technologies.
Deal Breakers
- Lack of experience with Java and Python
- No familiarity with Snowflake or streaming data
- Inability to work in a collaborative team environment