Position Details
About this role
This role involves designing and developing data pipelines and systems for high-performance data processing, leveraging cloud platforms and distributed frameworks to support financial index data analysis.
Key Responsibilities
- Design and develop data pipelines
- Manage real-time data streams
- Optimize data processing systems
- Collaborate on system architecture
- Ensure data quality and scalability
Technical Overview
The role focuses on Python-based data engineering, real-time streaming with Apache Kafka, distributed computing with Apache Spark, cloud platforms (AWS, Azure), and data architecture for index data processing.
Ideal Candidate
The ideal candidate is a senior data engineer with over 3 years of experience in Python, data pipelines, and cloud platforms like AWS and Azure. They should have strong skills in real-time data streaming, distributed computing, and data architecture.
Deal Breakers
- Lack of experience with cloud platforms (AWS, Azure)
- No proficiency in Python
- No experience with data pipelines or data engineering
- Unfamiliarity with distributed computing frameworks