Position Details
About this role
This role involves designing and implementing scalable data pipelines and integrations for enterprise clients using AWS cloud services and open-source tools.
Key Responsibilities
- Collaborate with architects and engineering teams to capture requirements
- Develop and improve data integrations
- Build data pipelines for customer data sources
- Ensure data quality and scalability
- Support customer solutions
Technical Overview
The technical environment includes the AWS cloud platform, Python, PySpark, Apache Airflow, MongoDB, MySQL, Docker, and Kubernetes for building and maintaining data workflows.
Ideal Candidate
The ideal candidate is a data engineer with 5+ years of experience building scalable data integrations on AWS cloud services. They possess strong programming skills in Python and PySpark and have hands-on experience with data pipeline orchestration tools such as Apache Airflow.
Deal Breakers
- Lack of AWS experience
- Less than 5 years of relevant experience
- No experience with data pipelines or integrations
- No AWS certifications