Position Details
About this role
This role involves designing and building data pipelines, data models, and data warehousing solutions at Amazon, leveraging big data tools and technologies to support business analytics and AI initiatives.
Key Responsibilities
- Build data pipelines
- Design data models
- Develop data warehousing solutions
- Partner with stakeholders
- Implement AI data platforms
Technical Overview
The technical environment includes Hadoop, Hive, Spark, EMR, and ETL tools, with scripting in Python and KornShell, focusing on large-scale data processing and warehousing.
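The pipeline work described above follows the classic extract-transform-load pattern. As a minimal sketch of that pattern (hypothetical table and field names, plain standard-library Python rather than the Spark/Hive stack the posting names):

```python
import csv
import io

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text into row dicts."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> dict[str, int]:
    """Transform: aggregate order amounts per region."""
    totals: dict[str, int] = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0) + int(row["amount"])
    return totals

def load(totals: dict[str, int], warehouse: dict) -> None:
    """Load: write the aggregate into an in-memory 'warehouse' table."""
    warehouse["regional_totals"] = totals

# Hypothetical sample data standing in for a real source system.
raw = "region,amount\nus,10\neu,5\nus,7\n"
warehouse: dict = {}
load(transform(extract(raw)), warehouse)
print(warehouse["regional_totals"])  # {'us': 17, 'eu': 5}
```

In the role itself, the same extract/transform/load stages would typically run as Spark jobs on EMR against Hive tables, orchestrated by Python or KornShell scripts, rather than as in-process functions.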
Ideal Candidate
The ideal candidate is a mid-level data engineer with at least one year of experience in data pipeline development, data modeling, and warehousing, plus familiarity with big data technologies and ETL tools. They should be curious, collaborative problem-solvers.
Deal Breakers
- Lack of experience with data modeling or ETL pipelines
- No experience with query languages or scripting languages
- No familiarity with big data technologies