Position Details
About This Role
This role involves designing and implementing data pipelines and models to support analytics and operational needs in a collaborative, on-site environment.
Key Responsibilities
- Design data pipelines
- Develop data models
- Optimize data warehouse performance
- Ensure data quality
- Support ETL processes
Technical Overview
The technical environment includes Snowflake, Python, Apache Spark, Hadoop, Kafka, and data governance tools, with a focus on large-scale data processing and data quality.
Ideal Candidate
The ideal candidate is a data engineer with experience in building scalable data pipelines using Snowflake and Python. They should be familiar with big data tools like Spark, Hadoop, and Kafka, and have a strong focus on data quality and governance.
Deal Breakers
- Lack of experience with Snowflake or Python
- No familiarity with ETL processes
- Inability to work on-site in Miami