Position Details
About this role
A mid-level data engineering role focused on building and maintaining scalable data pipelines and architectures across multi-cloud environments for Walmart's retail analytics.
Key Responsibilities
- Design data pipelines
- Deploy workflows on GCP
- Implement multi-cloud architectures
- Monitor pipeline performance
- Troubleshoot data issues
Technical Overview
The role involves designing, deploying, and troubleshooting data pipelines using PySpark, Airflow, GCP, Azure, Snowflake, and AI frameworks to support analytics and operational reporting.
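To give a flavor of the transformation work this role involves, here is a minimal plain-Python sketch of a typical pipeline step (deduplicating raw sales events and aggregating revenue per store). In a production pipeline this logic would be expressed in PySpark and orchestrated with Airflow; the function name and record fields below are hypothetical, chosen only for illustration:

```python
from collections import defaultdict

def aggregate_daily_sales(records):
    """Deduplicate raw sales events by order_id, then total revenue per store.

    `records` is a list of dicts with hypothetical fields:
    order_id, store_id, amount. The PySpark equivalent would be a
    dropDuplicates() followed by a groupBy().sum().
    """
    seen = set()
    totals = defaultdict(float)
    for rec in records:
        if rec["order_id"] in seen:  # skip duplicate events from upstream retries
            continue
        seen.add(rec["order_id"])
        totals[rec["store_id"]] += rec["amount"]
    return dict(totals)
```

For example, if the same order event arrives twice (a common upstream failure mode), it is counted only once in the per-store totals.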
Ideal Candidate
The ideal candidate is a mid-level data engineer with strong experience in building scalable data pipelines using PySpark, Airflow, and GCP services. They should have a solid understanding of data lakes, warehouses, and multi-cloud architectures, with a focus on operational reliability and cross-functional collaboration.
Deal Breakers
- Lack of experience with GCP or the Spark ecosystem
- No background in data pipeline development
- Inability to work with cloud orchestration tools