Position Details
About this role
This role involves architecting and operationalizing Spark at scale within Cisco Data Fabric, focusing on building resilient, high-performance data infrastructure for enterprise clients.
Key Responsibilities
- Architect and operationalize Spark at scale
- Build resilient infrastructure with fault tolerance
- Design operational backbone including monitoring and observability
- Support high availability and performance at petabyte scale
- Lead technical decisions across engineering teams
Technical Overview
The technical environment includes Apache Spark, cloud platforms such as AWS, modern data formats such as Parquet and Iceberg, and Kubernetes for orchestration, with an emphasis on scalable, fault-tolerant data solutions.
Ideal Candidate
The ideal candidate is a senior data engineer with extensive experience designing and deploying Spark at scale in cloud environments, particularly AWS. They possess strong knowledge of modern data formats and scalable architecture patterns, with a proven track record of leading technical decisions.
Deal Breakers
- Lack of experience with Spark at scale
- No cloud deployment experience
- Less than 8 years of relevant experience