Position Details
About this role
This role involves developing and managing large-scale Spark-based data pipelines for classified government projects, ensuring secure and efficient data processing.
Key Responsibilities
- Develop Spark data pipelines
- Manage petabyte-scale data processing
- Lead data engineering teams
- Implement data security protocols
- Collaborate on data architecture
Technical Overview
The technical environment includes Apache Spark (both PySpark and the Java API), AWS cloud services, Kafka, NoSQL databases such as Cassandra, relational databases such as PostgreSQL, and containerization with Docker and Kubernetes.
Ideal Candidate
The ideal candidate is a senior Spark data engineer with more than 5 years of experience in big data processing, ETL pipeline development, and distributed systems, along with expertise in NoSQL databases and containerization technologies. They must hold a TS/SCI clearance with polygraph.
Deal Breakers
- Lack of Spark or big data experience
- No TS/SCI clearance with polygraph
- No Bachelor's degree
- Inability to work in a hybrid environment