Position Details
About this role
This role involves supporting mission-critical data initiatives by designing, building, and maintaining scalable data pipelines on the Databricks platform within a government or defense context.
Key Responsibilities
- Build and optimize data pipelines
- Implement data ingestion and transformation processes
- Collaborate with stakeholders to understand data requirements
- Support data platform operations and troubleshoot issues
- Document data models and pipelines
Technical Overview
The technical environment includes Databricks, Delta Lake, PySpark, and Spark SQL, along with cloud storage such as Azure Data Lake Storage and Amazon S3, and data governance through Unity Catalog.
Ideal Candidate
The ideal candidate is a mid-level data engineer with at least 2 years of data engineering experience, proficient in Databricks and PySpark and comfortable in cloud environments such as AWS or Azure. They should be skilled at building scalable data pipelines and applying data governance practices.
Deal Breakers
- No experience with Databricks or PySpark
- No experience in data engineering
- Inability to obtain a security clearance
- No Bachelor's degree