Position Details
About this role
This role involves designing, building, and maintaining scalable data pipelines on the Databricks platform to support mission-critical data initiatives for government clients.
Key Responsibilities
- Build and optimize data pipelines
- Implement data ingestion and transformation processes
- Collaborate with data scientists and stakeholders
- Support platform operations and troubleshooting
- Contribute to architecture and documentation
Technical Overview
The technical environment includes Databricks, Delta Lake, PySpark, Spark SQL, cloud storage solutions such as Azure Data Lake Storage (ADLS) and Amazon S3, and data governance tools such as Unity Catalog.
Ideal Candidate
The ideal candidate is a mid-level data engineer with at least two years of experience in data pipeline development, proficiency in Databricks, PySpark, and SQL, and a strong understanding of cloud environments and data governance.
Deal Breakers
- Lack of experience with Databricks or PySpark
- No experience in cloud environments
- No Bachelor's degree in a relevant field
- Unable to obtain required security clearance