Position Details
About this role
This role involves designing, building, and operating scalable data and analytics solutions on the Databricks platform, supporting government missions with AI-enabled analytics and machine learning.
Key Responsibilities
- Design scalable data solutions
- Build and operate Databricks workflows
- Integrate Databricks with CI/CD pipelines
- Apply machine learning techniques
- Support government data projects
Technical Overview
The technical environment includes Databricks, cloud platforms (AWS, Azure, GCP), PySpark, Spark SQL, Delta Lake, Unity Catalog, and MLflow, with a focus on data engineering, ML workflows, and data architecture.
Ideal Candidate
The ideal candidate is a mid-level data engineer with 3+ years of experience in data engineering or analytics, proficient in the Databricks platform, PySpark, and cloud platforms such as AWS, Azure, or GCP. They should have experience with data architecture, MLflow, and working within government or regulated environments.
Deal Breakers
- Lack of experience with the Databricks platform
- No experience with cloud platforms (AWS, Azure, GCP)
- No Bachelor's degree
- Less than 2 years of experience in data engineering