Position Details
About this role
Publix is seeking a Senior Software Engineer to join the PIMS2 Data Service, focusing on data ingestion from Kafka into a Databricks Lakehouse, implementing Delta Lake patterns, and enforcing governance and data contracts for analytics.
Key Responsibilities
- Design and optimize PySpark pipelines in Azure Databricks
- Implement Delta Lake Bronze/Silver/Gold patterns
- Enforce governance with Unity Catalog and data quality gates
- Deploy Databricks Declarative Pipelines (DLT)
- Deliver curated datasets with data contracts
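The data quality gates and data contracts mentioned above can be sketched in miniature. This is a minimal illustration in plain Python, not the team's actual implementation: in a Databricks pipeline this logic would typically live in PySpark or DLT expectations, and the contract fields and helper names here (`ITEM_CONTRACT`, `quality_gate`) are hypothetical.

```python
# Minimal sketch of a data-contract quality gate, in plain Python for
# illustration only. A real pipeline would express these checks as
# PySpark filters or DLT expectations; field names are hypothetical.

ITEM_CONTRACT = {
    "item_id": str,   # required, non-empty
    "price": float,   # required, non-negative
    "store_id": str,  # required
}

def violations(record: dict) -> list[str]:
    """Return the list of contract violations for one record."""
    errors = []
    for field, expected_type in ITEM_CONTRACT.items():
        if field not in record or record[field] is None:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for {field}")
    if isinstance(record.get("price"), float) and record["price"] < 0:
        errors.append("price must be non-negative")
    return errors

def quality_gate(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into (passed, quarantined) per the contract."""
    passed, quarantined = [], []
    for r in records:
        (quarantined if violations(r) else passed).append(r)
    return passed, quarantined
```

Routing failing records to a quarantine set rather than dropping them is a common pattern, since it preserves an audit trail for the governance process.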
Technical Overview
Stack includes PySpark on Azure Databricks, Delta Lake, Unity Catalog, and Databricks Declarative Pipelines (DLT). Also involves Azure API Management, Data Lake, and governance practices for analytics data.
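The Bronze/Silver/Gold (medallion) pattern referenced in this stack can be illustrated with a toy sketch. This uses plain Python lists so it is self-contained; the real pipeline would read Kafka payloads into Delta tables with PySpark, and the record fields and metric here are hypothetical.

```python
# Illustrative sketch of the Bronze/Silver/Gold (medallion) flow over
# plain Python lists. A production version would use PySpark and Delta
# tables on Databricks; field names and the Gold metric are hypothetical.
import json

def bronze(raw_messages: list[bytes]) -> list[dict]:
    """Bronze: land raw Kafka payloads parsed but otherwise untouched."""
    return [json.loads(m) for m in raw_messages]

def silver(bronze_rows: list[dict]) -> list[dict]:
    """Silver: cleanse types and deduplicate on the item_id key."""
    seen, out = set(), []
    for row in bronze_rows:
        key = row.get("item_id")
        if key and key not in seen:
            seen.add(key)
            out.append({"item_id": key, "price": float(row.get("price", 0))})
    return out

def gold(silver_rows: list[dict]) -> dict:
    """Gold: aggregate into an analytics-ready summary (average price)."""
    prices = [r["price"] for r in silver_rows]
    return {"item_count": len(prices),
            "avg_price": sum(prices) / len(prices) if prices else 0.0}
```

The layering matters because each stage has one job: Bronze preserves the raw feed for replay, Silver enforces cleanliness and uniqueness, and Gold serves curated aggregates to analytics consumers.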
Ideal Candidate
The ideal candidate is a senior engineer focused on data engineering, with strong PySpark and Azure Databricks experience. They should be comfortable implementing Delta Lake patterns, managing Databricks governance with Unity Catalog, and building declarative pipelines to support analytics across a retail environment.
Deal Breakers
- No PySpark or Azure Databricks experience
- No Delta Lake or Unity Catalog experience
- No data governance or data contracts experience
- No data lake or data ingestion experience