✦ Luna Orbit — Data & Analytics

Sr Data Engineer - Databricks

at McKesson

📍 Columbus, OH, USA (Hybrid) 💰 $127K – $212K USD / year · Posted April 15, 2026
Type: Not specified
Experience: Senior
Exp. Years: Not specified
Education: Not specified
Category: Data & Analytics

Senior Data Engineers are needed to support McKesson’s Data & Analytics platform with two complementary tracks: an ingestion-focused role and a pipeline-focused (Databricks) role. The work includes building scalable ingestion patterns and robust data pipelines into a data warehouse/lakehouse, while ensuring data is cleansed, validated, and standardized for analytics use cases.

  • Develop data ingestion and integration pipelines into the data warehouse / lakehouse
  • Define and implement scalable ingestion and pipeline patterns for batch and real-time processing
  • Write SQL and/or use Databricks or Snowflake to cleanse, apply business logic, and standardize data
  • Work with structured and unstructured data to identify, transport, and validate data for synchronization and incremental loads
  • Design conceptual and logical data models based on business requirements
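The incremental-load and SQL-cleansing responsibilities above can be sketched in miniature. This is a hypothetical illustration only: SQLite stands in for Databricks/Snowflake SQL, and all table and column names (`raw_claims`, `clean_claims`, etc.) are invented for the example, not taken from the role.

```python
import sqlite3

# Minimal sketch of a high-watermark incremental load with SQL-based
# cleansing and standardization. SQLite stands in for a warehouse engine;
# every table/column name here is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_claims  (id INTEGER, member TEXT, amount TEXT, updated_at TEXT);
    CREATE TABLE clean_claims (id INTEGER PRIMARY KEY, member TEXT, amount REAL, updated_at TEXT);
    INSERT INTO raw_claims VALUES
        (1, '  alice ', '10.50', '2026-01-01'),
        (2, 'BOB',      '20.00', '2026-01-02');
""")

def incremental_load(conn):
    # Watermark = latest timestamp already present in the clean table.
    (watermark,) = conn.execute(
        "SELECT COALESCE(MAX(updated_at), '') FROM clean_claims"
    ).fetchone()
    # Pull only rows newer than the watermark, cleanse (trim/upper-case
    # names) and standardize (cast string amounts to numeric).
    conn.execute(
        """
        INSERT OR REPLACE INTO clean_claims
        SELECT id, UPPER(TRIM(member)), CAST(amount AS REAL), updated_at
        FROM raw_claims
        WHERE updated_at > ?
        """,
        (watermark,),
    )
    conn.commit()

incremental_load(conn)
# A later batch arrives; only the new row is picked up on the next run.
conn.execute("INSERT INTO raw_claims VALUES (3, 'carol', '5.25', '2026-01-03')")
incremental_load(conn)
rows = conn.execute("SELECT id, member, amount FROM clean_claims ORDER BY id").fetchall()
print(rows)  # [(1, 'ALICE', 10.5), (2, 'BOB', 20.0), (3, 'CAROL', 5.25)]
```

In Databricks the same pattern would typically use Delta Lake `MERGE INTO` rather than `INSERT OR REPLACE`, but the watermark-then-upsert shape is the same.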

This role centers on data engineering for ingestion and transformation, including integration from multiple sources into a data warehouse/lakehouse and support for both batch and real-time processing. Key technologies are SQL, Databricks (preferred), and Snowflake, with experience expected in incremental loads, data validation, and conceptual/logical data modeling.

The ideal candidate is a Senior Data Engineer experienced in building both ingestion and pipeline solutions for a data warehouse/lakehouse environment. They have strong SQL skills and hands-on Databricks experience (preferred), with the ability to use Snowflake as needed, and they can implement scalable batch and real-time processing with incremental loads. They collaborate well with analytics and technology partners and can mentor junior engineers on the ingestion-focused track.

  • Design and build robust data pipelines using Databricks and modern data engineering best practices
  • Develop data ingestion and integration pipelines from various sources into the data warehouse / lakehouse
  • Define and implement scalable ingestion and pipeline patterns to support both batch and real-time processing
  • Write code in SQL and/or cloud-based tools such as Databricks (preferred) or Snowflake to cleanse, apply business logic, and standardize data according to business rules
  • A background in pharmacy claims data (strong plus)
  • Databricks (preferred)
  • Snowflake

Databricks, Snowflake, SQL

data ingestion, data integration pipelines, data warehouse, lakehouse, scalable ingestion patterns, batch and real-time processing, SQL, Databricks, Snowflake, data cleansing, business logic, data standardization, data modeling, incremental loads, data validation, unstructured data

data pipelines, data ingestion, data integration, data warehouse / lakehouse, scalable ingestion patterns, batch processing, real-time processing, structured data, unstructured data, data synchronization, incremental loads, data validation, SQL, Databricks, Snowflake, data cleansing, business logic implementation, data standardization, conceptual and logical data models, complex data solutions across systems, commissioning data across systems, data quality assurance, pipeline monitoring

collaboration with Data Systems Analysts, Analytics and Technology partners, mentoring junior engineers (for ingestion-focused role), technical strategy and execution, proactive communication to business partners, issue resolution communication, risk-aware and secure solution mindset (trusted, stable, reliable, responsive, secure)
Industry Healthcare IT
Job Function Build scalable ingestion and Databricks-based data pipelines that integrate sources into a warehouse/lakehouse for analytics.
Role Subtype Data Engineer
Tech Domains SQL / PostgreSQL, Databricks, Cloud & Infrastructure
Visa Sponsorship No
Sr Data Engineer, Senior Data Engineer, Data Engineer - Databricks, data ingestion, ingestion patterns, data integration, data warehouse, lakehouse, batch and real-time processing, incremental loads, data synchronization, data validation, SQL, Databricks, Snowflake, data cleansing, business logic, data standardization, data modeling, conceptual data models, logical data models, unstructured data, secure, responsive, Columbus, OH, data pipelines

  • Must reside in the Columbus, OH area to support a hybrid work schedule
  • No visa sponsorship available; candidates must be authorized to work in the United States on a permanent basis without the need for current or future sponsorship
  • Must demonstrate experience with SQL and Databricks (preferred) or Snowflake for cleansing/transformation
