Position Details
About this role
This role involves designing and migrating enterprise data pipelines to cloud platforms, leveraging Databricks, Hadoop, and Spark, with a focus on automation and cost efficiency.
Key Responsibilities
- Design enterprise data pipelines
- Migrate Hadoop workloads to Databricks
- Build Spark workloads
- Develop CI/CD pipelines
- Ensure data integrity
Technical Overview
The technical environment includes AWS, Databricks, Hadoop, Hive, Spark, EMR, and CI/CD tools such as Terraform and Jenkins; the role also calls for expertise in schema evolution and serverless automation.
Ideal Candidate
The ideal candidate is a senior data engineer with 5+ years of experience building and migrating enterprise data pipelines on AWS and Databricks, with strong skills in Spark, Hadoop, and CI/CD tools.
Deal Breakers
- Less than 5 years of experience
- No experience with AWS or Databricks
- No experience with Hadoop or Spark
- No background in data pipeline migration