Position Details
About this role
This role involves developing large-scale data pipelines and supporting data migration and analytics initiatives using Apache Spark, Databricks, and Kafka, with a focus on AI-ready data architecture.
Key Responsibilities
- Build and maintain data pipelines
- Support data migration projects
- Develop AI-ready data architectures
- Use Apache Spark, Databricks, and Kafka
- Ensure data accuracy and traceability
Technical Overview
The technical environment includes Apache Spark, Databricks, Kafka, PostgreSQL, and Python. The role emphasizes scalable data engineering, data migration, and distributed processing for enterprise data platforms.
Ideal Candidate
The ideal candidate is a data engineer with at least 3 years of experience working with Apache Spark, Databricks, and data pipelines. They are proficient in Python and SQL, with a focus on large-scale data processing and AI-ready data architecture.
Deal Breakers
- Less than 3 years of experience
- No experience with Spark or Databricks
- Lack of data pipeline or migration experience
- Inability to work remotely
- No knowledge of Kafka or PostgreSQL