Position Details
About this role
This role involves building and operating scalable data pipelines on Databricks, Spark, and AWS to support PayPay's rapid growth.
Key Responsibilities
- Create data ingestion pipelines
- Optimize large-scale data workflows
- Develop data processing workflows
- Manage data platform infrastructure
- Ensure high performance and reliability
Technical Overview
The technical environment includes Databricks, Delta Lake, Spark, Scala, AWS cloud services, Airflow, Kafka, and Terraform for data pipeline creation and management.
Ideal Candidate
The ideal candidate is a mid-level data engineer with 3+ years of experience in cloud data platforms, proficient in Databricks, Spark, Scala, and AWS, with strong data pipeline development skills.
Deal Breakers
- Lack of experience with Databricks or Spark
- No AWS or cloud platform experience
- Less than 3 years of relevant data engineering experience