Position Details
About this role
This role involves designing, optimizing, migrating, and supporting Hadoop- and Spark-based data processing systems, with a focus on high-performance distributed data environments.
Key Responsibilities
- Design and optimize Hadoop applications
- Troubleshoot data and performance issues
- Manage Hadoop, NiFi, and Spark clusters
- Support migration to new environments
- Perform system performance tuning
Technical Overview
The technical environment includes Hadoop, Spark, NiFi, Kubernetes, cloud platforms such as AWS and GCP, Linux, scripting languages, and database management.
Ideal Candidate
The ideal candidate is a mid-level data engineer with hands-on experience in Hadoop, Spark, and data pipeline management. They should have strong troubleshooting skills and familiarity with cloud platforms such as AWS or GCP, and be capable of supporting large-scale data systems.
Deal Breakers
- Lack of experience with Hadoop or Spark
- No experience with cluster management tools
- Unable to support migration or troubleshooting in distributed systems