Position Details
About this role
We are seeking an experienced Big Data Engineer to design, develop, and maintain scalable data pipelines and solutions using Hadoop, Spark, and cloud platforms. The role involves working with large-scale data processing frameworks and optimizing data systems for performance and scalability.
Key Responsibilities
- Design and maintain data pipelines
- Work with Hadoop and Spark frameworks
- Develop ETL processes
- Optimize data processing performance
- Collaborate with cross-functional teams
Technical Overview
The role involves working with Hadoop ecosystem tools such as Hive, Pig, Oozie, and Apache Spark, along with programming in Python and Scala. Cloud platforms like AWS and GCP are integral, with a focus on ETL, distributed processing, and data modeling.
Ideal Candidate
The ideal candidate is a senior Big Data Engineer with over 10 years of experience in data engineering, proficient in the Hadoop ecosystem, Apache Spark, Python, and Scala. They possess strong hands-on experience with distributed data processing frameworks, cloud platforms, and ETL pipeline development.
Deal Breakers
- Less than 10 years of data engineering experience
- Lack of experience with Hadoop, Spark, or cloud platforms
- No experience with ETL or distributed data processing