Position Details
About this role
We are seeking a Data Engineer to design, build, and maintain scalable data pipelines and architectures, enabling data-driven decision-making across the organization. You will integrate diverse data sources, optimize storage, and support analytics and ML initiatives.
Key Responsibilities
- Develop and implement scalable ETL processes using Informatica/Talend
- Design and maintain data warehouses and data lakes using Azure Data Lake, Hadoop, and Hive
- Build data pipelines with Spark, Python, Java, and Bash
- Collaborate with cross-functional teams to integrate diverse data sources and develop RESTful APIs
- Perform database design with Microsoft SQL Server and Oracle
Technical Overview
The role involves ETL development with Informatica/Talend; data warehouse and data lake design (Azure Data Lake, Hadoop, Hive); pipeline development with Spark, Python, and Java; RESTful APIs; and relational database design (SQL Server, Oracle). Agile delivery and cross-functional collaboration are required.
Ideal Candidate
The ideal candidate is a mid-level Data Engineer with 3+ years of experience building scalable data pipelines and data platforms using Hadoop-based ecosystems, cloud storage, and BI tooling (Looker) in an Agile, data-driven environment.
Deal Breakers
- Lack of hands-on experience with Hadoop/Spark ecosystems
- Insufficient SQL proficiency with SQL Server or Oracle
- Reluctance to work in a remote environment