Position Details
About this role
This role involves developing and maintaining data pipelines, supporting data orchestration, and ensuring data quality and governance for enterprise knowledge management initiatives.
Key Responsibilities
- Develop data pipelines
- Support data orchestration
- Perform data transformations
- Collaborate with stakeholders
- Ensure data governance
Technical Overview
The role uses Azure Data Factory, Databricks, dbt, and Python to build scalable data pipelines, perform data transformations, and support metadata management in a hybrid work environment.
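As an illustration only (not taken from the posting), the kind of transformation logic this stack typically expresses, whether in a dbt model, a Databricks notebook, or a plain Python step, looks something like the sketch below: normalize raw field names, apply a simple data-quality gate, and clean string values. The function name and field names are hypothetical.

```python
def transform(records):
    """Normalize field names, trim string values, and drop rows
    that fail a basic data-quality check (missing id)."""
    cleaned = []
    for row in records:
        # Standardize keys to lowercase snake_case.
        row = {k.strip().lower().replace(" ", "_"): v for k, v in row.items()}
        # Data-quality gate: require a non-empty id field.
        if not str(row.get("id", "")).strip():
            continue
        # Trim surrounding whitespace on string values.
        cleaned.append({k: v.strip() if isinstance(v, str) else v
                        for k, v in row.items()})
    return cleaned

raw = [
    {"ID": "1", "Customer Name": "  Ada  "},
    {"ID": "", "Customer Name": "missing id"},  # dropped by the quality gate
]
print(transform(raw))
```

In practice the same logic would be versioned as a dbt model or orchestrated as an Azure Data Factory activity, but the core transform-and-validate pattern is the same.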
Ideal Candidate
The ideal candidate is a data engineer with 3+ years of experience building data pipelines, supporting data orchestration, and working with Azure Data Factory and Databricks. They should be collaborative, detail-oriented, and capable of supporting enterprise data initiatives.
Deal Breakers
- Lack of experience with Azure Data Factory or Databricks
- No Python scripting skills
- Inability to work in a hybrid environment
- No experience with data pipelines or orchestration
- Poor collaboration skills