Position Details
About this role
This Senior DataOps Engineer role owns the data analytics infrastructure, bridging DevOps and data engineering. You will build CI/CD and Everything-as-Code automation, provision cloud Spark clusters, run large-scale data processing jobs, and maintain data pipelines backed by production monitoring and incident response.
Key Responsibilities
- Build, maintain, and run CI/CD pipelines and infrastructure-as-code
- Provision and operate cloud Spark clusters and distributed data processing environments using tools like Airflow, Databricks, or EMR
- Write, test, and maintain data pipelines with production monitoring
- Design scalable, secure infrastructure templates and deployment automation across AWS, Azure, GCP, or OCI
- Investigate and resolve data pipeline/integration issues with root-cause analysis and durable fixes
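The pipeline orchestration work described above can be illustrated with a minimal, dependency-ordered task runner. This is a toy stand-in for what Airflow or Prefect provide, not their API; the pipeline stages and names below are purely illustrative:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Toy pipeline: each task maps to the set of tasks it depends on.
# In production these would be Airflow/Prefect tasks or operators.
PIPELINE = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load"},
}

def run_order(pipeline):
    """Return the tasks in an order that respects every dependency."""
    return list(TopologicalSorter(pipeline).static_order())

if __name__ == "__main__":
    print(run_order(PIPELINE))
```

Orchestrators differ mainly in what they layer on top of this core idea: scheduling, retries, backfills, and per-task observability.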
Technical Overview
You will manage CI/CD pipelines and infrastructure-as-code with Terraform or Ansible across AWS, Azure, GCP, or OCI, and operate Apache Spark data processing environments on managed platforms such as Databricks, AWS EMR, or GCP Dataproc. You will orchestrate workflows with Apache Airflow or Prefect and write and maintain data pipelines primarily in Python, with working knowledge of Java or Scala. You will apply root-cause analysis to resolve pipeline and integration issues while tuning performance and controlling costs.
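The monitoring and incident-response side of the role often comes down to patterns like the one sketched below: retrying transient pipeline failures with exponential backoff while logging every attempt so alerting can pick them up. This is a generic illustrative sketch, not any specific platform's API:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_with_retries(step, max_attempts=3, base_delay=0.1):
    """Run a pipeline step, retrying transient failures with
    exponential backoff; each failed attempt is logged so that
    monitoring and alerting can observe it."""
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception as exc:
            log.warning("attempt %d/%d failed: %s", attempt, max_attempts, exc)
            if attempt == max_attempts:
                raise  # exhausted retries: surface to incident response
            time.sleep(base_delay * 2 ** (attempt - 1))
```

In a real deployment the `step` callable would be a Spark job submission or a pipeline stage, and the final re-raise would page on-call via the alerting stack.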
Ideal Candidate
The ideal candidate is a senior DataOps Engineer with 6+ years of experience in DevOps, DataOps, or data platform engineering and hands-on expertise with Apache Spark on at least one managed platform (Databricks, AWS EMR, or GCP Dataproc). They are strong in Python and have practical experience with pipeline orchestration tools such as Apache Airflow or Prefect, plus CI/CD using GitLab CI, Jenkins, or GitHub Actions. They build and operate secure, scalable infrastructure using infrastructure-as-code (Terraform or Ansible) across the major cloud platforms, and they resolve data pipeline incidents with root-cause analysis and durable fixes.
Must-Have Skills
- 6+ years in a DevOps, DataOps, or data platform engineering role
- Hands-on experience with Apache Spark and at least one managed Spark platform (Databricks, AWS EMR, GCP Dataproc, or equivalent)
- Proficiency in Python
- Pipeline orchestration experience (Apache Airflow or Prefect)
- CI/CD experience with GitLab CI, Jenkins, or GitHub Actions
- Infrastructure-as-code proficiency with Terraform or Ansible