✦ Luna Orbit — DevOps & SRE

DataOps Engineer

at Radcube LLC

📍 Remote (US) · Posted April 14, 2026
Type Full-Time
Experience Mid-level
Exp. Years 5+ years
Education Bachelor’s degree in Computer Science, Engineering, or a related STEM field
Category DevOps & SRE

Radcube is seeking a DataOps Engineer to support enterprise data and analytics initiatives in a HIPAA-compliant, regulated cloud environment. The role focuses on end-to-end data operations including pipeline orchestration, CI/CD, monitoring, data quality/governance, and secure handling of PII/PHI in modern AWS and Databricks-based data platforms.

  • Design and manage DataOps frameworks for scalable data pipelines
  • Build CI/CD pipelines for automated testing, deployment, and version control
  • Implement orchestration using Apache Airflow and AWS Step Functions
  • Establish observability and monitoring, and troubleshoot pipeline issues
  • Enforce data quality, governance, HIPAA compliance, and PII/PHI protection (encryption and IAM/RBAC)
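The data-quality and PII/PHI-protection duties above can be sketched as a minimal validation step. This is an illustrative sketch only — the record schema, required fields, and SSN-masking rule are hypothetical, not taken from the posting:

```python
import re

# Hypothetical record schema for a healthcare claims feed.
REQUIRED_FIELDS = {"claim_id", "member_id", "service_date"}
# Crude PII guard: flag anything that looks like a raw US SSN.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality violations for one record."""
    issues = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        issues.append(f"missing fields: {sorted(missing)}")
    # PHI/PII check: unmasked SSNs must never reach downstream tables.
    for key, value in record.items():
        if isinstance(value, str) and SSN_PATTERN.search(value):
            issues.append(f"unmasked SSN-like value in field '{key}'")
    return issues
```

In a production pipeline a check like this would typically run as a task inside the orchestrator (Airflow or Step Functions), quarantining failing records rather than just reporting them.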

You will design and manage DataOps frameworks and build CI/CD pipelines for data workflows with automated testing and deployment. The technical environment includes AWS services (S3, Glue, Lambda, Step Functions, Redshift), orchestration with Apache Airflow and AWS Step Functions, and observability via CloudWatch, Datadog, Prometheus, and Grafana—within data lake/lakehouse architectures on AWS and Databricks.
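The observability side of this environment — recording task durations and failure counts so they can be charted and alerted on — can be sketched with a small stand-alone decorator. The in-memory dictionaries here are stand-ins for metrics that would actually be shipped to CloudWatch, Datadog, or Prometheus; the task name and function are hypothetical:

```python
import time
import functools
from collections import defaultdict

# In-memory stand-ins for metrics a real deployment would export
# to CloudWatch, Datadog, or Prometheus.
task_durations: dict = defaultdict(list)
task_failures: dict = defaultdict(int)

def observed(task_name: str):
    """Record duration and failure count for a pipeline task."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            except Exception:
                task_failures[task_name] += 1
                raise
            finally:
                task_durations[task_name].append(time.monotonic() - start)
        return wrapper
    return decorator

@observed("load_claims")
def load_claims(rows: list) -> int:
    """Hypothetical load step: fails on an empty batch."""
    if not rows:
        raise ValueError("empty batch")
    return len(rows)
```

The same pattern maps directly onto a Prometheus histogram plus a failure counter, or onto custom CloudWatch metrics emitted per task run.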

The ideal candidate is a DataOps/platform engineer with 5+ years building and operating regulated data pipelines in a HIPAA-compliant AWS environment. They are strong in CI/CD for data workflows, pipeline orchestration with Apache Airflow and AWS Step Functions, and observability using CloudWatch, Datadog, Prometheus, and Grafana—while enforcing data quality, governance, and PII/PHI protections using IAM/RBAC and encryption.

Skills & Technologies: HIPAA-compliant regulated cloud environments; Apache Airflow; AWS Step Functions; CI/CD pipelines for data workflows; Python; SQL; AWS (S3, Glue, Lambda, Step Functions, Redshift); Databricks; observability and monitoring (CloudWatch, Datadog, Prometheus, Grafana); data quality, validation, and governance; PII/PHI protection, encryption, and access controls (IAM/RBAC); data lake/lakehouse architectures; secure data processing; incident response, root cause analysis, and failure recovery; healthcare data ingestion (APIs, claims, EHR data)
Collaboration: work with data engineers, architects, and business teams; drive operational excellence; improve efficiency and optimize reliability; support incident response; promote best practices
Industry Healthcare IT
Job Function Operate and improve secure, monitored, production data pipelines using DataOps across AWS and Databricks.
Role Subtype DevOps Engineer
Tech Domains Amazon Web Services, DevOps & SRE, Docker, Python, SQL / PostgreSQL, Kubernetes, Azure, Databricks

  • Minimum 5+ years of Data Engineering / DataOps / Platform Engineering experience
  • Strong experience with Python and SQL for data processing and automation
  • Hands-on experience with AWS data services (S3, Glue, Lambda, Step Functions, Redshift)
  • Ability to build and operate data pipeline orchestration and monitoring in a HIPAA-compliant environment
