Position Details
About this role
Design and implement Python-based automation to support reconciliation, including robust ETL pipelines for data integration. Build, test, and deploy scalable workflows in cloud and enterprise data environments.
Key Responsibilities
- Design, develop, and maintain ETL pipelines
- Refactor and validate Python automation with test cases
- Implement scalable solutions using Python with Databricks and Apache Airflow
- Troubleshoot pipeline issues and ensure data integrity
- Collaborate with stakeholders to translate requirements into maintainable solutions
Technical Overview
The role centers on Python automation and ETL workflows that extract, transform, and load data into an integrated platform. The stack includes Databricks and Apache Airflow; AWS services such as AWS Lambda and Amazon S3; and SQL database development, maintenance, and administration. The role also covers testing and troubleshooting to ensure data integrity across systems.
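As an illustration of the kind of extract-transform-load and reconciliation workflow described above, here is a hypothetical, minimal sketch using only the Python standard library. The function names (`extract`, `transform`, `load`, `reconcile`) and sample records are illustrative assumptions, not code from the employer's systems:

```python
# Hypothetical minimal ETL + reconciliation sketch (standard library only).
# All names and data here are illustrative, not the employer's actual code.
from decimal import Decimal

def extract(raw_rows):
    """Extract: yield raw records from a source (here, an in-memory list)."""
    for row in raw_rows:
        yield row

def transform(rows):
    """Transform: normalize amounts to Decimal and strip whitespace from IDs."""
    for row in rows:
        yield {"id": row["id"].strip(), "amount": Decimal(str(row["amount"]))}

def load(rows):
    """Load: index records by ID in a target store (here, a plain dict)."""
    return {row["id"]: row["amount"] for row in rows}

def reconcile(source_a, source_b):
    """Reconciliation: report IDs whose amounts disagree between two stores."""
    mismatches = {}
    for key in source_a.keys() | source_b.keys():
        if source_a.get(key) != source_b.get(key):
            mismatches[key] = (source_a.get(key), source_b.get(key))
    return mismatches

# Run the pipeline on sample data, then reconcile against a second source.
ledger = load(transform(extract([{"id": " A1 ", "amount": "10.50"},
                                 {"id": "B2", "amount": "3.25"}])))
bank = {"A1": Decimal("10.50"), "B2": Decimal("3.20")}
print(reconcile(ledger, bank))  # only B2 differs between the two stores
```

In a production setting each stage would typically be an Airflow task (or Databricks job) rather than a plain function call, but the stage boundaries stay the same.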
Ideal Candidate
The ideal candidate is a Python-focused data automation engineer with 3+ years of experience building and refactoring Python for ETL data pipelines and reconciliation workflows. They have strong SQL database experience (3+ years) and hands-on cloud data platform skills with Databricks and Apache Airflow, plus AWS Lambda and Amazon S3. They are a U.S. citizen able to pass a background investigation; DHS clearance (CBP/ICE) or DoD Top Secret clearance is preferred, and OCR/Document Understanding experience is a plus.
Deal Breakers
- 3+ years of experience with Python
- U.S. citizenship required (must be able to pass a background investigation by the client agency)