Position Details
About this role
Senior Data Engineer responsible for designing and delivering large-scale data processing solutions on cloud platforms. Will build and maintain data pipelines for ingestion and processing, mentor teammates, and collaborate across functions to enable data-driven capabilities.
Key Responsibilities
- Design frameworks for large-scale data processing
- Develop data pipelines on Google Cloud Platform using Dataflow and Dataproc
- Ingest data via REST APIs / Microservices and define workflows
- Collaborate with cross-functional teams to deliver data solutions
- Mentor team members on best practices
Technical Overview
Cloud-based data engineering, including Google Cloud Platform, BigQuery, Databricks, and Spark; data ingestion via REST APIs / microservices; CI/CD for data pipelines using Docker, GitHub, and Jenkins; programming in Java/Python; and SQL/NoSQL databases.
Ideal Candidate
The ideal candidate is a senior data engineer with 5+ years of experience building large-scale data pipelines on cloud platforms (GCP/AWS/Azure). They should be proficient in Java/Python, SQL/NoSQL, and modern data tooling like BigQuery, Databricks, Spark, and Kafka Streams, with a track record of delivering end-to-end data ingestion and processing solutions.
Deal Breakers
- Bachelor's degree in Computer Science or related field is required
- Minimum 5 years data engineering experience
- Lack of hands-on experience with GCP / BigQuery / Databricks
- No experience with Hadoop, Spark, or Kafka Streams
- Not willing to work onsite in Bolingbrook, Illinois