About this role
Staff Data Engineer responsible for owning business-critical data engineering processes and data architecture to enable analytical and reporting solutions for key NPI projects in the CES CTO organization. The role includes building data infrastructure, pipelines, and databases on the Miro platform, while ensuring quality, availability, and performance.
Key Responsibilities
- Own data infrastructure, pipelines & database implementations for a new solution being built on the Miro platform
- Ensure data quality, data availability & performance for required datasets
- Manage ETLs & data storage solutions and build data loading/transformation methods
- Design and build automated Extract, Transform & Load (ETL) jobs based on data mapping specifications and manage metadata structures
- Design robust data models and analyze impact of changes to downstream systems/products while recommending alternatives
Technical Overview
Designs and builds automated Extract, Transform & Load (ETL) jobs based on data mapping specifications, manages metadata and reusable ETL components, and creates robust data models. Works with tools and concepts spanning Informatica/Talend, data catalogs and lineage (Alation/Collibra), SQL across Oracle/MySQL/PostgreSQL, and production CI/CD plus infrastructure-as-code with observability (logging, metrics, alerting) and performance tuning.
Ideal Candidate
The ideal candidate is a senior-to-staff level data engineer with proven experience designing and operating production-grade ETL/ELT pipelines and robust data models. They have hands-on SQL (Oracle, MySQL, PostgreSQL), programming experience in Java/Python/Scala, exposure to Informatica or Talend, and experience with data catalog/lineage tools like Alation or Collibra, plus CI/CD and infrastructure-as-code practices.
Must-Have Skills
- Own data infrastructure, pipelines & database implementations
- Ensure data quality, data availability & performance
- Manage ETLs & data storage solutions
- Design robust data models
- Design and build automated Extract, Transform & Load (ETL) jobs
- Manage metadata structures for reusable Extract, Transform & Load (ETL) components
- Analyze impact of changes to downstream systems/products
- Exposure to Extract, Transform & Load (ETL) tools like Informatica or Talend
- Exposure to industry-standard data catalog, automated data discovery, and data lineage tools (e.g., Alation, Collibra)
- Hands-on experience in programming languages like Java, Python, or Scala
- Hands-on experience writing SQL scripts for Oracle, MySQL, PostgreSQL
- Proven experience designing, building, and operating production-grade ETL/ELT pipelines
- Expertise in data modeling for analytics and operational workloads
- Experience with CI/CD, infrastructure-as-code, logging, metrics, alerting, and performance tuning for data systems
- Demonstrated ability to design and document complex data architectures and lead others
Tools & Platforms
Miro, Informatica, Talend, Alation, Collibra, Java, Python, Scala, SQL, Oracle, MySQL, PostgreSQL, CI/CD, infrastructure-as-code, logging, metrics, alerting
Required Skills
data infrastructure, data pipelines, database implementations, data quality, data availability, data performance, ETL (Extract, Transform & Load), ELT, data transformation, data mapping specifications, metadata structures, data models, Informatica, Talend, data catalog, automated data discovery, data lineage, Alation, Collibra, Java, Python, Scala, SQL scripts, Oracle, MySQL, PostgreSQL, CI/CD, infrastructure-as-code, logging, metrics, alerting, performance tuning, Miro platform
Hard Skills
data infrastructure, data pipelines, database implementations, data quality, data availability, data performance, ETLs, data storage solutions, data loading methods, data transformation methods, Extract, Transform & Load (ETL) jobs, data mapping specifications, metadata structures, reusable ETL components, data models, data architecture, production-grade ETL/ELT pipelines, relational databases, non-relational databases, data modeling tools, Informatica, Talend, data catalog, automated data discovery, data lineage tools, Alation, Collibra, Java, Python, Scala, SQL, Oracle, MySQL, PostgreSQL, CI/CD, infrastructure-as-code, logging, metrics, alerting, performance tuning, Miro platform, data ecosystems (infrastructure, pipelines, database implementations), NPI projects, compliance, security, operational stability of data pipelines
Soft Skills
Partnering with PTs & developers, understanding data needs, collaborating with infra and security teams, leading others through implementation, analyzing the impact of changes on downstream systems/products, recommending alternatives
Keywords for Your Resume
Staff Data Engineer, data engineering processes, data architecture, data infrastructure, data pipelines, database implementations, data quality, data availability, data performance, ETL (Extract, Transform & Load), ELT, data storage solutions, data transformation, data mapping specifications, metadata, reusable ETL components, data models, production-grade ETL/ELT pipelines, data modeling tools, Informatica, Talend, data catalog, automated data discovery, data lineage, Alation, Collibra, Java, Python, Scala, SQL scripts, Oracle, MySQL, PostgreSQL, CI/CD, infrastructure-as-code, logging, metrics, alerting, performance tuning, Miro platform, NPI projects
Deal Breakers
- Hands-on experience writing SQL scripts for Oracle, MySQL, PostgreSQL
- Proven experience designing, building, and operating production-grade ETL/ELT pipelines
- Exposure to Extract, Transform & Load (ETL) tools like Informatica or Talend
- Exposure to data catalog and data lineage tools such as Alation or Collibra
- Legal authorization to work in the U.S. is required; employer will not sponsor employment visas