Position Details
About this role
This role focuses on developing explainable AI methods for foundation models used on biological and omics data. You will build benchmarks and evaluations, create fit-for-purpose datasets, and publish research while working cross-functionally with computational biology, data science, and engineering teams.
Key Responsibilities
- Extract biological insights from foundation models and omics data
- Develop post-hoc and intrinsic explainability methods
- Build rigorous benchmarks and evaluation tasks
- Fine-tune, evaluate, and debug AI models and data at scale
- Publish in relevant conferences and journals
Technical Overview
You will work with transformer-based and related state-space foundation models for omics data, developing both post-hoc and intrinsic explainability techniques. The role emphasizes graph neural networks; large-scale model fine-tuning, evaluation, and debugging; and rigorous benchmark-driven assessment implemented in Python with the PyTorch ecosystem.
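To make "post-hoc explainability" concrete, here is a minimal sketch of one such method, Integrated Gradients, on a toy model with an analytic gradient. In practice this would run against a PyTorch foundation model (e.g., via a library such as Captum); the model, weights, and function names below are illustrative assumptions, not part of the role description.

```python
# Sketch of a post-hoc attribution method (Integrated Gradients) on a toy
# model. Everything here is a simplified stand-in for a real PyTorch model.

def model(x, w):
    # Toy "model": weighted sum of squares, F(x) = sum_i w_i * x_i^2
    return sum(wi * xi * xi for wi, xi in zip(w, x))

def grad(x, w):
    # Analytic gradient of the toy model: dF/dx_i = 2 * w_i * x_i
    # (a real model would use autograd instead)
    return [2 * wi * xi for wi, xi in zip(w, x)]

def integrated_gradients(x, baseline, w, steps=256):
    # IG_i = (x_i - b_i) * average of dF/dx_i along the straight path
    # from the baseline b to the input x
    avg_grads = [0.0] * len(x)
    for k in range(steps):
        alpha = (k + 0.5) / steps  # midpoint rule over the path
        point = [bi + alpha * (xi - bi) for xi, bi in zip(x, baseline)]
        g = grad(point, w)
        for i in range(len(x)):
            avg_grads[i] += g[i] / steps
    return [(xi - bi) * ai for xi, bi, ai in zip(x, baseline, avg_grads)]

x = [1.0, 2.0, -1.0]
baseline = [0.0, 0.0, 0.0]
w = [0.5, 1.0, 2.0]
attrs = integrated_gradients(x, baseline, w)

# Completeness axiom: attributions sum to F(x) - F(baseline),
# which is what makes this a principled per-feature attribution.
delta = model(x, w) - model(baseline, w)
```

The completeness check is the key property evaluated when benchmarking attribution methods: the per-feature scores must account exactly for the model's output change relative to the baseline.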
Ideal Candidate
The ideal candidate is a senior-level AI researcher with a PhD (or equivalent MS/BS experience) in a STEM field and strong experience applying foundation models to bioinformatics and omics data. They have hands-on expertise in explainable AI, including post-hoc and intrinsic explainability, and can build rigorous benchmarks and evaluation pipelines using Python and the PyTorch ecosystem.
Deal Breakers
- PhD, MS, or BS in a related STEM field (as specified)
- Proven experience with Python and the PyTorch ecosystem
- Expertise with graph neural networks