Position Details
About this role
This role leads AI evaluation and safety efforts within the AI Governance team, focusing on developing methodologies for responsible AI deployment and oversight.
Key Responsibilities
- Lead AI evaluation efforts
- Develop safety methodologies
- Conduct red team exercises
- Collaborate with external researchers
- Translate technical findings into recommendations
Technical Overview
The position involves working with AI frameworks, conducting red team exercises, and applying causal inference and experimental design to evaluate the safety and performance of AI systems.
Ideal Candidate
The ideal candidate is a senior AI researcher or scientist with extensive experience in AI evaluation, safety, and governance. They possess strong leadership skills, hands-on familiarity with modern AI frameworks, and a track record of advancing AI safety methodologies.
Deal Breakers
- Lack of experience in AI evaluation or safety
- No background in AI governance
- No leadership or mentorship experience
- Bachelor's degree only, without relevant experience