Position Details
About this role
The Principal AI Testing & Monitoring role leads validation and monitoring programs that ensure AI/ML models meet enterprise standards for robustness, fairness, transparency, and regulatory compliance. The role partners with Data Science, Engineering, Legal, Compliance, Risk, and IT to govern AI across the model lifecycle.
Key Responsibilities
- Design, develop, and execute AI/ML model validation frameworks across use cases
- Conduct bias audits, adversarial testing, and stress testing
- Apply statistical testing, benchmarking, and explainability (XAI)
- Develop enterprise AI monitoring frameworks
- Integrate monitoring insights into governance dashboards
Technical Overview
Develops and executes independent validation frameworks, bias audits, adversarial testing, and stress testing, and builds AI monitoring and observability capabilities. Applies explainability (XAI) techniques, synthetic data generation, and risk-based remediation aligned with RUAI principles and regulatory requirements.
Ideal Candidate
The ideal candidate is a senior AI governance expert with 5+ years in AI/ML validation and monitoring, deep experience with bias, adversarial, and stress testing, and strong knowledge of regulatory frameworks (EU AI Act, GDPR, CCPA, NIST RMF). They excel at cross-functional collaboration with legal, risk, and data science teams and can translate complex AI risk findings into actionable guidance.
Deal Breakers
- Lack of 5+ years in independent AI model testing/validation
- No experience with AI governance or regulatory frameworks
- Inability to work with cross-functional teams (legal/compliance/risk)