Position Details
About this role
This role involves building and scaling AI evaluation frameworks and agent architectures at Lattice, focusing on large language models and production AI systems. The candidate will define evaluation metrics, optimize system performance, and collaborate across teams.
Key Responsibilities
- Design AI evaluation frameworks
- Architect reusable agent infrastructure
- Collaborate on evaluation strategies
- Build RAG pipelines
- Optimize AI system performance
Technical Overview
The position requires hands-on experience with LLM-based systems, prompt engineering, RAG pipelines, and agent orchestration. The environment emphasizes reliability, observability, and performance in AI/ML infrastructure.
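To give a concrete sense of the evaluation work described above, here is a minimal sketch of an LLM evaluation harness. All names, metrics, and test cases are illustrative assumptions, not Lattice's actual stack; a production framework would track many metrics, model versions, and traces.

```python
# Illustrative sketch: run test cases through a model function and
# aggregate a simple exact-match accuracy metric.

def evaluate(model_fn, cases):
    """Score model_fn on (prompt, expected) pairs via exact match."""
    results = []
    for prompt, expected in cases:
        output = model_fn(prompt)
        results.append({
            "prompt": prompt,
            "output": output,
            "pass": output.strip() == expected.strip(),
        })
    accuracy = sum(r["pass"] for r in results) / len(results)
    return {"accuracy": accuracy, "results": results}

# Usage with a stand-in "model" (a real run would call an LLM API):
fake_model = {"2+2?": "4", "Capital of France?": "Paris"}.get
report = evaluate(fake_model, [
    ("2+2?", "4"),
    ("Capital of France?", "Berlin"),  # deliberately wrong expectation
])
```

Here `report["accuracy"]` is 0.5, since one of the two cases matches; real frameworks layer richer scoring (semantic similarity, LLM-as-judge) on the same loop.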
Ideal Candidate
The ideal candidate is an AI engineer with at least 5 years of experience developing and deploying production AI/ML systems, particularly those built on large language models. They are skilled in prompt engineering, evaluation, and agent orchestration, and bring a strong analytical mindset.
Deal Breakers
- Lack of experience with LLM systems
- No background in AI/ML engineering
- Inability to work with data and statistics
- No experience with production AI systems