Generative AI is reshaping how customers find answers and solve problems, but how do you know whether a response can be trusted?
At Salesforce, accuracy isn’t an afterthought; it’s foundational. Through our own journey with Agentforce on Help, we’ve developed a practical, repeatable framework to evaluate and continuously improve generative AI performance.
In this session, our Agentforce experts will share:
- How to define and measure “answer quality” in generative AI
- The framework to evaluate and improve Agentforce responses at scale
- How human-in-the-loop (HITL) review and AI work together to drive continuous learning and accuracy
You’ll leave with a clear framework for evaluating your own AI agents and for ensuring every customer interaction is accurate, consistent, and built on trust.
Featured Speakers
Emily Dunn
Program Strategy Manager, Agentforce Evaluations
Salesforce
Maya Robles-Wong
Director, Agentforce Evaluations
Salesforce
Cristina Mondini
Director, Digital Success
Salesforce