Here’s What a Broken Conversation with AI Tells You and How Design Is the Fix

Take a note from human communication patterns to learn how to spot potential trouble sources in AI interactions.
When misunderstandings happen in conversation, what do you do? Humans fix them naturally and quickly, usually within a few turns. But if a misunderstanding takes more than six turns to repair, the conversation may start to fall apart or stop altogether.
What happens in human-to-AI interactions? There, too, it matters that when communication breaks down, there’s a way to clear things up and get back on track. We can use conversation design and what we understand about human turn-taking patterns to shape how AI recognizes the cues to repair misunderstandings.
Let’s dig a little deeper to learn how to spot trouble sources and explore best practices for ensuring AI works for humans.
What we’ll cover:
- Why conversations break down
- Ways to assess AI interactions for trouble sources
- Even good AI agents hit snags
- Tips for managing three breakdown scenarios
- How to design a topic in Agent Builder
- Repair broken AI conversations quickly
Why conversations break down
In human conversations, we take turns. If one person doesn’t understand something the other has said, they typically signal it by asking for clarification. If the conversation doesn’t get back on track within six turns, the risk of frustration and even abandonment increases.
What is a trouble source?
In conversation analysis, a trouble source is the exact moment something in the conversation causes a misunderstanding or breaks the flow – such as a confusing word, phrase, or idea.
Research shows that repair attempts between humans usually happen within a few turns. A repair is when we clarify and fix misunderstandings quickly. Let’s look at an example of a turn-taking interaction between F1 engineer Xavi Marcos and Ferrari driver Charles Leclerc. The repair begins in turn two when the driver signals trouble in understanding.
Turn | Utterance | Repair role |
---|---|---|
1 | Engineer: “And try original line, turn seven eight for comparison.” | Trouble source |
2 | Driver: “What?” | Repair initiated to fix the problem |
3 | Engineer: “Try original line turn seven and eight.” | Repetition of trouble source |
4 | Driver: “I don’t understand. Horizontal line? What the hell is that?” | Repair attempt to clarify the problem |
5 | Engineer: “Original line, like the beginning of the race.” | Repair attempt by rephrasing |
6 | Driver: “Original line, you said?” | Repetition of trouble source term to confirm understanding |
7 | Engineer: “Original, yes.” | Confirmation, but trouble continues |
8 | Driver: “What the hell does that mean?” | Repair attempt expressing confusion and frustration |
9 | Engineer: “Just forget it; it’s last lap.” | Repair abandonment |
Imagine the cognitive load during this 40-second interaction – the mental effort of rephrasing, repeating, and repairing. In high-pressure contexts such as driving at 200 miles per hour (322 km/h) during a Formula 1 race, communication between the engineer and driver needs to be smooth. Wasting time and increasing cognitive load leads to frustration and can affect the outcome of the race.
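To make the turn-taking structure easier to analyze, here’s a minimal sketch in Python that models the exchange above as annotated data and measures how long the repair sequence ran. The Turn fields and repair-role labels are illustrative assumptions, not a standard conversation-analysis schema.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    number: int
    speaker: str       # "engineer" or "driver"
    utterance: str
    repair_role: str   # e.g. "trouble source", "repair attempt", "abandonment"

# The F1 exchange from the table above, annotated with repair roles.
transcript = [
    Turn(1, "engineer", "And try original line, turn seven eight for comparison.", "trouble source"),
    Turn(2, "driver", "What?", "repair initiated"),
    Turn(3, "engineer", "Try original line turn seven and eight.", "repetition"),
    Turn(4, "driver", "I don't understand. Horizontal line? What the hell is that?", "repair attempt"),
    Turn(5, "engineer", "Original line, like the beginning of the race.", "rephrasing"),
    Turn(6, "driver", "Original line, you said?", "confirmation check"),
    Turn(7, "engineer", "Original, yes.", "confirmation"),
    Turn(8, "driver", "What the hell does that mean?", "repair attempt"),
    Turn(9, "engineer", "Just forget it; it's last lap.", "abandonment"),
]

def repair_span(turns: list[Turn]) -> int:
    """Count turns from the trouble source to the end of the repair sequence."""
    start = next(t.number for t in turns if t.repair_role == "trouble source")
    return turns[-1].number - start + 1

print(repair_span(transcript))  # 9 turns: well past the six-turn threshold
```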
Ways to assess AI interactions for trouble sources
When testing and evaluating the performance, behavior, and reliability of AI, plan on reviewing about 12 turns to accurately spot key moments and identify patterns in the turn-taking structure and context:
- What the human and AI are trying to do.
- Where things break down or become messy.
- What conversation strategies develop throughout the extended exchange.
- How or if the conversation recovers.
The goal when designing experiences is to ensure repairs happen within six turns. But extending the evaluation of AI interactions to 12 turns provides better signals to judge whether the design is effective, catch when things go wrong, and see whether the conversation recovers and gets back on track or starts to fall apart.
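As a rough illustration, a 12-turn evaluation pass could look like the sketch below. It assumes a plain list of user utterances and simple keyword heuristics; the cue list and thresholds are assumptions for illustration, and a real evaluation would rely on richer signals.

```python
# Crude keyword cues that often accompany repair initiation by users.
REPAIR_CUES = ("what?", "i don't understand", "i meant", "no, i said", "that's not what")

def assess_window(user_turns: list[str], window: int = 12) -> dict:
    """Scan up to `window` user turns for repair cues and crude recovery signals."""
    recent = [t.lower() for t in user_turns[:window]]
    repair_turns = [i + 1 for i, t in enumerate(recent)
                    if any(cue in t for cue in REPAIR_CUES)]
    return {
        # Where things break down or become messy.
        "repair_turns": repair_turns,
        # Did the whole repair sequence fit inside six turns?
        "repaired_within_six": bool(repair_turns)
                               and max(repair_turns) - min(repair_turns) < 6,
        # Did the conversation continue past the last repair attempt?
        "recovered": bool(repair_turns) and repair_turns[-1] < len(recent),
    }
```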
Different companies or organizations will have varying business needs or requirements when handling potential breakdowns. Let’s look at practical examples to see how we could design effective repair strategies for common scenarios.
Even good AI agents hit snags
Imagine an AI agent that answers common questions about a product or service and schedules meetings on your behalf. It sounds like a dream for a business owner or a seller at an enterprise company. However, even a well-scoped agent will encounter common breakdown scenarios:
- Knowledge gaps: an inquiry that falls outside the agent’s knowledge base.
- Business policy constraints: a request about a discount that requires a human seller.
- Technical failures: meeting scheduling fails because of an API issue or an incomplete calendar setup.
Tips for managing three breakdown scenarios
It’s important to understand agent behavior and limitations, as well as how the actions and tools in Agent Builder generate outputs, because those action outputs guide the agent’s response. This helps you anticipate where breakdowns can occur and design appropriate fallback responses.
Design guardrails for AI experiences
You can design a fallback with human handoff and build it directly into the Agent Builder topic instructions as a standard response pattern, as in the examples below (a code sketch of the same routing logic follows the list).
- Knowledge gaps: “When encountering inquiries or questions outside the Answer Questions with Knowledge action, you must tell the customer that they could directly reach out to the seller, who can provide them with the needed information.”
- Business policy constraints: “You must not make any commitments regarding discounts, promotions, pricing, or additional costs. Any inquiries about discounts, pricing, or quotes must be handed off to the seller.”
- Technical failures: “If you encounter an error while getting the Return Calendar Link action, you must call the Get Record Detail action using the Seller Id to get the seller’s email address. Then you must tell the customer that they could directly reach out to the seller. You must always mention the seller’s name and their actual email address in the agent response.”
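To make the routing logic behind those instructions explicit, here’s a minimal sketch of the same three fallbacks in code. The function, scenario, and parameter names are hypothetical, not Agent Builder APIs; in Agent Builder itself, the natural-language instructions above do this work.

```python
def fallback_response(scenario: str, seller_name: str, seller_email: str) -> str:
    """Map each breakdown scenario to a standard fallback with human handoff."""
    if scenario == "knowledge_gap":
        # Inquiry falls outside the agent's knowledge base.
        return (f"I don't have that information, but {seller_name} can help. "
                f"You can reach them directly at {seller_email}.")
    if scenario == "policy_constraint":
        # Discounts, pricing, and quotes always go to a human seller.
        return (f"Questions about discounts or pricing are handled by {seller_name}. "
                f"You can contact them at {seller_email}.")
    if scenario == "technical_failure":
        # e.g. the calendar-link action errored; fall back to the seller's email.
        return (f"I wasn't able to schedule that meeting. Please reach out to "
                f"{seller_name} at {seller_email} to set up a time.")
    raise ValueError(f"Unknown breakdown scenario: {scenario}")
```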
Use a structured approach to get feedback
What happens when breakdown patterns emerge repeatedly? Repetition and rephrasing are clear signals that repair is happening. You want to understand conversation breakdown patterns so you can improve the user experience and functionality.
This is when you can use a proactive, structured approach to capture user feedback. Specifically, you want to get consent to log the conversation when these patterns emerge:
- Users start rephrasing requests.
- AI agents apologize and repeatedly clarify.
- Conversations become repetitive.
- Unpleasant or negative language appears.
Sometimes human handoff isn’t possible, so logging conversations becomes a solution to capture interactions and improve the experience.
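Here’s a minimal sketch of what a consent-to-log trigger built on those four signals could look like. The signal names and thresholds are illustrative assumptions, not fixed rules.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    user_rephrase_count: int      # users start rephrasing requests
    agent_apology_count: int      # agent apologizes and repeatedly clarifies
    repeated_exchange_count: int  # conversation becomes repetitive
    negative_language: bool       # unpleasant or negative language appears

def should_request_logging_consent(s: SessionSignals) -> bool:
    """Ask for consent to log once breakdown signals accumulate."""
    return (s.user_rephrase_count >= 2
            or s.agent_apology_count >= 2
            or s.repeated_exchange_count >= 2
            or s.negative_language)
```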
How to design a topic in Agent Builder
To collect proactive feedback and get consent to log the conversation, you can design a topic in Agent Builder that detects broken AI conversation patterns using specific engagement signals and detection rules.
What this looks like in a sample conversation:
Turn | What happens |
---|---|
1 | User asks the agent to do something. |
2 | Agent responds with a dispreferred response. (trouble source) |
3 | User rephrases, repeats the initial request, or tries to repair the conversation. |
4 | Agent responds with another dispreferred response. |
5 | User either continues trying to repair the conversation or indicates frustration or dissatisfaction. |
6 | Agent acknowledges that the last few messages might not be meeting the user’s needs, asks for feedback on how it can improve the experience, and then gets the user’s consent to log the conversation. |
Engagement signals: A high level of rephrasing, repetition, or repair within a session could mean the session wasn’t successful, or it can indicate misunderstanding or dissatisfaction.
Pattern detection rule: Include a rule, such as, “When a user has rephrased or repeated their request more than twice, uses frustrated or dissatisfied language, or when you’ve provided two consecutive clarification responses without successfully addressing their core need, you must acknowledge the difficulty and ask specifically how you could better help them. If the issue remains unresolved after your targeted attempt to help, get user consent to log the conversation.”
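To make the rule’s logic explicit, here’s a rough sketch of the same decision flow in code. Agent Builder takes the rule as natural-language instructions; the counters and return values here are assumptions for illustration.

```python
def next_agent_move(rephrase_count: int,
                    consecutive_clarifications: int,
                    frustrated_language: bool,
                    targeted_help_attempted: bool) -> str:
    """Mirror the detection rule: acknowledge, offer targeted help, then ask consent to log."""
    breakdown = (rephrase_count > 2                    # rephrased or repeated more than twice
                 or frustrated_language                # frustrated or dissatisfied language
                 or consecutive_clarifications >= 2)   # two clarifications without resolution
    if not breakdown:
        return "continue_normally"
    if not targeted_help_attempted:
        # First: acknowledge the difficulty and ask how you could better help.
        return "acknowledge_difficulty_and_ask_how_to_help"
    # Still unresolved after the targeted attempt: request consent to log.
    return "request_consent_to_log_conversation"
```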
Repair broken AI conversations quickly
AI experiences should be designed with relevant grounding data, thoroughly tested to ensure they understand the diverse ways users express terminology and concepts, and engineered to resolve misunderstandings and ambiguities in as few turns as possible.
Remember these points when designing AI:
- Trust and retention – If the AI doesn’t understand what the user says after a couple of tries, the user may lose confidence and stop trusting it. So we need to be strategic and intentional in designing AI behavior.
- Reduce frustration – A high level of rephrasing and repetition within a multi-turn interaction signals that a session isn’t successful and can indicate misunderstanding or dissatisfaction. Users don’t want to rephrase more than three times; that kind of rework feels like hard work that can become irritating and end in abandonment.
- Time to value – If users need long multi-turn interactions to achieve what they need, the experience feels slow even when the AI responds quickly. So it’s important to resolve issues in as few turns as possible.
In human-to-human conversations, not every misunderstanding or ambiguity needs fixing. We choose our battles – like when Marcos, the F1 engineer, said, “Just forget it; it’s last lap.”
For human-to-AI experiences, however, the interactions need to be successful. So knowing when to address broken AI conversations and when to anticipate trouble sources is essential.