The Best Way To Build AI Agents That Customers Trust
Think giving your AI agent character is just a marketing gimmick? It's actually a smart move to protect your brand.
How do you learn to trust someone? You watch whether they behave in consistent ways, take responsibility for their actions, and act in alignment with their values. In other words, you figure out if they've got good character.
Guess what? It's the same with AI agents. If you want customers and employees to use your artificial intelligence (AI) agents, those agents need to be trustworthy and predictable. Trust is so important that it's driving companies to increasingly embed character-driven AI in their agents - defining what an agent sounds and acts like, and what values it embodies.
Character-driven AI is not just a smart branding move or a conversational AI best practice. It's critical for brand safety. When an agent goes off script - or simply feels "off" - it can, at best, break user trust. At worst, it can tank a company's reputation.
What exactly is character-driven AI, and how can companies use it to create agents that build loyalty and trust? Let's explore some good and not-so-good examples, and look at how you can apply conversational AI best practices to your next agent.
Mention the word "character" and most people think of personality, which makes sense. Giving your agent a name and fun (or otherwise business-appropriate) persona can be a great marketing move. But that definition of character hails from several iterations ago. "When we first started designing bots and AI assistants, we mainly defined personality to mean voice and tone," said Yvonne Gando, senior director of user interface and user experience at Salesforce.
Now, she said, the goal is "to make the actual behavior more durable and consistent, which means you have to have a system of decisions that cover all the different use cases and how everything can go wrong. It's not about personality in the marketing sense; it's about character in the systems sense."
Think of character-driven AI as your agent growing into a mature adult. When agents were introduced a few years ago, they were like eager-beaver interns, wanting to help with everything. They were also loose cannons. Now, they're developing into senior leadership, consistently on-brand and aligned with company values.
Put simply, character-driven AI is about making sure your agent behaves the way the company intends, every time and at every customer touchpoint, no matter how much you scale.
It's not about personality in the marketing sense; it's about character in the systems sense.
For proof of how important good character is, consider what happened to an international delivery service when its AI-powered chatbot went off script. After prompting from a frustrated customer, the chatbot used profanity, told a joke, wrote poetry about how useless it was, and criticized the company as the "worst delivery firm in the world."
Most companies aren't eager for that kind of press - or the bigger problem it presents. When users don't trust an agent, they stop using it. And they may decide not to buy your goods or services, either.
If character is not well-defined, an agent can also slip into contextual drift, or stray from its original instructions and script. That was the case for Anthropic's Project Vend experiment, when the company asked its AI agent, Claudius, to run an office vending machine. Claudius took orders and restocked the machine. But over time, it made increasingly bizarre decisions, at one point claiming it would hand-deliver items to employees wearing a "blue blazer and red tie."
Likewise, people can lose trust when an agent that's supposed to be helpful only ends up disrupting their workflows. If a nurse asks an agent to identify medication interactions, for example, but the agent can't access the patient's files, the nurse may grow frustrated and figure out the interactions on their own. "Those are the silent killers of adoption, those teeny-tiny micro moments where trust is broken," said Gando. "People just want to get stuff done."
Done well, character-driven AI reflects the personality of your business and creates trust with customers and employees. A banking agent that communicates in a buttoned-up, straightforward manner can give customers a measure of assurance that their money is safe. And when a surfboard company's agent's vibe is all sun and sand, customers are probably psyched to hit the beach.
Get character right and customers will use your agent. Get it wrong and customers notice.
If there's one company that's nailed character, it's Duolingo. Though it's not an agent, the AI-powered app is always on-brand (playful and quirky), including the sarcastic but friendly daily nudges from its mascot, Duo. Duolingo's character is so engaging and consistent, the app has more than 50 million daily active users.
But you can also define character by how well an agent responds to various situations. Ben Richards, managing director, Canada customer growth and transformation at Salesforce, pointed to the example of a junk removal company. One customer may call and tell the company's agent, "Hey, I just won the lottery, I need you to clear out all the junk in my house because I'm replacing it." Another may say their mother passed away and they need help clearing out the house.
"The way the AI responds in those scenarios tells you a lot about the way you build character in your organization," Richards said. "But it's also about guardrails. You don't want your agent to forget halfway through a conversation that someone's mother passed away. So, when you build guardrails, it's very important that they're not just about actions, but the way you converse and build the experience."
If you want your AI agent to exhibit good, solid character - from which it will not stray - Gando said it needs to be built upon these four layers:
- Intent. What is the agent trying to accomplish?
- Communication style. What tone, pacing, and words would you like the agent to use?
- Decision boundaries. When should the agent act, ask more questions, or escalate the conversation to a human?
- Trust guardrails. How will the agent handle uncertainty, authority, or risk?
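To make the four layers concrete, here is one way they could be sketched in code. This is a minimal illustration, not an implementation from the article: every field name, threshold, and topic below is a hypothetical example of how a team might encode intent, style, decision boundaries, and guardrails for review.

```python
# A hypothetical character spec covering the four layers.
# All values are illustrative assumptions, not a real product config.
CHARACTER_SPEC = {
    "intent": "Help customers schedule junk-removal appointments",
    "communication_style": {
        "tone": "warm and direct",
        "pacing": "short sentences",
        "avoid": ["slang", "profanity", "jokes at the company's expense"],
    },
    "decision_boundaries": {
        # Act on high confidence, ask clarifying questions in the middle,
        # escalate to a human below the floor.
        "act_at_or_above": 0.8,
        "ask_at_or_above": 0.5,
    },
    "trust_guardrails": {
        # Sensitive topics always go to a human, regardless of confidence.
        "escalate_topics": ["bereavement", "legal dispute"],
    },
}

def next_step(confidence: float, topic: str, spec: dict = CHARACTER_SPEC) -> str:
    """Decide whether the agent should act, ask, or escalate."""
    bounds = spec["decision_boundaries"]
    guard = spec["trust_guardrails"]
    if topic in guard["escalate_topics"]:
        return "escalate"          # trust guardrail overrides confidence
    if confidence >= bounds["act_at_or_above"]:
        return "act"
    if confidence >= bounds["ask_at_or_above"]:
        return "ask"
    return "escalate"
```

The point of a spec like this is that the agent's behavior becomes a reviewable system of decisions - character in the systems sense - rather than a voice-and-tone guide that drifts over a long conversation.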
With these layers in mind, here is how to build an AI agent that is character-driven and consistent, and embraces conversational AI best practices every time.
People just start building without having defined brand attributes - even if it's just voice and tone guidelines - and then they're surprised when the agent doesn't do what they want it to do.