
Building Generative AI We Can Trust

2022 will be remembered as an inflection point for generative AI — AI that doesn’t just classify or predict, but creates content of its own, be it text, imagery, video, or even executable code. And it does so with a human-like command of language.

It was the year that large foundation models (deep learning models pre-trained on vast and varied data sets, capable of transferring knowledge from one task to another) like ChatGPT, Stable Diffusion, and Midjourney attracted widespread attention by delivering capabilities that would have seemed like science fiction only years ago.

But despite their flashy output, these models’ most striking feature might be their sheer flexibility. Unlike even the most advanced machine learning models of the last decade, foundation models appear capable of producing truly incredible content and solving a staggering range of problems — from writing poetry and explaining physics, to solving riddles and painting pictures. And with additional training and human guidance, their applicability can be extended even further.

What do foundation models mean for the future of work?

As a technologist and researcher, I’ve long believed that AI won’t just enhance the way we live, but transform it fundamentally. In particular, the more I explore Conversational AI, the more convinced I am that it will increasingly dissolve the correlation between a technology’s power and the expertise required to harness it. AI is placing tools of unprecedented power, flexibility, and even personalization into everyone’s hands, requiring little more than natural language to operate. These tools will assist us in many parts of our lives, taking on the role of superpowered collaborators.

For engineers, marketers, sales reps, and customer support specialists, the role of AI in day-to-day work will only grow in the coming years. At Salesforce, we’ve spent years embedding state-of-the-art AI within business applications spanning sales, service, marketing, and commerce, and today, our Customer 360 platform is generating more than 200 billion AI-powered predictions per day.

But is the revolutionary promise of generative AI truly at hand, or somewhere on the horizon? It’s a more complex topic than the press coverage might suggest. 

My role as Executive Vice President and Chief Scientist of Salesforce Research has given me a unique perspective on this. We operate within the world’s biggest companies, reaching billions of people in one form or another, and serve industries that touch every facet of society. That means everything we put in the hands of our customers has to offer mission-critical reliability as well — the kind that engenders lasting trust. 

And while no one denies the power of generative AI as a whole, our ability to trust it is another matter entirely.

The dual nature of generative AI: new capabilities paired with new risks

Generative AI promises an entirely new way to interact with machine intelligence, but it introduces what might be an entirely new kind of failure as well — confident failure. The poised, often professional tone these models exude when answering questions and fulfilling prompts makes their hits all the more impressive, but it makes their misses downright dangerous. Even experts are routinely caught off guard by their powers of persuasion.

For example, in December 2022, researchers at the University of Chicago and Northwestern University used ChatGPT to generate abstracts based on titles taken from real articles in five medical journals. Then, when given a mix of original and fictitious abstracts in a blinded review, expert reviewers misidentified 32% of those generated by ChatGPT as originals and incorrectly identified 14% of the originals as being generated by ChatGPT. 

This is a complex challenge, but it’s an easy one to understand. In a forthcoming paper, a team of cognitive scientists from MIT, UCLA, and UT Austin analyzed the models that power generative AI in terms analogous to the human brain. Their conclusion was twofold: foundation models like the large language models (LLMs) trained on textual data have mastered one aspect of our capacity for language — formal linguistic competence, which describes the complex but superficial ability to follow rules like grammar, conjugation, and word choice — while barely scratching the surface of functional linguistic competence, which refers to the use of language to express non-linguistic skills like knowledge of history, common sense, and reasoning ability. 

Simply put, LLMs are masters of language, but only language. Their ability to craft a perfectly written paragraph is, unsettlingly, entirely divorced from what the paragraph’s component sentences actually mean.

That’s a dangerous imbalance and one that simply must be rectified before these tools can play a mission-critical role in the real world.

Generative AI use cases vary — and so does the challenge of trust

But given the sheer scope of this technology’s potential across the enterprise, I’m certain this is a challenge worth confronting. After all, generative AI can help create complex content like artwork, prose, and even code; summarize information from a range of sources and deliver it in abbreviated form or through an interactive question-and-answer session; facilitate lifelike conversational search and information retrieval; help users better understand analytics; and more. These are revolutionary possibilities, and they’re well worth exploring.

Human intervention remains necessary in each of these use cases, but to varying degrees and at varying points in the process. For instance, a human editor may fact-check and refine AI-generated marketing copy before it’s put to use. In other cases, such as a non-designer relying on AI to design an ad banner or email layout, the feedback may be binary — accepted or not. In more advanced applications, AI can play a collaborative role, automating a project’s lower-level tasks so that experts can focus their creativity on the more sophisticated challenges.

Consider, for example, an IT administrator partnering with a code-generation model in the creation of a custom application. The AI can flesh out routine components and subsystems in response to simple prompts — for example, “organize the log files in directories named after each network, and automatically delete existing files over six months old” — while the human focuses on developing the kind of novel, problem-solving logic that delivers unique value.
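
As a rough sketch of what such a generated routine might look like, here is a short Python script for that prompt; the log directory, the filename convention (network name as the filename prefix), and the six-month cutoff are illustrative assumptions rather than actual model output:

```python
import shutil
import time
from pathlib import Path

LOG_ROOT = Path("/var/log/app")          # assumed location of the raw log files
SIX_MONTHS_SECONDS = 182 * 24 * 60 * 60  # roughly six months, expressed in seconds


def organize_and_prune(log_root: Path = LOG_ROOT) -> None:
    """Move each log file into a directory named after its network,
    then delete any log file older than roughly six months."""
    now = time.time()

    # Assume filenames look like "<network>-<date>.log"; take the prefix as the network name.
    for log_file in log_root.glob("*.log"):
        network = log_file.stem.split("-")[0]
        target_dir = log_root / network
        target_dir.mkdir(exist_ok=True)
        shutil.move(str(log_file), str(target_dir / log_file.name))

    # Prune files older than the cutoff, wherever they now live.
    for old_file in log_root.rglob("*.log"):
        if now - old_file.stat().st_mtime > SIX_MONTHS_SECONDS:
            old_file.unlink()


if __name__ == "__main__":
    organize_and_prune()
```

The point is that this is exactly the kind of routine plumbing a human would rather not write by hand, while the application's genuinely novel logic stays with the developer.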

Even when human oversight is an integral part of the workflow, there’s much we can do to make AI a safer and more transparent partner. Much of this begins with awareness, arming users with a better understanding of AI’s strengths and weaknesses. Algorithms can help as well; for instance, surfacing confidence values — the degree to which the model believes its output is correct — should become a standard part of AI-generated output. Lower-scored content may still have value, but human reviews can provide a deeper level of scrutiny. Providing explainability or citing sources for why and how an AI system created the content it did can also address issues of trust and accuracy.
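
A minimal sketch of what surfacing confidence values could look like in practice, assuming a hypothetical draft object that carries the model’s self-reported confidence score (this is not a specific Salesforce API):

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff below which a human review is requested


@dataclass
class GeneratedDraft:
    text: str
    confidence: float  # model's own estimate that its output is correct, in [0, 1]


def route_draft(draft: GeneratedDraft) -> str:
    """Attach the confidence score to the output and flag low-scoring drafts for review."""
    label = f"[model confidence: {draft.confidence:.0%}]"
    if draft.confidence < CONFIDENCE_THRESHOLD:
        return f"{label} NEEDS HUMAN REVIEW\n{draft.text}"
    return f"{label}\n{draft.text}"


# Example: a low-confidence draft is surfaced to a reviewer rather than published directly.
print(route_draft(GeneratedDraft(text="Q3 revenue grew 40%.", confidence=0.55)))
```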

And this isn’t merely a forward-looking question for researchers. Salesforce AI products are already built with trust and reliability in mind, including guardrails intended to help our customers make ethically informed choices.

For example, Einstein Discovery includes a feature called “sensitive fields” that allows an administrator to indicate input fields that may require restrictions on their use, or might add bias to a model, like age, race, or gender. In the most extreme cases, such fields can be excluded from use in the model altogether. Einstein Discovery can proactively identify fields that are highly correlated with the sensitive fields and therefore act as proxy variables — consider, for example, the way United States ZIP codes can be a proxy for race, even unintentionally. Once informed, admins can choose to exclude those fields from the model as well.
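
As an illustration of the underlying idea, not Einstein Discovery’s actual implementation, proxy candidates can be flagged by measuring how strongly each input field correlates with a sensitive field; the data, field names, and threshold below are hypothetical:

```python
import pandas as pd

# Toy training data; the column names and values are illustrative only.
df = pd.DataFrame({
    "age":        [23, 45, 36, 52, 29, 61],
    "zip_code":   [94105, 10001, 94105, 10001, 94105, 10001],
    "tenure_yrs": [1, 12, 5, 20, 3, 25],
})

SENSITIVE_FIELD = "age"
PROXY_THRESHOLD = 0.8  # assumed correlation cutoff for flagging a proxy variable


def flag_proxy_fields(data: pd.DataFrame, sensitive: str, threshold: float) -> list:
    """Return fields whose absolute correlation with the sensitive field exceeds the threshold."""
    correlations = data.corr()[sensitive].drop(sensitive)
    return correlations[correlations.abs() > threshold].index.tolist()


print(flag_proxy_fields(df, SENSITIVE_FIELD, PROXY_THRESHOLD))
# An admin could then choose to exclude both the sensitive field and its proxies from the model.
```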

We’re also more aware than ever of the responsibility we face in the creation of these tools — especially when it comes to the collection of training data. Machine learning models have long been capable of biased performance and toxic output, and this risk is only exacerbated by the length, detail, and human-like realism of the content generated by today’s models. Preserving their power and flexibility while ensuring they aren’t being taught to parrot historical prejudice and generate misinformation is an extensive, ongoing effort.

Interestingly, however, concerns over the ownership and contents of training data are also an opportunity to harness the unique strengths of AI in the enterprise; here at Salesforce, we’re training models on data that’s not just massive in scale but directly relevant to the use cases our customers care most about. It’s helping us explore entirely new applications even as we leverage a safer, more controlled form of training data and deliver a more trustworthy result.

CodeGen, for example, is our open-source large foundation model and, to date, the world’s largest model trained on programming languages. CodeGen can translate a user’s natural language description of a solution into code written in languages like Python. It also provides a next-generation auto-complete capability, speeding up the development process for developers of all kinds and allowing those time savings to be channeled into truly creative tasks.
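
As a small sketch of how such a model can be invoked, assuming the publicly released CodeGen checkpoints on Hugging Face (the checkpoint size and the prompt are illustrative choices):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint: one of the smaller published CodeGen models trained on Python.
checkpoint = "Salesforce/codegen-350M-mono"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# A natural-language description of the desired code, phrased as a comment prompt.
prompt = "# Python function that returns the n most recent log files in a directory\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same completion-style interface underlies auto-complete: the developer’s partial code or comment becomes the prompt, and the model proposes the continuation.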

Trust: The most critical element of generative AI’s development

These are big ideas, and we’ve only just begun to truly grapple with the challenges that await. But there’s a simple idea connecting everything we do: Salesforce’s core value of trust. 

The world must be given good reasons to trust these models at every level, from trusting the content they create to trusting the things they say to trusting the platforms on which they run. If we can deliver that, without compromise, there’s no doubt in my mind this technology will change the world for the better.

Go deeper:

  • Read the five guidelines Salesforce is using to guide the responsible development of generative AI here
  • Learn more about Salesforce research here and by following @SFResearch on Twitter.