Salesforce Execs Weigh In: What Is Generative AI?
Generative artificial intelligence (AI) exploded on the scene in late 2022, sending people and businesses into a frenzy of curiosity and questions over its potential.
But what exactly is generative AI? Put simply, generative AI is technology that takes a set of data and uses it to create something new – like poetry, a physics explainer, an email to a client, an image, or new music – when prompted by a human.
Unlike traditional AI models, generative AI “doesn’t just classify or predict, but creates content of its own […] and, it does so with a human-like command of language,” explained Salesforce Chief Scientist Silvio Savarese.
Of course, the ability to classify and predict data accurately is a critical element of successful generative AI: The product is only as good as the data it has to work with.
“AI is only as good as the data you give it and you have to make sure that the datasets are representative.” – Paula Goldman, Salesforce Chief Ethical and Humane Use Officer
How does generative AI work?
There are several approaches to developing generative AI models, but one that is gaining significant traction is using pretrained large language models (LLMs) to create novel content from text-based prompts. Generative AI is already helping people create everything from resumes and business plans to lines of code and digital art. But the technology’s potential at Salesforce and for enterprise businesses goes beyond making images of polar bears playing bass guitar.
The user gives the tool direction on what to produce, and then, based on the underlying LLM it has to work with, the AI generates something — be it words, code, or, thinking even bigger, things like novel proteins.
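The prompt-to-output loop above can be illustrated with a deliberately tiny sketch. This is not how an LLM is built — real models predict the next token with a neural network trained on enormous datasets — but a toy word-level bigram model shows the same core idea: learn from data which token tends to follow which, then generate new text from a user's prompt. The corpus and function names here are invented for illustration.

```python
# Toy generative text model (illustrative only): a word-level bigram
# sampler. It "trains" by counting which word follows which in a tiny
# corpus, then generates new text starting from a prompt word.
import random
from collections import defaultdict

corpus = ("the model reads the prompt and the model writes "
          "the reply and the reply helps the user").split()

# "Training": record the observed successors of every word
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(prompt_word, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:      # dead end: word never seen with a successor
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))
```

An LLM does the same next-token sampling, but with a transformer network, a vocabulary of tens of thousands of tokens, and context windows spanning whole documents rather than a single preceding word.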
Eventually, Savarese predicts, these AI tools will “assist us in many parts of our lives, taking on the role of superpowered collaborators.” For enterprises, it is especially important to take a human-in-the-loop approach when developing and using generative AI technologies. By doing so, businesses can validate and test automated workflows with human oversight and intervention before unleashing fully autonomous systems. This can help prevent potential risks and ensure that the technology is being used in a responsible and ethical manner. Moreover, having a human in the loop can help build trust and confidence in the technology among stakeholders and customers.
Digging deeper, generative AI typically does this using one of two types of deep learning models: generative adversarial networks (GANs) or transformers.
- GANs are made up of two neural networks: a generator and a discriminator. The two networks compete with each other, with the generator creating an output based on some input and the discriminator trying to determine if the output is real or fake. The generator then fine-tunes its output based on the discriminator’s feedback, and the cycle continues until the generator’s output stumps the discriminator.
- Transformer models like ChatGPT (short for Chat Generative Pre-trained Transformer) create outputs based on sequential data (like sentences or paragraphs) rather than individual data points. This approach helps the model efficiently process context, which is why transformers are used to generate and translate text.
- While GANs and transformers are among the most popular generative AI models, several other techniques are used as well, such as variational autoencoders (VAEs), which also rely on two neural networks to generate new data based on sample data, and neural radiance fields (NeRFs), which are used to render views of 3D scenes from 2D images.
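The generator-versus-discriminator cycle described above can be made concrete with a minimal sketch. This toy 1-D GAN is illustrative only (it is not Salesforce code, and all names and numbers are invented): a linear generator learns to produce numbers resembling samples from a target distribution centered at 4.0, while a logistic discriminator tries to tell real samples from fakes, each nudging the other via hand-derived gradient steps.

```python
# Toy 1-D GAN sketch: generator g(z) = a*z + b tries to mimic samples
# from N(4, 0.5); discriminator D(x) = sigmoid(w*x + c) scores "realness".
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 1.0, 0.0   # generator parameters (fake mean starts at 0)
w, c = 0.1, 0.0   # discriminator parameters

lr, batch = 0.05, 64
for step in range(2000):
    real = rng.normal(4.0, 0.5, batch)      # samples from the target data
    z = rng.normal(0.0, 1.0, batch)         # generator's noise input
    fake = a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)):
    # push D toward 1 on real samples and toward 0 on fakes.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): move fakes toward regions the
    # discriminator currently labels "real" -- the adversarial feedback loop.
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(f"generator output mean ~ {b:.2f} (target 4.0)")
```

After training, the generator's output mean drifts from 0 toward the real data's mean of 4.0, at which point the discriminator can no longer reliably separate real from fake. Production GANs follow the same loop with deep networks, images instead of scalars, and automatic differentiation.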
How is generative AI changing business?
Generative AI models like ChatGPT, Stable Diffusion, and Midjourney have captured the imagination of business leaders around the world.
In fact, a new Salesforce survey found that two-thirds (67%) of IT leaders are prioritizing generative AI for their business within the next 18 months, with one-third (33%) claiming it as a top priority.
As recent Einstein GPT news from Salesforce highlights, the technology is “open and extensible – supporting public and private AI models purpose-built for CRM – and trained on trusted, real-time data.”
Salesforce has been exploring how to develop and deploy generative AI to support customer needs for years. For example, the company introduced CodeGen, which democratizes software engineering by helping users turn simple English prompts into executable code. Another project, LAVIS (short for LAnguage-VISion), helps make AI language-vision capabilities accessible to a wide audience of researchers and practitioners.
More recently, Salesforce’s ProGen project revealed that by creating language models based around amino acids instead of letters and words, generative AI was able to produce proteins that have not been found in nature, and in many cases, are more functional. With further research, the idea is that these proteins can be used to develop medicines, vaccines, and treatments for diseases.
Ketan Karkhanis, Salesforce’s Executive Vice President and General Manager of Sales Cloud, said that while the technology may be a boon for large businesses, it’s helpful for small- and medium-sized businesses (SMBs) too.
“Capabilities like automated, AI-generated proposals and customer communications, along with predictive sales modeling, will give SMBs even more powerful tools to help them provide great customer experiences, manage operating expenses, and achieve sustainable growth,” Karkhanis said.
Clara Shih, the CEO of Salesforce AI, believes that generative AI “will completely reshape the field of customer service.”
“With generative AI layered onto Einstein for Service and Customer 360, we’ll have the ability to automatically generate personalized responses for agents to quickly email or message to customers … freeing human agents to spend more time deeply engaging on complex issues and building long-term customer relationships,” Shih said.
What are the risks and opportunities of generative AI?
While the potential of generative AI is enormous, it “is not without risks,” according to Paula Goldman, Salesforce Chief Ethical and Humane Use Officer, and Kathy Baxter, Principal Architect for Salesforce’s Ethical AI practice.
In a coauthored article, the pair pointed out that it’s “not enough to deliver the technological capabilities of generative AI. We must prioritize responsible innovation to help guide how this transformative technology can and should be used — and ensure that our employees, partners, and customers have the tools they need to develop and use these technologies safely, accurately, and ethically.”
In an interview with Silicon, Goldman shared, “Accuracy is the most important thing when applying AI in a business context because you have to make sure that if the AI is making a recommendation for a prompt, for a customer chat or a sales-focused email, that it’s not making up facts.” Ensuring data is accurate and trustworthy is foundational to any AI application.
The authoritative feel of ChatGPT responses is itself something to be mindful of, said Savarese, who warned it could lead to what he deems “confident failure.”
“The poised, often professional tone these models exude when answering questions and fulfilling prompts makes their hits all the more impressive, but it makes their misses downright dangerous,” Savarese said. “Even experts are routinely caught off guard by their powers of persuasion.”
Scale the reliance on tools like ChatGPT up to the enterprise level and it’s easy to see how high the stakes could get. But IT leaders are on guard: Nearly six in 10 (59%) said they think generative AI outputs are inaccurate.
Then there’s the question of how to use generative AI ethically, inclusively, and responsibly.
That’s why Salesforce is building trusted AI capabilities with embedded guardrails and guidance to help catch potential problems before they happen. If the world is going to realize the potential of generative AI, it will need good reasons to trust these models at every level.
Responsible AI also means sustainable AI. AI consumes significantly more power than traditional computing workloads, and 71% of IT leaders agree that generative AI would increase their carbon footprint through increased IT energy use.
Despite the need to explore generative AI inclusively and with intention, the technology holds vast potential for the future of CRM.
Generative AI at Salesforce — what does it mean for CRM?
AI has long been integral to the Salesforce platform. For example, Einstein AI technologies deliver more than 200 billion daily predictions across the Customer 360, helping businesses close deals faster, provide AI-powered human-like conversations for frequently asked questions, and better understand customer behavior.
Recently, Salesforce announced Einstein GPT, the world’s first generative AI for CRM. From personalized sales emails to auto-generated code, Einstein GPT will deliver AI-created content across every sales, service, marketing, commerce, and IT interaction, at hyperscale. And, it’s built for customers in a way that’s relevant to them — Einstein GPT uses data from Data Cloud combined with public data to create content across the Customer 360.
And, it will do so with the same foundation of inclusivity, responsibility, and sustainability at the core of any Salesforce product. Read about generative CRM and what it means for businesses.
Learn more about Einstein GPT and how it marks the next big milestone in Salesforce’s AI journey.
- Read the news about Einstein GPT, the world’s first generative AI for CRM
- Read how Salesforce is building generative AI we can trust
- Learn more about IT leadership’s perceptions of generative AI