PATRICK STOKES:
So we're going to answer and bring to life the most important question of the day, which is, how do I use generative AI? Maybe more importantly, how do I leverage all of the productivity gains that we can get from generative AI without giving away all of our company's data?
Now Marc talked a
little bit about this.
We understand how
it works today.
We put our data
into databases.
And databases have this kind of inherent concept of location. You put the data into a database, and you're specifying what database it goes to, what table, what row, what field.
And within that
location, we
can put access
controls on top.
We can specify
who's allowed
to pull the data out of
that particular location.
And on top of
that, we can build
all of the security, all
of the data governance,
all of the access
controls that we
need to keep our
data safe across all
of the employees
that we have
and define how that data
ultimately gets used.
But large language models are completely different, because they don't really store data. Instead, they learn data.
And learned data is
very, very different.
If I asked you all in the
room what an apple is,
you can probably immediately tell me what an apple is.
But if I ask you to tell
me where in your brain
is the knowledge about
what an apple is,
nobody would be able
to tell me that.
A large language model works very much the same way. The reason you know what an apple is is that, over time, you've come to identify certain properties.
You know that an
apple is round.
You know that it
grows on trees.
You know that it's red,
but sometimes it's green.
And all of these properties combined give you the knowledge of what an apple is. In that learned environment, we can't put those same types of access controls in place. We can't control how the data comes out of the large language model, or how it gets used.
So that's the problem that
we're trying to solve.
Well, it all starts
with a prompt.
Now a prompt is a word you're going to hear a whole hell of a lot over the coming months and years. You've all probably done this. A prompt is just a question.
It's the question
that you're
going to ask the
large language model.
So let's consider things in the context of an example. Let's pretend I'm an investment manager, and I want to write an email inviting my client to discuss our investment services. A fairly simple question. And from this fairly simple question, I am going to get an answer, or a generation, that is two things at the same time.
It's incredible what this was able to generate with very, very little input.
This is something
that I can take.
I can manipulate
a little bit.
And this is an
amazing first start.
But this is also
terrible and unusable.
It doesn't know anything
about my customer,
or my company, or the
products that I sell.
This reminds me of every recruiting email I've ever received. It goes right to the trash.
And it's because
it's missing context.
It's missing information
about my business.
Now, I want to include
that information.
Now, what most of you would probably think is, I know what I need to do.
I need to train a
large language model.
I need to spend
months of time
giving it data
and feeding all
of this data
about my business
into the large language
model to train it.
But you don't need to do that. You can continue to use the prompt, in a technique called grounding, because a prompt is more than a question. A prompt is an entire canvas to provide detail, context, and instructions. And within those instructions, we can ask for more of what we want.
So for example,
within this prompt,
I can tell them
about myself.
I'm an investment
manager at Cumulus Bank.
I can tell the prompt about my company: we're a one-stop solution for personal banking. I can tell the prompt about my customer, Lauren Bailey. She's been a customer for over seven years. She has a checking account and a savings account.
I can even tell this
prompt about some
of the most recent
information.
Have you all heard that large language models, most of them today in the consumer space, are a year, in fact maybe more, out of date?
We need to know
exactly what's
going on in our
business right now.
So we can put that
in the prompt.
Lauren downloaded an
empowering sustainable
future whitepaper from
our website last week.
So we know she's
interested in sustainable
investments or a
sustainable future.
I can include information
about our newest products
that we've just launched.
We've actually
just launched
a new green energy
investment opportunity.
And then finally, I can
include information that
may be true, may not
be true at any given
time, which is we're
having an event.
So maybe we're having an event in a few months, maybe in a few weeks, maybe we're not.
And this is the type
of logic instruction
that we can
include as well.
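A grounded prompt like the one described here is, mechanically, just a template filled with CRM data. Here is a minimal sketch in Python; all field names and values are hypothetical stand-ins for the Cumulus Bank example, not Salesforce's actual implementation.

```python
# A minimal sketch of "grounding": instead of training a model on company
# data, we inject that data into the prompt itself. All field names and
# values below are hypothetical stand-ins for the example in the talk.

GROUNDED_PROMPT = """\
You are {rep_role} at {company}. {company_description}

Customer context:
- Name: {customer_name}, a customer for {tenure}.
- Accounts: {accounts}.
- Recent activity: {recent_activity}.
- New product: {new_product}.
{event_line}
Task: Write a short email inviting the customer to discuss our
investment services, referencing their interests."""

def build_prompt(customer, company, event=None):
    """Fill the template with CRM data. The optional event is the kind of
    'logic instruction' mentioned above: included only when it exists."""
    event_line = f"- Upcoming event: {event}." if event else ""
    return GROUNDED_PROMPT.format(
        rep_role="an investment manager",
        company=company["name"],
        company_description=company["description"],
        customer_name=customer["name"],
        tenure=customer["tenure"],
        accounts=", ".join(customer["accounts"]),
        recent_activity=customer["recent_activity"],
        new_product=company["new_product"],
        event_line=event_line,
    )

prompt = build_prompt(
    customer={
        "name": "Lauren Bailey",
        "tenure": "over seven years",
        "accounts": ["checking", "savings"],
        "recent_activity": "downloaded the sustainable-future whitepaper "
                           "from our website last week",
    },
    company={
        "name": "Cumulus Bank",
        "description": "A one-stop solution for personal banking.",
        "new_product": "a green energy investment opportunity",
    },
    event="a client event in a few weeks",
)
```

The point of the sketch is that no training happens: the same base model gets richer output simply because the prompt carries the business context.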
And when we ask for a generation based on all of that, we get something that is actually usable. This isn't something that we
have to download, and
cut, and paste, and start
injecting our data
into, and then
worry about where
we paste it back.
This is something I
can just click a button
and use immediately
in my application,
but there's
still a problem.
The problem is that all of that customer data is going out to the large language model.
Now, before I
explain how we're
going to solve
that problem,
I want you to just
consider all of the data
that Salesforce has
about your business, all
of the context
across sales,
service, commerce,
marketing,
across the data cloud,
all of that telemetry data
coming from Marc's
car, his two cars.
All of that
data coming in,
this is all data
that we can ground
in that prompt, that
we can add as context
to get a better generation
on the other side.
But the problem
is, if you look
at that data, that
sensitive data,
there's PII data in there.
There's Lauren
Bailey in there.
How do we protect
all of that?
It's better than
training, but how
do we protect
all of that data
from getting lost in
the large language model
and not being able to control how we recall it? And that's where the Einstein GPT trust layer comes in. The Einstein GPT trust layer creates separation between the large language model and all of your corporate enterprise data, stored in your CRM, in databases where we can apply access controls.
And it allows you to
responsibly ground
all of your prompts
in that data
without that data ever
leaving Salesforce.
Now, we do this with a number of methods: secure data retrieval, dynamic grounding, and data masking. We do toxicity detection, and something we call zero retention. Now, let's take a look at just a few of those.
So if we go back
to our prompt,
let's look at
data masking.
So as I mentioned, I have
some PII data in here.
That's personally
identifiable information.
That's information that I don't want to go out to the large language model. So we have a technique called data masking. We can simply mask that data. And now that PII data does not travel across to the model.
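Data masking of this kind can be sketched very simply. The version below uses hand-rolled regexes and a known-names list purely for illustration; a production system would rely on proper entity recognition, and every pattern and token format here is an assumption, not Salesforce's actual approach.

```python
import re

# A minimal sketch of PII masking before a prompt leaves the trust
# boundary: replace known names and pattern-matched PII with placeholder
# tokens, and keep a mapping so the generation can be un-masked later.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask(text, names=()):
    """Return (masked_text, mapping). Known customer names come from the
    CRM; emails/phones are caught by the illustrative regexes above."""
    mapping = {}
    for i, name in enumerate(names):
        token = f"<PERSON_{i}>"
        if name in text:
            mapping[token] = name
            text = text.replace(name, token)
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            mapping[token] = match
            text = text.replace(match, token)
    return text, mapping

def unmask(text, mapping):
    """Restore the original values inside the trust boundary."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask(
    "Email Lauren Bailey at lauren@example.com about the event.",
    names=["Lauren Bailey"],
)
```

Only the masked text crosses to the model; the mapping never leaves, so the generation can be re-personalized on the way back.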
But even cooler than that is we can take this entire prompt, and when we're done with it, delete it. The prompt never gets stored anywhere else.
We add the context
about our business.
We get our
generation back.
We mask all of
the PII data.
And then we
delete the prompt
so it never enters into
the LLM in a stored way.
None of that sensitive data is retained.
Now to show you
exactly how this works
and to dig into the
technical details
and the architecture
of how our team, many
of them over there
put this all together,
we're going to
bring back up Srini.
SRINI TALLAPRAGADA:
Thank you, Patrick.
So let me explain the architecture, the trusted AI architecture. It's grounded
in what Marc was
explaining about trust
so that your data is protected. The two things you all have to remember: we at Salesforce never look at your data. Also, we never share your data with any other customers. And any learning that happens on your data stays within your trusted boundary.
Those are high
level concepts.
But the way our architecture builds up is in layers. The bottommost layer is what we call the trusted infrastructure layer, Hyperforce. Hyperforce gives us data residency, among other things.
On top of that, we have
a data cloud layer.
As Marc explained and
Patrick explained,
this is a petabyte-scale, real-time data lakehouse architecture built natively into the platform.
So you get to use the full
power of the platform.
And data cloud allows you
to do a unified profile.
It allows you to
do zero ETL copy.
So if you already have your own ETL, it allows you to reach out to that.
And then it has a lot of connectors, and it has a lot of governance. And using our MuleSoft connectors, you can bring in billions of records and petabytes of data, and leverage all of that data.
And the next layer is what we call the open model ecosystem.
There are a lot of models.
Every day, you see the
new model coming in.
What we believe is, where this is going to go is that there'll be multiple models. And some models will be good for some tasks. You don't need a heavy-duty model for every task. At some point, you'll choose a model based on the pricing, and performance, and what the task is.
But we don't want you to have to figure that out. Each model has to be optimized for security, for compliance, for bias. We will abstract it for you.
We'll run a model tournament and give you the best model for your use cases.
And we'll solve
it for you.
And that's why an open model ecosystem is important.
So we'll have a lot of
Salesforce models, which
I'll go a little
bit deeper into.
We will let you bring
your own models.
A lot of our customers have big data science teams, and they want to build their own models.
We'll let you bring your own models, or you can use any of the partner models, a lot of these top-of-the-line models. And that's what the open model ecosystem is.
The next layer is the trust layer: ultimately, any time you call a model, we want it to go through the trust layer.
This layer is
super important.
This allows us to
securely get data,
either from our data cloud
or your customer data.
You have to do dynamic grounding, which is what Patrick just showed you. And then, if required, you do toxicity detection or data masking.
People want to know: what happened?
And people want
to know, what
is my audit trail of all
the prompts you're doing?
And then you want to
ensure that none of it
is retained in the models.
Now, let's say this infrastructure is there. If you use our applications, all our applications will come with assistants.
So if you are
a Sales Cloud,
you'll get a Sales
Cloud assistant
which will help you
close deals faster.
If you are a Service Cloud agent, you'll have an assistant which will help you with your cases. And as Marc said, there's Slack. Personally, I think Slack is going to be the entire interface. Slack is where the enterprise knowledge lives, and Slack is going to wake up and allow you to tap into all of that. That's if you're a user.
That's great, but Salesforce has always been about builders. What if you're a trailblazer? Now, all our trailblazers can use builders, whether you're a low-code or a pro-code developer.
We'll have prompt builders. You see how important prompting is. So, using our app builder tools, you'll be able to build new generative AI apps.
Now if you are an
ISV on our platform,
you get the entire stack.
And you can build
a whole new class
of generative apps and
put it on the AppExchange.
And if you are
an SI partner,
you can use this stack
to implement and generate
more value for
our customers.
That's how the entire thing comes together.
And to go a
little bit deeper
into how the
GPT layer works
and how the
data flows work,
let me explain
a little bit.
In the CRM apps, it starts with a prompt from the apps. And the prompt is going to be combined with your company data through secure data retrieval.
We do the dynamic grounding and any data masking that is required. And it goes through a secure gateway. The gateway allows you to talk to the models. There'll be Salesforce-hosted models in our own VPCs, or you may want to call an external model within the shared trust boundary.
And the critical
piece is we'll
ensure that none of the
data is retained there.
No prompts, nothing
is retained there.
No context is
retained in the LLM.
That is what we
call zero retention.
Once it generates, we still want to have toxicity filters, bias filters, and things like that.
And obviously for a lot of
CISOs and enterprise data
architects, they'll
want audit trails.
And that's what goes back to the CRM app.
That's how the GPT
trust layer works.
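The flow Srini describes (ground, mask, call the model through a gateway, retain nothing, filter the generation, keep an audit trail) can be sketched as a single pipeline function. Everything below is a hypothetical stand-in: `call_llm` and `toxicity_ok` are toy placeholders, not real Salesforce APIs.

```python
import datetime

AUDIT_LOG = []  # audit trail of requests, for CISOs and auditors

def call_llm(prompt):
    """Toy stand-in for a gateway call to a hosted or external model."""
    return f"generated reply for: {prompt[:30]}"

def toxicity_ok(text):
    """Toy stand-in for a toxicity/bias classifier on the generation."""
    return "hate" not in text.lower()

def trusted_generate(user_request, crm_context, mask_fn, unmask_fn):
    prompt = f"{crm_context}\n\n{user_request}"      # dynamic grounding
    masked_prompt, mapping = mask_fn(prompt)         # data masking
    generation = call_llm(masked_prompt)             # secure gateway call
    del prompt, masked_prompt                        # zero retention: the
                                                     # prompt is not stored
    if not toxicity_ok(generation):                  # post-generation filter
        generation = "[blocked by content filter]"
    AUDIT_LOG.append({                               # audit trail entry
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "request": user_request,
    })
    return unmask_fn(generation, mapping)            # back to the CRM app

reply = trusted_generate(
    "Write an invite email.",
    "Customer: Lauren Bailey, seven-year customer.",
    mask_fn=lambda t: (t.replace("Lauren Bailey", "<PERSON_0>"),
                       {"<PERSON_0>": "Lauren Bailey"}),
    unmask_fn=lambda t, m: t,  # nothing to un-mask in this toy reply
)
```

The ordering is the substance: masking happens before the gateway, the prompt is discarded after the call, and filtering and auditing happen on the way back, so nothing sensitive is ever stored model-side.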
Let me explain a little bit about why I think it's about the right LLM for the right task. We'll have great models, best-in-class models for specific tasks, like OpenAI's, where the data will still be retained in Salesforce, but with joint moderation. Salesforce will host multitenant models globally in our infrastructure, say in our VPC, on our side, or we'll allow you to bring in your own.
And at the app layer, in the gateway, you will not need to know which model it is. Our promise to you, historically, has always been that we abstract complexity. As new models come in, we'll run a model tournament.
Model A may be very good today. Model B will come along, and that will be good for this task. We will abstract all of that. We will run a model tournament. We'll pick the best and cheapest. And we'll keep doing that as things keep changing, as this space keeps evolving. That's our promise to you.
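One way to read "model tournament, best and cheapest" is as a routing rule: score candidate models on a task's evaluation set, then send traffic to the cheapest model that clears a quality bar. The sketch below is an assumption about how such a tournament might work; the names, scores, and prices are made up.

```python
# A minimal sketch of a "model tournament": among models that meet a
# quality floor on the task's eval set, route to the cheapest one.
# All candidate data here is illustrative, not real benchmark numbers.

def pick_model(candidates, quality_floor=0.8):
    """candidates: dicts with 'name', 'quality' (0..1 on the task's
    eval set), and 'cost' (price per unit of usage)."""
    eligible = [m for m in candidates if m["quality"] >= quality_floor]
    if not eligible:
        # nothing clears the bar: fall back to the highest-quality model
        return max(candidates, key=lambda m: m["quality"])
    return min(eligible, key=lambda m: m["cost"])  # cheapest good one

models = [
    {"name": "model_a", "quality": 0.92, "cost": 0.06},
    {"name": "model_b", "quality": 0.85, "cost": 0.02},
    {"name": "model_c", "quality": 0.70, "cost": 0.01},
]
winner = pick_model(models)  # model_b: cheapest model above the bar
```

Re-running the tournament whenever a new model appears is what lets the routing improve over time without the application needing to know which model it is talking to.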
I also want to
talk a little bit
about our deep investment
in Salesforce LLMs.
I hinted at the
start, but if you see,
we've been investing in
LLMs right from 2018.
We have a world class
AI research team.
And we've published more than 200 papers, all in peer-reviewed journals, and filed more than 200 patents in this area. And some of the models we have are what we call SOTA, or state of the art, for specific tasks.
And we'll continue to invest in those models, use open-source models, and use the partner models. We'll pick the model which is right for you, the cheapest, best way to do the job, and handle all this complexity for you, so you can do what you do best, which is serving your customers.
And with that,
I would like
to thank all our
AI researchers,
and engineers, and some
of them in the room.
Can they please stand up and get a round of applause?
Next, what we
want to do is
we want to show you how AI Cloud delivers trusted AI.
With that, please
roll the film.
- In the past, the Formula One experience could have been seen as more romantic, with a picnic on the hill of a track, watching the heroes.
The evolution from the old
days has been incredible.
Today, the F1 experience
is very special.
- F1 is the
greatest sporting
and entertainment
spectacle on the planet--
the smell of the tyres, the sound of the car.
- As soon as the lights go out, you know you are watching something history may remember.
- With Drive to Survive, it has been phenomenal to capture new fans interested in the behind-the-scenes experience, knowing our drivers.
- It's exciting
fans worldwide.
- We have over 500
million fans, over a third
of them are new in
the last four years.
- Formula One keeps
getting bigger,
the impact keeps
getting bigger.
- Our database is continuing to grow 30%, 40% year over year.
- The key for us
is to make sure
that we have the
customer at the center.
- And that for our
strategic partnership
with Salesforce
is imperative.
We want to make sure that we're designing an experience for our fans. We have 23 races, but
only 1% of our fan base
actually gets
to be at a race.
- And so how do we
engage the other 99%?
We've seen a much younger and a much more female audience.
- They'll have the TV broadcast on.
They'll have a
device showing
all of the live
content, another
showing all of that rich
data that comes through.
- Everyone has a different relationship with Formula One. And with that single source of truth, we're creating these digital experiences that really resonate with a fan.
- With AI, data, and CRM, we are now
able to innovate
like never before.
- Innovation has
been always the drive
of our success, the
DNA of Formula One.
- From an AI perspective, we can truly talk to our fans on a personalized, one-to-one basis.
- Speak different
language,
have different narratives.
- Understand those
different markets.
- Be proactive, create these magical connections, and bring them closer to the action than ever before.
The key to doing
that is data.
- We've been
collecting data
across all of
these touchpoints.
- We have so much
data coming in to us.
- Salesforce helps us visualize the data. And we're using data cloud to create personalized experiences in a way that nobody's ever done before.
We like to think of ourselves as innovators.
And as we grow
our fan base
and we grow our data,
working with Salesforce,
we're finding
loads of new ways
to bring those
experiences to life.
- That's really what it's all about: the joy of our fans, embracing the unique experience. This is really magic for us.
SRINI TALLAPRAGADA: And
to bring it to life,
please welcome
Sanjna Parulekar,
senior director
product marketing.
SANJNA PARULEKAR:
Hi, everyone.
So you've seen how
generative AI works.
So now we're going
to have a little fun
and show you how you can
experience it day to day
So before we
get into it, I
want to give a hand over
here for Tim and Dillon
who are our demo drivers.
All right, now Formula One
is using AI, data, and CRM
seamlessly
behind the scenes
to provide amazing
customer experiences.
And as Marc
mentioned, we're
here to talk
about AI today,
but it all starts
with the data.
And what you're
seeing here
is the home page
for Formula One.
It has all of their rich
fan engagement data--
every touchpoint,
every activation,
and every event that the customer has been to. But how did we get to this neat, unified view? Well, it all starts with connecting your data.
And data cloud
makes it super
simple to connect to
all of the data that
is relevant to your ultimate customer.
In this case, it's the
fan for Formula One.
So you can connect to
any customer cloud,
any external data
source or data
lake, or even any legacy
system using MuleSoft.
And as we know, data
is the fuel for AI.
So being able to bring
in all of this data,
no matter where it
sits is extremely
important for providing
that end personalization.
So once we've connected
to all of that data,
we need to make sure that
it's harmonized into one
consistent format
because you
have data sitting in a
lot of different systems.
So a single fan could
be represented in an IP
address, in maybe
a touchpoint
with your mobile
app, or maybe
their Hotmail address
from their college account
or something like that,
but they're all one customer.
So with data
cloud, you can
harmonize all of that data
into one consistent view.
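Harmonizing many raw records into one profile is, at its core, a group-and-merge keyed on shared identifiers. A minimal sketch, assuming email is the match key (real identity resolution also uses fuzzy matching across IP addresses, device IDs, and so on); all record fields are hypothetical:

```python
from collections import defaultdict

# A minimal sketch of harmonizing fan records from several systems
# (mobile app, ticketing, etc.) into one unified profile per fan.

def harmonize(records):
    """Group raw records by a shared key (email here, lowercased) and
    merge fields, earlier sources winning where values collide."""
    profiles = defaultdict(dict)
    for record in records:
        key = record.get("email", "").lower()
        if not key:
            continue  # real systems would fall back to fuzzy matching
        for field, value in record.items():
            profiles[key].setdefault(field, value)
    return dict(profiles)

raw = [
    {"email": "Abby@Example.com", "source": "mobile_app", "ip": "10.0.0.1"},
    {"email": "abby@example.com", "source": "ticketing", "races": 3},
]
unified = harmonize(raw)  # two raw records collapse to one fan profile
```

The same fan seen through an IP address, a mobile touchpoint, and an old Hotmail address ends up as a single profile, which is what makes the later personalization possible.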
And with that,
the data is now
ready for any sort
of personalization
that we want
to start with.
And the end result is actually this very clean, very unified view of the customer.
I can now see every single
touchpoint that Abby
has had with Formula One--
the races she's attended,
what's coming next, the
purchases she's made,
really anything
about her that
gives me that full
picture of who she is.
So now that I have
this unified view
of the customer,
this is where it gets interesting with AI.
Now what we want to do is create a landing page.
And I can tell you, as a
former Salesforce admin
myself and a
current marketer,
creating landing
pages is hard.
And it's hard
for two reasons.
The first is it's
difficult to personalize
it just the way you want.
And the second
is that you never
know how it's really
going to perform.
Now, this is where Einstein GPT is extremely important in helping me.
So as I start to prompt
Einstein GPT with what
I want to do,
in this case,
creating a
landing page, all
of this content
that's being populated
on the page is not just generic content. This is content based on what has performed best in the past. So I can rest assured,
as that marketer,
that I'm not only
building quickly,
but I'm building for
that end personalization
that my customer is
really going to want.
So Einstein GPT is going
to help me create more
personalization on
this page, and more product content.
And I even want to
add an interactive map
for our customers
and fans that
will be on site at
the Miami Grand Prix.
Now, as a
marketer, this is
where my job ends and a
developer's work begins.
So let's head over to the developer side.
Now right here
within the IDE,
my developer can
customize this map
and make it
super interactive
for that end landing page.
And with just a few
prompts and descriptions
of what I want to
achieve, the code
will be automatically
generated.
And this is powered by our own Salesforce LLMs.
Now I want to pause
here for a second
because in this new
era of generative AI,
there's a lot of talk
about the future of work
and what these new roles
will be of the future.
And the future is
our trailblazers.
Our trailblazers are building with Apex today, and with these tools, they'll be more efficient than they have ever been before, which is simply incredible.
So now that we've
built the page,
let's see what
it looks like.
Now, with this
page, this is
fully personalized
to the end fan.
Personalized to them and their journey.
OK, so another
really special part
about F1's
business is the way
that they treat
their end fans
with their
hospitality reps.
So for them, a sales rep
is a hospitality rep.
And they pride themselves
on this trusted relationship.
Now, this trust can
take a lifetime to build
and a moment to break
with the wrong level
of personalization or
feeling like that rep
really doesn't know them.
So Einstein GPT helps them
scale this relationship
and nurture it with
the most up-to-date information.
So the first thing I
want Einstein GPT's help
with here is to update
my account description.
And it can help me do
this really quickly,
bringing in that
public data that's
relevant to my customer
alongside private data,
which is really important.
It'll also surface
my relevant contacts
that I might want
to reach out to
for the upcoming
Miami Grand Prix.
So it's not just
surfacing knowledge,
but it's
surfacing actions,
which helps me go faster
as that hospitality rep.
So next I want to
compose an email
and invite my end
customer to an upcoming
race with a really
tailored email here.
Now, another
special thing that I
want to call out
about Einstein GPT
is that this
isn't just based
on everything a rep
has done in the past.
This is based on outcomes.
So this email that's
being generated
is based on
the emails that
have been most highly
performant with customers
So we're going to go
ahead and edit that email,
bring it in to the
email composer,
and send that right
to the customer.
So the next step
of this story
is super important
because this
is where it gets very
personal with our
customer inside of Slack.
Now, the hospitality
reps within F1
are using Slack as
their centerpiece
for productivity across
all of their channels
and all of
their customers.
Now, this is the
updated sales home
page for all the Formula
One hospitality reps.
And it's enriched with all
of the amazing data they
have in CRM about
their customer.
So I can see over here
that this end customer
that I just sent
the email to
has accepted our
invitation to the race.
And what's automatically
been created
is the Slack Canvas
which has really
the best starting
point I could ever
ask for in terms
of information
that I want to give
to my customer.
It has some
relevant information
for VIPs, some experiences, and more. But as that hospitality rep, I might want to personalize it further.
And Einstein GPT is
there for me as well.
I can ask Einstein to help
me identify what the best
experiences are at
the Miami Grand Prix
and then update
this Canvas.
So I've gotten that great first start.
And I can also continue
to customize this.
So we're going
to share this
with the customer
in our joint channel
because we're
collaborating directly with them.
And we want to make it really, really easy for them to have every single experience they want.
And it looks like they actually have a question.
And one thing that I
really love about Slack
myself at Salesforce
is using a Huddle
for those quick
questions that might not
require a whole phone call
or a whole conversation.
So we get on a huddle
with the customer.
And we understand
a little bit
more deeply what they
want to experience on site.
Now, not only was I able
to do that super quickly,
but after the
call is done,
I'll get a helpful summary
right in the channel.
So this is just
a snapshot of all
of the various ways
that Formula One
can be using
AI Cloud across their business.
MARC BENIOFF: Very
good, thank you so much.