
How To Unlock the Power of Generative AI Without Building Your Own LLM

Want to get started with a large language model quickly? You have several options, from training your own model to using an existing one through APIs. [Image created with Firefly/Adobe]

Large language models are the foundation for today's groundbreaking AI applications. Instead of training an LLM on a massive dataset, save time by using an existing model with smart prompts grounded in your data. Here’s how.

Everyone wants generative AI applications and their groundbreaking capabilities, such as creating content, summarising text, answering questions, translating documents, and even reasoning on their own to complete tasks.

But where do you start? How do you add large language models (LLMs) to your infrastructure to start powering these applications? Should you train your own LLM? Customise a pre-trained open-source model? Use existing models through APIs?

Training your own LLM is a daunting and expensive task. The good news is that you don’t have to. Using existing LLMs through APIs allows you to unlock the power of generative AI today, and deliver game-changing AI innovation fast.

How can a generic LLM generate relevant outputs for your company? By adding the right instructions and grounding data to the prompt, you can give an LLM the information it needs to learn “in context,” and generate personalised and relevant results, even if it wasn’t trained on your data.

Your data belongs to you, and passing it to an API provider might raise concerns about compromising sensitive information. That’s where the Einstein Trust Layer comes in. (More on this later.)

What is an LLM?

Large language models (LLMs) are a type of AI that can generate human-like responses by processing natural-language inputs.

In this blog post, we’ll review the different strategies to work with LLMs, and take a deeper look at the easiest and most commonly used option: using existing LLMs through APIs. 

As Salesforce’s SVP of technical audience relations, I often work with my team to test things out around the company. I’m here to take you through each option so you can make an informed decision.

1. Train your own LLM (Hint: You don’t have to)

Training your own model gives you full control over the model architecture, the training process, and the data your model learns from. For example, you could train your own LLM on data specific to your industry: This model would likely generate more accurate outputs for your domain-specific use cases than a general-purpose model. 

But training your own LLM from scratch has some drawbacks, as well:

  • Time: It can take weeks or even months.
  • Resources: You’ll need a significant amount of computational resources, including GPU, CPU, RAM, storage, and networking.
  • Expertise: You’ll need a team of specialised Machine Learning (ML) and Natural Language Processing (NLP) engineers.
  • Data security: LLMs learn from large amounts of data — the more, the better. Data security in your company, on the other hand, is often governed by the principle of least privilege: You give users access to only the data they need to do their specific job. In other words, the less data the better. Balancing these opposing principles may not always be possible.

2. Customise a pre-trained open-source model (Hint: You don’t have to)

Open-source models are pre-trained on large datasets and can be fine-tuned on your specific use case. This approach can save you a lot of time and money compared to building your own model. But even though you don’t start from scratch, fine-tuning an open-source model has some of the characteristics of the train-your-own-model approach: It still takes time and resources, you still need a team of specialised ML and NLP engineers, and you may still experience the data security tension described above.

3. Use existing models through APIs

The last option is to use existing models (from OpenAI, Anthropic, Cohere, Google, and others) through APIs. It’s by far the easiest and most commonly used approach to build LLM-powered applications. Why? 

  • You don’t need to spend time and resources to train your own LLM.
  • You don’t need specialised ML and NLP engineers.
  • Because the prompt is built dynamically within each user’s flow of work, it includes only data that user has access to.
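To make this concrete, here is a minimal, hypothetical Python sketch of the API-call approach. The endpoint URL, model name, and `call_llm` helper are illustrative assumptions, not any specific provider’s real API; substitute the actual values from your provider’s documentation.

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON payload used by a typical chat-completion endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def call_llm(endpoint: str, api_key: str, payload: dict) -> str:
    """POST the payload to the hosted model and return the raw response body."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

payload = build_chat_request("example-model", "Summarise this account's order history.")
# call_llm("https://api.example.com/v1/chat/completions", "YOUR_API_KEY", payload)
```

No model training or ML expertise is involved: the application simply sends a prompt and receives a completion.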

The downside of this approach? These models haven’t been trained on your contextual and private company data. So, in many cases, the output they produce is too generic to be really useful.


A common technique called in-context learning can help you get around this: you can ground the model in your reality by adding relevant data to the prompt.

For example, compare the two prompts below:

Prompt #1 (not grounded with company data):

Write an introduction email to the Acme CEO.

Prompt #2 (grounded with company data):

You are John Smith, Account Representative at Northern Trail Outfitters.

Write an introduction email to Lisa Martinez, CEO of Acme.

Acme has been a customer since 2021.

It buys the following product lines: Edge, Peak, Elite, Adventure.

Here is a list of Acme orders:

Winter Collection 2024: $375,286

Summer Collection 2023: $402,255

Winter Collection 2023: $357,542

Summer Collection 2022: $324,573

Winter Collection 2022: $388,852

Summer Collection 2021: $312,899

Because the model doesn’t have relevant company data, the output generated by the first prompt will be too generic to be useful. Adding customer data to the second prompt gives the LLM the information it needs to learn “in context,” and generate personalised and relevant output, even though it was not trained on that data.

The more grounding data you add to the prompt, the better the generated output will be. However, it wouldn’t be realistic to ask users to manually enter that amount of grounding data for each request. 
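To see what automating this grounding step looks like, here is a hypothetical Python sketch that merges record data into a prompt template, so no one has to type the grounding data by hand. The template fields, the record dictionary, and the `ground_prompt` helper are all invented for illustration.

```python
from string import Template

# A hypothetical prompt template with placeholder fields for grounding data.
PROMPT_TEMPLATE = Template(
    "You are $rep_name, Account Representative at $company.\n"
    "Write an introduction email to $contact_name, CEO of $account.\n"
    "$account has been a customer since $customer_since.\n"
    "Here is a list of $account orders:\n$orders"
)

def ground_prompt(record: dict, orders: list) -> str:
    """Merge CRM record data into the template to produce a grounded prompt."""
    order_lines = "\n".join(f"{name}: ${amount:,}" for name, amount in orders)
    return PROMPT_TEMPLATE.substitute(
        rep_name=record["rep_name"],
        company=record["company"],
        contact_name=record["contact_name"],
        account=record["account"],
        customer_since=record["customer_since"],
        orders=order_lines,
    )

record = {
    "rep_name": "John Smith",
    "company": "Northern Trail Outfitters",
    "contact_name": "Lisa Martinez",
    "account": "Acme",
    "customer_since": 2021,
}
orders = [("Winter Collection 2024", 375286), ("Summer Collection 2023", 402255)]
print(ground_prompt(record, orders))
```

In practice you would resolve the record and order data from your CRM rather than hard-coding it.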

Luckily, Salesforce’s Prompt Builder can help you write these prompts grounded in your company data. This tool lets you create prompt templates in a graphical environment, and bind placeholder fields to dynamic data that’s available through the Record page, flows, Data Cloud, Apex calls, or API calls.

Salesforce’s Prompt Builder.

But adding company data to the prompt raises another issue: You may be passing private and sensitive data to the API provider, where it could potentially be stored or used to further train the model.

Use existing LLMs without compromising your data

This is where the Einstein Trust Layer comes into play. Among other capabilities, the Einstein Trust Layer lets you use existing models through APIs in a trusted way, without compromising your company data. Here’s how it works:

The Einstein Trust Layer interacts with existing models and CRM apps.
  1. Secure gateway: Instead of making direct API calls, you use the Einstein Trust Layer’s secure gateway to access the model. The gateway supports different model providers and abstracts the differences between them. You can even plug in your own model if you used the train-your-own-model or customise approaches described above.
  2. Data masking and compliance: Before the request is sent to the model provider, it goes through a number of steps, including data masking, which replaces personally identifiable information (PII) with fake data to ensure data privacy and compliance.
  3. Zero retention: To further protect your data, Salesforce has zero retention agreements with model providers, which means providers will not persist or further train their models with data sent from Salesforce.
  4. Demasking, toxicity detection, and audit trail: When the output is received from the model, it goes through another series of steps, including demasking, toxicity detection, and audit trail logging. Demasking restores the real data that was replaced by fake data for privacy. Toxicity detection checks for any harmful or offensive content in the output. Audit trail logging records the entire process for auditing purposes.
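The Einstein Trust Layer’s masking implementation is proprietary, but the general mask-then-demask technique behind steps 2 and 4 can be sketched in a few lines of Python. This hypothetical example masks only email addresses with a simple regex; a real system would cover many more PII types.

```python
import re

# Real PII is swapped for placeholder tokens before the prompt leaves your
# systems, and swapped back when the model's output returns. (This is only a
# sketch of the general technique, not the Trust Layer's actual code.)

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str):
    """Replace each email address with a token; remember the mapping."""
    mapping = {}

    def _sub(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_RE.sub(_sub, text), mapping

def demask(text: str, mapping: dict) -> str:
    """Restore the original values in the model's output."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

masked, mapping = mask("Contact lisa.martinez@acme.com about the renewal.")
# masked == "Contact <PII_0> about the renewal."
restored = demask(masked, mapping)
```

The mapping never leaves your infrastructure: only the masked text is sent to the model provider.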

How the Einstein 1 Platform works

The Einstein 1 Platform abstracts the complexity of large language models. It helps you get started with LLMs today and establish a solid foundation for the future. The platform powers the next generation of Salesforce CRM applications (Sales, Service, Marketing, and Commerce), and provides the tools you need to easily build your own LLM-powered applications. Einstein 1 is architected to support all of the strategies mentioned earlier (train your own model, customise an open-source model, or use an existing model through APIs). By default, however, it is configured to use existing models through APIs, which lets you unlock the power of LLMs today and provides the fastest path to AI innovation.

The Einstein 1 Platform’s combination of Prompt Builder and the Einstein Trust Layer lets you take advantage of LLMs without having to train your own model:

  • Prompt Builder lets you ground prompts in your company data without training a model on that data.
  • The Einstein Trust Layer enables you to make API calls to LLMs without compromising that company data.
Here is how the strategies compare on computational resources, the need for ML and NLP engineers, relevance of outputs, and time to innovation:

  • Train your own model: highest computational resources; requires ML and NLP engineers; highest relevance of outputs; slowest time to innovation (training a model can take months).
  • Customise an open-source model: medium computational resources; requires ML and NLP engineers; medium relevance of outputs; medium time to innovation (can also take months).
  • Use an existing model through APIs: lowest computational resources; no ML and NLP engineers needed; lowest relevance of outputs; fastest time to innovation (start immediately with API calls).
  • Use an existing model through APIs with in-context learning (powered by Prompt Builder and the Einstein Trust Layer): lowest computational resources; no ML and NLP engineers needed; high relevance of outputs; fastest time to innovation (start immediately with API calls).


Christophe Coenraets Senior Vice President, Trailblazer Enablement
