
What are LLMs (Large Language Models)?

Large language models (LLMs) underpin the growth of generative AI. See how they work, how they're being used, and why they matter for your business.


Large Language Models (LLMs) FAQs

What are Large Language Models (LLMs)?

Large Language Models (LLMs) are artificial intelligence models trained on vast amounts of text data, enabling them to understand, generate, and process human language.

How do LLMs work?

LLMs use deep learning architectures, particularly transformers, to identify patterns, grammar, and context within massive datasets, allowing them to predict the next word in a sequence.
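To make next-word prediction concrete, here is a minimal sketch that asks a small pretrained transformer for its most likely next tokens. It uses GPT-2 via the Hugging Face transformers library purely for illustration; production LLMs are far larger, but the mechanism is the same.

    # Minimal sketch: score candidate next tokens after a prompt.
    # GPT-2 is an illustrative small model, not a recommendation.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Large language models predict the next"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits   # shape: (batch, seq_len, vocab_size)

    # The final position holds a score for every possible next token.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for token_id, p in zip(top.indices, top.values):
        print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")

Running this prints the model's five most probable continuations of the prompt with their probabilities, which is exactly the prediction step the answer above describes.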

What can LLMs do?

Key capabilities include text generation, summarization, translation, question answering, content creation, and code generation, often based on a given prompt.
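As a rough illustration of prompt-driven capabilities, the sketch below uses the Hugging Face pipeline helper for summarization and free-form generation. The model names are common public checkpoints chosen for illustration only.

    # Sketch: two prompt-driven capabilities via Hugging Face pipelines.
    # Model names are illustrative public checkpoints.
    from transformers import pipeline

    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
    generator = pipeline("text-generation", model="gpt2")

    article = ("Large language models are trained on huge text corpora and can "
               "summarize, translate, answer questions, and draft content from "
               "a short prompt.")

    summary = summarizer(article, max_length=25, min_length=5)[0]["summary_text"]
    draft = generator("Write a subject line for a product launch email:",
                      max_new_tokens=20)[0]["generated_text"]

    print("Summary:", summary)
    print("Draft:", draft)

The same pattern (a task framed as a prompt, a model producing text) underlies translation, question answering, and code generation as well.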

How are LLMs trained?

LLMs learn through pre-training on enormous collections of text, followed by fine-tuning on more specific datasets to adapt them to particular tasks.
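The core pre-training objective can be sketched in a few lines of PyTorch: shift the tokens by one position and minimize cross-entropy on the next-token predictions. The tiny model and random token IDs below are toy stand-ins for illustration; fine-tuning reuses the same loss on a smaller, task-specific dataset.

    # Toy sketch of the next-token pre-training objective in PyTorch.
    # Real LLMs stack many transformer layers and train on vastly more data.
    import torch
    import torch.nn as nn

    vocab_size, d_model = 1000, 64
    embed = nn.Embedding(vocab_size, d_model)
    block = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
    lm_head = nn.Linear(d_model, vocab_size)

    tokens = torch.randint(0, vocab_size, (8, 32))    # toy batch of token IDs
    inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict each next token

    # Causal mask: each position may only attend to earlier positions.
    mask = nn.Transformer.generate_square_subsequent_mask(inputs.size(1))
    logits = lm_head(block(embed(inputs), src_mask=mask))

    loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                       targets.reshape(-1))
    loss.backward()   # gradients for one optimization step
    print(f"cross-entropy loss: {loss.item():.2f}")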

What are the benefits of LLMs?

Benefits include automating content creation, enhancing customer service (chatbots), improving data analysis, personalizing communications, and accelerating research.

What are common applications of LLMs?

Applications range from writing articles and emails to powering intelligent chatbots, generating creative content, assisting with programming, and summarizing lengthy documents.

What are the challenges of LLMs?

Challenges include the potential for "hallucinations" (generating false information), ensuring factual accuracy, addressing biases from training data, and managing computational costs.