Legacy Guide: Agentforce Reasoning, Subagents, Instructions, and Actions
Formerly titled "Introducing the Agentforce Guide to Reasoning, Topics, Instructions and Actions"
This guide covers a legacy experience. It applies to the legacy Agentforce builder under Setup → Agents.
As of December 2025, a new agent building experience is available via the App Launcher. We recommend using the new experience, which gives you greater control over agent behavior and outcomes. See the new Agentforce Guide to Hybrid Reasoning, Script, Subagents, and Actions.
Interface and steps differ between that new experience and the legacy experience outlined below.
Additionally, key terminology changes are in progress. As of April 2026, these terms are changing to align with industry norms.
Thanks for your patience as we update those terms across resources, including the graphics in this guide.
AI agents are revolutionizing organizations by increasing efficiency, reducing manual effort, and creating a more sophisticated and adaptive workplace.
This guide explores the core elements of Agentforce, the Salesforce platform for building AI agents. In this resource, you’ll find details on how Agentforce works, and the key capabilities and tradeoffs that technical practitioners need to know when building with Agentforce.
An agent is a type of software that uses generative AI to make decisions about what to do next and how to do it. An agent can understand a question (often called an utterance), autonomously reason to determine what actions it needs to reach its goal, identify what data is needed, and then take action, with or without human intervention. Agents use large language models (LLMs) instead of strict, pre-written rules. This makes agents more dynamic than rules-based automation, but it’s also a significant shift from traditional software that follows hard-coded instructions.
Key capabilities of AI agents
While agents don’t follow hard-coded logic like traditional software, Agentforce provides components to add additional controls to how your agents reason. There are also a number of features that make Agentforce extensible. Here’s a quick look at these components:
| Component | When to Use | Skills Required |
|---|---|---|
| Agent Invocable Actions | To invoke an agent from Flow or Apex | Low-code |
| Agent API | To invoke an agent from outside Salesforce | Pro-code |
| Agent Variables | To add additional controls to how your agent reasons through subagent and action selection | Low-code |
| Agentforce SDK | To build an agent from scratch using Python code via a programmatic interface to Salesforce’s Agentforce infrastructure | Pro-code |
| Model Builder | Customize a generative AI model or create a predictive model | Low-code |
Strategic planning is a critical part of deploying an AI agent. If your organization doesn’t have a strategy in place, we suggest taking the AI Strategy badge on Trailhead. From here on out, we assume you’re already familiar with the process of defining your AI vision, forming an AI council, establishing AI governance, identifying AI use cases, and building a roadmap.
Building an agent requires time and resources. Careful planning will help you get it right the first time. Before you begin building any agent, define a use case and create a process map for each agent you plan to build. The Agent Planning badge on Trailhead covers process mapping in the “Outline the Agent’s Work” unit. Outline the ideal user experience, as well as how the system will respond to user input and handle potential errors or issues.
The resulting diagram will help ensure you understand the flow. It will help you generate instructions and know where to use actions, variables, and filters. Benefits of this agent planning approach include the following:
Before we continue, it’s important to note that agents aren’t the only generative AI tool available on the Agentforce 360 Platform. Prompt templates are another powerful tool for building applications that use generative AI. Prompt templates, built in Prompt Builder, allow you to define a set of structured, reusable instructions that guide a generative AI model to produce specific outputs. They can reference Salesforce data through predefined fields, data graphs, and contextual data retrieval augmented generation (RAG). Prompt templates are also highly secure: all prompts are routed through Salesforce’s trust layer, which honors permissions, masks sensitive data, and flags toxic outputs.
Prompt templates are single-turn interactions with AI, and are an ideal fit for one-off tasks that don’t require memory or multi-step reasoning. For example, a prompt template is ideal when you need to reword a sentence or summarize a case, because ongoing context isn’t needed. When designing solutions with prompt templates it’s important to remember that they are stateless (they don’t retain memory between turns) and that they do not make decisions or take actions. Prompt templates generate a response based on the input and logic you provide at design time.
Prompt templates can be used on their own in an embedded AI solution, or you can add a prompt template to an agent via agent actions. Using a prompt template on its own is ideal when:
Prompt template use cases:
Keep in mind that while prompt templates can dynamically fill in data and generate responses based on inputs rendered at run time, they cannot reason through options or take action.
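Because prompt templates are stateless and single-turn, their behavior can be thought of as a pure function of the template and the inputs supplied for that one call. Here is a minimal sketch of that idea; the function name and the `{!Field}` merge syntax are used for illustration only, not as a Salesforce API:

```python
def render_prompt_template(template: str, inputs: dict) -> str:
    """Fill a template's merge fields with data resolved at run time.

    Stateless: the output depends only on the template and the inputs
    passed for this single turn; nothing is remembered between calls,
    and no decisions or actions are taken.
    """
    prompt = template
    for field, value in inputs.items():
        prompt = prompt.replace("{!" + field + "}", str(value))
    return prompt

# Single-turn use: each call stands alone, with no conversation memory.
template = "Summarize this case for a service rep: {!Case.Description}"
prompt = render_prompt_template(
    template, {"Case.Description": "Customer reports a login loop on mobile."}
)
print(prompt)
```

Calling the function twice with different inputs produces two independent results, which is exactly why prompt templates suit one-off tasks like summarization or rewording.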
Agents are software systems that autonomously decide what to do, in what order, and how to do it, based on evolving context. Agents go beyond a single prompt as they can plan, reason, call external actions (like API calls or database lookups), and react based on outcomes. They can choose different paths or responses depending on what they learn mid-process. Agents are best when:
AI agent use cases:
Let’s explore how Agentforce understands user requests and decides what actions to take. This section will walk you through the Reasoning Engine at the core of Agentforce’s decision-making process. Just like understanding the order of execution is key to understanding what’s happening when a record gets saved in Salesforce, knowing how the Reasoning Engine operates behind the scenes is key to how Agentforce works.
The Reasoning Engine uses a series of prompts, code, LLM calls, and a set of three key building blocks to help agents understand and respond effectively. Think of the following three elements – subagents, instructions, and actions – as the levers you control to make agents work for you. When you adjust these elements, you’re engineering the prompts that the Reasoning Engine uses to understand, decide, and act. That’s right: Agentforce uses prompts in the Reasoning Engine to classify subagents and actions. You’re prompt engineering every time you build an agent in the legacy agent builder.
Before we dive deeper into the Reasoning Engine, let’s take a closer look at subagents, instructions, and actions, three important pieces of metadata that you define each time you build an agent with Agentforce.
Subagents are the foundation of your agent’s capabilities, defining what it can do and the types of customer requests it can handle. Think of them as specialized departments, each with its own expertise, instructions, and actions. When a customer sends a message, your agent first determines which “department” (subagent) should handle the request, then follows that department’s guidelines and uses its tools to help the customer. Subagents also have a scope that defines what an agent can and cannot do within that specific topic area.
Instructions are the guidelines that direct how conversations are handled within a subagent, guiding action selection, setting conversation patterns, and providing business context. Clear and distinct subagents prevent overlap and ensure the Reasoning Engine correctly classifies customer requests. Instructions should be clear, specific, and actionable to guide the agent effectively.
Your agent uses actions to get information or perform tasks. When defining actions, it’s crucial to understand how the Reasoning Engine processes them. The engine reviews available actions based on their names, descriptions, and inputs, as well as the subagent instructions and conversation context. Agentforce comes with a number of standard agent actions, and you can create custom agent actions to further extend your implementation. Always check to see if a standard action can be used before creating a custom action. Design actions with reusability in mind, as they can be used across multiple subagents. Below is a list of the custom agent actions available and when you should use them.
| Component | When to Use | Skills Required | Additional License Required? |
|---|---|---|---|
| Prompt Template | To invoke an LLM to generate a response. Prompt template actions are one way an agent uses RAG. | Low-code | Yes |
| Flow | To run low-code rules-based automation and record retrieval | Low-code | No |
| Apex code | To run pro-code rules-based automation and record retrieval | Pro-code | No |
| MuleSoft API | To retrieve data from legacy systems and other external applications in a complex enterprise environment | Pro-code | Yes |
| External Service | To retrieve data from REST APIs that support OpenAPI specs | Low-code | Yes |
| Predictive Model | To use predictive AI with your agent | Low-code | Yes |
You might be wondering exactly how an agent uses subagents, instructions, and actions to get work done. Here’s a step-by-step breakdown of what happens inside the Reasoning Engine whenever an agent is invoked.
This legacy diagram uses the term “topics” for what we now call subagents.
The process begins when a message or query is received from a user, or when an agent is invoked from an event, data change, or API call.
The Reasoning Engine analyzes the user’s message to classify it under the most relevant subagent. For this classification step, the Reasoning Engine looks at subagent name and subagent classification description only. If no appropriate subagent matches, it uses a default classification.
The scope, instructions, and actions associated with the selected subagent are injected into the prompt alongside the original user message and conversation history, typically the last six turns. The resulting prompt is sent to the LLM to determine what the agent should do next.
The agent analyzes the combined input (user message, instructions, potential actions) and decides the next step:
Before sending the final response, the agent performs one last check to ensure its proposed response is grounded in and adheres to the provided instructions for the subagent. This step checks that the response:
The final, validated response is sent to the user. If the grounding step fails, the agent will retry and attempt to produce a grounded response. If it’s not able to produce a grounded response, it will send a standard message to inform the user it can’t help with the request.
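The steps above can be condensed into a simplified loop. This is an illustrative sketch only; every name below (including `classify_subagent`, `grounded`, and the dict structure) is hypothetical and not the actual Agentforce internals:

```python
def classify_subagent(message, subagents):
    """Step 2: pick a subagent using only its name and classification
    description (approximated here by keywords); fall back to a
    default classification when nothing matches."""
    for sub in subagents:
        if any(kw in message.lower() for kw in sub["keywords"]):
            return sub
    return {"name": "Default", "keywords": [], "instructions": [], "actions": []}

def grounded(response, instructions):
    """Step 5 placeholder: the real engine validates the response
    against the selected subagent's instructions."""
    return bool(response)

def reason(message, history, subagents, llm):
    # Step 1: a message, event, or API call invokes the agent.
    # Step 2: classify the message under a subagent.
    sub = classify_subagent(message, subagents)
    # Step 3: inject scope, instructions, and actions into the prompt,
    # along with roughly the last six turns of conversation history.
    prompt = {
        "message": message,
        "history": history[-6:],
        "instructions": sub["instructions"],
        "actions": sub["actions"],
    }
    # Step 4: the LLM decides the next step (respond or call an action).
    response = llm(prompt)
    # Step 5/6: grounding check, with one retry before the fallback message.
    if not grounded(response, sub["instructions"]):
        response = llm(prompt)
        if not grounded(response, sub["instructions"]):
            return "Sorry, I can't help with that request."
    return response
```

The key structural point the sketch captures: classification happens before the LLM ever sees instructions or actions, and the grounding check happens after it responds.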
Understanding this workflow helps explain why each component of your agent — subagents, instructions, and actions — must be carefully designed to work with this reasoning process. But it doesn’t stop there.
To provide additional control and add deterministic logic to your agentic workflow, Agentforce uses Conditional Filtering. It’s like giving your agents the same dynamic visibility as conditional form fields – so they see exactly what they need, when they need it.
Conditional filters act as gatekeepers that determine whether to consider a subagent or action during the reasoning process. Unlike instructions that guide the LLM’s decisions, filters operate at a system level to completely remove or include subagents and actions based on specific conditions.
Conditional filtering enhances the agent’s performance in two critical ways:
1. Improved Subagent Classification Accuracy
By removing irrelevant subagents from consideration based on conversation state, you reduce the “semantic noise” during the subagent classification process. This makes it easier for the LLM to select the correct subagent for a user query.
For example, if a user hasn’t yet authenticated, filters can hide all subagents related to account-specific actions. This prevents the agent from misclassifying general queries into incorrect subagents that would ultimately lead to authentication errors or inappropriate responses.
2. Contextually Appropriate Action Selection
Once a subagent is selected, filters further refine which actions within that subagent are available based on the current conversation state:
How Conditional Filtering Works
The Reasoning Engine supports filtering based on two types of variables: context variables and custom variables. This table shows the properties of each variable type.
| Property | Context Variables | Custom Variables |
|---|---|---|
| Can be instantiated by user | No | Yes |
| Can be input of actions | Yes | Yes |
| Can be output of actions | No | Yes |
| Can be updated by actions | No | Yes |
| Can be used in filters of actions and subagents | Yes | Yes |
| Supported types | Text/Number | Text/Number |
These are variables derived from the messaging session and can include:
Context variables are particularly useful for personalizing interactions based on known customer information without requiring the agent to ask for it conversationally. When designing a solution with context variables, it’s important to be aware that they are set at session initiation and are immutable during that session.
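Given the properties in the table above, the two variable types could be modeled like this. This is an illustrative sketch only, not Salesforce metadata, and the variable names (`channelType`, `orderNumber`) are hypothetical examples:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ContextVariable:
    """Derived from the messaging session and set at initiation;
    immutable for the rest of the session, but readable as action
    inputs and in subagent/action filters."""
    name: str
    value: str

@dataclass
class CustomVariable:
    """Can be instantiated by the user, written as action outputs,
    and updated by later actions; also usable in filters."""
    name: str
    value: Optional[str] = None

session = {
    "channelType": ContextVariable("channelType", "web"),  # known at session start
    "orderNumber": CustomVariable("orderNumber"),          # filled by an action later
}

# An action output updates a custom variable mid-session:
session["orderNumber"].value = "ORD-1042"
# Attempting the same on a context variable would raise FrozenInstanceError.
```

The `frozen=True` dataclass mirrors the immutability rule: context variables personalize the session from the start, while custom variables accumulate state as actions run.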
Custom variables store information returned from actions. These can be used for:
Filters are based on the values of context and custom variables. Filters can be applied at both the subagent and action level:
Here is a simple view of the Reasoning Engine that shows how subagent-level and action-level filters fit into the reasoning flow.
This legacy diagram uses the term “topics” for what we now call subagents. We recommend building agents in the new Agentforce Builder (see new guide linked at top of this Legacy Guide) to enjoy the updated controls built into the new Reasoning Engine.
The most common use case for filtering is controlling access to sensitive operations:
Filter: "Requires Authentication"
Condition: authenticationStatus = "verified"
Applied to: Account Management Subagent, Payment Processing Subagent
This ensures that even if a user asks about their account or payments before authenticating, the agent will not allow these subagents to be called.
Filters can also help process steps run in the correct order:
Filter: "Order Number Required"
Condition: orderNumber != null
Applied to: Check Order Status Action, Modify Order Action
This ensures order-related actions are only available after an order number has been collected and stored in a variable.
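The two filters above amount to simple gate checks that run before the LLM ever sees a subagent or action. Here is an illustrative sketch of that gatekeeping, not the engine’s actual implementation:

```python
def requires_auth(variables):
    # Filter: "Requires Authentication"
    return variables.get("authenticationStatus") == "verified"

def order_number_required(variables):
    # Filter: "Order Number Required"
    return variables.get("orderNumber") is not None

def visible(items, variables):
    """System-level gatekeeping: anything whose filter condition fails
    is removed from the reasoning process entirely, before the LLM
    sees it, reducing semantic noise during classification."""
    return [item for item in items if item["filter"](variables)]

subagents = [
    {"name": "Account Management", "filter": requires_auth},
    {"name": "Payment Processing", "filter": requires_auth},
    {"name": "General FAQ", "filter": lambda v: True},
]

# Before authentication, only the general subagent is considered:
print([s["name"] for s in visible(subagents, {})])
# After verification, all three are in play:
print([s["name"] for s in visible(subagents, {"authenticationStatus": "verified"})])
```

The same `visible` check applies at the action level, which is how an order-number filter keeps Check Order Status and Modify Order out of reach until the number has been collected.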
It’s important to understand the distinction between filtering and instructions so that you can build agents that are both reliably accurate and contextually adaptive.
Another part of the Reasoning Engine is citations. You can use citations to validate the sources used by the LLM to generate a response. The diagram below shows where citations fit into the Reasoning Engine flow.
This diagram also highlights the composable architecture of the Reasoning Engine. Escalation, citations, and guardrails are modular components used by the Reasoning Engine when building an agent using the Agentforce for Service template. Today, the modular components used by the Reasoning Engine are set on a template-by-template basis. We’re working to make these components even more like Lego pieces that can be swapped in and out of an agent, potentially even by customers in the future.
This legacy diagram also uses the term “topics” for what we now call subagents.
We’ve covered a lot already. Now let’s take a step back and walk through a complete example of how subagents, instructions, and actions work together with the Reasoning Engine when a customer asks an agent a question.
Customer Message: “I ordered a red sweater yesterday but I need to change the delivery address.”
Now that you understand how subagents, actions, and instructions engineer prompts that drive the Reasoning Engine, let’s take a look at some best practices for creating them.
Subagents are the foundation of your agent’s capabilities. They define what your agent knows how to do and what kinds of customer requests it can handle. The three elements of a subagent are the subagent name, classification description, and scope.
The subagent name is one of the key metadata points the Reasoning Engine uses during the subagent classification step. It's critical that the name clearly and concisely defines the subagent’s core function to ensure the correct match for a user's request. Let’s explore what makes a good name.
| Bad Example | Good Example | Why It’s Better |
|---|---|---|
| Customer Info | Provide order status and details | Clearly describes the job to be done |
| Help | Answer technical questions | Specifies the type of help provided |
| Transactions | Help update payment details | Specifies the type of help provided |
The subagent classification description describes what user messages should trigger the subagent. Used alongside the subagent name in the classification step, this description is critical for helping your agent understand when to use this subagent.
| Bad Example | Good Example | Why It’s Better |
|---|---|---|
| Handle order-related questions. | Provide customers with updates on their order details and status after validating their order number. | Clarifies subagent scope. |
| Help with accounts. | Assist users with login issues, account creation, and password resets. | More specific; enables agent to pick the correct subagent. |
| Verify before handling payment issues. | Help users add or update their payment information, including credit cards and PayPal details. | Describes the tasks the subagent handles instead of embedding a process step. Reminder: Use conditional subagent filters for higher determinism. |
If your agent consistently fails to select the correct subagent for user queries, subagent names and descriptions are the first place you should investigate and refine.
Scope defines the boundaries of what your agent can and can't do within this subagent.
| Bad Example | Good Example | Why It’s Better |
|---|---|---|
| Handle order questions and issues. | Your job is only to answer questions related to a customer’s order status, return status, or return and repair policy. Never initiate or generate an order or return. | Sets clear boundaries on what the agent should and shouldn’t do. |
| Help with login problems. | Your job is only to help customers who cannot log in by resetting their password or looking up their username. You cannot update account information or modify permissions. | Explicitly states activities the subagent can perform and boundaries. |
Let’s see how to configure a subagent at design time so that an agent can help users reset their passwords. This is what the subagent, instructions, and actions might look like:
| Component | Content |
|---|---|
| Subagent Name | Password Reset |
| Classification Description | Assist customers who have forgotten passwords, can’t log in, need credential resets, are locked out, or are experiencing login problems. Help users change passwords or recover account access. |
| Scope | Your job is only to help customers reset passwords or recover usernames. You can verify identity via email/phone and initiate password resets. You cannot access account details beyond verification or modify any customer information other than passwords. |
| Instruction |
|---|
| Ask which verification method the customer prefers (email or phone) before proceeding with identity verification. |
| Use Verify Customer Email or Verify Customer Phone action based on customer preference. Don’t attempt password reset until verification succeeds. |
| After verification, explain the reset process: “I’ll send a secure reset link to your email that expires in 24 hours.” |
| Use Security Question Verification only if the customer can’t access their registered email/phone. |
| After completing a reset, ask if they need help with anything else related to account access. |
| Action Name | Description | Input(s) |
|---|---|---|
| Verify Customer Email | Verifies identity by matching email to an account. Returns verification status and customer ID if successful. | Email Address: Customer’s email (format: example@domain.com). |
| Verify Customer Phone | Verifies identity by sending a code to the customer’s phone. Use when email verification isn’t possible. | Phone Number: 10-digit number without special characters. |
| Send Password Reset Email | Sends a 24-hour expiry reset link to verified email. Use only after successful verification. | Customer ID: Verified ID from successful verification |
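Pulled together, the design-time configuration above is a small bundle of metadata. Here is one way it might be represented, as an illustrative structure only (Agentforce stores this as platform metadata, not Python):

```python
# Illustrative bundle of the Password Reset subagent's design-time metadata.
password_reset_subagent = {
    "name": "Password Reset",
    "classification_description": (
        "Assist customers who have forgotten passwords, can't log in, "
        "need credential resets, are locked out, or are experiencing "
        "login problems."
    ),
    "scope": (
        "Your job is only to help customers reset passwords or recover "
        "usernames. You cannot access account details beyond verification "
        "or modify any customer information other than passwords."
    ),
    "instructions": [
        "Ask which verification method the customer prefers (email or phone).",
        "Don't attempt password reset until verification succeeds.",
        "After verification, explain the reset process.",
        "Use Security Question Verification only as a fallback.",
        "After completing a reset, offer further account-access help.",
    ],
    "actions": [
        {"name": "Verify Customer Email", "inputs": ["Email Address"]},
        {"name": "Verify Customer Phone", "inputs": ["Phone Number"]},
        {"name": "Send Password Reset Email", "inputs": ["Customer ID"]},
    ],
}
```

Seeing the pieces side by side makes the division of labor clear: the name and classification description drive subagent selection, the scope bounds behavior, and the instructions sequence the three actions.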
At runtime, here’s what happens when a customer is interacting with our agent from a company’s website:
Instructions are the guidelines that tell your agent how to handle conversations within a subagent. Instructions help the agent make decisions about what actions to take and how to respond.
Instructions play several crucial roles in your agent’s decision-making process:
Without clear instructions, your agent might select the wrong actions, misunderstand user requests, or provide inconsistent responses. But remember that instructions are merged into a prompt and sent to the LLM, and are therefore non-deterministic. They do not replace the need for coded business rules within the action.
When the Reasoning Engine processes a customer request, it uses your instructions to:
The more clear and specific your instructions are, the more consistently your agent will perform.
When building your agent, it’s crucial to understand when to use instructions vs. actions to implement functionality.
Use actions for critical business logic that must be consistently enforced, such as complex calculations, sensitive information processing, and multi-step operations that require specific sequencing.
In contrast, use instructions for guiding conversation flow, helping the agent select appropriate actions based on context, defining response formatting and tone, and establishing clarification strategies when information is ambiguous.
Refund Order Action Example:
public with sharing class RefundOrderHandler {
    public class RefundResult {
        @AuraEnabled public Boolean canReturn;
        @AuraEnabled public String message;
    }

    @AuraEnabled
    public static RefundResult processRefund(Id orderId, Date orderDate) {
        RefundResult result = new RefundResult();
        // Guard clause: both inputs are required
        if (orderDate == null || orderId == null) {
            result.canReturn = false;
            result.message = 'Invalid input: Order ID and Order Date are required.';
            return result;
        }
        // daysBetween counts forward from the date it's called on, so
        // call it on orderDate to get the days elapsed since the order
        Date today = Date.today();
        Integer daysSinceOrder = orderDate.daysBetween(today);
        if (daysSinceOrder > 30) {
            result.canReturn = false;
            result.message = 'Order cannot be returned. More than 30 days have passed.';
        } else {
            result.canReturn = true;
            result.message = 'Order can be returned. Sending return slip.';
            sendReturnEmail(orderId);
        }
        return result;
    }

    // Helper stub: sends the return slip to the customer (implementation omitted)
    private static void sendReturnEmail(Id orderId) {
        // ...
    }
}
Here are some examples of instructions that work well with the Reasoning Engine.
| Bad Example | Good Example | Why It’s Better |
|---|---|---|
| Get the customer’s order details. | If a customer inquires about their order status, offer all lookup options, including email address, order date, or order ID. | Provides specific guidance and uses language similar to the action name. |
| Help with device issues | Before using the Answer Questions with Knowledge action to retrieve troubleshooting information, clarify what type of device (iOS or Android). Include the device type in the SearchQuery of the Answer Questions with Knowledge action. | Gives clear instruction on what information to gather first and specifies which action to use. |
| Use knowledge for product questions. | For questions about product features, first identify which specific product the customer is asking about. Then use the Knowledge action with the exact product name to retrieve accurate information. | Provides a clear sequence of steps and specifies how to make the action more effective. |
| Check if customers need help. | After providing information on shipping status, always ask if the customer needs help with anything else related to their order. | Specific about when and how to follow up. |
Actions are the tools your agent uses to get information or perform tasks.
When your agent handles a customer request, the Reasoning Engine:
For this process to work effectively, your actions need clear, descriptive names and instructions that help the Reasoning Engine understand when and how to use them. To minimize latency and improve performance, don't assign more than 15 actions to a subagent, and remember that actions can be reused across subagents.
Each action in your agent has three important parts that need to be configured: action name, action instructions, and action input instructions.
Identify each action with a clear, descriptive name so the Reasoning Engine knows exactly what the action does.
| Bad Example | Good Example | Why It’s Better |
|---|---|---|
| GetOrderInfo | LookupOrderStatus | Clearly describes what information the action provides |
| UpdateContactRecord | UpdateCustomerPhoneNumber | Specifically describes what is being updated |
| ProcessPmt | ProcessPayment | Avoids abbreviations for clarity |
Action instructions tell the Reasoning Engine what the action does and when to use it. These instructions are critical for helping your agent select the right action at the right time.
| Bad Example | Good Example | Why It’s Better |
|---|---|---|
| Updates a phone number. | Updates the user’s phone number associated with their record. If there is no matching record, it will create a new record. | Explains what the action does and how it handles edge cases. |
| Gets tracking information. | Returns tracking information for a customer order based on the tracking number and destination ZIP code. | Explains when to use this action and what information it requires. |
| Provides knowledge. | Searches the knowledge base for answers to user questions about products, policies, or procedures. Use this action when the user asks “how to” questions or needs information that isn’t specific to their account. | Explains when the action should be used in the conversation flow. |
| Checks account. | Verifies if a customer account exists and returns account status information. Use this action when customers are trying to determine if they already have an account or if their account is active. Requires either an email address or phone number to perform the lookup. | Clearly explains the purpose, when to use it, and what information is needed. |
Best practices for action instructions:
Action input instructions define what information the action needs and how the agent should collect it from the customer. Clear input instructions help the agent gather the right information in the right format.
| Bad Example | Good Example | Why It’s Better |
|---|---|---|
| Enter order ID. | The order ID is an 18-character alphanumeric identifier. | Provides format details. |
| Customer email. | The customer’s email address used for account verification. Format should be a valid email address (example@domain.com). | Specifies format and validation requirements. |
| Search query. | A detailed search query describing the user’s question. Include specific product names, error codes, or symptoms mentioned by the user to improve search results. For technical issues, always include the device type (iOS/Android) and app version if mentioned. | Explains how to construct an effective query with specific elements to include. |
| Phone number. | The customer’s 10-digit phone number without spaces or special characters. If the customer provides a number with formatting (like 555-123-4567), remove the special characters before passing to the action. | Provides clear formatting instructions and handling guidance. |
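The phone-number row above implies a normalization step before the value reaches the action. Here is a minimal sketch of that preprocessing; `normalize_phone` is a hypothetical helper for illustration, not part of Agentforce:

```python
import re

def normalize_phone(raw: str) -> str:
    """Strip spaces and special characters so the action receives a
    bare 10-digit number, per the action input instruction."""
    digits = re.sub(r"\D", "", raw)  # drop every non-digit character
    if len(digits) != 10:
        raise ValueError("Expected a 10-digit phone number, got: " + raw)
    return digits

print(normalize_phone("555-123-4567"))    # -> 5551234567
print(normalize_phone("(555) 123 4567"))  # -> 5551234567
```

In practice the agent performs this cleanup conversationally because the input instruction spells out the expected format, which is exactly why format details belong in the instruction.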
Best practices for action input instructions:
This is a question we hear from our customers often. The short answer: yes. Data 360 is an integral part of Agentforce because the Data 360 architecture is used for certain features in Agentforce, like Agent Analytics and Digital Wallet. Data 360 infrastructure also powers indexing and unstructured data searches, as well as feedback logs and audit trails. Data 360 can also provide additional extensibility: Agentforce customers can enable features like Bring Your Own Lake (BYOL) and Bring Your Own Large Language Model (BYO-LLM) to use data and models built on platforms outside of Salesforce with agents built on Agentforce.
From accessing data from other data lakes through data federation, to using the hyperscale infrastructure for petabyte-scale data, utilizing Data 360's architecture with Agentforce ensures that customers experience better AI outcomes today. This powerful architecture also ensures long-term viability for successful agent adoption, no matter how big or complex the underlying datasets that power those agents may be.
Curious what specific Agentforce features are powered by Data 360? The following table details the Agentforce features Data 360 provisions by default, along with the optional features customers can enable to extend their implementation.
| Agentforce Feature Powered by Data 360 | Description | Provisioning |
|---|---|---|
| Data Library Automation | Automates creation of search indexes and retrievers to support agent actions like Answer Questions with Knowledge | Provisioned by Default |
| Agent Analytics | Streams usage data to Data 360 for Reports and Dashboards | Provisioned by Default |
| Retrieval Augmented Generation (RAG) | Enables users to augment their prompts with data from Salesforce and Data 360, retrieved at inference time | Provisioned by Default |
| Audit Trail & Feedback Logging | Generative AI audit data | Optional |
| Bring Your Own Large Language Model (BYO-LLM) | Allows users to use their own LLM | Optional |
| External Data Sources (non-CRM) | Enables users to ground AI-generated responses with external sources | Optional |
| Unstructured Data | Enables users to ground AI-generated responses in unstructured data | Optional |
| Real Time Data Graphs | Enables near real-time grounding of AI-generated responses using normalized data from multiple Data 360 sources | Optional |
We’ve covered the key elements that make Agentforce work, including the Reasoning Engine, and how to use subagents, instructions, and actions. Understanding these components is key to using Agentforce effectively. Use this guide to improve outcomes as you implement Agentforce. Check out the provided resources to learn more.
Find blogs, guides, demo videos, and more resources at Agentblazer.com and Agentforce.com
Agentforce is Salesforce’s platform for building agents that go beyond simple chat interactions. Unlike standard generative AI tools, these agents can autonomously plan, reason, and take action to achieve specific goals, with or without a human in the loop.
Agentforce has evolved from basic AI interactions into a comprehensive development lifecycle within Agentforce Studio, introducing the Agentforce Builder and Agent Script for enhanced deterministic control. This shift includes rebranding "Topics" as Subagents to better define their specialized functional roles. Ultimately, the platform has transitioned from a prompt-centric approach to a hybrid-reasoning model, prioritizing reliable logic over probabilistic natural language prompts.
Yes! See https://www.salesforce.com/agentforce/guide/
While these guides provide technical detail about how Agentforce works, they're not official implementation guides with click paths and troubleshooting tips. Find official Agentforce Implementation Guides on Salesforce Help.
The Agentforce Guide is about building AI agents using the Agentforce platform on Salesforce, covering core elements such as agents, subagents, instructions, actions, and the Reasoning Engine.
The legacy guide is intended for technical practitioners and architects involved in building and deploying AI agents built before December 2025 using Agentforce as it existed before Agentforce Script and hybrid reasoning.
The guide covers Agentforce Fundamentals, the difference between Prompts and Agents, How Agentforce Reasons, Best Practices for various components, and whether Agentforce needs Data 360.
Agentforce enhances enterprise productivity by introducing AI agents that can autonomously plan, reason, and act, reducing manual effort and increasing efficiency.
The key benefits include the ability of agents to adapt to different situations, plan effectively, and use tools autonomously or with human intervention, as well as the importance of Data 360 in powering various Agentforce features.
Yes, the guide provides some implementation advice, including strategic planning, defining subagents and their scope, writing clear instructions, and best practices for configuring actions. However, while this guide provides technical detail about how Agentforce works, it's not an official implementation guide with click paths and troubleshooting tips. Find official Agentforce Implementation Guides on Salesforce Help.
Agentforce addresses responsible AI through mechanisms such as filtering, grounding checks, and careful design of actions and instructions to ensure agents behave responsibly and provide accurate responses.