Agentforce Guide to Achieving Reliable Agent Behavior: A Framework for 6 Levels of Determinism

Flowchart graphic showing the Agentforce building blocks.
Graphic showing the control levels for increased agent behavior.
Flow chart graphic showing a high-level decision tree of the Agentforce Reasoning Engine.

Agent Invocation (step 1)
The agent is invoked.

Classify Topic (steps 2-3)
The engine analyzes the customer's message and matches it to the most appropriate topic based on the topic name and classification description.
Note: Agent Script transforms the Topic Selector into a fully configurable element, eliminating the "black box" of probabilistic LLM routing. By treating navigation as a programmable topic, you gain full transparency and control, allowing you to align the agent's decision-making logic precisely with your specific business requirements and architectural standards.

Execute Topic's Agent Script and Build Instructions / Resolve Instructions and Available Actions (steps 4-5)
The engine executes scripted actions as dictated by instructions. These actions run once a topic is chosen, before the system proceeds to evaluate the non-deterministic instructions or the rest of the conversational context.

Prompt and Conversation History Sent to LLM (step 6)
Once all scripted actions are executed, a prompt with the topic scope, instructions, and available actions, along with the conversation history, is sent to the LLM.
Note: Instructions are covered in Level 2, Agent Instructions.

LLM Decides to Respond or Run an Action (step 7)
Using all of this information, the engine determines whether to:
• Run an action to retrieve or update information
• Ask the customer for more details
• Respond directly with an answer
If the LLM decides to respond, the flow jumps to step 12.

Action Execution (steps 8-9)
If an action is needed, the engine runs it and collects the results.

Run After-Action Logic (step 10)
Only applicable with Agent Script: actions can have deterministic transitions to other actions or topics. These transitions always execute after the action completes.

Action Output Returned + Action Loop (step 11)
The engine evaluates the new information and decides again what to do next: run another action, ask for more information, or respond.

Grounding Check - LLM Responds to Client (step 12)
Before sending a final response, the engine checks that the response:
• Is based on accurate information from actions or instructions
• Follows the guidelines provided in the topic's instructions
• Stays within the boundaries set by the topic's scope
Note: With Agent Script, you can add a step to format the final answer.
The grounded response is sent to the customer.
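Steps 4-5 and 10 are where Agent Script replaces probabilistic routing with scripted control. The sketch below shows a deterministic after-action transition; the topic and action names are hypothetical, and the exact syntax may vary by Agent Script release:

```
topic order_support:
  reasoning:
    instructions: ->
      # Scripted action: always runs once this topic is chosen (steps 4-5)
      run @actions.lookup_order with order_id=@variables.order_id
      # After-action logic (step 10): a deterministic transition that always
      # executes after the action completes, before the LLM reasons further
      if @outputs.status == "return_in_progress":
        @utils.transition to @topic.handle_returns
      | Summarize the order status for the customer.
```

Because the transition lives in the script rather than the prompt, it runs the same way in every conversation that reaches it.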
Graphic showing the flow of Topic Classification from agent conversation to plan.
Graphic showing the flow of classifying actions from an agent conversation to a plan.
Graphic showing the looping over next action classification in the flow from agent conversation to plan.
Graphic showing the reasoning engine in action in the flow from an Agent conversation to plan.
Salesforce UI showcasing plan tracing within Agent reasoning.
Flowchart graphic showing an Agent flow with RAG between Platform and Data 360.

Capability | Context Variables | Custom Variables
Can be instantiated by the user | No | Yes
Can be input of actions | Yes | Yes
Can be output of actions | No | Yes
Can be updated by actions | No | Yes
Can be used in filters of actions and topics | Yes | Yes
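The distinction in the table can be seen in script: a custom variable is populated from an action output and then drives a deterministic filter. The declaration block below is illustrative, and the exact declaration syntax may differ:

```
variables:
  stock_level: number   # custom variable, mutable by the script

reasoning:
  instructions: ->
    # Custom variables can receive action outputs...
    run @actions.check_inventory with sku=@variables.current_sku
    set @variables.stock_level = @outputs.quantity_available
    # ...and variables can drive filters on actions and topics
    if @variables.stock_level == 0:
      @utils.transition to @topic.handle_backorder
```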
Flowchart graphic showing the retrieving, setting, and using stages of troubleshooting.
Flowchart graphic showcasing an Agent using filters for troubleshooting or providing resolution.
Flowchart graphic showing a marketing journey.
Graphic showing the control levels for increased agent behavior.


reasoning:
  before_reasoning:
    # Deterministic: this runs automatically upon topic entry.
    # The LLM has no choice here. It simply receives the output.
  instructions: ->
    # Now, the LLM is prompted with the result already in context.
    | You are speaking to a customer. Their VIP status is {!@variables.is_vip}.
    # Any further instructions (normal reasoning) go next.
    | Whatever instructions the agent needs for reasoning.


reasoning:
  instructions: ->
    if @variables.is_vip == true:
      # Skip credit check for VIPs deterministically
      run @actions.apply_auto_approval
      | Inform the customer their loan is auto-approved due to VIP status.
    else:
      # Enforce credit check for everyone else
      run @actions.initiate_credit_check
      | Tell the customer we are checking their credit score now.


if @variables.stock_level == 0:
  # Immediately hand off to the "Backorder" topic
  @utils.transition to @topic.handle_backorder



# Explicitly bind an action's output to a variable
run @actions.check_inventory with sku=@variables.current_sku
set @variables.stock_level = @outputs.quantity_available



reasoning:
  instructions: ->
    run @actions.get_incident_status with zip=@user.zip
    set @variables.is_outage = @outputs.active_incident
    | If {!@variables.is_outage}, acknowledge the specific incident immediately.


if @variables.credit_score < 600:
  # The agent is physically blinded to the "Increase Credit" instructions.
  # It only sees "Debt Counseling" instructions that are fetched through RAG.
  | Focus solely on explaining credit repair resources. Insert $Debt_Counseling_Retriever.results
else:
  | You are authorized to offer a limit increase up to $5k.


if @variables.safety_check_complete == false:
  # Prevent the user from ending the topic
  | Acknowledge the user's side-note, then pivot back to the required field: {!@variables.missing_field}.
  @utils.stay_in_topic




# The LLM cannot summarize or "rewrite" this. It is forced to output it.
| "Disclaimer: I am an AI agent. I cannot provide financial advice."

Summary Table: The Architect’s Cheat Sheet

Feature | Levels 1-5 (guided autonomy) | Level 6 (Agent Script)
Primary driver | Probabilistic engine (the LLM decides) | Deterministic graph (code decides)
Logic source | Natural-language prompts | if/else logic, state management, transition logic
Action execution | "Agent, here is a tool. Use it if you want." | "Agent, run this tool. Now."
Context memory | Implicit through the LLM context window (except when using Level 4) | Explicit through mutable variables used throughout the script
Use case examples | Knowledge search, shopping, creative writing | Authentication, payments, compliance, diagnostics
Build effort | Low (mainly prompting) | Medium/high (scripting and logic)
Risk tolerance | Medium | Low (zero-trust)

AI Determinism FAQs

The six levels of determinism in AI are: instruction-free topic and action selection; agent instructions; data grounding; agent variables; deterministic actions using flows, Apex, and APIs; and agentic control with Agent Script.

Understanding AI determinism is crucial for building reliable agents that can perform critical business functions accurately and consistently, striking a balance between creative fluidity and enterprise control.

In AI, "deterministic" refers to the ability of a system to produce the same output given the same input and conditions, imposing a rigidity and discipline essential for reliable agent behavior.

Non-determinism in AI systems arises primarily due to the use of Large Language Models (LLMs), which are non-deterministic by nature, allowing agents to be flexible and adaptive.

Each successive level increases an agent's determinism at the cost of autonomy: as the levels progress, agents become less autonomous but more reliable and better aligned with business processes.

Less deterministic AI systems present challenges in terms of reliability and compliance with business requirements, as their inherent non-determinism can lead to unpredictable behavior.

Businesses manage AI systems with varying levels of determinism by applying a layered approach that includes thoughtful design, clear instructions, data grounding, state management through variables, and deterministic process automation using flows, Apex, and APIs.
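That layered approach can be sketched in a single topic, with each layer handled by a different mechanism. All names below are illustrative, a sketch rather than a definitive implementation:

```
topic billing_dispute:
  reasoning:
    instructions: ->
      # Deterministic action layer: a flow, Apex class, or API sits behind the action
      run @actions.verify_identity with case_id=@variables.case_id
      # Variable layer: explicit state instead of implicit LLM memory
      set @variables.is_verified = @outputs.verified
      if @variables.is_verified == false:
        # Instruction layer: the LLM handles the conversational turn
        | Ask the customer to confirm their account details before proceeding.
      else:
        # Grounding layer: the response must draw on retrieved record data
        | Using the dispute record, explain the next steps in plain language.
```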