Salesforce Headless 360: What the Agent Consumer Means for Your Integration Architecture

Understand where the new platform capabilities fit within established Salesforce integration patterns, and what shifts when agents become consumers of your org.
When Salesforce launched the SOAP API in 2000, it made a bet that proved prescient: expose the platform, and developers will build on it in ways no single company could match alone. At the time, most internet businesses treated APIs primarily as internal infrastructure. Salesforce treated its API as a business model, turning a CRM product into a platform, seeding the AppExchange ecosystem, and establishing an API-first design philosophy that still shapes how the company thinks about platform access today.
Salesforce Headless 360 is the same kind of move. The 60+ Model Context Protocol (MCP) tools, the preconfigured coding skills exposed through Headless 360, and the Agentforce Experience Layer extend the platform surface in a way that changes how the platform can be consumed. In 2000, the new consumer was the external developer. In 2026, the new consumer is the agent. The innovation is similar: open the platform to a new set of consumers, and the ecosystem expands in ways that cannot be predicted and cannot be matched by a single product team.
The integration patterns you already know still hold. What changes is who is calling them, at what scale, and with what expectations. Four considerations in particular require your attention:
- Where you enforce business logic
- How you map the new consumer to your existing integrations
- What your metadata layer needs to communicate
- How you validate that your platform behaves correctly when consumed by agents rather than humans
Move business logic enforcement out of the interface
In a user interface, you can hide a field on a page layout so users cannot see it, make a field read-only so users cannot edit it, and enforce a required approval sequence through screen flow navigation. These are legitimate patterns for human users, and they have served most orgs well. The interface does the work so the platform does not have to.
In a traditional integration, you design the sequence. Step A leads to Step B. Flow, Apex, and other traditional automation orchestrate the process inside the org, and the interface guides the user through it. With Salesforce Headless 360, that changes. The agent determines the execution order at runtime based on its own reasoning. The orchestration moves outside the org to the agentic system consuming the platform, and the interface moves with it. If a business rule exists only in the user interface, agents interacting directly with the platform via MCP will not be subject to it.
The architectural requirement is to move enforcement to the center of the platform. Validation rules and triggers execute at the object level, governing both agent and human user regardless of the entry point. Formula fields and roll-up summary fields provide a deterministic record state at the database layer, giving an agent a reliable source of truth for reasoning tasks. Permission sets and sharing rules govern what the agent can access, just as they govern what a human can access.
Start by auditing your org for logic currently isolated in the presentation layer. If a business constraint is essential to process integrity, it belongs in a validation rule, record-triggered flow, or Apex, not in a page layout or a screen flow.
The design work then lies in ensuring that the tools you expose are idempotent (able to handle duplicate calls gracefully), properly scoped, and governed by constraints that hold true regardless of the order in which they are called. You are no longer designing only for a user navigating a user interface. You are now also designing for a consumer that treats your business logic as a catalog of composable capabilities.
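To make the idempotency requirement concrete, here is a minimal, hypothetical sketch (not a Salesforce API) of a tool handler that dedupes on a caller-supplied request key. An agent that retries the same call during a reasoning loop gets the original record back instead of creating a duplicate. The `RecordStore` class, `create_case` method, and key names are all illustrative assumptions.

```python
# Hypothetical sketch: an idempotent "create record" tool handler.
# An agent that retries during a reasoning loop passes the same
# request_key, so the repeated call returns the existing record
# instead of creating a duplicate.

class RecordStore:
    def __init__(self):
        self._records = {}          # record_id -> fields
        self._by_request_key = {}   # request_key -> record_id
        self._next_id = 1

    def create_case(self, request_key: str, fields: dict) -> str:
        # Idempotency check: a repeated call with the same key is a no-op.
        if request_key in self._by_request_key:
            return self._by_request_key[request_key]
        record_id = f"CASE-{self._next_id:04d}"
        self._next_id += 1
        self._records[record_id] = dict(fields)
        self._by_request_key[request_key] = record_id
        return record_id

store = RecordStore()
first = store.create_case("req-001", {"Subject": "Password reset"})
retry = store.create_case("req-001", {"Subject": "Password reset"})
assert first == retry             # the retry did not create a duplicate
assert len(store._records) == 1
```

In a real org, the same effect is typically achieved with an external ID and upsert semantics rather than an in-memory map; the point is that the dedupe decision lives in the tool, not in the caller's discipline.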
Map the new consumer to your existing integration patterns
Salesforce Headless 360 does not introduce new integration patterns. It adds a non-deterministic caller to the patterns you already have, and that changes the implementation requirements for each of these patterns.
Request-and-reply is unchanged as a pattern: a consumer calls a service and waits for a response. What changes is that the caller is now an agent reasoning about which action to take, and the execution order is determined at runtime rather than scripted in advance. You can no longer rely on sequence to maintain integrity. Every tool you expose on this pattern must be idempotent. If an agent retries a call during a reasoning loop, it must not result in duplicate records or inconsistent state.
Data virtualization is also unchanged as a pattern: external data is queried without being moved into Salesforce. What changes is that the consumer is now an agent treating external systems as part of its available toolset. The virtualized metadata must be descriptive enough for an agent to understand the schema it is querying, not just readable by a human. The mechanism you use to expose that surface, whether MCP Bridge, an external service, or another connector, depends on what your external system supports.
Asynchronous patterns introduce a different consideration. When an agent triggers a process that does not resolve immediately, it needs to be configured to wait for a response rather than proceed on an assumption. Unlike a human who can check back later, an agent reasoning through a task will either wait, retry, or fail depending on how it is configured. If you are exposing long-running processes as MCP tools, design explicitly for the async case: define how the agent recognizes completion, how long it waits, and what it does if the response does not arrive.
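The async design points above, recognizing completion, bounding the wait, and defining the failure outcome, can be sketched as a small polling wrapper. This is an illustrative, hypothetical helper, not a platform API; the status strings and function names are assumptions.

```python
# Hypothetical sketch: wrapping a long-running process for an agent
# consumer. The agent-facing client polls for completion with an
# explicit timeout and a defined failure outcome, rather than
# proceeding on an assumption.

import time

def await_completion(check_status, timeout_s=10.0, interval_s=0.5):
    """Poll check_status() until it returns a terminal state.

    Returns the terminal state ("COMPLETED" / "FAILED"), or
    "TIMED_OUT" if the deadline passes first.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = check_status()
        if state in ("COMPLETED", "FAILED"):
            return state
        time.sleep(interval_s)
    return "TIMED_OUT"

# Simulated async job that completes on the third status check.
calls = {"n": 0}
def fake_status():
    calls["n"] += 1
    return "COMPLETED" if calls["n"] >= 3 else "IN_PROGRESS"

print(await_completion(fake_status, timeout_s=5.0, interval_s=0.01))  # COMPLETED
```

The design choice worth noting: "TIMED_OUT" is its own explicit outcome. An agent configured against this wrapper can be told what to do in that case (retry, escalate, or fail) instead of inventing a behavior at runtime.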
For the complete picture of where MCP, MCP Bridge, Agent2Agent (A2A), and traditional APIs sit relative to each other, read How to Choose the Right Integration Pattern for Agentforce and the Salesforce Integration Patterns Fundamentals.
Recognize what agents need from your metadata layer
Agents don’t need UI. They need context. Field definitions, validation rules, visibility conditions, object relationships: these are what an agent reads to understand the data it is working with and the constraints it is operating within. An agent navigating a record doesn’t render a page layout. It reads the metadata layer behind it, and it reasons from what it finds there.
This is where metadata quality becomes an architectural concern in a way it never fully was before. A field named “Status” with no description and fifteen picklist values that only a tenured employee understands is a usability problem when a human uses it. It is a reasoning failure when an agent does. The agent has no institutional knowledge to fall back on. It works from what you have defined, nothing more.
The immediate implication is to treat your object and field definitions as a first-class part of your architecture. Meaningful field names, accurate descriptions, well-governed picklist values, and clear relationship definitions are not just documentation hygiene. They are the context an agent needs to operate correctly. In a traditional model, well-maintained metadata has always reduced total cost of ownership by keeping declarative tools like Dynamic Forms and screen flows easy to build and maintain. In an agentic model, those same metadata definitions become the instruction set an agent reads to reason about your data.
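The "metadata as instruction set" idea can be sketched in a few lines. This is a hypothetical, platform-agnostic illustration (the field name, description, and picklist values are invented), not how Salesforce exposes metadata; it simply shows that when field definitions carry their own constraints, a consumer with no institutional knowledge can still validate its actions against them.

```python
# Hypothetical sketch: field metadata as the agent's instruction set.
# The consumer reads descriptions and governed picklist values from
# the schema instead of relying on institutional knowledge.

FIELD_METADATA = {
    "Status": {
        "description": "Current stage of the support case lifecycle.",
        "picklist": ["New", "In Progress", "Escalated", "Closed"],
    },
}

def validate_field_value(field: str, value: str) -> bool:
    meta = FIELD_METADATA.get(field)
    if meta is None:
        return False  # unknown field: no context to reason from
    allowed = meta.get("picklist")
    return allowed is None or value in allowed

assert validate_field_value("Status", "Escalated")
assert not validate_field_value("Status", "Pending-Review-v2")  # ungoverned value rejected
```

A field with no description and ungoverned values gives this check nothing to work with, which is exactly the reasoning failure described above.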
Test the platform, not the screens
In a traditional development cycle, testing follows a familiar sequence: unit tests validate individual components, system integration testing (SIT) validates that systems interact correctly along scripted paths, and user acceptance testing (UAT) verifies that a human can navigate a series of screens to achieve a result. When the consumer is an agent, each of these phases shifts.
Unit testing is the least disrupted. You are still testing individual Apex methods, validation rules, and flow outcomes in isolation. The difference is that idempotency becomes an explicit test criterion. Every unit needs to behave correctly when called multiple times, in any order, by a caller that may retry without warning.
SIT is where the most significant shift occurs. Traditional SIT validates system interactions along a scripted path. With agents, you need to test system interactions along non-scripted paths. The agent may call your integrations in sequences your SIT scenarios never anticipated. Expand your SIT coverage to include varied tool call sequences, not just the happy path, and verify how your validation rules and triggers behave when tools are called in orders a human would never naturally produce.
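One practical way to cover non-scripted paths is to assert an invariant over every permutation of tool calls rather than one scripted order. The sketch below is a deliberately toy, hypothetical example (the "review"/"close" tools and the invariant are invented) showing the shape of such a test: enforcement lives in the tool, so the invariant survives any ordering an agent might produce.

```python
# Hypothetical sketch: SIT-style check that an invariant holds for
# every tool-call ordering, not just the scripted happy path.

from itertools import permutations

def run_sequence(order):
    # Toy record with the invariant: "closed" implies "reviewed".
    record = {"reviewed": False, "closed": False}
    for tool in order:
        if tool == "review":
            record["reviewed"] = True
        elif tool == "close":
            # Enforcement lives in the tool, not the calling sequence:
            # closing an unreviewed record is rejected, not allowed.
            if not record["reviewed"]:
                continue
            record["closed"] = True
    return record

for order in permutations(["review", "close", "close"]):
    final = run_sequence(order)
    # The invariant must survive any ordering an agent produces.
    assert not (final["closed"] and not final["reviewed"])
```

In an org, the equivalent is varying the order of MCP tool invocations in your SIT scenarios and asserting that validation rules and triggers keep record state consistent in every case.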
UAT breaks down most severely. You cannot UAT an agent the way you UAT a screen. The Agentforce Testing Center replaces that model for agentic consumers, letting you run batch tests across hundreds of scenarios simultaneously and validate that the correct actions fire for a given input. This is where you identify gaps in your enforcement layer before they reach production. If you are testing with external agents, factor in token consumption at the LLM layer as well. Batch testing at this scale will drive costs that human-scale UAT never did.
For performance testing, focus on per-transaction governor limits: SOQL query counts, DML statements, and CPU time. A human navigating a complex record spreads their platform consumption across minutes of interaction. An agent reasoning through the same task may fire the equivalent number of queries and DML operations in a single transaction, without pause. Design your MCP tools with that headroom in mind, keep each tool narrowly scoped to a single responsibility, and test under load conditions that reflect agent-scale consumption rather than human-scale interaction.
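The per-transaction budget concern can be modeled as a simple consumption counter. This is an illustrative analogy, not the platform's limit enforcement; the class and method names are invented, though the default ceilings mirror the familiar per-transaction limits of 100 SOQL queries and 150 DML statements.

```python
# Hypothetical sketch: tracking per-transaction consumption against a
# fixed budget, analogous to governor limits on queries and DML.

class TransactionBudget:
    def __init__(self, max_queries=100, max_dml=150):
        self.max_queries = max_queries
        self.max_dml = max_dml
        self.queries = 0
        self.dml = 0

    def charge_query(self):
        self.queries += 1
        if self.queries > self.max_queries:
            raise RuntimeError("query limit exceeded in a single transaction")

    def charge_dml(self):
        self.dml += 1
        if self.dml > self.max_dml:
            raise RuntimeError("DML limit exceeded in a single transaction")

# A narrowly scoped tool stays well under budget; an agent that packs
# many operations into one transaction exhausts it quickly.
budget = TransactionBudget(max_queries=5)
for _ in range(5):
    budget.charge_query()   # within the limit
try:
    budget.charge_query()   # the sixth query blows the budget
except RuntimeError as e:
    print(e)                # query limit exceeded in a single transaction
```

The takeaway mirrors the advice above: size each tool so its worst-case consumption leaves headroom, because the agent, not a pausing human, decides how much work lands in one transaction.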
What holds and what shifts
The quality of your foundation now determines what your agents can do and how well they do it. That foundation will be exercised at a scale and pace that human interaction never imposed. A validation rule that occasionally catches a data entry error becomes a systematic control at agent scale. A sharing rule scoped loosely because humans rarely traversed certain data becomes an exposure risk when an agent can reach it dynamically and repeatedly. Business logic that lives in institutional knowledge rather than in your configuration is logic the agent will never have access to.
The foundation does not change. What changes is who is consuming it, at what scale, and with what expectations. The architects who navigate that well are the ones who move their enforcement to the schema, invest in their metadata layer, and test their platform for consumers that don’t follow the paths they designed.
Where to go from here
- Read How to Choose the Right Integration Pattern for Agentforce and the Salesforce Integration Patterns Fundamentals for the complete picture of where MCP, MCP Bridge, and A2A fit.
- Join the conversation in the Salesforce Architect Community Group.









