What is an Agentic Work Unit (AWU)?
An Agentic Work Unit (AWU) is a platform-level metric that captures the breadth of activity happening across the Agentic Enterprise, from Agentforce to Slack. While the industry has traditionally focused on tokens (the raw data processed by a model), tokens only tell you about consumption. AWUs measure the total volume of work our platform performs on behalf of customers.
How do Agentic Work Units work?
An AWU is one discrete task accomplished by an AI agent. It is the moment where raw intelligence is converted into real work. It’s a prompt processed, a reasoning chain completed, or—most importantly—a tool invoked. To provide a complete picture of this value, we are tracking two key figures:
- AWUs: The total volume of work our platform performs on behalf of customers.
- Tokens Processed: Our footprint in the global AI compute economy, grounding our scale in the infrastructure layer.
Why not just measure tokens?
To date, AI success has been measured through the consumption of tokens. But tokens only measure how much an AI talks, not the work it actually completes. That’s why we’re excited to introduce the Agentic Work Unit (AWU).
Does an AWU always equal a fixed amount of tokens?
No. The relationship is elastic. As our platform innovation matures and our customers’ implementations improve, we expect to see a divergence between tokens and AWUs, meaning more work is getting done for less cost.
High-frequency, deterministic tasks (like triggering a Flow or calling an API) become increasingly “token-lean.” Conversely, complex reasoning and autonomous problem-solving may actually see an increase in input tokens, especially as agents perform more sophisticated actions like running evaluations to determine the quality of work, designing and optimizing agents with vibe coding, and leveraging even more context for the best possible responses.
The objective here is a high “inference-to-work” ratio: using input tokens to produce concise, high-value output tokens. This is critical because in the world of LLMs, output tokens can be up to 10x more expensive than the input used to get there. Our goal isn’t simply to use fewer tokens; it’s to make sure that every expensive output token spent is maximizing the work being done.
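The “inference-to-work” idea above can be sketched with some simple arithmetic. This is an illustration only: the per-token prices and token counts below are placeholder values, not real rates, and `cost_per_awu` is a hypothetical helper, not a platform metric. It assumes only the relationship stated above, that output tokens can cost roughly 10x input tokens, so efficiency means spending input context to keep output concise per unit of work completed.

```python
def cost_per_awu(input_tokens: int, output_tokens: int, awus: int,
                 input_price: float = 0.000001,
                 output_price: float = 0.00001) -> float:
    """Blended model cost divided by work completed (AWUs).

    Prices are hypothetical per-token rates chosen so that output
    tokens are 10x the cost of input tokens, per the claim above.
    """
    cost = input_tokens * input_price + output_tokens * output_price
    return cost / awus

# A "token-lean" task: lots of input context, terse output, same work done.
lean = cost_per_awu(input_tokens=50_000, output_tokens=500, awus=100)

# A chattier agent: identical input and work, but 10x the output tokens.
chatty = cost_per_awu(input_tokens=50_000, output_tokens=5_000, awus=100)

print(f"lean: ${lean:.5f}/AWU, chatty: ${chatty:.5f}/AWU")
```

Under these assumed numbers, the lean run costs $0.00055 per AWU versus $0.00100 for the chatty one, even though both complete the same 100 units of work, which is why the goal is maximizing work per expensive output token rather than simply minimizing tokens overall.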
What kind of growth are we seeing with AWUs?
We’re seeing rapid acceleration across our customers. To date, we’ve generated 2.4B AWUs, 771M of them in Q4 alone. Service Agents grew 106% quarter-over-quarter to reach 129M AWUs in Q4, and Employee Agents saw a 62% QoQ increase. In Slack, AI Search grew 116% QoQ, complemented by a 44% increase in File Summaries and the successful debut of our new Slackbot.