We’ve Reached Peak LLM: Here’s What the Next Phase of AI Looks Like

The next AI breakthrough isn't the next frontier model. It's the realization that LLMs, powerful as they are, were never meant to work alone.
The AI industry has spent the past two years in an arms race over model size, focusing on more parameters, longer context windows, and more training data. But while everyone’s been watching the horsepower wars, a more consequential shift has been happening: Large language models (LLMs) are becoming foundational infrastructure, not the primary source of innovation.
The next breakthrough won’t come from a bigger frontier model, but from the systems built around the models we already have. LLMs are car engines, not complete vehicles, and the companies that win with AI will build entire systems around them. That includes memory architectures that enable continuity; reasoning modules that handle complex logic; simulation environments that continuously improve performance; multimodal capabilities that understand text, images, video, and spatial reasoning; and orchestration layers that coordinate it all.
This is a fundamental shift in how AI delivers value. LLMs generate text very well, but lack native long-term memory of past conversations. And because they’re predictive rather than logical, they often struggle to reason through complex, multistep problems reliably.
On their own, they also can’t learn or update their internal knowledge during a chat. But as part of a larger system — what Salesforce refers to as system-level AI — they become transformative.
“LLMs, for the most part, have matured and become commoditized,” said Itai Asseo, senior director of incubation and brand strategy, Salesforce AI Research. “An LLM, on its own, is powerful, but it doesn’t give a company a complete solution.”
This was echoed by Salesforce CEO Marc Benioff in a recent Time magazine article, in which he noted that LLMs are “incredible achievements” but are also “increasingly interchangeable infrastructure.”
Why system-level AI beats LLM innovation
As LLMs become more homogenized and commoditized, available to everyone through APIs, the new frontier of innovation is assembly. Building a transformative AI system means moving past the chatbox and integrating specific capabilities that turn that engine into a far smarter and more valuable business system.
“It’s important for anyone concerned with the business application of AI to recognize that the most meaningful recent breakthroughs aren’t happening at the model layer,” Silvio Savarese, executive vice president and chief scientist, Salesforce AI Research, recently wrote.
So what does system-level AI look like in practice? Here are four key components that transform an LLM from a chatbot to a business system.
1. Long-term memory
One of the problems with standalone LLMs is that they’re stateless by default: Each new conversation starts without any memory of the last. It’s like Groundhog Day for data. System-level AI adds a memory architecture that persists across sessions, creating continuity and allowing the AI to pick up exactly where the last conversation, whether with a human or an AI agent, left off.
What might this architecture look like? Salesforce scientists, who recently wrote on the 360 blog that “without robust memory, an AI agent is like a brilliant consultant with amnesia,” have developed a “block-based extraction method that maintains the accuracy of long context (aka long conversation histories) while dramatically reducing costs.”
The approach works in two phases:
- Parallel extraction: Breaks conversation history into manageable chunks and extracts relevant memories from each in parallel.
- Smart aggregation: Combines those snippets into a briefing for the AI to use in its response.
So, for example, instead of combing through an entire library to deliver an answer, the memory layer summarizes each chapter of every book in parallel and presents a synopsis to the LLM.
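To make the two phases concrete, here’s a minimal sketch of the pattern in Python. This is not Salesforce’s implementation: the chunk size, the `important` flag standing in for an LLM-based extractor, and the briefing format are all illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 20  # messages per block; an illustrative assumption


def chunk_history(messages, size=CHUNK_SIZE):
    """Split a long conversation history into manageable blocks."""
    return [messages[i:i + size] for i in range(0, len(messages), size)]


def extract_memories(block):
    """Phase 1: pull durable facts from one block.

    A real system would call an LLM here; this stand-in just keeps
    messages flagged as important.
    """
    return [m["text"] for m in block if m.get("important")]


def build_briefing(history):
    """Phase 2: extract from all blocks in parallel, then aggregate
    the snippets into a single briefing for the model's context."""
    blocks = chunk_history(history)
    with ThreadPoolExecutor() as pool:
        extracted = pool.map(extract_memories, blocks)
    memories = [fact for block_facts in extracted for fact in block_facts]
    return "Known about this customer:\n- " + "\n- ".join(memories)


history = [
    {"text": "My order #1234 never arrived.", "important": True},
    {"text": "Thanks, checking on that now.", "important": False},
    {"text": "I prefer email over phone.", "important": True},
]
print(build_briefing(history))
```

The key design choice is that each block is processed independently, so extraction parallelizes cheaply, and only the compact briefing, not the full history, has to fit in the model’s context window.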
2. Reasoning and planning
A reasoning engine is the executive function of an AI system. While standard LLMs predict the next likely word, reasoning-enhanced systems pause to plan before responding: they digest information, apply business logic, and map out a multistep approach before taking action, just as any businessperson would. This capability can be built into the LLM itself or operate as a separate orchestration layer like Salesforce’s Atlas Reasoning Engine.
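As a rough illustration of that plan-then-act pattern (not the Atlas Reasoning Engine itself), a system can ask the model for a plan first and then execute the steps one at a time. The `call_llm` function below is a canned stand-in for any real model API:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a model call; returns fixed text so the sketch runs."""
    if prompt.startswith("Break this goal"):
        return "1. Pull the account's open cases\n2. Summarize the trends\n3. Draft a renewal offer"
    return f"[model output for: {prompt.splitlines()[-1]}]"


def plan(goal: str) -> list[str]:
    """Ask the model to decompose the goal into steps before acting."""
    steps = call_llm(f"Break this goal into numbered steps: {goal}")
    return [s for s in steps.splitlines() if s.strip()]


def execute(goal: str) -> list[str]:
    """Work through the plan one step at a time, feeding progress back
    in, instead of answering in a single shot."""
    results: list[str] = []
    for step in plan(goal):
        results.append(call_llm(f"Goal: {goal}\nSteps done: {len(results)}\nDo: {step}"))
    return results


for output in execute("Prepare a renewal pitch for account #42"):
    print(output)
```

Separating the planning call from the execution calls is what gives the system a place to check, revise, or reject a plan before anything irreversible happens.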
3. Action and orchestration
This is the layer where AI moves from talking to doing. Through APIs and orchestration, the system interacts with your enterprise software, bridging organizational boundaries. For example, one agent could check inventory while another updates a customer record or processes a refund.
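Here’s a minimal sketch of that hand-off, with hypothetical functions standing in for the real inventory, CRM, and payment APIs:

```python
# Illustrative orchestration: route one customer issue across several
# single-purpose agents. Each function is a hypothetical stand-in for
# a real enterprise API call.

def check_inventory(sku: str) -> int:
    return 3  # pretend the warehouse API reports 3 units in stock


def update_customer_record(customer_id: str, note: str) -> None:
    print(f"CRM note for {customer_id}: {note}")


def process_refund(order_id: str) -> str:
    return f"refund issued for {order_id}"


def handle_missing_item(customer_id: str, order_id: str, sku: str) -> None:
    """Coordinate the agents: check stock, then either reship or refund,
    and record the outcome in the customer record either way."""
    if check_inventory(sku) > 0:
        update_customer_record(customer_id, f"Replacement shipped for {order_id}")
    else:
        update_customer_record(customer_id, process_refund(order_id))


handle_missing_item("cust-42", "ord-1234", "sku-9")
```

The orchestration function, not any individual agent, owns the business logic of which action happens when, which is exactly why this layer matters more than any single model.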
“The algorithms gave us the basic concepts to be able to do this, but now we’re going to see more purpose-driven models that are not just language models: pure reasoning models, pure action models, or pure memory models,” said William Dressler, senior director, delivery leader at Salesforce.
The orchestration layer that ties all of those together, he said, will become more important than any single model.
To that point, Savarese, in his article, described a semantic layer: a protocol that lets AI agents from different organizations communicate with each other, interpreting intent, verifying, and negotiating terms without human intervention.
“Consider purchasing a car,” he wrote. “Your personal AI agent doesn’t just negotiate with the dealership’s agent, it simultaneously coordinates with insurance providers, lenders, and service providers, each represented by their own AI agents.”
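What might one message on such a semantic layer look like? A hypothetical sketch follows: every field name here is invented for illustration and is not drawn from any published agent protocol.

```python
import json

# A hypothetical structured intent passed between agents. Exchanging
# structured intents rather than free text is what lets the receiving
# agent validate and negotiate terms without a human relaying messages.
offer = {
    "intent": "purchase.negotiate",
    "from_agent": "buyer.personal-assistant",
    "to_agent": "dealer.sales-agent",
    "terms": {"vehicle": "2024 sedan", "max_price_usd": 31000},
    "verify": ["buyer.identity", "financing.preapproval"],
}

print(json.dumps(offer, indent=2))
```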
4. World models
LLMs are trained on text and, more recently, images and video. But we live in a three-dimensional environment, and looking at a video is not the same as understanding the real world within it.
World models will enable spatial intelligence: AI’s ability to perceive, reason about, understand, and interact with the physical world. For example, on a factory floor, a world model could predict that a robotic arm was about to collide with a human and change the arm’s trajectory before impact.
In a Substack essay, AI pioneer and World Labs co-founder Fei-Fei Li calls this AI’s next frontier.
“Our view of the world is holistic — not just what we’re looking at, but how everything relates spatially, what it means, and why it matters,” she wrote. “Understanding this through imagination, reasoning, creation, and interaction — not just descriptions — is the power of spatial intelligence. Without it, AI is disconnected from the physical reality it seeks to understand.”
What does this mean for businesses? World models allow AI to simulate physical outcomes before they happen. Instead of predicting what would normally happen based on past patterns, they can model what would happen under certain conditions, whether that’s supply chain disruptions, manufacturing line changes, or autonomous robotics navigating real environments. This capability is still nascent, but it represents the shift from AI that produces words to AI that understands the physical world.
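As a toy illustration of “modeling what would happen under certain conditions,” consider a simple supply chain rollout. The demand and lead-time numbers below are invented, and a real world model would learn these dynamics rather than hard-code them:

```python
def simulate_stock(start_stock: int, daily_demand: int,
                   resupply_day: int, resupply_qty: int, days: int = 14):
    """Roll the state forward day by day under a given scenario and
    report the first day stock runs out, if any."""
    stock = start_stock
    for day in range(1, days + 1):
        if day == resupply_day:
            stock += resupply_qty
        stock -= daily_demand
        if stock < 0:
            return f"stockout on day {day}"
    return f"{stock} units left after {days} days"


# Compare a normal resupply schedule against a port-delay scenario.
print("baseline:  ", simulate_stock(100, 12, resupply_day=5, resupply_qty=80))
print("port delay:", simulate_stock(100, 12, resupply_day=10, resupply_qty=80))
```

Running both scenarios shows the point: the baseline ends the two weeks with stock on hand, while the delayed shipment produces a stockout on day 9, a consequence the system can surface before anyone commits to the change.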
What does this mean for your AI strategy?
If you’ve invested in LLMs over the past two years — worldwide spending is already in the hundreds of billions — you haven’t wasted your capital. You’ve simply laid the foundation. Just as the internet moved from the plumbing of routers to the usability of apps, AI is rapidly moving up the tech stack. The industry is shifting its focus from the foundational models you can’t see to the agentic systems and apps you use every day.
System-level AI doesn’t replace the LLM; it completes it. Your LLM is a car engine: powerful, but useless without a chassis, wheels, and a driver. Memory, reasoning, and orchestration are what turn that raw engine into a vehicle that can navigate complex business goals. In this next phase, the competitive advantage will be in the domain expertise and proprietary context you use to steer it.
Start with the business problem to solve
Here’s how to think about system-level AI. Don’t ask, “Should we add memory to our model?” Ask if your use cases require continuity across sessions. Let the business problem drive which system components you need.
For example, do your customer service teams need past interaction context? If so, look into options to extend your AI system’s capabilities, such as memory. If your teams need more in-depth analysis or troubleshooting, you probably need reasoning capabilities. And if you need agents across different departments to coordinate and act on one issue, you may need orchestration or planning components.
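One back-of-the-napkin way to encode that thinking, using the needs just described (the mapping is illustrative, not a formal framework):

```python
# Map stated business needs to the system components discussed above.
NEED_TO_COMPONENT = {
    "context across sessions": "memory",
    "in-depth analysis or troubleshooting": "reasoning",
    "cross-department coordination": "orchestration and planning",
}


def components_for(needs):
    """Let the stated business needs, not the technology, pick the parts."""
    return [NEED_TO_COMPONENT[need] for need in needs if need in NEED_TO_COMPONENT]


# A service team that needs past-interaction context plus troubleshooting:
print(components_for(["context across sessions",
                      "in-depth analysis or troubleshooting"]))
# -> ['memory', 'reasoning']
```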
Not every AI application needs every component. Focus on the components that address your biggest limitations.
“Going forward, it’s all about focusing less on the technology itself and more on the business problems to solve,” said Asseo.
A case in point: Salesforce is using AI agents to capture lost revenue, automating the entire cold-call-to-meeting process for prospective customers who were previously slipping through the cracks.
How to prepare for the move to system-level AI
Understanding these components is one thing. Deploying them effectively is another. Your competitive advantage will come from knowing which components to combine, when to deploy each capability, and how to orchestrate them for your business problems.
But be prepared: Coordinating memory systems, reasoning engines, and API calls means activating new systems, not just prompting chatbots. This requires infrastructure-level thinking, clear ownership across teams, and close monitoring as components interact.
The human element
The tech infrastructure is only half the story. The biggest opportunity is the mental shift you need to work alongside system-level AI.
“The more I work with this technology, the less I focus on what the technology is doing and more on how individuals will adapt to this radical transformation,” said Dressler.
This means treating AI not as a chatbot you prompt, but as a team member that can reason and execute complex tasks. The companies that figure out this organizational and cultural shift will be the ones that realize value from system-level AI.
In his Time essay, Benioff wrote that as the gap between AI innovation and adoption begins to close, “the task before us is not to predict which LLM will win in the marketplace, but to build systems that empower AI for the benefit of humanity. The choices we make now — about architecture, governance, and partnership between people and machines — will determine whether we turn this moment of possibility into lasting progress that strengthens institutions, expands opportunity, and unlocks human potential.”
System-level AI beats models
The shift to system-level AI is already underway, and as LLMs commoditize, your competitive advantage will move from models to complete systems.
Your LLM investments are foundational for system-level AI. The skills you’ve developed around prompt engineering, workflow design, and understanding model limitations transfer directly to it. You’re building on what works, but you do need to think differently, and bigger.
Start asking which combination of AI capabilities you need to solve your business problems. That shift is where your edge lies. The models have matured. Now it’s time to activate the systems that make them transformative.