Fueling the Next Frontier: How Salesforce Research Empowers Academic Innovation

In the rapidly accelerating world of Artificial Intelligence, the bridge between academic theory and industry application is more vital than ever. At Salesforce AI Research, we believe that the most transformative breakthroughs happen when we collaborate with the brightest minds in the academic community.

Our academic grant program is designed not just to fund research, but to foster deep, meaningful partnerships that define the future of Trusted AI, agentic systems, and deep learning.

A Partnership for Impact

While many industry players offer grant programs, Salesforce AI Research takes a unique approach centered on agility and collaboration. We are not looking to simply distribute resources; we are looking to co-create the future.

Our program identifies and supports university faculty and researchers tackling the hardest problems in computer science, from simulation environments and ambient intelligence to agent-to-agent communication and AI for social good.

By partnering with Salesforce, grant recipients gain more than financial support; they gain access to a world-class research team and critical insights into the actual challenges customers face. This exposure helps focus their work on solving real-world problems, ensuring that theoretical advancements have a clear pathway to influencing how businesses and society interact with technology.

Celebrating Our 2025 Academic Partners

Nanyun (Violet) Peng, University of California, Los Angeles

Project: Multi-Agent Persuasion: An Agent Simulation Engine for Long-Term Social Influence

Professor Peng’s research addresses a critical gap in how we understand AI influence, moving beyond simple one-on-one interactions to study persuasion as a complex, networked phenomenon. Her team is developing the “Multi-Agent Persuasion Simulation Engine” (MAP-SE), a novel framework that unifies memory, social adaptation, and network dynamics. Unlike current systems that treat persuasion as a short-term exchange, this project simulates how influence emerges and propagates across adaptive agents with long-term memory and evolving social roles.

This work sits at the vital intersection of AI alignment, computational social science, and policy simulation. By introducing dual-role agents capable of both persuading and being persuaded, the research will reveal how information—and misinformation—stabilizes or polarizes within a society. The ultimate goal is to provide the field with a foundational tool for building safe, culturally aware AI systems that can navigate the ethical complexities of real-world communication.
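To make the dual-role idea concrete, here is a toy sketch of agents that both persuade and are persuaded, where a simple long-term memory damps how far any one message can move an agent's stance. The class names, update rule, and three-agent graph are illustrative assumptions, not MAP-SE's actual design:

    class Agent:
        """A dual-role agent: it can persuade and be persuaded."""

        def __init__(self, name: str, stance: float):
            self.name = name
            self.stance = stance           # belief in [0, 1]
            self.memory: list[float] = []  # long-term record of messages heard

        def persuade(self) -> float:
            """Emit a message reflecting the agent's current stance."""
            return self.stance

        def receive(self, message: float, weight: float = 0.1) -> None:
            """Shift stance toward the message; memory damps sudden swings."""
            self.memory.append(message)
            anchor = sum(self.memory) / len(self.memory)  # everything heard so far
            self.stance += weight * (0.5 * message + 0.5 * anchor - self.stance)

    def step(network: dict[Agent, list[Agent]]) -> None:
        """One round of influence: every agent persuades its neighbors."""
        for agent, neighbors in network.items():
            message = agent.persuade()
            for neighbor in neighbors:
                neighbor.receive(message)

    # Three agents on a line graph; stances drift toward one another over rounds.
    a, b, c = Agent("a", 0.9), Agent("b", 0.5), Agent("c", 0.1)
    network = {a: [b], b: [a, c], c: [b]}
    for _ in range(20):
        step(network)
    print({agent.name: round(agent.stance, 2) for agent in (a, b, c)})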

Victor Zhong, University of Waterloo

Project: High-Fidelity, Distributional Evaluation of AI Systems Using Data-Grounded User Profiles

As Large Language Models increasingly power critical applications, the disparity between controlled benchmarks and real-world performance has become a significant challenge. Professor Zhong is tackling this by developing a rigorous framework for “distributional evaluation,” which moves away from static, synthetic user personas to models grounded in large-scale empirical data. His approach extracts natural language profiles from public user-generated content to create “Language Personas” that accurately reflect the diversity and “long tail” of actual user behaviors.

The significance of this research lies in its potential to make AI evaluation more scalable, realistic, and statistically valid. By validating that these simulated personas reliably predict human-system interactions, the project aims to deliver an open-source framework that allows developers to test for safety and alignment across a vast spectrum of user types. This contribution will be instrumental for the industry, enabling more comprehensive pre-deployment testing that identifies bias and failure modes before they affect real users.
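As a rough illustration of what distributional evaluation buys, the sketch below samples simulated user profiles according to their empirical frequency and reports failures per persona rather than as a single average, so rare but risky user types in the long tail stay visible. The persona list, weights, and simulate_interaction stub are hypothetical stand-ins for the data-grounded profiles the project will derive:

    import random
    from collections import Counter

    # Empirical persona distribution: (profile, observed frequency).
    # In the project these would be extracted from public user-generated content.
    personas = [
        ("terse power user who writes in fragments", 0.55),
        ("verbose novice who over-explains the task", 0.30),
        ("non-native speaker with unusual phrasing", 0.10),
        ("adversarial user probing for unsafe output", 0.05),  # the long tail
    ]

    def sample_persona() -> str:
        """Draw a persona with probability proportional to its frequency."""
        profiles, weights = zip(*personas)
        return random.choices(profiles, weights=weights, k=1)[0]

    def simulate_interaction(persona: str) -> bool:
        """Stand-in for persona-conditioned prompting of the system under test.
        Returns True when the interaction passes the safety/quality check."""
        return "adversarial" not in persona or random.random() > 0.3

    # Evaluate across the distribution, then report failures per persona so
    # rare-but-risky user types are counted instead of averaged away.
    passes: Counter[str] = Counter()
    trials: Counter[str] = Counter()
    for _ in range(10_000):
        p = sample_persona()
        trials[p] += 1
        passes[p] += simulate_interaction(p)

    for p in trials:
        print(f"{trials[p] - passes[p]:>4} failures / {trials[p]:>5} trials :: {p}")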

Percy Liang, Stanford University

Project: Fully Open-Source Models for Agentic Tasks

While open-weight models have advanced rapidly, true transparency in AI development—spanning code, data, and training methodologies—remains rare. Professor Liang’s project aims to bridge this divide by training a powerful, fully open-source model specifically designed for agentic tasks like coding, reasoning, and tool use. Leveraging “Marin,” an open development framework, his team is building high-performing 8B and 32B base models from scratch, ensuring that every step of the process, from data curation to reinforcement learning, is documented and reproducible.

This initiative is crucial for the scientific community’s ability to understand generalization and prevent test-set contamination. By focusing on post-training for complex capabilities such as long-context understanding and function calling, the project will produce a robust agentic model by Fall 2026. More importantly, it establishes a new standard for openness, providing researchers with the tools and datasets necessary to rigorously study and advance the state of agentic AI.
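One concrete payoff of fully open training data is that anyone can audit it for test-set contamination. The sketch below shows a simple n-gram overlap check of the kind such audits rely on; the 13-gram window and the placeholder corpora are illustrative assumptions, not part of the Marin pipeline:

    def ngrams(text: str, n: int = 13) -> set[tuple[str, ...]]:
        """Lowercased word n-grams, a common unit for contamination checks."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def contaminated(train_doc: str, test_item: str, n: int = 13) -> bool:
        """Flag a training document that shares any n-gram with a test item."""
        return bool(ngrams(train_doc, n) & ngrams(test_item, n))

    # With closed corpora this audit is impossible; with open data anyone can run it.
    train_corpus = ["a training document ..."]      # stand-in for the open corpus
    benchmark = ["a held-out evaluation item ..."]  # stand-in for a test set
    flagged = [
        (i, j)
        for i, doc in enumerate(train_corpus)
        for j, item in enumerate(benchmark)
        if contaminated(doc, item)
    ]
    print(f"{len(flagged)} train/test pairs share a 13-gram")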

Driving the Future of AI

Our grant program is a testament to our belief that innovation is a team sport. By supporting the academic community, we are investing in a future where AI is not only more powerful but also more trusted, equitable, and sustainable.

Congratulations to our 2025 grant recipients. We look forward to partnering with these visionaries as they push the boundaries of agentic systems and reinforce our commitment to empowering the next generation of academic innovation.

Discover More AI Innovation

Visit our research site to learn more about our current projects, meet our team, and explore future collaboration opportunities.
