As the development and deployment of large language models (LLMs) accelerate, evaluating model outputs has become increasingly important. The established approach typically involves recruiting and training human evaluators, having them…
Co-authored by Hannah Cha, Orlando Lugo, and Sarah Tan. At Salesforce, our Responsible AI & Technology team employs red teaming practices to improve the safety of our AI products by testing for malicious…
Retrieval Augmented Generation (RAG) has not only become one of the most heavily invested areas of research in generative AI but has also gained considerable popularity and commercial traction. RAG is typically applied…
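For readers new to the pattern, here is a minimal sketch of the RAG loop: retrieve the documents most relevant to a query, then condition the model's answer on them. The corpus, helper names, and TF-IDF retriever below are illustrative stand-ins, not code from the post.

```python
# A minimal, illustrative RAG loop: retrieve relevant documents, then
# build a prompt that conditions generation on the retrieved context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Pulse surfaces automated insight summaries for Tableau Cloud users.",
    "xLAM is a family of large action models for function calling.",
    "LlamaRank is a reranker for enterprise retrieval pipelines.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by TF-IDF cosine similarity and return the top k."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(docs)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = sorted(zip(scores, docs), reverse=True)
    return [doc for _, doc in ranked[:k]]

def build_prompt(query: str, context: list[str]) -> str:
    """Prepend retrieved context to the user question for the LLM."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using the context below.\n\n"
        f"Context:\n{context_block}\n\nQuestion: {query}"
    )

query = "What is LlamaRank?"
prompt = build_prompt(query, retrieve(query, documents))
print(prompt)  # This prompt would then be sent to an LLM of your choice.
```

In production, the TF-IDF retriever would typically be replaced by a dense embedding index, but the overall retrieve-then-generate shape stays the same.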
TL;DR: Salesforce AI Research and Tableau AI collaborated to build the Pulse insight summary feature, GA for all Tableau Cloud customers starting in early 2024. The feature combines the power of generative AI…
We've introduced xLAM, our family of in-house Large Action Models built for function calling, reasoning, and planning. These models streamline and simplify the integration of AI into your workflows, reducing the complexity often associated with LLMs.
Huan Wang, Shelby Heinecke, Juan Carlos Niebles, Caiming Xiong. TL;DR: We release xLAM, a series of LLMs optimized for function calling and AI Agents. It offers several variants designed to serve different application…
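As a rough illustration of what function calling looks like on the application side, the snippet below parses a JSON tool call and dispatches it to a local function. The tool name, JSON shape, and model output here are assumptions for the sketch, not xLAM's documented interface.

```python
# Illustrative only: dispatching a function call emitted by an action model.
# The JSON format and tool names are assumptions, not the model's actual output spec.
import json

def get_weather(city: str) -> str:
    """A toy tool the model is allowed to call."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Pretend this string came back from the model in response to
# "What's the weather in Palo Alto?"
model_output = '{"name": "get_weather", "arguments": {"city": "Palo Alto"}}'

call = json.loads(model_output)
result = TOOLS[call["name"]](**call["arguments"])
print(result)  # -> "Sunny in Palo Alto"
```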
As part of our commitment to innovation in enterprise RAG and trusted AI, we’re excited to release SFR LlamaRank, a state-of-the-art reranker from Salesforce AI Research. LlamaRank is a language model specialized for…
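To show where a reranker sits in a retrieval pipeline, here is a small two-stage sketch: a first-stage retriever produces candidates, and a cross-encoder scores each (query, document) pair to reorder them. Because the post does not show LlamaRank's API, an open cross-encoder model stands in for it, and the candidate documents are invented for illustration.

```python
# Sketch of a two-stage retrieval pipeline with a reranking step.
# An open cross-encoder stands in for LlamaRank; candidates are made up.
from sentence_transformers import CrossEncoder

query = "How do I connect Tableau Cloud to Snowflake?"
candidates = [
    "Tableau Cloud supports live connections to Snowflake warehouses.",
    "Pulse delivers automated insight summaries to business users.",
    "Snowflake credentials can be stored in Tableau's connection dialog.",
]

# Score each (query, document) pair, then keep the highest-scoring documents.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, doc) for doc in candidates])
reranked = [doc for _, doc in sorted(zip(scores, candidates), reverse=True)]
print(reranked[0])
```

The reranker only sees a short list of candidates, so a slower but more accurate model can be used at this stage without hurting end-to-end latency much.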
Simply put, AI Assistants are built to be personalized, while AI Agents are built to be shared (and scaled). Both approaches promise extraordinary opportunities across the enterprise.
We are excited to open-source 🍃MINT-1T, the first trillion-token multimodal interleaved dataset and a valuable resource for the community to study and build large multimodal models.