From new open source models to evaluation frameworks, our AI Research team has been moving the needle in AI. Take a look at some of our 2024 highlights.
Salesforce's trusted AI architecture for red teaming leverages automation to scale ethical AI testing, utilizing a tool called fuzzai to simulate diverse adversarial scenarios and enhance model robustness. By automating adversarial prompt generation and response validation, fuzzai helps secure AI interactions while reducing human exposure to harmful content.
Now generally available, Agentforce for Developers represents a significant step in Salesforce's mission to drive innovation and deliver intelligent development tools. Let’s explore how Agentforce, powered by Salesforce AI Research’s large language models, is transforming the way you code.
Time series forecasting is becoming increasingly important across various domains, thus having high-quality, diverse benchmarks are crucial for fair evaluation across model families.
The SFR-Embedding-Mistral marks a significant advancement in text-embedding models, building upon the solid foundations of E5-mistral-7b-instruct and Mistral-7B-v0.1.
As the development and deployment of large language models (LLMs) accelerates, evaluating model outputs has become increasingly important. The established method of evaluating responses typically involves recruiting and training human evaluators, having them…
Co-authored by Hannah Cha, Orlando Lugo, and Sarah Tan At Salesforce, our Responsible AI & Technology team employs red teaming practices to improve the safety of our AI products by testing for malicious…
Retrieval Augmented Generation (RAG) has not only gained steam as one of the most invested areas of research in generative AI but also gathered considerable popularity and commercialization opportunities. RAG is typically applied…