AI Agent Security

Your Guide to AI Agent Security

Learn how to ensure your agents are not only effective, but also trustworthy and compliant.

State of IT Security
Learn how 2,000+ security, privacy, and compliance leaders are navigating the AI era in the 4th Edition State of IT report.
Stay up to date on all things security and privacy.

Sign up for our monthly newsletter to get the latest research, industry insights, and product news delivered straight to your inbox.

6 Security Steps to Prepare for Agentforce
Empower your Agentforce with a foundation of security and trust.

AI Agent Security FAQs

How can I monitor AI agents for unusual activity?

You can track unusual activity by monitoring the inputs an agent receives, the actions it takes, and the systems it interacts with. Watch for abrupt changes in its decision patterns, as well as unexpected requests or actions that fall outside the agent's intended scope. Pairing this monitoring with clear alerting rules helps ensure you can respond quickly if an issue occurs.
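For example, a simple guardrail might compare each requested action against the agent's intended scope and raise an alert on anything unexpected. The sketch below is illustrative only; the action names, allow-list, and rate threshold are hypothetical assumptions, not part of any specific product API.

# Illustrative sketch only: a minimal out-of-scope action check for an agent.
# The allow-list, threshold, and action names below are hypothetical examples.

ALLOWED_ACTIONS = {"lookup_order", "summarize_case", "draft_reply"}
MAX_ACTIONS_PER_MINUTE = 30

def check_agent_action(action_name: str, recent_action_count: int) -> list[str]:
    """Return any alert messages raised by this action."""
    alerts = []
    if action_name not in ALLOWED_ACTIONS:
        alerts.append(f"Out-of-scope action requested: {action_name}")
    if recent_action_count > MAX_ACTIONS_PER_MINUTE:
        alerts.append("Abrupt spike in agent activity (possible misuse)")
    return alerts

# Example: flag a request the agent was never intended to make.
for alert in check_agent_action("export_all_contacts", recent_action_count=5):
    print("ALERT:", alert)

In practice, alerts like these would feed into whatever monitoring and incident-response tooling your team already uses.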

How do I control who can create, test, and deploy agents?

Start by assigning clear, role-based permissions to anyone who creates, tests, or deploys agents. Keep detailed audit logs that show who made each change and when.
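As a rough illustration of what this can look like in practice, the sketch below checks a hypothetical role map before allowing an action and records an audit entry of who did what and when. The role names and log format are assumed examples, not a prescribed schema.

# Illustrative sketch only: role-based checks plus an audit entry for agent changes.
import datetime
import json

ROLE_PERMISSIONS = {
    "agent_admin":   {"create", "test", "deploy"},
    "agent_builder": {"create", "test"},
    "reviewer":      {"test"},
}

def authorize_and_log(user: str, role: str, action: str, agent_name: str) -> bool:
    """Allow the action only if the role permits it, and record who did what and when."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "action": action,
        "agent": agent_name,
        "allowed": allowed,
    }
    print(json.dumps(entry))  # in practice, write to an append-only audit store
    return allowed

# Example: a builder attempting to deploy is denied, and the attempt is still logged.
authorize_and_log("dana@example.com", "agent_builder", "deploy", "order-support-agent")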

Does the Agentforce Trust Layer help protect sensitive data?

Yes, the Agentforce Trust Layer helps keep sensitive information protected by masking private data in prompts, preventing external models from retaining inputs, and applying policy-based controls. Together, these security features make it possible to maintain strong data protection while still giving agents the freedom to accomplish their tasks.
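To show the general idea behind prompt masking (this is a simplified stand-in, not the Agentforce Trust Layer itself), the sketch below replaces obvious personal data in a prompt with placeholder tokens before it would be sent to an external model. The patterns and placeholder names are assumptions for illustration.

# Illustrative sketch only: masking obvious PII in a prompt before it leaves
# your environment. Patterns and placeholder tokens are simplified examples.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def mask_prompt(prompt: str) -> str:
    """Replace detected personal data with placeholder tokens."""
    masked = prompt
    for label, pattern in PII_PATTERNS.items():
        masked = pattern.sub(f"[{label}_MASKED]", masked)
    return masked

print(mask_prompt("Follow up with jane.doe@example.com at 555-123-4567 about her case."))
# -> Follow up with [EMAIL_MASKED] at [PHONE_MASKED] about her case.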

How does Salesforce help defend against AI agent security risks?

Salesforce provides safeguards designed to help defend against both common and emerging security issues, such as unauthorized access, harmful inputs, and misuse. Built-in safeguards, including enhanced auditing and data protection, reduce your exposure to the risks that can affect AI agents.