
Are You Ready For An AI Audit?

An AI audit is a check-up on your AI systems to ensure they’re operating ethically, transparently, and within regulatory guidelines. [Studio Science]

Regulators, academics, and politicians are calling for independent AI audits as a way to protect the public from AI’s potential harms. Here’s how to prepare.

Is your business ready for an audit? No, not the kind involving accountants and IRS agents. We’re talking about an AI audit, a kind of check-up on your AI systems to ensure they’re operating ethically, transparently, and within regulatory guidelines, with an eye toward understanding how they make decisions. 

As generative artificial intelligence (AI) has proliferated, ethics and transparency have quickly emerged as key considerations among regulators, consumers, academics, and even private companies with skin in the game. Salesforce, for example, supports a strong but nuanced approach to AI regulation. Both the White House and members of Congress have called for independent AI audits as a way to protect the public from AI’s potential harms. But no national standard or baseline for conducting them exists yet.

In the absence of national regulation, individual states have stepped up. According to the National Conference of State Legislatures, 40 states have proposed or enacted dozens of bills focused on regulating the design, development, and use of AI. The New York State Legislature has proposed two bills that would force employers to conduct bias audits if they use AI for hiring, giving applicants the right to sue not only the employer but also the tech companies that create the AI products.

At the same time, the field of professional AI auditing has emerged. In late 2023, a new group, the International Association of Algorithmic Auditors (IAAA), was formed to create a code of conduct for AI auditors, along with training curricula and a certification program.

According to the Federation of American Scientists, “an algorithmic audit examines automated decision-making systems to ensure they’re fair, transparent, and accountable.” The audit looks at data inputs, the decision-making process, model training, and the outcomes, to identify biases or errors. These audits can be done by an independent third party or a dedicated internal team. 

By scrutinizing the inner workings of AI systems, organizations can proactively identify and address potential vulnerabilities. 

This scrutiny “will become pretty dominant within the next year,” said William Dressler, regional vice president of AI and data architecture, and the head of innovation in the global AI practice at Salesforce. “Having those [safety] mechanisms in place now is going to be a no-brainer, just like we all have antivirus software on our computers.” 

Now that you understand the landscape around AI audits, you’re probably wondering what safeguards you can put in place to ensure your AI systems are safe, ethical, transparent, and trustworthy. Here’s a run-through of the steps you may want to consider, plus some concepts and terminology you’ll need to understand to do this right.

Key terminology on AI safety

There is no magic wand to make your AI safe and trustworthy. So the best course of action is to implement technology safeguards like those used in the Einstein 1 platform, designed specifically to protect your data and your organization from potential AI harm. These include:

  • Data masking, which replaces sensitive data with anonymized data. This ensures you’ve protected all personally identifiable information, like names, phone numbers, and addresses, when writing AI prompts (a minimal sketch of this idea follows this list).
  • Toxicity detection, which flags toxic content like hate speech. It does this by using a machine learning model to scan and score the answers an LLM provides, adding an additional layer of protection against harmful or inappropriate content. 
  • Zero retention means that no proprietary data is stored within an app or its supporting platforms. When this is enabled in the context of generative AI, content within prompts and outputs is never stored in the LLM and is never learned by the LLM. It simply disappears.
  • Dynamic grounding understands the context surrounding data within a prompt to ensure the most up-to-date and factual information is used for an output, helping to mitigate or eliminate AI hallucinations.
  • Secure data retrieval allows users to securely access the data used to ground generative AI prompts in the context of your business, while maintaining permissions and data access controls.
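
To make these safeguards concrete, here is a minimal sketch of the data masking idea in Python: detect a few kinds of personally identifiable information in a prompt and swap them for placeholder tokens before the prompt ever leaves your systems. The patterns, the mask_pii function, and the placeholder labels are illustrative assumptions, not how any particular platform implements masking.

    import re

    # Illustrative PII patterns; a production system would use far more robust detection
    # (named-entity recognition for names and addresses, locale-aware phone formats, etc.).
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
        "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def mask_pii(prompt: str) -> str:
        """Replace detected PII with anonymized placeholder tokens."""
        masked = prompt
        for label, pattern in PII_PATTERNS.items():
            masked = pattern.sub(f"[{label}_MASKED]", masked)
        return masked

    print(mask_pii("Email jane.doe@example.com or call 415-555-0199 about the renewal."))
    # -> Email [EMAIL_MASKED] or call [PHONE_MASKED] about the renewal.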

Next, let’s dive deeper into AI audits.  

What are the elements of an AI audit?

An AI audit involves stakeholders across the organization, including senior leaders, legal, developers, security, compliance, and AI practitioners. 

Some of the key aspects of an AI audit include: 

Security and privacy: This examines the security measures a company uses to protect its AI systems from outside threats, including whether its data is managed in a way that protects privacy.

Ethical considerations: This analyzes AI systems to identify and mitigate biases that may result in unfair or discriminatory outcomes, including assessing the impact of AI systems on different demographic groups and society at large. 
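
As a concrete illustration, one common first check in a bias audit of a hiring model is to compare the model’s selection rates across demographic groups and compute an impact ratio. The sample decision log below is made up, and the 0.8 threshold reflects the widely cited “four-fifths rule”; a real audit would go much further than this sketch.

    from collections import defaultdict

    # Hypothetical audit log: (demographic_group, model_recommended_hire) pairs.
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    counts = defaultdict(lambda: {"selected": 0, "total": 0})
    for group, selected in decisions:
        counts[group]["total"] += 1
        counts[group]["selected"] += int(selected)

    rates = {group: c["selected"] / c["total"] for group, c in counts.items()}
    impact_ratio = min(rates.values()) / max(rates.values())

    print(rates)                                # {'group_a': 0.75, 'group_b': 0.25}
    print(f"Impact ratio: {impact_ratio:.2f}")  # 0.33 -- below 0.8, so this result warrants closer review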

Transparency and explainability: This assesses AI systems to understand how transparent they are in their decision-making processes and inner workings. It may also include an explainability assessment, algorithm analysis, data transparency review, and an examination of the AI model. This all helps explain why AI does what it does.  
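
One simple, model-agnostic way to probe explainability is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. Everything below (the toy model, data, and feature layout) is a hypothetical stand-in that shows the mechanic, not an assessment of any real system.

    import numpy as np

    def permutation_importance(model, X, y, n_repeats=5, seed=0):
        """Average drop in accuracy when each feature column is shuffled.
        A larger drop means the model leans on that feature more heavily."""
        rng = np.random.default_rng(seed)
        baseline = (model(X) == y).mean()
        importances = []
        for col in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                X_shuffled = X.copy()
                X_shuffled[:, col] = X[rng.permutation(len(X)), col]
                drops.append(baseline - (model(X_shuffled) == y).mean())
            importances.append(float(np.mean(drops)))
        return importances

    # Toy stand-in for a trained model: predicts 1 whenever feature 0 exceeds 0.5.
    toy_model = lambda X: (X[:, 0] > 0.5).astype(int)
    X = np.random.default_rng(1).random((200, 3))
    y = toy_model(X)

    print(permutation_importance(toy_model, X, y))
    # Feature 0 shows a large drop; features 1 and 2 stay near zero, i.e., the model ignores them.

Features whose shuffling barely changes accuracy contribute little to the model’s decisions; features with large drops are where an auditor would look first for proxies for sensitive attributes.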

Accuracy: An accuracy assessment evaluates the performance, reliability, and consistency of the AI model’s predictions or decisions. This may include error and accuracy analysis, and validation, where you’d compare the AI system’s outputs against what you know is true. 
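
In practice, that validation step can be as simple as scoring the model’s outputs against a labeled holdout set you trust. The function and sample labels below are a minimal sketch, assuming a classification-style system with known ground truth.

    def validation_report(predictions, ground_truth):
        """Compare model outputs against known-true labels and report simple error metrics."""
        assert len(predictions) == len(ground_truth), "every prediction needs a ground-truth label"
        pairs = list(zip(predictions, ground_truth))
        errors = [(i, p, t) for i, (p, t) in enumerate(pairs) if p != t]
        accuracy = 1 - len(errors) / len(pairs)
        return {"accuracy": accuracy, "error_count": len(errors), "errors": errors}

    report = validation_report(["approve", "deny", "approve"], ["approve", "approve", "approve"])
    print(report)  # {'accuracy': 0.666..., 'error_count': 1, 'errors': [(1, 'deny', 'approve')]}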

Compliance: This determines whether you’re following legal, industry, and internal guidelines and regulations. 

Audit AI for trusted AI

As Paula Goldman, chief ethical and humane use officer at Salesforce, noted, “It’s not enough to deliver the technological capabilities of generative AI. We must prioritize responsible innovation to help guide how this transformative technology can and should be used — and ensure that employees, partners, and customers have the tools they need to develop and use these technologies safely, accurately, and ethically.”

To get there, Salesforce has developed five guidelines for the development of trusted generative AI – covering safety, accuracy, sustainability, honesty, and empowerment – that can serve as a guidepost for your own AI practice. 
