{"id":67601,"date":"2025-02-19T06:28:32","date_gmt":"2025-02-19T14:28:32","guid":{"rendered":"https:\/\/www.salesforce.com\/?p=67601"},"modified":"2025-07-29T10:59:59","modified_gmt":"2025-07-29T00:59:59","slug":"ai-accountability","status":"publish","type":"post","link":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/","title":{"rendered":"In a World of AI Agents, Who&#8217;s  Accountable for Mistakes?"},"content":{"rendered":"\n<p>Remember the <a href=\"https:\/\/www.bbc.com\/travel\/article\/20240222-air-canada-chatbot-misinformation-what-travellers-should-know\" target=\"_blank\" rel=\" noopener\">viral case<\/a> of an airline forced to honor incorrect fare terms that a chatbot gave a customer? As AI agents step into bigger roles, automating and carrying out complex tasks,&nbsp;accountability for mistakes like this is a real concern. <\/p>\n\n\n\n<p>When AI makes decisions on its own, who&#8217;s on the hook if things go sideways? How can you ensure AI accountability \u2014 and prevent mistakes from happening in the first place?<\/p>\n\n\n\n<p>Artificial intelligence (AI) accountability is a major concern for both employees and executives. In fact, it\u2019s such a pressing issue that The Wharton School recently launched the <a href=\"https:\/\/ai-analytics.wharton.upenn.edu\/wharton-accountable-ai-lab\/\" target=\"_blank\" rel=\" noopener\">Accountable AI Lab<\/a>, a research initiative focused on AI\u2019s ethical, regulatory, and governance challenges. <\/p>\n\n\n\n<p>For businesses, AI accountability is critical: It builds trust, mitigates risk, and ensures compliance. Companies must be able to explain and justify AI\u2019s decisions, and if those decisions are incorrect, rectify the outcomes. 
Without clear accountability, businesses face not only legal exposure but also reputational damage and loss of customer confidence.<\/p>\n\n\n\n<p>\u201cCompanies are already held accountable for what their AI does,\u201d said Jason Ross, product security principal at Salesforce. \u201cBut there are legal, ethical, and social issues coming together in a way with agentic AI that hasn\u2019t happened with other technology, even cloud and mobile.\u201d<\/p>\n\n\n\n<p>Unlike conventional software, which follows predefined rules, agentic AI learns, adapts, and generates responses dynamically, making its decision-making process less predictable. This autonomy creates challenges in pinpointing responsibility when mistakes occur.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-to-ensure-ai-accountability\">How to ensure AI accountability <\/h2>\n\n\n\n<p>Businesses need a multifaceted approach to AI accountability grounded in, among other things, AI-specific technology safeguards, quality data, and new organisational governance.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Why Agentforce Makes AI Agents Reliable for Business | Salesforce\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/7j-neyNXWjk?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><figcaption class=\"wp-element-caption\">This is how Agentforce, the agentic layer of the Salesforce platform, makes agents reliable for businesses. 
<\/figcaption><\/figure>\n\n\n\n<p>Silvio Savarese, Salesforce\u2019s chief scientist, <a href=\"https:\/\/www.salesforce.com\/blog\/the-agentic-ai-era-after-the-dawn-heres-what-to-expect\/\" target=\"_blank\" rel=\" noopener\">wrote recently<\/a> of the importance of establishing clear oversight frameworks to ensure a comprehensive approach to AI accountability. Consider adding these five frameworks to your AI implementation.&nbsp;<\/p>\n\n\n\n<div class=\"layout-one wp-block-salesforce-blog-offer\">\n\t<div class=\"wp-block-offer__wrapper\">\n\n\t\t<div class=\"wp-block-offer__content\">\n\t\t\t<h2 class=\"wp-block-offer__title\"><strong>Unlock the value of agentic AI<\/strong><\/h2>\n\t\t\t\t\t\t\t<p class=\"wp-block-offer__description\">We believe that business is the greatest platform for change. Get actionable steps for building a Centre of Excellence that can help you unleash the possibilities of agentic AI.<\/p>\n\t\t\t\n\t\t\t\n\t\t\t\t\t\t\t<div class=\"wp-block-button\">\n\t\t\t\t\t<a class=\"wp-block-button__link\" target=\"_self\" href=\"https:\/\/www.salesforce.com\/au\/form\/agentforce\/salesforce-coe-best-practices-in-the-age-of-agentic-ai\/?d=701ed00000R4TAMAA3&#038;nc=701ed00000R4vxpAAB\">Get started<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\n\t\t<div class=\"wp-block-offer__media\">\n\t\t\t\t\t<\/div>\n\t<\/div>\n\n\t\t\t<div class=\"wp-block-offer__graphics wp-block-offer__contour\"><\/div>\n\t\n\t\t\t<!-- Standard Illustration -->\n\t\t<img decoding=\"async\" class=\"wp-block-offer__graphics wp-block-offer__illustration\" src=\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/themes\/salesforce-blog\/dist\/images\/offer-block\/offer-illustration-layout-one.png\" alt=\"\">\n\n\t\t<!-- Small Accent Illustration -->\n\t\t\t\t\t<img decoding=\"async\" class=\"wp-block-offer__graphics wp-block-offer__accent\" src=\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/themes\/salesforce-blog\/dist\/images\/offer-block\/offer-accent-layout-one.png\" 
alt=\"\">\n\t\t\n\t\t<!-- Left Side Illustration -->\n\t\t\n\t\t<!-- Cloud Illustration -->\n\t\t\t\t\t<img decoding=\"async\" class=\"wp-block-offer__graphics wp-block-offer__cloud\" src=\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/themes\/salesforce-blog\/dist\/images\/offer-block\/offer-cloud-layout-one.png\" alt=\"\">\n\t\t\n\t<\/div>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-set-up-clear-chains-of-responsibility-for-an-ai-agent-s-decisions\">Set up clear chains of responsibility for an AI agent&#8217;s decisions<\/h3>\n\n\n\n<p>To ensure accountability, businesses should establish unambiguous chains of responsibility for AI decisions. This involves identifying who is accountable for each step the AI takes, from initial deployment to final output. You might need to create new roles to oversee AI functions, such as a Chief AI Officer (CAIO) or<strong> <\/strong>an<strong> <\/strong>AI Ethics Manager<strong>,<\/strong> who would&nbsp; monitor, review, and be accountable for the performance of AI systems.<\/p>\n\n\n\n<p>A CAIO would make sure AI systems follow company guidelines, and in the event of an error, be the first point of contact for rectifying the situation. Similarly, teams formed to audit and track AI decision-making processes would make sure decisions align with corporate values and ethical standards.<\/p>\n\n\n\n<p>According to a <a href=\"https:\/\/insights.issgovernance.com\/posts\/roughly-15-percent-of-large-u-s-companies-disclose-board-oversight-of-ai-iss-corporate-analysis-finds\/\" target=\"_blank\" rel=\" noopener\">March 2024 study<\/a>, roughly 15% of companies in the S&amp;P 500 already provide some degree of board-level oversight of AI.&nbsp;<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-create-systems-that-detect-and-correct-incomplete-incorrect-or-toxic-outputs\">Create systems that detect and correct incomplete, incorrect or toxic outputs <\/h3>\n\n\n\n<p>AI models don\u2019t always get things right. 
They hallucinate, misinterpret context, or reinforce biases. To minimise harm, businesses need real-time monitoring systems, robust audit trails, and the ability to intervene quickly in the event of an error.<\/p>\n\n\n\n<p>One example is the Salesforce research team\u2019s <a href=\"https:\/\/www.salesforce.com\/au\/agentforce\/what-is-rag\/\">advancements in retrieval-augmented generation (RAG)<\/a>, which improves how AI systems access and verify information. This enables rapid evaluation and course-correction, ensuring that AI systems deliver accurate, reliable results you can trust.<\/p>\n\n\n\n<p>Similarly, human-in-the-loop monitoring systems allow for continuous oversight of AI outputs, flagging and correcting issues before they escalate. A framework might include:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Automated dashboards that flag potentially harmful or incorrect outputs,<\/li>\n\n\n\n<li>Fallback mechanisms that default to a human if an AI response is flagged as problematic, and<\/li>\n\n\n\n<li>Regular audits and bias evaluations to continually assess AI accuracy and validity over time.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-define-processes-that-balance-agent-autonomy-with-human-control\">Define processes that balance agent autonomy with human control <\/h3>\n\n\n\n<p>AI agents need clearly defined boundaries. Not everything should be left to automation, especially high-stakes decisions that require human judgment. Organisations should build structured intervention frameworks to determine when and how humans should step in. In finance, for example, agents might recommend investment strategies, but would require human input for any investments over a certain threshold. 
In healthcare, an AI-generated high-risk diagnosis or treatment recommendation could require a doctor\u2019s review.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-develop-approaches-for-making-things-right-when-mistakes-occur\">Develop approaches for making things right when mistakes occur<\/h3>\n\n\n\n<p>AI failures don\u2019t just impact internal operations; they can erode trust with customers and employees. Businesses need structured plans for remediation, communication, and systematic improvement when things go wrong. Consider an ecommerce platform using AI for customer support. If the AI makes a wrong decision about, say, a refund, the remediation blueprint would include:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Immediate rollback procedures to correct the error,<\/li>\n\n\n\n<li>Proactive customer communication, acknowledging the mistake and outlining next steps,<\/li>\n\n\n\n<li>Compensation guidelines, like offering credits, and<\/li>\n\n\n\n<li>Long-term corrective actions, such as retraining the AI model to prevent future errors.<\/li>\n<\/ul>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-establish-new-legal-and-compliance-frameworks-that-explicitly-address-ai-agent-accountability\">Establish new legal and compliance frameworks that explicitly address AI agent accountability <\/h3>\n\n\n\n<p>The regulatory landscape is still evolving, and many existing laws don\u2019t account for autonomous AI decision-making. So, companies need to develop their own AI-specific governance structures that combine legal, ethical, compliance, and operational expertise to make sure AI is used responsibly. This could include establishing a cross-functional AI Centre of Excellence (CoE), where different teams work together to continually assess AI systems against evolving legal requirements and ethical standards.<\/p>\n\n\n\n<p>The CoE could oversee independent audits that verify compliance with industry standards, as well as with the company\u2019s own internal standards. 
It could also create transparency reports that disclose how AI models make decisions.&nbsp;&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-how-salesforce-advances-ai-accountability\">How Salesforce advances AI accountability <\/h2>\n\n\n\n<p>What does AI accountability look like? Consider Salesforce\u2019s approach to AI safety, trust, and ethics, which can provide a foundation for your organisation.\u00a0<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The <a href=\"https:\/\/www.salesforce.com\/au\/artificial-intelligence\/trusted-ai\/\">Einstein Trust Layer<\/a>, a set of features that protects the security of your data and improves safety and accuracy, can evaluate and score content based on its toxicity. Is the content biased or hateful? The scores are logged and stored in <a href=\"https:\/\/www.salesforce.com\/au\/data\/\">Data Cloud<\/a>, a platform for unifying and harmonising data from across your company, as part of an audit trail. Data Cloud can generate reports on audit trail data and user feedback.<\/li>\n\n\n\n<li>The trust layer also verifies the safety and accuracy of responses generated by your large language model (LLM), drastically reducing the likelihood of a bad response.<\/li>\n\n\n\n<li><a href=\"https:\/\/www.salesforce.com\/au\/artificial-intelligence\/prompt-builder\/\">Prompt Builder<\/a>, a tool for building generative AI prompt templates, uses system policies to decrease the risk of the LLM generating something inaccurate or harmful. The policies are a set of instructions to the LLM for how to behave. 
For example, you can tell it not to generate answers when it lacks information about a subject.\u00a0<\/li>\n\n\n\n<li>Dynamic grounding improves the accuracy and relevancy of your AI\u2019s results by combining your structured and unstructured data with an AI prompt, thereby adding more context to the response.&nbsp;<\/li>\n\n\n\n<li>AI systems should be designed to provide clear and understandable explanations for their decisions. Salesforce&#8217;s Einstein platform offers explainability tools that help users understand the factors that influence AI outputs.&nbsp;<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-ai-accountability-is-critical-for-business\">AI accountability is critical for business <\/h2>\n\n\n\n<p>AI isn\u2019t just another tool. It\u2019s a decision-maker that shapes business outcomes in real time \u2014\u00a0which means the old rules around safety and governance don\u2019t apply. Companies that fail to build AI accountability into their foundations may face legal exposure, reputational damage, and loss of customer confidence. It\u2019s time for new governance structures, oversight mechanisms, and AI-specific safeguards that match the power of the technology itself. 
This is the new backbone of trust in the AI age.<\/p>\n\n\n\n<div class=\"layout-one wp-block-salesforce-blog-offer\">\n\t<div class=\"wp-block-offer__wrapper\">\n\n\t\t<div class=\"wp-block-offer__content\">\n\t\t\t<h2 class=\"wp-block-offer__title\"><strong>Unlock the value of agentic AI<\/strong><\/h2>\n\t\t\t\t\t\t\t<p class=\"wp-block-offer__description\">We believe that business is the greatest platform for change. Get actionable steps for building a Centre of Excellence that can help you unleash the possibilities of agentic AI.<\/p>\n\t\t\t\n\t\t\t\n\t\t\t\t\t\t\t<div class=\"wp-block-button\">\n\t\t\t\t\t<a class=\"wp-block-button__link\" target=\"_self\" href=\"https:\/\/www.salesforce.com\/au\/form\/agentforce\/salesforce-coe-best-practices-in-the-age-of-agentic-ai\/?d=701ed00000R4TAMAA3&#038;nc=701ed00000R4vxpAAB\">Get started<\/a>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\n\t\t<div class=\"wp-block-offer__media\">\n\t\t\t\t\t<\/div>\n\t<\/div>\n\n\t\t\t<div class=\"wp-block-offer__graphics wp-block-offer__contour\"><\/div>\n\t\n\t\t\t<!-- Standard Illustration -->\n\t\t<img decoding=\"async\" class=\"wp-block-offer__graphics wp-block-offer__illustration\" src=\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/themes\/salesforce-blog\/dist\/images\/offer-block\/offer-illustration-layout-one.png\" alt=\"\">\n\n\t\t<!-- Small Accent Illustration -->\n\t\t\t\t\t<img decoding=\"async\" class=\"wp-block-offer__graphics wp-block-offer__accent\" src=\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/themes\/salesforce-blog\/dist\/images\/offer-block\/offer-accent-layout-one.png\" alt=\"\">\n\t\t\n\t\t<!-- Left Side Illustration -->\n\t\t\n\t\t<!-- Cloud Illustration -->\n\t\t\t\t\t<img decoding=\"async\" class=\"wp-block-offer__graphics wp-block-offer__cloud\" src=\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/themes\/salesforce-blog\/dist\/images\/offer-block\/offer-cloud-layout-one.png\" 
alt=\"\">\n\t\t\n\t<\/div>\n","protected":false},"excerpt":{"rendered":"<p>AI changes our understanding of responsibility for decisions and actions gone wrong.<\/p>\n","protected":false},"author":145,"featured_media":67603,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"sf_justforyou_enable_alt":true,"optimizely_content_id":"ed18d95e7fd04a2b988515e7dc794669","post_meta_title":"","ai_synopsis":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"sf_topic":[2858,3285],"sf_content_type":[],"coauthors":[2771],"class_list":["post-67601","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","sf_topic-artificial-intelligence","sf_topic-agentforce"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>5 Ways to Ensure AI Accountability In Your AI Agents<\/title>\n<meta name=\"description\" content=\"Who&#039;s on the hook if AI gets it wrong, and how can you ensure AI accountability and prevent mistakes from happening in the first place?\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"In a World of AI Agents, Who&#039;s Accountable for Mistakes?\" \/>\n<meta property=\"og:description\" content=\"AI changes our understanding of responsibility for decisions and actions gone wrong.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/\" \/>\n<meta 
property=\"og:site_name\" content=\"Salesforce\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/salesforce\" \/>\n<meta property=\"article:published_time\" content=\"2025-02-19T14:28:32+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-07-29T00:59:59+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1500\" \/>\n\t<meta property=\"og:image:height\" content=\"844\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Lisa Lee\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@salesforce\" \/>\n<meta name=\"twitter:site\" content=\"@salesforce\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Lisa Lee\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/\"},\"author\":[{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/person\/image\/c44ffdb20409b1c976ab87158760fa61\"}],\"headline\":\"In a World of AI Agents, Who&#8217;s Accountable for Mistakes?\",\"datePublished\":\"2025-02-19T14:28:32+00:00\",\"dateModified\":\"2025-07-29T00:59:59+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/\"},\"wordCount\":1265,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png\",\"inLanguage\":\"en-AU\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/\",\"url\":\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/\",\"name\":\"5 Ways to Ensure AI Accountability In Your AI 
Agents\",\"isPartOf\":{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png\",\"datePublished\":\"2025-02-19T14:28:32+00:00\",\"dateModified\":\"2025-07-29T00:59:59+00:00\",\"description\":\"Who's on the hook if AI gets it wrong, and how can you ensure AI accountability and prevent mistakes from happening in the first place?\",\"inLanguage\":\"en-AU\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-AU\",\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#primaryimage\",\"url\":\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png\",\"contentUrl\":\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png\",\"width\":1500,\"height\":844,\"caption\":\"AI agent's autonomy creates challenges in pinpointing responsibility when mistakes occur. 
[image credit: Aleona Pollauf\/Salesforce]\"},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#website\",\"url\":\"https:\/\/www.salesforce.com\/au\/blog\/\",\"name\":\"Salesforce\",\"description\":\"Learn how to get ahead of trends and supercharge professional relationships\",\"publisher\":{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.salesforce.com\/au\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-AU\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#organization\",\"name\":\"Salesforce\",\"url\":\"https:\/\/www.salesforce.com\/au\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-AU\",\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/logo\/image\/\",\"url\":\"\",\"contentUrl\":\"\",\"caption\":\"Salesforce\"},\"image\":{\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/salesforce\",\"https:\/\/x.com\/salesforce\",\"https:\/\/instagram.com\/salesforce\",\"http:\/\/www.linkedin.com\/company\/salesforce\",\"http:\/\/www.youtube.com\/Salesforce\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/person\/image\/c44ffdb20409b1c976ab87158760fa61\",\"name\":\"Lisa 
Lee\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-AU\",\"@id\":\"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/person\/image\/f9aed54d06a1658c853af8969a355321\",\"url\":\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2024\/02\/Lisa-Lee.webp?w=128&h=96&crop=1\",\"contentUrl\":\"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2024\/02\/Lisa-Lee.webp?w=128&h=96&crop=1\",\"width\":128,\"height\":96,\"caption\":\"Lisa Lee\"},\"description\":\"Lisa Lee is a contributing editor at Salesforce. She has written about technology and its impact on business for more than 25 years. Prior to Salesforce, she was an award-winning journalist with Forbes.com and other publications.\",\"url\":\"https:\/\/www.salesforce.com\/au\/blog\/author\/lisa-lee\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"5 Ways to Ensure AI Accountability In Your AI Agents","description":"Who's on the hook if AI gets it wrong, and how can you ensure AI accountability and prevent mistakes from happening in the first place?","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/","og_type":"article","og_title":"In a World of AI Agents, Who's Accountable for Mistakes?","og_description":"AI changes our understanding of responsibility for decisions and actions gone 
wrong.","og_url":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/","og_site_name":"Salesforce","article_publisher":"https:\/\/www.facebook.com\/salesforce","article_published_time":"2025-02-19T14:28:32+00:00","article_modified_time":"2025-07-29T00:59:59+00:00","og_image":[{"width":1500,"height":844,"url":"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png","type":"image\/png"}],"author":"Lisa Lee","twitter_card":"summary_large_image","twitter_creator":"@salesforce","twitter_site":"@salesforce","twitter_misc":{"Written by":"Lisa Lee","Est. reading time":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#article","isPartOf":{"@id":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/"},"author":[{"@id":"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/person\/image\/c44ffdb20409b1c976ab87158760fa61"}],"headline":"In a World of AI Agents, Who&#8217;s Accountable for Mistakes?","datePublished":"2025-02-19T14:28:32+00:00","dateModified":"2025-07-29T00:59:59+00:00","mainEntityOfPage":{"@id":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/"},"wordCount":1265,"commentCount":0,"publisher":{"@id":"https:\/\/www.salesforce.com\/au\/blog\/#organization"},"image":{"@id":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#primaryimage"},"thumbnailUrl":"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png","inLanguage":"en-AU","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/","url":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/","name":"5 Ways to Ensure AI Accountability In Your AI 
Agents","isPartOf":{"@id":"https:\/\/www.salesforce.com\/au\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#primaryimage"},"image":{"@id":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#primaryimage"},"thumbnailUrl":"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png","datePublished":"2025-02-19T14:28:32+00:00","dateModified":"2025-07-29T00:59:59+00:00","description":"Who's on the hook if AI gets it wrong, and how can you ensure AI accountability and prevent mistakes from happening in the first place?","inLanguage":"en-AU","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/"]}]},{"@type":"ImageObject","inLanguage":"en-AU","@id":"https:\/\/www.salesforce.com\/au\/blog\/ai-accountability\/#primaryimage","url":"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png","contentUrl":"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png","width":1500,"height":844,"caption":"AI agent's autonomy creates challenges in pinpointing responsibility when mistakes occur. 
[image credit: Aleona Pollauf\/Salesforce]"},{"@type":"WebSite","@id":"https:\/\/www.salesforce.com\/au\/blog\/#website","url":"https:\/\/www.salesforce.com\/au\/blog\/","name":"Salesforce","description":"Learn how to get ahead of trends and supercharge professional relationships","publisher":{"@id":"https:\/\/www.salesforce.com\/au\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.salesforce.com\/au\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-AU"},{"@type":"Organization","@id":"https:\/\/www.salesforce.com\/au\/blog\/#organization","name":"Salesforce","url":"https:\/\/www.salesforce.com\/au\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-AU","@id":"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/logo\/image\/","url":"","contentUrl":"","caption":"Salesforce"},"image":{"@id":"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/salesforce","https:\/\/x.com\/salesforce","https:\/\/instagram.com\/salesforce","http:\/\/www.linkedin.com\/company\/salesforce","http:\/\/www.youtube.com\/Salesforce"]},{"@type":"Person","@id":"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/person\/image\/c44ffdb20409b1c976ab87158760fa61","name":"Lisa Lee","image":{"@type":"ImageObject","inLanguage":"en-AU","@id":"https:\/\/www.salesforce.com\/au\/blog\/#\/schema\/person\/image\/f9aed54d06a1658c853af8969a355321","url":"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2024\/02\/Lisa-Lee.webp?w=128&h=96&crop=1","contentUrl":"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2024\/02\/Lisa-Lee.webp?w=128&h=96&crop=1","width":128,"height":96,"caption":"Lisa Lee"},"description":"Lisa Lee is a contributing editor at Salesforce. 
She has written about technology and its impact on business for more than 25 years. Prior to Salesforce, she was an award-winning journalist with Forbes.com and other publications.","url":"https:\/\/www.salesforce.com\/au\/blog\/author\/lisa-lee\/"}]}},"jetpack_featured_media_url":"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png","jetpack_sharing_enabled":true,"distributor_meta":false,"distributor_terms":false,"distributor_media":false,"distributor_original_site_name":"Salesforce","distributor_original_site_url":"https:\/\/www.salesforce.com\/au\/blog","push-errors":false,"primary_topic":{"term_id":2858,"name":"Artificial Intelligence","slug":"artificial-intelligence","term_group":0,"term_taxonomy_id":2858,"taxonomy":"sf_topic","description":"","parent":0,"count":110,"filter":"raw"},"featured_image_url":"https:\/\/www.salesforce.com\/au\/blog\/wp-content\/uploads\/sites\/4\/2025\/02\/TSK-39920_Agentic_Ai_Accountability.png?w=1500","_links":{"self":[{"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/posts\/67601","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/users\/145"}],"replies":[{"embeddable":true,"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/comments?post=67601"}],"version-history":[{"count":2,"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/posts\/67601\/revisions"}],"predecessor-version":[{"id":67605,"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/posts\/67601\/revisions\/67605"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/media\/67603"}],"wp:attachment":[{"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-
json\/wp\/v2\/media?parent=67601"}],"wp:term":[{"taxonomy":"sf_topic","embeddable":true,"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/sf_topic?post=67601"},{"taxonomy":"sf_content_type","embeddable":true,"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/sf_content_type?post=67601"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.salesforce.com\/au\/blog\/wp-json\/wp\/v2\/coauthors?post=67601"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}