BETA
This is a BETA experience. You may opt-out by clicking here

More From Forbes

Edit Story

AI Assistants Everywhere: Why Ethical And Responsible AI Is Our Most Important Investment

Following

We are quickly approaching one year since the emergence of generative AI with ChatGPT’s launch in November 2022 and enterprises are continuing to strategize on how to properly implement the technology into their business to help achieve more productivity, cost savings, and operational efficiency.

With generative AI’s rise, people are using the technology to help identify the fastest way to close a deal, provide quick answers to repetitive questions and tasks, better understand customer behavior, and create highly personalized shopping experiences for customers.

Businesses across all industries are racing to use generative AI and AI assistants to improve their operations and drive customer loyalty by providing the best experience possible. However, as the use of AI assistants has increased, it’s evident that companies need to take a data-first approach to ensure these assistants are not only effective, but safe, for use in the business world.

We caught up with Claire Cheng, senior director of AI engineering at Salesforce, the world’s #1 AI CRM empowering companies of every size and industry to connect with their customers through the power of data, AI, CRM, and trust. Claire shares why investing in ethical and responsible AI is the most important investment a business can make to stay ahead.

Gary Drenik: Tell me why ethical and responsible AI should be the top priority when building AI assistants.

Claire Cheng: As brands increasingly adopt AI to increase efficiency and meet customer expectations, nearly three-quarters of customers are concerned about the unethical use of AI. A recent Prosper Insights & Analytics survey found that 43.6% of people who use tools like ChatGPT are using it for research. These AI assistants, which are built off of large language models (LLMs), are trained on trillions of documents that help generate the information people are looking for. These massive amounts of data inevitably contain biases that could be reflected in the model’s output, if organizations don’t take a data-first approach to using AI.

Organizations need to create AI with trust at the center of every AI-related effort. At Salesforce, we prioritize ethical and responsible AI by helping our customers use their trusted proprietary data within Data Cloud to power their AI outcomes, and foster a data-first approach that inherently aligns with ethical practices in enterprise-level applications. This approach ensures that our customers can trust that their AI-powered insights and actions are deeply rooted in respect for customer privacy and data governance, paving the way for enterprise AI solutions that are not only powerful but also principled and trustworthy.

Drenik: What are the biggest ethical hurdles the data community faces as we lean into this new evolution of AI-powered assistants?

Cheng: The biggest ethical hurdle the data community faces is educating both the producers and consumers of data about AI’s limitations and the biases that algorithms could introduce when generating outputs pulled from the data used for training. This is why the Einstein Trust Layer and other technologies are so important to help companies minimize concerns and hurdles by being able to trust AI-generated content and predictions.

There is a skills gap that needs to be filled, as more than 60% of skilled professionals believe they lack the necessary skills to effectively and safely use AI. As more complex models are developed, these biases can be reinforced and amplified if left unchecked, or if people aren’t able to identify these limitations early on in order to refine AI models before launch.

Drenik: How can businesses best account for making sure their data is trusted, accurate and secure?

Cheng: AI is only as good as the data that powers it, which is why businesses need to be deliberate about their data strategy. This is why Salesforce’s Data Cloud has been so widely received - it’s the key to helping businesses better understand and unlock customer data and deliver actionable insights in real-time and at scale.

Another important step is how they fine-tune the AI models and how these models get access to data, and to collect data from trusted sources. For instance, to increase the quality of predictions and generations, Salesforce uses the dual approach of grounding in customer data and continuous iterative improvement with customer feedback.

Dynamic grounding — the process of retrieving relevant, factual, up-to-date data and restricting the LLM for generations — remains key for delivering highly relevant and accurate outputs. For example, by grounding a generative AI model with a customer’s data from trusted sources, such as their knowledge articles, the generated answers are more accurate and relevant.

Customer feedback also helps validate and improve model accuracy. Having customers provide explicit and implicit feedback on how they use or alter their AI outputs, immensely helps organizations refine an AI assistant and improve the end result via feedback-based learning.

Drenik: Can generative AI be used to identify biases in AI assistants and data? Or will humans always be needed in order to properly do so and ensure trust?

Cheng: 73% of IT leaders have concerns about the potential for bias in generative AI. When testing AI models for bias, it’s important to perform these tests in simulated environments. At Salesforce, for example, we start with predictive AI, which analyzes patterns of historical data to forecast future outcomes. We mitigate bias by testing these algorithms over time and develop quantitative measures of bias and fairness. This helps detect and correct bias-related issues prior to deployment.

Tackling bias in generative AI remains a more complex problem, requiring new evaluation methods such as adversarial testing. With adversarial testing, our team works to intentionally push generative AI models’ and applications’ boundaries to understand where the system may be weak or susceptible to bias, toxicity, or inaccuracy.

There may come a time when humans won’t be as needed to ensure trust, but in the early stages of generative AI development and building trust with the end user, it’s important for humans to stay involved to help recognize and reject adversarial inputs so that an AI model doesn’t produce toxic, biased, unsafe, or inaccurate outputs.

Drenik: What are the privacy implications for businesses using generative AI, in addition to trust, and how can they ensure proper guardrails are used to make sure sensitive customer data isn’t accidentally leaked, for example?

Cheng: Privacy and trust are the most important drivers for businesses using generative AI. In fact, 79% of customers say they’re increasingly protective of their personal data. That’s why safety is one of Salesforce’s Trusted AI Principles.

It’s critical that organizations make every effort to protect the privacy of any personally identifiable information (PII) present in the data used for training and create guardrails to prevent additional harm. This can be done by force publishing code to a sandbox rather than automatically pushing it to production to analyze what outputs are generated before external use.

Organizations need to also respect data provenance and ensure consent to use data is granted, especially when pulling from open-source and user-provided sources. A recent Prosper Insights & Analytics survey found that 48.1% of adults have denied permission for mobile apps to track their data, so businesses will need to first ensure permission has been granted before using any data sets in the development of AI assistants.

Drenik: How can companies ensure that IT teams using generative AI solutions are properly trained with the skills to find data biases and safely build AI assistants?

Cheng: A recent Salesforce survey found that 66% of senior IT leaders believe employees don’t have the skills to leverage generative AI successfully. However, nearly all respondents (99%) believe that organizations must take measures into their own hands to successfully leverage the technology.

To start, organizations must adopt a set of guiding principles for employees to follow when interacting with AI assistants and analyzing the data used to train them. For example, Salesforce was the first company to publish principles for developing generative AI. Once these principles are in place, investing in the upskilling of employees to better understand what trusted data sources look like and how to keep sensitive data secure are both ideal places for organizations to start training employees.

For example, at Salesforce, both employees and our end users can visit Trailhead to get started on building their AI skill sets as generative AI and AI assistants become more present in everyday work.

Drenik: Thanks, Claire, for sharing your insights on AI assistants and why ethical and responsible AI is the most important investment organizations can make as this technology becomes more widely used in business.

Check out my website

Join The Conversation

Comments 

One Community. Many Voices. Create a free account to share your thoughts. 

Read our community guidelines .

Forbes Community Guidelines

Our community is about connecting people through open and thoughtful conversations. We want our readers to share their views and exchange ideas and facts in a safe space.

In order to do so, please follow the posting rules in our site's Terms of Service.  We've summarized some of those key rules below. Simply put, keep it civil.

Your post will be rejected if we notice that it seems to contain:

  • False or intentionally out-of-context or misleading information
  • Spam
  • Insults, profanity, incoherent, obscene or inflammatory language or threats of any kind
  • Attacks on the identity of other commenters or the article's author
  • Content that otherwise violates our site's terms.

User accounts will be blocked if we notice or believe that users are engaged in:

  • Continuous attempts to re-post comments that have been previously moderated/rejected
  • Racist, sexist, homophobic or other discriminatory comments
  • Attempts or tactics that put the site security at risk
  • Actions that otherwise violate our site's terms.

So, how can you be a power user?

  • Stay on topic and share your insights
  • Feel free to be clear and thoughtful to get your point across
  • ‘Like’ or ‘Dislike’ to show your point of view.
  • Protect your community.
  • Use the report tool to alert us when someone breaks the rules.

Thanks for reading our community guidelines. Please read the full list of posting rules found in our site's Terms of Service.