Fewer than half of managers in organisations that are using, or planning to start using, artificial intelligence (AI) are very confident that they and their companies understand the ethical risks. Here, Sassoon Grigorian recaps a recent webinar with leaders in ethical AI, Infusing Ethics into AI Policy.
AI is having a big impact on organisations across Australia and New Zealand. From assisting with process automation and decision making to predicting customer behaviour, AI is changing the way we work. According to a Salesforce-commissioned YouGov survey, 62% of managers believe embracing AI is important to their organisation’s ability to survive and stay competitive.
However, only 40% of the managers currently using AI in business processes, or planning to use AI in the next 12 months, are very confident that they and their organisations understand the potential ethical risks of AI.
Importantly, the survey found that 73% of managers currently using AI believe that organisations using AI should have a designated person for Ethical AI. Yet, only 56% report having someone in such a role.
Only 25% of managers are very confident in their organisation’s ability to implement AI processes and systems responsibly, taking into account the privacy and safety of consumers. This demonstrates that while AI can deliver an enormous range of benefits, there are some key ethical issues that must be addressed.
For example, data bias in AI algorithms can lead to discrimination. User safety can be compromised by malicious players. Customer data privacy also needs to be protected.
That’s why infusing ethics into AI policy is vital now, and as the technology evolves.
Edward Santow, Australian Human Rights Commissioner, draws a clear distinction between ethics in AI and the rule of law.
“AI is enabling us to do things we’ve always done, but in powerful new ways. As such, there are already human rights, anti-discrimination, and privacy laws in place that should be the first port of call in determining what you can and can’t do with AI,” he said in the Salesforce/Observer Research Foundation Infusing Ethics into AI Policy webinar.
“Ethical rules are secondary to the law. Ethics can help us uphold the law and fill the gaps where the law is silent.”
To fill those gaps, organisations must set ethical parameters that govern how they develop and use AI-based technologies. Speaking in the same webinar, Kathy Baxter, Principal Architect, Ethical AI Practice at Salesforce, explained what those parameters look like at Salesforce.
“We need to empower our users. To do so, our AI needs to be inclusive and respect the rights of everyone it impacts. So we created an AI charter that lays out what our AI principles are as a company,” Baxter said.
“We believe we must safeguard all the data we are entrusted with and ensure what we are building protects and respects human rights. It must be accountable, and we seek and leverage feedback from our customers and civil society groups. Transparency is also important. We must be clear about how we build our models and explain to our users how our AI makes predictions or recommendations.”
This clear ethical framework must be built into the DNA of the AI design and development process. Baxter explained how this is achieved at Salesforce.
“Salesforce works on the agile development methodology. During the very early design stages, we do an assessment with the teams to identify all the intended and unintended consequences of the AI application. We do an analysis of the likelihood and seriousness of the impact, and ask ‘should this application even exist in the first place?’. If the answer is ‘yes’, we identify the strategies we need to put in place to ensure those unintended consequences are mitigated as much as possible.”
However, infusing an ethical framework into the design and development of AI-based technologies may not always be practical.
“We need to be careful of the term ‘by design’,” David Hardoon, Senior Advisor on Data & AI, UnionBank of the Philippines, explained during the webinar. “If an AI methodology or solution algorithm is applied within a specific context or application, then you can hard code the ethics in. But if you have something that needs to be applied more generally from east to west you have to deliberately allow for certain flexibility. In these cases, we need a second line of defence.”
That’s why Salesforce also builds user education and guidance into the company’s AI-based applications.
“Salesforce is a platform, so what our customers do with our product may not be directly within our control,” Baxter said. “But, in the vast majority of cases, harm occurs not through malice but through lack of understanding of the context. We build in guidance and education to make our customers aware of when they are using sensitive fields, how to understand training data, and how to identify if there is a disparate impact occurring.”
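Baxter’s reference to disparate impact corresponds to a measurable fairness concept. One common heuristic, not described in the webinar and included here only as an illustrative assumption, is the “four-fifths rule”: a minimal sketch might compare positive-outcome rates across groups and flag a ratio below 0.8.

```python
# Illustrative sketch: flagging disparate impact with the "four-fifths rule".
# This is a widely used fairness heuristic, NOT Salesforce's actual method;
# the group labels, example data, and 0.8 threshold are assumptions.

def disparate_impact_ratio(outcomes, groups, positive=1):
    """Ratio of the lowest group's positive-outcome rate to the highest's."""
    rates = {}
    for g in set(groups):
        group_outcomes = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in group_outcomes if o == positive) / len(group_outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical example: loan approvals (1 = approved) for two groups.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(outcomes, groups)
if ratio < 0.8:  # the four-fifths rule of thumb
    print(f"Possible disparate impact: ratio = {ratio:.2f}")
```

In this toy data, group A is approved 60% of the time and group B 40%, giving a ratio of about 0.67 and triggering the flag; a real platform would pair such a check with the contextual guidance Baxter describes.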
This second line of defence can be reinforced through robust processes, according to Rahul Panicker, Chief Innovation Officer at Wadhwani AI, India.
“When it comes to AI, it’s processes and systems that can save us, not anticipation of unintended consequences. And this is not specific to AI. There is a long history of how to establish processes that ensure safety in technology development,” Panicker said.
“Start with controlled testing for safety. Then you move on to a controlled pilot, and into an uncontrolled pilot that tests whether it works in the real world, but still with safeguards in place. Finally, the most important step is post-deployment monitoring to catch the consequences we could not anticipate.”
As such, Panicker argued that government regulation of AI should be application-focused.
“Self-driving cars need to be regulated differently than healthcare AI, which needs to be regulated differently than AI used in banking. It’s the application domain that identifies the use case, the potential risks and the stakeholder ecosystem.”
When developing and using AI-based technologies, organisations must adhere to relevant human rights, anti-discrimination, and privacy laws. Organisations should also create an ethical framework to govern AI development and use the framework where the law is silent.
This ethical framework should be built into the AI design and development process, and where flexibility is required, there should be additional focus on post-deployment monitoring. This will mitigate unintended consequences and ensure that organisations – and the people they serve – will experience the full benefits of AI-based technologies.