A study by the Australian Council of Learned Academies has shown that 79 per cent of Australians surveyed want artificial intelligence (AI) programmed to be ethical. 

In the era of artificial intelligence, the questions posed in the research – ‘what kind of future society do we want to be?’ and ‘how prepared are we to harness and regulate AI?’ – are becoming more pressing across industry and society as we learn more about AI’s capabilities. AI is only as intelligent as the information we feed it and, unlike most humans, it cannot identify its own biases.

Last year, a technology company scrapped its AI recruiting tool after finding it discriminated against female candidates, penalising, among other things, CVs that contained the word ‘women’s’.

This is not an anomaly. Earlier this year, Capgemini found that 41 per cent of senior executives have scrapped an AI system due to an ethical issue. Executives in 90 per cent of organisations believe ethical issues have arisen from the use of AI systems over the past two to three years. 

Why are these problems arising? The executives themselves pointed to pressure for fast implementation, ignorance of ethical considerations, and ignorance of the need for dedicated resources for ethical AI.  

And concern about ethical AI is not just held inside companies. The same Capgemini survey found that 34 per cent of consumers would stop interacting with a company if an AI interaction resulted in ethical issues. 

On the flipside, 62 per cent of consumers surveyed said they would place higher trust in a company whose AI interactions they perceived as ethical, 59 per cent would be more loyal, and 55 per cent would purchase more from the company and advocate for it on social media.

Salesforce uses the power of AI to deliver more than 8 billion predictions to customers every day. We believe trust and ethical AI go hand in hand with successfully using this powerful technology for good. We have created an Office of Ethical and Human Use of Technology to develop and implement an ethical framework across Salesforce.  

As companies and governments look to harness the power of AI technology, we believe now is the time to put in place Australia’s first framework for the ethical use of AI. 
 

Industry and government collaboration

The Australian Government’s first step earlier this year – consulting with industry and experts – is welcome. The discussion paper, produced by CSIRO’s Data61, encouraged organisations and government bodies to consider what principles should guide their development and use of AI. At a state level, the NSW Government will host its first AI Thought Leaders Summit in November, with the aim of ensuring government agencies drive better outcomes for citizens above all else. However, more can be done to create a robust national AI ethics framework.

Australia trails Singapore and Hong Kong on overall readiness for AI. Singapore, for example, has already acted to regulate and promote responsible AI design by creating an advisory council that works closely with industry and government.

A national advisory council on ethics and AI would be part of catching up. Its membership should draw from a mix of sectors – business, not-for-profit, academia and government – and from a range of backgrounds that reflects the diversity of the community. Nor should it be limited to technologists: human rights advocates, ethicists, economists and community members should all have a seat at the table.
 

Preparing workers for AI

The effectiveness of AI policies hinges on workers being able to capitalise on AI-specific skills and knowledge. 

We’re well aware of the jobs our ecosystem creates. In 2017, IDC predicted Salesforce would create more than 79,000 Australian jobs within its ecosystem by 2022, and the 2018 Deloitte Access Economics ACS Australia’s Digital Pulse report predicts that 3 million Australians will work in ICT occupations by 2023.

Demand in ICT is growing, and those working in AI will need a mix of skills and the ability to develop and maintain complex systems and applications. But with universities producing fewer than 5,000 domestic ICT graduates a year, Australia will need to look abroad for workers with skills in AI, data science, cybersecurity and blockchain.

Salesforce is currently working with industry partners and universities to prepare the next generation of Australian ICT workers. Reskilling is becoming a necessity in today’s job market and already we are seeing students using their Salesforce certifications to find work outside their degree disciplines. 

This kind of lifelong learning is invaluable for the next generation. Programs like these – and those offered by TAFEs – should grow to include AI-specific skills such as applied statistics, computational thinking, graphical modelling, robotics, programming languages and cognitive science theory.
 

Building trust in AI

For AI to grow and deliver on its promises, it must earn and keep the trust of individuals, organisations, and governments alike. From policy through to development, AI professionals need to ensure AI is trained, implemented and monitored following the highest ethical principles.  

Algorithms are only as good as the data they are fed, so it’s important to track training and performance over time. Datasheets for data sets and Model Cards for model reporting can identify if there are issues with representativeness in the data or bias in the model, but this isn’t enough to mitigate potential harm – it takes a much larger effort for humans to analyse why an issue exists and figure out how to mitigate it.
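As a minimal illustration of the kind of check a datasheet for a data set might record, the Python sketch below (hypothetical function and field names, assumed for this example) compares how groups are represented in a training set against reference population shares and flags large deviations:

```python
from collections import Counter

def representation_report(labels, reference_shares, tolerance=0.05):
    """Compare group shares in a data set against reference population shares.

    Flags any group whose observed share deviates from the reference share
    by more than `tolerance` – a simple representativeness summary of the
    sort a datasheet might include.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed_share": round(observed, 3),
            "expected_share": expected,
            "flagged": abs(observed - expected) > tolerance,
        }
    return report

# Example: a hiring data set heavily skewed towards one group
labels = ["men"] * 80 + ["women"] * 20
print(representation_report(labels, {"men": 0.5, "women": 0.5}))
```

A flagged group doesn’t explain *why* the skew exists – as noted above, that diagnosis still takes human analysis – but it makes the imbalance visible before a model is trained on it.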

This effort will build trust in what the AI system is producing, and increase trust with consumers and employees.

Up-front training is important in this – and plenty is available, including our own Trailhead modules such as Responsible Creation of AI.
 

The time for action on AI ethics is now

There are important universal principles for AI systems: fair treatment for all people, empowerment and engagement for everyone, reliability and safety in performance, security and privacy. But these will not implement themselves. 

We already have responsible regulation for the software and hardware inside an aeroplane, car or medical device – AI should be treated similarly.

A national AI ethics framework is critical for Australia’s economy and society. 

We have reached a point where a strong set of guiding values will be welcomed by both industry and government. As Salesforce’s co-founder Marc Benioff has said, technology is not inherently good or bad. It’s what we do with it that matters.

Find out how IT leaders are planning to use AI, and other trends driving IT. Download the Salesforce Enterprise IT Trends report.