
Ethical AI — Where There’s a Will, There’s a Way


Business, government and academic leaders continue to sound the alarm about responsible AI, but important progress is being made. Here’s the latest.

There may be no more pressing issue in technology than taming the Wild West that is artificial intelligence (AI).  

At the World Economic Forum’s first Global Technology Governance Summit (GTGS), an event dedicated to the responsible design and deployment of emerging technologies, attendees sounded the alarm on AI, and not for the first time. Specifically, global adoption is outrunning the ability, or in some cases the will, to use it responsibly and ethically.

As AI has grown — worldwide spending is expected to hit $110 billion in 2024 — it has come under fire across sectors and industries for inherent algorithmic bias. The issue of responsible AI is so pervasive that the Partnership on AI, a group dedicated to advancing the understanding of AI, launched an incident tracker in 2020 to chronicle and, hopefully, learn from its failings. 


Ethical issues around bias, inclusiveness, privacy, accountability, and transparency have been recognized as serious problems since at least 2009, yet they persist. That’s largely because of a lack of diversity on AI development teams, a lack of processes to evaluate technology before it’s launched into society, a lack of incentives to make ethical decisions, a lack of regulation to force companies to do the “right thing,” and other gaps.

Lack of ethnic and gender diversity on AI development teams is a huge issue. Building AI involves gathering and cleaning data, then implementing and deploying systems. If Black individuals are not involved in these steps (and they often are not), bias can be built into AI systems. Thus, the ethical use of AI cannot be fully addressed without greater diversity in its development, and there is a long way to go: only 22% of AI professionals globally are female, with Black practitioners even less represented. In fact, a nonprofit, Black in AI, emerged in 2017 to increase the presence of Black practitioners in the field.

Ethical AI — Who’s minding the store?

In the absence of formal standards or frameworks, unregulated organizations are left to police themselves and to develop and follow their own ethical AI guidelines. The stakes are enormous.

AI promises to make organizations 40% more efficient by 2035, which corresponds to a staggering $14 trillion in economic value. When viewed through that lens, we can compare AI to the changes wrought by the Industrial Revolution: innovation, invention, efficiency, and productivity on a massive scale. Business leaders convening virtually at January’s WEF conference agreed about its promise and its peril, noting that trust is the big inhibitor to realizing AI’s full potential. That sentiment was echoed at GTGS by government officials. 

“Our global approach needs a revamp,” said Jason Matheny, deputy assistant to the President for technology and national security. “We don’t yet have design principles to provide safety guarantees in AI.” 

He said the U.S. supports AI principles put forth by the Organization for Economic Cooperation and Development (OECD) and WEF’s Global Partnership on AI, but “we need ways to reliably measure those properties, and we don’t have those methods currently.”  

Translating the principles into instruments of governance, he said, is the most challenging part. 

Ethical AI can’t wait

Getting AI right is hard work but more important than ever. Lives may depend on it.

Ethical AI principles and frameworks do exist. The private sector, academia, government, and professional associations have proposed hundreds of frameworks and toolkits. But as Harvard Business Review noted, these are mostly “ill-defined principles” that “cannot be clearly implemented in practice.” There is not much technical personnel can do, the article notes, to clearly uphold such high-level guidance. “While AI ethics frameworks may make for good marketing campaigns, they all too frequently fail to stop AI from causing the very harms they are meant to prevent.” 

Companies need individual ethical AI principles

Kathy Baxter, Salesforce’s principal architect of ethical AI practice, agrees that we need formal standards but says it’s important that companies develop their own frameworks too. 

“Companies need to create AI principles and frameworks that work within their own set of values, vision, and product development lifecycles,” Baxter said. “If you have someone familiar with how to develop these, it is faster and easier to develop and implement a bespoke set of principles and frameworks that you know your company will buy into, rather than trying to work someone else’s into your company.”


These bespoke frameworks will still be important for companies to maintain and iterate on even after the establishment and operationalization of formal standards. Also key — buy-in from top leaders to ensure AI principles are actually put into practice and adopted within the organization. 

The importance of public/private partnership in ensuring more responsible use of AI cannot be overstated. And in the U.S., there is much work to be done. Recent legislative attempts at the state level to establish guidelines and regulations have not gained much traction. In 2020, 14 states introduced 43 pieces of AI-related legislation. Thirty-six failed. Six are pending. Only one, in Utah, passed, and it concerned the creation of a deep technology initiative in higher education.

Worth noting: the European Union is advancing groundbreaking AI regulation to address the issues mentioned above. At the same time, China is investing heavily in AI and aims to become a global center for AI innovation by the end of this decade.

Meanwhile, the state of Washington introduced legislation earlier this year that would have established the most concrete AI regulations in the country by banning discriminatory government use of AI. Civil rights groups hailed it as a welcome and much-needed step toward ensuring equality, while industry groups cautioned against unintended consequences of regulatory overreach. The legislation ultimately failed, criticized as vague and poorly written. However, industry watchers are hopeful it will be improved, rewritten, and reintroduced at a later date.

Businesses have a key role in educating federal, state, and local governments about AI, what it can do, how to use it, and the potential for misappropriation. They can serve as promoters and facilitators of collaboration between government, industry, society, and NGOs. 

2021 may be a year of real change

So, what’s going on now? Standards bodies like the Institute of Electrical and Electronics Engineers (IEEE) and the National Institute of Standards and Technology (NIST) have formed working groups to, among other things, define responsible AI. What is bias? What is considered harmful bias? How do you measure it? Which definition of fairness should be used in different contexts? Importantly, the groups are working to define safe thresholds to identify which definitions of bias must be prioritized for each type and use case of AI. 

“We can never say any AI is 100% unbiased,” Baxter said. “Just as we can’t say a medical treatment is 100% risk-free, we instead need to talk about safe thresholds.”
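What might measuring bias against a “safe threshold” look like in practice? Here is a minimal Python sketch, purely illustrative rather than any IEEE or NIST standard: it computes one common fairness metric, the demographic parity difference, and compares it against an assumed cutoff. The group names, audit data, and the 0.05 threshold are all invented for the example.

```python
# Illustrative sketch only: one fairness metric checked against an
# assumed "safe threshold." Real metric choices and thresholds depend
# on the context and use case, as the standards groups above note.

def demographic_parity_difference(outcomes_by_group):
    """Gap between the highest and lowest positive-outcome rates
    across demographic groups (0.0 means perfectly equal rates)."""
    rates = {
        group: sum(outcomes) / len(outcomes)
        for group, outcomes in outcomes_by_group.items()
    }
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = the model granted the positive outcome.
audit = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

SAFE_THRESHOLD = 0.05  # assumed value, chosen only for illustration

gap = demographic_parity_difference(audit)
print(f"Demographic parity difference: {gap:.2f}")
if gap > SAFE_THRESHOLD:
    print("Above the assumed safe threshold: flag the model for review.")
```

The point is not this particular metric but the pattern: pick a measurable definition of bias, agree on a tolerable threshold for the use case, and audit against it. That is exactly the kind of choice the IEEE and NIST working groups are trying to standardize.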

Organizations are also moving beyond principles and into operationalizing ethics in AI. The Responsible Use of Technology project, which is part of WEF’s AI and machine learning platform, has created a community that promotes sharing responsible technology tools and practices in a multi-stakeholder format. 

“Leaders are looking for proven techniques that they can apply within their organizations to drive more ethical behavior around technology. The World Economic Forum is providing these leaders a safe environment to learn from one another while advancing the practice of responsible AI,” said Daniel Lim, a Salesforce fellow at the World Economic Forum who leads the project.


Throughout 2020, the group hosted a series of workshops in which participants learned from experts at Deloitte, the Markkula Center for Applied Ethics at Santa Clara University, Salesforce, and Microsoft about how organizations can apply responsible innovation techniques to drive more ethical behavior around technology and mitigate risks in AI products. In February, the group published the first in a series of documents, a case study that examines how companies can incorporate ethical thinking into the development of technology and how they might operationalize ethics in tech.

Accountability in AI will require public/private partnership, and progress is being made. One example is the resurrection of the Algorithmic Accountability Act, which was introduced in Congress in 2019 but went nowhere. It was one of the first bills to address AI bias at the federal level and would have required tech companies to conduct bias audits on their AI systems. Its original sponsors plan to reintroduce the bill this year in a radically changed political environment.


According to Fast Company, “there may be an appetite for finally enacting guardrails for a technology that is increasingly part of our most important automated systems. But the real work may be passing legislation that both addresses some of the most immediate dangerous AI bias pitfalls and contains the teeth to compel tech companies to avoid them.” 

There is more optimism at the federal level: the Biden Administration named Eric Lander, president and founding director of the Broad Institute of MIT and Harvard (and a geneticist, molecular biologist, and mathematician), as the first Cabinet-level appointee to lead the Office of Science and Technology Policy. Given the stature of the appointment, there is real hope that U.S. leaders now view science and tech issues as being as important as other national priorities.

The positive thing, says Salesforce’s Baxter, is that “there are a lot of people paying attention to this problem,” including WEF and EqualAI, which are keeping the pressure on industry and lawmakers to enact real change.

Operationalizing AI frameworks

Many aspects of AI are still evolving, particularly around these core principles:

  • Ethics: responsibly source data, manage data and code, and reduce bias as much as possible, while acknowledging that eliminating all bias, however laudable a goal, is unattainable.
  • Explainability: systems and algorithms need to be transparent and explainable to users and regulators. Why is an algorithm predicting what it is, and how is it reaching those decisions? (A minimal sketch follows this list.)
  • Security: AI systems and software should follow the highest security standards.
  • Accountability and governance: there must be transparency in the accountability and governance of AI frameworks. One example: audit results of bias in AI must be made public.
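To make the explainability principle concrete, here is a minimal, hypothetical Python sketch. For a simple linear scoring model, each feature’s contribution to a decision can be reported directly alongside the score; the feature names and weights are invented for illustration, and real systems would need far more rigorous techniques.

```python
# Hypothetical sketch of explainability for a linear scoring model:
# report each feature's contribution to a single prediction, so a user
# or regulator can see how the decision was reached.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
INTERCEPT = 0.1  # the model's baseline score

def explain(applicant):
    """Return the score plus a per-feature breakdown of how it was reached."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in applicant.items()
    }
    score = INTERCEPT + sum(contributions.values())
    return score, contributions

score, why = explain({"income": 0.8, "debt_ratio": 0.5, "years_employed": 0.3})
print(f"Score: {score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```

Linear models make this kind of transparency cheap; for more complex models, answering the same question (“why this prediction?”) requires dedicated explainability methods, which is precisely why the principle is called out here.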

At the national level, some governments are planning or have already implemented AI strategies as part of their countrywide growth plans. Case in point: Turkey has established an AI institute to coordinate AI initiatives and has created a Big Data and AI Applications Department under its digital transformation office to ensure secure, high-quality data sharing.

Mustafa Varank, Turkey’s Minister of Industry and Technology, said at GTGS that by 2025, AI employment in the country will reach 50,000; the number of AI graduates will rise tenfold; 15% of total R&D expenditure will go to AI; and the commercialization of AI will be supported through public procurement and open datasets.

“Drastic changes in technology always make people anxious,” said Varank. “However, the experience we have gained since the first Industrial Revolution has revealed important facts [one being that nations and organizations must adopt new technology or risk being left behind] that may guide us in the Fourth Industrial Revolution.”

Lisa Lee, Contributing Editor, Salesforce

Lisa Lee is a contributing editor at Salesforce. She has written about technology and its impact on business for more than 25 years. Prior to Salesforce, she was an award-winning journalist with Forbes.com and other publications.

