At Salesforce, Equality is one of our core values, and we are deeply committed to using our platform to drive racial equality and justice, including through our technology. As we continue to address the role technology plays in racism and bias, facial recognition technology has come to the forefront. In 2017, when we launched our Einstein Vision AI product, we made the conscious decision to prohibit facial recognition as part of our commitment to Equality.
We have thought deeply about how we develop our AI technologies and the safeguards we develop for their responsible use. Not allowing the use of our products for facial recognition is a central part of this.
Why this matters
We have long held concerns about facial recognition technology, both around its accuracy and the harm it can cause, particularly to communities of color.
Pioneering AI researchers Joy Buolamwini and Timnit Gebru demonstrated in their 2018 research that facial recognition is more accurate on lighter skin than on darker skin, and more accurate for men than for women. When used for surveillance or by law enforcement, this can lead to misidentification, which has potentially life-altering consequences, particularly for vulnerable communities. Use of facial recognition in public spaces can create opportunities for political manipulation, discrimination, and more. The risks to transgender, nonbinary, and gender non-conforming individuals are also acute.
Facial recognition also struggles to adapt to the "real world." Error rates can climb sharply when factors like aging, camera viewpoint, distance, barriers, illumination, shadows, and movement are introduced. For example, a 2017 study showed that leading algorithms identifying individuals walking through a sporting venue, a very challenging environment, had accuracies ranging from 36% to 87%, depending on camera placement.
Our next steps
We want to be clear: facial recognition technology can have important uses. For example, when individuals voluntarily use it to unlock devices, such as their phones, and the data is stored locally, it can enhance product security. With better research and thoughtful regulation, as well as respect for privacy and human rights, we see its potential.
We have put a range of other guidelines in place for the use of our AI technologies. The Einstein section of our Acceptable Use Policy includes additional safeguards, such as requiring that our customers make clear when end users are interacting with bots rather than humans, and preventing customers from using Einstein predictions to make legal, or similarly consequential, decisions without a human making the final call.
As technology rapidly evolves and changes with the world around us, we will continue to reevaluate our products and policies. We have seen progress in some areas, and we know we have more work to do in others. But we are committed to working with our communities to get this right.