Artificial intelligence (AI) is booming in tech today. Companies are eager to acquire services that study data and make predictions so they can anticipate their customers’ needs. But human beings collect, choose, and input the data in the first place, and in that human-to-machine transaction, bias can occur. People can unwittingly input biased data that AI will then propagate and amplify. The impact can be significant, from the treatments doctors prescribe to people of different demographics to whether candidates get hired because of the college they attended.

Salesforce’s Einstein team is tasked with a particularly interesting challenge: Build AI tools that any business can use — ethically. That means educating customers, giving them insight into how AI works, and providing transparency into what Einstein reveals about their own data. All of this is top of mind as the team unveils new Einstein platform tools today. You can also learn more about the new tools at TrailheaDX, the Salesforce developer conference in late May.

Addressing bias was foundational to the team, with its first job listing asking not what college you attended, but how you would approach protecting pandas. Data Science and Engineering VP Vitaly Gordon wrote the ad to attract critical thinkers from different backgrounds, and it worked. “The real story of addressing bias in AI at Salesforce is how we built an eclectic team for that purpose,” says Einstein EVP and GM John Ball.

We sat down with the team to ask how they are building AI tools the right way at Salesforce. Here is that conversation:  


Do you think it’s important to address the potential for bias in artificial intelligence, and if so, why?

Sarah Aerni, director: Working in this area has been central to my life’s work. I previously worked in biomedical informatics, and I thought it was having a major impact. My intentions were really positive, but at some point, while I was building these predictive AI models, I lost touch with the fact that there was a human being attached to every single data point. It’s really important for our team to think about this while we’re teaching and enabling others.


Natalie Casey, engineer: Working with something so new, it’s important for people who understand how machine learning works to explain that bias can seep into models, and that you have to actively keep an eye out for it.


Till Bergmann, senior data scientist: An algorithm will pick up a pattern and amplify it, so if there is bias in the data — it might be sexism or racism — AI can reinforce it. And the bias may be in a system not out of any kind of ill will, but because someone is unaware that they have that bias. The problem is that AI can spread it and make it worse, or at least more widespread. The AI itself is neither smart nor stupid. It learns what we teach it. Adjusting the algorithm can have a huge impact, one that reaches far beyond the person who introduced the bias.
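The amplification Bergmann describes can be sketched in a few lines. This is a toy illustration with made-up hiring data (the groups, counts, and "model" are all hypothetical, not anything from Einstein): a naive model that learns only from a biased group feature turns a modest gap in historical outcomes into an absolute one.

```python
# Toy sketch with hypothetical data: group A was historically hired 60% of
# the time, group B only 40% -- a 20-point gap introduced by human decisions.
history = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 40 + [("B", 0)] * 60

def majority_label(group):
    """A naive 'model' whose only feature is group membership:
    it predicts the majority historical outcome for that group."""
    outcomes = [label for g, label in history if g == group]
    return 1 if sum(outcomes) / len(outcomes) >= 0.5 else 0

# The model doesn't just reproduce the 20-point gap -- it amplifies it:
print(majority_label("A"))  # 1 -> group A is always predicted "hire"
print(majority_label("B"))  # 0 -> group B is never predicted "hire"
```

The point of the sketch is that nothing malicious happened anywhere in the pipeline; the model simply learned, and then exaggerated, a pattern that humans put into the data.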


Shubha Nabar, senior director: On the flip side, with machine learning and automation, there is also the opportunity to de-bias your data and your algorithms. You can potentially teach the machine to overcome bias, which could be much harder to do with humans. So while it’s important to be cautious about bias, once you identify it, there's also the possibility of building systems that are more fair than purely human systems.
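One concrete version of the de-biasing Nabar mentions is reweighing: assigning training examples weights so that group membership and the positive label become statistically independent before a model is trained. The sketch below uses the same kind of hypothetical data as above; it is one published fairness technique, not a description of how Einstein itself works.

```python
# Minimal sketch of reweighing: weight = P(group) * P(label) / P(group, label),
# so under-represented (group, label) pairs are upweighted. Data is hypothetical.
from collections import Counter

data = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 40 + [("B", 0)] * 60
n = len(data)

group_counts = Counter(group for group, _ in data)
label_counts = Counter(label for _, label in data)
pair_counts = Counter(data)

weights = {
    (group, label):
        (group_counts[group] / n) * (label_counts[label] / n)
        / (pair_counts[(group, label)] / n)
    for (group, label) in pair_counts
}

# Positive examples from group B get a weight above 1, positives from group A
# below 1 -- training on the weighted data removes the group/label correlation.
print(weights[("B", 1)], weights[("A", 1)])
```

Trained on the weighted examples, the toy model from before would no longer find group membership predictive, which is exactly the "machine that overcomes bias" idea.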



Chalenge Masekera, manager: We have started to look at diversity in tech, and hopefully people are continuously evaluating their views and any prejudices they might have. Now it’s more of a priority because AI is automated and operates at a much larger scale, on far greater quantities of data, and that scale can amplify human bias. So we have to proactively address it before the bias gets into the data.


Leah McGuire, principal data scientist: In some ways, it's nice if the bias is in an algorithm and you can easily identify it and address it, whereas the originating bias in human systems can be much harder to address. But if you're aware of bias, you can actually anticipate it and help correct the behavior. People may not be aware that they're unconsciously choosing particular things, and you can help them see it.



Has a team focus on diversity helped you address bias in AI?  

Sara Asher, senior director: One advantage of being in data science is that it’s a relatively new field. So the rules of who should apply and who should be hired are less rigid than in other fields. That means there’s the freedom to look at candidates who don’t necessarily have a computer science degree from a certain university. Instead, you can look at people and decide if they are smart and talented and capable of doing this work. That has worked well for our team.

Leah McGuire: It’s to our advantage that there is not one unified job description for a data scientist. Because then you could just have a series of checkboxes that everyone goes through, and you would end up with people who have very similar perspectives. But addressing bias requires multiple viewpoints; we need to see things from different angles if we are going to make sure our products are ethically sound for customers.

Shubha Nabar: A lot of the work we do is aimed at providing transparency so that our customers have insight into individual predictions. This then gives them an audit trail, so they can look back and see why certain decisions were made. If they inadvertently made biased decisions in hiring, or targeting customers, this transparency can help them detect and remove bias from their data so it doesn’t persist going forward. They can begin measuring their values by checking how many of the decisions they make are in line with the values they espouse.
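The per-prediction transparency Nabar describes can be illustrated with a linear scoring model, where each feature’s contribution to a score is simply its weight times its value. The model, feature names, and numbers below are invented for illustration; this is a sketch of the audit-trail idea, not the Einstein implementation.

```python
# Hypothetical linear model: breaking a score into per-feature contributions
# gives an audit trail for an individual prediction.
weights = {"years_experience": 0.8, "referral": 0.5, "college_rank": 1.2}
candidate = {"years_experience": 3, "referral": 1, "college_rank": 4}

contributions = {f: weights[f] * candidate[f] for f in weights}
score = sum(contributions.values())

# Rank factors by influence. If a proxy like "college_rank" dominates the
# score, that is exactly the kind of signal an audit should surface.
for feature, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {value:+.1f} ({value / score:.0%} of score)")
```

With this breakdown a customer can look back at any single decision and see which factors drove it, rather than treating the score as an opaque verdict.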

Natalie Casey: I have definitely had a lot of great conversations with the team about how to maximize the positive impact of the work we’re doing. This includes working with customers to help them set up and learn about their tools. A big part of our work is education. The customers know their data better than we ever could. We aren’t able to tell them what is and isn’t an ethical use of their data since we don’t know how they are using it. It’s up to us to provide enough transparency into how our tools work that they can see what they're doing and understand the impact.

Till Bergmann: For most people, AI is like black magic. They don’t know all the internal things going on. It’s up to us to expose all the factors that play a role in AI making a prediction.

Chalenge Masekera: You can tell customers all the metrics, but you also have to educate them about how AI can enhance any bias in their data. We can’t assume they just know. I believe people don’t want to be biased; it’s mostly a lack of understanding. We have to be very explicit about it. When we explain the results, they’re glad to see how they are using our AI products.
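One simple metric of the kind Masekera alludes to is the demographic parity difference: the gap in positive-prediction rates between two groups. The predictions and group labels below are made up for illustration; in practice a customer would compute this over their own model’s outputs.

```python
# Hedged sketch: gap in positive prediction rates between groups
# (demographic parity difference), computed on invented predictions.
def positive_rate(predictions):
    return sum(predictions) / len(predictions)

preds_by_group = {"A": [1, 1, 1, 0, 1, 0], "B": [1, 0, 0, 0, 1, 0]}
gap = positive_rate(preds_by_group["A"]) - positive_rate(preds_by_group["B"])

# A gap near zero suggests parity; a large gap is a prompt to investigate.
print(f"demographic parity difference: {gap:.2f}")
```

A metric like this doesn’t prove a model is fair, but it turns "is there bias here?" into a number a customer can track, which is where the education Masekera describes can start.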

Sarah Aerni: When it comes to diverse teams, clearly there is a need in our space and all spaces in tech. A diverse understanding of how products will be used and perceived in the wild is only possible if we bring together unique perspectives. Some of the products we build target non-expert users, and we really try to address this. If we truly want to help democratize this space, we are responsible for making sure that everyone is aware and has the tools. We are the stewards of AI, and we need to make sure that guarding against bias is part of the product.