Now I am absolutely thrilled
to announce our upcoming keynote
session with AI expert Dr.
Joy Buolamwini. She's
an artist, the founder
of the Algorithmic Justice League,
and author of the national
bestseller, Unmasking AI.
So without further ado,
please give a warm Salesforce
welcome to Dr. Joy and our own Chief Equality
Officer, Alexandra Siegel.
- Amazing.
- I have to do this.
- Yes. Definitely show off the book.
Let it be known. You're an author. Okay.
We are so excited to be
here with you, Dr. Joy.
I know you've had a big impact on me
and my career when I think about equality,
because I remember your TED Talk in 2017.
- Oh wow. You're going wayyyyy...
- I remember when YouTube, you
know, was the It Girl
and not TikTok. But I remember,
I remember you had this brilliant TED Talk
where you talked about some
of the bias you were seeing,
and now today you're here with a new book.
You've done so much to
trailblaze on the topic of AI
and equality and make people
really aware of the biases
that do exist and how
we can mitigate them.
We have folks here from all different industries,
folks who are interested in
learning more about AI,
who work in AI, who have influence in AI.
And we'd love to hear from you a little
bit about your journey
and how this became a focus
for you and your career.
- Yeah. So it started, as
some of you know, with a mask.
It wasn't too long ago, we
were celebrating Halloween.
So when I was a grad
student at MIT Media Lab,
I was busy working on a class project.
It was an art installation.
I wanted to look like Serena
Williams in the morning.
And so I built this mirror
that could project anything
onto my reflection just because
it was the MIT Media Lab.
And that's the kind of thing we do.
So anyways, I'm testing, I'm having fun,
and then one of my friends says,
we should do a girl's night
out, come over to my place,
we're going to do masks, things like that.
So I'm thinking she means
bring a costume mask.
So I grabbed this white mask,
and it turned out she meant facial masks.
So it was not what I was thinking, but
because of that misunderstanding,
I had a white mask in my office
as I was building the system to try
to make myself look like
Serena Williams in the morning.
And as I was working on it,
I was using face detection software
that didn't detect my face consistently.
So I first actually
drew a face on my palm,
like a smiley face and a nose.
It detected the face on my palm.
So when that happened, I was like, okay,
anything is up for grabs.
So I pull up the white mask,
and I didn't even get it
all the way over my face before it was detected.
- Wow.
- Had some questions, right?
So that, as you can see,
that led to some things.
But that is how it got started.
In this place where I was
exploring my creativity,
there was nothing about bias
or inequality in my mind.
It was more so, how can
I have fun with tech?
- I think it's just so amazing
and interesting that something
that you were just trying
to have fun with unlocked
this bigger world
and also your platform and
influence on the space.
We have been talking a lot about
how we bring everyone along
and bridge some of the fear gap with AI.
And you talk a lot about the
very real fears we should have,
but how do you think we should
think about bringing everyone
along and unlocking the power
and the potential of AI?
- Yeah, I think it's really
important to understand
that the power and
potential of AI is the power
and potential of us. Of individuals.
And it's so easy to think
of AI or tech as the savior.
I'll be honest, part of why
I got into computer science,
I was like, great robots,
algorithms, I got this.
I was not looking into that; again, if
not for this side quest
that became the
Algorithmic Justice League,
I wouldn't have been.
And in some ways I really
wanted to believe the fiction.
And also being from
a marginalized background,
you could be anybody online.
I had my little web design, you know,
little web design business in high school.
It was always "Mr. So and so, thank you."
They didn't know who it was
and things of that nature.
And so for me, I really wanted to believe
I can participate and it's all
about merit and showing up.
And in some ways, that
was my story, right?
Going to Georgia Tech, having
all of these scholarships,
but when I had that
encounter with the white mask,
it felt a bit demoralizing,
because the field that I loved
didn't necessarily always
love me back.
And so I really had to confront that.
And in that moment it
would've been easy to step back.
And instead I leaned
in with curiosity to say,
I have some suspicions as
to why this is happening,
but my opinion is not research.
And so actually taking the time
to use the lived experience
that brought the insight
to then get to the empirical evidence
that could be published, so others
who might not have the
same experiences that I do
or many of you have, can see
what other people are going
through or where the
limitations are currently
so we can actually reach our aspirations.
And so it made me realize that
the very things people tried
to use against me were really my superpower
for changing the industry.
- I love that. I'm sure a lot
of people can relate to that.
And I want to unpack the tech neutrality,
the lie of tech neutrality.
And for those who don't really know
what did you discover
about bias and technology?
- Yes. After I found myself
coding in whiteface,
literally at the epicenter
of innovation. Here I am at MIT,
mecca of tech; what is going on?
And so I put on my research hat,
and you mentioned the TED Talk earlier,
I had posted a TED Talk
where I gave an example
of my friend and my face.
You see my friend, she's lighter skinned,
(she's Chinese American) face
detected. My face? Not detected
until I put on the white mask.
So that was already out there.
And I thought, you know what?
People might want to check my claims.
So I started testing my
TED profile image on AI demos,
and some of these AI demos
that did facial analysis
of some sort, or classification,
didn't detect my face.
And so this gave me an insight.
I was like, huh, is it just my face?
And that led to my MIT
master's thesis work,
which was called Gender Shades.
And the whole point of
that research was to say,
do AI systems read
different faces differently?
Across gender, across skin
type, across the intersection.
And we saw in some cases, with some
of the big tech companies we tested,
there was flawless performance
for the lighter skinned males,
the pale males. Flawless, right?
For darker skinned women like myself,
and also the faces of
the Women of Wakanda, which I tested as well,
we saw misidentifications,
misclassifications.
And so that was the
research I became known for
because at the time when
the Gender Shades paper was
published, it actually showed
the largest disparities in
accuracy for commercially
sold AI products.
And we're talking about all
these tech companies you've
probably heard of sitting
in your pocket. Yeah.
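The evaluation idea behind Gender Shades can be sketched in a few lines: instead of reporting one aggregate accuracy number, compute accuracy for each intersectional subgroup (skin type by gender), so a failure on one group cannot hide inside the average. This is a minimal illustration with made-up data, not the actual Gender Shades code, benchmark, or results.

```python
# A minimal sketch of a disaggregated audit: accuracy per subgroup,
# including the intersection, rather than one aggregate number.
from collections import defaultdict

def disaggregated_accuracy(records):
    """records: iterable of (skin_type, gender, predicted, actual) tuples."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for skin_type, gender, predicted, actual in records:
        group = (skin_type, gender)           # the intersection
        totals[group] += 1
        correct[group] += (predicted == actual)
    return {group: correct[group] / totals[group] for group in totals}

# Hypothetical example data, shaped like the disparities described above:
# near-flawless for lighter-skinned males, frequent misgendering
# for darker-skinned females.
sample = (
    [("lighter", "male", "male", "male")] * 98
    + [("lighter", "male", "female", "male")] * 2
    + [("darker", "female", "female", "female")] * 65
    + [("darker", "female", "male", "female")] * 35
)
print(disaggregated_accuracy(sample))
```

Printed side by side, the per-group numbers make the disparity visible in a way a single overall score never would.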
- Well, we won't shame
any of them, but you know,
thank you for pointing it out.
- Well we believe in naming
and changing, not naming and shaming.
And one of the
biggest lessons for me in
that research was seeing
those companies change
their approaches and
change their product launches
and all of that in
response to the research.
- I think that's so inspiring.
Yes, if you want to clap, go ahead.
I think that's so inspiring.
And a message we really
want our audience to hear is
that when you have a place at that table,
you can actually change
and influence the future.
So I think that is a really
inspiring part of your story.
And you mentioned this, the
Algorithmic Justice League.
I wanna learn more about
how that came to be.
What was the origin story?
- I mean, it sounds cool,
but DC Comics did try to sue, so
I like to couch it in the
legacy of justice leagues
that came before the fictional one.
But AJL was articulated
in 2016 as part of
that TED Talk, because I thought,
oh my goodness, we are
encountering what I now
call the coded gaze, right?
And so how many have heard of
the male gaze, the white gaze,
the post-colonial gaze? Okay.
To the lexicon,
we add the coded gaze.
And it is very much a reflection
of who has the power (us)
to shape technology with our preferences,
with our priorities,
what we think matters,
what we think is cool,
what we think is worthy.
And also with our prejudices
known or unknown, right?
- I love that. I love that.
Well, you also on that similar talk track,
you talk a lot about how
we need to address the lack
of women and people of
color in STEM fields
and why it's so important
for us to be represented.
And we talk at Salesforce
a lot about this concept
of representation matters.
How would you suggest someone get started?
- For me, I had great role models
and the research shows how
important role models are.
So I am the daughter of
an artist and a scientist.
So being a poet of code is probably not
so surprising when you know that.
And my dad's a professor
of medicinal chemistry
and pharmaceutical sciences.
So he was actually using neural nets
for computer-aided drug
discovery when I was yea high.
So I would go to his office,
he was a professor at Ole Miss,
he was calling it Olé Miss.
So you can tell we were
immigrants coming in.
And he would let me feed cancer cells.
And he wanted me to get into,
you know, the chemistry side.
But I saw the computers
and I heard the dialect.
And then my dad was
always entrepreneurial,
but he had a full-time
job and kids, I had time.
So he would get all of these
computers for the house
and I would just be the
beneficiary playing around,
networking with the computers
and things of that nature.
And so that's what got me into tech.
But it was also that both
of my parents always made
me feel that if I wanted
And for me not knowing,
I think this orientation
that ignorance doesn't mean stop.
Ignorance is an invitation
to learn more, was so helpful.
You don't know it at the time,
you're just in wherever you are
doing whatever you're doing.
But that truly made me
feel like, okay, this is possible.
Let's see how far we can go.
- I love that. I love the
power of having role models
and also leaning into
asking questions rather
than running away from it.
Absolutely. So that's fantastic.
And I wanna talk to you about
this intersection of art
and technology because you
have such a beautiful way
of bridging that divide or what
people perceive as a divide.
And you write beautiful poetry as well.
So tell us a little bit about that.
- Oh yes. So by the time I got to MIT
and I had done the Gender Shades work,
which I mentioned a little bit earlier,
that was my third degree, coming
from a West African family.
Anyone heard of tiger parents?
We have lion parents, eh?
So anyhow, I was like,
cool, Rhodes Scholarship, Fulbright
Fellowship, I should be good.
Right? Dad calls: where's
that PhD? PH what?
I only applied to one place; I
got in. So then I continued.
So by the time I'm working on
my fourth degree, I wanted to,
I knew I could do the
academic steps, right?
Do the research, publish the
thing, get the credential,
call your dad, understand
it's not enough, do it again.
like continue, right?
- Yeah. There's a pattern.
- I figured that one out. So I thought, huh,
my mom's an artist, so what
if the PhD's a poetic PhD?
That was my challenge to myself.
And I remember even talking
to someone who won't be named
and literally being laughed at
like, what do you even mean?
What? What does that look like?
And so what that ended up
looking like is a poem called
"AI, ain't I a woman", which was inspired
by Sojourner Truth's 19th
century speech in Akron, Ohio.
That's called the "Ain't
I a woman" speech.
She likely wouldn't have said, ain't I?
And so this is also its own projection,
but that's what the
speech became known as.
The reason I wanted to explore it was that when I did
the Gender Shades research,
the women who looked like me had the
worst results across the board.
I wanted to move it beyond
performance metrics.
Yes, you can see the accuracy disparities,
but what does it look like? Right?
So when I say, can machines
ever see my queens
as I view them? Can machines
ever see our grandmothers
as we knew them? That brings
it to another kind of space.
Or if I say the Amazonians peeked
through windows blocking
deep blues as faces,
increment scars, old burns,
new urns, collecting data,
chronicling our past,
often forgetting to deal
with gender, race, and class.
Again, I ask, ain't I a woman?
It takes it to a different
place in the conversation.
And for me, one of the most
powerful experiences I had
with "AI, ain't I a woman", Megan Smith,
who's former Chief Technology
Officer of the United States,
she was invited on the
EU global tech panel
and she did something that's
so important.
She pulled up a chair for
me; she said, it's great
that I'm here, but look
at who's coming up.
And in that space, that was
when I first showed the piece.
And later as we continued those
conversations, it was shown
to all of the EU defense ministers.
- Wow.
- Ahead of conversations on lethal
autonomous weapons.
So you're seeing some of
the most powerful tech
missing the face of Oprah
Winfrey, of Michelle Obama.
If you can't get these faces,
we might not wanna put
the drone with the gun
and the facial recognition out there.
Right? And it, in those
moments, it made me see
how powerful the storytelling piece was
and how powerful the poetry was.
And to be honest, I had been
afraid to bring out that part
because especially as somebody
in a marginalized position in
the tech industry, I was like,
ha, look at my credentials.
Bonafide certified. Right? You know, all...
- You do have a lot of them though.
- So true. Lion parents, lion
parents, these things.
Hopefully I've arrived;
I got the honorary degree.
Two honorary degrees, you'd
think I'd have arrived.
Nah, they're still calling:
what's the next degree?
There's still something else.
There's still something else.
But those experiences just really made me
embrace that part of myself.
- That's amazing, because I
think we've seen throughout
history you've had to have both: we have
to have the innovation and
the technical understanding,
but also the storytelling,
poetry, arts is the way
that you influence change as well.
So I love that those two
exist together for you.
- Absolutely. And I was
afraid to pull it out
because I thought it would
somehow undermine my credibility.
But when I was testifying in Congress,
I would use a screenshot
from "AI, Ain't I a Woman", right?
The one where Oprah's
face is being misgendered.
and seeing how people responded to that,
it left no question about
the power of storytelling
and the power of art in my own practice.
And I just wish I had embraced that part
of myself sooner in my work.
- That's incredible.
And I think kind of on
that same thread, you've done a lot
of work in the beauty space as well.
And I know you have
your Decode the Bias campaign.
I'd love to hear a little
bit about how that started.
- So again, when I got
into computer science,
I never expected to be involved in anything
beauty related, to be honest.
Biggest tomboy,
skateboarder, pole vaulter,
like didn't really wear makeup
until more recently kind of thing.
Right. You know, so this
is not what I expected.
So when I was asked to do this
campaign, I actually said,
I don't think I'm the one.
I was like, this looks
great, blah, blah, blah,
but maybe you want somebody else.
And they said, no, we
actually really want you
to be the face of this campaign.
I was like, okay, you can use my face,
but you gotta use my brains too.
So it ended up
being both an ad campaign,
think Vogue, Allure, Good
Morning America, all of that,
which was really important in terms
of cultural representation.
But it was also paired
with an algorithmic audit
because they came to the
Algorithmic Justice League.
So that was going to be a part of it.
And what was so important to me about
that particular algorithmic
audit was I wanted to show
that we're talking about one
of the largest consumer
goods product
companies in the world, that
it was possible for a company
of that size to do an audit,
make those results public, which is rare,
and also make some actual commitments.
So through that process, they came up
with the consented data promise,
and it was actually inspired
by their "no skin retouch" promise.
So the moment I decide I'm going
to model this is when they
want to make commitments
to not airbrush, which is all good.
You know, truth in
advertising, I get it.
Yay, integrity. But I won't
lie, it was pressure.
I was working out, I was sleeping on time,
I was plant-based, water, no soda.
And I was doing the things
and I was like, oh, this
is kind of why you want
that pledge, you know? So when you have
that pledge, when you have
that commitment, it puts
you, it put me on my toes.
Right? And so using that, I said, well,
here's another way, right,
of raising the standard with this audit.
So I was really surprised
because basically when they came,
I was like, do you know what I do though?
I audit things, I show problems.
Headlines are shared. Do
you really want to do this?
They said, yes. If I find
something you're not able
to correct, would you
shut down the system?
They said yes. I was like,
oh, this is interesting.
It's interesting. Okay,
alright, let's go in.
They had not talked to legal
- Only negotiate
with the marketing team.
We had great storytelling. We
had great storytelling, right?
But it actually ended up working well.
And we continue to
collaborate to this day,
but it was really important
for me also in this process
to see what are the real
challenges when you're trying
to build something for consumers.
Because in the academic space,
you might look at the AI models,
you might look at various data sets,
but you don't necessarily always get
to see systems in production,
the behind-the-scenes conversations
about the trade-offs,
the deadlines, what you
committed to, and why
it might be hard to walk back now,
but yes, we really wanna
adhere to our values,
kind of thing. So it
was really helpful in that way.
And then also, as somebody who,
as a little girl, was made
fun of for my dark skin
and so forth, to be in a beauty
campaign that was also
about computer science meant a lot.
And I remember the morning
when I was doing my first
satellite media tour, I had
this amazing glam squad team,
and one of the ladies, she was Nigerian,
probably late twenties or early thirties,
and she looked at the ad,
it's a whole spread in September Vogue,
which, if you're in the fashion world,
you know is the big issue, right?
And she started crying,
and she said, I know this
is for the little girls.
- That's incredible. And
I, it really speaks to
what we've talked about today,
which is when you center
equality in the work,
it's not just the right thing to do,
but it also is better for business.
And it is also helping
our future generations.
I think we've all had that experience
of not seeing ourselves reflected in,
whether it's the magazines
or the technology around us.
And in listening to you, I
think what's so inspiring is
that you have an optimism.
And I know for a lot of folks who
do this work, or DEI work,
or anything adjacent to
understanding how bias shows up,
it can feel heavy sometimes.
but what keeps you optimistic,
and how do you think AI is
going to shape the world
around us, and how can we
actually shape how it does that?
- I mean, if
you're named Joy, it's hard
to just be down all the time.
It's like a self-fulfilling
prophecy, you know?
When I would be mopey, my mom
would always kind of joke
with me, like, how can
we name you happiness?
And you have a sad face, you know, sort of
thing.
- How do you stay optimistic?
- Oh, okay. So I remember,
this was probably 2020, a hard time
to stay optimistic, right?
Having a pandemic and all that jazz.
And I was really trying
to figure out, do I want
to continue the
Algorithmic Justice League?
Do I want to explore other things?
And the mother of a
7-year-old reached out,
and it turned out Aurora (seven
years old) had watched the
documentary, the adult
documentary "Coded Bias", four
times, more than my dad at the time.
- You should talk to him about that.
- He's like, I only need
to see it once. What?
There are multiple versions. Anyhow.
And she had a class project,
a living wax museum,
where they had to select
a historic figure.
I didn't know how I felt
about being historic.
- I love that.
- But she had her white glasses, her red outfit,
she looked like the cover, right?
And then she had this
video, she's like, I'm Dr.
Joy and I have a bunch of degrees.
And then she started listing them out.
Also, she had done her homework;
she knew I was a former pole vaulter.
- Okay. - I'm like, she's doing
better than most journalists.
But just her enthusiasm and
being at such a young age
and wanting to know what she,
at seven years old, could do for algorithmic justice.
And now in the space
where we see generative AI
and we see that AI natives
are starting to crawl.
When I think of that next
generation, every time I'm down,
I just have to remember
that little 7-year-old.
Or I'm like, okay, I gotta,
I gotta do it for history,
as her historic wax figure, kind of thing.
So that definitely keeps me hopeful.
- That's amazing. And as we
think about shaping the future
of the world with AI, why is it important
to work together across
nonprofits, businesses, and government?
Why are all those roles important?
- Yeah, we all have perspectives
the others don't have.
That was one of the things I
really enjoyed about working on that campaign.
From the outside, you might
have certain critiques,
but you don't necessarily know all
of the internal challenges.
And sometimes when you
invite critical friends,
they can say the things that your colleagues might be shy of saying.
I think it's really important
when even thinking about this
space of algorithmic audits
to understand the difference
between first, second, and
third party audits, right?
So first party is checking your own homework,
you might be a little lenient,
you know. Second party,
have your friends check your
homework; you hired them,
but you know, they're still a little like,
give me that extra credit.
And then you have third
party, which is more of
what the Algorithmic Justice League does.
And this might be the proctor who,
I'm not saying they're not
invested in your future,
but they're not incentivized
in the same way, right.
And so that's why I think, in terms of
where the public sector
can come in, when it comes to accountability,
if you look at things like
the National Institute of Standards
and Technology's AI Risk Management Framework,
or the AI Bill of Rights,
these are external proctors, right?
Or critical friends as well.
Again, because we all have perspectives
the others don't have.
- And when you think about the
audience, we have folks
who build with AI
and some who are learning
to use AI every day.
Where does bias show up
in that day-to-day use,
or when you're designing AI?
- Oh, that's such a great question,
because one of the things we do
with the Algorithmic Justice
League is we invite people
to share their experiences.
We call it Bias in the Wild.
And yeah,
it's wild out here.
People are discovering things.
And where does this bias come from?
I mean, you can think about
an entire AI life cycle.
So the design, this goes
back to the coded gaze:
whose priorities are being
put in, whose preferences,
and who might not have been at the table.
Right? When you created
that system, you intended
it to be good, but now your AI that's meant
to detect skin cancer
doesn't work for the majority
of the world, you know,
that has darker skin,
and that kind of thing. Yeah.
- And so we have this audience
here that's so excited,
but also tentative about the future of AI.
What advice would you give them?
- Well, I will say there
are reasons to be tentative,
as you all are quite aware.
Something that's been
weighing on me lately is
this headline of a 14-year-old boy
who died by suicide
shortly after having an ongoing back
and forth with a chatbot
that had been created
to form emotional connections
with people, right?
It's an "AI companion",
I put that in quotes,
but knowing that there's
so much loneliness,
and a void that is typically
filled by human connection,
when there's this lack
of connection, how easy it is to fall in.
And part of why it weighed
on me so heavily, not just that one story,
but it wasn't a lone
incident. In "Unmasking AI"
I start the introduction, I
talk about a man in Belgium
whose widow says he would
still be alive, right?
if not for this chatbot kind
of pseudo-relationship he'd created,
where the chatbot actually encouraged him to end his life.
And so it really started
making me think about
AI and delusion and AI and assistance.
So who likes "Finding Nemo"?
Let's take it to another
space. "Kung Fu Panda", right?
So when you're watching
something animated,
you understand that the reason
you're seeing movement is
because of how our eyes work, right?
And knowing how our eyes work,
we've developed mechanisms.
So animation looks smooth.
And then later we learned, okay,
you've got to add some motion blur,
and all of these things. We
know that there's a voice actor
behind the character.
It doesn't take away
from the entertainment.
You can still form attachments. I
really like Squirt, you know,
like all of those things.
But I think once we've
moved into AI systems,
sometimes people don't always
see the illusion, depending on how it's presented.
And so something I'd like
to challenge Salesforce
with Agentforce and others, is
how do we establish the piece of it
where we're very clear that it's AI?
Because I think, I hope, you
know, that the 14-year-old boy's
suicide is a warning
that we heed in terms
of how we design these systems
and what claims we make
about these systems.
Because so often the definition
of AI keeps shifting.
It used to be OCR, optical
character recognition.
Like, these are the toy examples
you learn when you're like,
okay, this is how we're going to learn AI.
Now you're like, cool,
the check, the numbers, that's not AI anymore.
And then it keeps continuing.
And so the stories we tell
ourselves about what AI is,
what it can do, and what it
can't do are really important.
And I think it's very
dangerous if we say AI
as a companion can solve
for loneliness, right?
When it's the human connections
that are needed in that case.
So that's something that's
been really presenting itself to me:
what are the ways in
which we make the illusion clear,
so we're not confusing humans
and machines in
potentially dangerous ways?
- Something that we need
to pay attention to.
And I know we have a lot of
teams that are focused on this.
And you mentioned a few times
that people have laughed at you.
They laughed at you about
the arts and technology.
- I thought it was funny.
- You are funny, you're very funny.
But I think when they laughed at you,
it was that they didn't really fully understand
or believe in what you saw for the world.
And even you've talked about with bias
and technology, people sort
of were critical of that.
Like, can technology really be biased?
- Now you tell me the robots are biased?
Oh my God, the search engine is sexist?
Come on, let 'em code. Let 'em
code. You know? Yeah, yeah.
- You've been there a few times,
I can tell. These
conversations are prevalent.
So what would you say to the audience here
who have big ideas when it comes to AI
who have thoughts of their own
or things they'd like to try?
What advice would you give them
to overcoming that judgment
or, you know, criticism
that people may have?
- The criticism will always be there.
Literally, I've
advised world leaders,
AI governance is on the agenda, right?
There's this whole summit
that's happening, right?
Even in this moment, there's
still people who will deny there's bias
or discrimination in these systems.
And I see it happen in a way that you have
to be very careful about.
Unlike when I started, you know,
and in the book I talk about
the very mean TED comments.
It's just like there's no,
there's no good reason
to read TED comments.
So going through that, where
we are in the conversation now,
I don't get the side eye
about bias and AI. Instead I get
the, oh, yeah, yeah, we,
you know, facial recognition, AI,
good stuff. Yeah, yeah. We
know. We know, we been down.
Actually, we always knew. Oh,
like, hold on, let's go back.
You didn't always know, but
we're here now. Okay, cool.
And I think it's really
important that we stay vigilant,
because sometimes there's the assumption
that you already know the story.
And in thinking you
already know the story, you stop looking.
So I remember, with some of the
tech companies that I audited,
we had different sorts of responses.
One response was: we
guaranteed nothing, so...
No response, use at your own risk.
That's harder to sustain now that we have legislation
and regulations coming out.
Another response was, thank you.
We already knew about this.
We released the model, we're fine.
So I decided to check
out their new models.
Literally, the first photo
I put in was a "Tech 35
under 35" photo, the one I did
to get on my dad's good side. 35 under 35.
I show that photo, I
mention it's 35 under 35,
because I was aged at 50, as well
as also being misgendered.
And this was after that
headline of "We fixed it", right?
And then I did "AI, Ain't I a Woman",
which is a spoken word piece,
but it's also a visual algorithmic audit.
And I show that company with
all of these misidentifications,
because they were so confident
that they had addressed the issue.
And it makes me think of a car recall.
You might have identified the issue,
but what are the models that
are still on the road?
And what are the new issues you
might have reintroduced?
And so it really made me understand
that when we're talking
about issues of bias
and issues of discrimination
and AI harm, it's not a product fix,
it's a process.
What's the process we put in place
so we don't kid ourselves into
a false sense of progress?
Even as I started my research
looking at facial analysis
technologies, facial
recognition technologies,
the gold standards at the
time were actually failing us,
because they were majority
male, majority lighter-skinned.
And so for people that look like me,
let's say you're less
than 4% of the dataset,
you could still get an
A, looking great, right?
And actually failing an
entire group of people.
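The arithmetic behind that point is easy to show. This toy example uses hypothetical numbers, not the actual benchmark figures: when a group is only 4% of a dataset, a model can fail that group badly while the headline accuracy still looks like an A.

```python
# A toy illustration of aggregate accuracy hiding subgroup failure.
def overall_accuracy(groups):
    """groups: list of (count, accuracy) pairs, one per subgroup."""
    total = sum(count for count, _ in groups)
    return sum(count * acc for count, acc in groups) / total

# Hypothetical benchmark: 96% of faces from the majority group at
# 95% accuracy, 4% from a minority group at only 30% accuracy.
benchmark = [
    (960, 0.95),  # majority group
    (40, 0.30),   # minority group, under 4% of the data
]
print(round(overall_accuracy(benchmark), 3))  # 0.924: still an "A" overall
```

The headline number reads 92.4%, even though the minority group fails seven times out of ten, which is exactly why disaggregated reporting matters.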
- Well, I think that so speaks
to why it's important for us to be at the table.
And for those in the audience
who are thinking about being at the table,
why would you say it's important?
- In terms of being at the table,
it's important to have a voice,
and to be cautious about
tokenistic representation.
When I talk about Megan
Smith expanding that table
for me at the EU Global Tech Panel,
the invitation wasn't come.
So we can say, we filled our quota.
The invitation was come
because your insights matter,
and they're valuable here.
And not only are they valuable,
but we're going to listen.
And so I think as we're thinking
through decision-making
processes, and I've seen this time
and time again; so people talk about human in the loop.
I talk about human in the
last minute ethics loop.
You already did the thing,
you are already going to production.
Now you want to check that box.
Community engagement, right?
But if it's community engagement,
right at the end when you've
already answered the major
questions, that's where you end up
with it can be well-meaning,
but nonetheless tokenistic.
And so when we say we have a
seat at the table, I'm like,
Which side of the table? You know?
And to what end for sure.
The other thing that
I've learned, time
and time again: I can't talk
about the specific company
right now, but I'm doing some
behind-the-scenes work with one
of the biggest brands in the world,
where the intention is to be inclusive.
I immediately, just by
the experiences I've had,
saw issues in their product lines.
You know, this is a company with
tens of thousands
of people around the world.
Just like when I did the
facial recognition technologies
test: Black people had been
joking about only the whites
of their eyes and their teeth being seen.
Like, it wasn't like, ooh,
no one ever knew this, right?
What I did was I put numbers behind it,
published the research, had
the institutional clout of MIT
and all of those things to
give it a particular spotlight.
But it wasn't news, right?
It was more so verification
and opening it to others.
And so it made me think, I'm
sure there are many people
inside the company who knew this.
Why weren't their voices heard?
Or why did they think
they couldn't contribute?
Or sometimes, what we'll
learn with the Bias
in the Wild stories, they did speak up.
Yeah, yeah, yeah, we'll
get to it.
Yeah, yeah, yeah, we already knew.
So it's also when people are at the table,
will their voices be heard?
Will their insights
be taken into decision making?
Because again, human in the
last-minute ethics loop:
we already know what we're
going to do.
- I love that. And it
completely speaks to the fact
that we need to focus
on diversity, equity,
inclusion within the workplace
as it's connected to the work
that we do outside of the
workplace and with technology.
- And to that, I like to
talk about the exclusion overhead.
One of the Bias in the Wild reports
we got was from somebody
who worked at a large tech company
that used facial recognition internally
for some of their
conference room video tech.
And it turned out that many
of the conference rooms
didn't work that well for them.
There was one that had better
lighting and other things.
And so they had to wait until
that particular conference
room was available to be able
to use it like everybody else.
And this is an example of
what I call the exclusion overhead.
It wasn't that the system
didn't work for them at all,
but how much did you
have to contort yourself
or put on a white mask,
dehumanize yourself, to be seen
as quote unquote human by a
system that wasn't created with you in mind?
And so that's something I think
we can all think about when
it comes to, internally, what
are the processes we've had,
and are we increasing
the exclusion overhead or reducing it?
And again, it might not be so obvious
because generally people
are scrappy and resourceful.
I got my project done right?
I had to go get a white mask to do it,
but I managed to make it happen.
So I think really making
space to truly hear people,
and reducing the exclusion
overhead, are steps in the right direction.
- Well, it is always such an
honor to get any time with you.
I feel like I learned so much
in everything that you say,
in the stories that you tell.
I know you have a poem
that you'd like to share with us.
- Yes. So, all right. Do
we wanna hear a poem?
- Okay. Yeah. See, I don't
hide my poetry anymore.
Very excited. We love this.
So this is from the epilogue
of the book, Unmasking AI,
and the epilogue is actually
called Seat at the Table.
Perfect. And in this case,
it was the last chapter.
I wrote it in San Francisco
after being invited to an AI roundtable,
and I was seated right next
to Governor Newsom.
I kept trying to look over
that perfectly coiffed hair
so I could see the President.
All right, so this is called
"Unstable Desire".
Where be the guardrails
now.
Hallucinations taken as prophecy
destabilized
on a middling journey
to outpace, to open chase, to claim
supremacy, terrain indefinitely.
Haste and pace, control altering deletion,
unstable desire remains undefeated.
The fate of AI still
uncompleted. Responding
with fear, responsible AI beware.
to believe our humanity
is more than neural nets
and transformations of
collected muses. More than data
and errata, more than
transactional diffusions,
are we not transcendent beings
bound in transient forms?
Can this power be guided with care?
Augmenting the light alongside
economic destitution?
Temporary band-aids cannot hold
the wind when the task ahead
is to transform the
atmosphere of innovation.
The Android dreams entice the
nightmare schemes of vice.
- Yes, Dr. Joy, everyone. Thank you. Thank you.
Amazing. Thank you so much
for being here with us,
and thank you everyone
for dedicating a half day
to talk about AI and equality.
We do have a networking event after this,
and please take away
some of these insights
and build a future that's
truly for everyone.
- All good? All right.
- Thank you. Thank you everyone.