60 Minutes and (En)Coded Bias

Recently, CBS's 60 Minutes aired a segment on racial bias in facial recognition technology, referring to a December 2019 NIST study as a 'landmark study' while failing to mention the groundbreaking research on which it was based.

Dr. Dédé Tetsubayashi|7 min read

Gut Reaction

Droplets of water building until the cup overflows. You spend time. Spend effort. And when you see a final product, without your name—without your history—you wonder if you're the one who's crazy.

Recently, CBS's 60 Minutes aired a segment on racial bias in facial recognition technology, referring to a December 2019 National Institute of Standards and Technology (NIST) study as a 'landmark study' while failing to mention the groundbreaking research on which the NIST study was based: research conducted by Joy Buolamwini, Dr. Timnit Gebru, and Inioluwa Deborah Raji, AI-research pioneers and Black women.

Ms. Buolamwini, who spent hours prepping the 60 Minutes team, was given no credit for her work, nor was she acknowledged as the one who made the work groundbreaking. She was erased from the narrative while her work and knowledge were credited to what she refers to as 'Pale Males.'

Appalled, I reshared a LinkedIn post by the Algorithmic Justice League, as well as one by Ms. Buolamwini herself, on my personal LinkedIn profile with the following comment, hoping to inform others of how Black women continue to be treated as disposable within the tech community:

'Pay attention: this is what misogynoir looks and feels like'

The Erasure of Black Women in AI Research

Ms. Buolamwini, who founded the Algorithmic Justice League, first published her findings in 2018 in Gender Shades, research that began as her MIT thesis; her work was later featured in the documentary Coded Bias. Her research, along with that of Dr. Gebru and Ms. Raji, laid the foundation for understanding how facial recognition systems fail people of color, particularly Black women.

The NIST study that 60 Minutes called 'landmark' built upon this earlier work. Yet the segment made no mention of these researchers. This pattern of erasure—where Black women's contributions are overlooked while their work is absorbed into mainstream narratives—is all too common in tech and academia.

This is not an isolated incident. It's part of a pattern: Black women do the hard work of identifying bias, building datasets, running experiments, publishing findings—and then watch as institutions credit themselves or white researchers for 'discovering' what Black women have been saying all along.

Why Attribution Matters

Attribution isn't just about giving credit where it's due. It's about ensuring that the people most affected by biased systems are recognized as the experts on those systems. It's about challenging who gets to be seen as an authority. It's about honoring the labor and risk that marginalized researchers take on when they challenge powerful institutions.

When we erase the contributions of Black women in AI research, we perpetuate the very dynamics that lead to biased systems in the first place: the assumption that certain voices matter more than others, that certain perspectives are more legitimate, that certain people are more worthy of recognition.

Ms. Buolamwini risked her career to call out tech giants. She faced skepticism, dismissal, and hostility. And now, when her work is finally being taken seriously, she's being written out of the story. This is how institutional racism works in practice.

What We Can Do

Cite Black women. Credit Black women. Amplify Black women. When you see erasure happening, name it. When media outlets fail to credit the researchers whose work they're reporting on, call them out. Share the original sources. Tell the full story.

And if you're in a position of power—if you're hiring, if you're funding, if you're publishing, if you're producing—do the work to ensure that credit goes where it's due. The people who did the hard work deserve recognition. History should remember who led the way.

About Dr. Dédé Tetsubayashi

Dr. Dédé is a global advisor on AI governance, disability innovation, and inclusive technology strategy. She helps organizations navigate the intersection of AI regulation, accessibility, and responsible innovation.
