One of the biggest misconceptions about tech is aptly summarized by Hessie Jones: '[Tech is] afforded a supremacy that humans feel comfortable not questioning. And yet, technology isn't just a neutral tool.'
As AI/ML becomes rapidly enmeshed in our daily lives, so do discussions of ethics, and the lack thereof, in tech. The dangers of the myriad intersectional biases in tech design have made their way from the confines of esoteric spaces into broader mainstream discussions of diversity, equity, inclusion, and ethics.
What Is Intersectionality?
Coined by legal scholar Kimberlé Crenshaw in 1989, intersectionality recognizes that people hold multiple identities simultaneously, and that the intersection of these identities creates unique experiences of privilege and oppression. A Black woman doesn't experience racism and sexism separately—she experiences them as interlocking systems that create a distinct form of discrimination.
In tech, this means that systems can fail in ways that are only visible when you look at the intersection of identities. Facial recognition might work fine for white men and white women separately—but fail specifically for Black women. A hiring algorithm might not discriminate against women or Black candidates individually, but discriminate against Black women specifically.
This is why diversity initiatives that focus on single dimensions—'hire more women' or 'hire more people of color'—often fail. If you hire white women and Black men, you might check your diversity boxes while still creating an environment that's hostile to Black women. Intersectionality demands that we look at the full picture.
Why Tech Needs Intersectionality
Technology scales bias. A biased hiring decision affects one person. A biased algorithm affects millions. When we build systems without intersectional analysis, we bake in discrimination at scale—discrimination that's often invisible to the people building the systems because they don't share the identities being harmed.
The Gender Shades project found that commercial facial recognition systems had error rates of less than 1% for lighter-skinned men, but up to 35% for darker-skinned women. This isn't a bug—it's the predictable result of training data and testing protocols that didn't account for the intersection of race and gender.
Designing with Intersectionality
Although many big tech companies are hiring in-house DEI consultants and broaching the topic of more equitable design, true intersectional thinking requires more than diverse hiring. It requires asking whose experiences we're centering, whose data we're training on, and whose voices have power in the design process.
It means disaggregating your data. Not just 'how does this system perform for women?' but 'how does it perform for Black women, Asian women, disabled women, trans women?' It means user testing with people at the intersections. It means giving power to the people most likely to be harmed by your systems.
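To make disaggregation concrete, here is a minimal sketch of what evaluating a model across intersectional subgroups might look like. The field names (`race`, `gender`, `predicted`, `actual`) and the toy records are hypothetical, purely for illustration; the point is that accuracy is keyed on the *combination* of identities, not on each axis alone.

```python
from collections import defaultdict

def disaggregated_accuracy(records):
    """Compute accuracy per intersectional subgroup.

    `records` is a list of dicts with hypothetical keys
    'race', 'gender', 'predicted', and 'actual'.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for r in records:
        # Group on the intersection, not on race or gender alone.
        group = (r["race"], r["gender"])
        totals[group] += 1
        if r["predicted"] == r["actual"]:
            correct[group] += 1
    return {g: correct[g] / totals[g] for g in totals}

# Toy data: overall accuracy is 75%, and accuracy sliced only by
# race or only by gender looks reasonable—but the intersectional
# breakdown shows the system failing entirely for one subgroup.
records = [
    {"race": "white", "gender": "man",   "predicted": 1, "actual": 1},
    {"race": "white", "gender": "woman", "predicted": 0, "actual": 0},
    {"race": "Black", "gender": "man",   "predicted": 1, "actual": 1},
    {"race": "Black", "gender": "woman", "predicted": 1, "actual": 0},
]
print(disaggregated_accuracy(records))
```

A report that averaged over all women, or over all Black users, would hide exactly the failure mode this breakdown exposes.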
And it means recognizing that you can't design for experiences you don't understand. This is why diverse teams matter—not as tokens, but as experts. People who live at the intersections understand failure modes that others miss. Their knowledge isn't optional; it's essential for building systems that work for everyone.
