Ethical AI and Smart Lock Systems

Dr. Dédé Tetsubayashi | 9 min read

I recently sat down with a group to discuss the pros and cons of emerging technologies from my perspective as both anthropologist and ethical technologist; specifically, smart lock systems. For those who may be unfamiliar with them, smart lock systems are a relatively new technological advancement that uses image and facial recognition software to grant users entry to businesses and residences, eliminating physical keys. The technology is believed to increase safety and ease of accessibility.

The Promise

No more misplaced or lost keys, no costly locksmith services, and the ease of granting temporary and/or limited access to vendors, guests, and service providers. These are the pros, and admittedly they are cost-efficient, time-saving, and an added convenience for some, perhaps many. For people with certain disabilities, keyless entry can be genuinely liberating.

The Myth of Infallible AI

There is a misconception that AI is godlike, infallible even; but AI is the product of wholly fallible human design. Coded into its complex algorithms are the same biases we navigate in our day-to-day human experiences. Laypersons are more apt to buy into a science they don't fully understand, and this leaves an already over-policed, vulnerable segment of the population at heightened risk of unprecedented, unmitigated harm.

When Technology Meets Racial Bias

Within the past month, a Black sixteen-year-old was seriously wounded after being shot by a White homeowner for ringing his doorbell in error. It is reasonable not only to suspect but to assert that the homeowner reacted with such heightened hostility to this error and minor inconvenience because of his own preconceived notions about race.

What, then, do we do when there is a digital intermediary—a smart lock system—that carries those same racial biases in its code? We've already established that facial recognition technology misidentifies Black faces at dramatically higher rates. Now imagine that same flawed technology controlling who can enter their own home.

The Surveillance Problem

Smart locks that use facial recognition create databases of who enters buildings and when. They can be used to surveil tenants and employees. For marginalized communities already subject to over-surveillance, this adds yet another layer of monitoring to daily life.

We must ask: Who controls the data these systems generate? What happens when the technology fails for certain users more than others? How might landlords and employers misuse this surveillance capability? What recourse do tenants have when they're locked out of their own homes by biased technology?

Convenience Is Not Enough

Convenience is not a sufficient justification for building pervasive surveillance into our homes and workplaces. Before deploying these technologies, we need robust discussions about consent, data ownership, algorithmic accountability, and the differential impacts on vulnerable communities.

As with all emerging technologies, we must ask not just 'can we build this?' but 'should we build this, and for whom?' The answer, as always, requires centering those most likely to be harmed—not just those most likely to benefit.

About Dr. Dédé Tetsubayashi

Dr. Dédé is a global advisor on AI governance, disability innovation, and inclusive technology strategy. She helps organizations navigate the intersection of AI regulation, accessibility, and responsible innovation.

