When the Robot Doesn’t See Dark Skin

By Joy Buolamwini

When I was a college student using A.I.-powered facial detection software for a coding project, the robot I programmed couldn’t detect my dark-skinned face. I had to borrow my white roommate’s face to finish the assignment. Later, working on another project as a graduate student at the M.I.T. Media Lab, I resorted to wearing a white mask to have my presence recognized.

My experience is a reminder that artificial intelligence, often heralded for its potential to change the world, can actually reinforce bias and exclusion, even when it’s used in the most well-intended ways.

A.I. systems are shaped by the priorities and prejudices — conscious and unconscious — of the people who design them, a phenomenon that I refer to as “the coded gaze.” Research has shown that automated systems that are used to inform decisions about sentencing produce results that are biased against black people and that those used for selecting the targets of online advertising can discriminate based on race and gender.
