
Facial recognition technology is finally more accurate in identifying people of color. Could that be used against immigrants?

Microsoft this week announced its facial-recognition system is now more accurate in identifying people of color, touting its progress at tackling one of the technology’s biggest biases.

But critics, citing Microsoft’s work with Immigration and Customs Enforcement, quickly seized on how the improved technology might be used. The agency contracts with Microsoft for a set of cloud-computing tools that the tech giant says is largely limited to office work but can also include facial recognition.

Columbia University professor Alondra Nelson tweeted, “We must stop confusing ‘inclusion’ in more ‘diverse’ surveillance systems with justice and equality.”

Today’s facial-recognition systems more often misidentify people of color because of a long-running data problem: The massive sets of facial images they train on skew heavily toward white men. A Massachusetts Institute of Technology study this year of the face-recognition systems designed by Microsoft, IBM and the China-based Face++ found their accuracy in classifying a person’s gender was 99 percent for light-skinned males and 70 percent for dark-skinned females.
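
The disparity the MIT researchers measured comes from evaluating accuracy separately for each demographic subgroup rather than averaging across the whole test set. The sketch below is a minimal, hypothetical illustration of that kind of disaggregated audit in Python; the sample records and group labels are invented for demonstration and are not data from the study.

```python
from collections import defaultdict

# Hypothetical audit records: (predicted_gender, true_gender, subgroup).
# These values are invented for illustration, not taken from the MIT study.
records = [
    ("male", "male", "lighter-skinned male"),
    ("male", "male", "lighter-skinned male"),
    ("female", "female", "darker-skinned female"),
    ("male", "female", "darker-skinned female"),  # a misclassification
]

def accuracy_by_group(records):
    """Return classification accuracy computed separately for each subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for predicted, actual, group in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

for group, acc in accuracy_by_group(records).items():
    print(f"{group}: {acc:.0%}")
```

Reporting one overall accuracy number would hide exactly the gap this per-group breakdown exposes, which is why the study's authors evaluated the systems this way.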


In a project that debuted Thursday, Joy Buolamwini, an artificial-intelligence researcher at the MIT Media Lab, showed facial-recognition systems consistently giving the wrong gender for famous women of color, including Oprah, Serena Williams, Michelle Obama and Shirley Chisholm, the first black female member of Congress. “Can machines ever see our grandmothers as we knew them?” Buolamwini asked.