AI researchers want to study AI the same way social scientists study humans

By Karen Hao 

Much ink has been spilled on the black-box nature of AI systems—and how it makes us uncomfortable that we often can’t understand why they reach the decisions they do. As algorithms have come to mediate everything from our social and cultural interactions to our economic and political ones, computer scientists have responded to rising demands for explainability by developing technical methods to understand these systems’ behaviors.

But a group of researchers from academia and industry now argues that we don’t need to see inside these black boxes in order to understand, and thus control, their effect on our lives. After all, these are not the first inscrutable black boxes we’ve encountered.

“We've developed scientific methods to study black boxes for hundreds of years now, but these methods have primarily been applied to [living beings] up to this point,” says Nick Obradovich, an MIT Media Lab researcher and co-author of a new paper published last week in Nature. “We can leverage many of the same tools to study the new black box AI systems.”
