Project

EqualAIs: Facial Recognition, Adversarial Attacks and Policy Choice

This 2018 AI and Governance Assembly project, EqualAIs, was a multifaceted investigation into the technical, policy, and societal questions raised by increasingly capable facial recognition software and its unprecedented capacity for automated, real-time identification and tracking of individuals.

Our team demonstrated the technical feasibility of adversarial attacks on facial recognition by building a working prototype that successfully attacked the facial recognition classifiers behind major commercial APIs. We filed a FOIA request, with partners including the ACLU, for information about US Customs' use of facial recognition algorithms, and we sought to encourage public dialogue about facial recognition policy choices through a series of public talks and the creation of open-source resources.
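The prototype's details are not given here, but the general idea behind such adversarial attacks can be illustrated with the Fast Gradient Sign Method (FGSM): nudge each input pixel slightly in the direction that increases the classifier's loss, producing an image that looks nearly unchanged to humans but fools the model. Below is a minimal, self-contained sketch using a toy differentiable classifier (logistic regression on a flattened "image") rather than a real facial recognition API; all names and parameters here are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """FGSM: shift the input in the sign direction of the loss
    gradient, bounded per-pixel by epsilon."""
    return x + epsilon * np.sign(grad)

# Toy differentiable "classifier": logistic regression on a flat image.
def predict(w, b, x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def loss_grad_wrt_x(w, b, x, y):
    # Gradient of binary cross-entropy loss w.r.t. the input x:
    # d/dx BCE = (p - y) * w
    p = predict(w, b, x)
    return (p - y) * w

rng = np.random.default_rng(0)
w = rng.normal(size=64)   # illustrative random weights
b = 0.0
x = rng.normal(size=64)   # stand-in for a flattened face image
y = 1.0 if predict(w, b, x) > 0.5 else 0.0  # the model's current label

# Perturb the image so the classifier's confidence in its own
# label drops -- the essence of an evasion attack.
g = loss_grad_wrt_x(w, b, x, y)
x_adv = fgsm_perturb(x, g, epsilon=0.5)
print("before:", predict(w, b, x), "after:", predict(w, b, x_adv))
```

Against a real black-box API the gradient is not directly available, so practical attacks instead estimate it by querying the service or transfer perturbations crafted on a surrogate model; the white-box sketch above only shows the core mechanism.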
