We present Norman, the world's first psychopath AI. Norman was inspired by the fact that the data used to teach a machine learning algorithm can significantly influence its behavior. So when people say that AI algorithms can be biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even "sick" things, if trained on the wrong (or, the right!) data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on what can go wrong when biased data is used to train machine learning algorithms.
Norman is an AI trained to perform image captioning: a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (its name is redacted due to its graphic content) that is dedicated to documenting and observing the disturbing reality of death. Then, we compared Norman's responses with those of a standard image-captioning neural network (trained on the MSCOCO dataset) on Rorschach inkblots, a test used to detect underlying thought disorders.
Visit norman-ai.mit.edu to explore what Norman sees!