
Trusting ChatGPT helps improve it

By Natalia Ponjoan

When the writer and journalist Juan José Millás had a conversation with ChatGPT in September, he pretended to hold a psychoanalysis session with the tool. He wanted to apply the Turing test to find out whether the chatbot could talk to him like a real person (specifically, like a psychoanalyst) rather than a computer. The journalist told the artificial intelligence about his dreams and fears, expecting it to guide him through therapy, but, among other things, it kept reminding him that the situation was imaginary and explaining that it was a language model. Millás called his virtual psychoanalyst narrow-minded and forgetful, and ultimately concluded that the AI had failed the test.

In conversations like Millás’, a person’s prior beliefs about an artificial intelligence (AI) agent such as ChatGPT affect the conversation itself and shape perceptions of the tool’s reliability, empathy and effectiveness, according to researchers from the Massachusetts Institute of Technology (MIT) and Arizona State University, who recently published a study in the journal Nature Machine Intelligence. “We have found that artificial intelligence is the viewer’s intelligence. When we describe to users what an AI agent is, it doesn’t just change their mental model; it also changes their behavior. And since the tool responds to the user, when people change their behavior, that also changes the tool’s behavior,” says Pat Pataranutaporn, a graduate student in the Fluid Interfaces group at the MIT Media Lab and a co-author of the study.
