Placebo effect shapes how we see AI

By Alison Snyder

The preconceived notions people have about AI — and what they're told before they use it — mold their experiences with these tools in ways researchers are beginning to unpack.

Why it matters: As AI seeps into medicine, news, politics, business and a host of other industries and services, human psychology gives the technology's creators levers they can use to enhance users' experiences — or manipulate them.

What they're saying: "AI is only half of the human-AI interaction," says Ruby Liu, a researcher at the MIT Media Lab.

  • The technology's developers "always think that the problem is optimizing AI to be better, faster, less hallucinations, fewer biases, better aligned," says Pattie Maes, who directs the MIT Media Lab's Fluid Interfaces Group.
  • "But we have to see this whole problem as a human-plus-AI problem. The ultimate outcomes don't just depend on the AI and the quality of the AI. It depends on how the human responds to the AI," she says.

What's new: A pair of studies published this week examined how much a person's expectations about AI affected how likely they were to trust it and take its advice.
