The assumptions you bring into conversation with an AI bot influence what it says

By Nick Hilden

Do you think artificial intelligence will change our lives for the better or threaten the existence of humanity? Consider carefully—your position on this may influence how generative AI programs such as ChatGPT respond to you, prompting them to deliver results that align with your expectations.

“AI is a mirror,” says Pat Pataranutaporn, a researcher at the MIT Media Lab and co-author of a new study showing how user bias shapes AI interactions. The researchers found that the way a user is “primed” for an AI experience consistently affects the results. Participants who expected a “caring” AI reported having a more positive interaction, while those who presumed the bot had bad intentions reported negative experiences—even though all participants were using the same program.

“We wanted to quantify the effect of AI placebo, basically,” Pataranutaporn says. “We wanted to see what happened if you have a certain imagination of AI: How would that manifest in your interaction?” He and his colleagues hypothesized that AI reacts with a feedback loop: if you believe an AI will act a certain way, it will.