A chatbot that asks questions could help you spot when it makes no sense

STEPHANIE ARNETT/MITTR | WELLCOME COLLECTION

By Melissa Heikkilä

AI chatbots like ChatGPT, Bing, and Bard are excellent at crafting sentences that sound like human writing. But they often present falsehoods as fact and reason inconsistently, and those errors can be hard to spot.

One way around this problem, a new study suggests, is to change the way the AI presents information. Getting users to engage more actively with the chatbot’s statements might help them think more critically about that content.