By James O'Donnell
Getting rid of disclaimers is one way AI companies might be trying to elicit more trust in their products as they compete for more users, says Pat Pataranutaporn, a researcher at MIT who studies human and AI interaction and was not involved in the research.
“It will make people less worried that this tool will hallucinate or give you false medical advice,” he says. “It’s increasing the usage.”
Pataranutaporn has conducted his own research on how people use AI for medical advice and found that they generally overtrust AI models on health questions even though the tools are frequently wrong.
“The companies are hoping that people will be rational and use this responsibly,” he says. “But if you have people be the one judging for this, you basically free yourself of the obligation to provide the correct advice.”