If a therapy bot walks like a duck and talks like a duck then it is a medically regulated duck
Research output: Contribution to journal › Comment/Debate › Contributed › peer-review
Abstract
Large language models (LLMs) are increasingly used for mental health interactions, often mimicking therapeutic behaviour without regulatory oversight. Documented harms, including suicides, highlight the urgent need for stronger safeguards. This manuscript argues that LLMs providing therapy-like functions should be regulated as medical devices, with standards ensuring safety, transparency and accountability. Pragmatic regulation is essential to protect vulnerable users and maintain the credibility of digital health interventions.
Details
| Field | Value |
|---|---|
| Original language | English |
| Article number | 741 |
| Journal | npj Digital Medicine |
| Volume | 8 |
| Issue number | 1 |
| Publication status | Published - 5 Dec 2025 |
| Peer-reviewed | Yes |
External IDs
| Source | Identifier |
|---|---|
| ORCID | /0000-0002-3730-5348/work/199963873 |
| ORCID | /0000-0002-1997-1689/work/199963891 |
| ORCID | /0000-0003-3323-2492/work/199963912 |
| ORCID | /0009-0004-7808-2701/work/199964112 |
| Scopus | 105024135832 |