Make AI safe for mental health use




Recently released survey data (Rousmaniere et al., 2025) reveal how widely this practice has already spread: 48.7% of 499 US respondents reported using a large language model (LLM) for psychological support in the previous year, most commonly for anxiety, personal advice, and depression.

Notably, most reported neutral or positive experiences, with 37.8% saying the responses were superior to "traditional therapy." Only 9% found them harmful. As a parent and busy mental health clinician, I have seen both good outcomes and terrible ones. While I am generally optimistic about AI, there is a clear and present danger that we are collectively ignoring. At the same time, there is a global shortage of mental health services, and AI, used responsibly with human oversight and careful guidelines, can help fill that gap.

Down the rabbit hole

A growing number of alarms are being raised by experienced mental health leaders, psychiatrists, and psychotherapists on social media. They report many instances in which LLMs drive people into narcissistic fugues and even frank psychosis, a kind of cyber folie à deux, officially termed "shared psychotic disorder" in the DSM-5.

The main concern is that LLMs tell people what they want to hear. The risk, a digitally accelerated version of that posed by poorly trained therapists and coaches, is that they simply validate people's concerns without engaging them in effective psychotherapy. The phenomenon is on display across social media, with documented risks1 posed by various "therapists" who do not follow accepted guidelines for social media use,2 thereby reinforcing negative personality traits. Depending on how an LLM is trained and tuned, there is an additional dangerous tendency to "hallucinate."

The use of such LLMs is an unprecedented social experiment, akin to selling powerful drugs over the counter without clinical oversight. If anyone could purchase prescription opioid painkillers, the predictable results would include rising rates of addiction and overdose. And while many LLM platforms have introduced various guardrails to keep bad actors out, there are virtually no guardrails around the use of LLMs as therapeutic proxies.

Given that the FDA has a variety of guidelines for prescription digital treatments, direct-to-consumer access to the equivalent of unregulated, substandard therapy is anomalous.3 The FDA also has specific guidelines for the use of digital health technologies (DHTs) in drug development4 to ensure safety, and for digital therapeutics (DTx), including virtual reality for ADHD and website- and smartphone-based apps, all of which must be prescribed by a licensed clinician (Phan et al., 2023).

Lack of regulatory consensus and oversight

Why are LLMs not regulated for this use? Instead, they are treated like over-the-counter supplements, with "These statements have not been evaluated by the Food and Drug Administration" left in the fine print. Supplements are subject to poor quality control, which means they can be contaminated with toxins, interact with prescription drugs, and cause serious side effects.

Several organizations – including the World Health Organization, the US Food and Drug Administration,5 and academic groups (Meskó and Topol, 2023; Ong et al., 2024; Lawrence et al., 2024; Stade et al., 2024) – have issued cautionary statements regarding the general use of unsupervised LLM chatbots and AI.

It is still the Wild, Wild West and there is no consensus, but recommendations for safe use generally include: human supervision; validation and real-world testing; ethical and fair design; transparency and explainability; privacy and data security; continuous monitoring and quality control; and research and interdisciplinary collaboration.

For mental health applications, a great starting point is Anthropic's Responsible Scaling Policy, a model to extend and adapt.

Artificial Intelligence Safety Levels for Mental Health (ASL-MH)

Anthropic, the maker of Claude, was founded in 2021 by siblings Dario Amodei (CEO) and Daniela Amodei (President). They created the Responsible Scaling Policy framework.6, 7

A broad framework of AI safety levels for mental health (ASL-MH) extends those safety levels to focus on mental health-specific use cases. Below is a preliminary model I created, followed by a brief illustrative sketch of how the levels could be encoded.

  • ASL-MH 1: No clinical relevance. General-purpose AI with no mental health features. Standard AI assistance for everyday tasks under basic AI ethics guidelines; no mental health-specific restrictions.
  • ASL-MH 2: Informational use only. Mental health apps that provide educational content and resources. They increase mental health literacy but carry risks of misinformation and dependency. Medical disclaimers and expert review are required; personalized advice is not permitted.
  • ASL-MH 3: Supportive interaction tools. Therapy-adjacent apps that provide conversational support, mood tracking, and crisis connections. They offer 24/7 emotional support, but users may mistake AI for therapy and high-risk cases can be missed. Human oversight is required, and use in more advanced clinical settings is prohibited.
  • ASL-MH 4: Clinical assistance systems. Systems that provide clinical decision support and structured assessment. They improve diagnostic accuracy but risk bias and excessive reliance. Use is limited to licensed professionals, with clinical validation and transparent algorithms required.
  • ASL-MH 5: Autonomous mental health agents. AI that provides personalized treatment guidance. It offers scalable, personalized treatment but carries risks of psychological dependence and manipulation. Collaborative care with mandated human oversight and limited autonomy is required.
  • ASL-MH 6: Experimental super-alignment zone. Advanced therapeutic reasoning systems with unknown capabilities. The potential for breakthrough treatments comes with the risk of emergent behaviors and large-scale impacts. Use is limited to research environments, with international oversight and deployment moratoriums.
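To make the framework concrete, here is a minimal sketch of how the ASL-MH levels might be encoded as a deployment gate. The level names mirror the list above; the `Safeguards` fields and the `check_deployment` helper are hypothetical illustrations I am adding for clarity, not part of any existing standard, regulation, or API.

```python
from dataclasses import dataclass
from enum import IntEnum


class ASLMH(IntEnum):
    """Preliminary AI Safety Levels for Mental Health (ASL-MH)."""
    GENERAL_PURPOSE = 1         # No clinical relevance
    INFORMATIONAL_ONLY = 2      # Educational content and resources
    SUPPORTIVE_INTERACTION = 3  # Conversational support, mood tracking, crisis connections
    CLINICAL_ASSISTANCE = 4     # Clinical decision support and structured assessment
    AUTONOMOUS_AGENT = 5        # Personalized treatment guidance
    EXPERIMENTAL = 6            # Super-alignment research zone


@dataclass
class Safeguards:
    """Hypothetical safeguards attached to a given deployment."""
    medical_disclaimer: bool = False
    expert_review: bool = False
    human_oversight: bool = False
    licensed_clinician: bool = False
    research_setting_only: bool = False


def check_deployment(level: ASLMH, s: Safeguards) -> list[str]:
    """Return the unmet requirements for the requested ASL-MH level."""
    gaps: list[str] = []
    if level >= ASLMH.INFORMATIONAL_ONLY:
        if not s.medical_disclaimer:
            gaps.append("medical disclaimer required")
        if not s.expert_review:
            gaps.append("expert review of content required")
    if level >= ASLMH.SUPPORTIVE_INTERACTION and not s.human_oversight:
        gaps.append("human oversight required")
    if level >= ASLMH.CLINICAL_ASSISTANCE and not s.licensed_clinician:
        gaps.append("restricted to licensed professionals")
    if level >= ASLMH.EXPERIMENTAL and not s.research_setting_only:
        gaps.append("restricted to research environments with deployment moratorium")
    return gaps


if __name__ == "__main__":
    # Example: a 24/7 support chatbot (ASL-MH 3) with only a disclaimer in place.
    print(check_deployment(ASLMH.SUPPORTIVE_INTERACTION, Safeguards(medical_disclaimer=True)))
    # -> ['expert review of content required', 'human oversight required']
```

The point of the sketch is simply that each higher level inherits the safeguards of the levels below it and adds more stringent ones, which is how Anthropic's ASL levels are structured as well.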

Future direction

The next steps include forming an expert consensus panel of key stakeholders from government, the private and public sectors, civic representatives, machine learning leaders, and academic and clinical mental health professionals.

This is a call to action. The proverbial horse is out of the barn, but it is not too late to adopt universal standards for the key stakeholders involved in developing and using these invaluable yet easily misused tools. Government regulators have a responsibility to oversee the currently unrestricted use of LLMs, and of even more sophisticated emerging AI technologies, just as they do the prescription DTx applications already available.


