
Your Voice Matters



The FDA’s Digital Health Advisory Committee is holding a public meeting on Generative Artificial Intelligence–Enabled Digital Mental Health Medical Devices and they want to hear from you!

The way we regulate AI in mental health now will shape the future of safe, ethical, and human-centered care.

Deadline for public comments: TOMORROW, October 17, 2025.

Comments submitted by the deadline will go directly to the FDA committee before the November 6th public meeting.

👉 Submit your comment here: https://tinyurl.com/7duw6cmw

Let’s make sure innovation in mental health keeps human presence, empathy, and safety at its core.


Here's my argument:


I am a licensed mental health counselor in private practice and a PhD candidate in Psychology at the California Institute of Integral Studies (CIIS). My doctoral research examines how embodied mindfulness approaches can improve interoceptive awareness, therapeutic presence, and the working alliance in virtual therapy. I have more than 20 years of clinical experience combining somatic and contemplative techniques, and I have presented internationally on embodied consciousness and relational attunement in digital environments.


The rapid integration of generative artificial intelligence (AI) into digital mental health devices presents both an enormous opportunity for change and an immense challenge for regulators. These technologies promise to make care more accessible, personalize treatment, and support clinicians in their work. However, they also introduce new and poorly understood risks that are especially dangerous for vulnerable populations. I describe these risks below:


1. Unpredictable Output and the Risk of Hallucination

 

Large language models (LLMs) can produce plausible but incorrect or contradictory information, which can be dangerous for people seeking mental health guidance (Blease et al., 2023; Lamparth et al., 2024). In simulated crisis interactions, even well-tuned LLMs have generated inadequate or hazardous responses (Grabb, 2024). Such unpredictable outputs demand stringent human-in-the-loop requirements and constraints on autonomous system behavior in therapeutic settings.

 

2. Absence of Clinical Evidence and Mechanistic Clarity

Despite considerable enthusiasm, there is little empirical evidence that generative-AI systems can replicate, sustain, or strengthen the therapeutic alliance, self-regulation, or emotional engagement, which are essential processes underlying psychotherapy efficacy (Gibson et al., 2024). Initial research has emphasized usability and satisfaction metrics rather than proven clinical outcomes or neurobiological correlates. This evidence gap underscores the need for phased regulatory approval and ongoing postmarket surveillance before these systems are marketed as therapeutic or diagnostic devices.

3. Overtrust and the Illusion of Empathy

Anthropomorphic design features can lead users to overtrust AI systems or to believe they can replace human care (Blease & Bernstein, 2023). The illusion of empathy, in which users feel that a machine understands their emotions, can obscure the absence of embodied attunement, reflective presence, and the complex processes of rupture and repair that constitute human therapy (Ekdahl & Ravn, 2021; Howard & Bussell, 2018). These relational and intercorporeal elements, fundamental to psychotherapeutic engagement, cannot be replicated in disembodied AI systems.

 

4. Bias, Fairness, and Cultural Limitations

Generative model training data inadequately represent non-Western, non-English, and marginalized communities (Hagerty et al., 2022). As a result, AI systems may misread culturally specific idioms of distress or reproduce existing biases. Without external fairness audits and representative datasets, generative systems could widen rather than reduce health inequities.

 

5. Privacy and Safety of Data

Generative AI devices gather highly personal conversational and behavioral data, often including emotional disclosures, self-harm ideation, or trauma histories. Such data are difficult to anonymize and can be reused for model retraining without consent (Price et al., 2023). The FDA must mandate explicit data-use rules, enforceable user consent, and clear mechanisms for deleting data and verifying its accuracy.

 

6. Gaps in Liability and Regulation

When harm occurs, such as inappropriate advice or a mishandled crisis, accountability is unclear. The FDA's own notice from September 2025 states that "generative artificial intelligence–enabled digital mental health medical devices pose novel risks requiring evolving regulatory approaches" (Federal Register, 2025). Because generative models can change after deployment, static premarket evaluation is not enough; an adaptive regulatory framework is necessary.

 

7. Keeping Therapeutic Presence and Embodied Engagement

In somatic psychology, therapeutic presence is established through embodied intersubjectivity, characterized by the reciprocal sensing, regulation, and co-presence of therapist and client (Geller & Greenberg, 2012; De Jaegher et al., 2016). In contrast, generative AI devices operate through symbolic mediation devoid of intercorporeal resonance. AI systems lacking embodied feedback mechanisms cannot replicate these relational or neurophysiological processes.

 

8. Recommendations

• Require independent validation studies before clinical use.

• Require real-time human oversight of high-risk interactions (e.g., crisis intervention).

• Establish clear standards for data handling, model retraining, and updates.

• Set strict marketing rules that prohibit unverified claims of health benefits.

• Support interdisciplinary ethics and safety review committees that include clinicians, somatic psychologists, AI ethicists, and patient advocates.

 

References

Blease, C., & Bernstein, M. (2023). AI chatbots and the illusion of empathy in mental health care. Journal of Medical Ethics, 49(7), 457–462.

De Jaegher, H., Di Paolo, E., & Gallagher, S. (2016). Can social interaction constitute social cognition? Trends in Cognitive Sciences, 20(10), 794–805.

Ekdahl, L., & Ravn, S. (2021). In touch through the screen: Digital intercorporeality in online psychotherapy. Phenomenology and the Cognitive Sciences, 20(4), 745–762.

Federal Register. (2025, September 12). Digital Health Advisory Committee; Notice of Meeting; Establishment of a Public Docket. U.S. Food and Drug Administration.

Geller, S. M., & Greenberg, L. S. (2012). Therapeutic presence: A mindful approach to effective therapy. American Psychological Association.

Gibson, C., Park, S., & Khosla, M. (2024). Assessing the clinical validity of AI chatbots for psychological support: A systematic review. Frontiers in Digital Health, 6(112), 160–175.

Grabb, C. L. (2024). Evaluating the safety of generative AI responses in mental health crises. arXiv preprint arXiv:2406.11852.

Hagerty, A., Rubinov, I., & Johnson, A. (2022). Bias and fairness in AI-driven mental health technologies. Journal of Artificial Intelligence Research, 74, 103–127.

Howard, M., & Bussell, H. (2018). Habituated: A Merleau-Pontian analysis of the smartphone. Library Trends, 66(3), 267–288.

Lamparth, M., Grabb, C., & Vasan, A. (2024). Safety and unpredictability in large language model deployment for healthcare. Nature Digital Medicine, 7(2), 45–60.

Price, C. J., Hooven, C., & Gross, C. (2023). Interoceptive awareness and emotion regulation: Mechanisms of change in mindfulness-based interventions. Neuroscience & Biobehavioral Reviews, 148, 105065.

 

 
 
 
