Industry is increasingly exploring the potential of AI chatbots to diagnose and treat various medical conditions, including in the area of mental health. FDA is just beginning to develop its regulatory framework for approving, clearing, or authorizing mental health devices based on generative AI technology. The medtech industry, healthcare providers, and the public are closely watching FDA developments and guidance regarding the use of generative AI across the medical device space.
In its latest effort to add clarity to the regulation of generative AI, FDA convened its Digital Health Advisory Committee on November 6, 2025, to address “Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices.” This was the second meeting of the Committee, following its inaugural meeting on November 20-21, 2024, on “Total Product Lifecycle Considerations for Generative AI-Enabled Devices.” The meeting offered insights not only into FDA’s approach to regulating generative AI in mental health medical devices, but also into its approach to medical devices more generally.
Key takeaways from last week’s meeting include:
- FDA acknowledged that certain products, such as general wellness devices, are not medical devices subject to FDA oversight and stated that it plans to provide further regulatory clarity for products that are medical devices;
- FDA continues to focus on a risk-based approach and a total product lifecycle (TPLC) approach;
- FDA is thinking critically about how clinical trial design may need to differ to account for the unique considerations for generative AI-enabled therapeutic devices;
- FDA and Committee members raised the importance of physician/human oversight and intervention in generative AI tools for mental health; and
- FDA stated that it intends to exercise oversight of products that are medical devices and noted that enforcement would be prioritized for use cases with a higher potential for harm.
Background
FDA’s Center for Devices and Radiological Health (CDRH) established the Committee in October 2023 to provide ongoing, diverse external input to complement the internal expertise of the Digital Health Center of Excellence (DHCoE). The Committee is composed of nine voting members, with additional temporary voting members appointed depending on the topic under discussion.
This development is the latest in the government’s efforts to regulate and encourage the use of digital health products. For example, the Centers for Medicare and Medicaid Services (CMS) has recently established new codes and reimbursement rates for digital mental health treatment and issued a request for comments on reimbursement for software as a service (SaaS). For more information on these CMS developments, see our alert here.
Key FDA Takeaways
FDA opened the meeting by noting that generative AI is already transforming healthcare and could help address critical public health needs in mental health, particularly by expanding access and supporting earlier intervention amid rising demand and limited care availability. FDA emphasized that its approach to regulatory oversight of medical devices is risk-based, with enforcement priority going to uses that carry a higher potential for harm. FDA distinguished software that does not meet the statutory definition of a medical device, such as an app designed to promote general wellness by providing daily motivational tips, from apps that might meet the statutory definition but for which FDA may exercise enforcement discretion and decline to actively regulate given the low risk. For example, FDA described an app designed to help patients diagnosed with anxiety by providing users with a daily skill as one it perceives as posing low risk to patient safety and for which it would intend to exercise enforcement discretion, even though it considers such devices within its regulatory reach. FDA explained that it is focusing on apps intended to provide therapy for specific psychiatric disorders, which would require premarket review.
FDA made clear that it needs to better understand the benefits, risks, and potential risk mitigation strategies that can be employed in connection with the use of generative AI-enabled medical devices. CDRH Director Dr. Michelle Tarver highlighted the unique challenges with generative AI-enabled medical devices, saying that FDA was interested in perspectives on “how these devices can remain safe for the long term in the real world as many of these technologies will continue to evolve and adapt over time.” That challenge was a key theme of the meeting, and discussion focused on the risk-benefit analysis for these types of devices and how to account for their ability to rapidly adapt.
FDA actively solicited input on how to consider and manage such risks, including potential controls and clinical trial designs, such as how to establish effective control arms and appropriate study endpoints for devices that may treat multiple mental health conditions. In addition, a recurrent question throughout the day was how to build in controls for crisis escalation and ensure appropriate management, including human intervention, of emergent safety risks that an AI-enabled device might encounter (such as suicidal ideation or self-harm). FDA invoked ISO 14971 to illustrate a risk management framework and noted that the regulatory framework should weigh benefit against potential risk, accounting for the severity, type, quantity, rate, probability, and duration of potential risks associated with the device. A few specific considerations that FDA pointed to, and that industry should keep in mind when developing products in this space, include:
- Whether the device is intended to be an adjunct to treatment, or used as standalone mental health treatment;
- How clinical performance data could be used to validate the effectiveness of risk control measures; and
- The importance of monitoring postmarket device performance.
Committee Discussion
During the committee discussion, advisory committee members provided feedback in response to three scenarios posed by FDA in which AI-enabled devices could be deployed either to treat or to diagnose mental health conditions. These scenarios focused on risk management throughout the total product lifecycle and involved the deployment of a prescription therapy device built on a large language model (LLM) designed to mimic a traditional therapy session. Although specific to mental health, these scenarios are also instructive as to what FDA is focusing on when evaluating generative AI applications more generally. Considerations raised include:
- What labeling is appropriate for a generative AI-enabled device, and does it differ from labeling for more traditional devices?
- Given that most digital health medical devices are currently prescription devices, how should FDA consider risks and special regulatory controls for over-the-counter products?
- Do specific patient populations pose unique considerations when integrating generative AI into a medical device (for example, a therapy device indicated for an adolescent or pediatric population)?
Committee members listed a number of regulatory protections (e.g., well-designed clinical trials, a strong safety profile, an extensive history of use) important for ensuring safety in each use case. Of particular importance to committee members was assurance that a qualified human would be prompted to intervene as appropriate in the event of a crisis, highlighting the continued importance of considering how to integrate human oversight into AI-enabled devices. Committee members also pointed out that striking a balance between premarket and postmarket requirements would be essential to promoting innovation while protecting patient safety.
Many of these comments echo the policy considerations discussed in Sidley’s October 2024 Fireside Chat With Former FDA Commissioner Dr. Scott Gottlieb.
Materials for the meeting, including the panel roster, agenda, and discussion questions, are available here.
The docket remains open for public comment through December 8, 2025.
Attorney Advertising—Sidley Austin LLP is a global law firm. Our addresses and contact information can be found at www.sidley.com/en/locations/offices.
Sidley provides this information as a service to clients and other friends for educational purposes only. It should not be construed or relied on as legal advice or to create a lawyer-client relationship. Readers should not act upon this information without seeking advice from professional advisers. Sidley and Sidley Austin refer to Sidley Austin LLP and affiliated partnerships as explained at www.sidley.com/disclaimer.
© Sidley Austin LLP




