The Council for International Organizations of Medical Sciences (CIOMS) Working Group XIV Draft Report offers comprehensive principles and best practices, translating global artificial intelligence (AI) requirements — such as those in the EU Artificial Intelligence Act (EU AI Act) — into practical guidance for pharmacovigilance (PV). In the U.S., where no overarching AI legislation exists, the report can guide lawmakers, regulators, and other stakeholders as they develop approaches to using AI in PV. The consultation period for the Draft Report is currently open, and interested parties are encouraged to provide comments by June 6, 2025.
The EU AI Act, adopted in 2024, is the first comprehensive legal framework for AI, using a risk-based approach that classifies AI systems into four categories. High-risk systems, including those in healthcare and PV, face strict requirements for risk management, transparency, human oversight, and data protection. Whether an AI system in PV is “high-risk” depends on its specific use and may require case-by-case assessment. Within the medicinal product lifecycle, the European Medicines Agency (EMA) distinguishes between systems that pose a “high patient risk,” where patient safety is directly affected, and those with a “high regulatory impact,” where the AI system significantly influences regulatory decision-making.
The Draft Report translates the EU AI Act’s high-level requirements into actionable guidance tailored to the realities of PV, complementing ongoing regulatory efforts by the EMA, which has emphasized the importance of leveraging AI responsibly in PV. In particular, EMA’s 2024 Reflection Paper on the Use of AI in the Medicinal Product Lifecycle echoes CIOMS’ recommendations, calling for regulatory impact and risk assessments, documentation of model performance, and alignment with good pharmacovigilance practices requirements. EMA’s regulatory science strategy and its AI-specific working groups underscore the need to balance innovation with patient safety — a goal the Draft Report operationalizes for PV stakeholders. The Draft Report helps organizations implement AI systems that are legally compliant, scientifically robust, and ethically sound while supporting harmonization across regions and preparing for future regulatory and technological changes.
While the U.S. has not yet implemented comprehensive AI legislation, the U.S. Food and Drug Administration’s (FDA) January 2025 guidance, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products” (FDA Guidance), details the risk-based considerations and methodologies underlying the FDA’s approach to AI development and deployment. The FDA Guidance aligns with the Draft Report’s themes and provides helpful context for how life sciences companies should assess AI implementation in the PV space. The new U.S. Presidential administration has also committed to making AI development and integration priorities. The Draft Report’s use cases and identified challenges are helpful guideposts for stakeholders innovating in the PV space while awaiting further direction from regulatory authorities.
In many ways, the Draft Report serves as a bridge between the high-level regulatory requirements of the EU AI Act and the evolving U.S. regulatory position, exploring the practical realities of implementing AI in PV. Accordingly, the Draft Report highlights several key steps life sciences companies should take as they seek to implement AI in PV activities:
- Translate Regulatory Principles into PV Practice. While the EU AI Act sets out broad obligations for high-risk AI systems, and the FDA Guidance proposes a risk-based framework that tailors risk management to an AI system’s particular role and the consequences of its failure modes, the Draft Report contextualizes these risk management considerations within the unique workflows, data categories, and risk profiles of PV. Companies should look to the use cases presented in the Draft Report as guideposts for interpreting risk in light of these regulatory principles. For example, the Draft Report provides detailed guidance on how to conduct risk assessments specific to PV use cases, such as individual case safety report (ICSR) processing and signal detection. It emphasizes the need for risk-based human oversight, tailored to the potential impact of AI errors on patient safety and regulatory decision-making — directly reflecting the EU AI Act’s focus on proportionality and context of use and framing the risk-assessment considerations discussed in the FDA Guidance.
- Operationalize Human Oversight. The EU AI Act requires that high-risk AI systems be subject to appropriate human oversight. The Draft Report expands on this by defining practical models of oversight — such as human in the loop, human on the loop, and human in command — and mapping these to specific PV tasks. It provides concrete examples of how life sciences companies can implement, monitor, and adapt oversight models over time, ensuring that human agency and accountability are maintained in line with both regulatory and ethical expectations. These use cases are instructive for evaluating human oversight in relation to the task-derived risk analysis detailed in the FDA Guidance, which stresses the importance of evaluating not only the reliability and credibility of a given AI model or system but also the human element charged with oversight responsibility (a minimal routing sketch appears after this list).
- Ensure Validity, Robustness, and Continuous Monitoring. The Draft Report details how to establish reference standards, validate AI models against real-world PV data both qualitatively and quantitatively, and set up continuous performance monitoring to detect model drift or emerging risks (see the drift-monitoring sketch after this list). It also addresses the challenges of data quality and representativeness in PV, offering strategies to mitigate biases and ensure that AI systems remain reliable as clinical practices and data sources evolve.
- Build in Transparency and Explainability. Transparency is a cornerstone of the EU AI Act, which mandates clear documentation, traceability, and explainability for high-risk AI systems. While the FDA Guidance notes the importance of keeping systems transparent to support regulatory evaluation (underscoring the importance of model development visibility for regulatory decision-making), it is largely silent on public-facing transparency concerns. The Draft Report provides a PV-specific roadmap for achieving transparency objectives: It outlines what information should be disclosed to stakeholders (e.g., model architecture, expected inputs and outputs, human-AI interaction), how to document performance evaluations, and how to implement explainable AI techniques to support regulatory audits, user trust, and error investigation. The Draft Report also highlights the importance of communicating the provenance of data and the role of AI in generating or processing safety information. Throughout the development cycle, companies should identify what information will need to be disclosed, catalog it accordingly, and ensure that a given AI system can be adequately described and explained to relevant stakeholders under the applicable legal structures (an illustrative disclosure record appears after this list).
- Address Data Privacy and Cross-Border Compliance Issues. The Draft Report reinforces the need for strict data privacy controls in PV, consistent with data protection frameworks such as the EU General Data Protection Regulation. It discusses the heightened risks posed by generative AI and large language models, including potential re-identification and linkage of previously anonymized data, and provides practical recommendations for de-identification, data minimization, and secure data handling that life sciences companies should seek to implement (see the de-identification sketch after this list).
- Promote Nondiscrimination. The EU AI Act requires that AI systems avoid discriminatory outcomes, and the FDA Guidance highlights the importance of identifying, and accounting for, bias in datasets and model training. The Draft Report operationalizes these goals by advising on the selection and evaluation of training and test datasets to ensure representativeness and on the implementation of mitigation strategies for identified biases (see the subgroup-performance sketch after this list). It frames nondiscrimination as both a regulatory and an ethical imperative in PV.
- Establish Governance and Accountability Structures. The Draft Report recommends the establishment of cross-functional governance bodies, assignment of roles and responsibilities throughout the AI lifecycle, and regular review of compliance with guiding principles. It provides tools such as a governance framework grid to help organizations document actions, manage change, and ensure traceability — facilitating both internal oversight and external regulatory inspection (an illustrative grid row appears after this list).
- Comment on the Draft Report. Developed through consensus among regulators, academics, and industry representatives, the report is expected to shape regulatory expectations globally. The current consultation period offers life sciences companies a key opportunity to contribute to the development of future standards for AI in PV, and they should review the Draft Report and take advantage of that opportunity by submitting comments by June 6, 2025. U.S.-based entities may wish to participate given that the finalized report may provide a roadmap for U.S. lawmakers and regulators to draw on as they develop their own approach to AI and its applications in the PV space. Active participation in public consultations, such as the one for the Draft Report, helps shape best practices and ensures that all perspectives are considered in future guidance.
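To make the human-oversight models above concrete, the following is a minimal, hypothetical sketch in Python of a human-in-the-loop gate for ICSR triage. The Draft Report contains no code; the names (CaseAssessment, route_case) and the 0.90 confidence threshold are illustrative assumptions, not prescribed values.

```python
from dataclasses import dataclass

# Hypothetical illustration only: CaseAssessment, route_case, and the
# 0.90 threshold are assumptions for this sketch, not Draft Report values.

@dataclass
class CaseAssessment:
    case_id: str
    model_label: str   # e.g., "serious" or "non-serious"
    confidence: float  # model-reported confidence in [0.0, 1.0]

CONFIDENCE_THRESHOLD = 0.90  # illustrative cut-off, set during validation

def route_case(assessment: CaseAssessment) -> str:
    """Human-in-the-loop gate: route high-impact or low-confidence model
    outputs to a human reviewer before any downstream action."""
    if assessment.model_label == "serious":
        # Direct patient-safety impact: always require human sign-off.
        return "human_review_required"
    if assessment.confidence < CONFIDENCE_THRESHOLD:
        # Model is uncertain: escalate rather than auto-process.
        return "human_review_required"
    # Otherwise auto-accept, retaining the case for periodic sampling by
    # a reviewer "on the loop" who audits aggregate behavior.
    return "auto_accept_with_audit_sample"

print(route_case(CaseAssessment("ICSR-001", "non-serious", 0.97)))
print(route_case(CaseAssessment("ICSR-002", "serious", 0.99)))
```

The design point is proportionality: the gate tightens as potential patient-safety impact or model uncertainty rises, mirroring the risk-based oversight both the EU AI Act and the FDA Guidance contemplate.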
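For the continuous-monitoring point, the sketch below shows one simple way to detect performance drift: compare a rolling agreement rate between model output and human adjudication against a validation baseline. The baseline, tolerance, and window size are assumptions an organization would set during its own validation exercise.

```python
from collections import deque

BASELINE_PRECISION = 0.92  # assumed figure from initial validation
TOLERANCE = 0.05           # assumed acceptable degradation before escalation
WINDOW = 500               # recent human-adjudicated cases tracked

class DriftMonitor:
    """Flags when rolling model-vs-human agreement drops below the
    validation baseline by more than the configured tolerance."""

    def __init__(self) -> None:
        self._recent = deque(maxlen=WINDOW)  # True if model matched human

    def record(self, model_correct: bool) -> None:
        self._recent.append(model_correct)

    def drifted(self) -> bool:
        if len(self._recent) < WINDOW:
            return False  # too little data for a stable estimate
        observed = sum(self._recent) / len(self._recent)
        return observed < BASELINE_PRECISION - TOLERANCE
```

A real deployment would also watch for shifts in the input data itself and trigger revalidation, but the escalation pattern is the same.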
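On transparency documentation, a disclosure record can be kept as structured data so it is produced consistently for audits. The schema below is a hypothetical example keyed to the disclosure items the Draft Report lists; every field name and value is illustrative.

```python
import json

# Hypothetical "model card"-style disclosure record; field names track the
# items the Draft Report lists (architecture, inputs/outputs, human-AI
# interaction, performance, provenance), but the schema is an assumption.

disclosure_record = {
    "system_name": "icsr-triage-classifier",  # illustrative name
    "model_architecture": "fine-tuned transformer text classifier",
    "expected_inputs": "free-text ICSR narratives (English)",
    "expected_outputs": "seriousness label with confidence score",
    "human_ai_interaction": "human review required below 0.90 confidence",
    "performance_evaluation": {
        "precision": 0.92,
        "recall": 0.88,
        "test_set": "held-out ICSR sample",
    },
    "data_provenance": "company safety database extracts, 2018-2023",
}

print(json.dumps(disclosure_record, indent=2))
```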
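On de-identification and data minimization, the sketch below pairs pattern-based redaction of direct identifiers with a field allowlist. It is a toy illustration, not a validated scrubber; the patterns and retained fields are assumptions, and production systems would rely on vetted de-identification tooling.

```python
import re

# Fields assumed necessary for the PV task; everything else is dropped.
RETAINED_FIELDS = {"age_group", "sex", "suspect_drug", "reaction_term"}

# Illustrative patterns only; a real scrubber needs far broader coverage.
_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-style identifiers
    re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),  # exact calendar dates
]

def scrub_narrative(text: str) -> str:
    """Replace direct identifiers in a case narrative with a placeholder."""
    for pattern in _PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def minimize_record(record: dict) -> dict:
    """Data minimization: retain only fields needed for the PV task."""
    return {k: v for k, v in record.items() if k in RETAINED_FIELDS}

print(scrub_narrative("Patient emailed jane.doe@example.com on 03/04/2024."))
```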
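For representativeness checks, one basic control is to compute model performance per subgroup and flag large gaps. The 0.10 disparity threshold below is an assumption; an organization would set its own tolerance through its risk assessment.

```python
from collections import defaultdict

DISPARITY_THRESHOLD = 0.10  # assumed tolerance; set via risk assessment

def subgroup_accuracy(records):
    """records: iterable of (subgroup, model_correct) pairs. Returns
    per-subgroup accuracy and the groups trailing the best performer by
    more than DISPARITY_THRESHOLD."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for subgroup, is_correct in records:
        totals[subgroup] += 1
        correct[subgroup] += int(is_correct)
    accuracy = {g: correct[g] / totals[g] for g in totals}
    best = max(accuracy.values())
    flagged = [g for g, a in accuracy.items() if best - a > DISPARITY_THRESHOLD]
    return accuracy, flagged

acc, flagged = subgroup_accuracy([
    ("age_65_plus", False), ("age_65_plus", True),
    ("age_under_65", True), ("age_under_65", True),
])
print(acc, flagged)  # age_65_plus trails by 0.5 and is flagged
```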
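Finally, the governance framework grid mentioned above can be maintained as structured rows so entries remain traceable and inspectable. The column names below are assumptions inspired by the Draft Report's description of the grid, not its actual layout.

```python
from dataclasses import dataclass, asdict

# Hypothetical rendering of a governance grid row; columns are assumptions.

@dataclass
class GovernanceGridRow:
    lifecycle_stage: str   # e.g., "validation", "deployment"
    principle: str         # guiding principle being addressed
    responsible_role: str  # accountable function or committee
    action: str            # concrete action taken
    evidence: str          # traceable record available for inspection

grid = [
    GovernanceGridRow("validation", "robustness", "PV data science lead",
                      "benchmarked model on held-out ICSR set",
                      "validation report v1.2"),
    GovernanceGridRow("deployment", "human oversight", "safety governance board",
                      "defined human-in-the-loop review thresholds",
                      "SOP-AI-004"),
]

for row in grid:
    print(asdict(row))
```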
Sidley Austin LLP provides this information as a service to clients and other friends for educational purposes only. It should not be construed or relied upon as legal advice or used to create a lawyer-client relationship.
Attorney Advertising - For purposes of compliance with New York State Bar rules, our headquarters are Sidley Austin LLP, 787 Seventh Avenue, New York, NY 10019 (+212 839 5300); One South Dearborn, Chicago, IL 60603 (+312 853 7000); and 1501 K Street, N.W., Washington, D.C. 20005 (+202 736 8000).