Pharmacovigilance (PV) refers to both the science and the activities related to detecting, assessing, understanding, and preventing adverse effects (AEs) or any other possible drug-related problems. For this purpose, data on particular AEs and medication errors are collected. PV activities have traditionally relied heavily on human contribution, for example in the review of individual case safety reports (ICSRs). In an increasingly data-driven world, PV data generation is growing steadily, resulting in resource capacity limitations and a largely unmeasured "human error" rate in PV. This is leading larger pharmaceutical companies in particular to seek more efficient PV processes. The year 2023 may, and should, mark the time for both larger and smaller pharmaceutical companies to embrace artificial intelligence (AI) options in PV and begin, or continue, to learn with the technology as it evolves.
“Hyper-PV” has changed the PV landscape
During the COVID-19 pandemic, both competent authorities and the general public paid close attention to the development of vaccines and other medicinal products. This public attention, and the broad use of the vaccines and relevant medicinal products, led to extremely large volumes of safety data, along with regulatory requirements to review and report more safety data more frequently. For the affected pharmaceutical companies this meant a state of "hyper-PV," during which many of them turned to automation and AI to help manage and meaningfully assess these large volumes of safety data.
In the context at hand, AI describes an umbrella of technologies used to perform tasks "automatically" that typically require human intelligence. PV activities have high potential to be automated and enhanced by AI, for instance in case processing and case causality assessment: scanning text files, and coding and classifying ICSRs.
The experience gained over the last couple of years, together with companies' discovery of the considerable cost-cutting potential of AI for repetitive tasks, has helped boost the technology. Developers of AI are learning how to make it worth investing in, even for processes that require significant human input. Slowly but steadily, AI is thus turning into a tool that pharmaceutical companies must take seriously, not only to stay ahead of the curve but to manage PV activities in a risk-proportionate manner.
Competent authorities are also monitoring developments in this space and aim to address the regulatory challenges that the use of AI poses (see the joint Horizon Scanning Assessment Report of the International Coalition of Medicines Regulatory Agencies (ICMRA), with recommendations related to its case study on AI in PV). As the use of AI can affect the benefit/risk ratio of medicinal products, guidelines will be required to set out the requirements for validation and for AI-specific challenges such as explainability and human oversight, as well as the integration of these tools into the quality and risk management system. We expect the entire industry to turn to certain automation and AI-driven technologies to ensure that all relevant PV data is captured and duly evaluated.
Which challenges remain, and how can pharmaceutical companies address them?
The overwhelming reason many pharmaceutical companies still appear to hesitate when it comes to AI is a lack of trust. Seemingly unanswered questions include the following:
- How do we ensure that AI learns from the “right” data? Which experts are preparing the data?
- AI relies entirely on the data on which it is trained, and PV departments do not develop AI themselves. Rather, data scientists and AI engineers (i.e., experts in this type of technology) create the relevant models. How do we bridge the knowledge gap between those who prepare and select the training data and those who will use the resulting tools?
- How do we ensure that AI continues to learn and develop in the right direction? PV systems are based on the idea of human interaction: Human teams talk to each other in case of doubt; they help each other overcome complex challenges. To reflect such human interactions, will AI need to be updated frequently?
- How will data bias and data drift (the gradual mismatch between a model's training data and real-world data that changes over time) be addressed?
- How do we remain compliant with data privacy rules and still share large data sets, which are necessary for AI to learn?
- How do we ensure that AI detects AEs but does not get “overwhelmed” with nonadverse events? In other words: What is the right balance between sensitivity and specificity?
- What requirements will legislators and competent authorities expect with respect to AI? Will the use of AI be part of the good PV practice at some point? How will systems be validated and AI used in a responsible, consistent, and predictable fashion?
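The sensitivity/specificity balance raised above can be made concrete. The following is a minimal, hypothetical sketch (the labels, scores, and thresholds are invented for illustration, not drawn from any real PV system): an AE classifier assigns each incoming report a score, and the decision threshold determines how many true AEs are caught (sensitivity) versus how many non-adverse events are correctly filtered out (specificity).

```python
# Hypothetical illustration of the sensitivity/specificity trade-off for an
# AE-detection classifier. All data below is invented example data.

def sensitivity_specificity(labels, scores, threshold):
    """Classify a report as an AE if its score meets the threshold, then
    compute sensitivity (share of true AEs caught) and specificity
    (share of non-adverse events correctly filtered out)."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# 1 = true adverse event, 0 = non-adverse event (invented data)
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.5, 0.3, 0.2, 0.2, 0.1]

# A low threshold catches more AEs (high sensitivity) but also flags more
# non-adverse events (low specificity); a high threshold does the reverse.
for t in (0.3, 0.5, 0.7):
    sens, spec = sensitivity_specificity(labels, scores, t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

On this toy data, lowering the threshold to 0.3 catches every true AE but lets half the non-adverse events through, while raising it to 0.7 misses half the AEs. Defining the acceptable operating point on that trade-off, and the acceptable error rate that goes with it, is exactly the validation question companies will need to answer.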
These questions can only be addressed and satisfactorily answered if the pharmaceutical industry works together, makes its concerns and challenges heard, and dares to learn by doing. The experience that already exists can serve as a guidepost for smaller and midsize companies that wish to venture into AI. AI will need to be trained and adapted to each use case, with trial-and-error phases to be expected and acceptable error rates to be defined. After all, the human error rate in current day-to-day PV activities is still largely unquantified on an individual company basis.
The current breadth of AI use in PV is strongly constrained by the limits of current AI solutions and by the lack of trust users have in the new technology. However, from what we see in the industry, AI is the next logical digital evolution and will inevitably affect PV. Resolving these technical and cultural limitations will help expand its use and open a new phase for AI in PV, one that allows for concrete improvement of AI's results. Pharmaceutical companies are well advised to seize this opportunity to influence the way the industry thinks about AI in PV and to shape its use.