by Michael Bani


Pragmatic Approach to AI in Pharmacovigilance

Explore how AI and GenAI are transforming pharmacovigilance through automation, emerging applications, and evolving global regulations.

Artificial intelligence (AI) has been rapidly integrated into various aspects of the pharmaceutical sector—from drug discovery to pharmacovigilance. In the latter, generative AI (GenAI) holds exceptional promise. However, as AI systems are developed and widely used in pharmacovigilance, evaluating their implications and deployment is necessary.

A common understanding of AI systems and their implications needs to be established to create a trustworthy AI landscape. This includes identifying and assessing their advantages, disadvantages, and potential risks. Establishing this common understanding will allow all stakeholders to collaborate effectively and develop AI systems aligned with societal values and expectations. The rapid adoption of AI necessitates a meticulous, trustworthy, and human-centric approach to ensure AI systems uphold privacy, fairness, transparency, and compliance during pharmacovigilance activities.

The European Union (EU) AI Act plays a crucial role in setting global standards for the responsible development of AI systems. Establishing similar regulations and international standards/best practices is equally essential to ensure the responsible development and implementation of AI systems. However, this is likely to face several challenges.


Challenges in AI integration into pharmacovigilance

Life science companies face several challenges when integrating AI into pharmacovigilance:

  • Executive expectations: Leadership is under immense pressure from stakeholders to modernize current processes with AI technologies. However, this is challenging because pharmacovigilance safety systems contain sensitive, regulated data. Careful integration is crucial to ensure the company retains regulatory compliance and safeguards all data.
  • Workforce concerns: The workforce in the pharmacovigilance realm often includes nurses, physicians, pharmacists, and scientists who have job security concerns due to automation. Hence, upskilling and training are essential to ensure the current workforce accepts and adopts new AI technologies.
  • Vendor demands: Vendors who provide safety systems/services are under pressure to integrate AI tools into their solutions while complying with complex global legal and regulatory requirements. However, meeting this expectation is challenging.


Legal framework

Pharmaceutical product development is highly regulated to ensure the final approved product is safe and effective. As companies try to integrate AI, they must navigate existing regulatory frameworks, each of which poses unique challenges.

European Union

The EU leads global AI regulation with the EU AI Act. This Act categorizes AI systems based on potential risk and sets corresponding implementation requirements. Generally, applications fall into three categories:

  • High-risk applications: These applications are subject to strict oversight.
  • Banned applications: Certain applications of AI systems, such as facial and emotion recognition technologies, are entirely prohibited.
  • Allowed applications/exemptions: Some AI applications, such as AI in medicine research and development, are permitted but must adhere to governance and accountability principles.

Non-compliance with the EU AI Act can result in significant financial penalties, ranging between 3% and 7% of the company’s global revenue depending on the violation. While most drug-development AI systems are unlikely to be classified as “high-risk,” further guidance on applying the AI Act will provide clarity and certainty.
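
To make the exposure concrete, the sketch below translates the 3%–7% revenue range cited above into figures for a hypothetical company. The revenue figure is invented for illustration; actual fines under the Act also include fixed-amount alternatives and depend on the violation type.

```python
def penalty_range(global_revenue_eur: float) -> tuple[float, float]:
    """Rough penalty band using the 3%-7% of global revenue range.

    A simplification: real EU AI Act fines vary by violation category
    and may instead be a fixed amount, whichever is higher.
    """
    return (0.03 * global_revenue_eur, 0.07 * global_revenue_eur)

# Hypothetical company with EUR 2 billion in global annual revenue:
low, high = penalty_range(2_000_000_000)
print(f"EUR {low:,.0f} - EUR {high:,.0f}")  # EUR 60,000,000 - EUR 140,000,000
```

Even at the lower bound, the figure dwarfs the cost of building a compliance program up front, which is the practical argument for early governance.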

United States

The United States does not have comprehensive regulations for implementing AI systems. Instead, AI regulations are being implemented through various federal- and state-level actions. Some key actions include:

  • Executive Order 14110: Issued in October 2023, Executive Order 14110 directs federal agencies to collaborate on guidelines for the safe use of AI in specific industries and technologies.
  • NIST AI Risk Management Framework (AI RMF): Developed with input from more than 240 contributors across private industry, academia, and government, it provides a framework for the trustworthy development of AI systems and aims to help organizations address and manage AI risks.

There is also emerging privacy legislation, such as the proposed American Privacy Rights Act of 2024, which aims to influence AI laws by focusing on impact assessments for data processing and transfer. However, this legislation is still nascent and can be shaped by various socioeconomic and political factors (e.g., the November 2024 presidential election).

Global trends

Globally, AI regulation is gaining momentum, and countries are adopting different approaches. Because AI’s impact crosses borders, many governments and members of the public have encouraged international cooperation through organizations like the United Nations (UN) and the Organisation for Economic Co-operation and Development (OECD) to set global standards for the ethical development and implementation of AI.

Many countries, like Rwanda, Nigeria, and South Africa, are using the EU AI Act as a model for their own AI regulations. Other countries, like China, are not taking a unified approach but are implementing specific rules for different AI systems.

Therefore, on a global front, each country is navigating AI regulation according to its own ethical and business standards. Harmonized global legislation or regulation, however, would benefit all countries and guide the safe use and development of AI systems.


Current use cases of AI in pharmacovigilance

Companies are rapidly integrating AI systems to automate tasks and reduce manual work, and, by extension, human error. There are many applications of AI in pharmacovigilance. Here are some of the most prominent:

  • Individual case safety reports (ICSRs): Rule-based bots and natural language processing techniques can automate case intake and processing.
  • Quality control and compliance monitoring: Automated systems, including those with AI, can automate data validation, auditing, and anomaly detection to ensure data quality and compliance.
  • Signal detection and management: Automated tools can transfer safety information from source systems, reducing manual effort. Next, machine learning algorithms can undertake data mining and pattern recognition.
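
As one concrete illustration of the data mining mentioned above, a standard disproportionality statistic used in signal detection is the proportional reporting ratio (PRR). The sketch below computes it from a 2x2 contingency table of case counts; the counts are hypothetical and assumed to be already extracted from a safety database.

```python
def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR from a 2x2 contingency table of ICSR counts.

    a: reports with the suspect drug AND the event of interest
    b: reports with the suspect drug and any other event
    c: reports of the event with all other drugs
    d: reports of other events with all other drugs
    """
    rate_drug = a / (a + b)    # event rate among reports for this drug
    rate_other = c / (c + d)   # event rate among reports for other drugs
    return rate_drug / rate_other

# Hypothetical counts: 20 drug+event, 380 drug+other events,
# 100 event with other drugs, 99,500 other/other.
prr = proportional_reporting_ratio(20, 380, 100, 99_500)
print(round(prr, 1))  # 49.8
```

A PRR above 2 (alongside supporting criteria such as a minimum case count) is a commonly used screening threshold; a value like this would flag the drug-event pair for medical review rather than constitute a confirmed signal.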


Emerging use cases of GenAI in pharmacovigilance

GenAI enhances process efficiency, provides insights, and facilitates decision-making. Hence, it can be broadly implemented in pharmacovigilance. Here are some of the most prominent applications:

  • ICSR creation: AI can generate ICSRs from structured and unstructured data sources.
  • Medical evaluation: AI can automate seriousness and causality assessments.
  • Narrative and document generation: Large language models (LLMs) can generate detailed case narratives with fewer manual errors, improving operational efficiency. LLMs can also draft comprehensive regulatory reports.
  • Adverse event reporting: AI-powered chatbots can improve the timeliness and accuracy of data collection. For example, follow-up communication with patients can be automated with AI, ensuring timely data collection.
  • Signal identification and characterization: LLMs can analyze data from different sources to detect safety signals early, reducing lag time and false positives.
  • Real-time data query: LLMs can enhance interaction with data repositories, providing real-time insights for faster signal validation.
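
For the narrative-generation use case above, a minimal sketch of how a prompt might be assembled from structured case fields is shown below. The function name, field names, and prompt wording are illustrative assumptions, and the actual LLM call is deliberately left out; in practice the prompt would be routed through a governed gateway with human review of the output.

```python
def build_narrative_prompt(case: dict) -> str:
    """Assemble an LLM prompt for drafting an ICSR case narrative.

    Illustrative only: field names and wording are assumptions, and a
    production system would validate inputs and keep a human in the loop.
    """
    return (
        "Draft a concise pharmacovigilance case narrative in the past "
        "tense, using only the facts below. Do not infer causality.\n"
        f"Patient: {case['age']}-year-old {case['sex']}\n"
        f"Suspect drug: {case['drug']} ({case['dose']})\n"
        f"Adverse event: {case['event']} (onset: {case['onset']})\n"
        f"Outcome: {case['outcome']}\n"
    )

prompt = build_narrative_prompt({
    "age": 64, "sex": "female", "drug": "DrugX", "dose": "10 mg daily",
    "event": "rash", "onset": "2024-03-02", "outcome": "recovered",
})
print(prompt)
```

Constraining the model to "use only the facts below" and to avoid causality inference reflects the human-oversight principle discussed later: the model drafts, but assessment decisions stay with qualified reviewers.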


Future Operating Models and Workforce Transformation

As companies adopt AI or GenAI, they should tailor these systems to their unique pharmacovigilance requirements while maintaining oversight of how AI systems are developed and deployed. Companies should foster collaboration and engagement with internal and external stakeholders to share best practices and address challenges.

When integrating new operating models, organizations must adopt a tailored approach. Here are some organizational best practices when implementing AI into pharmacovigilance:

  1. Accountability and governance: Organizations must establish an AI governance committee to oversee the AI strategy and compliance. An AI governance framework must be developed and regularly updated. Organizations should also appoint a representative to monitor ethical implications and ensure compliance.
  2. Transparency and explainability: During AI development and deployment, companies must prioritize transparency. Comprehensive AI system development, testing, and performance documentation must be maintained.
  3. Human oversight and control: Human oversight is necessary in high-risk AI use cases. To this end, human-in/on-the-loop approaches can be implemented to maintain human judgment in AI-led decision-making.
  4. Fairness and non-discrimination: Companies must prioritize fairness and non-discrimination during AI development and deployment. Regular bias assessments must be conducted, and diverse training datasets must be used to avoid discrimination.
  5. Privacy and data governance: Establish robust data governance practices and privacy-by-design principles. The ownership of all AI-generated content and data must be checked to ensure it does not breach intellectual property rights.

Additionally, there are several process considerations when implementing AI systems:

  1. Risk assessment: Organizations must assess the risk of all AI systems implemented in pharmacovigilance and develop appropriate risk analysis and mitigation measures.
  2. Vendor management: When using AI vendors, the roles and responsibilities of all parties must be established. Additionally, the ownership of data and consent for its use must be clear. Vendors’ compliance with regulatory requirements and the company’s internal governance structure must be assessed regularly, and consequences for non-compliance, including contract termination, must be defined.
  3. Validation: Validation procedures must be built into AI systems to ensure outputs are reliable and data sources are traceable.
  4. Continuous monitoring: Continuous monitoring and checks must ensure the system operates safely. Companies must define reasonable success metrics to determine project success. Frequent quality assessments are necessary to maintain compliance.


Conclusion

The integration of AI into pharmacovigilance presents various opportunities. However, a collaborative strategy must be established to ensure that deployed AI systems meet expectations. While this development is beneficial, it will likely be challenging because AI regulations are still evolving. As countries individually navigate regulatory waters and develop their own AI frameworks, companies can maintain compliance by building internal governance frameworks. As AI use cases in pharmacovigilance increase, companies need to create AI strategies and adapt them to new legislation and developments.

Author Profile

Michael Bani

Director, Editor (US & Europe)

