
Good AI Practice in Drug Development

Regler views on cGAIP

Artificial Intelligence (AI) is rapidly reshaping how medicines are discovered, developed, manufactured, and monitored. From predicting toxicity earlier to optimizing clinical trial design and improving post-market safety surveillance, AI holds real promise to accelerate innovation while improving patient outcomes.

However, in regulated environments like life sciences, AI must earn trust before it earns scale.

In January 2026, global regulators published a set of Guiding Principles for Good AI Practice in Drug Development, outlining expectations for the responsible, safe, and effective use of AI across the drug product lifecycle. See the US FDA and EMA publication: https://www.fda.gov/about-fda/artificial-intelligence-drug-development/guiding-principles-good-ai-practice-drug-development

While not prescriptive regulations, these principles signal a clear direction: AI must strengthen—not weaken—quality, safety, and regulatory confidence.

At Regler, we see these principles as more than guidance. They represent a blueprint for how AI should be designed, governed, validated, and operationalized in regulated GxP environments.

This article translates those principles into practical, industry-ready insights.

Why “Good AI Practice” Matters in Drug Development Now

Drug discovery and development is among the most mature business functions seriously exploring the use of AI. With AI use still heavily scrutinized and speculation widespread, the new Good AI Practice guidance is timely. Drugs are approved based on demonstrated quality, safety, and efficacy. AI systems increasingly influence decisions that affect all three—whether through data analysis, predictions, or automation.

Unlike traditional software, AI systems:

  • Learn from data rather than fixed rules

  • May change behavior over time

  • Depend heavily on data quality and context

  • Can introduce hidden bias or performance drift

That makes governance, transparency, and lifecycle control essential.

The new guidance recognizes this reality and emphasizes risk-based, human-centric, and standards-aligned AI, consistent with expectations from regulators such as U.S. Food and Drug Administration and European Medicines Agency.

The 10 Principles — Interpreted Through a Practical Lens

1. Human-Centric by Design

AI should support human decision-making, not replace accountability. Final responsibility for drug development decisions must remain with qualified professionals.

Regler view: AI should be auditable, explainable, and designed to augment expert judgment—not obscure it.

2. Risk-Based Approach

Not all AI use cases carry the same risk. A model supporting early research exploration should not be governed like one influencing batch release or patient safety decisions.

Regler view: AI governance should scale with intended use, impact, and patient risk, consistent with GAMP and CSA principles.
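To make the risk-scaling idea concrete, here is a minimal sketch of how an organization might map an AI use case to a governance tier. The tier names, control lists, and `AIUseCase` fields are illustrative assumptions, not a regulatory taxonomy; a real program would align them with GAMP 5 / CSA categories and its own quality system.

```python
from dataclasses import dataclass

# Illustrative tiers and controls (assumptions, not a regulatory taxonomy).
GOVERNANCE_BY_TIER = {
    "exploratory": ["version control", "peer review"],
    "decision-support": ["version control", "peer review",
                         "documented validation", "human-in-the-loop sign-off"],
    "gxp-impacting": ["version control", "peer review",
                      "documented validation", "human-in-the-loop sign-off",
                      "change control", "continuous performance monitoring"],
}

@dataclass
class AIUseCase:
    name: str
    affects_patient_safety: bool
    affects_product_quality: bool
    informs_regulated_decisions: bool

def governance_tier(uc: AIUseCase) -> str:
    """Scale governance with impact: the highest applicable tier wins."""
    if uc.affects_patient_safety or uc.affects_product_quality:
        return "gxp-impacting"
    if uc.informs_regulated_decisions:
        return "decision-support"
    return "exploratory"

batch_model = AIUseCase("batch release anomaly model", True, True, True)
lit_model = AIUseCase("literature triage assistant", False, False, False)
print(governance_tier(batch_model))  # gxp-impacting
print(governance_tier(lit_model))    # exploratory
```

The point of the sketch is that the control set is derived from intended use and impact, so a low-risk exploratory tool is not burdened with batch-release-grade validation, and a patient-impacting model cannot quietly escape it.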

3. Adherence to Standards

AI must align with applicable GxP, quality systems, cybersecurity, and regulatory standards.

Regler view: AI should be treated as part of the regulated system—not an external experiment.

4. Clear Context of Use

Every AI system must have a clearly defined purpose, scope, and limitations.

Regler view: “What decision does this AI influence?” should be documented before the first model is trained.
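One lightweight way to enforce the "document before training" discipline is to capture the context of use as a structured record. The field names and example values below are illustrative assumptions, not a regulatory template:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContextOfUse:
    """Minimal context-of-use record, captured before model development starts.
    Fields are illustrative, not a regulatory template."""
    decision_influenced: str   # answers "What decision does this AI influence?"
    intended_users: list
    in_scope: list
    out_of_scope: list
    known_limitations: list

cou = ContextOfUse(
    decision_influenced="Prioritization of deviation records for QA review",
    intended_users=["QA reviewers"],
    in_scope=["manufacturing deviations from a single site"],
    out_of_scope=["clinical adverse events", "batch release decisions"],
    known_limitations=["trained only on English-language records"],
)
print(cou.decision_influenced)
```

Making the record frozen (immutable) means any later change of scope forces a new, reviewable version rather than a silent edit.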

5. Multidisciplinary Expertise

AI development is not just a data science exercise. It requires collaboration across quality, clinical, regulatory, engineering, and domain experts.

Regler view: AI failures often stem from organizational silos—not algorithms.

6. Data Governance & Documentation

Data sources, transformations, assumptions, and lineage must be traceable and verifiable.

Regler view: If data cannot be explained to an auditor, it should not be used by an AI model.
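Traceable lineage can be as simple as logging every transformation with a verifiable fingerprint of its input and output. The sketch below is a minimal illustration (function names, fields, and the sample data are assumptions); production systems would use a proper lineage or data-catalog tool.

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(records):
    """Stable hash of a dataset snapshot so lineage entries are verifiable."""
    blob = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

lineage = []  # append-only transformation log

def record_step(step, source, result, assumption=""):
    """Log one transformation: what was done, to what, and why."""
    lineage.append({
        "step": step,
        "input_hash": fingerprint(source),
        "output_hash": fingerprint(result),
        "assumption": assumption,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return result

raw = [{"batch": "B-001", "yield": 92.5}, {"batch": "B-002", "yield": None}]
clean = record_step(
    "drop records with missing yield",
    raw,
    [r for r in raw if r["yield"] is not None],
    assumption="missing yield means the run was aborted",
)

for entry in lineage:
    print(entry["step"], entry["input_hash"], "->", entry["output_hash"])
```

Note that the explicitly recorded assumption ("missing yield means the run was aborted") is exactly the kind of statement an auditor will ask about; if it cannot be written down, the transformation probably should not happen.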

7. Sound Model Design & Development

AI systems should follow best practices in software engineering and model development, emphasizing robustness, generalizability, and explainability.

Regler view: “Black box” AI has no place in patient-impacting decisions.

8. Risk-Based Performance Assessment

AI performance must be evaluated using fit-for-purpose metrics, including how humans interact with the system.

Regler view: Validation should assess the entire system, not just model accuracy.
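"Fit-for-purpose" usually means looking past headline accuracy to the errors that matter for the decision at hand. The sketch below (toy data, pure Python) shows how a safety-signal classifier with a reasonable accuracy can still be judged on the metrics that actually matter: missed cases (sensitivity) and nuisance alerts (false-alarm rate) that erode reviewer trust.

```python
def confusion(y_true, y_pred):
    """Counts for a binary classifier: 1 = safety signal, 0 = no signal."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

# Toy evaluation set (illustrative data, not from any real study)
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 0, 0, 1]
tp, fp, fn, tn = confusion(y_true, y_pred)

accuracy = (tp + tn) / len(y_true)
sensitivity = tp / (tp + fn)   # for a safety signal, missed cases matter most
false_alarm = fp / (fp + tn)   # too many alerts and reviewers start ignoring them

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} false_alarm={false_alarm:.2f}")
# accuracy=0.75 sensitivity=0.75 false_alarm=0.25
```

A full system-level assessment would add human-factors measures on top, such as how often reviewers override the model and whether alerts arrive in time to act on.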

9. Lifecycle Management

AI systems must be continuously monitored for data drift, performance degradation, and unintended behavior.

Regler view: AI validation is not a one-time event—it’s an ongoing obligation.
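One common drift check is the Population Stability Index (PSI), which compares the distribution of a model input or output between the validation baseline and live operation. The sketch below is a minimal implementation; the 0.2 alert threshold is a widely used rule of thumb, not a regulatory limit, and the sample data is invented.

```python
import math

def psi(expected, actual, bins=5):
    """Population Stability Index between a baseline sample and a live sample.
    Rule of thumb (an assumption, not a regulatory threshold): PSI above
    roughly 0.2 suggests drift worth investigating."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range values

    def frac(data, i):
        n = sum(edges[i] < x <= edges[i + 1] for x in data)
        return max(n / len(data), 1e-6)  # clamp to avoid log(0)

    return sum((frac(actual, i) - frac(expected, i))
               * math.log(frac(actual, i) / frac(expected, i))
               for i in range(bins))

baseline = [i / 10 for i in range(100)]  # e.g. a process readout at validation
live = [x + 3.0 for x in baseline]       # same readout after a process shift

print(round(psi(baseline, baseline), 4))  # 0.0: no drift against itself
print(psi(baseline, live) > 0.2)          # True: shifted distribution flags drift
```

Run on a schedule against every monitored input, a check like this turns "ongoing obligation" into an automated control with a documented trigger for revalidation.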

10. Clear, Essential Information

Users, stakeholders, and patients should receive plain-language explanations of what the AI does, how it performs, and where it may fall short.

Regler view: Transparency is a trust accelerator.
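Plain-language disclosure can be generated directly from the same system record used for governance, so it never drifts out of sync with the validated facts. The fields and wording below are illustrative assumptions, not a regulatory template:

```python
# A minimal plain-language summary generator for an AI system record.
# Field names and example wording are illustrative assumptions.
def plain_language_summary(card: dict) -> str:
    return (
        f"This AI system {card['what_it_does']}. "
        f"In testing it {card['how_it_performs']}. "
        f"It may fall short when {card['limitations']}."
    )

card = {
    "what_it_does": "flags manufacturing deviations that may need urgent QA review",
    "how_it_performs": "caught about 9 of every 10 urgent deviations",
    "limitations": "records are incomplete or written in shorthand",
}
print(plain_language_summary(card))
```

Keeping the three questions fixed (what it does, how it performs, where it may fall short) mirrors the structure the guidance itself asks for.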

AI is here to transform the way medicines are made

AI is already transforming how medicines are discovered, developed, manufactured, and monitored across their life cycle.

To realize its full potential, industry, regulators, policymakers, and companies’ legal, regulatory, and quality teams must work together to responsibly integrate AI into regulated operations. Organizations that adopt AI early—with the right governance, controls, and risk-based oversight—will be better positioned to improve quality, strengthen supply reliability, and sustain long-term regulatory confidence.

At Regler, we apply advanced AI and regulatory intelligence to analyze quality signals across manufacturing operations, audits, inspections, suppliers, and validation processes. Our platform combines standardized Quality Maturity Model (QMM) scoring, automated quality workflows, and predictive risk models to help organizations identify and mitigate quality risks before they escalate into recalls or drug shortages, supporting continuity of supply and patient safety at scale.
