
FDA Issues Draft Guidance Regarding AI Models Used in Support of Regulatory Decisions Highlighting Changing Risks and Opportunities for IP Strategy

In early January 2025, FDA issued draft guidance titled Considerations for the Use of Artificial Intelligence to Support Regulatory Decision Making for Drug and Biological Products (“Draft Guidance”), regarding the use of artificial intelligence (AI) to support FDA decision-making in the drug product lifecycle. The Draft Guidance provides long-awaited insight into the Agency’s expectations for companies to establish and evaluate the credibility of an AI model used in support of regulatory decisions. Notably, the Draft Guidance was published in the waning days of the Biden Administration, and public comments are due by April 7, 2025. The extent to which the incoming Trump Administration will align with the approach described in the Draft Guidance remains to be seen. Given the regulatory freeze issued by Executive Order on January 20, 2025, incoming FDA leadership will be evaluating recent FDA activity to assess the Agency’s path forward.

The Draft Guidance contemplates additional disclosure about AI models to meet FDA expectations, which may have significant implications for corporate intellectual property strategy going forward. This highlights the need for coordination between Regulatory and IP teams to ensure each is aware of what material related to a company’s AI technology will be submitted to FDA.

The Draft Guidance is directed to the use of AI to produce information or data intended to support FDA decision-making on the safety, effectiveness, or quality of drugs, including biological products. It does not address the use of AI in other contexts, such as drug discovery or uses for operational efficiencies that do not impact patient safety, drug quality, or the reliability of results from a clinical or nonclinical study.

In the Draft Guidance, FDA sets out a seven-step framework for establishing and documenting the credibility of AI models used to support regulatory decision-making. Under the framework, an extensive description of an AI model, including model inputs and outputs, architectures used, features, and parameters of the model, as well as detailed training data and methods, may need to be submitted to FDA in the form of a credibility assessment report.

According to the Draft Guidance, the level of detail on the inner workings and training of AI models that should be included in credibility assessment reports will depend on the risk presented by the use of the AI model. Risk assigned to AI model use is based on two factors: (1) model influence – that is, how much significance is placed on evidence derived from the AI model as compared to other evidence, and (2) decision consequence – that is, the significance of an incorrect decision. The higher the risk posed by use of an AI model, the more detail FDA would expect to be disclosed. For example, use cases where decisions lean heavily on AI model results, and where an incorrect or inaccurate result could cause patient harm, would be deemed high risk.

As a result, the level of detail at which AI models may be reported in regulatory filings, as contemplated in the Draft Guidance, can go beyond what otherwise or ordinarily would have been divulged, for example in marketing materials. Details that may often be held as trade secrets, such as sensitive training data, model architecture, and model parameter details, may be subject to an expectation of disclosure to FDA. IP and Regulatory teams will benefit from close communication and collaboration to properly identify confidential information and to communicate with FDA to seek appropriate protections for trade secrets, such as through Exemption 4 of the Freedom of Information Act (FOIA).

Any increase in disclosure expectations warrants careful consideration of corresponding IP strategy. For example, careful consideration should be given to whether to protect technology as a trade secret or to pursue patent protection: if aspects of the technology will be subject to new disclosure expectations, this could complicate later attempts to obtain a patent or to preserve trade secret protection. Timely filing for patent protection, ahead of any disclosure, may therefore be prudent in some cases.

Increased FDA disclosure expectations may also present opportunities to obtain information regarding AI use by competitors that would not otherwise be publicly available. Such information could be used to provide a stronger basis to identify and assert patent infringement or enforce other IP. Review of competitors’ approval packages on FDA’s publicly available database may be a helpful resource in the future.

As Regulatory teams prepare to respond to the Draft Guidance, it is important that they not only consider their regulatory disclosure strategy but also communicate regularly with their IP teams to ensure that everyone is aware of what information is going to be disclosed to FDA and when. Likewise, IP teams can provide valuable input to Regulatory teams when crafting FDA submissions, helping to structure protections against disclosures that may impair a company’s IP position.
