FDA Workshop on AI in Drug Development: The Key Takeaways

On August 6, 2024, the Food and Drug Administration (FDA) and the Clinical Trials Transformation Initiative (CTTI) held a full-day workshop titled Artificial Intelligence in Drug and Biological Product Development. The workshop comprised four panels, with speakers drawn from the FDA, academia, advocacy groups, and industry. The panelists addressed a range of issues surrounding the use of AI in drug development, but nearly all of them called on the government to provide clearer rules governing these uses of AI and to dedicate more resources to the responsible development of AI, particularly through public-private partnership opportunities. Panelists also stressed the importance of educating the public about how AI can be used in drug development, which they argued would build trust in AI and spur further innovation.

Morgan Hanger, Executive Director of CTTI, delivered the opening remarks. She set the tone for the workshop by describing how AI has transformed every stage of drug research in recent years. She noted that while AI can be applied at many points in the development process, the workshop would focus primarily on clinical development, examining how AI could improve study designs and other aspects of drug development.

Following the welcome remarks, Patrizia Cavazzoni, M.D., Director of the FDA’s Center for Drug Evaluation and Research (CDER), delivered the keynote. Her remarks likewise surveyed the many ways AI is used in drug development. She noted that the FDA has received more than 300 drug application submissions incorporating AI, and that in March 2024 the agency published a paper, Artificial Intelligence and Medical Products, explaining how it plans to promote the safe and effective use of AI. Dr. Cavazzoni acknowledged, however, that industry has not yet had the clarity it needs around AI. She said that the FDA is developing risk-based guidance on the use of AI in drug development to give industry greater predictability and certainty, with the hope that more clarity will lead to more innovation.

Session 1: Leveraging Multidisciplinary Expertise to Improve Model Design

The first panel discussed the importance of multidisciplinary expertise in building AI models for drug research. The panelists stressed the need for experts from different fields to collaborate on AI models and to identify the full range of AI’s potential uses in drug development. One panelist observed that AI can itself foster cross-disciplinary collaboration, because it can explain and represent scientific concepts in different ways for different audiences, filling gaps in knowledge and improving shared understanding.

The panelists also discussed ways to improve AI tools used in drug research. They agreed on the importance of highlighting AI use-case successes in this area, however small, to demonstrate the technology’s promise for drug development, educate stakeholders about different use cases, and build trust in AI. The panelists also stressed that all stakeholders, including developers, users, and the general public, should support the adoption and integration of AI rather than resist it over unknown risks, because wider use and development will yield more valuable applications of AI and greater trust in it across the population.

Session 2: Creating the Data We Need from the Data We Already Have

The second panel discussed the types of data used in drug development and how AI can help drug developers address common data challenges. The panelists stressed the importance of research-ready data sets, and many said that most researchers and developers currently lack adequate data sets because data is not consistently available across diverse populations and settings. The panelists noted, however, that public-private precompetitive partnerships could be an effective way to produce data that researchers and developers seeking to use AI in drug development can readily access.

The panelists also addressed the problem of bias in AI data and models. They agreed that AI models can reinforce biases already present in data, particularly health care data, and discussed the need for standardized controls to monitor and evaluate bias across the entire lifecycle of AI models used in drug development. They also expressed hope that AI could be used to detect biases in data sets more quickly and accurately and to generate outputs that mitigate some of those biases.

The panel ultimately agreed that currently available data are insufficient to support the broad and safe use of AI in drug development, and that data need to be more transparent and accessible. The panelists urged the FDA to issue guidance on data transparency to steer industry efforts. They also suggested that the federal government explore additional product development pathways (such as conditional approval) to accelerate the collection of real-world performance data, and that it consider funding or partnership programs aimed at increasing data availability and transparency and improving AI model development and safety.

Session 3: Model Performance, Explainability, and Transparency

The third session featured a panel of practitioners who have applied AI in the drug development process. A central topic was the explainability and interpretability of AI models: whether people can determine how AI models, often viewed as “black boxes,” reach their decisions and why. The panelists argued that making AI models more interpretable and explainable is critical to the future of AI-enabled drug development. Regulators need to understand how AI models work in order to oversee them and to trust the results that AI-driven drug research can deliver, while industry needs that same understanding in order to trust the models and invest in them.

The panel also discussed what regulators could do to encourage AI-enabled drug development. The panelists suggested that the FDA could spotlight specific uses of AI as a way to build trust in the technology and in the credibility of drug developers who use it. As in earlier sessions, the panelists urged the FDA to clarify its rules on how AI may be used in drug development, and to consider establishing a clear regulatory framework for AI in drug development to resolve the unknowns and uncertainties that may deter investment and innovation in this area. Finally, one panelist asked the FDA to lower the barriers for companies seeking to use AI in drug research by offering grants, prizes, and other forms of funding.

Sessions 4 and 5: Identifying Gaps, Addressing Challenges, and Charting the Path Forward

In the final session, the panelists returned to many of the themes raised earlier in the day, calling for regulators to provide clearer rules and guidance and for the government to partner with and incentivize AI developers. This panel went further, however, arguing that regulators worldwide should work to align and harmonize AI-related terminology, best practices, and standards. The panelists also identified several barriers to the broad adoption of AI in drug research, including:

  • Keeping pace with new AI laws and regulations in the US and around the world.
  • Assembling and managing cross-functional teams to build and maintain AI-based systems.
  • The absence of broadly applicable rules or guidance for developing, validating, and deploying AI-based systems.
  • The lack of data on what has and has not worked to guide AI research and build trust.

The FDA hosts asked the panelists to share their ideas for public-private partnerships to support AI development, a recurring topic throughout the day. One panelist likened the current state of AI to the early days and rapid growth of the Internet, and suggested that the federal government invest substantially in setting rules and guidelines for the creation of, use of, and access to AI systems and data. Other panelists stressed the importance of global stakeholders and regulators reaching consensus on AI terminology and standards, including data standards, so that they are consistent and accessible worldwide. The panelists overwhelmingly agreed that far more stakeholders and government agencies need to engage and collaborate to resolve current challenges and chart the path forward for AI in drug research.

Jacqueline Corrigan-Curay, Principal Deputy Center Director of CDER, closed the workshop, echoing many of Director Cavazzoni’s opening remarks about the potential of AI.

The public workshop demonstrated that the FDA and the panelists are committed to harnessing AI’s full potential to improve drug research and safety and to deliver better outcomes for patients. A persistent theme, however, was that the FDA and the federal government must act to establish a stable regulatory framework and development path for the use of AI in drug development, whether through resources, partnerships, clear guidance, or actionable standards.