Ensuring Patient Centricity With AI

May 2, 2024, 12:33 PM EDT

(Originally published May 1, 2024, on Applied Clinical Trials.)

______________________________________________________________________________________________

ACT: How is artificial intelligence (AI) currently being used in clinical trials to ensure patient centricity?

Nambisan: It's a good question, especially since AI has many different use cases; it's a tool, not necessarily a solution. There are a few different ways we can leverage AI in clinical research specifically. The first thing to note is that clinical research is primarily a data collection exercise, and AI needs to be fueled by data to create any kind of logical output that actually provides value. One of the ways we've seen this working is real-world data analysis.

There is often a difference between what we think is providing value to patients and what actually is. So it starts with looking at data on a specific indication or condition, understanding how treatments are actually working to address outcomes, and using those outcomes to identify opportunities for new clinical research, for repurposing, and for existing and new drug development. That's one area where real-world data analysis, leveraging large, multi-dimensional datasets, comes in.

And when I say real-world data, I'm not just speaking about claims data or EMR (electronic medical record) data; I'm also thinking about laboratory data and specialty pharmacy data, and bringing all of that together in a manner that can be leveraged for analysis. There are multiple levels involved in that. First, if you're going to bring those datasets together to triangulate a group of patients and understand their progression and their outcomes in relation to treatment, you need to harmonize the data and determine which attributes belong to which patient, which is handled right now through automation and tokenization. Then there's the analysis of that data: with a big enough dataset, you can identify the most likely predictors of outcomes and determine which subpopulations you want to address in a future clinical trial. So those are some of the use cases we see around real-world data analysis.
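To make the tokenization idea concrete, here is a minimal Python sketch. The field names, salt handling, and bare salted hash are illustrative assumptions only; production tokenization services use certified, privacy-preserving multi-token schemes, not a single hash.

```python
import hashlib

def tokenize_patient(first: str, last: str, dob: str, zip_code: str, salt: str) -> str:
    """Derive a privacy-preserving token from normalized identifiers.

    Illustrative only: real tokenization vendors use hardened, certified
    schemes. Normalizing case and whitespace lets the same patient match
    across differently formatted source systems.
    """
    normalized = "|".join(p.strip().lower() for p in (first, last, dob, zip_code))
    return hashlib.sha256((salt + normalized).encode("utf-8")).hexdigest()

# Hypothetical de-identified records from two sources: claims and lab results.
claims = [{"token": tokenize_patient("Ann", "Lee", "1970-01-01", "02139", "s3cret"),
           "diagnosis": "E11.9"}]
labs = [{"token": tokenize_patient("ann", "lee ", "1970-01-01", "02139", "s3cret"),
         "hba1c": 7.8}]

# Link the datasets on the shared token rather than on raw identifiers.
linked = [{**c, **l} for c in claims for l in labs if c["token"] == l["token"]]
print(linked)  # one harmonized record: diagnosis + lab value, no raw PII
```

Once records are linked this way, the harmonized patient-level data can feed the outcome-prediction and subpopulation analyses described above.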

I think there are other use cases around patient identification and recruitment. Similarly, you can bring together real-world or healthcare datasets to understand where patients are getting care today. Through that, you can triangulate where there might be an opportunity to reach a patient, whether through one of the multiple providers they see, through providers in their network, or through referrals to the practicing providers who are caring for that patient.

Another area we've seen this is protocol development, where there's an opportunity to effectuate change much earlier in the process. Each step in the value chain of a clinical research effort starts with an assessment of where there may be opportunity, and then the creation of a protocol synopsis to ask: Can I actually feasibly run this protocol? Does this protocol make sense? Are there enough patients? Is there risk associated with the operations of enrolling this many patients for this study? If all of those answers are yes, you go to the next step and develop a true, full protocol that you can train a set of sites on. That obviously begs the question of which sites you're going to use for recruitment, and so on. There are incredible amounts of additional cost at each step along the way; by the time you're actually in the clinic, that's when it's the most costly and the most time-consuming. So effectuating change early on, identifying risks and operational issues in a particular design or plan, notably at the protocol development stage, can not only save a lot of money but also accelerate the timeline of the trial, because an issue discovered later may require an amendment that delays things by six months to a year.

AI can now process the protocol in flight, as you're drafting it, to assess questions like these: First, given the eligibility criteria, the inclusion and exclusion criteria you're specifying, is it even feasible to recruit 500 of these patients across these countries? Second, how much burden would there be on a site to run this? Will you see compliance issues and deviations in the trial based on what's specified in the protocol? And how much burden is there on the participants themselves to complete all of these assessments and endpoints over a period of time, coming back into the clinic? All of that can be assessed in natural language now. Of course, with the advent and expansion of LLMs (large language models), it's much easier to process these, although there were rudimentary natural language processing approaches to this over the last 10 years as well. I could probably spend another 10 minutes on this, but I'll pause there.
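As a rough illustration of the in-flight feasibility check described above, the sketch below estimates how many patients in a real-world dataset would satisfy draft eligibility criteria. The dataset fields, thresholds, and hand-coded predicates are hypothetical; in practice, systems would extract criteria from the protocol text with NLP or an LLM rather than hard-coding them.

```python
from typing import Callable

Patient = dict  # e.g., {"age": 54, "hba1c": 8.1, "on_insulin": False}

# Draft inclusion/exclusion criteria as simple predicates (hypothetical values).
inclusion: list[Callable[[Patient], bool]] = [
    lambda p: 18 <= p["age"] <= 75,
    lambda p: p["hba1c"] >= 7.0,
]
exclusion: list[Callable[[Patient], bool]] = [
    lambda p: p["on_insulin"],
]

def is_eligible(p: Patient) -> bool:
    # A patient qualifies if every inclusion rule passes and no exclusion rule fires.
    return all(rule(p) for rule in inclusion) and not any(rule(p) for rule in exclusion)

# Toy real-world cohort; in practice this would be a large linked dataset.
cohort = [
    {"age": 54, "hba1c": 8.1, "on_insulin": False},
    {"age": 80, "hba1c": 9.0, "on_insulin": False},
    {"age": 45, "hba1c": 7.5, "on_insulin": True},
]

eligible = [p for p in cohort if is_eligible(p)]
print(f"{len(eligible)} of {len(cohort)} patients eligible")
# If the eligibility rate is too low to recruit the target 500 patients across
# the planned countries, the criteria can be revised before the protocol is final.
```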
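Participant burden can be scored from a draft protocol in a similarly simple way. In this sketch, the visit names, assessment durations, and burden threshold are all assumptions for illustration; the point is that once a schedule of assessments is machine-readable (for example, extracted by an LLM from the draft), it can be flagged for burden while the protocol is still being written.

```python
# Hypothetical schedule of assessments from a draft protocol.
schedule = {
    "screening": ["consent", "blood_draw", "ecg", "questionnaire"],
    "week_4":    ["blood_draw", "questionnaire"],
    "week_12":   ["blood_draw", "ecg", "questionnaire", "imaging"],
}

# Assumed per-assessment durations in minutes (illustrative estimates).
minutes_per_assessment = {
    "consent": 45, "blood_draw": 15, "ecg": 20,
    "questionnaire": 30, "imaging": 60,
}

total_visits = len(schedule)
total_minutes = sum(
    minutes_per_assessment[a] for visit in schedule.values() for a in visit
)

print(f"{total_visits} clinic visits, ~{total_minutes / 60:.1f} hours of assessments")
if total_minutes > 240:  # arbitrary threshold for illustration
    print("Flag: high participant burden; consider remote or optional assessments")
```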