
Pharmaphorum: The 3 biggest mistakes we keep making in clinical operations


April 2, 2024, 10:13 AM EDT


By Andrew "AJ" Mills (originally published March 24th, 2024, on pharmaphorum)

____________________________________________________________________________________________________________

The life sciences industry is notoriously slow to change – for good reason, as there is significantly more at stake than in other industries. Even so, clinical research has been stuck on a slow treadmill for decades. Despite rapid scientific advances, the reality of drug development boils down to one depressing statistic: 90% of experimental medicines fail to demonstrate effectiveness when tested in patients.

 

As we enter a new era of modern science with striking technological advancements from gene therapy to artificial intelligence (AI), the industry is still running in place – making the same mistakes in clinical research, including:

Mistake #1 – failure to recruit a representative patient population

Mistake #2 – delayed response to operational problems

Mistake #3 – reliance on the same, biased data sources (which skews benchmarks and analytics alike)

 

While there are dozens of reasons – from cultural resistance to financial challenges – why the industry keeps running the same playbook, there is one fundamental cause of these recurrent errors: we lack access to centralized operational data drawn from across the entire industry, including trial sites (both academic medical centres and site networks), contract research organisations (CROs), trial sponsors, and patients.

 

The data exists – in fact, Tufts Center for the Study of Drug Development reports the typical late-stage protocol now collects about 3.6 million data points, three times the number collected 10 years ago. The technology exists, too, especially with the evolution of data lakes, warehouses, and AI. However, the challenge is the industry’s inability to harmonize data from various companies, each with different data collection methods, sources, lineages, formats, and… priorities.

 

It's time to get off the treadmill to nowhere and come together to aggregate all data and centralize access, so everyone across an organisation (or even across the industry) can see the whole picture based on unbiased data. Shared data mends gaps in understanding, so companies can see warning signals before it's too late to course correct. Just as hospital patients are hooked to monitors that sound an alarm at any aberration so medical personnel can react and prevent tragedy, clinical trials – which have become nearly as complex as the human body itself – need the same alerts.

 

Without data-driven "alert" systems, the industry will continue to err, hindering the development of life-changing therapies. Here's a closer look at our three most persistent missteps, and how to stop making them.

 

Mistake #1: Failure to recruit a representative patient population

Of course, the scientific method is most effective when testing a hypothesis on data that best represents the population. While this might be a well-known requirement for sound science, the industry fails to hit the mark time and again. Fewer than 4% of adults in the United States participate in clinical trials, and up to 85% of trials fail to recruit or retain a sufficient sample size – in other words, recruitment falls short in more than four out of every five trials. The numbers are worse when examining population representation or diversity in trials.

 

Regulators have since gotten involved: the US Food and Drug Omnibus Reform Act (FDORA) demands prioritization of representation in clinical trials, requiring drug and device makers to submit diversity action plans to the FDA ahead of pivotal trials. But the specifics are vague. The FDA was supposed to publish draft guidance no later than 29th December 2023, yet as of the date of publication – crickets. Three congressional lawmakers recently wrote to the agency to urge action. In the meantime, the industry must find a way to change its recruitment strategy.

 

One solution is to incentivise trial sites to recruit diverse patient populations – it's not only motivating, but also fair. Recruiting patients from historically underrepresented populations often takes longer and is more challenging, due to distrust of research, fears of healthcare discrimination, cultural and linguistic differences, low literacy levels, financial and transportation constraints, and lack of awareness of trial opportunities. Diverse patient recruitment is, in effect, another burden placed on sites. But offer sites a financial bonus for hitting milestones, plus additional resources, and they have reasonable motivation.

 

In addition to incentivising sites, leverage predictive analytics during enrollment planning to help recruit historically underrepresented patients faster and with greater success. Traditionally, clinical research managers have had fragmented visibility into trial operations and must rely on CROs or sites for real-time enrollment data. By the time a problem surfaces, it is nearly impossible to correct without huge delays and costs.

 

However, with predictive analytics based on both real-world and historical data, teams can be informed of expected enrollment performance before the first patient consents, so changes can be made. For instance, sponsors may reallocate site resources to target new regions more easily, reaching more patients from underserved geographies faster. By comparing planned sites with data-agnostic analytics, sponsors can optimize the number and type of sites to meet diversity and other key performance milestones.
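
To make the idea concrete, here is a minimal sketch in Python of diversity-aware site selection. Everything in it is an assumption for illustration: the site names, enrollment rates, and underrepresented-patient shares are invented, and a production system would derive them from historical and real-world data rather than hard-coded values.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    patients_per_month: float      # historical median enrollment rate (assumed)
    underrepresented_share: float  # fraction of enrollees from underrepresented groups (assumed)

# Hypothetical candidate sites; real figures would come from historical data.
CANDIDATES = [
    Site("Academic Centre A", 3.0, 0.15),
    Site("Community Site B", 1.8, 0.45),
    Site("Site Network C", 2.4, 0.30),
    Site("Regional Clinic D", 1.2, 0.55),
]

def plan_sites(candidates, target_per_month):
    """Greedily pick the highest-diversity sites until the projected
    enrollment rate meets the target."""
    chosen, rate, diverse = [], 0.0, 0.0
    for site in sorted(candidates, key=lambda s: -s.underrepresented_share):
        if rate >= target_per_month:
            break
        chosen.append(site)
        rate += site.patients_per_month
        diverse += site.patients_per_month * site.underrepresented_share
    return chosen, rate, diverse / rate

sites, rate, share = plan_sites(CANDIDATES, target_per_month=5.0)
print([s.name for s in sites])
print(f"Projected: {rate:.1f} patients/month, {share:.0%} from underrepresented groups")
```

On these invented numbers, the plan selects the three highest-diversity sites, projecting 5.4 patients per month with roughly 41% of enrollees from underrepresented groups – a simple demonstration of weighing raw enrollment speed against representativeness when choosing a site mix.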

 

 

“Sponsors are starting to leverage advanced data analytics to better predict the progress of trials,” said Marie Rosenfeld, former senior vice president of clinical operations at Astellas Pharma, in an interview. “There’s a huge advantage in being able to leverage robust statistical monitoring to see trends in data and be able to identify trials that might be going off track before it is too late. Ultimately, that will save sponsors a lot of time and money – particularly with DEI initiatives.”

 

Mistake #2 – Delayed response to operational complications before (study start-up), during (study conduct), and after (post-market surveillance) the trial

 

The clinical research industry is constantly looking in the rear-view mirror, making decisions based on retrospective data. Issues aren’t detected early enough to implement change. But with the right analytics and data sets, teams can rely on intelligence systems to signal an issue early on in the process.

 

Forward-looking analytics, combined with real-world and historical data, provide benchmark comparisons that let companies forecast a trial's trajectory under its current parameters against expected performance. This is study forecasting: a comprehensive approach to timeline modelling that leverages validated computational models and methods to generate start-up and enrollment curves. With the ability to compare multiple scenarios, and to save and access multiple projections for future reference, companies can see the odds of achieving key milestones like LPI (Last Patient In) under different parameters. These capabilities can also reliably identify potential candidate countries and site facilities based on study phase and indication criteria.
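
The following is a simplified, hypothetical illustration of study forecasting via Monte Carlo simulation. The Poisson enrollment model, rates, patient target, and scenario parameters are assumptions chosen for clarity, not the validated commercial methods described above.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for sampling a Poisson-distributed count (small lambda)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_lpi(n_sites, rate_per_site_month, target_patients, n_runs=5_000, seed=7):
    """Return the months-to-LPI for each simulated trial run."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        enrolled, month = 0, 0
        while enrolled < target_patients:
            month += 1
            # Each site's monthly enrollment is approximated by a Poisson draw.
            enrolled += sum(poisson(rng, rate_per_site_month) for _ in range(n_sites))
        results.append(month)
    return results

# Compare two site-footprint scenarios against an 18-month LPI deadline.
for n_sites in (20, 30):
    months = simulate_lpi(n_sites, rate_per_site_month=0.5, target_patients=200)
    on_time = sum(m <= 18 for m in months) / len(months)
    print(f"{n_sites} sites: P(LPI within 18 months) = {on_time:.0%}")
```

Running the two scenarios side by side shows how adding sites shifts the probability of hitting the deadline – the kind of saved, comparable projection the text describes.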

 

For example, Lokavant analysed a closed trial that went off-course by feeding the trial's day-by-day operational data into a predictive analytics model trained on historical data from real-life look-alike trials. Within just the first 30 days of data, Lokavant uncovered a crucial insight that later proved accurate: the trial would miss its LPI deadline by 18 months. In reality, the sponsor waited nine months to make the first of eight change orders to try to course correct, piling on significant costs and delays.

 

In another retrospective analysis, this time of a late-stage, multi-centre oncology study, predictive analytics technology anticipated an emerging rise in data management issues; addressing it at the start of the trial would have prevented site closures, delays, and $500,000 in lost patient enrollment costs. Specifically, an automatic alert would have been triggered between Day 25 and Day 30 of the study, two months before the issue occurred and seven months before the team realized the problem.
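
As an illustration only, here is one shape such an automatic alert could take: a rolling average of daily data queries compared against a benchmark built from comparable historical trials. The data, window size, and z-score threshold are invented for the example and are not Lokavant's actual method.

```python
from statistics import mean

def query_alert(daily_queries, benchmark_mean, benchmark_sd, window=7, z=2.0):
    """Return the first study day whose rolling-average query rate exceeds
    the historical benchmark by more than z standard deviations."""
    for day in range(window, len(daily_queries) + 1):
        rolling = mean(daily_queries[day - window:day])
        if rolling > benchmark_mean + z * benchmark_sd:
            return day
    return None

# Simulated study: data queries creep upward from ~5/day toward ~13/day.
observed = [5, 6, 5, 7, 6, 6, 7, 8, 9, 9, 10, 11, 12, 12, 12, 13]
day = query_alert(observed, benchmark_mean=6.0, benchmark_sd=1.5)
print(f"Alert would fire on study day {day}" if day else "No alert fired")
```

On this toy series, the alert fires on day 13 – well before the cumulative totals alone would look alarming, which is the point of benchmarked, forward-looking monitoring.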

 

Mistake #3 – Reliance on the same biased data sources

Every pharmaceutical company naturally trusts its own data, creating a bias born of "navel gazing" rather than looking at a wider spectrum of data. Trial sponsors also often rely on the same preferred CROs, sites, and data vendors, which yields homogenous and, therefore, biased data sets.

 

Compounding this issue is the fact that data is siloed in individual departments, with little visibility across teams. As a result, it is difficult – if not impossible – to get an accurate picture of what "good" looks like. There is no reliable, universal benchmark holistically representative of the industry's performance – not even therapeutic area-specific benchmarks.

 

Rather than endlessly returning to the same well of data, seek source-agnostic data, especially real-world data. Fundamentally, data sources must be unbiased, standardized, centralized, and substantive, while remaining relevant to the specific question at hand – for example, site activation performance at Japanese sites compared with similar studies at similar sites in other regions.
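
A toy version of that kind of regional benchmark might look like the following. The activation times are invented for illustration; a real comparison would pool data from many look-alike studies rather than a hard-coded dictionary.

```python
from statistics import median

# Assumed days from site selection to activation, pooled from look-alike studies.
activation_days = {
    "Japan":         [92, 105, 88, 110, 97],
    "United States": [61, 75, 70, 66, 80],
    "Germany":       [72, 68, 85, 77, 74],
}

# Compare each region's median activation time against the pooled median.
overall = median(d for days in activation_days.values() for d in days)
for region, days in activation_days.items():
    delta = median(days) - overall
    print(f"{region}: median {median(days)} days ({delta:+} vs. pooled median)")
```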

 

Without data integrity, analytics models – AI most of all – will produce poor results; they will simply reach flawed conclusions faster than humans can.

 

Too little, too late

 

All three of these repeated mistakes impact trial timelines and outcomes. Yet, we can prevent them by changing the way we capture, source, and access data. Data sharing should no longer be taboo. Reliable, objective recommendations can then replace subjective ones, better reflecting reality and guiding decision-making throughout every stage of a clinical trial.