April 30, 2024, 9:51 AM EDT
(Originally published April 29, 2024, on Forbes)
The excitement around AI’s potential to solve healthcare’s biggest problems is palpable. For anyone who has used ChatGPT to generate a summary of a long report or create an image from a simple description, the technology can seem magical. Work that would take a person hours or days can be cranked out in mere seconds, and at a scale previously considered impossible. Imagine if we could do the same with human health: upload your genetic code to ChatGPT and ask it to identify disease risk factors and suggest personalized interventions and treatments.
While examples of AI identifying personalized treatments do exist, they are enabled by teams of scientists and medical professionals working together to help a single patient. That’s how one 82-year-old patient was able to find a unique treatment for his aggressive form of blood cancer that failed to respond to multiple rounds of chemotherapy and other available drugs. A team of researchers tested his cells against hundreds of drug cocktails, relying on machine learning models and robotic process automation to identify a medicine that has kept his cancer in remission for years.
However, to see widespread benefit from this type of technology, we need to eliminate the biases that already color our healthcare system. AI is only as good as its input and training data, and the questions we ask of it. When we build new systems, existing limitations can easily produce solutions that underserve a diverse population.
Take an example from the tech world: A decade ago, retail giant Amazon wanted to automate its online search for potential employees. To train the system, the company fed its algorithm the resumes of top applicants from the prior decade. Since most of the company’s engineers were men, the system taught itself to pick out male candidates, and not always for obvious reasons. Men are more likely to use words such as "executed" and "captured" on their resumes, so those keywords were prioritized. Amazon discontinued the system after the issue was identified.
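The mechanism is easy to reproduce. Below is a minimal, purely hypothetical sketch (synthetic resumes, invented hiring labels, no relation to Amazon’s actual system) of how a classifier trained on skewed outcomes latches onto words that merely correlate with the dominant group:

```python
# Minimal sketch of how a resume screener can learn gendered proxy words.
# All data here is synthetic and purely illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny training set skewed toward past hires who happened to use certain verbs.
resumes = [
    "executed migration captured requirements led team",   # hired
    "executed rollout captured metrics shipped product",   # hired
    "collaborated on migration supported requirements",    # rejected
    "organized rollout mentored team shipped product",     # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: words like "executed" and "captured" dominate,
# even though they say nothing about actual job performance.
for word, coef in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                         key=lambda pair: -pair[1]):
    print(f"{word:15s} {coef:+.2f}")
```

The point is not the model but the data: nothing in this pipeline is overtly discriminatory, yet the learned weights reward vocabulary that acts as a proxy for the overrepresented group.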
We face a similar challenge in healthcare. The FDA is the gold standard for drug development: the agency typically requires multiple rounds of human testing, in addition to prerequisite laboratory and animal testing, to make sure treatments are safe and effective. But historically, most participants in these trials have been white men. Why does this matter? Because different patient populations can have different and unexpected reactions to the same medicine, and we have no way of knowing until we have sufficient data to assess potential issues.
Moreover, patients with the same symptoms can be prescribed different treatments based on their gender, race and economic background, leading to disparities in health outcomes. Sadly, this has contributed to African American women in the U.S. having one of the highest maternal mortality rates in the world, several times higher than that of white women in the country, even when controlling for other socioeconomic factors.
If we continue to build AI models on conventional healthcare data, the resulting systems will inherit these biases.
So how do we avoid this?
To start, we need to think about collecting data in ways that go beyond the sources we have used in the past. This could include working with healthcare systems to capture more elements of each patient’s healthcare encounters, but also tapping into additional networks of databases.
Researchers at the University of California, San Francisco (UCSF) recently demonstrated such an approach in a study published in Nature Aging. They analyzed UCSF electronic health records, looking for potential indicators of Alzheimer’s disease that would arise before a patient was diagnosed with the condition. They then cross-referenced their findings with a database of databases that includes clinical trial information, basic molecular research, environmental factors and other human genetic data.
The Nature Aging study identified several risk factors common to both men and women, including high cholesterol, hypertension and vitamin D deficiency, while an enlarged prostate and erectile dysfunction were also predictive for men. For women, osteoporosis emerged as an important gender-specific risk factor. More importantly, the researchers were able to predict a person’s likelihood of developing Alzheimer’s disease seven years before the onset of the condition, pointing to ways we could develop personalized, low-cost Alzheimer’s risk identification that enables early intervention.
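In spirit, the prediction step resembles an ordinary supervised-learning setup. The sketch below is illustrative only and is not the UCSF team’s actual pipeline; the feature names, toy labels and model choice are all assumptions made for the example:

```python
# Hypothetical sketch: predicting a later Alzheimer's diagnosis from
# earlier EHR-derived risk factors. NOT the UCSF study's actual code;
# the data is synthetic and the label rule is a toy assumption.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Binary features loosely named after the reported risk factors:
# high cholesterol, hypertension, vitamin D deficiency, osteoporosis.
X = rng.integers(0, 2, size=(n, 4))
# Toy label: risk of later diagnosis rises with the number of factors present.
p = 0.05 + 0.15 * X.sum(axis=1)
y = rng.random(n) < p

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

In the real study, the features come from time-stamped health records captured years before diagnosis, which is what makes a seven-year lead time possible.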
While exciting, it’s also important to note the source of the data: UCSF’s electronic health records. Because the health system is based in one of the world’s richest regions, a similar database from a location with a different socioeconomic makeup may yield different predictive results.
How can we broaden such analyses to include a more diverse patient population? It will require a joint effort across all stakeholders—patients, physicians, healthcare systems, government agencies, research centers and drug developers.
For healthcare systems, this means working to standardize data collection and sharing practices. For pharmaceutical and insurance companies, this could involve granting more access to their clinical trial and outcomes-based information.
Everyone can benefit from combining data through safe, anonymized approaches, and the technology to do so exists today. If we are thoughtful and deliberate, we can remove existing biases as we construct the next wave of AI systems for healthcare, correcting deficiencies rooted in the past.
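As one illustration of what "safe and anonymized" can mean in practice, a common building block is keyed hashing: each institution replaces direct identifiers with an irreversible token locally, and records are then linked on the token. The sketch below is a simplified assumption, not a complete de-identification scheme; real deployments layer on governance, key management and techniques such as differential privacy:

```python
# Illustrative sketch of one anonymization building block: replacing direct
# identifiers with a keyed hash before sharing or joining datasets.
# The secret key and field names here are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # held by a trusted party

def pseudonymize(patient_id: str) -> str:
    """Deterministic keyed hash: the same patient maps to the same token
    across datasets, but the token cannot be reversed without the key."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

# Two institutions tokenize locally, then link records on the token alone.
record_a = {"patient": pseudonymize("MRN-12345"), "dx": "hypertension"}
record_b = {"patient": pseudonymize("MRN-12345"), "outcome": "remission"}
assert record_a["patient"] == record_b["patient"]  # linkable, not identifiable
```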
Let us ensure that legacy approaches and biased data do not virulently infect novel and incredibly promising technological applications in healthcare. Such solutions can truly represent unmet clinical needs and drive a paradigm shift in access to care for all healthcare consumers.