Called to Be: Society & Democracy

Title: Incubating at the intersection of AI and health

Author: Shi En Kim
Date Published: July 11, 2024

Georgetown faculty weigh in on ethics, patient care, and research

Amid the growing hype around the promise of artificial intelligence (AI) comes a healthy dose of wariness about the potential for AI to create distrust, exploit humans’ hard work and creativity, and disrupt the labor market.

In health care, experts are grappling with the technical and ethical scope of AI use.

“It’s still so new that people can get over-excited by the possibility of the technology without learning what the conditions of responsibility are,” says Maggie Little, professor of philosophy, senior research scholar at the Kennedy Institute of Ethics, and founding director of the Ethics Lab.

Many Georgetown researchers are working at this frontier, considering how best to harness AI while also exploring its ethical questions. Georgetown is well positioned for this work, given the strength of interdisciplinary collaboration across the sciences and humanities, along with the academic health system partnership with MedStar Health.

“Georgetown has world-leading experts in bioethics, clinical ethics, and ethics of AI. And we’ve got faculty who care about these issues across the university—in the School of Medicine, the biology department, the computer science department,” says Joel de Lara, teaching professor in philosophy at the Ethics Lab, and coordinator of the Lab’s new initiative in AI, Health, and Ethics. His verdict: “There’s a lot of potential for Georgetown to be a leader at the intersection of AI and health.”

The power of AI

Artificial intelligence refers to the ability of computers to solve complex tasks generally assumed to require human intelligence. Creators train AI models on large amounts of data; during training, the algorithms tease out the variables that matter for a given objective, a process called machine learning.
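
As a rough illustration of what that training step looks like in practice, here is a minimal sketch using synthetic data in place of real patient records; the dataset, features, and model choice are all hypothetical, not drawn from any Georgetown project.

```python
# A minimal sketch, with synthetic data standing in for de-identified patient records,
# of what "training a model on data" means: the algorithm is shown many labeled
# examples and adjusts itself to minimize error against a set objective.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical tabular dataset: rows are patients, columns are features
# (e.g., age, lab values, vitals), and labels mark a health outcome.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Feature importances are one rough way a fitted model "teases out" which
# variables matter most for its objective.
print(sorted(model.feature_importances_, reverse=True)[:5])
```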

In the last decade, the application of AI has exploded across fields and penetrated everyday life. Generative AI tools like ChatGPT and DALL-E have entered the common lexicon. AI is used to sift through data archives, in facial recognition, and for data analytics in research settings. It has had a digital hand in search engines and personalized recommendation systems for media and online shopping platforms.

Many experts think health care is particularly well-suited for AI-driven transformation. Years of medical records, in some cases at the population level, provide good training data for an AI model. Because health care decisions are multivariable and can be life-or-death, a computer can help distinguish real signals from noise to support positive health outcomes.

“Health care is complicated; people are complicated,” says Peter McGarvey, professor in biochemistry and molecular and cellular biology. “You read stories about doctors having spent decades of their life figuring out little bits of data and how to put them together. Computers do this faster.”

One area where AI is flourishing is radiology. AI can take stock of seemingly unrelated information and draw correlations that a human might overlook, while offering increased speed and accuracy. One AI-based detection program, for example, recorded 70% fewer false-positive errors than traditional software at discerning breast cancer in mammography images. In another study, an AI algorithm was 19% more accurate than two radiologists at diagnosing hip fractures. In 2022, 75% of all FDA-approved AI devices were used for radiology.

Beyond cutting-edge research, AI can help with routine administrative tasks, such as summarizing doctor-patient conversations, updating electronic health records, and sending prescriptions to pharmacies. Several hospitals across the U.S. are already putting AI scribes and chatbots to work.

Using AI in these ways could help alleviate health care worker burnout, a widespread challenge in the industry. For every hour of patient visits, clinicians spend roughly two hours on paperwork, which they often finish on personal time. Automating clerical tasks frees workers to be more engaged with patients. By reducing workload, AI may indirectly improve the quality of patient care a health professional can deliver, according to Nawar Shara, chief of research data science, founding co-director of the AI CoLab, co-director of the MHRI Center for Biostatistics, Informatics, and Data Science (CBIDS), and associate professor of medicine at Georgetown.

“Machines don’t get tired like humans do,” she says.

Shara has witnessed the benefits of AI in health care first-hand in her own research. In 2018–19, her team ran a pilot project using AI-driven voice-assistant technology to monitor patients with chronic heart failure, a condition that afflicts 6.2 million people in the U.S. Patients are mostly left to manage on their own; the current standard of care is for patients to dial 911 in an emergency, which is often too late for doctors to make a meaningful difference in health outcomes.

“When you have this kind of disease and you don’t manage it well, you’re probably in the ER every other week,” Shara says.

Shara and her team supplied 30 MedStar Health patients with voice-assistant devices equipped with natural language processing. Every day, a preprogrammed smart speaker would ask participants about their symptoms and remind them to follow their health regimens. The virtual chatbot would then analyze patients’ responses and steer the conversation based on the reported symptoms. If a patient reported certain predetermined severe symptoms, the AI tool would alert the patient’s health team for intervention.
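
A minimal sketch of that kind of check-in and escalation logic might look like the following; the symptom list, follow-up questions, and alert criteria below are entirely illustrative, since the pilot system’s actual rules are not described here.

```python
# Hypothetical sketch of the daily check-in and escalation flow described above;
# the real system's symptom list, follow-up questions, and alert criteria are not given here.
SEVERE_SYMPTOMS = {"chest pain", "fainting", "severe shortness of breath"}  # illustrative only
FOLLOW_UPS = {
    "swelling": "Is the swelling in your legs worse than yesterday?",
    "weight gain": "How many pounds have you gained since your last check-in?",
}

def daily_check_in(reported_symptoms: set) -> str:
    """Steer the conversation based on reported symptoms; flag emergencies to the care team."""
    if reported_symptoms & SEVERE_SYMPTOMS:
        return "ALERT: notify the patient's care team for intervention."
    for symptom, question in FOLLOW_UPS.items():
        if symptom in reported_symptoms:
            return question
    return "Thank you. Remember to take your medication and log your weight today."

print(daily_check_in({"swelling"}))    # returns a follow-up question
print(daily_check_in({"chest pain"}))  # triggers an escalation
```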

The team’s clinical trial demonstrated how AI could be deployed for daily patient monitoring from the comfort of the patient’s home. The device doesn’t eliminate the need for human physicians, but it shows how human resources and virtual technology can synergize to deliver personalized regular care. Shara has seen positive patient adoption as well. When the 90-day feasibility study was over, “a few patients asked to keep the device going,” Shara says. Some patients said that it was one of the few ways they could remember to take their daily medication.

Shara’s team is now exploring AI’s ability to predict the financial burden of complex chronic disease conditions and to monitor gastrointestinal cancer surgery patients.

“AI will absolutely change health care,” Shara says. “In 10 years, health care will take a completely different shape.”

“It’s still so new that people can get over-excited by the possibility of the technology without learning what the conditions of responsibility are.”

—Maggie Little, Ph.D., Professor of Philosophy, Senior Research Scholar at the Kennedy Institute of Ethics, Founding Director of the Ethics Lab

Ethical perils

For all of AI’s promise in health care, experts also see potential peril. AI can exacerbate inequities in health, erode patients’ trust in their providers, and make errors that reduce care quality and patient well-being.

One concern is the transfer of human bias into AI models. In 2019, a landmark study in the journal Science showed that a commercial algorithm for predicting health risk produced racially biased outcomes: the software mistook the disproportionately low health care spending among Black patients, a reflection of unequal access to care, as a sign that they were healthier than white patients. As a result, the program excluded a disproportionate number of Black patients from the extra help they needed.
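
The mechanism behind that finding can be reproduced with invented numbers: when spending is used as a stand-in for health need, a group that faces barriers to care appears healthier than it is. The sketch below uses synthetic data for illustration only, not the study’s data.

```python
# Synthetic illustration (not the study's data) of proxy-label bias: when spending is
# used as a stand-in for health need, a group with less access to care appears healthier.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
need = rng.gamma(2.0, 1.0, n)                  # true (unobserved) health need, same distribution for all
group_b = rng.random(n) < 0.5                  # half the patients belong to group B
access = np.where(group_b, 0.6, 1.0)           # group B faces barriers to care
cost = access * need + rng.normal(0, 0.1, n)   # observed spending, the proxy label

# Even a perfect predictor of cost inherits the access gap: flag the top 10% by cost.
flagged = cost > np.quantile(cost, 0.9)
print("share of group A flagged:", round(flagged[~group_b].mean(), 3))
print("share of group B flagged:", round(flagged[group_b].mean(), 3))
# Group B is flagged far less often despite identical underlying need.
```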

Racial inequities surfaced in another AI algorithm, this one for predicting which children will develop sepsis. The developers at Duke University Hospital found that doctors using the program took longer to order critical blood tests for Hispanic children than for white children. In examining the discrepancy, the developers suspected that the algorithm had observed Hispanic children taking longer on average to receive a diagnosis, perhaps because their families couldn’t speak English and needed more time to communicate with the doctor, and concluded that this group was at lower risk of sepsis.

Problems often emerge only after products have been released at scale, indicating the need for more due diligence during the testing phase. This due diligence starts with avoiding the trap of over-deference to AI and pausing to question a computer-rendered decision, Little says. It also requires gathering a training dataset large enough to accurately represent the general population, so that the AI model can work out genuine correlations.

Due diligence can also involve brainstorming ways the training data set might be inherently biased. Technical tests of the prototype need to be comprehensive enough to cover the diverse scenarios the AI might encounter in the real world. De Lara recommends bringing broad expertise and lived experience into the design process, for example by convening diverse advisory groups whose members can flag potential areas of bias.
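
One concrete form such testing can take is disaggregated evaluation: checking a model’s error rates separately for each patient subgroup before release. The following is a minimal sketch under that assumption, with toy data standing in for a real held-out test set.

```python
# Minimal sketch of disaggregated evaluation: compute an error metric
# (here, the rate of missed positive cases) separately for each patient subgroup.
import numpy as np

def false_negative_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    positives = y_true == 1
    missed = (y_pred == 0) & positives
    return float(missed.sum() / max(positives.sum(), 1))

def audit_by_group(y_true, y_pred, groups) -> dict:
    """Report the false-negative rate per subgroup so gaps are visible before deployment."""
    return {g: false_negative_rate(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

# Toy stand-in for a held-out test set with a demographic column.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
groups = rng.choice(["A", "B", "C"], size=1000)
print(audit_by_group(y_true, y_pred, groups))
```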

It’s not acceptable to leave society at the mercy of product developers to do their own checks, Little says. She wants to see regulations requiring AI products to undergo standardized audits or follow safety checklists. Industry-wide regulatory structures raise collective ethical standards while leveling the playing field for those who would voluntarily audit their programs in the first place.

“If you don’t have regulations, then you lose market share when you’re the good guy,” Little says.

The law is trailing the pace of the AI boom, but several state and national efforts are underway to close the gap. In 2023, President Biden signed an executive order establishing guiding principles for the safe and equitable development of AI. In response, 28 health care providers and payers pledged to use AI responsibly, committing to transparency practices such as informing users when AI is involved and tracking AI’s outcomes after product release.

Industry and experts are also coming together to set best-practice standards. The Coalition for Health AI, a consortium of academic and corporate experts in health and data science, released a 24-page blueprint on the trustworthy use of AI in health care. The consortium counts among its founders the FDA and tech giants Google and Microsoft.

There’s no one-size-fits-all approach to the ethics of AI, de Lara says. Additionally, ethical analyses need to be woven into every stage of product development, not just in the final step.

“We need to get away from thinking of ethics as something additive,” he says.

Cultivating an AI-informed society

While experts don’t expect AI to replace many humans in health care jobs in the near term—physicians are at most signing off on AI’s conclusions rather than handing over the reins—AI is changing the workforce by creating a need for doctors who are AI savvy.

“I think a doctor with AI experience will be more relevant in the future of health care than a doctor with no AI experience,” Shara says.

“A doctor with AI experience will be more relevant in the future than a doctor with no AI experience.”

—Nawar Shara, Chief of Research Data Science, Founding Co-Director of the AI CoLab, Co-Director of the MHRI Center for Biostatistics, Informatics, and Data Science, and Associate Professor of Medicine

Researchers at Georgetown are implementing various educational programs to cultivate AI awareness in the health care sector.

Shara and McGarvey are part of AIM-AHEAD, the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity. The multi-million-dollar program’s goal is to diversify representation at the nexus of AI and health care through education and community building.

Within the consortium, the two help lead the Data Science Training Core and the Data and Research Core, which involves working with students and faculty from historically Black colleges and universities on AI’s benefits and ethical quandaries. Part of the curriculum includes learning to wield AI to address disparities in health, empowering leaders to tackle issues their communities face.

An AI-trained health care workforce means these professionals will have the technical background to develop AI tools that potentially do less harm, as well as the skills to critically evaluate them. “If you don’t know what you don’t know, then you don’t know what probing questions to ask,” Little says.

Little’s work includes providing guidance on the ethics of harnessing big data, including social media activity and cellphone location data, to predict HIV risk in sub-Saharan Africa. Determining an individual’s HIV risk is complex, involving various health, familial, and community-level factors that AI has the potential to help untangle. However, researchers also warn of ethical pitfalls, especially if data collection practices start to look like mass surveillance.

Little was part of an 18-member working group, funded by the Bill & Melinda Gates Foundation, that drafted guidelines on privacy protection that won’t compromise the space for meaningful innovation. Their core principle: “Do not collect if you cannot protect.” Her team advocated for an independent review board to evaluate data collection protocols, public disclosure of any programs in the name of transparency, and community voice and involvement in decisions around the design and potential deployment of such programs.

The last guideline is the most challenging to implement. Public education in AI is an area that needs further research and investment, the team says. The Ethics Lab recently launched the AI, Ethics, & Health Initiative, building a course that combines theory with exercises, projects, and guest speakers working inside the new world of AI and health.

The course’s mission is “getting ahead of the curve,” according to de Lara: to nurture future health care professionals and industry leaders right as the world reaches the cusp of this technological inflection point. Enrollment in the course has maxed out for two semesters in a row, and there’s a waitlist.

Georgetown’s interdisciplinary approach embraces the principle that good AI use requires strengthening our human ties: taking the time to cultivate channels of communication among researchers and practitioners from different fields, between providers and patients, and from experts to the public on benefits and risks. It also means giving the public a forum to seek redress when AI systems go awry.

What dictates the impact of AI isn’t just advancement in computing power, but also the quality of human input, from designers to reviewers to the end user. As much as the world is quick to experiment with AI, the hype should be tempered with caution. “There is no such thing as a neutral AI tool,” de Lara says.

For sources and other AI research links at Georgetown, see our digital magazine.
