Your doctor could be a robot sooner than you think.

The pandemic and its toll on the health care sector spurred interest in the role artificial intelligence can play in easing burdens and improving efficiency, particularly via chatbots that tackle routine tasks like scheduling appointments and billing issues. But evidence is building that chatbots are already pretty good at giving health advice.

Most recently, researchers from the Cleveland Clinic and Stanford University asked ChatGPT, the robot brain developed by the firm OpenAI, 25 questions about heart disease prevention. The online oracle’s answers earned a solid B, hitting the mark on 21 of the questions — 84 percent, according to a human reviewer.

ChatGPT’s responses were appropriate for a broad range of recommendations, including questions about how to lose weight and lower cholesterol and about the meaning of certain coronary calcium scores.

“That surprised us and suggested that the way it was trained, it was able to pick up even nuanced information, presumably from the internet, and condense it into fairly simple responses,” Ashish Sarraju, a cardiologist at the Cleveland Clinic, told Ben.

But the algorithm struck out on some other nuanced questions, like those about exercise: It “firmly recommended” both cardio and weightlifting, which could harm some patients. It also missed key details on questions about cholesterol levels and incorrectly suggested that a cardiovascular disease drug isn’t commercially available.

“When it comes to sophisticated medical decision-making, it fell behind a little bit,” Sarraju said, adding that clinical experts and patients should be involved in training chatbots to improve them.

Other recent research has found sometimes-contradictory evidence of chatbots’ smarts, or lack thereof:

— They have significant promise in health care but are in the early stages of development and need more research, according to a 2022 review in Nature.
Twenty-three percent of the chatbot apps reviewed gave information about a specific health area, with some providing mental health counseling. Less than 1 in 10 had a “theoretical or therapeutic framework” like cognitive behavioral therapy.

“It seems like they’re the next frontier,” corresponding author Smisha Agarwal, director of the Center for Global Digital Health Innovation at Johns Hopkins University, said. “[But] evidence-based treatment modalities do need to be incorporated, best practices within the field need to be incorporated, and they’re often not.”

— They have “no statistically significant effect” on “subjective wellbeing,” according to a 2020 review in the Journal of Medical Internet Research, which also found “weak evidence” of effectiveness in treating mental health issues like stress and depression.

— They suffer from a “black box” problem: It’s difficult to understand how they arrived at a given answer, “potentially undermining the shared decision-making between physicians and patients,” 2021 research in JMIR Cancer found. Chatbots can be integrated into care to reduce costs and bolster outcomes, but even so, “human elements” aren’t replaceable, the authors concluded. Still, chatbots are “likely to be a key player” in improving cancer care, in part by learning from large datasets.

— They can be helpful for mental health care and substance use disorder treatment, according to some evidence in smaller studies of the chatbot Woebot. Those studies were conducted by researchers from Stanford and Woebot’s maker. The company commissioned a survey that found close to three-quarters of Americans would be interested in chatbots “if they were scientifically proven” to work well.