Dennis Chornenky, chief artificial intelligence adviser at the University of California Davis Health, is helping the health system develop a framework for incorporating advanced technology into its medical practice. He previously served as a 2020 Presidential Innovation Fellow, advising the White House on AI initiatives.

Chornenky takes a three-pronged approach to assessing the kind of artificial intelligence a health system should invest in: outline a narrative goal, understand the technical risks and assess the long-tail costs. When it comes to how AI should be regulated in health care, he thinks health systems should lead the way.

In a recent interview with Ruth, Chornenky talked about some of the challenges the health care industry faces as AI goes mainstream, emphasizing that his views are his own and not necessarily those of UC Davis.

This interview has been edited for length and clarity.

How do you choose AI projects?

We encourage folks from different departments within the health system to submit ideas. A number of them are developing their own AI strategies and adoption road maps, and we try to coordinate that work so that high-priority areas and high-potential use cases are surfaced for more senior levels of review, which can direct resources and set priorities.

How do you think AI should be regulated?

Technology is evolving really quickly, and it’s really hard for policymakers, lawmakers and regulators to keep up with those changes, especially in complex industries that are already heavily regulated, like health care. So it makes sense to think about evolving a self-regulatory structure for complex industries like health care and finance when it comes to advanced technologies. We’ve seen this historically in the finance space with [the Financial Industry Regulatory Authority].

The Coalition for Health AI, an industry group you’ve worked with in the past, has floated the idea of using public-private assurance labs to certify and monitor AI in health care. What do you think?

These AI assurance labs are a step in that [self-regulatory] direction. It’s something health care can basically start doing itself, with our own expertise, to start setting standards. And to the extent that the FDA and perhaps other parts of the federal government may be involved, that’s great, because over time it will become more and more codified. But it will be driven by what’s happening within the health care space rather than by legislation that may be well intentioned but is likely to miss its mark because of gaps in understanding and expertise about what’s really happening with the technology.

What concerns you about AI in health care right now?

Right now, it’s really challenging for health systems because they have a high and rapidly growing governance burden. Most AI technology developers aren’t asking, “What are the 100 things I need to check off to make sure this application is appropriate for every health care environment?” They’re just thinking, “I’ve built a model that can identify this disease really well within this population,” or something like that, right? But over time, hopefully, the industry will evolve to where those standards exist upfront and developers know what they need to do from the very beginning.