When AI thinks surgeon, he’s a white man

From: POLITICO Future Pulse - Wednesday, Nov 15, 2023, 07:02 pm
The ideas and innovators shaping health care
 
Future Pulse

By Shawn Zeller, Ruth Reader, Daniel Payne, Erin Schumaker and Evan Peng

TECH MAZE


Surgeons are getting more diverse, but AI systems don't know it. | Getty Images

Artificial intelligence systems designed to generate an image from a text request tend to think nearly all surgeons are white men.

Researchers at Brown University, Mass General Hospital and other universities and hospitals came to that conclusion after testing three prominent text-to-image systems for a study published today in JAMA Surgery.

How so? The systems were DALL-E 2 from OpenAI, Midjourney version 5.1 from Midjourney and Stable Diffusion 2.1 from Stability AI.

They asked each system to generate images of surgeons working in eight surgical specialties. They also asked each system for images of surgical trainees, a more diverse group.
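For the technically curious, here's a minimal sketch of what an audit like this could look like, using OpenAI's Python SDK as an example (Midjourney offers no official API, and Stable Diffusion is typically run locally). The prompt wording, specialty list and file layout are illustrative assumptions, not the study's actual protocol, and classifying the resulting images by perceived race and gender would still fall to human reviewers.

# A rough sketch of one leg of such an audit, against DALL-E 2 via the
# openai Python SDK. Prompts and specialties are illustrative assumptions.
import base64
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SPECIALTIES = [  # illustrative; the study covered eight surgical specialties
    "general", "orthopedic", "cardiothoracic", "neurological",
    "plastic", "vascular", "pediatric", "colorectal",
]

out = Path("audit_images")
out.mkdir(exist_ok=True)

for specialty in SPECIALTIES:
    for role in ("surgeon", "surgical trainee"):
        resp = client.images.generate(
            model="dall-e-2",
            prompt=f"a photo of the face of a {specialty} {role}",
            n=5,  # several samples per prompt to estimate rates
            size="512x512",
            response_format="b64_json",
        )
        for i, img in enumerate(resp.data):
            path = out / f"{specialty}_{role.replace(' ', '_')}_{i}.png"
            path.write_bytes(base64.b64decode(img.b64_json))
        # Human reviewers would then tag each saved image and compare
        # the tallies against workforce demographics.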

DALL-E 2 produced images of nonwhite and female surgeons at rates matching their representation in the profession but produced too few images of diverse surgical trainees.

Midjourney and Stable Diffusion produced images of white men almost exclusively.

The researchers attributed the OpenAI system's more representative images to a process the company developed to incorporate user feedback.

Takeaways: The researchers said the results offer a cautionary tale: “Adoption of new medical technologies carries the potential for exacerbating, rather than ameliorating, disparities in patient outcomes due to differences in access, adoption, or clinical application.”

Even so: DALL-E 2's performance suggests that improvements are feasible with better system design.

 


 
 
WELCOME TO FUTURE PULSE


Asheville, N.C. | Herbert Zeller

This is where we explore the ideas and innovators shaping health care.

Harvard public health researchers helped American Airlines flight attendants make their case against clothing manufacturer Twin Hill, alleging that the uniforms it made for the airline caused health problems. A California jury awarded the attendants more than $1 million earlier this month.

Share any thoughts, news, tips and feedback with Carmen Paun at cpaun@politico.com, Daniel Payne at dpayne@politico.com, Evan Peng at epeng@politico.com, Ruth Reader at rreader@politico.com or Erin Schumaker at eschumaker@politico.com.

Send tips securely through SecureDrop, Signal, Telegram or WhatsApp.

Today on our Pulse Check podcast, host Lauren Gardner talks with POLITICO health care reporter Kelly Hooper, who explains Connecticut's approach to covering pricey weight-loss drugs in its employee health plans by tying coverage to lifestyle programs.


SAFETY CHECK


Teens fear AI could become a tool for bullies. | AFP via Getty Images

Teens are concerned that artificial intelligence tools like ChatGPT, the bot that uses machine learning to answer questions, could be used in cyberbullying campaigns or to otherwise harass people, according to a new survey from the Family Online Safety Institute.

How so? The institute, a Washington-based group that seeks to keep kids safe online, polled approximately 3,000 parents and 3,000 teens in the U.S., Germany and Japan.

Overall, parents and teens shared many of the same worries — that generative AI like ChatGPT could lead to job losses and spread misinformation.

But only teens raised the possibility of cyber harassment.

Poll results show teens see cyberbullying as a potential AI risk. | Family Online Safety Institute

Why it matters: Earlier this year, the Centers for Disease Control and Prevention reported that 20 percent of high school girls and 11 percent of high school boys said they were cyberbullied in 2021.

Lawmakers are already concerned with how online platforms can affect kids’ mental health.

The Senate is considering bipartisan legislation, the Kids Online Safety Act, to give parents and teens more control over their online experience.

Meanwhile, in October, 33 state attorneys general sued Meta, the parent company of Facebook, asserting the company designed its platforms in ways that harm children’s mental health.

 


 
 
IN THE COURTS


A suit alleges UnitedHealthcare used an algorithm to deny patients rehab. | Wilfredo Lee/AP Photo

Health insurers are eager to use AI to speed coverage decisions, but the practice will have to survive legal scrutiny.

A new class-action lawsuit aims to test it. Relatives of deceased UnitedHealthcare patients have hired California’s Clarkson Law Firm and are suing the insurance giant in federal court in Minnesota.

They claim the insurer wrongly denied elderly patients’ Medicare Advantage claims because artificial intelligence told it to.

How so? The complaint says UnitedHealthcare, the nation’s largest insurer, used a proprietary algorithm from naviHealth called nH Predict to deny follow-up care to elderly patients after a hospital stay.

The technology estimates how much care a patient should need, and its recommendations, the suit alleges, frequently fall short of doctors’ orders.

For example, Medicare patients are entitled to up to 100 days of follow-up care in a nursing home.

However, the lawsuit said plaintiffs were denied coverage after a 20-day stay and forced to pay for more care.

The lawsuit accuses UnitedHealthcare of directing patients to enroll in a government-subsidized Medicare program, shifting costs onto taxpayers.

UnitedHealthcare did not respond to a request for comment.

 

 

