Around the Web: Distinguishing Medical Deepfakes

By Alessandra Suuberg, Decency LLC

Who can distinguish a medical deepfake?

In a study published this month in the journal Radiology, researchers Tordjman et al. assessed the ability of multimodal large language models (LLMs) and 17 radiologists to distinguish synthetic radiographs from authentic clinical images.

They found that so-called “deepfake” radiographs “generated using an LLM were not easily distinguishable from authentic radiographs,” whether evaluated by the LLMs or by the radiologists. They suggested training as one way to mitigate the associated risks.

As an accompanying editorial notes, “[s]ynthetic medical images are not new”—though medical deepfakes are now democratized, whereas “[c]reating convincing fakes” historically required “substantial technical expertise.”

Interested readers can find the study by Tordjman et al. here: https://pubs.rsna.org/doi/10.1148/radiol.252094 (“The Rise of Deepfake Medical Imaging: Radiologists’ Diagnostic Accuracy in Detecting ChatGPT-generated Radiographs”).

A deepfake dataset suggested by the researchers for training purposes is available here: https://noneedanick.github.io/DeepFakeXRay/.

More research on the varied uses and detection of medical deepfakes can be found here (“DeepFake knee osteoarthritis X-rays from generative adversarial neural networks deceive medical experts and offer augmentation potential to automatic classification”) and here (“DeepFake electrocardiograms using generative adversarial networks are the beginning of the end for privacy issues in medicine”).

Disclaimer: The information and opinions on this site do not include legal advice or the advice of a licensed healthcare provider.

Alessandra Suuberg