Dr Judith S. Birkenfeld
Photo: L'Oréal Spain
Dr Judith S. Birkenfeld conducts her research as a member of the “Visual Optics and Biophotonics” group at the Institute of Optics of the Spanish National Research Council (IO-CSIC) in Madrid. She studied Physics and Medical Physics at Heidelberg University and earned her PhD with distinction at the Complutense University of Madrid. She worked at the Massachusetts Institute of Technology and Harvard Medical School in Cambridge (US) before returning to Europe with a Marie Skłodowska-Curie Fellowship in 2018. For her research project at the IO-CSIC on the early diagnosis of the rare eye disease keratoconus, Judith Birkenfeld received the Spanish L’Oréal-UNESCO For Women in Science Award. Here, she introduces her recent work on the use of artificial intelligence for the early detection of suspicious skin lesions.
“Cutaneous melanoma is responsible for over 75% of skin cancer deaths. In 2021, an estimated 106,110 patients will be diagnosed with melanoma, and 7,180 patients are expected to die of it (US figures). Early detection is key to reducing melanoma mortality and lowering treatment costs, but widespread melanoma screening is currently not feasible. For example, there are about 12,000 practicing dermatologists in the US, and each would need to see 27,416 patients per year (roughly the entire US population of about 330 million divided among them) to screen everyone for suspicious pigmented lesions that can indicate cancer.
Some years ago, the SCREEN pilot skin-screening project in Northern Germany showed encouraging results on the impact of skin screening, and it is clear that there is a concrete need for efficient methods to identify precursor lesions, or “suspicious lesions”, that are at high risk of progressing to melanoma. Screening for these suspicious outlier lesions is known as macro-screening. In practice, macro-screening is usually done by an experienced dermatologist who has developed an intuition about which skin lesions on a patient’s body need further examination. An experienced dermatologist can complete a macro-screening within 1-2 minutes and might then follow up on the suspicious lesions using dermoscopy.
Photo: IO-CSIC
There is a lot of interest in the development of computer-aided diagnosis (CAD) systems that can analyze images of skin lesions and automatically identify suspicious ones. CAD algorithms are trained to evaluate skin lesions individually for suspicious features, and the image database for training and testing typically consists of close-ups of a variety of skin lesions. But during macro-screening, dermatologists usually do what is commonly referred to as “looking for an ugly duckling”: they compare a patient’s many lesions with one another and single out the ones that visually stand out for further examination. To date, no CAD system in dermatology has been designed to replicate this very first step of the diagnostic process; all existing CAD systems for identifying suspicious lesions analyze lesions individually, completely omitting the “ugly duckling” criterion.
We tried to tackle this challenge in our recent work, in which we introduce a new CAD system for skin lesions based on deep convolutional neural networks (DCNNs). Our system successfully distinguished suspicious from non-suspicious lesions in photos of patients’ skin with ~90% accuracy, and for the first time established an “ugly duckling” metric capable of matching the consensus of three dermatologists 88% of the time.
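A minimal sketch, in Python with PyTorch, of what such a per-lesion classifier might look like; this is not the published implementation, and the ResNet-18 backbone, the 224-pixel crops, and the frozen-feature setup are assumptions made purely for illustration:

```python
# Illustrative sketch (not the authors' code): a pretrained convolutional
# backbone is reused as a feature extractor, and a small head classifies
# each lesion crop as suspicious vs. non-suspicious.
import torch
import torch.nn as nn
from torchvision import models

class LesionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")   # assumed backbone
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.features.parameters():                  # keep the backbone fixed
            p.requires_grad = False
        self.head = nn.Linear(512, 2)                         # suspicious / non-suspicious

    def forward(self, x):                      # x: (batch, 3, 224, 224) lesion crops
        f = self.features(x).flatten(1)        # (batch, 512) feature embedding
        return self.head(f), f                 # class logits plus the embedding itself

model = LesionClassifier().eval()
crops = torch.randn(8, 3, 224, 224)            # eight dummy lesion crops
logits, embeddings = model(crops)
suspicion = torch.softmax(logits, dim=1)[:, 1] # per-lesion suspicion score
```

Returning the embedding alongside the logits matters here: those per-lesion features are exactly what the second, “ugly duckling” stage described below compares across lesions.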
This baseline system still analyzed the features of individual lesions rather than comparing features across multiple lesions, as dermatologists do. To add the ugly duckling criterion to our model, we used the extracted features in a secondary stage to create a 3D “map” of all of the lesions in a given image and calculated how far each lesion’s features were from “typical”. The more “odd” a given lesion was compared to the others in an image, the further away it lay from the center of the 3D space. This distance is the first quantifiable definition of the ugly duckling criterion, and it serves as a gateway to leveraging deep learning networks to overcome the challenging and time-consuming task of identifying and scrutinizing the differences between all the pigmented lesions of a single patient.
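As a rough illustration of that second stage, the sketch below takes per-lesion feature embeddings from one photo, maps them into a 3D space, and scores each lesion by its distance from the centre of the cloud; PCA, the centroid, and the Euclidean distance are illustrative assumptions, not necessarily the mapping used in the published system:

```python
# Hedged sketch of the "ugly duckling" distance: project each lesion's features
# into 3D and measure how far each one sits from the "typical" lesion.
import numpy as np
from sklearn.decomposition import PCA

def ugly_duckling_scores(embeddings: np.ndarray) -> np.ndarray:
    """embeddings: (n_lesions, n_features) array, one row per lesion in a photo."""
    coords = PCA(n_components=3).fit_transform(embeddings)  # 3D "map" of the lesions
    centre = coords.mean(axis=0)                             # the "typical" lesion
    return np.linalg.norm(coords - centre, axis=1)           # oddness of each lesion

feats = np.random.rand(12, 512)        # e.g. twelve lesions, 512 features each
scores = ugly_duckling_scores(feats)
print("most outlying lesion:", scores.argmax())
```

The lesion with the largest score is the candidate “ugly duckling” that would be passed on for closer inspection.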
The images we used come from our database of 33,000 pictures of patients photographed against a variety of backgrounds and under various lighting conditions. With this important source of image variability, the DCNN should be able to use photos taken with any consumer-grade camera or smartphone for diagnosis. Our images contain both suspicious and non-suspicious skin lesions, labeled and confirmed by a consensus of three board-certified dermatologists. After training on the database and subsequent refinement and testing, the system was able to distinguish between suspicious and non-suspicious lesions with 90.3% sensitivity and 89.9% specificity, improving upon previously published systems.”
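For readers less familiar with those two figures: sensitivity is the fraction of truly suspicious lesions that the system flags, and specificity is the fraction of non-suspicious lesions it correctly clears. A small self-contained sketch with made-up labels and predictions makes the definitions concrete:

```python
# Toy illustration of sensitivity and specificity (1 = suspicious, 0 = non-suspicious);
# the labels and predictions below are invented for the example.
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))   # suspicious and flagged
    fn = np.sum((y_true == 1) & (y_pred == 0))   # suspicious but missed
    tn = np.sum((y_true == 0) & (y_pred == 0))   # non-suspicious and cleared
    fp = np.sum((y_true == 0) & (y_pred == 1))   # non-suspicious but flagged
    return tp / (tp + fn), tn / (tn + fp)

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1, 0, 1])
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")  # 80.0% and 80.0% here
```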
Interested in Dr Birkenfeld’s work? Contact her via j.birkenfeld@io.cfmac.csic.es
Photo: Soenksen et al.