AI is as skilled as experts when screening for retinal disease

The first results on the use of artificial intelligence (AI) for automatic screening of diabetic retinopathy (DR) in Danish patients were recently presented as part of the PhD project “Detection of Diabetic Eye Disease in a Danish Context using Deep Learning” by Jakob Andersen from Steno Diabetes Center Odense and SDU Robotics at the Maersk Mc-Kinney Moller Institute. In this project, an AI model based on deep neural networks was developed to detect specific disease indicators in retinal images and to grade the severity of DR from them.

Figure 1 Example retinal image (left) along with retinal expert markings for various disease indicators (middle) and model detections in the same image (right).

More than 30,000 expert-annotated retinal images from patients screened at Odense University Hospital were used to develop the model, which grades the disease into five severity levels, ranging from no apparent DR to severe sight-threatening DR. On a test set of 4,703 images, the model determined the correct level of disease in 70.4% of cases, comparable to retinal experts performing the same task. In addition, when detecting moderate or worse levels of disease, the model gave the correct diagnosis in more than 9 out of 10 images.
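The two figures quoted above correspond to standard screening metrics: overall grading accuracy across the five severity levels, and sensitivity for detecting moderate-or-worse (referable) disease. The sketch below is illustrative only, not the project's actual evaluation code; the grade values and the example data are made up, and the threshold of grade 2 for "moderate" is an assumption.

```python
# Illustrative sketch (not the project's evaluation code): computing overall
# grading accuracy and sensitivity for moderate-or-worse DR from paired
# true vs. predicted severity grades on the five-level scale
# (assumed here as 0 = no apparent DR ... 4 = severe sight-threatening DR).

REFERABLE_THRESHOLD = 2  # assumption: grade >= 2 means "moderate or worse"

def grading_metrics(true_grades, predicted_grades):
    """Return (accuracy, referable_sensitivity) for paired grade lists."""
    assert len(true_grades) == len(predicted_grades) > 0
    correct = sum(t == p for t, p in zip(true_grades, predicted_grades))
    accuracy = correct / len(true_grades)

    # Of the images truly at moderate-or-worse severity, how many did the
    # model also flag as moderate-or-worse?
    referable = [(t, p) for t, p in zip(true_grades, predicted_grades)
                 if t >= REFERABLE_THRESHOLD]
    hits = sum(p >= REFERABLE_THRESHOLD for _, p in referable)
    sensitivity = hits / len(referable) if referable else float("nan")
    return accuracy, sensitivity

# Hypothetical example with 10 images (made-up grades):
truth = [0, 0, 1, 2, 2, 3, 3, 4, 1, 0]
preds = [0, 1, 1, 2, 3, 3, 2, 4, 1, 0]
acc, sens = grading_metrics(truth, preds)
print(f"accuracy = {acc:.1%}, referable sensitivity = {sens:.1%}")
# → accuracy = 70.0%, referable sensitivity = 100.0%
```

Note that a model can miss the exact grade (e.g. predicting 3 where the truth is 2) while still correctly flagging the image as referable, which is why the moderate-or-worse sensitivity can exceed the exact-grade accuracy.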

Using a dataset of 300 expert-annotated images, the model was shown to match a retinal expert in detecting seven different retinal abnormalities in DR, including microvascular changes that indicate an increased risk of vision loss and blindness (Figure 1).

Figure 2 Example retinal image correctly graded by the model as having moderately severe DR. The detected abnormalities are overlaid on the image, and the regions the model selected and used when grading the image are highlighted.

The model’s ability to detect individual disease indicators as well as grade the overall severity made its decisions more interpretable, a property often lacking when AI is used for automatic medical image analysis (Figure 2). Model transparency could be a deciding factor in the clinical adoption of AI, as interpretable models may be seen as more trustworthy by clinicians and patients alike.