The growing use of technology in the healthcare industry has led to major advances in recent years, including upgraded electronic health record systems, telemedicine, and even apps for scheduling house calls. Using artificial intelligence (AI) to diagnose ailments may be next, thanks to a recent study in which Google research scientist Dr. Varun Gulshan and colleagues presented an algorithm designed to screen accurately for diabetic retinopathy.
First, the AI had to be trained to recognize the condition. This was accomplished with a training set of 118,419 retinal images obtained through EyePACS, a web-based program that allows eye care professionals to share clinical data and images. The images were screened for image quality and graded for the presence of diabetic retinopathy and diabetic macular edema, with each image graded by 3 to 7 ophthalmologists (and/or advanced ophthalmology students) prior to inclusion.
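As a rough illustration of how several graders' opinions on one image might be reduced to a single training label, here is a minimal sketch. The study does not spell out its exact labeling procedure in this article, so the majority-vote rule, the label names, and the tie-breaking behavior below are assumptions for illustration only.

```python
from collections import Counter

def consensus_label(grades):
    """Collapse multiple ophthalmologist grades for one image into a single
    training label by majority vote (an assumed rule, for illustration only).

    grades: list of per-grader labels, e.g. ["referable", "non-referable", ...]
    Returns the most common label; an even split falls back to the cautious call.
    """
    counts = Counter(grades)
    top_label, top_count = counts.most_common(1)[0]
    # If the vote is split evenly, err on the side of flagging the image.
    if list(counts.values()).count(top_count) > 1:
        return "referable"
    return top_label

# Example: an image graded by three ophthalmologists.
print(consensus_label(["referable", "non-referable", "referable"]))  # -> "referable"
```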
Next came testing the algorithm to see what the AI had learned from the training. For this, the research group ran two tests. First, they exposed the software to 9,963 images (from 4,997 patients) obtained through EyePACS, taking care to ensure there was no overlap with the training images. Ophthalmologists are trained to look for specific indicators when making a diagnosis, and the algorithm was initially set to mimic this standard.
Under these conditions, the AI detected diabetic retinopathy with 90.3% sensitivity and 98.1% specificity. The team then applied a more lenient cutpoint, after which the AI exhibited 97.5% sensitivity and 93.4% specificity.
The second test consisted of 1,748 images (from 874 patients) from the Messidor-2 data set, which is a collection of images from diabetic retinopathy exams. Under the first set of conditions (set to mimic ophthalmologist standards), this test yielded 87% sensitivity and 98.5% specificity. Under the more lenient cutpoint, the AI exhibited 96.1% sensitivity and 93.9% specificity.
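To make the sensitivity/specificity trade-off behind those cutpoints concrete, the sketch below computes both metrics from a set of predicted probabilities at two different decision thresholds. The scores, labels, and threshold values are invented for illustration and are not taken from the study.

```python
def sensitivity_specificity(probs, labels, threshold):
    """Compute sensitivity and specificity at a given decision threshold.

    probs:  model's predicted probability of referable retinopathy per image
    labels: 1 if the ophthalmologist consensus was "referable", else 0
    """
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    sensitivity = tp / (tp + fn)   # share of true cases that are caught
    specificity = tn / (tn + fp)   # share of healthy eyes correctly cleared
    return sensitivity, specificity

# Invented example: lowering the cutpoint catches more true cases
# (higher sensitivity) at the cost of more false alarms (lower specificity).
probs  = [0.95, 0.80, 0.60, 0.55, 0.40, 0.30, 0.10, 0.05]
labels = [1,    1,    1,    0,    1,    0,    0,    0]
print(sensitivity_specificity(probs, labels, threshold=0.70))  # strict cutpoint
print(sensitivity_specificity(probs, labels, threshold=0.35))  # lenient cutpoint
```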
What does it all mean? It is a first step toward demonstrating that AI can learn from large data sets and potentially diagnose disease without the aid of an expert healthcare practitioner, which could ease the burden on overloaded healthcare workers and reduce costs for patients.
However, we're not quite there yet. Tien Yin Wong, MD, PhD, of the Singapore National Eye Centre, and Neil M. Bressler, MD, of Johns Hopkins Medicine in Baltimore, MD (and editor of JAMA Ophthalmology), have already responded to these findings.
Although they acknowledge that the reported sensitivity and specificity are well above screening guidelines, which call for 80% or better, they noted limitations in the system, including the program's inability to distinguish severe cases that require immediate care. The AI also cannot provide the same comprehensive screening and care as a trained ophthalmologist, who screens for other conditions at the same time.
The takeaway is that technology is advancing rapidly, and while it may never completely replace the expertise and services of healthcare professionals, providers would be wise to get ahead of these programs and find the best ways to integrate them, for the benefit of their practices and their patients alike.
To learn more, read the entire article on Medscape.