Building a slide deck, pitch, or presentation? Here are the big takeaways:
- A Google Research team used a deep learning algorithm to accurately predict cardiovascular risk based on images of human eyes.
- Google Research’s deep learning work could represent a new method of scientific discovery.
Researchers from Google have developed a deep learning algorithm that can accurately predict cardiovascular risk factors from images of a patient's eyes, according to a Monday Google Research Blog post.
Heart disease and stroke are the world’s largest causes of death, accounting for more than half of all deaths worldwide in 2015, according to the World Health Organization. These diseases have remained the leading causes of death globally for the last 15 years, the organization noted. Using deep learning technology to aid in diagnosis could help scientists create more targeted hypotheses, and drive a wide range of future research on these and other conditions, Google noted.
For doctors, assessing a patient's risk for cardiovascular disease is a critical first step toward reducing the likelihood that the patient suffers a cardiovascular event in the future, Lily Peng, Google Brain Team's product manager, wrote in the post. Typically, this assessment includes examining risk factors such as age, sex, smoking, blood pressure, and cholesterol, as well as taking into account whether the patient has another disease associated with increased risk of cardiovascular issues, such as diabetes.
SEE: IT leader’s guide to the future of artificial intelligence (Tech Pro Research)
However, deep learning techniques can also be used to increase the accuracy of diagnoses for these conditions, Peng wrote. Google previously found that these methods can accurately detect diabetic eye disease. Now, the researchers have found that images of the eye can also "very accurately" predict other indicators of cardiovascular health.
“This discovery is particularly exciting because it suggests we might discover even more ways to diagnose health issues from retinal images,” Peng wrote.
In a paper published in Nature Biomedical Engineering, Google researchers, along with those from Verily Life Sciences and the Stanford School of Medicine, used deep learning algorithms trained on data from 284,335 patients. The algorithms were able to predict cardiovascular risk factors from retinal images with “surprisingly high accuracy” for patients from two independent datasets of 12,026 and 999 patients.
The algorithm could distinguish the retinal images of a smoker versus a non-smoker 71% of the time, Peng wrote. And while doctors can usually distinguish between the retinal images of patients with severe high blood pressure and those without, the algorithm could go further, and predict the systolic blood pressure for all patients.
Further, the algorithm was “fairly accurate” at predicting the risk of a cardiovascular event directly, Peng wrote. When given the retinal image of a patient who experienced a major cardiovascular event up to five years after the image was taken, and the image of a patient who did not, the algorithm could determine which patient experienced the health event 70% of the time.
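Peng's "70% of the time" framing describes a paired comparison: given one patient who had an event and one who did not, how often does the model score the event patient higher? That statistic is equivalent to the area under the ROC curve (AUC). A minimal, illustrative Python sketch with made-up scores (not the study's data or model):

```python
import random

def pairwise_concordance(event_scores, no_event_scores):
    """Fraction of (event, no-event) patient pairs in which the model
    assigns the higher risk score to the patient who had the event.
    This pairwise statistic equals the area under the ROC curve (AUC)."""
    wins = 0.0
    total = 0
    for e in event_scores:
        for n in no_event_scores:
            total += 1
            if e > n:
                wins += 1
            elif e == n:
                wins += 0.5  # ties count as half a win
    return wins / total

# Toy illustration with synthetic scores, not the study's data:
random.seed(0)
event = [random.gauss(0.6, 0.2) for _ in range(200)]     # had an event within 5 years
no_event = [random.gauss(0.4, 0.2) for _ in range(200)]  # did not
auc = pairwise_concordance(event, no_event)
```

A value of 0.70 for this statistic is what the post reports for the retinal-image model.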
“This performance approaches the accuracy of other [cardiovascular] risk calculators that require a blood draw to measure cholesterol,” Peng wrote.
Google also sought to understand how the algorithm was making its predictions. Using attention techniques, the researchers generated a heatmap that showed which pixels were the most important for predicting a specific cardiovascular risk factor.
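Gradient-style saliency is one common way to produce such heatmaps (the paper's exact attention technique may differ): measure how much the model's risk score changes when each pixel is nudged, and plot those sensitivities over the image. A toy Python sketch, with a hypothetical stand-in "model" in place of the real deep network:

```python
import numpy as np

def toy_risk_model(image, weights):
    """Stand-in for the trained network: a logistic score over pixels.
    The real model is a deep convolutional network; this toy exists
    only to illustrate the saliency computation."""
    z = float((image * weights).sum())
    return 1.0 / (1.0 + np.exp(-z))

def saliency_heatmap(model, image, eps=1e-4):
    """Per-pixel importance: |d(score)/d(pixel)|, estimated here with
    central finite differences so the sketch works for any model."""
    heat = np.zeros_like(image)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        plus = image.copy()
        minus = image.copy()
        plus[idx] += eps
        minus[idx] -= eps
        heat[idx] = abs(model(plus) - model(minus)) / (2 * eps)
    return heat

rng = np.random.default_rng(0)
img = rng.random((4, 4))          # toy "retinal image"
w = np.zeros((4, 4))
w[1, 2] = 3.0                     # only one pixel actually drives the score
heat = saliency_heatmap(lambda x: toy_risk_model(x, w), img)
```

Because only one pixel influences the toy model, the heatmap peaks there; on a real retinal model the bright regions would indicate which anatomical features (e.g., vessels) drive a given risk prediction.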
“Explaining how the algorithm is making its prediction gives doctors more confidence in the algorithm itself,” Peng wrote. “In addition, this technique could help generate hypotheses for future scientific investigations into CV risk and the retina.”
Google plans to continue developing and testing the algorithm on larger and more comprehensive datasets, Peng wrote.