Eye charts date back to the middle of the 19th century and have changed relatively little since then. Many optometrists and ophthalmologists still gauge patients’ vision by having them read rows of letters or numbers.
But is this the best way to determine how well we see?
Zhong-Lin Lu, associate provost and chief scientist at NYU Shanghai, thinks these charts continue to have value, but are too imprecise for measuring vision loss or other changes in vision. The lack of more precise options could be delaying the early detection and treatment of eye-related afflictions such as amblyopia, age-related macular degeneration, diabetic retinopathy, diabetic macular edema, glaucoma, and cataracts, which together affect more than 45 million people in the US and carry an estimated annual economic burden of $139 billion in medical and productivity costs.
Lu, also a professor in NYU’s Center for Neural Science and Department of Psychology, and his colleagues recognized this shortcoming years ago and began developing more advanced ways to evaluate vision—and to spot changes to it.
Driving much of this work is Lu’s scholarship in related areas, which he discussed with NYU News.
How are the standard eye charts currently used, and where do they fall short?
The commonly used eye charts are very coarse rulers of human vision. They focus on only one dimension of vision: the smallest letter one can see. Because of their low precision, they have proved inadequate for screening vision loss, demonstrating the efficacy of novel therapies in clinical trials, or providing real-world evidence for coverage decisions on approved therapies. For instance, a therapy approved by regulators such as the FDA may not work as effectively in practice as claimed, and without precise measurements, doctors, hospitals, and insurance companies may decline to use or pay for it. In short, imprecise testing means doctors cannot detect disease early, pharmaceutical researchers cannot interpret early-stage results on new therapies, and regulators, insurers, and government agencies cannot make informed decisions about drug approval or coverage.
How did you set out to solve this challenge?
By using AI. When we think of machine learning or artificial intelligence, eye charts are unlikely to be top of mind. But can these methods, blended with neuroscience, be helpful in testing vision and changes to it?
We believe there is an opportunity to apply intelligent tools to vision testing and to provide the signal detection needed for confident decision making in clinical trials and real-world care. We have developed a novel hardware/software platform, built on our knowledge of the human visual system and a computational framework that implements active learning algorithms on digital displays, to modernize the assessment of human vision. For each patient, the active learning algorithm evaluates an expansive space of potential test outcomes, searches a large library of potential test items, and converges to a test sequence comprising optimal queries for each patient based on their previous responses. This personalized approach allows us to home in on each patient's visual phenotype.
What has your research uncovered about what eye charts miss?
In real life, we see objects of many different sizes and contrasts. A more comprehensive eye test must therefore evaluate how well a person can see objects across this range. The contrast sensitivity function provides such a test by measuring how much contrast is needed for one to see letters of all sizes. However, a typical contrast sensitivity test in the laboratory takes about one hour and can’t be used in the clinic. Our research and development have reduced the test time to about 2-5 minutes while maintaining fine-grained laboratory precision, making the procedure practical for clinicians.
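To make the idea of a contrast sensitivity function concrete, here is a small illustrative sketch in Python. The log-parabola form and all parameter names and values (`peak_gain`, `peak_freq`, `bandwidth`) are assumptions chosen for the example, not necessarily the model used in Lu's work; the point is only that sensitivity peaks at an intermediate size and falls off for very large and very small letters.

```python
# Illustrative sketch of a contrast sensitivity function (CSF).
# The log-parabola shape and all parameter values below are assumptions
# for demonstration, not the authors' actual model.
import numpy as np

def csf(freq, peak_gain=100.0, peak_freq=3.0, bandwidth=1.5):
    """Sensitivity (1 / contrast threshold) versus spatial frequency
    (cycles per degree), modeled as a parabola in log-log coordinates."""
    log_gain = np.log10(peak_gain) - (
        (np.log10(freq) - np.log10(peak_freq)) / (0.5 * bandwidth)
    ) ** 2
    return 10.0 ** log_gain

# Letter sizes roughly correspond to spatial frequencies; smaller letters
# have higher frequency content.
freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
sensitivity = csf(freqs)
threshold_contrast = 1.0 / sensitivity  # minimum contrast needed to see
```

The inverse of sensitivity at each frequency gives the contrast threshold, which is what a full contrast sensitivity test estimates letter size by letter size.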
What has allowed you to speed up the test time?
The contrast sensitivity test consists of a sequence of trials in which an individual is asked to identify letters of a particular size and contrast. After each response, the active learning algorithm updates its estimate of the characteristics of the individual's contrast sensitivity function and prescribes the optimal test stimulus for the next trial: the one expected to generate the maximum amount of information about those characteristics. In this way, the algorithm quickly converges to a precise measure of the individual's contrast sensitivity function.
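The trial-by-trial loop described above can be sketched as a Bayesian active learning procedure: maintain a posterior over model parameters, pick the stimulus with the greatest expected information gain, observe the response, and update. The following is a toy one-parameter version, not the authors' actual algorithm; the threshold model, psychometric function, grids, and simulated observer are all assumptions made for illustration.

```python
# Toy sketch of active learning for a vision test (assumed model, not the
# authors' implementation): estimate a single contrast threshold `theta`
# by always testing the most informative contrast.
import numpy as np

rng = np.random.default_rng(0)

contrasts = np.linspace(0.01, 1.0, 50)   # candidate test stimuli
thetas = np.linspace(0.05, 0.8, 60)      # candidate threshold values
posterior = np.full(len(thetas), 1.0 / len(thetas))  # flat prior

def p_correct(contrast, theta, guess=0.1, slope=10.0):
    """Psychometric function: probability of a correct identification."""
    return guess + (1 - guess) / (1 + np.exp(-slope * (contrast - theta)))

def expected_information_gain(posterior):
    """Expected reduction in posterior entropy for each candidate contrast."""
    h_prior = -np.sum(posterior * np.log(posterior + 1e-12))
    gains = []
    for c in contrasts:
        p = p_correct(c, thetas)                       # P(correct | theta)
        p_resp = np.array([posterior @ p, posterior @ (1 - p)])
        h_post = 0.0
        for resp_p, lik in zip(p_resp, (p, 1 - p)):
            post = posterior * lik
            post /= post.sum()
            h_post += resp_p * -np.sum(post * np.log(post + 1e-12))
        gains.append(h_prior - h_post)
    return np.array(gains)

true_theta = 0.3  # simulated observer's real threshold
for trial in range(30):
    c = contrasts[np.argmax(expected_information_gain(posterior))]
    correct = rng.random() < p_correct(c, true_theta)  # simulated response
    lik = p_correct(c, thetas) if correct else 1 - p_correct(c, thetas)
    posterior = posterior * lik
    posterior /= posterior.sum()

estimate = thetas[np.argmax(posterior)]  # converges near true_theta
```

In a few dozen trials the posterior concentrates near the simulated observer's threshold, which is the same principle that lets the full multi-parameter test run in minutes rather than an hour.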
So, is your research impacting real patients?
Soon. I have co-founded a company, Adaptive Sensory Technology, Inc., with my former student Luis Lesmes, to do exactly that. We have created a platform that aims to more precisely identify changes in vision through algorithmic assessments and machine-learning techniques.
Can the methodological principles underlying this approach potentially be applied to other areas of human health?
Yes. Doctors have always needed to question patients and prescribe tests in order to provide accurate diagnoses of medical conditions. The principles developed here could potentially be extended to generate personalized testing that would enable doctors to ask the most informative questions and zoom in on the most critical tests for each patient. Another possible application is personalized treatment: the active learning principle can be used to characterize each patient's response to multiple treatment options and then optimize treatment individually.