Translational Vision Science & Technology | 2021

Meibography Phenotyping and Classification From Unsupervised Discriminative Feature Learning


Abstract


Purpose: The purpose of this study was to develop an unsupervised feature learning approach that automatically measures Meibomian gland (MG) atrophy severity from meibography images and discovers subtle relationships between meibography images according to visual similarity.

Methods: One of the latest unsupervised learning approaches was applied: feature learning based on nonparametric instance discrimination (NPID), in which a convolutional neural network (CNN) backbone is trained to encode each meibography image into a 128-dimensional feature vector. The network learns a similarity metric across all instances (e.g. meibography images) and groups visually similar instances together. A total of 706 meibography images with corresponding meiboscores were collected and annotated for network learning and performance evaluation.

Results: Four hundred ninety-seven meibography images were used for network learning and tuning, whereas the remaining 209 images were used for network model evaluation. The proposed NPID approach achieved 80.9% meiboscore grading accuracy on average, outperforming the clinical team by 25.9%. Additionally, 3D feature visualization and agglomerative hierarchical clustering were used to discover relationships between meibography images.

Conclusions: The proposed NPID approach automatically analyzes MG atrophy severity from meibography images without prior image annotations, and categorizes gland characteristics through hierarchical clustering. This method provides quantitative information on MG atrophy severity based on the analysis of phenotypes.

Translational Relevance: The study presents a Meibomian gland atrophy evaluation method for meibography images based on unsupervised learning. This method may be used to aid the diagnosis and management of Meibomian gland dysfunction without prior image annotations, which require time and resources.
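The core mechanism the abstract describes can be illustrated with a small sketch. In NPID, a CNN backbone (not shown here) maps each image to an L2-normalized 128-dimensional feature vector, and a memory bank stores one such vector per training instance; the probability that a query feature matches a stored instance is a temperature-scaled softmax over cosine similarities, and visually similar images are retrieved by nearest-neighbor search in this feature space. The code below is a minimal NumPy sketch of that similarity computation under stated assumptions; the variable names, the temperature value, and the random stand-in features are all illustrative, not from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Project feature vectors onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def instance_probabilities(f, memory_bank, tau=0.07):
    """Non-parametric softmax over stored instances:
    P(i | f) = exp(v_i . f / tau) / sum_j exp(v_j . f / tau)."""
    sims = memory_bank @ f / tau        # scaled cosine similarities (unit vectors)
    sims -= sims.max()                  # subtract max for numerical stability
    p = np.exp(sims)
    return p / p.sum()

def nearest_instances(f, memory_bank, k=5):
    """Indices of the k most visually similar stored instances."""
    sims = memory_bank @ f
    return np.argsort(-sims)[:k]

# Stand-in data: 706 images and 128-D features, matching the counts in
# the abstract, but filled with random vectors rather than learned ones.
rng = np.random.default_rng(0)
n, d = 706, 128
V = l2_normalize(rng.normal(size=(n, d)))            # hypothetical memory bank
f = l2_normalize(V[3] + 0.05 * rng.normal(size=d))   # noisy view of instance 3

p = instance_probabilities(f, V)
print(p.shape, np.isclose(p.sum(), 1.0))   # (706,) True
print(nearest_instances(f, V, k=1)[0])     # 3 — instance 3 is the closest match
```

Meiboscore grading then reduces to comparing a new image's feature vector against labeled neighbors in this space, and the same pairwise similarities feed the agglomerative hierarchical clustering the abstract mentions.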

Volume 10
DOI 10.1167/tvst.10.2.4
Language English
Journal Translational Vision Science & Technology
