Assessing retinal vein occlusion based on color fundus photographs using neural understanding network (NUN)


Beeche C., Gezer N. S., Iyer K., Almetwali O., Yu J., Zhang Y., et al.

MEDICAL PHYSICS, 2022 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Publication Date: 2022
  • DOI: 10.1002/mp.16012
  • Journal Name: MEDICAL PHYSICS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, EMBASE, INSPEC, MEDLINE
  • Keywords: convolutional neural network (CNN), graph neural network (GNN), image classification, neural understanding network (NUN), retinal vein occlusion, CLASSIFICATION, VEGF
  • Dokuz Eylül University Affiliated: Yes

Abstract

Objective: To develop and validate a novel deep learning architecture to classify retinal vein occlusion (RVO) on color fundus photographs (CFPs) and reveal the image features contributing to the classification.

Methods: The neural understanding network (NUN) consists of two components: (1) convolutional neural network (CNN)-based feature extraction and (2) graph neural network (GNN)-based feature understanding. The CNN-based image features were transformed into a graph representation to encode and visualize long-range feature interactions and to identify the image regions that significantly contributed to the classification decision. A total of 7062 CFPs were classified into three categories: (1) no vein occlusion ("normal"), (2) central RVO, and (3) branch RVO. The area under the receiver operating characteristic (ROC) curve (AUC) was used as the metric to assess the performance of the trained classification models.

Results: The AUC, accuracy, sensitivity, and specificity for NUN in classifying CFPs as normal, central occlusion, or branch occlusion were 0.975 (± 0.003), 0.911 (± 0.007), 0.983 (± 0.010), and 0.803 (± 0.005), respectively, outperforming the available classical CNN models.

Conclusion: The NUN architecture can provide better classification performance and a more straightforward visualization of the results than CNNs.
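
As a rough illustration of the two-component design described in the Methods (CNN feature extraction followed by a graph-based stage that models long-range interactions between image regions), the PyTorch sketch below wires a small CNN backbone to a simple dense graph-convolution layer and a three-class head. This is a minimal sketch under stated assumptions, not the authors' published NUN implementation: the class names, layer sizes, the fully connected grid graph, and the toy backbone are all hypothetical choices for illustration.

```python
# Illustrative CNN -> graph representation -> GNN -> 3-class pipeline (not the
# authors' implementation). Each spatial cell of the CNN feature map becomes a
# graph node; a dense graph-convolution step mixes information across all cells.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One dense graph-convolution step: H' = ReLU(A_norm @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # h: (batch, nodes, in_dim); adj_norm: (nodes, nodes) row-normalized adjacency
        return torch.relu(torch.matmul(adj_norm, self.linear(h)))


class ToyNUN(nn.Module):
    """Toy stand-in: CNN backbone -> node features per cell -> GCN -> classifier."""

    def __init__(self, num_classes: int = 3, feat_dim: int = 64, grid: int = 8):
        super().__init__()
        self.cnn = nn.Sequential(  # small stand-in for a real backbone (e.g., a ResNet)
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid),  # fixed grid x grid spatial feature map
        )
        self.gcn = SimpleGCNLayer(feat_dim, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Assumed fully connected graph over the grid cells, row-normalized.
        n = grid * grid
        adj = torch.ones(n, n)
        self.register_buffer("adj_norm", adj / adj.sum(dim=1, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fmap = self.cnn(x)                         # (B, C, g, g)
        nodes = fmap.flatten(2).transpose(1, 2)    # (B, g*g, C): one node per cell
        nodes = self.gcn(nodes, self.adj_norm)     # long-range feature interactions
        return self.classifier(nodes.mean(dim=1))  # pooled graph -> class logits


if __name__ == "__main__":
    model = ToyNUN()
    logits = model(torch.randn(2, 3, 224, 224))    # two synthetic fundus-sized inputs
    print(logits.shape)                            # torch.Size([2, 3])
```

Because the graph layer operates on explicit node features, per-node contributions can be pooled and mapped back to the originating image cells, which is the kind of region-level visualization the abstract attributes to the graph-based stage.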