End-to-end optimization of prosthetic vision

Jaap de Ruyter van Steveninck*, Umut Güçlü, Richard van Wezel, Marcel van Gerven

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

22 Citations (Scopus)
89 Downloads (Pure)

Abstract

Neural prosthetics may provide a promising solution to restore visual perception in some forms of blindness. The restored prosthetic percept is rudimentary compared to normal vision and can be optimized with a variety of image preprocessing techniques to maximize relevant information transfer. Extracting the most useful features from a visual scene is a nontrivial task, and optimal preprocessing choices strongly depend on the context. Despite rapid advancements in deep learning, research currently faces a difficult challenge in finding a general and automated preprocessing strategy that can be tailored to specific tasks or user requirements. In this paper, we present a novel deep learning approach that explicitly addresses this issue by optimizing the entire process of phosphene generation in an end-to-end fashion. The proposed model is based on a deep auto-encoder architecture and includes a highly adjustable simulation module of prosthetic vision. In computational validation experiments, we show that such an approach is able to automatically find a task-specific stimulation protocol. The results of these proof-of-principle experiments illustrate the potential of end-to-end optimization for prosthetic vision.
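The abstract describes a deep auto-encoder pipeline in which a differentiable phosphene simulation module sits between an image encoder and a reconstruction decoder, so that stimulation parameters can be optimized end-to-end. The sketch below illustrates that general idea in PyTorch; the module names, layer sizes, Gaussian phosphene model, and reconstruction loss are illustrative assumptions, not the authors' published implementation.

```python
# Minimal sketch of an end-to-end prosthetic-vision pipeline:
# encoder (image -> stimulation amplitudes) -> differentiable phosphene
# simulator (amplitudes -> simulated percept) -> decoder (percept -> image).
# All shapes and the Gaussian phosphene model are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 1x64x64 grayscale image to a 16x16 grid of stimulation amplitudes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x16x16
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),                     # amplitudes in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class PhospheneSimulator(nn.Module):
    """Renders a simulated percept: each electrode produces one Gaussian phosphene."""
    def __init__(self, grid=16, size=64, sigma=1.5):
        super().__init__()
        # Phosphene centres on a regular grid over the size x size output image.
        centers = torch.linspace(size / grid / 2, size - size / grid / 2, grid)
        ys, xs = torch.meshgrid(centers, centers, indexing="ij")
        yy, xx = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                                torch.arange(size, dtype=torch.float32), indexing="ij")
        # Precompute one Gaussian blob per electrode: (grid*grid, size, size).
        d2 = (yy[None] - ys.reshape(-1, 1, 1)) ** 2 + (xx[None] - xs.reshape(-1, 1, 1)) ** 2
        self.register_buffer("blobs", torch.exp(-d2 / (2 * sigma ** 2)))

    def forward(self, amplitudes):
        # amplitudes: (B, 1, grid, grid) -> weighted sum of blobs -> (B, 1, size, size)
        a = amplitudes.flatten(1)
        percept = torch.einsum("be,ehw->bhw", a, self.blobs)
        return percept.clamp(0, 1).unsqueeze(1)

class Decoder(nn.Module):
    """Reconstructs the input image from the simulated percept."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

encoder, simulator, decoder = Encoder(), PhospheneSimulator(), Decoder()
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

image = torch.rand(4, 1, 64, 64)                       # placeholder batch of images
percept = simulator(encoder(image))                    # simulated prosthetic percept
reconstruction = decoder(percept)
loss = nn.functional.mse_loss(reconstruction, image)   # task-specific losses are possible here
optimizer.zero_grad()
loss.backward()                                        # gradients flow through the simulator
optimizer.step()
```

Because the phosphene simulator is differentiable, the reconstruction loss can shape the encoder's stimulation protocol directly, which is the core of the end-to-end optimization idea; the paper's actual simulation module and training objectives may differ from this sketch.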

Original language: English
Article number: 20
Journal: Journal of Vision
Volume: 22
Issue number: 2
DOIs
Publication status: Published - Feb 2022

Keywords

  • Computer vision
  • Deep learning
  • End-to-end optimization
  • Prosthetic vision
  • UT-Gold-D
