Abstract
A major disadvantage of feedforward neural networks remains the difficulty of gaining insight into their internal functionality. This is much less of a problem for networks trained unsupervised, such as Kohonen's self-organizing feature maps (SOMs): these offer a direct view of the stored knowledge, because their internal knowledge is stored in the same format as the input data used for training or evaluation. This paper discusses a mathematical transformation of a feedforward network into a SOM-like structure, so that its internal knowledge can be interpreted visually. This is particularly applicable to networks trained in the general classification problem domain.
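The abstract's key observation — that a SOM's codebook vectors live in the same space as the inputs and can therefore be inspected directly — can be illustrated with a minimal sketch. This is not the paper's transformation; the function name, grid size, decay schedule, and toy data below are all assumptions chosen for illustration only.

```python
import math
import random

def train_som(data, grid_h=4, grid_w=4, epochs=50, lr0=0.5, radius0=2.0, seed=0):
    """Minimal self-organizing map (illustrative sketch only; not the
    transformation described in the paper). Each codebook vector
    w[i][j] has the same dimensionality as the input samples, which is
    why a trained SOM can be read in the input's own format."""
    dim = len(data[0])
    rng = random.Random(seed)
    # Codebook: one weight vector per grid node, initialized randomly.
    w = [[[rng.random() for _ in range(dim)] for _ in range(grid_w)]
         for _ in range(grid_h)]
    samples = list(data)
    for epoch in range(epochs):
        rng.shuffle(samples)
        frac = 1.0 - epoch / epochs          # linear decay schedule
        lr = lr0 * frac
        radius = radius0 * frac + 0.5
        for x in samples:
            # Best matching unit: grid node whose vector is closest to x.
            bi, bj = min(
                ((i, j) for i in range(grid_h) for j in range(grid_w)),
                key=lambda ij: sum((w[ij[0]][ij[1]][k] - x[k]) ** 2
                                   for k in range(dim)))
            # Pull the BMU and its grid neighbours toward the input.
            for i in range(grid_h):
                for j in range(grid_w):
                    d2 = (i - bi) ** 2 + (j - bj) ** 2
                    if d2 <= radius * radius:
                        h = lr * math.exp(-d2 / (2.0 * radius * radius))
                        for k in range(dim):
                            w[i][j][k] += h * (x[k] - w[i][j][k])
    return w
```

After training on, say, two point clusters, printing the codebook row by row shows weight vectors that have migrated toward the cluster centres — the "stored knowledge" is readable as ordinary data points, which is exactly the interpretability property the paper wants to carry over to feedforward networks.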
| Original language | Undefined |
| --- | --- |
| Title of host publication | 14th Annual Workshop on Circuits, Systems and Signal Processing (ProRISC) |
| Place of Publication | Netherlands |
| Publisher | STW |
| Pages | 447-452 |
| Number of pages | 6 |
| ISBN (Print) | 90-73461-39-1 |
| Publication status | Published - Nov 2003 |
| Event | 14th ProRISC Workshop on Circuits, Systems and Signal Processing 2003, Veldhoven, Netherlands, 25 Nov 2003 → 27 Nov 2003 (conference number: 14) |
Publication series
| Name | |
| --- | --- |
| Publisher | STW Technology Foundation |
Workshop
| Workshop | 14th ProRISC Workshop on Circuits, Systems and Signal Processing 2003 |
| --- | --- |
| Abbreviated title | ProRISC |
| Country/Territory | Netherlands |
| City | Veldhoven |
| Period | 25/11/03 → 27/11/03 |
Keywords
- feature maps
- self-organizing maps
- character recognition
- rule extraction
- EWI-9666
- Neural Networks
- IR-46698
- METIS-215828