Translating feedforward neural nets to SOM-like maps

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review



    A major disadvantage of feedforward neural networks remains the difficulty of gaining insight into their internal functioning. This is much less of a problem for, e.g., nets that are trained unsupervised, such as Kohonen's self-organizing feature maps (SOMs). These offer a direct view of the stored knowledge, as their internal knowledge is stored in the same format as the input data used for training or evaluation. This paper discusses a mathematical transformation of a feedforward network into a SOM-like structure so that its internal knowledge can be interpreted visually. This is particularly applicable to networks trained on general classification problems.
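    The abstract's key observation is that a SOM's codebook vectors live in the same space as the input data, so they can be inspected directly. A minimal sketch of one plausible reading of such a transformation (not the paper's actual method; all names and values below are illustrative) is to rescale each first-layer weight vector of a trained feedforward classifier so it can be compared against normalized input patterns like a SOM prototype:

    ```python
    import numpy as np

    # Hypothetical trained first-layer weights of a feedforward classifier:
    # each row connects all 4 inputs to one hidden unit (values are
    # illustrative, not taken from the paper).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))  # 3 hidden units, 4 inputs

    # SOM codebook vectors live in input space; one plausible reading of the
    # transformation is to rescale each weight vector to unit length so it
    # can be compared with normalized input patterns, like a SOM prototype.
    prototypes = W / np.linalg.norm(W, axis=1, keepdims=True)

    # A normalized input pattern can then be matched against the prototypes
    # the way a SOM finds its best-matching unit (largest dot product here).
    x = rng.normal(size=4)
    x = x / np.linalg.norm(x)
    bmu = int(np.argmax(prototypes @ x))
    print("best-matching unit:", bmu)
    ```

    The point of the sketch is only that, once the weights are expressed in input space, each prototype row can be rendered in the same way as an input sample (e.g., as a character bitmap in the character-recognition setting the keywords suggest).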
    Original language: Undefined
    Title of host publication: 14th Annual Workshop on Circuits, Systems and Signal Processing (ProRISC)
    Place of publication: Netherlands
    Number of pages: 6
    ISBN (Print): 90-73461-39-1
    Publication status: Published - Nov 2003
    Event: 14th ProRISC Workshop on Circuits, Systems and Signal Processing 2003 - Veldhoven, Netherlands
    Duration: 25 Nov 2003 – 27 Nov 2003
    Conference number: 14

    Publication series

    Publisher: STW Technology Foundation


    Workshop: 14th ProRISC Workshop on Circuits, Systems and Signal Processing 2003
    Abbreviated title: ProRISC


    • feature maps
    • self-organizing maps
    • character recognition
    • rule extraction
    • EWI-9666
    • Neural Networks
    • IR-46698
    • METIS-215828
