Context dependent learning in neural networks

    Research output: Chapter in Book/Report/Conference proceeding (Conference contribution, Academic, peer-reviewed)

    1 Citation (Scopus)
    158 Downloads (Pure)

    Abstract

    In this paper, an extension to the standard error backpropagation learning rule for multi-layer feedforward neural networks is proposed that enables them to be trained on context-dependent information. The context-dependent learning is realised by using a different error function, called Average Risk (AVR), instead of the sum of squared errors (SQE) normally used in error backpropagation, and by adapting the update rules accordingly. It is shown that for applications where this context-dependent information is important, a major improvement in performance is obtained.
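
    The abstract describes the extension only at a high level and does not give the AVR error function itself, so the following NumPy sketch is purely illustrative: standard backpropagation for a one-hidden-layer network in which the usual sum-of-squared-errors gradient is scaled by a hypothetical per-example context weight (here called risk). The network size, the weighting scheme, and all names are assumptions for illustration, not the paper's formulation.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train(X, T, risk, hidden=8, lr=0.5, epochs=5000):
        """X: (n, d) inputs, T: (n, k) targets, risk: (n,) assumed context weights."""
        n, d = X.shape
        k = T.shape[1]
        W1 = rng.normal(scale=0.5, size=(d, hidden))
        W2 = rng.normal(scale=0.5, size=(hidden, k))
        for _ in range(epochs):
            # forward pass through the two layers
            H = sigmoid(X @ W1)          # hidden activations
            Y = sigmoid(H @ W2)          # network outputs
            # error signal: with plain SQE this would be (Y - T) * Y * (1 - Y);
            # here each example's error is scaled by its context weight
            delta_out = risk[:, None] * (Y - T) * Y * (1 - Y)
            delta_hid = (delta_out @ W2.T) * H * (1 - H)
            # gradient-descent updates, same structure as standard backpropagation
            W2 -= lr * H.T @ delta_out / n
            W1 -= lr * X.T @ delta_hid / n
        return W1, W2

    # toy usage: XOR-like data where two examples carry a larger (hypothetical) risk weight
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    T = np.array([[0], [1], [1], [0]], float)
    risk = np.array([1.0, 3.0, 3.0, 1.0])
    W1, W2 = train(X, T, risk)
    print(sigmoid(sigmoid(X @ W1) @ W2).round(2))
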
    Original language: English
    Title of host publication: Fifth International Conference on Image Processing and Its Applications 1995
    Place of publication: Piscataway, NJ
    Publisher: IEEE
    Pages: 632-636
    Number of pages: 5
    ISBN (Print): 0-85296-642-3
    Publication status: Published - 1995
    Event: 5th International Conference on Image Processing and its Applications 1995 - Edinburgh, United Kingdom
    Duration: 4 Jul 1995 - 6 Jul 1995
    Conference number: 5

    Conference

    Conference: 5th International Conference on Image Processing and its Applications 1995
    Country/Territory: United Kingdom
    City: Edinburgh
    Period: 4/07/95 - 6/07/95

    Keywords

    • SCS-Safety
    • AVR
    • Back propagation
    • Multilayer feedforward neural networks
    • Multilayer perceptrons
    • Neural networks
    • Average risk
    • Context-dependent learning
    • Feedforward neural nets
    • Error backpropagation
    • Error backpropagation learning rule
    • Error function
