Long Short-Term Relevance Learning

Bram P. van de Weg*, L. Greve, B. Rosic

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

2 Citations (Scopus)
11 Downloads (Pure)

Abstract

To incorporate sparsity knowledge as well as measurement uncertainties into traditional long short-term memory (LSTM) neural networks, an efficient relevance vector machine algorithm is introduced into the network architecture. In contrast to the classical LSTM solution, the proposed scheme automatically determines the relevant neural connections and adapts accordingly. Owing to this flexibility, the new LSTM scheme is less prone to overfitting and can therefore approximate time-dependent solutions from a smaller data set. On a structural nonlinear finite element application, we show that the self-regulating framework does not require prior knowledge of a suitable network architecture and size, while ensuring satisfactory accuracy at reasonable computational cost.
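The relevance-determination mechanism the abstract refers to can be illustrated, in simplified form, on a sparse Bayesian linear model, which is the core of the relevance vector machine. The sketch below is an illustrative assumption, not the paper's actual LSTM algorithm: it uses the classic MacKay-style re-estimation of per-weight prior precisions, and the synthetic data, fixed noise precision, and pruning threshold are all chosen for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: only 2 of 10 basis functions are truly relevant.
n, d = 200, 10
Phi = rng.normal(size=(n, d))          # design matrix
w_true = np.zeros(d)
w_true[[1, 4]] = [2.0, -3.0]           # the two relevant weights
t = Phi @ w_true + 0.1 * rng.normal(size=n)

alpha = np.ones(d)   # per-weight prior precisions (one hyperparameter per connection)
beta = 100.0         # noise precision, assumed known here for simplicity

for _ in range(50):
    # Gaussian posterior over the weights under the current precisions
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * Sigma @ Phi.T @ t
    # MacKay-style update: gamma_i measures how well-determined weight i is
    gamma = 1.0 - alpha * np.diag(Sigma)
    alpha = gamma / (mu**2 + 1e-12)
    alpha = np.minimum(alpha, 1e6)     # cap diverging precisions numerically

# Connections whose precision diverges carry no evidence and are pruned.
relevant = alpha < 1e4
```

Running this, the precisions of the eight irrelevant weights diverge toward the cap while the two informative connections survive with posterior means close to the true values, which is the self-regulating sparsity behavior the abstract describes, here on a linear stand-in rather than on LSTM gates.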
Original language: English
Pages (from-to): 61-87
Number of pages: 26
Journal: International Journal for Uncertainty Quantification
Volume: 14
Issue number: 1
Early online date: 29 Aug 2023
Publication status: Published - 1 Jan 2024

Keywords

  • NLA
  • Neural network
  • Automatic relevance determination
  • Bayesian
  • Sparsity
  • Finite element model
  • LSTM

  • Long Short-Term Relevance Learning

    van de Weg, B. P., Greve, L. & Rosic, B., 21 Jun 2021, ArXiv.org.

Research output: Working paper › Preprint › Academic

    Open Access
