Deep learning of normal form autoencoders for universal, parameter-dependent dynamics

Manu Kalia*, Steven L. Brunton, Hil Gaétan Ellart Meijer, Christoph Brune, J. Nathan Kutz

*Corresponding author for this work

Research output: Contribution to conference › Paper › peer-review


Abstract

A long-standing goal in dynamical systems is to construct reduced-order models for high-dimensional spatiotemporal data that capture the underlying parametric dependence of the data. In this work, we introduce a deep learning technique to discover a single low-dimensional model for such data that captures this parametric dependence in terms of a normal form. A normal form is a symbolic expression, or universal unfolding, that describes how the reduced-order differential equation model varies with respect to a bifurcation parameter. Our approach introduces coupled autoencoders for the state and parameter, with the latent variables constrained to adhere to a given normal form. We demonstrate our method on one-parameter bifurcations that occur in the canonical Lorenz96 equations and a neural field equation. This method demonstrates how normal forms can be leveraged as canonical and universal building blocks in deep learning approaches for model discovery and reduced-order modeling.
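
The coupled-autoencoder idea can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch sketch, not code from the paper: the layer sizes, the names (NormalFormAE, enc_x, dec_x, enc_mu, dec_mu, loss_fn), and the choice of a supercritical pitchfork normal form dz/dt = alpha*z - z^3 are assumptions for illustration. The latent time derivative is obtained from the measured dx/dt via the encoder's Jacobian (chain rule), and the training loss penalizes its mismatch with the normal-form dynamics alongside the two reconstruction errors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalFormAE(nn.Module):
    """Coupled state/parameter autoencoders whose latent dynamics are
    constrained to a given normal form (here a supercritical pitchfork,
    dz/dt = alpha*z - z**3). Sizes and names are illustrative."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        # State autoencoder: high-dimensional snapshot x -> scalar latent z.
        self.enc_x = nn.Sequential(nn.Linear(state_dim, hidden), nn.ELU(),
                                   nn.Linear(hidden, 1))
        self.dec_x = nn.Sequential(nn.Linear(1, hidden), nn.ELU(),
                                   nn.Linear(hidden, state_dim))
        # Parameter autoencoder: bifurcation parameter mu -> latent alpha.
        self.enc_mu = nn.Sequential(nn.Linear(1, hidden), nn.ELU(),
                                    nn.Linear(hidden, 1))
        self.dec_mu = nn.Sequential(nn.Linear(1, hidden), nn.ELU(),
                                    nn.Linear(hidden, 1))

    @staticmethod
    def normal_form(z, alpha):
        # The symbolic, "universal" latent model the latents must obey.
        return alpha * z - z ** 3

def loss_fn(model, x, x_dot, mu):
    """x: (batch, state_dim) snapshots; x_dot: their time derivatives;
    mu: (batch, 1) parameter values at which each snapshot was generated."""
    alpha = model.enc_mu(mu)
    # Chain rule: z_dot = d(enc_x)/dx @ x_dot, computed as a JVP.
    z, z_dot = torch.autograd.functional.jvp(model.enc_x, x, x_dot,
                                             create_graph=True)
    recon = (F.mse_loss(model.dec_x(z), x)
             + F.mse_loss(model.dec_mu(alpha), mu))
    dynamics = F.mse_loss(z_dot, model.normal_form(z, alpha))
    return recon + dynamics  # relative weights are a tuning choice

if __name__ == "__main__":
    model = NormalFormAE(state_dim=40)  # e.g. a 40-node Lorenz96 system
    x, x_dot = torch.randn(8, 40), torch.randn(8, 40)
    mu = torch.randn(8, 1)
    print(loss_fn(model, x, x_dot, mu).item())
```

In practice one would swap in the normal form matching the bifurcation under study and tune the relative weights of the reconstruction and dynamics terms; the pitchfork above is only one choice among the one-parameter normal forms.
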
Original language: English
Number of pages: 6
Publication status: Published - 12 Dec 2020
Event: 1st NeurIPS Workshop on Interpretable Inductive Biases and Physically Structured Learning 2020 - Virtual
Duration: 12 Dec 2020 - 12 Dec 2020
Conference number: 1
Internet address: https://inductive-biases.github.io/


Keywords

  • Deep Learning
  • Dynamical Systems

