Port-Hamiltonian Approach to Neural Network Training

Stefano Massaroli, Michael Poli, F. Califano, Angela Faragasso, Jinkyoo Park, Atsushi Yamashita, Hajime Asama

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

4 Citations (Scopus)
22 Downloads (Pure)


Neural networks are discrete entities: they are subdivided into discrete layers and parametrized by weights that are iteratively optimized via difference equations. Recent work proposes networks whose layer outputs are no longer discrete but are instead solutions of an ordinary differential equation (ODE); however, these networks are still optimized via discrete methods (e.g. gradient descent). In this paper, we explore a different direction: we propose a novel framework for learning in which the parameters themselves are solutions of ODEs. By viewing the optimization process as the evolution of a port-Hamiltonian system, we can ensure convergence to a minimum of the objective function. Numerical experiments show the validity and effectiveness of the proposed methods.
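To illustrate the core idea, here is a minimal sketch (not the paper's exact formulation) of training-as-ODE: the parameters q, paired with conjugate momenta p, evolve as a dissipative (port-)Hamiltonian system dq/dt = p, dp/dt = -∇f(q) - γp, whose damping term drains energy until the trajectory settles at a minimum of the objective f. All names (`f`, `gamma`, `dt`) and the toy quadratic objective are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of port-Hamiltonian training dynamics:
#   dq/dt =  p                      (parameters driven by momenta)
#   dp/dt = -grad f(q) - gamma * p  (force from the loss, plus dissipation)
# The dissipation term -gamma * p makes the total energy decrease along
# trajectories, so the state converges to a minimum of f.

def f(q):
    return 0.5 * np.sum((q - 3.0) ** 2)  # toy quadratic objective, minimum at q = 3

def grad_f(q):
    return q - 3.0

def ph_train(q0, gamma=1.0, dt=0.01, steps=2000):
    """Integrate the dissipative Hamiltonian ODE with explicit Euler steps."""
    q = np.array(q0, dtype=float)
    p = np.zeros_like(q)
    for _ in range(steps):
        dq = p
        dp = -grad_f(q) - gamma * p
        q = q + dt * dq
        p = p + dt * dp
    return q

q_star = ph_train([10.0, -4.0])
print(np.round(q_star, 3))  # settles near the minimizer q = 3
```

In this view, the choice of damping γ and integration scheme replaces the usual learning-rate schedule; the energy of the system acts as a Lyapunov function certifying convergence.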

Original language: English
Title of host publication: 2019 IEEE 58th Conference on Decision and Control, CDC 2019
Number of pages: 8
ISBN (Electronic): 9781728113982
Publication status: Published - Dec 2019
Event: 58th IEEE Conference on Decision and Control, CDC 2019 - Nice Acropolis, Nice, France
Duration: 11 Dec 2019 - 13 Dec 2019
Conference number: 58

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
ISSN (Print): 0743-1546


Conference: 58th IEEE Conference on Decision and Control, CDC 2019
Abbreviated title: CDC 2019
