Abstract
Neural networks are discrete entities: they are subdivided into discrete layers and parametrized by weights that are iteratively optimized via difference equations. Recent work proposes networks whose layer outputs are no longer discrete but are instead solutions of an ordinary differential equation (ODE); however, these networks are still optimized via discrete methods (e.g., gradient descent). In this paper, we explore a different direction: namely, we propose a novel framework for learning in which the parameters themselves are solutions of ODEs. By viewing the optimization process as the evolution of a port-Hamiltonian system, we can ensure convergence to a minimum of the objective function. Numerical experiments demonstrate the validity and effectiveness of the proposed methods.
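As a loose illustration of the idea (a minimal sketch, not the paper's actual construction), one can cast optimization as the flow of a damped Hamiltonian system, a simple instance of a port-Hamiltonian system with dissipation: the Hamiltonian H(q, p) = L(q) + ½‖p‖² decreases along trajectories, so the parameters q converge to a minimizer of the loss L. The toy quadratic objective and the names `grad_L`, `gamma` below are illustrative assumptions:

```python
import numpy as np

# Toy objective: L(theta) = 0.5 * ||theta - theta_star||^2 (illustrative choice)
theta_star = np.array([1.0, -2.0])

def grad_L(theta):
    return theta - theta_star

def simulate(q0, gamma=1.0, dt=0.01, steps=5000):
    """Integrate the damped Hamiltonian flow
        dq/dt = p
        dp/dt = -grad L(q) - gamma * p
    with a semi-implicit Euler scheme. Dissipation (gamma > 0) drains
    H(q, p) = L(q) + 0.5 * ||p||^2, driving q toward a minimum of L."""
    q = np.asarray(q0, dtype=float).copy()
    p = np.zeros_like(q)
    for _ in range(steps):
        p = p + dt * (-grad_L(q) - gamma * p)  # momentum update with damping
        q = q + dt * p                         # parameter update
    return q

q_final = simulate([5.0, 5.0])
print(q_final)  # close to theta_star = [1, -2]
```

Here the damping term `-gamma * p` plays the role of the dissipative port; without it the trajectory would oscillate around the minimum instead of converging.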
Original language | English |
---|---|
Title of host publication | 2019 IEEE 58th Conference on Decision and Control, CDC 2019 |
Publisher | IEEE |
Pages | 6799-6806 |
Number of pages | 8 |
ISBN (Electronic) | 9781728113982 |
DOIs | |
Publication status | Published - Dec 2019 |
Event | 58th IEEE Conference on Decision and Control, CDC 2019 - Nice Acropolis, Nice, France |
Duration | 11 Dec 2019 → 13 Dec 2019 |
Conference number | 58 |
Internet address | https://cdc2019.ieeecss.org/ |
Publication series
Name | Proceedings of the IEEE Conference on Decision and Control |
---|---|
Volume | 2019-December |
ISSN (Print) | 0743-1546 |
Conference
Conference | 58th IEEE Conference on Decision and Control, CDC 2019 |
---|---|
Abbreviated title | CDC 2019 |
Country/Territory | France |
City | Nice |
Period | 11/12/19 → 13/12/19 |
Internet address | https://cdc2019.ieeecss.org/ |