One-shot learning using Mixture of Variational Autoencoders: A generalization learning approach

D.C. Mocanu, E. Mocanu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

2 Citations (Scopus)
3 Downloads (Pure)

Abstract

Deep learning, even though it is very successful nowadays, traditionally needs very large amounts of labeled data to perform well on classification tasks. In an attempt to solve this problem, the one-shot learning paradigm, which makes use of just one labeled sample per class and prior knowledge, becomes increasingly important. In this paper, we propose a new one-shot learning method, dubbed MoVAE (Mixture of Variational AutoEncoders), to perform classification. Complementary to prior studies, MoVAE represents a paradigm shift in comparison with the usual one-shot learning methods, as it does not use any prior knowledge. Instead, it starts from zero knowledge and one labeled sample per class. Afterward, by using unlabeled data and the generalization learning concept (in a way, more as humans do), it is capable of gradually improving its performance by itself. Moreover, even if no unlabeled data are available, MoVAE can still perform well in one-shot learning classification. We demonstrate empirically the efficiency of our proposed approach on three datasets, i.e. handwritten digits (MNIST), fashion products (Fashion-MNIST), and handwritten characters (Omniglot), showing that MoVAE outperforms state-of-the-art one-shot learning algorithms.
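The abstract does not spell out the classification mechanism. A minimal sketch of one plausible reading, assuming one small VAE per class, classification by the lowest per-class negative ELBO, and a pseudo-labeling pass over unlabeled data as the "generalization learning" step, is given below; all class names, hyper-parameters, and the self-training rule are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only -- NOT the authors' implementation. It assumes one
# small VAE per class, classification by the lowest per-class negative ELBO,
# and self-training on confidently pseudo-labeled unlabeled data; all names
# and hyper-parameters below are hypothetical. Inputs are assumed to be
# flattened images scaled to [0, 1] (MNIST-like).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallVAE(nn.Module):
    def __init__(self, dim_in=784, dim_z=16):
        super().__init__()
        self.enc = nn.Linear(dim_in, 128)
        self.mu = nn.Linear(128, dim_z)
        self.logvar = nn.Linear(128, dim_z)
        self.dec = nn.Sequential(nn.Linear(dim_z, 128), nn.ReLU(),
                                 nn.Linear(128, dim_in))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar


def neg_elbo(vae, x):
    # Negative ELBO: Bernoulli reconstruction term + KL(q(z|x) || N(0, I)).
    recon, mu, logvar = vae(x)
    rec = F.binary_cross_entropy_with_logits(recon, x, reduction="none").sum(1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1)
    return rec + kl


class MoVAESketch:
    # One VAE per class; a sample is assigned to the class whose VAE gives
    # the lowest negative ELBO.
    def __init__(self, n_classes, dim_in=784):
        self.vaes = [SmallVAE(dim_in) for _ in range(n_classes)]

    def fit_class(self, c, x, steps=200, lr=1e-3):
        opt = torch.optim.Adam(self.vaes[c].parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            neg_elbo(self.vaes[c], x).mean().backward()
            opt.step()

    def predict(self, x):
        with torch.no_grad():
            scores = torch.stack([neg_elbo(v, x) for v in self.vaes], dim=1)
        return scores.argmin(dim=1)

    def self_train(self, x_unlabeled, threshold):
        # One plausible "generalization learning" step: pseudo-label unlabeled
        # samples and refine each VAE with its confident assignments.
        with torch.no_grad():
            scores = torch.stack([neg_elbo(v, x_unlabeled) for v in self.vaes], 1)
            best, labels = scores.min(dim=1)
        for c in range(len(self.vaes)):
            keep = (labels == c) & (best < threshold)
            if keep.any():
                self.fit_class(c, x_unlabeled[keep], steps=50)

In this sketch, fit_class would first be called once per class on its single labeled example, predict assigns a sample to the class whose VAE explains it best, and self_train feeds confidently pseudo-labeled unlabeled samples back into the corresponding VAE; the threshold and training schedule are purely illustrative.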
Original language: English
Title of host publication: 17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018
Editors: M. Dastani, G. Sukthankar, E. André, S. Koenig
Publisher: The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 2016-2018
Number of pages: 3
Volume: 3
ISBN (Print): 9781510868083
Publication status: Published - 11 Jul 2018
Externally published: Yes
Event: 17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018 - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018
Conference number: 17

Conference

Conference: 17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018
Abbreviated title: AAMAS
Country: Sweden
City: Stockholm
Period: 10/07/18 - 15/07/18

Keywords

  • Collective Intelligence
  • Generalization learning
  • One-shot learning
  • Semi-Supervised Learning
  • Variational AutoEncoders

Cite this

Mocanu, D. C., & Mocanu, E. (2018). One-shot learning using Mixture of Variational Autoencoders: A generalization learning approach. In M. Dastani, G. Sukthankar, E. André, & S. Koenig (Eds.), 17th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2018 (Vol. 3, pp. 2016-2018). The International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS).