Visualising the Training Process of Convolutional Neural Networks for Non-Experts

Michelle Peters, Lindsay Kempen, Meike Nauta, Christin Seifert

    Research output: Contribution to conference › Paper › peer-review



    Convolutional neural networks are very complex and not easily interpretable by humans. Several tools give more insight into the training process and decision making of neural networks, but they are not understandable for people with no or limited knowledge about artificial neural networks. Since these non-experts sometimes do need to rely on the decisions of a neural network, we developed an open-source tool that intuitively visualises the training process of a neural network. We visualise neuron activity using the dimensionality reduction method UMAP. By plotting neuron activity after every epoch, we create a video that shows how the neural network improves itself throughout the training phase. We evaluated our method by analysing the visualisation of a CNN trained on a sketch data set. We show how a video of the training over time gives more insight than a static visualisation at the end of training, as well as which features are useful to visualise for non-experts. We conclude that most of the useful deductions made from the videos are suitable for non-experts, which indicates that the visualisation tool might be helpful in practice.
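    The per-epoch visualisation loop described in the abstract can be sketched as follows. The paper uses UMAP for dimensionality reduction; scikit-learn's PCA stands in here so the sketch needs no extra dependency (swapping in `umap.UMAP(n_components=2)` from the umap-learn package would match the paper). The function name `record_activations` and the synthetic data are illustrative assumptions, not from the paper.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n_samples, n_neurons, n_epochs = 200, 64, 3

    def record_activations(epoch):
        # Placeholder: in practice, run a fixed evaluation set through the
        # CNN after each training epoch and collect a hidden layer's outputs.
        return rng.normal(size=(n_samples, n_neurons)) + epoch

    frames = []
    for epoch in range(n_epochs):
        acts = record_activations(epoch)
        # Project the high-dimensional neuron activity to 2-D for plotting
        # (the paper uses UMAP here; PCA is a dependency-free stand-in).
        embedding = PCA(n_components=2).fit_transform(acts)
        frames.append(embedding)  # one scatter-plot frame per epoch

    # Each frame is an (n_samples, 2) array; rendering the frames in
    # sequence yields the training video described in the paper.
    print(len(frames), frames[0].shape)
    ```

    Colouring each point by its class label would make the gradual separation of classes over epochs visible to a non-expert viewer.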
    Original language: English
    Publication status: Published - 2019
    Event: 31st Benelux Conference on Artificial Intelligence, BNAIC 2019 - Ateliers Des Tanneurs, Brussels, Belgium
    Duration: 6 Nov 2019 - 8 Nov 2019
    Conference number: 31


    Conference: 31st Benelux Conference on Artificial Intelligence, BNAIC 2019
    Abbreviated title: BNAIC

