Convolutional neural networks are complex and not easily interpretable by humans. Several tools give insight into the training process and decision making of neural networks, but they are not understandable for people with little or no knowledge of artificial neural networks. Since these non-experts sometimes need to rely on the decisions of a neural network, we developed an open-source tool that intuitively visualises the training process of a neural network. We visualise neuron activity using the dimensionality reduction method UMAP. By plotting neuron activity after every epoch, we create a video that shows how the neural network improves throughout the training phase. We evaluated our method by analysing the visualisation of a CNN trained on a sketch data set. We show how a video of the training over time gives more insight than a static visualisation at the end of training, and which features are useful to visualise for non-experts. We conclude that most of the useful deductions made from the videos are accessible to non-experts, which indicates that the visualisation tool may be helpful in practice.
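The per-epoch pipeline the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the paper's actual tool: the function name `epoch_embeddings` and the toy data are hypothetical, and the paper uses UMAP (available as the `umap-learn` package), for which PCA stands in here so the sketch stays self-contained.

```python
# Sketch: reduce each epoch's neuron activations to 2-D, one frame per epoch.
# The paper uses UMAP; PCA is a self-contained stand-in (see comment below).
import numpy as np
from sklearn.decomposition import PCA


def epoch_embeddings(activations_per_epoch, n_components=2):
    """Embed each epoch's (samples x neurons) activation matrix in 2-D.

    activations_per_epoch: list of arrays, one per training epoch.
    Returns a list of (samples x n_components) arrays, one video frame
    per epoch.
    """
    frames = []
    for acts in activations_per_epoch:
        # In the paper's tool this would be UMAP, e.g.:
        #   reducer = umap.UMAP(n_components=n_components)
        reducer = PCA(n_components=n_components)
        frames.append(reducer.fit_transform(acts))
    return frames


# Toy example: random "activations" for 3 epochs, 50 samples, 16 neurons.
rng = np.random.default_rng(0)
epochs = [rng.normal(size=(50, 16)) for _ in range(3)]
frames = epoch_embeddings(epochs)
```

Each returned frame is a 2-D scatter of the same samples; plotting the frames in sequence (e.g. with `matplotlib.animation`) yields the training video described above.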
Publication status: Published - 2019
Event: 31st Benelux Conference on Artificial Intelligence, BNAIC 2019, Ateliers Des Tanneurs, Brussels, Belgium
Duration: 6 Nov 2019 → 8 Nov 2019
Conference number: 31