TY - UNPB
T1 - Dynamic Sparse Network for Time Series Classification
T2 - Learning What to "see"
AU - Xiao, Qiao
AU - Wu, Boqian
AU - Zhang, Yu
AU - Liu, Shiwei
AU - Pechenizkiy, Mykola
AU - Mocanu, Elena
AU - Mocanu, Decebal Constantin
N1 - Accepted at the Conference on Neural Information Processing Systems (NeurIPS 2022)
PY - 2022/12/19
Y1 - 2022/12/19
AB - The receptive field (RF), which determines the region of a time series to be "seen" and used, is critical to improving performance in time series classification (TSC). However, the variation of signal scales across and within time series data makes it challenging to decide on proper RF sizes for TSC. In this paper, we propose a dynamic sparse network (DSN) with sparse connections for TSC, which can learn to cover various RF sizes without cumbersome hyper-parameter tuning. The kernels in each sparse layer are sparse and can be explored under constrained regions by dynamic sparse training, which makes it possible to reduce the resource cost. Experimental results show that the proposed DSN model achieves state-of-the-art performance on both univariate and multivariate TSC datasets at less than 50% of the computational cost of recent baseline methods, opening the path towards more accurate, resource-aware methods for time series analyses. Our code is publicly available at: https://github.com/QiaoXiao7282/DSN.
KW - cs.LG
KW - cs.AI
U2 - 10.48550/arXiv.2212.09840
DO - 10.48550/arXiv.2212.09840
M3 - Preprint
BT - Dynamic Sparse Network for Time Series Classification
PB - arXiv.org
ER -