Sparse neural networks (SNNs) address the high computational complexity of deep neural networks by using sparse connectivity between their layers while aiming to match the predictive performance of their dense counterparts. Pruning dense neural networks is among the most widely used methods for obtaining SNNs. However, the training cost of such methods can be prohibitive on low-resource devices; as a result, training SNNs sparsely from scratch, known as "sparse training" in the literature, has recently gained attention. In this talk, I will give a brief introduction to SNNs and recent advances in sparse training. Then, I will present how SNNs can be used to perform different tasks efficiently, with a focus on feature selection.
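For readers unfamiliar with the idea, the sketch below illustrates the simplest form of sparse training: a fixed random binary mask enforces sparse connectivity while the network is trained from scratch, and the magnitudes of the surviving input connections give one common feature-importance heuristic. Everything in the sketch (the network sizes, the static mask, the PyTorch setup) is an illustrative assumption for this abstract, not the specific methods covered in the talk.

    import torch

    # Minimal sketch of static sparse training: a two-layer MLP whose first
    # layer keeps only ~20% of its connections, fixed by a random binary mask.
    torch.manual_seed(0)

    in_dim, hidden, sparsity = 20, 16, 0.8
    fc1 = torch.nn.Linear(in_dim, hidden)
    fc2 = torch.nn.Linear(hidden, 1)
    mask = (torch.rand_like(fc1.weight) > sparsity).float()
    fc1.weight.data *= mask                      # enforce sparse connectivity

    x = torch.randn(256, in_dim)
    y = x[:, :3].sum(dim=1, keepdim=True)        # target depends on 3 features

    params = list(fc1.parameters()) + list(fc2.parameters())
    opt = torch.optim.SGD(params, lr=0.05)
    for step in range(500):
        pred = fc2(torch.relu(fc1(x)))
        loss = torch.nn.functional.mse_loss(pred, y)
        opt.zero_grad()
        loss.backward()
        fc1.weight.grad *= mask                  # pruned weights stay at zero
        opt.step()

    # A simple feature-selection heuristic with SNNs: rank input neurons by
    # the total magnitude of their remaining connections.
    importance = fc1.weight.abs().sum(dim=0)     # one score per input feature
    print(importance.topk(5).indices)            # informative features should rank high

More advanced sparse-training methods replace the static mask with one that evolves during training (pruning weak connections and regrowing new ones), but the masked-gradient pattern above is the common core.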