Neurocomputing | 2021

Robust and structural sparsity auto-encoder with L21-norm minimization

Abstract

The mean square error (MSE), the most commonly used cost function for auto-encoders, is sensitive to outliers and impulsive noise in real-world applications, which may misguide the training process. At the same time, the stacked auto-encoder (SAE) is a fully connected network whose parameter count grows rapidly as nodes and layers are added, which may cause over-fitting, high computational complexity, and large storage overhead. The robustness and sparseness of the auto-encoder therefore need further investigation. In this paper, we develop a robust and structurally sparse stacked auto-encoder with an L21-norm loss function and regularization (LR21-SAE). Our L21-norm loss function alleviates the negative impact of outlier samples and thus shows superior robustness. Our L21-norm regularization forces some rows/columns of the weight matrices to shrink entirely to zero, promoting sparse feature learning and a compact network. We validate the LR21-SAE model on several common datasets. Experimental results show that LR21-SAE is significantly robust to outlier noise in real-world data; it also yields a deep neural network with sparse node connections and notably fewer parameters than the original non-sparse network, while maintaining outstanding performance.
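This excerpt contains no code; as a rough illustration of the two L21-norm terms described in the abstract, a minimal PyTorch sketch might look like the following. The function names, the PyTorch framing, and the eps stabilizer are our own assumptions, not the authors' implementation.

    import torch

    def l21_loss(x, x_recon, eps=1e-8):
        # L21-norm reconstruction loss: sum over samples of the unsquared
        # l2 norm of each residual row. Each sample contributes linearly,
        # so outliers distort training far less than under squared (MSE) loss.
        # eps is a small stabilizer (our addition) to keep sqrt differentiable.
        return ((x - x_recon).pow(2).sum(dim=1) + eps).sqrt().sum()

    def l21_reg(weight, eps=1e-8):
        # L21-norm regularizer: sum of row-wise l2 norms of a weight matrix.
        # Drives entire rows toward exactly zero, giving structured sparsity
        # (whole nodes pruned) rather than scattered zero entries.
        return (weight.pow(2).sum(dim=1) + eps).sqrt().sum()

Training would then minimize l21_loss(x, model(x)) plus a weighted sum of l21_reg over the network's weight matrices, with the regularization weight trading reconstruction fidelity against structural sparsity.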

Volume 425
Pages 71-81
DOI 10.1016/j.neucom.2020.02.051
Language English
Journal Neurocomputing
