Learning Optimized Structure of Neural Networks by Hidden Node Pruning with L1 Regularization
Article Type
Research Article
Publication Title
IEEE Transactions on Cybernetics
Abstract
We propose three different methods, based on L1 regularization, to determine the optimal number of hidden nodes of a multilayer perceptron network. The first two methods attach, respectively, a set of multiplier functions and a set of multipliers to the hidden-layer nodes and apply the L1 regularization to them, while the third method, equipped with the same multipliers, uses a smoothing approximation of the L1 regularization. Each method begins with a given number of hidden nodes; the network is then trained to obtain an optimal architecture, discarding redundant hidden nodes via the multiplier functions or multipliers. A simple and generic method, namely, the matrix-based convergence proving method (MCPM), is introduced to prove the weak and strong convergence of the presented smoothing algorithms. The performance of the three pruning methods has been tested on 11 different classification datasets. The results demonstrate the efficient pruning ability and competitive generalization of the proposed methods, and the experimental results also validate the theoretical findings.
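The sketch below illustrates the multiplier-based pruning idea described in the abstract; it is not the authors' implementation. Each hidden node's output is scaled by a trainable multiplier c_h, a smoothed approximation of the L1 penalty, sqrt(c^2 + mu^2), drives redundant multipliers toward zero during training, and nodes with near-zero multipliers are then pruned. The network sizes, learning rate, penalty weight lam, smoothing parameter mu, and pruning threshold are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of hidden-node pruning
# via L1-regularized multipliers with a smoothing approximation.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classification data (illustrative stand-in for a real dataset).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

H = 20                       # deliberately oversized hidden layer
W1 = rng.normal(scale=0.5, size=(4, H))
W2 = rng.normal(scale=0.5, size=(H, 1))
c = np.ones(H)               # one trainable multiplier per hidden node
lam, mu, lr = 1e-2, 1e-3, 0.5

for epoch in range(2000):
    # Forward pass: hidden outputs are scaled by their multipliers.
    h = sigmoid(X @ W1)
    hc = h * c
    out = sigmoid(hc @ W2)

    # Objective: mean squared error + smoothed L1 penalty on multipliers,
    # i.e. 0.5*mean(err^2) + lam * sum(sqrt(c^2 + mu^2)).
    err = out - y

    # Backward pass (gradients derived by hand for this small network).
    d_out = err * out * (1 - out) / len(X)
    g_W2 = hc.T @ d_out
    d_hc = d_out @ W2.T
    # Gradient of the smoothed |c_h| is c_h / sqrt(c_h^2 + mu^2).
    g_c = np.sum(d_hc * h, axis=0) + lam * c / np.sqrt(c**2 + mu**2)
    g_W1 = X.T @ (d_hc * c * h * (1 - h))

    W1 -= lr * g_W1
    W2 -= lr * g_W2
    c -= lr * g_c

# Prune hidden nodes whose multipliers were driven (near) zero.
keep = np.abs(c) > 1e-2      # assumed pruning threshold
print(f"kept {keep.sum()} of {H} hidden nodes")
W1, W2, c = W1[:, keep], W2[keep], c[keep]
```

In this sketch the penalty gradient c / sqrt(c^2 + mu^2) smoothly approaches sign(c) as mu shrinks, which is the role the abstract's smoothing approximation plays: it keeps the objective differentiable at zero while still pushing redundant multipliers toward zero.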
First Page
1333
Last Page
1346
DOI
10.1109/TCYB.2019.2950105
Publication Date
3-1-2020
Recommended Citation
Xie, Xuetao; Zhang, Huaqing; Wang, Junze; Chang, Qin; Wang, Jian; and Pal, Nikhil R., "Learning Optimized Structure of Neural Networks by Hidden Node Pruning with L1 Regularization" (2020). Journal Articles. 378.
https://digitalcommons.isical.ac.in/journal-articles/378