Regularizing Multilayer Perceptron for Robustness
IEEE Transactions on Systems, Man, and Cybernetics: Systems
The weights of a multilayer perceptron (MLP) may be altered by multiplicative and/or additive noise if the network is implemented in hardware. Moreover, if an MLP is implemented using analog circuits, it is prone to stuck-at-0 faults, i.e., link failures. In this paper, we propose a methodology for making an MLP robust with respect to link failures, multiplicative noise, and additive noise. This is achieved by penalizing the system error with three regularizing terms. To train the system, we use a weighted sum of the following four terms: 1) mean squared error (MSE); 2) l2 norm of the weight vector; 3) sum of squares of the first-order derivatives of the MSE with respect to the weights; and 4) sum of squares of the second-order derivatives of the MSE with respect to the weights. The proposed approach has been tested on ten regression and ten classification tasks under link failure, multiplicative noise, and additive noise scenarios. Our experimental results demonstrate the effectiveness of the proposed regularization in achieving robust training of an MLP.
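The composite objective described above can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a single linear neuron (rather than an MLP) so that the first- and second-order derivatives of the MSE have simple closed forms, and the penalty weights `lam1`, `lam2`, `lam3` are assumed placeholder values, not values from the paper. Note that for a linear model the second-derivative term is constant in the weights; it becomes weight-dependent only for a genuine MLP with nonlinear hidden units.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # inputs
t = X @ np.array([1.0, -2.0, 0.5])      # targets from a known linear map
w = rng.normal(size=3)                  # weights to be trained

lam1, lam2, lam3 = 1e-3, 1e-3, 1e-3     # assumed penalty weights (hypothetical)
lr, eps = 0.05, 1e-6

def composite_loss(w):
    e = X @ w - t                       # residuals
    mse = np.mean(e ** 2)               # term 1: mean squared error
    g = 2.0 * X.T @ e / len(t)          # dMSE/dw (first-order derivatives)
    h = 2.0 * np.sum(X ** 2, axis=0) / len(t)  # diag of d2MSE/dw2 (constant here)
    return (mse
            + lam1 * np.sum(w ** 2)     # term 2: l2 norm of the weights
            + lam2 * np.sum(g ** 2)     # term 3: squared first-order derivatives
            + lam3 * np.sum(h ** 2))    # term 4: squared second-order derivatives

# Minimize with central-difference gradient descent (illustrative only).
for _ in range(500):
    grad = np.array([(composite_loss(w + eps * np.eye(3)[i])
                      - composite_loss(w - eps * np.eye(3)[i])) / (2 * eps)
                     for i in range(3)])
    w -= lr * grad
```

Because the derivative penalties are minimized where the MSE gradient vanishes, with small penalty weights the trained `w` lands close to the true linear map while remaining flat (and hence less sensitive to weight perturbations) around the solution.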
Dey, Prasenjit; Nag, Kaustuv; Pal, Tandra; and Pal, Nikhil R., "Regularizing Multilayer Perceptron for Robustness" (2018). Journal Articles. 1292.