Regularizing Multilayer Perceptron for Robustness

Article Type

Research Article

Publication Title

IEEE Transactions on Systems, Man, and Cybernetics: Systems

Abstract

The weights of a multilayer perceptron (MLP) may be altered by multiplicative and/or additive noise when the network is implemented in hardware. Moreover, an MLP implemented using analog circuits is prone to stuck-at-0 faults, i.e., link failures. In this paper, we propose a methodology for making an MLP robust with respect to link failures, multiplicative noise, and additive noise. This is achieved by penalizing the system error with three regularizing terms. To train the system, we minimize a weighted sum of the following four terms: 1) the mean squared error (MSE); 2) the l2 norm of the weight vector; 3) the sum of squares of the first-order derivatives of the MSE with respect to the weights; and 4) the sum of squares of the second-order derivatives of the MSE with respect to the weights. The proposed approach has been tested on ten regression and ten classification tasks under link failure, multiplicative noise, and additive noise scenarios. Our experimental results demonstrate the effectiveness of the proposed regularization in achieving robust training of an MLP.
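To make the objective concrete, the training criterion can be sketched as a weighted sum of the four terms listed above. The following JAX sketch is illustrative only: the network size, the penalty weights lam1, lam2, lam3, the use of the squared l2 norm, and the reading of the second-order term as the diagonal of the Hessian are assumptions, not the paper's exact formulation.

```python
import jax
import jax.numpy as jnp

# Illustrative network size (assumption): 4 inputs, 8 hidden units, 1 output.
D_IN, D_HID, D_OUT = 4, 8, 1
N_W = D_IN * D_HID + D_HID + D_HID * D_OUT + D_OUT  # 49 weights in total

def unpack(w):
    """Split the flat weight vector into layer weights and biases."""
    i = 0
    w1 = w[i:i + D_IN * D_HID].reshape(D_IN, D_HID); i += D_IN * D_HID
    b1 = w[i:i + D_HID];                             i += D_HID
    w2 = w[i:i + D_HID * D_OUT].reshape(D_HID, D_OUT); i += D_HID * D_OUT
    b2 = w[i:i + D_OUT]
    return w1, b1, w2, b2

def mse(w, x, y):
    """Term 1: mean squared error of a one-hidden-layer MLP."""
    w1, b1, w2, b2 = unpack(w)
    pred = jnp.tanh(x @ w1 + b1) @ w2 + b2
    return jnp.mean((pred - y) ** 2)

def robust_loss(w, x, y, lam1=1e-4, lam2=1e-3, lam3=1e-3):
    """Weighted sum of the four terms described in the abstract."""
    g = jax.grad(mse)(w, x, y)                   # dMSE/dw_i
    h = jnp.diagonal(jax.hessian(mse)(w, x, y))  # d^2 MSE/dw_i^2 (assumed diagonal form)
    return (mse(w, x, y)               # 1) MSE
            + lam1 * jnp.sum(w ** 2)   # 2) squared l2 norm of the weight vector
            + lam2 * jnp.sum(g ** 2)   # 3) sum of squared first-order derivatives
            + lam3 * jnp.sum(h ** 2))  # 4) sum of squared second-order derivatives
```

Training on this criterion, e.g. via jax.value_and_grad(robust_loss)(w, x, y), implicitly requires third-order derivatives of the MSE, which JAX obtains by differentiating through grad and hessian.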

First Page

1255

Last Page

1266

DOI

10.1109/TSMC.2017.2664143

Publication Date

August 1, 2018
