A Neural Network Model for Matrix Factorization: Dimensionality Reduction

Document Type

Conference Article

Publication Title

Proceedings of IEEE Asia-Pacific Conference on Computer Science and Data Engineering, CSDE 2022

Abstract

Dimensionality reduction is one of the most commonly used approaches for dealing with complex high-dimensional datasets. In this context, a shallow neural network model for non-negative matrix factorization has been developed for low-rank approximation. We use hierarchical learning to leverage the ubiquity of non-negative input data and generate part-based, sparse, and meaningful representations. A modification of the He initialization technique is proposed to initialize the weights while maintaining the non-negativity constraint of the model. A necessary modification of the ReLU activation function is made to prevent all neurons in a layer from adjusting their weights simultaneously. Regularization is applied in the model's objective function to reduce the risk of overfitting. To demonstrate the efficacy of the proposed model, we analyze and compare the results with six well-known dimensionality reduction methods on five popular clustering datasets. We also discuss the computational complexity of the model.
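
The abstract does not give implementation details, so the following is only a minimal sketch of the general idea in Python/NumPy: a rank-r factorization X ≈ WH fitted by projected gradient descent, with a non-negative variant of He initialization (here simply the absolute value of He-scaled Gaussian draws, an assumption) and an L2 penalty on both factors. The function names, hyperparameters, and the use of projected gradients in place of the paper's modified-ReLU shallow network are illustrative assumptions, not the authors' method.

import numpy as np

def nonneg_he_init(fan_in, shape, rng):
    # He-style scale sqrt(2 / fan_in), folded to non-negative values.
    # (Assumption: the paper's modified He initialization is not specified here.)
    return np.abs(rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape))

def shallow_nmf(X, rank, n_iter=2000, lr=0.5, lam=1e-3, seed=0):
    """Low-rank approximation X ~ W @ H with W, H >= 0.

    Minimizes 0.5*||X - WH||_F^2 / (m*n) + 0.5*lam*(||W||_F^2 + ||H||_F^2)
    by projected gradient descent (an illustrative stand-in for the paper's
    shallow network; learning rate and iterations may need tuning per dataset)."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    W = nonneg_he_init(n, (m, rank), rng)
    H = nonneg_he_init(rank, (rank, n), rng)
    for _ in range(n_iter):
        R = (W @ H - X) / (m * n)             # scaled reconstruction residual
        grad_W = R @ H.T + lam * W
        grad_H = W.T @ R + lam * H
        W = np.maximum(W - lr * grad_W, 0.0)  # project onto the non-negative orthant
        H = np.maximum(H - lr * grad_H, 0.0)
    return W, H

# Example: factor a small non-negative matrix into rank-10 parts and check the fit.
X = np.abs(np.random.default_rng(1).normal(size=(100, 60)))
W, H = shallow_nmf(X, rank=10)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative reconstruction error

In line with the evaluation described in the abstract, the low-dimensional factors (for example the columns of W, or H as a reduced representation of the samples) could then be passed to a clustering algorithm to compare against other dimensionality reduction methods.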

DOI

10.1109/CSDE56538.2022.10089284

Publication Date

1-1-2022
