Polynomial PCA for Face Recognition.

Date of Submission

December 2009

Date of Award

Winter 12-12-2010

Institute Name (Publisher)

Indian Statistical Institute

Document Type

Master's Dissertation

Degree Name

Master of Technology

Subject Name

Computer Science

Department

Machine Intelligence Unit (MIU-Kolkata)

Supervisor

Murthy, C. A. (MIU-Kolkata; ISI)

Abstract (Summary of the Work)

PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in high-dimensional data. PCA finds the orthogonal directions that account for the largest amount of variance; the data are then projected into the subspace spanned by these directions. It is a way of identifying patterns in data and of expressing the data so as to highlight their similarities and differences. Since patterns can be hard to find in data of high dimension, where the luxury of graphical representation is not available, PCA is a powerful tool for analysing such data. The other main advantage of PCA is that, once these patterns have been found, the data can be compressed by reducing the number of dimensions without much loss of information; this is the technique used in image compression. Some mathematical and statistical background is required to understand PCA, such as covariance, standard deviation, eigenvectors and eigenvalues.

The following steps are required to perform a principal component analysis on a set of data (a short sketch of these steps appears after the list):

• Step 1: Get some data. First we collect data of some dimensionality on which PCA will be applied.
• Step 2: Subtract the mean. We subtract the mean from each data dimension; the mean subtracted is the average across that dimension.
• Step 3: Calculate the covariance matrix. In this step we calculate the variance-covariance matrix of the given data, which is a square matrix.
• Step 4: Calculate the eigenvectors and eigenvalues of the covariance matrix. Since the covariance matrix is square, we can calculate its eigenvectors and eigenvalues. These are rather important, as they tell us useful information about the data; in particular, the sum of the eigenvalues equals the sum of the variances.
• Step 5: Choose components and form a feature vector. This is where the notion of data compression and reduced dimensionality comes in. The eigenvector with the highest eigenvalue is the principal component of the data set. To be precise, if we have n dimensions in our data, we calculate n eigenvectors and eigenvalues; if we then keep only the p eigenvectors with the largest eigenvalues, the final data set has only p dimensions.
• Step 6: Derive the new data set. This is the final step of PCA. Once we have chosen the components (eigenvectors) we wish to keep and formed a feature vector, we take the transpose of that vector and multiply it on the left of the original data set, transposed: FinalData = RowFeatureVector × RowDataAdjust, where RowFeatureVector is the matrix with the eigenvectors in its rows (most significant eigenvector at the top), and RowDataAdjust is the mean-adjusted data transposed, i.e. each column is a data item and each row holds a separate dimension.
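A minimal NumPy sketch of the six steps above, for illustration only; the function and variable names (pca, row_feature_vector, etc.) are assumptions and do not come from the dissertation itself.

import numpy as np

def pca(data, p):
    """Reduce an (n_samples x n_dims) array to p dimensions via PCA.

    Follows the listed steps: mean-subtract, form the covariance matrix,
    eigendecompose it, keep the p eigenvectors with the largest
    eigenvalues, and project the mean-adjusted data onto them.
    """
    # Step 2: subtract the mean of each dimension
    mean = data.mean(axis=0)
    row_data_adjust = data - mean

    # Step 3: covariance matrix (n_dims x n_dims)
    cov = np.cov(row_data_adjust, rowvar=False)

    # Step 4: eigenvalues/eigenvectors of the symmetric covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)

    # Step 5: keep the p eigenvectors with the largest eigenvalues
    order = np.argsort(eigvals)[::-1][:p]
    row_feature_vector = eigvecs[:, order].T   # eigenvectors as rows

    # Step 6: FinalData = RowFeatureVector x RowDataAdjust
    final_data = row_feature_vector @ row_data_adjust.T
    return final_data.T, row_feature_vector, mean

# Example usage: project 100 random 5-dimensional points onto 2 components
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    reduced, components, mean = pca(X, p=2)
    print(reduced.shape)   # (100, 2)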

Comments

ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28843467

Control Number

ISI-DISS-2009-236

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

DOI

http://dspace.isical.ac.in:8080/jspui/handle/10263/6393

