Date of Submission

2-28-1987

Date of Award

2-28-1988

Institute Name (Publisher)

Indian Statistical Institute

Document Type

Doctoral Thesis

Degree Name

Doctor of Philosophy

Subject Name

Computer Science

Department

Theoretical Statistics and Mathematics Unit (TSMU-Kolkata)

Supervisor

Dutta Majumder, Dwijesh K.

Abstract (Summary of the Work)

The first task can involve one or more subtasks. For instance, it may require the design of a classifier on the basis of whatever prior knowledge there is of the feature space, or, given the design, the efficient estimation of the parameters of the classifier. The latter might involve estimating the density function itself if very little is known about the class-conditional feature distribution, or it may necessitate estimating the parameters of the feature distribution if it can be assumed to have some known form. It may also involve estimating the boundaries of the classes if even less is known about the feature space. A brief discussion of learning can be found in the next few sections. All learning activities require the assistance of a set of samples from the feature space, called the training set. If the correct labels of the samples in the training set are known, the learning that takes place with the help of these samples is called supervised learning; otherwise, it is termed nonsupervised learning. One special case of nonsupervised learning is self-supervised learning, in which the system is equipped with a feedback mechanism so that it can learn from its past actions. In most practical situations it is either expensive or difficult to provide labels for the training samples, so there is every possibility of having to learn with mislabeled ones. This may be due to random or systematic errors of observation or of the labeling process itself. In such situations, traditional approaches have to be modified so as to ensure that the characteristics of the method being used are not vitiated in this type of non-ideal situation.

Finally, Chapter 6 sums up the contributions made by this thesis to the theory of recursive estimation of parameters in the field of pattern recognition, particularly when there is a likelihood of training samples being mislabeled. It also contains suggestions for further research in this direction.

1.2 Learning

Learning has been of interest to psychologists and mathematicians for decades, and more recently to computer scientists. The interest of a psychologist or a mathematician in learning is to explain or describe the manner in which animals and men learn a variety of skills by observing the changes in their behaviour. Such an approach is termed a descriptive approach. A large number of models have been developed [11, 12, 13] for the purpose of describing mathematically the type of learning involved here. On the other hand, in systems theory and computer science, the aim is to develop a computer program or build a machine which will 'learn' to perform certain prespecified tasks. Such an approach is called the prescriptive approach [14].

Learning is often associated with a goal or a performance measure. For lack of sufficient information, the goal of learning may not be completely specified. In this context, learning has a dual role [15]:
1) compensate for insufficient information by appropriate data collection and processing;
2) in that process, incrementally move towards the ultimate goal.

In systems theory and computer science, learning has been implemented in many ways:
1. the use of stochastic approximation methods [16, 17];
2. inductive inferential techniques [18].
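The abstract refers to recursive estimation of classifier parameters via stochastic approximation. The sketch below is only a minimal illustration of that general idea, a Robbins-Monro style running-mean update, and not the estimators developed in the thesis; the function name recursive_mean_estimate, the 1/n step-size schedule, and the Gaussian toy data are assumptions chosen purely for the example.

    import numpy as np

    def recursive_mean_estimate(samples, step=lambda n: 1.0 / n):
        # Robbins-Monro style recursive (sequential) estimate of a mean:
        #   theta_n = theta_{n-1} + a_n * (x_n - theta_{n-1}),
        # where the step size a_n = 1/n reproduces the ordinary sample mean.
        theta = 0.0
        for n, x in enumerate(samples, start=1):
            theta += step(n) * (x - theta)
        return theta

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        data = rng.normal(loc=2.0, scale=1.0, size=10_000)
        print(recursive_mean_estimate(data))  # converges towards 2.0

With a_n = 1/n the recursion is exactly the sample mean computed one observation at a time; the thesis's concern, as the abstract notes, is how such recursive estimation schemes behave when some of the training labels are wrong.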

Comments

ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28843319

Control Number

ISILib-TH190

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

DOI

http://dspace.isical.ac.in:8080/jspui/handle/10263/2146

Included in

Mathematics Commons
