Algorithms and Bounds in Online Learning.

Date of Submission

December 2016

Date of Award

Winter 12-12-2017

Institute Name (Publisher)

Indian Statistical Institute

Document Type

Master's Dissertation

Degree Name

Master of Technology

Subject Name

Computer Science

Department

Machine Intelligence Unit (MIU-Kolkata)

Supervisor

Murthy, C. A. (MIU-Kolkata; ISI)

Abstract (Summary of the Work)

Online learning is the process of answering a sequence of questions, where the learner receives the correct answer to each question after making its prediction. The goal is to make as few mistakes as possible, in expectation, over the entire sequence. Online learning is studied in several research areas, such as game theory, information theory and machine learning, whose settings closely resemble it.

An online learning framework has two main components: the learning algorithm, also known as the learner, and the hypothesis class, which is essentially a set of functions. The learner uses this set of functions to predict the answers (labels) to the questions it is asked. The class may be finite or infinite. Sometimes the class contains functions that can answer the entire sequence of questions correctly. In that case, the learner's goal is to identify such functions as early as possible during the learning rounds, so that no further mistakes are made in the remaining rounds. This setting, in which the hypothesis class contains functions that answer the entire sequence correctly, is called the realizable case.

The hypothesis class may, however, contain no function that answers the entire sequence correctly. In that case, the learner has to rely on all of the available functions and combine them intelligently to predict the answers. The learner's goal then becomes to make not many more mistakes than the best of the available functions would have made. This setting, in which the hypothesis class contains no function that answers the entire sequence correctly, is called the unrealizable or agnostic case.

There are various learning algorithms for each of these settings, and all of them are expected to make as few mistakes as possible. Their performance is analyzed through the expected number of mistakes over all possible orderings of the sequence of questions. This dissertation proposes three algorithms that improve the mistake bound in the agnostic case. The proposed algorithms perform substantially better than the existing ones in the long run when most of the input sequences presented to the learner are likely to be realizable.
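As background for the setting described above, the following minimal Python sketch illustrates the two cases over a finite hypothesis class using standard textbook baselines: the Halving algorithm for the realizable case and the Weighted Majority algorithm for the agnostic case. These are illustrative only and are not the three algorithms proposed in the dissertation; the toy threshold hypothesis class and the example question stream are hypothetical.

# Sketch of the online-learning protocol over a finite hypothesis class with
# binary labels. Halving and Weighted Majority are well-known baselines, not
# the dissertation's proposed algorithms.

def halving(hypotheses, stream):
    """Realizable case: keep only the hypotheses consistent with past answers.

    Makes at most log2(|H|) mistakes when some hypothesis in `hypotheses`
    labels the whole stream correctly.
    """
    version_space = list(hypotheses)
    mistakes = 0
    for x, y in stream:                       # question x, correct answer y
        votes = sum(h(x) for h in version_space)
        prediction = 1 if 2 * votes >= len(version_space) else 0
        if prediction != y:
            mistakes += 1
        # Discard every hypothesis that answered this question wrongly.
        version_space = [h for h in version_space if h(x) == y]
    return mistakes


def weighted_majority(hypotheses, stream, beta=0.5):
    """Agnostic case: no hypothesis need be perfect, so wrong hypotheses are
    down-weighted rather than discarded. The mistake count is within a
    constant factor of the best hypothesis in hindsight, plus O(log |H|).
    """
    weights = [1.0] * len(hypotheses)
    mistakes = 0
    for x, y in stream:
        pos = sum(w for w, h in zip(weights, hypotheses) if h(x) == 1)
        neg = sum(w for w, h in zip(weights, hypotheses) if h(x) == 0)
        prediction = 1 if pos >= neg else 0
        if prediction != y:
            mistakes += 1
        # Multiplicatively penalise the hypotheses that were wrong.
        weights = [w * (beta if h(x) != y else 1.0)
                   for w, h in zip(weights, hypotheses)]
    return mistakes


if __name__ == "__main__":
    # Toy hypothesis class: threshold functions on the integers 0..9.
    hypotheses = [lambda x, t=t: int(x >= t) for t in range(10)]
    # A realizable stream: labels generated by the threshold-at-5 hypothesis.
    stream = [(x, int(x >= 5)) for x in [3, 7, 1, 9, 5, 2, 8]]
    print("Halving mistakes:", halving(hypotheses, stream))
    print("Weighted Majority mistakes:", weighted_majority(hypotheses, stream))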

Comments

ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28843306

Control Number

ISI-DISS-2016-336

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

DOI

http://dspace.isical.ac.in:8080/jspui/handle/10263/6493

