Transductive Transfer Learning using Autoencoders.

Date of Submission

December 2016

Date of Award

Winter 12-12-2017

Institute Name (Publisher)

Indian Statistical Institute

Document Type

Master's Dissertation

Degree Name

Master of Technology

Subject Name

Computer Science

Department

Electronics and Communication Sciences Unit (ECSU-Kolkata)

Supervisor

Pal, Nikhil Ranjan (ECSU-Kolkata; ISI)

Abstract (Summary of the Work)

A major assumption made by traditional machine learning algorithms is that the training and test data come from the same domain; in other words, they are represented in the same feature space and follow the same distribution. In real-world scenarios, however, this assumption may be violated for various reasons, including differences in the marginal distributions, feature spaces, predictive distributions, or label spaces of the source and target domain datasets. In such scenarios, a special learning strategy called transfer learning is useful. Transfer learning gains knowledge while performing one task and then applies that knowledge to improve performance on a different but related task. In this thesis, we deal specifically with transductive transfer learning. In this setting, a labelled source domain dataset and an unlabelled target domain dataset are available; both domains share the same feature space but follow different marginal distributions. Our aim is to maximize classification accuracy on the target domain. To accomplish this, we propose two methods based on autoencoders. The first method is supervised: we extract features that not only encode information common to both domains but also have discriminating power for the source domain. The second method is unsupervised: we seek a representation of the target domain that is close to that of the source domain. To achieve this, we first train an autoencoder on the source dataset and then train another autoencoder on the target dataset that is kept similar to the first in terms of both weights and biases. We have tested our methods on three datasets of different types to demonstrate their generic nature. We also analyze the methods by discussing their pros and cons, and finally suggest some ideas for improving their performance further.
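The sketch below illustrates the second, unsupervised strategy described above; it is not the thesis code. It trains an autoencoder on the source data, then trains a second autoencoder on the unlabelled target data while penalising deviation of its weights and biases from the source autoencoder. The network sizes, the penalty weight lam, and the random stand-in data are illustrative assumptions.

import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """One-hidden-layer autoencoder; dimensions are illustrative."""
    def __init__(self, in_dim=64, hid_dim=16):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hid_dim)
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        return self.decoder(torch.sigmoid(self.encoder(x)))

def train(model, data, epochs=200, lr=1e-3, reference=None, lam=0.1):
    """Minimise reconstruction error; if `reference` is given, also keep the
    parameters (weights and biases) close to the reference autoencoder."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = mse(model(data), data)
        if reference is not None:
            # Coupling term: squared distance between corresponding parameters.
            for p, q in zip(model.parameters(), reference.parameters()):
                loss = loss + lam * torch.sum((p - q) ** 2)
        loss.backward()
        opt.step()
    return model

# Illustrative usage with random stand-ins for the source/target datasets.
source_x = torch.randn(500, 64)
target_x = torch.randn(400, 64) + 0.5            # shifted marginal distribution

src_ae = train(AutoEncoder(), source_x)                    # step 1: source autoencoder
tgt_ae = train(AutoEncoder(), target_x, reference=src_ae)  # step 2: coupled target autoencoder

# Target representations, intended to lie close to the source feature space.
with torch.no_grad():
    target_features = torch.sigmoid(tgt_ae.encoder(target_x))

The coupling term plays the role of keeping the target autoencoder's weights and biases near those learned on the source domain, so the extracted target features remain comparable to the source representation.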

Comments

ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28843380

Control Number

ISI-DISS-2016-342

Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.

DOI

http://dspace.isical.ac.in:8080/jspui/handle/10263/6499
