Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/52421
Title: Exploiting auxiliary data for designing reliable classifier in domain adaptation
Authors: Seah, Chun Wei.
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2012
Abstract: To date, many practical realizations of machine intelligence serve as important tools that assist humans in their decision-making processes. A motivating example is sentiment rating prediction on user reviews, used to craft novel marketing strategies for newly launched products. However, the insufficiency of labelled data gathered on newly launched products often impedes the performance of supervised classifiers, especially when the problem involves multiple classes. A reliable supervised classifier, on the other hand, typically requires sufficient labelled samples as a prerequisite. A common remedy in the literature is transductive learning, which exploits assumptions made on unlabelled samples as auxiliary data. Some progress on binary classification has since been made in this area. Nevertheless, to date there has been a lack of studies on the ordinal regression problem (multiple classes with ordinal information) under limited labelled data, which is prominent in sentiment rating prediction applications. Taking this cue, the first work in this dissertation addresses the general ordinal regression problem under limited labelled data in a transductive setting.
URI: http://hdl.handle.net/10356/52421
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Theses

Files in This Item:
File: PpMain.pdf (Restricted Access)
Size: 2.39 MB
Format: Adobe PDF

Page view(s): 206 (checked on Oct 23, 2020)

Download(s): 11 (checked on Oct 23, 2020)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.