Title: The study of word embedding representations in different domains
Authors: Seng, Jeremy Jie Min
Keywords: DRNTU::Engineering
Issue Date: 2016
Abstract: Word embedding has been a popular research topic since 2013, when Mikolov and his colleagues proposed several new algorithms. These algorithms, adapted from existing machine-learning architectures, allow machines to learn the meaning behind words in an unsupervised manner. The proposed algorithms can measure how close two words are in vector space using cosine similarity. However, much work remains to determine whether these methods can be extended to capture the context of a sentence or a paragraph using these cosine distances. As the proposed algorithms require a large collection of words, commonly referred to as a corpus in this report, the author wishes to find out whether a corpus built from Wikipedia articles can show the closeness of two words in different contexts.
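The cosine-similarity measure the abstract refers to can be sketched as follows. This is a minimal illustration using hand-picked toy vectors, not actual trained embeddings; real word2vec vectors typically have 100-300 dimensions and are learned from a corpus.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two word vectors:
    1.0 = same direction (very similar words),
    0.0 = orthogonal (unrelated),
    -1.0 = opposite direction."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-dimensional "embeddings" (illustrative values only).
king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.82, 0.15])
apple = np.array([0.1, 0.2, 0.95])

# Semantically related words should score higher than unrelated ones.
print(cosine_similarity(king, queen))  # high: similar direction
print(cosine_similarity(king, apple))  # low: different direction
```

In practice the vectors would come from a model trained on the Wikipedia corpus, and the same formula is what underlies the "closeness" comparisons described in the report.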
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: Adobe PDF, 2.19 MB (Restricted Access)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.