Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/75535
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Zhang, Linghan (-)
dc.date.accessioned: 2018-06-01T05:50:50Z (-)
dc.date.available: 2018-06-01T05:50:50Z (-)
dc.date.issued: 2018 (-)
dc.identifier.uri: http://hdl.handle.net/10356/75535 (-)
dc.description.abstract: Graphs are a rich and versatile data structure, widely used to represent data such as social networks, chemical compounds, and protein structures. Analytical tasks on graph data have attracted great attention in many domains, and effective graph analytics gives users deep insights into their data. However, due to the structural characteristics of graphs, the computational cost of graph analytics on large graph datasets can be very high. We discuss two recent frameworks inspired by advances in feature representation learning, neural networks, and graph kernels, namely PATCHY-SAN and subgraph2vec. We conducted experiments with the PATCHY-SAN and subgraph2vec frameworks on graph classification problems. On established benchmark datasets, we demonstrate that these two frameworks, despite taking different approaches, are efficient and competitive with state-of-the-art techniques. (en_US)
dc.format.extent: 55 p. (en_US)
dc.language.iso: en (en_US)
dc.rights: Nanyang Technological University (-)
dc.subject: DRNTU::Engineering::Computer science and engineering (en_US)
dc.title: Learning feature representation for subgraphs (en_US)
dc.type: Final Year Project (FYP) (en_US)
dc.contributor.supervisor: Chen Lihui (en_US)
dc.contributor.school: School of Electrical and Electronic Engineering (en_US)
dc.description.degree: Bachelor of Engineering (en_US)
item.grantfulltext: restricted (-)
item.fulltext: With Fulltext (-)
Appears in Collections:EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File: FYP Zhang Linghan.pdf (Restricted Access), 2.57 MB, Adobe PDF
