Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/138814
Full metadata record
DC Field | Value | Language
dc.contributor.author | Chandra, Rohitash | en_US
dc.contributor.author | Gupta, Abhishek | en_US
dc.contributor.author | Ong, Yew-Soon | en_US
dc.contributor.author | Goh, Chi-Keong | en_US
dc.date.accessioned | 2020-05-13T02:40:29Z | -
dc.date.available | 2020-05-13T02:40:29Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | Chandra, R., Gupta, A., Ong, Y.-S., & Goh, C.-K. (2018). Evolutionary multi-task learning for modular knowledge representation in neural networks. Neural Processing Letters, 47(3), 993-1009. doi:10.1007/s11063-017-9718-z | en_US
dc.identifier.issn | 1370-4621 | en_US
dc.identifier.uri | https://hdl.handle.net/10356/138814 | -
dc.description.abstract | The brain can be viewed as a complex modular structure with features of information processing through knowledge storage and retrieval. Modularity ensures that the knowledge is stored in a manner where any complications in certain modules do not affect the overall functionality of the brain. Although artificial neural networks have been very promising in prediction and recognition tasks, they are limited in terms of learning algorithms that can provide modularity in knowledge representation that could be helpful in using knowledge modules when needed. Multi-task learning enables learning algorithms to feature knowledge in general representation from several related tasks. There has not been much work done that incorporates multi-task learning for modular knowledge representation in neural networks. In this paper, we present multi-task learning for modular knowledge representation in neural networks via modular network topologies. In the proposed method, each task is defined by the selected regions in a network topology (module). Modular knowledge representation would be effective even if some of the neurons and connections are disrupted or removed from selected modules in the network. We demonstrate the effectiveness of the method using single hidden layer feedforward networks to learn selected n-bit parity problems of varying levels of difficulty. Furthermore, we apply the method to benchmark pattern classification problems. The simulation and experimental results, in general, show that the proposed method retains performance quality although the knowledge is represented as modules. | en_US
dc.description.sponsorship | NRF (Natl Research Foundation, S’pore) | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Neural Processing Letters | en_US
dc.rights | © 2017 Springer Science+Business Media, LLC. All rights reserved. | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Evolutionary multi-task learning for modular knowledge representation in neural networks | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.contributor.organization | Rolls-Royce@NTU Corporate Laboratory | en_US
dc.contributor.research | Singapore Institute of Manufacturing Technology | en_US
dc.identifier.doi | 10.1007/s11063-017-9718-z | -
dc.identifier.scopus | 2-s2.0-85030857577 | -
dc.identifier.issue | 3 | en_US
dc.identifier.volume | 47 | en_US
dc.identifier.spage | 993 | en_US
dc.identifier.epage | 1009 | en_US
dc.subject.keywords | Evolutionary Multitasking | en_US
dc.subject.keywords | Neuro-evolution | en_US
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
Appears in Collections: SCSE Journal Articles
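The abstract above describes a method in which each task is defined by a selected region (module) of a single-hidden-layer feedforward network, evolved with evolutionary multitasking and demonstrated on n-bit parity. The following is a minimal illustrative sketch of that idea, not the authors' algorithm: it assumes, purely for illustration, that the modules are nested prefixes of the hidden layer, that the multi-task objective is parity accuracy summed over modules, and that a simple (1+1) evolution strategy evolves the shared weights.

```python
# Illustrative sketch of evolutionary multi-task learning with modular topologies.
# Assumptions (not from the paper): nested hidden-layer modules, summed-accuracy
# fitness, and a (1+1) evolution strategy over the shared weight set.
import numpy as np

rng = np.random.default_rng(0)

def parity_data(n_bits):
    # All 2**n_bits binary patterns with their parity labels (1 = odd number of ones).
    X = np.array([[(i >> b) & 1 for b in range(n_bits)]
                  for i in range(2 ** n_bits)], dtype=float)
    return X, X.sum(axis=1) % 2

def forward(X, W1, b1, w2, b2, module):
    # Use only the first `module` hidden neurons -- the "selected region" for this task.
    H = np.tanh(X @ W1[:, :module] + b1[:module])
    return 1.0 / (1.0 + np.exp(-(H @ w2[:module] + b2)))

def accuracy(X, y, params, module):
    return float(np.mean((forward(X, *params, module) > 0.5) == y))

n_bits, hidden = 4, 8
modules = [4, 6, 8]                     # three tasks = nested modules of increasing size
X, y = parity_data(n_bits)

# Shared parameters: input->hidden weights, hidden biases, hidden->output weights, output bias.
params = [rng.normal(0.0, 0.5, (n_bits, hidden)),
          np.zeros(hidden),
          rng.normal(0.0, 0.5, hidden),
          0.0]

def multitask_fitness(p):
    # Summed accuracy over all task modules (the multi-task objective in this sketch).
    return sum(accuracy(X, y, p, m) for m in modules)

best = multitask_fitness(params)
for _ in range(20000):
    # (1+1) evolution strategy: perturb every shared parameter, keep the child if no worse.
    child = [p + rng.normal(0.0, 0.1, np.shape(p)) for p in params]
    f = multitask_fitness(child)
    if f >= best:
        params, best = child, f

for m in modules:
    print(f"module with {m} hidden neurons: parity accuracy = {accuracy(X, y, params, m):.2f}")
```

Because all modules share the same underlying weights, the smaller modules keep functioning when the extra neurons of the larger ones are ignored, which mirrors the abstract's claim that modular knowledge representation remains effective even when parts of the network are disrupted or removed.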

SCOPUS™ Citations: 17 (updated on Mar 2, 2021) · Publons™ Citations: 15 (updated on Mar 5, 2021) · Page view(s): 161 (updated on Jul 3, 2022)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.