Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/161253
Full metadata record
DC Field | Value | Language
dc.contributor.author | Liu, Bo | en_US
dc.contributor.author | Ding, Zhengtao | en_US
dc.contributor.author | Lv, Chen | en_US
dc.date.accessioned | 2022-08-22T07:28:26Z | -
dc.date.available | 2022-08-22T07:28:26Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Liu, B., Ding, Z. & Lv, C. (2019). Distributed training for multi-layer neural networks by consensus. IEEE Transactions on Neural Networks and Learning Systems, 31(5), 1771-1778. https://dx.doi.org/10.1109/TNNLS.2019.2921926 | en_US
dc.identifier.issn | 2162-237X | en_US
dc.identifier.uri | https://hdl.handle.net/10356/161253 | -
dc.description.abstract | Over the past decade, there has been a growing interest in large-scale and privacy-concerned machine learning, especially in the situation where the data cannot be shared due to privacy protection or cannot be centralized due to computational limitations. Parallel computation has been proposed to circumvent these limitations, usually based on the master-slave and decentralized topologies, and the comparison study shows that a decentralized graph could avoid the possible communication jam on the central agent but incur extra communication cost. In this brief, a consensus algorithm is designed to allow all agents over the decentralized graph to converge to each other, and the distributed neural networks with enough consensus steps could have nearly the same performance as the centralized training model. Through the analysis of convergence, it is proved that all agents over an undirected graph could converge to the same optimal model even with only a single consensus step, and this can significantly reduce the communication cost. Simulation studies demonstrate that the proposed distributed training algorithm for multi-layer neural networks without data exchange could exhibit comparable or even better performance than the centralized training model. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | IEEE Transactions on Neural Networks and Learning Systems | en_US
dc.rights | © 2019 IEEE. All rights reserved. | en_US
dc.subject | Engineering::Mechanical engineering | en_US
dc.subject | Engineering::Electrical and electronic engineering | en_US
dc.title | Distributed training for multi-layer neural networks by consensus | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Mechanical and Aerospace Engineering | en_US
dc.identifier.doi | 10.1109/TNNLS.2019.2921926 | -
dc.identifier.pmid | 31265422 | -
dc.identifier.scopus | 2-s2.0-85081545178 | -
dc.identifier.issue | 5 | en_US
dc.identifier.volume | 31 | en_US
dc.identifier.spage | 1771 | en_US
dc.identifier.epage | 1778 | en_US
dc.subject.keywords | Backpropagation | en_US
dc.subject.keywords | Consensus | en_US
item.grantfulltext | none | -
item.fulltext | No Fulltext | -
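The abstract above outlines consensus-based distributed training: each agent trains on its own private data and then averages its parameters with its neighbours over a communication graph, so that all agents drift toward a common model without exchanging raw data. The sketch below illustrates that general idea only; the function names, the logistic model (a stand-in for a multi-layer network trained by back-propagation), the ring-graph mixing matrix, and all hyper-parameters are illustrative assumptions, not the algorithm from the paper.

```python
# Minimal sketch (assumption, not the paper's actual algorithm): each agent
# takes a gradient step on its private data, then runs a consensus
# (weighted-averaging) step with its neighbours over an undirected graph.
import numpy as np

def local_gradient_step(w, X, y, lr=0.1):
    """One gradient step of a logistic model on an agent's private data
    (stand-in for back-propagation through a multi-layer network)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
    grad = X.T @ (p - y) / len(y)      # gradient of the logistic loss
    return w - lr * grad

def consensus_step(weights, A):
    """One consensus step: every agent replaces its parameters with a
    weighted average of all agents' parameters according to the mixing
    matrix A built from the communication graph."""
    return [sum(A[i, j] * weights[j] for j in range(len(weights)))
            for i in range(len(weights))]

# Toy run with 3 agents on a ring graph (all values are illustrative).
rng = np.random.default_rng(0)
n_agents, n_features = 3, 5
true_w = rng.normal(size=n_features)

# Each agent only sees its own private shard of the data.
data = []
for _ in range(n_agents):
    X = rng.normal(size=(50, n_features))
    y = (X @ true_w + 0.1 * rng.normal(size=50) > 0).astype(float)
    data.append((X, y))

# Symmetric, doubly stochastic mixing matrix for a 3-agent ring.
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

weights = [np.zeros(n_features) for _ in range(n_agents)]
for epoch in range(200):
    # Local training on private data (no raw data is exchanged).
    weights = [local_gradient_step(w, X, y) for w, (X, y) in zip(weights, data)]
    # A single consensus step per iteration.
    weights = consensus_step(weights, A)

spread = max(np.linalg.norm(w - weights[0]) for w in weights)
print(f"max disagreement between agents after training: {spread:.2e}")
```

The mixing matrix in this sketch is symmetric and doubly stochastic, which corresponds to the undirected-graph setting in which, per the abstract, all agents are proved to converge to the same optimal model even with only a single consensus step per iteration.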
Appears in Collections: MAE Journal Articles

SCOPUS™ Citations: 23 (updated on Mar 1, 2024)
Web of Science™ Citations: 16 (updated on Oct 28, 2023)
Page view(s): 90 (updated on Mar 4, 2024)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.