Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/162582
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Liu, Haitao | en_US |
dc.contributor.author | Ong, Yew-Soon | en_US |
dc.contributor.author | Jiang, Xiaomo | en_US |
dc.contributor.author | Wang, Xiaofang | en_US |
dc.date.accessioned | 2022-10-31T05:34:18Z | - |
dc.date.available | 2022-10-31T05:34:18Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | Liu, H., Ong, Y., Jiang, X. & Wang, X. (2021). Modulating scalable Gaussian processes for expressive statistical learning. Pattern Recognition, 120, 108121. https://dx.doi.org/10.1016/j.patcog.2021.108121 | en_US |
dc.identifier.issn | 0031-3203 | en_US |
dc.identifier.uri | https://hdl.handle.net/10356/162582 | - |
dc.description.abstract | In a learning task, a Gaussian process (GP) learns the statistical relationship between inputs and outputs, offering not only the prediction mean but also the associated variability. The vanilla GP, however, struggles to learn complicated distributions exhibiting, e.g., heteroscedastic noise, multi-modality and non-stationarity from massive data, due to its Gaussian marginals and cubic complexity. To this end, this article studies new scalable GP paradigms, including the non-stationary heteroscedastic GP, the mixture of GPs and the latent GP, which introduce additional latent variables to modulate the outputs or inputs in order to learn richer, non-Gaussian statistical representations. In particular, we resort to different variational inference strategies to arrive at analytical or tighter evidence lower bounds (ELBOs) of the marginal likelihood for efficient and effective model training. Extensive numerical experiments against state-of-the-art GP and neural network (NN) counterparts on various tasks verify the superiority of these scalable modulated GPs, especially the scalable latent GP, for learning diverse data distributions. | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | Pattern Recognition | en_US |
dc.rights | © 2021 Elsevier Ltd. All rights reserved. | en_US |
dc.subject | Engineering::Computer science and engineering | en_US |
dc.title | Modulating scalable Gaussian processes for expressive statistical learning | en_US |
dc.type | Journal Article | en |
dc.contributor.school | School of Computer Science and Engineering | en_US |
dc.identifier.doi | 10.1016/j.patcog.2021.108121 | - |
dc.identifier.scopus | 2-s2.0-85108971042 | - |
dc.identifier.volume | 120 | en_US |
dc.identifier.spage | 108121 | en_US |
dc.subject.keywords | Gaussian Process | en_US |
dc.subject.keywords | Modulation | en_US |
dc.description.acknowledgement | This work was supported by the National Key Research and Development Program of China (2020YFA0714403), the National Natural Science Foundation of China (52005074), and the Fundamental Research Funds for the Central Universities (DUT19RC(3)070). In addition, it was partially supported by the Research and Innovation in Science and Technology Major Project of Liaoning Province (2019JH1-10100024) and the MIIT Marine Welfare Project (Z135060009002). | en_US |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
Appears in Collections: SCSE Journal Articles
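The abstract's idea of modulating a GP with input-dependent (heteroscedastic) noise can be illustrated with a toy sketch. Here `noise_fn` is a hypothetical, fixed stand-in for the latent modulating function — the paper learns it variationally rather than assuming it known — plugged into standard GP regression algebra:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel k(x, x') = s^2 * exp(-||x - x'||^2 / (2 l^2))
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def hetero_gp_predict(X, y, Xs, noise_fn, lengthscale=1.0, variance=1.0):
    """GP posterior with input-dependent noise: the noise variance added to the
    kernel diagonal varies per training point, modulating the likelihood."""
    K = rbf_kernel(X, X, lengthscale, variance) + np.diag(noise_fn(X))
    Ks = rbf_kernel(X, Xs, lengthscale, variance)
    Kss = rbf_kernel(Xs, Xs, lengthscale, variance)
    L = np.linalg.cholesky(K)                      # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                            # posterior mean at Xs
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)      # posterior latent variance
    return mean, var

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 50)[:, None]
noise_fn = lambda x: 0.01 + 0.2 * x.ravel()        # noise grows with the input
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0.0, np.sqrt(noise_fn(X)))
mean, var = hetero_gp_predict(X, y, X, noise_fn, lengthscale=0.2)
```

The predictive variance then inherits the modulation: it is larger near x = 1, where the toy noise function is larger, which is the qualitative behaviour the vanilla (homoscedastic) GP cannot express.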
SCOPUS™ Citations: 50 / 3 (Updated on Jan 31, 2023)
Web of Science™ Citations: 50 / 3 (Updated on Feb 4, 2023)
Page view(s): 13 (Updated on Feb 7, 2023)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.