Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/184517
Title: Spectrum-based modality representation fusion graph convolutional network for multimodal recommendation
Authors: Ong, Kenneth Rongqing; Khong, Andy W. H.
Keywords: Engineering
Issue Date: 2025
Source: Ong, K. R. & Khong, A. W. H. (2025). Spectrum-based modality representation fusion graph convolutional network for multimodal recommendation. 18th ACM International Conference on Web Search and Data Mining (WSDM '25), 773-781. https://dx.doi.org/10.1145/3701551.3703561
Conference: 18th ACM International Conference on Web Search and Data Mining (WSDM '25)
Abstract: Incorporating multi-modal features as side information has recently become a trend in recommender systems. To elucidate user-item preferences, recent studies fuse modalities via concatenation, element-wise sum, or attention mechanisms. Despite notable success, existing approaches do not account for the modality-specific noise encapsulated within each modality; direct fusion of modalities therefore amplifies cross-modality noise. Moreover, because the noise varies uniquely within each modality, noise alleviation and fusion become more challenging. In this work, we propose a new Spectrum-based Modality Representation (SMORE) fusion graph recommender that aims to capture both uni-modal and fusion preferences while simultaneously suppressing modality noise. Specifically, SMORE projects the multi-modal features into the frequency domain and leverages the spectral space for fusion. To reduce the dynamic contamination unique to each modality, we introduce a filter that adaptively attenuates and suppresses the modality noise while effectively capturing universal modality patterns. Furthermore, we explore the item latent structures by designing a new multi-modal graph learning module that captures associative semantic correlations and universal fusion patterns among similar items. Finally, we formulate a new modality-aware preference module, which infuses behavioral features and balances the uni- and multi-modal features for precise preference modeling. This empowers SMORE to infer both user modality-specific and fusion preferences more accurately. Experiments on three real-world datasets show the efficacy of the proposed model. The source code for this work is publicly available at https://github.com/kennethorq/SMORE.
URI: https://hdl.handle.net/10356/184517
URL: http://arxiv.org/abs/2412.14978v1
ISBN: 9798400713293
DOI: 10.1145/3701551.3703561
Schools: School of Electrical and Electronic Engineering
Rights: © 2025 Copyright held by the owner/author(s). This work is licensed under a Creative Commons Attribution 4.0 International License.
Fulltext Permission: open
Fulltext Availability: With Fulltext
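The frequency-domain fusion described in the abstract can be sketched in miniature as follows. This is an illustrative toy, not the authors' implementation (see the linked GitHub repository for that): a fixed low-pass mask stands in for SMORE's learnable adaptive filter, and an element-wise spectral product stands in for its fusion operator.

```python
import numpy as np

def spectral_fusion(visual, textual, keep_ratio=0.7):
    """Fuse two (num_items, dim) modality embedding matrices in the
    frequency domain, attenuating high-frequency components as a crude
    proxy for modality-specific noise."""
    # Project each modality into the frequency domain along the feature axis.
    v_spec = np.fft.rfft(visual, axis=-1)
    t_spec = np.fft.rfft(textual, axis=-1)
    # Illustrative fixed low-pass filter: keep only the lowest
    # `keep_ratio` fraction of frequency bins (hypothetical choice;
    # SMORE learns this filter adaptively).
    n_bins = v_spec.shape[-1]
    mask = np.zeros(n_bins)
    mask[: int(np.ceil(keep_ratio * n_bins))] = 1.0
    # Fuse in spectral space, then transform back to the feature domain.
    fused_spec = (v_spec * mask) * (t_spec * mask)
    return np.fft.irfft(fused_spec, n=visual.shape[-1], axis=-1)

rng = np.random.default_rng(0)
items_visual = rng.standard_normal((5, 16))
items_textual = rng.standard_normal((5, 16))
fused = spectral_fusion(items_visual, items_textual)
print(fused.shape)  # (5, 16)
```

The point of fusing in spectral space rather than directly adding or concatenating features is that noise localized in particular frequency bands can be suppressed before the modalities interact, rather than being amplified by the fusion step.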
Appears in Collections: EEE Conference Papers
Files in This Item:
File | Description | Size | Format
---|---|---|---
3701551.3703561.pdf | | 5.34 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.