Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/88075
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Song, Qun | en |
dc.contributor.author | Gu, Chaojie | en |
dc.contributor.author | Tan, Rui | en |
dc.date.accessioned | 2019-08-20T05:51:45Z | en |
dc.date.accessioned | 2019-12-06T16:55:28Z | - |
dc.date.available | 2019-08-20T05:51:45Z | en |
dc.date.available | 2019-12-06T16:55:28Z | - |
dc.date.issued | 2018 | en |
dc.identifier.citation | Song, Q., Gu, C., & Tan, R. (2018). Deep room recognition using inaudible echos. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 2(3), 135-. doi:10.1145/3264945 | en |
dc.identifier.uri | https://hdl.handle.net/10356/88075 | - |
dc.description.abstract | Recent years have seen an increasing need for location awareness in mobile applications. This paper presents a room-level indoor localization approach based on the measured room’s echos in response to a two-millisecond single-tone inaudible chirp emitted by a smartphone’s loudspeaker. Different from other acoustics-based room recognition systems that record full-spectrum audio for up to ten seconds, our approach records audio in a narrow inaudible band for 0.1 seconds only, to preserve the user’s privacy. However, the short-time, narrowband audio signal carries limited information about the room’s characteristics, presenting challenges to accurate room recognition. This paper applies deep learning to effectively capture the subtle fingerprints in the rooms’ acoustic responses. Our extensive experiments show that a two-layer convolutional neural network fed with the spectrogram of the inaudible echos achieves the best performance, compared with alternative designs using other raw data formats and deep models. Based on this result, we design a RoomRecognize cloud service and its mobile client library that enable mobile application developers to readily implement the room recognition functionality without resorting to any existing infrastructure or add-on hardware. Extensive evaluation shows that RoomRecognize achieves 99.7%, 97.7%, 99%, and 89% accuracy in differentiating 22 and 50 residential/office rooms, 19 spots in a quiet museum, and 15 spots in a crowded museum, respectively. Compared with the state-of-the-art approaches based on support vector machines, RoomRecognize significantly improves the Pareto frontier of recognition accuracy versus robustness against interfering sounds (e.g., ambient music). | en |
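The abstract describes a pipeline of emitting a 2 ms single-tone inaudible chirp, recording for 0.1 s, and feeding a spectrogram to a CNN. A minimal sketch of that input stage is below; the sample rate (48 kHz), tone frequency (20 kHz), and STFT parameters are assumptions for illustration, as the record specifies only the two durations.

```python
import numpy as np

# Hypothetical parameters -- the record gives only the durations
# (2 ms chirp, 0.1 s recording); sample rate and tone frequency
# are assumptions for illustration.
FS = 48_000        # assumed loudspeaker/microphone sample rate (Hz)
TONE_HZ = 20_000   # assumed inaudible single-tone frequency (Hz)

def emit_chirp(duration_s=0.002, fs=FS, f=TONE_HZ):
    """Simulate the 2 ms single-tone chirp the phone would play."""
    t = np.arange(int(duration_s * fs)) / fs
    return np.sin(2 * np.pi * f * t)

def spectrogram(x, n_fft=256, hop=64):
    """Magnitude spectrogram via a simple Hann-windowed STFT."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win
              for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

# Simulated 0.1 s recording: the chirp plus noise standing in
# for the room's echo response.
rng = np.random.default_rng(0)
rec = np.zeros(int(0.1 * FS))
chirp = emit_chirp()
rec[:len(chirp)] += chirp
rec += 0.01 * rng.standard_normal(len(rec))

spec = spectrogram(rec)
print(spec.shape)  # (time frames, frequency bins), the CNN input
```

In the paper's design, a 2-D array of this shape would be the input to the two-layer convolutional network; the classifier itself is not sketched here.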
dc.format.extent | 28 p. | en |
dc.language.iso | en | en |
dc.rights | © 2018 Association for Computing Machinery (ACM). All rights reserved. This paper was published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) and is made available with permission of Association for Computing Machinery (ACM). | en |
dc.subject | Room Recognition | en |
dc.subject | Smartphone | en |
dc.subject | Engineering::Computer science and engineering | en |
dc.title | Deep room recognition using inaudible echos | en |
dc.type | Conference Paper | en |
dc.contributor.school | School of Computer Science and Engineering | en |
dc.contributor.school | Interdisciplinary Graduate School (IGS) | en |
dc.contributor.conference | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) | en |
dc.contributor.research | Energy Research Institute @ NTU (ERI@N) | en |
dc.identifier.doi | 10.1145/3264945 | en |
dc.description.version | Accepted version | en |
item.fulltext | With Fulltext | - |
item.grantfulltext | open | - |
Appears in Collections: | ERI@N Conference Papers; IGS Conference Papers; SCSE Conference Papers |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Deep Room Recognition Using Inaudible Echos.pdf | | 4.21 MB | Adobe PDF | View/Open |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.