Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/88075
Title: Deep room recognition using inaudible echos
Authors: Song, Qun; Gu, Chaojie; Tan, Rui
Keywords: Room Recognition; Smartphone; Engineering::Computer science and engineering
Issue Date: 2018
Source: Song, Q., Gu, C., & Tan, R. (2018). Deep room recognition using inaudible echos. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 2(3), 135-. doi:10.1145/3264945
Conference: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)
Abstract: Recent years have seen an increasing need for location awareness by mobile applications. This paper presents a room-level indoor localization approach based on the measured room’s echos in response to a two-millisecond single-tone inaudible chirp emitted by a smartphone’s loudspeaker. Different from other acoustics-based room recognition systems that record full-spectrum audio for up to ten seconds, our approach records audio in a narrow inaudible band for 0.1 seconds only to preserve the user’s privacy. However, the short-time and narrowband audio signal carries limited information about the room’s characteristics, presenting challenges to accurate room recognition. This paper applies deep learning to effectively capture the subtle fingerprints in the rooms’ acoustic responses. Our extensive experiments show that a two-layer convolutional neural network fed with the spectrogram of the inaudible echos achieves the best performance, compared with alternative designs using other raw data formats and deep models. Based on this result, we design a RoomRecognize cloud service and its mobile client library that enable mobile application developers to readily implement the room recognition functionality without resorting to any existing infrastructure or add-on hardware. Extensive evaluation shows that RoomRecognize achieves 99.7%, 97.7%, 99%, and 89% accuracy in differentiating 22 and 50 residential/office rooms, 19 spots in a quiet museum, and 15 spots in a crowded museum, respectively. Compared with state-of-the-art approaches based on support vector machines, RoomRecognize significantly improves the Pareto frontier of recognition accuracy versus robustness against interfering sounds (e.g., ambient music).
URI: https://hdl.handle.net/10356/88075; http://hdl.handle.net/10220/49692
DOI: 10.1145/3264945
Schools: School of Computer Science and Engineering; Interdisciplinary Graduate School (IGS)
Research Centres: Energy Research Institute @ NTU (ERI@N)
Rights: © 2018 Association for Computing Machinery (ACM). All rights reserved. This paper was published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) and is made available with permission of Association for Computing Machinery (ACM).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: ERI@N Conference Papers; IGS Conference Papers; SCSE Conference Papers
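
The abstract above describes a pipeline in which a smartphone emits a 2 ms single-tone inaudible chirp, records roughly 0.1 s of narrowband echo, converts the recording to a spectrogram, and classifies it with a two-layer convolutional neural network. The snippet below is a minimal sketch of that kind of pipeline, not the authors' implementation: the 44.1 kHz sampling rate, the 19–21 kHz band, the STFT parameters, the layer sizes, and the `echo_spectrogram`/`TwoLayerCnn` names are all illustrative assumptions.

```python
# Illustrative sketch only: sampling rate, band edges, STFT settings, and
# layer sizes are assumptions for demonstration, not values from the paper.
import numpy as np
import torch
import torch.nn as nn
from scipy import signal

FS = 44_100          # assumed smartphone sampling rate (Hz)
CLIP_SECONDS = 0.1   # the paper records ~0.1 s of narrowband audio per chirp


def echo_spectrogram(audio: np.ndarray, fs: int = FS) -> torch.Tensor:
    """Log-power spectrogram of a 0.1 s echo clip, restricted to an
    assumed near-ultrasonic band (19-21 kHz here)."""
    f, _, sxx = signal.spectrogram(audio, fs=fs, nperseg=256, noverlap=192)
    band = (f >= 19_000) & (f <= 21_000)            # assumed inaudible band
    sxx_db = 10 * np.log10(sxx[band] + 1e-12)       # avoid log(0)
    return torch.from_numpy(sxx_db).float().unsqueeze(0)  # (1, freq, time)


class TwoLayerCnn(nn.Module):
    """Two convolutional layers plus a linear classifier, mirroring the
    'two-layer CNN on the echo spectrogram' design described in the abstract."""
    def __init__(self, num_rooms: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.LazyLinear(num_rooms)  # infers the flattened size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))


if __name__ == "__main__":
    clip = np.random.randn(int(FS * CLIP_SECONDS))  # stand-in for a recorded echo
    spec = echo_spectrogram(clip).unsqueeze(0)      # add batch dimension
    model = TwoLayerCnn(num_rooms=22)               # e.g., the 22-room experiment
    logits = model(spec)
    print("predicted room index:", logits.argmax(dim=1).item())
```

In the reported experiments the classifier distinguishes up to 50 rooms, so `num_rooms` would simply be set to the number of rooms enrolled; a real deployment would of course train on labeled echo recordings rather than the random stand-in clip used here.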
Files in This Item:
File | Description | Size | Format
---|---|---|---
Deep Room Recognition Using Inaudible Echos.pdf | | 4.21 MB | Adobe PDF