Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/167383
Title: Retraining SNN conversions: CNN to SNN for audio classification tasks
Authors: Chang, John Rong Qi
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Engineering::Computer science and engineering::Software::Software engineering
Issue Date: 2023
Publisher: Nanyang Technological University
Source: Chang, J. R. Q. (2023). Retraining SNN conversions: CNN to SNN for audio classification tasks. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/167383
Project: B2286-221
Abstract: Efficient yet powerful models are in high demand for their portability and affordability. Alongside methods such as model pruning, limiting neural network operations to sparse, event-driven spikes through Spiking Neural Networks (SNNs) aims to open a new direction in machine learning research. A significant amount of SNN literature builds upon mature work on artificial neural networks (ANNs) by migrating ANN architectures and parameters into SNNs and optimizing the migration to retain as much performance as possible. We spearhead a novel approach: the architecture is migrated and then retrained from scratch. We hypothesize that this direction will uncover the concepts that currently bottleneck improvements in the field of SNN conversions and, much like Transfer Learning, inspire future efforts to fine-tune a well-converted model through training. This paper presents our analysis of training Convolutional Neural Networks (CNNs) converted to SNNs on audio classification tasks. Results show that (1) SNN conversions consistently underperform CNNs by a small margin during training, and model complexity appears to be associated with the size of this margin; (2) converted SNNs do not necessarily approach the performance of their CNN counterparts asymptotically as the number of time-steps increases; (3) training SNNs from scratch is costly and impractical on current hardware, so dedicated SNN optimization techniques are necessary; and (4) making the SNN membrane decay rate learnable does not significantly affect performance.
This paper provides valuable insights into retraining converted SNNs for audio classification and serves as a reference for future studies and hardware implementation benchmarks.
URI: https://hdl.handle.net/10356/167383
Schools: School of Electrical and Electronic Engineering
Organisations: A*STAR Institute of Microelectronics
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
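The abstract's fourth finding concerns the membrane decay rate of the spiking neuron model. The sketch below is not the project's code; it illustrates the standard leaky integrate-and-fire (LIF) neuron that underlies CNN-to-SNN conversion, with the decay rate `beta` exposed as the parameter one could make learnable. The function name, defaults, and soft-reset choice are illustrative assumptions.

```python
def lif_forward(inputs, beta=0.9, threshold=1.0):
    """Simulate a single leaky integrate-and-fire neuron over discrete time-steps.

    beta      -- membrane decay rate (the quantity the project considers learning)
    threshold -- membrane potential at which the neuron emits a spike
    Returns the binary spike train, one entry per time-step.
    """
    v = 0.0          # membrane potential
    spikes = []
    for x in inputs:
        v = beta * v + x       # leaky integration: decay old potential, add input
        if v >= threshold:     # fire when the threshold is crossed
            spikes.append(1)
            v -= threshold     # soft reset: subtract threshold, keep the surplus
        else:
            spikes.append(0)
    return spikes
```

A constant input of 0.6 per step, for example, accumulates across steps until the potential crosses 1.0 and a spike is emitted; running for more time-steps (finding 2) gives the rate code more resolution but, per the abstract, does not guarantee convergence to CNN performance.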
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
---|---|---|---
B2286-221 Retraining SNN Conversions.pdf (Restricted Access) | | 1.34 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.