Title: Communication interface for bone-conducted sounds
Authors: Suryani Simon Turtan
Keywords: DRNTU::Engineering::Electrical and electronic engineering::Control and instrumentation::Medical electronics
Issue Date: 2010
Abstract: Technology has been a major factor in advancing the human way of living. By integrating technology into daily activities and processes, automated and interactive smart systems are now widely available around us. One domain in which technology has been extensively developed is the medical environment, and many disabled people have benefited from these developments; speech recognition systems, for example, have been applied to help hearing-impaired people communicate with the world. This project aimed to advance sound recognition technology in order to enhance the lives of speech-impaired people, including paralyzed patients in hospitals. Classification, together with feature extraction, is one of the main processes in a sound recognition system. The Mel Frequency Cepstral Coefficient (MFCC) feature extraction method is implemented because of its ability to simulate human hearing processes, with a Proximal Support Vector Machine (PSVM) as the classifier. The main objective of the project is to improve the original classification system so that it can process bone-conducted sound, which is produced by the vibration of bone and body surface when words are non-audibly articulated. Studies and experiments on semi-supervised learning methods and learning strategies to enhance the classifier's performance were also conducted.
URI: http://hdl.handle.net/10356/38859
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
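The abstract names an MFCC front end feeding a PSVM classifier but, being a catalog record, gives no implementation details. As a rough illustration only, the sketch below computes MFCCs for a single signal frame in pure Python (Hamming window, naive DFT power spectrum, mel triangular filterbank, DCT-II); the frame length, sample rate, and filter/coefficient counts are assumptions, not values taken from the report:

```python
import math

def mfcc_frame(frame, sample_rate=8000, n_filters=12, n_ceps=8):
    """Compute MFCCs for one pre-framed signal chunk (illustrative sketch)."""
    n = len(frame)
    # Hamming window to reduce spectral leakage
    windowed = [s * (0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)))
                for i, s in enumerate(frame)]
    # Power spectrum via a naive DFT (positive-frequency bins only)
    n_bins = n // 2 + 1
    power = []
    for k in range(n_bins):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(windowed))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(windowed))
        power.append((re * re + im * im) / n)
    # Mel-spaced triangular filterbank (mel scale approximates human pitch perception)
    def hz_to_mel(f): return 2595.0 * math.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    top_mel = hz_to_mel(sample_rate / 2.0)
    bin_pts = [int(n * mel_to_hz(i * top_mel / (n_filters + 1)) / sample_rate)
               for i in range(n_filters + 2)]
    log_energies = []
    for j in range(1, n_filters + 1):
        lo, mid, hi = bin_pts[j - 1], bin_pts[j], bin_pts[j + 1]
        e = 0.0
        for k in range(lo, hi):
            if k < mid and mid != lo:
                w = (k - lo) / (mid - lo)      # rising slope of the triangle
            elif k >= mid and hi != mid:
                w = (hi - k) / (hi - mid)      # falling slope of the triangle
            else:
                w = 0.0
            if 0 <= k < n_bins:
                e += w * power[k]
        log_energies.append(math.log(e + 1e-10))
    # DCT-II of the log filterbank energies gives the cepstral coefficients
    return [sum(le * math.cos(math.pi * c * (m + 0.5) / n_filters)
                for m, le in enumerate(log_energies))
            for c in range(n_ceps)]

# Example: one 64-sample frame of a 500 Hz tone sampled at 8 kHz
tone = [math.sin(2 * math.pi * 500 * t / 8000) for t in range(64)]
coeffs = mfcc_frame(tone)
print(len(coeffs))  # 8 coefficients for this frame
```

In a full pipeline such frame-level coefficient vectors, stacked over time, would form the feature input to a classifier such as the PSVM mentioned in the abstract.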
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.