|Title:||Automatic closed caption generation from video files|
|Authors:||Tan, Kenneth Chengwei|
|Keywords:||DRNTU::Engineering::Computer science and engineering::Computer systems organization::Computer system implementation|
|Issue Date:||2014|
|Abstract:||The idea of speech recognition using computers and software is not new. However, for years, its rather low accuracy and constantly changing variables, such as a speaker's accent and background noise, have resulted in a low adoption rate (Challenges in adopting speech recognition, 2004), until a boom in the medical industry with the adoption of electronic health records (EHRs) (Speech Recognition Booms As EHR Adoption Grows, 2013). Speech recognition was long reserved for specialized and educational purposes, but it is now reaching mainstream industries and permeating the everyday lives of its users, for example in voice commands for smartphones, dictation software for personal computers and even home automation (Say What? Google Works to Improve YouTube Auto-Captions for the Deaf, 2011). Through speech recognition, the effort and time traditionally spent entering large amounts of text into a computer manually can be cut down drastically. Manually creating a closed caption file for a video requires the transcriber to listen, enter and finally synchronize the closed captions to the video's audio track, which can be tedious and time-consuming. Using speech recognition, this process can be automated to produce closed captions at a fraction of the time and effort previously required. This final year project report documents the development of an application that automatically generates closed captions from an input video file. It also discusses current speech recognition technology, as well as potential improvements and modifications that may be added to the application in future. The project commenced in August 2013 and was completed in February 2014.|
|URI:||http://hdl.handle.net/10356/59631|
|Rights:||Nanyang Technological University|
|Fulltext Permission:||restricted|
|Fulltext Availability:||With Fulltext|
|Appears in Collections:||SCSE Student Reports (FYP/IA/PA/PI)|