Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/40059
Title: Optimizing model training for speech recognition
Authors: Chak, Hui Ping
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Pattern recognition
Issue Date: 2010
Abstract: Modern speech recognition systems are generally based on statistical models that output a sequence of symbols or quantities. These models can be trained automatically and are simple and computationally feasible to use. To reduce the long training time, model training can be distributed across many machines for parallel processing. Apache Hadoop is a Java software framework that uses the Map-Reduce architecture to support data-intensive parallel and distributed processing. The objective of this project is to tune the performance of model training for speech recognition by distributing and parallelizing the training process using the Hadoop framework. The performance of the optimization is measured for comparison and analysis. The report also shows how the legacy scripts are ported to the Map-Reduce architecture and discusses the issues and challenges involved. With the aid of the Swimlanes visualization tool [1] for understanding and tuning job performance, various methods of processing the training data are explored and discussed in the report. Performance is measured over 100 iterations of the model training process on 4 nodes for each of the methods discussed. The experimental results show that model training can be optimized by taking data locality into account in the software design.
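The abstract's core idea can be illustrated with a minimal sketch. This is not the report's actual scripts (which are not shown in this record); it is a hypothetical in-process simulation of one Map-Reduce iteration that accumulates per-symbol counts from utterance transcripts, the kind of sufficient statistics a statistical-model update consumes. On a real Hadoop cluster, the map tasks would be scheduled near the data blocks they read (the data-locality point the abstract makes), and the framework would shuffle keys to reducers; here both phases run locally.

```python
from collections import defaultdict

def map_phase(utterance):
    # Emit (symbol, 1) pairs for every symbol in one utterance,
    # analogous to a Hadoop Mapper processing one input record.
    return [(symbol, 1) for symbol in utterance.split()]

def shuffle(pairs):
    # Group values by key, as Hadoop's shuffle/sort stage does
    # between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Sum the partial counts for one symbol, analogous to a Reducer.
    return key, sum(values)

def run_iteration(utterances):
    # One full map -> shuffle -> reduce pass over the corpus.
    pairs = [pair for utt in utterances for pair in map_phase(utt)]
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

counts = run_iteration(["sil ah b ah", "sil b ah"])
# counts == {"sil": 2, "ah": 3, "b": 2}
```

In the report's setting, such an iteration would be repeated (100 times in the experiment), with each iteration's aggregated statistics feeding the next model update.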
URI: http://hdl.handle.net/10356/40059
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: SCE128.pdf
Description: Restricted Access
Size: 1.17 MB
Format: Adobe PDF


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.