Please use this identifier to cite or link to this item:
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wang, Xiangrui | en
dc.identifier.citation | Wang, X. (2008). Refining learning models in grammatical inference. Doctoral thesis, Nanyang Technological University, Singapore. | en
dc.description.abstract | Grammatical inference is a branch of computational learning theory that addresses the problem of learning grammatical models from string samples; in other words, grammatical inference tries to identify the computational models that generate the sample strings. In recent decades, the explosion of information has made efficient ways of searching and organizing data increasingly important, so processing information computationally with grammatical inference methods is of great interest. In real applications, space cost plays an important role in performance. In this thesis, we address the problem of refining learning models in grammatical inference. For regular language learning, we formally introduce the notion of a “classification automaton”, which reduces model size by identifying a single automaton for multiple string classes. The classification automaton is shown to reduce model size by 30% compared with a straightforward multiple-automata approach on house-rent data obtained from the public folder of the Microsoft Exchange Server of Nanyang Technological University. In real-world applications, there is always a maximum possible length for the strings; based on this observation, we further introduce cover automata for grammatical inference, which simplify a learning model given a maximum length limit. Test results on the Splice-junction Gene Sequence database demonstrate that the method reduces model size by 32% compared with the widely used deterministic finite automaton model. Mixed k-th order Markov chains capture stochastic information for all possible substrings of length up to k+1; however, their space cost is exponential. We introduce the use of recurrent neural networks (RNNs) and present a pruning learning method to avoid the exponential space cost, which entails a tradeoff between accuracy and model size. We propose a balancing method and present test results on the Splice-junction Gene Sequence database, which demonstrate that the method reduces model size by a factor of 105 while accuracy drops from 80% to 76%. We then introduce the profile-based alignment learning (PBAL) framework to refine the model for context-free grammar learning. PBAL extends the existing alignment-based learning (ABL) framework by using statistical information to refine alignments and, in turn, the learned grammatical rules. In experiments, the converged rules improve alignment precision from 50% to 90% while reducing model size to 47% of the original. | en
dc.format.extent | 187 p. | en
dc.subject | DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence | en
dc.subject | DRNTU::Engineering::Computer science and engineering::Theory of computation::Mathematical logic and formal languages | en
dc.title | Refining learning models in grammatical inference | en
dc.contributor.supervisor | Narendra Shivaji Chaudhari | en
dc.contributor.school | School of Computer Engineering | en
dc.description.degree | DOCTOR OF PHILOSOPHY (SCE) | en
item.fulltext | With Fulltext | -
Appears in Collections: SCSE Theses
Files in This Item:
File | Description | Size | Format
WangXiangrui08.pdf | Main report | 706.89 kB | Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.