Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/13589
Title: Refining learning models in grammatical inference
Authors: Wang, Xiangrui
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
DRNTU::Engineering::Computer science and engineering::Theory of computation::Mathematical logic and formal languages
Issue Date: 2008
Source: Wang, X. (2008). Refining learning models in grammatical inference. Doctoral thesis, Nanyang Technological University, Singapore.
Abstract: Grammatical inference is a branch of computational learning theory that addresses the problem of learning grammatical models from string samples; in other words, it tries to identify the computational models that generate the sample strings. In the past decade, the explosion of information has made efficient ways of searching and organizing data increasingly important, so using grammatical inference methods to process information computationally is of great interest. In real applications, space cost plays an important role in performance. In this thesis, we address the problem of refining learning models in grammatical inference. For regular language learning, we formally introduce the notion of a “classification automaton”, which reduces model size by identifying a single automaton for multiple string classes. On house-rent data obtained from a public folder on Nanyang Technological University's Microsoft Exchange Server, the classification automaton is shown to reduce model size by 30% compared with a straightforward multiple-automata approach. In real-world applications there is always a maximum possible string length; based on this observation, we further introduce cover automata, which simplify the learned model under a maximum-length limit, for grammatical inference. Test results on the Splice-junction Gene Sequence database show that this method reduces model size by 32% compared with the widely used deterministic finite automaton (DFA) model. A mixed k-th order Markov chain captures stochastic information for all substrings of length up to k+1, but its space cost is exponential in k. We therefore introduce the use of recurrent neural networks (RNNs) and present a pruning-based learning method that avoids the exponential space cost. Because there is a trade-off between accuracy and model size, we propose a balancing method; test results on the Splice-junction Gene Sequence database show that it reduces model size by a factor of 10^5 while accuracy drops from 80% to 76%. Finally, we introduce the profile-based alignment learning (PBAL) framework to refine the learned model in context-free grammar learning. PBAL extends the existing Alignment-Based Learning (ABL) framework by using statistical information to refine alignments and, in turn, the learned grammar rules. Experiments show that the converged rules improve alignment precision from 50% to 90% while reducing model size to 47% of the original.
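
Note: as an illustration of the classification-automaton idea above, the following is a minimal sketch, assuming the model can be viewed as a single DFA whose accepting states carry class labels, so that one machine classifies strings into several classes at once. The class name, method names, and the toy even/odd language are hypothetical, not the thesis's actual construction.

from typing import Dict, Tuple, Optional

class ClassificationAutomaton:
    """One DFA whose accepting states are labelled with class names."""
    def __init__(self, start: int,
                 delta: Dict[Tuple[int, str], int],
                 label: Dict[int, str]):
        self.start = start    # initial state
        self.delta = delta    # partial transition function: (state, symbol) -> state
        self.label = label    # accepting state -> class label

    def classify(self, s: str) -> Optional[str]:
        """Return the class label of s, or None if s is rejected."""
        state = self.start
        for ch in s:
            if (state, ch) not in self.delta:
                return None               # dead end: no transition defined
            state = self.delta[(state, ch)]
        return self.label.get(state)      # None if the final state is unlabelled

# Toy example over {a, b}: classify strings by the parity of their a's.
# A single two-state machine replaces two separate DFAs (one per class).
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}
ca = ClassificationAutomaton(start=0, delta=delta,
                             label={0: "even", 1: "odd"})
print(ca.classify("abab"))  # even (two a's)
print(ca.classify("ab"))    # odd  (one a)

Sharing states across classes is what makes the single machine smaller than one automaton per class: structure common to several classes is stored once, which is consistent with the 30% size reduction reported above.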
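
Similarly, the exponential space cost attributed to mixed k-th order Markov chains can be made concrete. Over an alphabet Σ, such a chain keeps statistics for every context of length at most k, i.e. up to |Σ|^0 + |Σ|^1 + … + |Σ|^k contexts, which is exponential in k. The sketch below is an assumption-laden illustration: the class name, add-one smoothing, and back-off scoring are illustrative choices, not the thesis's method.

from collections import defaultdict
import math

class MixedMarkovChain:
    def __init__(self, k: int, alphabet_size: int = 4):
        self.k = k
        self.V = alphabet_size
        # counts[context][symbol] = occurrences of symbol after context
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, strings):
        for s in strings:
            for i in range(len(s)):
                # mixed model: record every context of length 0..k
                for order in range(min(self.k, i) + 1):
                    context = s[i - order:i]
                    self.counts[context][s[i]] += 1

    def log_prob(self, s: str) -> float:
        # score with the longest context seen in training, backing off
        # to shorter contexts; add-one smoothing over the alphabet
        total = 0.0
        for i in range(len(s)):
            for order in range(min(self.k, i), -1, -1):
                context = s[i - order:i]
                if context in self.counts:
                    c = self.counts[context]
                    total += math.log((c[s[i]] + 1) / (sum(c.values()) + self.V))
                    break
        return total

# Over the DNA alphabet {A, C, G, T} there are sum_{j=0..k} 4**j possible
# contexts of length <= k, so the table grows exponentially with k.
model = MixedMarkovChain(k=3, alphabet_size=4)
model.train(["ACGTACGT", "TACGGTAC"])
print(len(model.counts))        # distinct contexts actually observed
print(model.log_prob("ACGT"))   # log-likelihood under the trained chain

An RNN with a fixed number of weights avoids storing this table explicitly, which is the kind of space saving the pruning method in the abstract targets.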
URI: https://hdl.handle.net/10356/13589
DOI: 10.32657/10356/13589
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Theses

Files in This Item:
File: WangXiangrui08.pdf
Description: Main report
Size: 706.89 kB
Format: Adobe PDF

Page view(s): 302 (checked on Oct 21, 2020)

Download(s): 88 (checked on Oct 21, 2020)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.