Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/40070
Full metadata record
DC Field | Value | Language
dc.contributor.author | Goh, Choon Tat. | -
dc.date.accessioned | 2010-06-10T02:15:37Z | -
dc.date.available | 2010-06-10T02:15:37Z | -
dc.date.copyright | 2010 | en_US
dc.date.issued | 2010 | -
dc.identifier.uri | http://hdl.handle.net/10356/40070 | -
dc.description.abstract | Neural networks have often performed only technical analysis in financial forecasting, whereas modern traders perform both fundamental and technical analysis to determine their next move. Volatility is a crucial factor that traders consider when deciding which trading signals to act on and when estimating expected returns. An autoregressive MLP-ARMA hybrid model combines the predictions of an Autoregressive (AR) model and a Moving Average (MA) model: a multilayer perceptron (MLP) produces a stationary time-series prediction for the Autoregressive Moving Average (ARMA) model to work on, the AR model predicts the general trend of the intraday price swing, and the MA model uses volatility to predict the intraday price fluctuation. An ensemble output is formed to further improve the prediction of the opening intraday price swing so that an opening trade can be determined. Traders often seek advice from a collective analysis of different risk appetites to arrive at an informed signal. A Partially Observable Markov Decision Process (POMDP) can generate an informed signal similar to that of such a collective analysis. Three types of traders with different risk appetites are modelled with the POMDP. The state of the stock market, identified with the Relative Strength Index (RSI) and the Exponential Moving Average (EMA), determines the appropriate trading policy among the reinforcement-learning policies. The selected trading policy directly reinforces the trading signals generated by the autoregressive model. Such an approach relies on the learnt trading strategies rather than on the predictive power of a model to generate profitable trading signals. (An illustrative sketch of the indicator and Q-learning components follows this record.) | en_US
dc.format.extent | 107 p. | en_US
dc.language.iso | en | en_US
dc.rights | Nanyang Technological University | -
dc.subject | DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence | en_US
dc.title | Incremental Q-learning Partially Observable Markov Decision Process intraday trading system | en_US
dc.type | Final Year Project (FYP) | en_US
dc.contributor.supervisor | Quek Hiok Chai | en_US
dc.contributor.school | School of Computer Engineering | en_US
dc.description.degree | Bachelor of Engineering (Computer Science) | en_US
dc.contributor.research | Centre for Computational Intelligence | en_US
item.fulltext | With Fulltext | -
item.grantfulltext | restricted | -
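To make the abstract above more concrete, the Python sketch below illustrates the two ingredients it names explicitly: RSI and EMA used to discretise the market state, and an incrementally updated tabular Q-learner that maps that state to a buy/hold/sell signal. The report itself is access-restricted, so everything here is an assumption for illustration only: the 14-period RSI, 12-period EMA, the coarse state encoding, the learning parameters, and all names (ema, rsi, market_state, QLearningTrader) are hypothetical and not taken from the report, and the POMDP belief-state machinery and the MLP-ARMA ensemble are not reproduced.

```python
import numpy as np

def ema(prices, span=12):
    """Exponential moving average of a price series (assumed 12-period span)."""
    prices = np.asarray(prices, dtype=float)
    alpha = 2.0 / (span + 1)
    out = np.empty_like(prices)
    out[0] = prices[0]
    for t in range(1, len(prices)):
        out[t] = alpha * prices[t] + (1 - alpha) * out[t - 1]
    return out

def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes."""
    deltas = np.diff(np.asarray(prices, dtype=float)[-(period + 1):])
    gains = deltas[deltas > 0].sum()
    losses = -deltas[deltas < 0].sum()
    if losses == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + gains / losses)

def market_state(prices):
    """Discretise the market into a coarse (momentum, trend) state."""
    r = rsi(prices)
    e = ema(prices)
    momentum = "overbought" if r > 70 else "oversold" if r < 30 else "neutral"
    trend = "uptrend" if e[-1] > e[-2] else "downtrend"
    return (momentum, trend)

ACTIONS = ("buy", "hold", "sell")

class QLearningTrader:
    """Tabular, incrementally updated Q-learner for one risk appetite.

    Three instances with different reward scalings could stand in for the
    conservative, moderate and aggressive traders mentioned in the abstract.
    """

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}              # (state, action) -> estimated value
        self.alpha = alpha       # learning rate
        self.gamma = gamma       # discount factor
        self.epsilon = epsilon   # exploration rate

    def choose(self, state):
        """Epsilon-greedy action selection over the learned Q-values."""
        if np.random.rand() < self.epsilon:
            return ACTIONS[np.random.randint(len(ACTIONS))]
        values = [self.q.get((state, a), 0.0) for a in ACTIONS]
        return ACTIONS[int(np.argmax(values))]

    def update(self, state, action, reward, next_state):
        """One incremental Q-learning step: Q <- Q + alpha * (TD error)."""
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)
```

In use, each new intraday bar would extend the price window, market_state(prices) would give the current state, QLearningTrader.choose(state) would propose a signal, and the realised return of that signal would be fed back through update(...) as the reward. Running three such learners with different reward scalings and combining their signals would approximate the collective analysis of different risk appetites described in the abstract.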
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
SCE09-357.pdf | Restricted Access | 2.71 MB | Adobe PDF
