Title: Usage-pattern-based scheduling schemes for power optimization in multi-core systems
Authors: Hou, Zhaoqi
Keywords: DRNTU::Engineering::Electrical and electronic engineering::Computer hardware, software and systems
Issue Date: 2017
Source: Hou, Z. (2017). Usage-pattern-based scheduling schemes for power optimization in multi-core systems. Doctoral thesis, Nanyang Technological University, Singapore.

Abstract: This thesis discusses scheduling schemes with application usage-pattern awareness for optimizing power dissipation in mobile multi-core computing systems. Power dissipation in such systems has become a crucial concern as the power wall arises, and one way to address it is to manage system resources with a focus on power awareness. Three main areas of work are presented in the thesis.

First, three scheduling schemes are proposed that decompose the instantaneous power dissipation into components in order to model the redundancies in application-inflicted power dissipation. The decomposition identifies the manageable component as the Thread Interference Power, which arises during context switching among multithreaded applications; different combinations of adjacent threads dissipate different amounts of Thread Interference Power. The relationship between application combination and power dissipation is studied and identified at system runtime. The thread-scheduling schemes are designed on the assumption that a mobile user's usage pattern is periodic, so the future application sequence, and hence the likely power dissipation, can be predicted from recorded history. The schemes presented in the thesis make scheduling decisions based on the instantaneous application combination and the predicted incoming application: the scheduler reorders application threads according to the amount of Thread Interference Power dissipated. The scheduler is evaluated in a simulated environment built with gem5 and McPAT, and the simulation results show that the usage-pattern-aware scheduling method reduces total system power dissipation by up to 19% compared with the same system without the proposed method.

Second, to incorporate the proposed scheduling schemes into a Linux scheduler, a method for quantifying the Thread Interference Power dissipation from system-monitored performance events is studied. Several power-dissipation problems are examined to generalize the proposed schemes into the functions of a scheduler: mutual cache evictions or invalidations by parallel processes, the corresponding data movements, context-switching overheads, and code-serialization bottlenecks caused by unfortunately scheduled synchronization and locking on shared resources are major contributors to the excessive computing power dissipation of modern chip multiprocessor (CMP) architectures. The proposed Linux scheduler manages these problems based on the user-specific application usage pattern identified by the Thread Interference Power model. To extend its scheduling capacity, novel Dynamic Voltage and Frequency Scaling (DVFS) techniques are developed for asymmetric ARM System-on-Chip (SoC) designs and tested on a three-cluster CPU setup. The proposed scheduler achieves up to a 22.2% greater reduction in power dissipation than the native Global Task Scheduler (GTS).

Finally, to improve the prediction-based power-dissipation monitoring mechanism of the proposed scheduler, neural-network algorithms are adopted to solve the problem without parametric assumptions. Each power component is sampled individually over a period of time and analysed as a time series; the Thread Interference Power components are then identified and isolated from the time-series power dissipation with the aid of a system of neural networks comprising an autoencoder, a restricted Boltzmann machine, and a recurrent neural network. With the neural-network implementation, the proposed scheduler reduces computation overhead by 73.3% and power dissipation by up to 8.1%.

URI: http://hdl.handle.net/10356/72031
DOI: 10.32657/10356/72031
Fulltext Permission: open
Fulltext Availability: With Fulltext
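The usage-pattern-based scheduling idea described in the abstract, predicting the next application from a periodic usage history and ordering threads so that adjacent pairs dissipate little Thread Interference Power, might be sketched very roughly as follows. All application names, the interference table, and the simple frequency-count predictor are hypothetical illustrations, not the thesis's actual models or measurements:

```python
# Rough, hypothetical sketch of usage-pattern-aware scheduling:
# predict the next app from a periodic usage history, then pick the
# thread order that minimizes adjacent-pair "interference power".
# The interference values below are made up for illustration; in the
# thesis they would be identified at runtime from measured power.
from collections import Counter
from itertools import permutations

INTERFERENCE = {  # assumed pairwise interference power (arbitrary units)
    frozenset({"browser", "music"}): 1.0,
    frozenset({"browser", "game"}): 4.0,
    frozenset({"music", "game"}): 2.5,
    frozenset({"browser", "mail"}): 1.5,
    frozenset({"music", "mail"}): 0.5,
    frozenset({"game", "mail"}): 3.0,
}

def predict_next(history, current):
    """Predict the next app as the one that most often followed
    `current` in the (assumed periodic) usage history."""
    followers = Counter(b for a, b in zip(history, history[1:]) if a == current)
    return followers.most_common(1)[0][0] if followers else None

def interference_cost(order):
    """Sum the interference power between adjacent threads in a schedule."""
    return sum(INTERFERENCE.get(frozenset(pair), 0.0)
               for pair in zip(order, order[1:]))

def schedule(running, history):
    """Reorder the running threads plus the predicted incoming app to
    minimize total adjacent-pair interference power."""
    predicted = predict_next(history, running[-1])
    pool = running + ([predicted] if predicted and predicted not in running else [])
    return min(permutations(pool), key=interference_cost)

history = ["mail", "browser", "game", "mail", "browser", "game", "mail"]
order = schedule(["music", "browser"], history)
print(order, interference_cost(order))  # prints ('browser', 'music', 'game') 3.5
```

A real scheduler would of course avoid the factorial search over permutations and work from measured performance events rather than a fixed table; the sketch only shows how a periodic history makes the next application, and hence the low-interference ordering, predictable.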
Appears in Collections: EEE Theses
Updated on Nov 23, 2020
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.