Title: Core utility development for Hysia performance optimization
Authors: Zhou, Shengsheng
Keywords: Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Issue Date: 2019
Source: Zhou, S. (2019). Core utility development for Hysia performance optimization. Master's thesis, Nanyang Technological University, Singapore.

Abstract: Serving machine learning requests with trained models plays an increasingly important role as machine learning models advance and are commercialized. Model serving is also the dominant cost in production-scale machine learning systems, which involve versatile prediction pipelines, complex models, diverse machine learning frameworks, and heterogeneous hardware such as CPUs, GPUs and TPUs. Serving machine learning pipelines with low latency is key to a good user experience and to the success of an e-commerce product. This becomes more challenging for interactive machine learning workloads because of the complex constituents of model serving, i.e. models, frameworks and hardware accelerators; accessibility, cost and latency are especially difficult to address. Hysia is a multi-modal machine learning model serving framework developed by our team to remedy the challenges introduced by the complex interactions between models and hardware. The Hysia framework addresses accessibility, cost and latency by providing easy-to-use application interfaces and an intelligent controller that jointly optimizes performance, balancing the trade-off between resource consumption and prediction accuracy. This thesis focuses on the design, implementation and benchmarking of the core utility of the Hysia framework, which provides profiling information about models and status information about system resources in order to optimize machine learning pipelines. The core utility plays a significant role in Hysia's joint system performance optimization.

The core utility consists of a model profiler and a resource monitor. The model profiler collects statistics about machine learning models, such as parameter counts, memory usage and inference latency; its design unifies the differences among machine learning platforms and ensures extensibility. The resource monitor tracks system resource status, such as memory and GPU utilization, and can retrieve rich system statistics. Both components are designed in a distributed fashion to improve efficiency and to support distributed computation.

URI: https://hdl.handle.net/10356/82835
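To make the profiler's role concrete, the kind of latency statistic it reports can be sketched in a few lines of Python. This is an illustration only, not Hysia's actual API: `profile_latency` and `fake_inference` are hypothetical names, and a real profiler would additionally gather parameter counts and memory usage per framework.

```python
import statistics
import time
from typing import Callable, Dict


def profile_latency(infer: Callable[[], None],
                    warmup: int = 3,
                    runs: int = 20) -> Dict[str, float]:
    """Measure inference latency of a model callable.

    Runs a few warmup calls (to exclude one-time setup cost such as
    lazy initialization), then times `runs` invocations and reports
    mean and p95 latency in milliseconds.
    """
    for _ in range(warmup):
        infer()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "mean_ms": statistics.mean(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }


# Hypothetical stand-in for a trained model's forward pass.
def fake_inference() -> None:
    sum(i * i for i in range(10_000))


if __name__ == "__main__":
    print(profile_latency(fake_inference))
```

A resource monitor would typically complement this by periodically polling host memory and GPU utilization (e.g. via NVML for GPUs), so the controller can weigh latency against resource consumption.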
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Theses
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.