Driving Timing Convergence of FPGA Designs through Machine Learning and Cloud Computing
Date of Issue: 2015-05
2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM)
School of Computer Engineering
Machine learning and cloud computing techniques can help accelerate timing closure for FPGA designs without any modification to the original RTL code. RTL is generally frozen close to the system delivery target to avoid injecting new, unforeseen bugs or significantly altering design characteristics. In these circumstances, developers trying to close timing are at the mercy of either random trials through placement seed exploration or vendor-provided design space exploration tools that run a few compilation trials with changes to the CAD tool options (or parameters). Instead, we propose evaluating multiple CAD runs in parallel on the cloud, supported by a Bayesian learning and classification framework that generates the CAD parameter combinations most likely to help attain timing closure. We maintain a database of FPGA CAD tool parameters (input) along with the associated variations in timing slack (output) to enable the learning process. A key engineering resource we use here is cheap and abundant parallelism made possible through cloud computing frameworks such as the Google Compute Engine. Across a range of open-source benchmarks, we show that learning helps improve total negative slack (TNS) scores by 10.5× (geomean) when compared to a single baseline run of Quartus 14.1 and by 7× (geomean) when compared to the Altera Quartus 14.1 Design Space Explorer (DSE).
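The exploration loop the abstract describes (maintain a database of parameter configurations and their slack outcomes, learn which parameter settings correlate with good slack, then launch the most promising candidates in parallel) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter space, the synthetic `run_cad_trial` stand-in for a cloud Quartus compilation, and the simple naive-Bayes-style log-likelihood-ratio classifier are all assumptions made for the sake of a runnable example.

```python
import math
import random

random.seed(0)

NUM_PARAMS = 8
# Hypothetical binary CAD switches; hidden weights stand in for how each
# switch affects slack (real Quartus parameters are multi-valued).
_WEIGHTS = [3.0, -1.0, 2.0, 0.0, 1.0, -2.0, 4.0, 0.0]

def run_cad_trial(config):
    """Stand-in for one cloud CAD compilation; returns a slack score."""
    return sum(w * c for w, c in zip(_WEIGHTS, config)) + random.gauss(0, 0.5)

def random_config():
    return [random.randint(0, 1) for _ in range(NUM_PARAMS)]

def llr_table(db):
    """Per-parameter log-likelihood ratios: does setting the switch appear
    more often in above-median-slack runs than in below-median ones?"""
    median = sorted(s for _, s in db)[len(db) // 2]
    good = [c for c, s in db if s >= median]
    bad = [c for c, s in db if s < median]
    table = []
    for i in range(NUM_PARAMS):
        pg = (sum(c[i] for c in good) + 1) / (len(good) + 2)  # Laplace smoothing
        pb = (sum(c[i] for c in bad) + 1) / (len(bad) + 2)
        table.append(math.log(pg / pb) - math.log((1 - pg) / (1 - pb)))
    return table

def score(config, table):
    return sum(t * c for t, c in zip(table, config))

# Seed the database with random trials, then repeatedly classify a pool of
# candidates and evaluate the top-ranked batch (in parallel on the cloud in
# the paper; sequentially here).
db = [(cfg, run_cad_trial(cfg)) for cfg in (random_config() for _ in range(16))]
for _ in range(5):
    table = llr_table(db)
    batch = sorted((random_config() for _ in range(64)),
                   key=lambda c: score(c, table), reverse=True)[:4]
    db += [(c, run_cad_trial(c)) for c in batch]

best_cfg, best_slack = max(db, key=lambda item: item[1])
```

In this toy setting the classifier steers later batches toward switches that correlate with higher slack in the database, mirroring how the framework concentrates the parallel compilation budget on the most promising parameter combinations rather than uniform random seeds.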
© 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: [http://dx.doi.org/10.1109/FCCM.2015.36].