Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/183990
Title: Approximate computing for machine learning
Authors: Syed Mohammed Mosayeeb Al Hady Zaheen
Keywords: Computer and Information Science
Issue Date: 2025
Publisher: Nanyang Technological University
Source: Syed Mohammed Mosayeeb Al Hady Zaheen (2025). Approximate computing for machine learning. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/183990
Abstract: Approximate Computing has emerged as a promising paradigm to enhance computational efficiency by introducing controlled inaccuracies in arithmetic operations. This study explores the application of static approximate adders (AAs) within a machine learning context, specifically the K-means clustering algorithm, to assess the trade-offs between clustering accuracy and computational resource savings. A total of 17 AAs, including two newly proposed designs (BPAA-LSP1 and NAA), along with a conventional accurate adder, were integrated into a K-means clustering algorithm and evaluated across four benchmark datasets. Both software simulations and hardware implementations were conducted, measuring key performance metrics such as the within-cluster sum of squares (WCSS), power, delay, and area. Results demonstrate that approximate adders maintain strong clustering performance while significantly reducing power consumption (up to 60% lower power-delay product) and area usage (up to 50% lower area-delay product). Notably, the newly proposed BPAA-LSP1 and NAA adders strike an outstanding balance between clustering accuracy and computational efficiency, achieving clustering quality comparable to that of the accurate adder. BPAA-LSP1 demonstrates reductions of 22.2% in power, 21.6% in area, and 26.3% in delay, while NAA achieves even greater efficiency, with 31.0% lower power consumption, 21.6% less area, and a 37.1% decrease in delay. These results suggest that the specific characteristics of the error generated by these AAs may be beneficial for, or have error-compensating effects in, iterative machine learning tasks. These findings indicate that approximate adders, particularly BPAA-LSP1 and NAA, could serve as viable alternatives in energy-efficient, error-tolerant machine learning applications.
URI: https://hdl.handle.net/10356/183990
Schools: College of Computing and Data Science
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: CCDS Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
---|---|---|---
Syed Zaheen Final Report Amended Submission.pdf (Restricted Access) | Post-amendment submission | 3.05 MB | Adobe PDF
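The full report is restricted, and the BPAA-LSP1 and NAA designs are not described on this page, so the sketch below only illustrates the approach the abstract outlines: a generic static approximate adder (here a lower-part OR adder, a common design in the approximate-computing literature, not one of the report's proposed adders) substituted for exact addition in the distance and WCSS accumulations of K-means. The bit width, number of approximated bits, dataset, and all function names are assumptions made for illustration, not details taken from the report.

```python
import random

WIDTH = 32                # adder bit width (assumption, not from the report)
APPROX_BITS = 4           # least-significant bits handled approximately (assumption)
MASK = (1 << WIDTH) - 1

def approx_add(a: int, b: int) -> int:
    """Generic lower-part OR adder: OR the low bits, add the high bits exactly."""
    low_mask = (1 << APPROX_BITS) - 1
    low = (a | b) & low_mask                                   # approximate lower part, carries dropped
    high = ((a >> APPROX_BITS) + (b >> APPROX_BITS)) << APPROX_BITS
    return (high | low) & MASK                                 # truncate to the adder width

def squared_distance(p, q):
    """Squared Euclidean distance accumulated with the approximate adder."""
    acc = 0
    for pi, qi in zip(p, q):
        acc = approx_add(acc, (pi - qi) ** 2)
    return acc

def kmeans(points, k, iters=20):
    """Plain Lloyd's K-means whose distance sums go through approx_add."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: squared_distance(p, centroids[c]))
            clusters[nearest].append(p)
        # Centroid update kept exact (integer mean) for simplicity.
        centroids = [
            tuple(sum(dim) // len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

def wcss(centroids, clusters):
    """Within-cluster sum of squares (WCSS), also accumulated approximately."""
    total = 0
    for c, cl in zip(centroids, clusters):
        for p in cl:
            total = approx_add(total, squared_distance(p, c))
    return total

if __name__ == "__main__":
    data = [(random.randint(0, 255), random.randint(0, 255)) for _ in range(200)]
    cents, cls = kmeans(data, k=4)
    print("approximate WCSS:", wcss(cents, cls))
```

In this sketch only the additions inside the distance and WCSS accumulations are approximated; where exactly the report inserts its 17 AAs in the software and hardware datapaths is not stated on this page.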