Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/48452
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Nur Shabrina Rusli. | |
dc.date.accessioned | 2012-04-24T03:18:05Z | |
dc.date.available | 2012-04-24T03:18:05Z | |
dc.date.copyright | 2012 | en_US |
dc.date.issued | 2012 | |
dc.identifier.uri | http://hdl.handle.net/10356/48452 | |
dc.description.abstract | Since the Human Visual System (HVS) is the ultimate receiver and appreciator of natural scenes, many visual attention models have been developed and applied to various image processing applications over the past decade. To provide insight for effective deployment in this project, we study various existing saliency detection models and different image processing techniques, and analyze the performance of the respective experiments. The state-of-the-art technology in the related area is then benchmarked. | en_US |
dc.format.extent | 54 p. | en_US |
dc.language.iso | en | en_US |
dc.rights | Nanyang Technological University | |
dc.subject | DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision | en_US |
dc.title | Perceptual image processing algorithm benchmarking | en_US |
dc.type | Final Year Project (FYP) | en_US |
dc.contributor.supervisor | Lin Weisi | en_US |
dc.contributor.school | School of Computer Engineering | en_US |
dc.description.degree | Bachelor of Engineering (Computer Engineering) | en_US |
dc.contributor.research | Centre for Multimedia and Network Technology | en_US |
item.fulltext | With Fulltext | - |
item.grantfulltext | restricted | - |
Appears in Collections: | SCSE Student Reports (FYP/IA/PA/PI) |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
SCE11-0156.pdf (Restricted Access) | | 1.76 MB | Adobe PDF | View/Open |
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.