Title: The augmented human - visual movement magnification
Authors: Tan, Ryan Jinn-En
Keywords: Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2023
Publisher: Nanyang Technological University
Source: Tan, R. J. (2023). The augmented human - visual movement magnification. Final Year Project (FYP), Nanyang Technological University, Singapore.
Project: SCSE22-0284 
Abstract: Throughout human history, the advent of tools has allowed us to extend our capabilities. Each of the five senses (touch, hearing, smell, taste and, most importantly, sight) has received countless enhancements through human ingenuity and creativity, borrowing elements from nature, science and sometimes our imagination, granting us the ability to achieve feats of ever-growing scope and ambition. This report discusses the use of an existing Learning-based Video Motion Magnification (LVMM) model that allows users to observe previously indiscernible movements, such as breathing, pulse and tiny facial motions, in video recorded with conventional camera equipment (such as a mobile phone). The magnified output is then fed into a separate deep-learning image classification model, built using Keras, trained to discern whether a person is inhaling or exhaling. This image classifier serves as a foundation for monitoring a person's respiratory cycle and could potentially be used in conjunction with existing medical devices to further expand the groundwork for non-invasive patient care.
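The second stage the abstract describes, a Keras image classifier that labels magnified frames as inhale or exhale, could be sketched roughly as below. This is an illustrative sketch only: the layer sizes, input shape and label convention are assumptions, not details taken from the report.

```python
# Minimal sketch of a binary inhale/exhale frame classifier in Keras.
# Architecture and input shape are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_breath_classifier(input_shape=(128, 128, 3)):
    """Small CNN mapping one magnified video frame to P(inhale)."""
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Rescaling(1.0 / 255),            # normalise pixel values to [0, 1]
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # assumed: inhale = 1, exhale = 0
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_breath_classifier()
# In practice the model would be trained on labelled frames emitted by the
# LVMM stage, e.g. model.fit(frames, labels, epochs=...).
probs = model.predict(np.zeros((2, 128, 128, 3), dtype=np.float32), verbose=0)
```

The untrained model simply outputs probabilities near 0.5; the point of the sketch is the shape of the pipeline, where each magnified frame becomes one training example for a two-class classifier.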
Schools: School of Computer Science and Engineering 
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP Report.pdf (Restricted Access)
Size: 1.26 MB
Format: Adobe PDF

Page view(s)

Updated on Feb 21, 2024




Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.