Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/166912
Title: Designing a sustainable and robust pipeline for integrating TensorFlow motion capture models into Unity
Authors: Chua, Zeta Hui Shi
Keywords: Engineering::Computer science and engineering
Issue Date: 2023
Publisher: Nanyang Technological University
Source: Chua, Z. H. S. (2023). Designing a sustainable and robust pipeline for integrating TensorFlow motion capture models into Unity. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/166912
Abstract: With the rise of the Metaverse, driven by Facebook's attempt to build the "Horizon" Metaverse and its rebranding to "Meta", virtual avatars have gained importance as alternative digital identities. However, transitioning from 2D web and mobile interfaces to embodying 3D avatar profiles in the Metaverse poses a major user interface challenge in the shift from Web 2.0 to the unfamiliar Web 3.0, thereby hindering social interaction. To overcome these user adoption barriers, motion capture technology can be introduced into these avatars to map a user's real-time facial and body movements onto their virtual counterparts. Motion capture draws on both augmented reality and machine learning. WebGL motion capture applications are currently the most cost-effective and accessible option: VR and AR headsets can cost thousands of dollars, whereas a WebGL application runs on an ordinary laptop with a webcam. However, there is a lack of sustainable and robust motion capture libraries for building such WebGL applications, which limits how readily the current consumer market can be bridged to the Metaverse. Our solution is therefore a sustainable and robust motion capture pipeline, targeted at Unity, that lets developers easily add motion capture to Metaverse avatars and thereby improve virtual social interaction between them. The pipeline automates the transfer of TensorFlow.js-based motion capture data directly into a Unity 3D environment, reducing the friction of developing immersive and interactive Metaverse applications. (A minimal code sketch of this data flow follows the item record below.) Keywords: Motion Capture, Human Computer Interface, Unity, Web Augmented Reality, Metaverse, TensorFlow.js, PoseNet, Face Mesh, Kalidoface
URI: https://hdl.handle.net/10356/166912
Schools: School of Computer Science and Engineering
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
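As a rough illustration of the data flow the abstract describes, the sketch below captures PoseNet keypoints with TensorFlow.js in the browser and streams them to Unity over a WebSocket. This is a minimal sketch, not the report's actual pipeline: the endpoint ws://localhost:8080, the JSON message shape, and the function name streamPoseToUnity are assumptions for illustration, and a Unity-side WebSocket server that maps the keypoints onto avatar bones is assumed to exist.

```ts
// Minimal sketch (assumed setup, not the report's implementation):
// detect poses with TensorFlow.js PoseNet and push keypoints to Unity.
import '@tensorflow/tfjs-core';
import '@tensorflow/tfjs-converter';
import '@tensorflow/tfjs-backend-webgl';
import * as poseDetection from '@tensorflow-models/pose-detection';

async function streamPoseToUnity(video: HTMLVideoElement): Promise<void> {
  // PoseNet is one of the models named in the report's keywords.
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.PoseNet
  );

  // Assumed: a Unity-side WebSocket server listening on this port.
  const socket = new WebSocket('ws://localhost:8080');

  socket.addEventListener('open', () => {
    const tick = async () => {
      // Estimate 2D keypoints for the current webcam frame.
      const poses = await detector.estimatePoses(video);
      if (poses.length > 0) {
        // Send named keypoints as JSON; the Unity side is assumed to
        // deserialize this and drive the avatar's bones.
        socket.send(JSON.stringify({
          type: 'pose',
          keypoints: poses[0].keypoints.map(k => ({
            name: k.name, x: k.x, y: k.y, score: k.score,
          })),
        }));
      }
      requestAnimationFrame(tick); // repeat once per rendered frame
    };
    tick();
  });
}
```

A WebSocket is used here because it keeps per-frame latency low while remaining straightforward to consume from Unity's C# side; any comparable browser-to-engine transport would serve the same illustrative purpose.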
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
---|---|---|---
Zeta FYP Report (3).pdf (Restricted Access) | Updated FYP report: improved phrasing and points; the original core of the project is unchanged | 6.23 MB | Adobe PDF
Page view(s): 240 (updated on May 7, 2025)
Download(s): 16 (updated on May 7, 2025)
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.