Title: Improving self-supervision in video representation learning
Authors: Liu, Hualin
Keywords: Engineering::Computer science and engineering
Issue Date: 2021
Publisher: Nanyang Technological University
Source: Liu, H. (2021). Improving self-supervision in video representation learning. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/152209
Abstract: With the rapid advancement of deep learning techniques in computer vision, researchers have achieved high performance in video-related downstream tasks such as action classification and action detection. However, a pressing issue in this field is the scarcity of labeled data. A video contains hundreds of frames, so manually collecting and labeling a large video dataset takes a daunting effort. There are two promising directions to tackle this problem: self-supervised learning and semi-supervised learning. In our research, we focus on improving self-supervised video representation learning methods. Current methods based on instance discrimination tasks suffer from a major limitation: semantically similar samples are treated as negatives and their representations are forced apart. To address this limitation, we propose smooth contrastive learning with a weak teacher, where we employ a teacher model to mine additional supervisory signals. Specifically, the teacher model computes a similarity distribution over weakly-augmented negative samples and uses it as an artificial label to smooth the one-hot label. The student is trained on strongly-augmented samples using the smoothed label. We evaluate the learned representation on action recognition and video retrieval tasks. The proposed Weak Teacher outperforms the baseline methods under the same dataset and computation budget.
URI: https://hdl.handle.net/10356/152209
DOI: 10.32657/10356/152209
Rights: This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0).
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Theses
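The label-smoothing step described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the thesis's actual implementation: the array shapes, the mixing weight `alpha`, the temperature `tau`, and the convention that index 0 holds the positive are all assumptions made here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def smoothed_targets(teacher_neg_sim, alpha=0.3, tau=0.1):
    """Mix the one-hot contrastive label with the teacher's similarity
    distribution over the negatives (hypothetical shapes/names).

    teacher_neg_sim: (B, K) teacher similarities between each anchor's
    weakly-augmented view and the K negative samples.
    Returns (B, K+1) targets, with index 0 assumed to be the positive.
    """
    B, _ = teacher_neg_sim.shape
    neg_dist = softmax(teacher_neg_sim / tau, axis=-1)  # teacher's "artificial label"
    return np.concatenate(
        [np.full((B, 1), 1.0 - alpha),  # positive keeps most of the mass
         alpha * neg_dist],             # remaining alpha spread over negatives
        axis=1,
    )

def student_loss(student_logits, targets):
    """Cross-entropy between the smoothed targets and the student's
    logits (B, K+1) computed on strongly-augmented samples."""
    z = student_logits - student_logits.max(axis=-1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))  # stable log-softmax
    return -(targets * log_p).sum(axis=-1).mean()
```

With `alpha = 0` this reduces to the standard one-hot InfoNCE target, so the teacher term can be read as a soft relaxation of instance discrimination rather than a different objective.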
Updated on May 15, 2022
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.