Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/150181
Title: Deepshoe: An improved Multi-Task View-invariant CNN for street-to-shop shoe retrieval
Authors: Zhan, Huijing
Shi, Boxin
Duan, Ling-Yu
Kot, Alex Chichung
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2019
Source: Zhan, H., Shi, B., Duan, L. & Kot, A. C. (2019). Deepshoe: An improved Multi-Task View-invariant CNN for street-to-shop shoe retrieval. Computer Vision and Image Understanding, 180, 23-33. https://dx.doi.org/10.1016/j.cviu.2019.01.001
Project: NRF2016NRF-NSFC001-098
Journal: Computer Vision and Image Understanding
Abstract: The difficulty of describing a shoe seen on the street in text for online shopping demands an image-based retrieval solution. We call this problem street-to-shop shoe retrieval: given a daily-life shoe image (street scenario) as the query, the goal is to find exactly the same shoe in online shop images (shop scenario). We propose an improved Multi-Task View-invariant Convolutional Neural Network (MTV-CNN+) to handle the large visual discrepancy between the two scenarios for the same shoe. We define shoe style in a novel way, through combinations of part-aware semantic shoe attributes, and develop a corresponding style identification loss. Furthermore, a new loss function is proposed to minimize the distances between images of the same shoe captured from different viewpoints. To train MTV-CNN+ efficiently, we develop an attribute-based weighting scheme on the conventional triplet loss function that puts more emphasis on hard triplets, and a three-stage process progressively selects the hard negative examples and anchor images. To validate the proposed method, we build a multi-view shoe dataset with semantic attributes (MVShoe) from daily life and online shopping websites, and investigate how different triplet loss functions affect performance. Experimental results show the advantage of MTV-CNN+ over existing approaches.
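The abstract mentions an attribute-based weighting scheme on the conventional triplet loss. The exact scheme is not given in this record; the sketch below shows only the conventional hinge-style triplet loss with a generic per-triplet weight, where `weight` and `margin` are hypothetical stand-ins for whatever attribute-derived emphasis and margin the paper actually uses.

```python
def weighted_triplet_loss(anchor, positive, negative, weight=1.0, margin=0.2):
    """Conventional triplet hinge loss with a per-triplet weight.

    `anchor`, `positive`, `negative` are feature vectors (sequences of floats).
    `weight` is a hypothetical stand-in for the paper's attribute-based
    weighting; hard triplets would receive a larger weight.
    """
    # Squared Euclidean distances from the anchor to each branch.
    d_pos = sum((a - p) ** 2 for a, p in zip(anchor, positive))
    d_neg = sum((a - n) ** 2 for a, n in zip(anchor, negative))
    # Hinge: penalize only when the negative is not far enough away.
    return weight * max(d_pos - d_neg + margin, 0.0)
```

An easy triplet (negative already far from the anchor) yields zero loss, while a hard triplet yields a positive loss scaled by its weight, so weighted hard triplets dominate the gradient during training.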
URI: https://hdl.handle.net/10356/150181
ISSN: 1077-3142
DOI: 10.1016/j.cviu.2019.01.001
Rights: © 2019 Elsevier Inc. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:EEE Journal Articles

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.