Title: A linear programming based method for joint object region matching and labeling
Authors: Wang, Junyan
Wang, Li
Chan, Kap Luk
Constable, Martin
Issue Date: 2013
Source: Wang, J., Wang, L., Chan, K. L., & Constable, M. (2013). A linear programming based method for joint object region matching and labeling. 11th Asian Conference on Computer Vision, 66-78.
Abstract: Object matching can be achieved by finding superpixels matched between the image and the object template, and can therefore be used to detect or label the object region. However, the matched superpixels are often sparsely distributed within the image domain, so even a few outlier matches can produce a significant proportion of incorrectly detected or labeled regions. Consequently, the labeled regions may be unreliable for locating, extracting, or representing the object. To address these problems, we propose to impose label priors, previously used in segmentation, on the object matching. Specifically, to label as many regions on the object as possible, we adopt the boundary-weighted smoothness prior. To suppress singular outlier matches as much as possible, we adopt the minimum description length (MDL) principle, also drawn from segmentation. We then linearize these priors and incorporate them into the linear programming (LP) formulation of matching. This gives rise to a novel, general LP model for joint object region matching and labeling, extending the scope of conventional LP-based object matching. Experimental results show that our method compares favorably to LP-based matching methods for object region labeling on a challenging dataset.
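As a rough illustration of the LP formulation the abstract refers to (not the paper's actual model, which also linearizes the smoothness and MDL priors), superpixel-to-template matching can be relaxed into a small linear program: relaxed indicator variables x[i, j] choose a template region for each superpixel, minimizing a hypothetical dissimilarity cost. The cost matrix and sizes below are made up for the sketch.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical cost matrix: cost[i, j] = dissimilarity between image
# superpixel i and template region j (3 superpixels, 3 template regions).
cost = np.array([
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.9, 0.3],
])
n, m = cost.shape

# Relaxed indicators x[i, j] in [0, 1]: superpixel i matched to region j.
# Minimize total matching cost subject to each superpixel receiving
# exactly one label: sum_j x[i, j] = 1 for every i.
c = cost.ravel()
A_eq = np.zeros((n, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0
b_eq = np.ones(n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0), method="highs")
assignment = res.x.reshape(n, m).argmax(axis=1)
print(assignment)  # each superpixel picks its cheapest template region
```

In the paper's setting, pairwise smoothness terms between neighboring superpixels (weighted by boundary strength) and an MDL penalty on the number of labels used would be linearized and appended to this objective and constraint set; this sketch shows only the bare matching core.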
DOI: 10.1007/978-3-642-37444-9_6
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections: EEE Conference Papers


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.