Title: Sampling-based approaches for image and video matting
Authors: Ehsan Shahrian Varnousfaderani
Keywords: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Image processing and computer vision
Issue Date: 2013
Source: Ehsan, S. V. (2013). Sampling-based approaches for image and video matting. Doctoral thesis, Nanyang Technological University, Singapore.

Abstract:
Image matting methods play a fundamental role in image and video editing operations. They can be broadly categorized into $\alpha$-propagation and color-sampling based methods. In $\alpha$-propagation based methods, color statistics are leveraged to correlate neighboring pixels and propagate alpha values from known regions toward unknown ones. In color-sampling based methods, alpha values are estimated by finding the foreground ($F$) and background ($B$) samples that best represent the true colors of unknown pixels. Matte quality degrades in images whose $F$ and $B$ regions have overlapping color distributions, because the color feature alone cannot discriminate between $F$ and $B$ samples. Alpha-propagation based methods may also fail when the propagation of alpha is blocked by strong edges. Furthermore, color-sampling based methods choose samples located around the boundaries of the $F$ and $B$ regions and miss samples inside those regions. Beyond these drawbacks of image matting, video matting methods suffer from a lack of temporal coherency: conventional video matting methods apply image matting to each video frame individually, which introduces temporal jitter in the resulting matte videos. The objectives of this research are to develop new image matting methods that address these drawbacks and to extract temporally coherent mattes from video sequences. We achieve these objectives through the following contributions.
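Sampling-based matting, as described above, estimates alpha from a chosen foreground/background pair under the standard compositing model $C = \alpha F + (1-\alpha)B$. A minimal sketch of that estimation step (function and variable names are illustrative, not taken from the thesis):

```python
import numpy as np

def estimate_alpha(c, f, b):
    """Estimate alpha for an observed color c given candidate foreground f
    and background b samples, by projecting c onto the line through b and f
    (compositing model: c = alpha*f + (1 - alpha)*b)."""
    c, f, b = np.asarray(c, float), np.asarray(f, float), np.asarray(b, float)
    d = f - b
    denom = d.dot(d)
    if denom < 1e-12:          # f and b nearly identical: alpha is ambiguous
        return 0.5
    alpha = (c - b).dot(d) / denom
    return float(np.clip(alpha, 0.0, 1.0))

# A pixel exactly halfway between a pure background and foreground color:
print(estimate_alpha([0.5, 0.5, 0.5], [1, 1, 1], [0, 0, 0]))  # 0.5
```

The projection gives the least-squares alpha for that pair; the methods in the thesis additionally score many candidate pairs and refine the result, which this toy version omits.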
First, texture is proposed as a feature complementary to color for discriminating between $F$ and $B$ regions when their color distributions overlap. The texture feature is extracted from local chromatic and structural information at every pixel so as to increase the distinction between foreground and background regions. Two new methods are proposed that combine texture with color at different stages of the matting process. The first generates an initial matte based on the texture feature; the initial matte is then combined with color statistics to improve the matte by finding the best $(F, B)$ pair for each unknown pixel. This can be considered a late combination of texture and color. In the second approach, the contributions of the texture and color features are weighted in the early stages of matting to find the best $(F, B)$ pairs; the weighting is determined automatically by analyzing the content of the image. The effectiveness of the proposed methods is compared quantitatively as well as qualitatively with other matting methods on a benchmark dataset and on another set of complex images. The evaluations show that the proposed methods perform best among state-of-the-art matting methods and that combining color and texture information leads to improved mattes. Second, a new color-sampling based matting method is proposed that uses a more comprehensive and representative set of samples so as not to miss the true samples. This is accomplished by expanding the sampling range for pixels farther from the foreground or background boundary and by ensuring that samples from each color distribution are included. A new objective function forces the selection of foreground and background samples that are generated from well-separated distributions.
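The sample-selection step described above can be illustrated by scoring each candidate $(F, B)$ pair by how well its composite reconstructs the observed color. This hedged sketch uses only that chromatic-distortion term; the thesis's actual objective also incorporates texture and the separation of the sample distributions:

```python
import numpy as np

def best_pair(c, fg_samples, bg_samples):
    """Pick the (f, b) candidate pair whose composite alpha*f + (1-alpha)*b
    best reconstructs the observed color c (minimum chromatic distortion).
    Returns (f, b, alpha) for the winning pair."""
    c = np.asarray(c, float)
    best, best_cost = None, np.inf
    for f in np.asarray(fg_samples, float):
        for b in np.asarray(bg_samples, float):
            d = f - b
            denom = d.dot(d)
            alpha = 0.5 if denom < 1e-12 else float(np.clip((c - b).dot(d) / denom, 0, 1))
            cost = np.linalg.norm(c - (alpha * f + (1 - alpha) * b))
            if cost < best_cost:
                best, best_cost = (f, b, alpha), cost
    return best

f, b, a = best_pair([0.5, 0.1, 0.1],
                    [[1, 0, 0], [0, 1, 0]],    # foreground candidates
                    [[0, 0, 0], [0, 0, 1]])    # background candidates
# picks the red foreground with the black background, alpha = 0.5
```

Exhaustive pairwise search is quadratic in the number of samples; the comprehensive sampling the thesis proposes makes an efficient search over the expanded sample sets important in practice.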
The proposed color-sampling based method solves the problem of missing true samples and achieves a good rank among matting methods as reported by the alpha matting evaluation website. The idea of sampling from inside the $F$ and $B$ regions is also used to build comprehensive sample sets for the earlier proposed method that uses an early combination of color and texture. The extended method solves both the overlapping color distribution problem and the missing true sample problem. Experimental results on benchmark and synthesized image sets show that the extended method achieves state-of-the-art performance. Third, a new color- and texture-sampling based video matting method is proposed that uses color, texture, and temporal information to estimate temporally consistent video mattes. Temporal information is propagated using optical flow, and a comprehensive set of samples is collected to cover highly correlated local as well as temporally global samples. A temporal Laplacian refinement is applied to the estimated mattes to guarantee their temporal coherency. Qualitative evaluations demonstrate the performance of the proposed method in estimating spatially accurate and temporally coherent video mattes.

URI: http://hdl.handle.net/10356/55279
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Theses
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.