Please use this identifier to cite or link to this item:
Title: Pattern-based video coding with dynamic background modeling

Authors: Paul, Manoranjan
         Lau, Chiew Tong

Keywords: DRNTU::Engineering::Computer science and engineering

Issue Date: 2013

Source: Paul, M., Lin, W., Lau, C. T., & Lee, B.-S. (2013). Pattern-based video coding with dynamic background modeling. EURASIP Journal on Advances in Signal Processing, 2013(138), 1-15.

Series/Report no.: EURASIP Journal on Advances in Signal Processing

Abstract: The existing video coding standard H.264 cannot provide the expected rate-distortion (RD) performance for macroblocks (MBs) that contain both moving objects and static background, or for MBs with uncovered background (previously occluded areas). The pattern-based video coding (PVC) technique partially addresses the first problem by separating and encoding the moving area and skipping the background area at the block level using binary pattern templates. However, existing PVC schemes cannot outperform H.264 by a significant margin at high bit rates, because only a small number of MBs are classified under the pattern mode. Moreover, neither H.264 nor the PVC scheme provides the expected RD performance for uncovered background areas, because the reference areas are unavailable in the existing approaches. In this paper, we propose a new PVC technique that uses the most common frame in a scene (McFIS) as a reference frame to overcome these problems. Apart from using the McFIS as a reference frame, we also introduce a content-dependent pattern generation strategy for better RD performance. The experimental results confirm the superiority of the proposed schemes over the existing PVC and McFIS-based methods, achieving significant image-quality gains over a wide range of bit rates.

URI: https://hdl.handle.net/10356/101383

ISSN: 1687-6180

DOI: 10.1186/1687-6180-2013-138

Rights: © 2013 The Authors, licensee Springer. This paper was published in EURASIP Journal on Advances in Signal Processing and is made available as an electronic reprint (preprint) with permission of the authors. The paper can be found at the following official DOI: http://dx.doi.org/10.1186/1687-6180-2013-138. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper is prohibited and is subject to penalties under law.

Fulltext Permission: open

Fulltext Availability: With Fulltext
Appears in Collections: SCSE Journal Articles
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.