Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/66339
Title: Shading-based high-quality 3D object reconstruction
Authors: Xu, Di
Keywords: DRNTU::Engineering
Issue Date: 2016
Abstract: 3D object reconstruction is the process of reconstructing real objects into the digital world. Reconstructing the shape of a 3D object from multi-view images under unknown, general illumination is a fundamental problem in computer vision. Extensive research has been done in this area and many techniques have been developed. Though the state of the art has achieved great success, many methods still carry underlying requirements or heavy assumptions that limit their applicability in practice. This thesis investigates both shape refinement and the related lighting recovery. We approach the problem with different focuses: quality, robustness and efficiency. Our goal is to design and develop effective algorithms that solve long-standing problems in 3D reconstruction. Firstly, we consider the problem of high-quality 3D reconstruction under unknown illumination. Making no assumption on object albedos renders the problem challenging, especially when recovering surface details. We present a total variation (TV) based approach for recovering surface details using shading and multi-view stereo (MVS), with the lighting modeled as overall illumination vectors. The approach rests on two important, previously overlooked observations: (1) the illumination over the surface of an object tends to be piecewise smooth, and (2) recovering surface orientation alone is not sufficient for reconstructing geometry. We therefore introduce TV to regularize the lighting and use the visual hull to constrain part of the vertices. The reconstruction is formulated as a constrained TV-minimization problem that treats the shape and lighting as unknowns simultaneously, and an augmented Lagrangian method is proposed to solve it quickly. Our approach recovers high-quality surface details even when starting from a coarse MVS result.
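To give a rough flavor of the augmented Lagrangian strategy for TV minimization mentioned above, the sketch below solves a deliberately simplified 1D TV denoising problem with an ADMM-style splitting. The function name, the 1D setup, and all parameter values are our own illustration, not the thesis's actual formulation (which couples shape and lighting unknowns):

```python
import numpy as np

def tv_denoise_admm(b, lam=1.0, rho=1.0, iters=200):
    """1D TV denoising: min 0.5*||x - b||^2 + lam*||Dx||_1,
    solved by an augmented Lagrangian / ADMM splitting.
    Illustrative stand-in only, not the thesis's algorithm."""
    n = len(b)
    # forward-difference operator D, shape (n-1, n): (Dx)_i = x[i+1] - x[i]
    D = np.diff(np.eye(n), axis=0)
    A = np.eye(n) + rho * D.T @ D      # system matrix of the x-update
    z = np.zeros(n - 1)                # split variable z ~ Dx
    u = np.zeros(n - 1)                # scaled dual variable
    x = b.copy()
    for _ in range(iters):
        # x-update: quadratic subproblem, solved by a linear system
        x = np.linalg.solve(A, b + rho * D.T @ (z - u))
        # z-update: soft-thresholding (proximal operator of the l1 norm)
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)
        # dual ascent on the constraint Dx = z
        u += D @ x - z
    return x

# small demo: two noisy plateaus are flattened, the jump is preserved
noisy = np.array([0.0, 0.1, -0.1, 0.0, 5.0, 5.1, 4.9, 5.0])
print(tv_denoise_admm(noisy, lam=0.5))
```

The splitting moves the non-smooth TV term into a separate variable with a cheap closed-form update, which is what makes the augmented Lagrangian approach fast in this setting.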
Secondly, since existing shape-from-shading methods usually assume Lambertian surfaces, we extend the algorithm to be robust to non-Lambertian surfaces as well. Exploiting the independence of diffuse and specular reflectance, we introduce a specular intensity variable and handle the two types of reflectance separately. Unlike existing works, the proposed algorithm requires no prior knowledge or special hardware setup and assumes only that the light sources are fixed and distant. By iteratively solving for the lighting, specular intensity and geometry, the extended framework effectively deals with highlight effects that cannot be handled by traditional methods. Even for challenging non-Lambertian objects, our algorithm is able to remove the highlights and recover surface details robustly.
Thirdly, to improve the efficiency of current work, we propose a novel mesh refinement framework that optimizes face normals instead of vertex normals. Traditional vertex-based methods usually have high computational cost and thus suffer from long processing times and low-density output. Our framework focuses on the mesh face normals and removes the complicated non-linear computations of traditional methods. As a result, the maximum mesh size our framework can handle is substantially larger. As denser meshes are usually favored by common evaluation criteria, results generated by the proposed framework show improved performance with greatly reduced runtime.
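As a toy illustration of treating diffuse and specular reflectance separately, the sketch below fits a single distant light to synthetic Lambertian shading while rejecting highlight pixels, then reads the positive residual as specular intensity. The outlier-rejection loop, the threshold, and all names are our own simplification and stand in for the thesis's actual iterative optimization:

```python
import numpy as np

def separate_specular(I, N, iters=5, thresh=0.5):
    """Toy diffuse/specular separation under one distant light:
    I ~ N @ l + s with s >= 0. Alternately fit the lighting l on
    pixels consistent with the Lambertian model, then mark large
    positive residuals as specular. Illustrative sketch only."""
    mask = np.ones(len(I), dtype=bool)
    l = np.zeros(3)
    for _ in range(iters):
        # least-squares lighting fit on the current 'diffuse' pixels
        l, *_ = np.linalg.lstsq(N[mask], I[mask], rcond=None)
        resid = I - N @ l
        mask = resid < thresh          # large residuals = highlights
    s = np.clip(I - N @ l, 0.0, None)  # non-negative specular intensity
    return l, s

# demo: synthetic lit normals with a few injected highlights
rng = np.random.default_rng(0)
l_true = np.array([0.3, 0.2, 0.9])
N = rng.normal(size=(600, 3))
N /= np.linalg.norm(N, axis=1, keepdims=True)
N = N[N @ l_true > 0.1]   # keep only normals facing the light
I = N @ l_true            # Lambertian shading, unit albedo
I[:10] += 2.0             # inject specular highlights on 10 pixels
l_est, s = separate_specular(I, N)
```

Because diffuse shading is low-frequency while highlights are sparse, strong residuals can be attributed to the specular term; the thesis's framework applies this separation jointly with lighting and geometry updates.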
URI: http://hdl.handle.net/10356/66339
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Theses

Files in This Item:
File: Thesis_final_xudi.pdf
Description: Restricted Access
Size: 20.87 MB
Format: Adobe PDF

Page view(s): 235 (updated on Oct 18, 2021)
Download(s): 11 (updated on Oct 18, 2021)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.