Date Added: Jul 2009
This paper presents a novel multi-view stereo method designed for image-based rendering that generates piecewise planar depth maps from an unordered collection of photographs. First, a discrete set of 3D plane candidates is computed from a sparse point cloud of the scene (recovered by structure from motion) and from sparse 3D line segments reconstructed across multiple views. Next, evidence is accumulated for each plane using 3D point and line incidence and photo-consistency cues. Finally, a piecewise planar depth map is recovered for each image by solving a multi-label Markov Random Field (MRF) optimization problem with graph cuts, where each label corresponds to one of the candidate planes. The energy formulation is novel in that it exploits high-level scene information (the plane hypotheses and their accumulated evidence) rather than per-pixel depth labels alone.
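The final step above, assigning one plane label per pixel by minimizing a data term plus a smoothness term, can be sketched in miniature. This is a simplified, hypothetical illustration, not the paper's implementation: the data cost stands in for the accumulated photo-consistency and incidence evidence, the smoothness term is a plain Potts penalty, and iterated conditional modes (ICM) substitutes for the graph-cut (alpha-expansion) solver the paper uses.

```python
import numpy as np

def mrf_energy(labels, data_cost, smooth_weight):
    """Energy of a plane labeling: per-pixel data cost (stand-in for
    photo-consistency / incidence evidence) plus a Potts smoothness
    penalty on 4-connected neighbors with differing labels."""
    h, w = labels.shape
    e = data_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    e += smooth_weight * np.count_nonzero(labels[:, 1:] != labels[:, :-1])
    e += smooth_weight * np.count_nonzero(labels[1:, :] != labels[:-1, :])
    return float(e)

def icm_plane_labeling(data_cost, smooth_weight, iters=5):
    """Greedy per-pixel relabeling (ICM), a simple local stand-in for
    the multi-label graph-cut optimization described in the paper.
    data_cost has shape (H, W, num_planes)."""
    h, w, num_planes = data_cost.shape
    labels = data_cost.argmin(axis=2)  # start at per-pixel best plane
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best_label, best_e = labels[y, x], np.inf
                for lab in range(num_planes):
                    e = data_cost[y, x, lab]
                    # add Potts penalty for each disagreeing neighbor
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != lab:
                            e += smooth_weight
                    if e < best_e:
                        best_label, best_e = lab, e
                labels[y, x] = best_label
    return labels
```

Each ICM sweep never increases the energy, so the result is at least as good as the independent per-pixel minimum; graph cuts, by contrast, can escape the local minima ICM gets stuck in, which is why the paper relies on them.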