A Generative Model for 3D Urban Scene Understanding From Movable Platforms
3D scene understanding is key to the success of applications such as autonomous driving and robot navigation. However, existing approaches either produce only a limited level of understanding, e.g., segmentation or object detection, or are not accurate enough for these applications, e.g., 3D pop-ups. In this paper, the authors propose a principled generative model of 3D urban scenes that takes into account dependencies between static and dynamic features. They derive a reversible-jump Markov chain Monte Carlo (MCMC) scheme that infers the geometric (e.g., street orientation) and topological (e.g., number of intersecting streets) properties of the scene layout, as well as the semantic activities occurring in the scene, e.g., traffic situations at an intersection.
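The reversible-jump scheme must explore hypotheses of varying dimension, since the number of intersecting streets is itself unknown. The toy sketch below (an illustrative assumption, not the paper's actual model or sampler) shows the core mechanics on a 1D analogue: given noisy observed edge orientations, birth and death moves jump between models with different numbers of streets `k`, while within-model moves refine each street's orientation. The data, priors, and proposal probabilities are all made up for illustration.

```python
import math
import random

# Hypothetical toy data: noisy edge orientations (radians) generated by
# two underlying "streets" near 0.3 and 1.6. Not from the paper.
DATA = [0.28, 0.33, 0.31, 1.58, 1.62, 1.61, 0.30, 1.59]
SIGMA = 0.1    # assumed observation noise std
K_MAX = 4      # assumed maximum number of streets
LAMBDA = 2.0   # assumed geometric complexity penalty on k

def log_lik(thetas):
    # Each observation is explained by the nearest hypothesized orientation.
    # The Gaussian normalization is constant across models and cancels.
    return sum(-0.5 * (min(abs(y - t) for t in thetas) / SIGMA) ** 2
               for y in DATA)

def rjmcmc(iters=8000, burn=2000, seed=0):
    random.seed(seed)
    thetas = [random.uniform(0.0, math.pi)]   # start with a single street
    counts = [0] * (K_MAX + 1)                # posterior visit counts over k
    for it in range(iters):
        k, u = len(thetas), random.random()
        if u < 0.5:
            # Within-model random-walk update on one orientation
            # (wrapped modulo pi, so the proposal is symmetric on the circle).
            j = random.randrange(k)
            prop = thetas[:]
            prop[j] = (prop[j] + random.gauss(0.0, 0.05)) % math.pi
            if math.log(random.random()) < log_lik(prop) - log_lik(thetas):
                thetas = prop
        elif u < 0.75 and k < K_MAX:
            # Birth move: draw a new street from its uniform prior. The prior
            # and proposal densities of the new angle cancel and the birth map
            # has unit Jacobian, leaving the likelihood ratio, the complexity
            # penalty, and the death-selection term 1/(k+1).
            prop = thetas + [random.uniform(0.0, math.pi)]
            log_a = log_lik(prop) - log_lik(thetas) - LAMBDA - math.log(k + 1)
            if math.log(random.random()) < log_a:
                thetas = prop
        elif u >= 0.75 and k > 1:
            # Death move: remove a uniformly chosen street (reverse of birth).
            j = random.randrange(k)
            prop = thetas[:j] + thetas[j + 1:]
            log_a = log_lik(prop) - log_lik(thetas) + LAMBDA + math.log(k)
            if math.log(random.random()) < log_a:
                thetas = prop
        if it >= burn:
            counts[len(thetas)] += 1
    return counts

counts = rjmcmc()
k_map = max(range(1, K_MAX + 1), key=lambda k: counts[k])
print("posterior counts over k:", counts, "| MAP k =", k_map)
```

With two well-separated orientation clusters in the toy data, the chain concentrates on k = 2: death moves quickly prune redundant streets, while the one-street model is ruled out by its poor likelihood. The paper's actual sampler operates on a far richer scene model, but the same birth/death bookkeeping governs its topology changes.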