Panoramic Video from Unstructured Camera Arrays
Federico Perazzi (Disney Research Zurich)
Alexander Sorkine-Hornung (Disney Research Zurich)
Henning Zimmer (Disney Research Zurich)
Peter Kaufmann (Disney Research Zurich)
Oliver Wang (Disney Research Zurich)
Scott Watson (Walt Disney Imagineering)
Markus Gross (Disney Research Zurich)
May 8, 2015
We describe an algorithm for generating panoramic video from unstructured camera arrays. Artifact-free panorama stitching is impeded by parallax between input views. Common strategies such as multi-level blending or minimum-energy seams produce seamless results on quasi-static input. On video input, however, these approaches introduce noticeable visual artifacts due to a lack of global temporal and spatial coherence. In this paper we extend the basic concept of local warping for parallax removal. First, we introduce an error measure with increased sensitivity to stitching artifacts in regions with pronounced structure. Using this measure, our method efficiently finds an optimal ordering of pairwise warps for robust stitching with minimal parallax artifacts. Weighted extrapolation of warps in non-overlapping regions ensures temporal stability while avoiding visual discontinuities around transitions between views. Remaining global deformation introduced by the warps is spread over the entire panorama domain using constrained relaxation, while staying as close as possible to the original input views. In combination, these contributions form the first system for spatiotemporally stable panoramic video stitching from unstructured camera-array input.
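The abstract mentions finding an optimal ordering of pairwise warps driven by a stitching-error measure. As a loose illustration of that idea only (not the paper's actual optimization), a greedy ordering that seeds with the lowest-error pair and then repeatedly attaches the view whose cheapest warp onto the already-stitched set is smallest could be sketched as follows; the function name, data layout, and error scores are all hypothetical:

```python
def order_pairwise_warps(views, error):
    """Greedily order views for stitching by pairwise error.

    views -- iterable of view identifiers (hypothetical labels)
    error -- dict mapping frozenset({a, b}) -> stitching-error score,
             lower meaning the pair stitches with fewer artifacts
    """
    remaining = set(views)
    # Seed with the pair having the globally lowest stitching error.
    best_pair = min(error, key=error.get)
    order = list(best_pair)
    remaining -= set(best_pair)
    while remaining:
        # Attach the view whose cheapest warp to any already-stitched
        # view has the lowest error.
        v = min(
            remaining,
            key=lambda v: min(error[frozenset((v, s))] for s in order),
        )
        order.append(v)
        remaining.remove(v)
    return order
```

A toy run with three views and made-up error scores: if the B-C overlap has the lowest error, B and C are stitched first and A is attached last. The real method additionally enforces temporal stability across frames, which this per-frame sketch ignores.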
Download: "Panoramic Video from Unstructured Camera Arrays" [pdf, 54.33 MB]