Multi-Perspective Stereoscopy from Light Fields

Project Members

Changil Kim (Disney Research Zurich)
Simon Heinzle (Disney Research Zurich)
Wojciech Matusik (Disney Research Zurich)
Alexander Sorkine-Hornung (Disney Research Zurich)


Three-dimensional stereoscopic television, movies, and games have been gaining popularity both within the entertainment industry and among consumers. However, creating convincing yet perceptually pleasing stereoscopic content remains difficult: post-processing tools for stereo are still underdeveloped, and one often has to resort to traditional monoscopic tools and workflows, which are generally ill-suited for stereo-specific issues.

The main cue responsible for stereoscopic scene perception is binocular parallax (or binocular disparity), and tools for manipulating binocular parallax are therefore extremely important. One of the most common methods for controlling the amount of binocular parallax is to set the baseline of the two cameras prior to acquisition. However, the range of admissible baselines is quite limited, because most scenes exhibit more disparity than humans can tolerate when viewing the content on a stereoscopic display. Reducing the baseline decreases the amount of binocular disparity, but it also causes scene elements to appear overly flat. The second, more sophisticated approach to disparity control remaps image disparities (or, equivalently, the depth of scene elements) and then re-synthesizes new images. This approach has considerable disadvantages as well: for content captured with stereoscopic camera rigs, it typically requires accurate disparity computation and hole filling of scene elements that become visible in the re-synthesized views.
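To make the remapping approach concrete, the sketch below shows a minimal linear disparity remapping operator in Python/NumPy: it compresses a per-pixel disparity map into a target comfort range before new views would be re-synthesized. The function name, the target range, and the array-based formulation are illustrative assumptions and not the operator used in the published method.

```python
import numpy as np

def remap_disparities_linear(disparity, target_min=-10.0, target_max=30.0):
    """Linearly compress a disparity map (in pixels) into a target comfort range.

    `target_min` / `target_max` are hypothetical display-dependent limits,
    not values prescribed by this project.
    """
    d_min, d_max = float(disparity.min()), float(disparity.max())
    if np.isclose(d_max, d_min):
        # Degenerate case: scene is flat, map everything to the range center.
        return np.full_like(disparity, 0.5 * (target_min + target_max))
    t = (disparity - d_min) / (d_max - d_min)          # normalize to [0, 1]
    return target_min + t * (target_max - target_min)  # rescale to target range
```

A nonlinear mapping (e.g., compressing large disparities more strongly than small ones) can be substituted for the linear rescaling without changing the surrounding workflow.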

In this project we propose a novel concept for stereoscopic post-production that resolves these issues. The main contribution is a framework for creating stereoscopic images with accurate and flexible per-pixel control over the resulting image disparities. Our framework is based on the concept of 3D light fields, assembled from a dense set of perspective images. While each perspective image corresponds to a planar cut through the light field, our approach defines each stereoscopic image pair as a pair of general cuts through this data structure, i.e., each output image is assembled from potentially many perspective images. We show how such multi-perspective cuts can be used to compute stereoscopic output images that satisfy an arbitrary set of goal disparities. These goal disparities can be defined either automatically, by a disparity remapping operator, or manually by the user for artistic control and effects.
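The sketch below illustrates what a cut through a 3D light field means in practice, assuming the light field is stored as a dense set of views captured along a horizontal baseline. A constant cut reproduces an ordinary perspective image, while a spatially varying cut assembles a multi-perspective image from many input views. The optimization that selects the cut so that the output satisfies the goal disparities, which is the core of the paper, is not shown; the function name and array layout are assumptions for illustration only.

```python
import numpy as np

def render_multiperspective(light_field, cut):
    """Assemble a multi-perspective image from a 3D light field.

    light_field : array of shape (S, H, W, 3) -- S views captured along a
                  dense horizontal camera sweep.
    cut         : array of shape (H, W) with values in [0, S-1] giving, for
                  each output pixel, the (possibly fractional) camera index
                  to sample. A constant cut yields an ordinary perspective
                  image; a varying cut blends many perspectives.
    """
    S, H, W, _ = light_field.shape
    s0 = np.clip(np.floor(cut).astype(int), 0, S - 1)
    s1 = np.clip(s0 + 1, 0, S - 1)
    w = (cut - s0)[..., None]                      # fractional part for blending
    rows = np.arange(H)[:, None]
    cols = np.arange(W)[None, :]
    # Linearly interpolate between the two nearest captured views per pixel.
    return (1 - w) * light_field[s0, rows, cols] + w * light_field[s1, rows, cols]
```

For example, passing a constant cut `np.full((H, W), s)` reproduces the s-th input view, whereas a cut that varies across the image shifts the effective viewpoint per pixel; choosing two such cuts (one per eye) is what gives per-pixel control over the resulting stereoscopic disparities.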

In summary, our proposed concept and formulation provide a novel, general framework that leverages the power and flexibility of light fields for stereoscopic content processing and optimization.

Publications


Memory Efficient Stereoscopy from Light Fields
December 8, 2014
International Conference on 3D Vision (3DV) 2014
Paper File [pdf, 28.51 MB]


Multi-Perspective Stereoscopy from Light Fields
December 1, 2011
ACM SIGGRAPH Asia 2011
Paper File [pdf, 15.97 MB]

Copyright Notice

The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.