Computer Graphics

Disney Research has a strong competency in computer graphics. Our entertainment businesses provide diverse target applications for our pioneering work. Because of this, we achieve a rare level of cross-fertilization by juxtaposing real-time algorithms for the game studios with high-end techniques for the movie studios, achieving speed and directability in physical simulation, spanning visual styles from photorealistic to artistic, and blurring the boundaries between computer graphics and materials science.


(in alphabetical order)

A Programmable System for Artistic Volumetric Lighting
We present a method for generating art-directable volumetric effects, ranging from physically-accurate to non-physical results. Our system mimics the way experienced artists think about volumetric effects by using an intuitive lighting primitive, and decoupling the modeling and shading of this primitive. To accomplish this, we generalize the physically-based photon beams method to allow arbitrarily programmable simulation and shading phases. This provides an intuitive design space for artists to rapidly explore a wide range of physically-based as well as plausible, but exaggerated, volumetric effects. We integrate our approach into a real-world production pipeline and couple our volumetric effects to surface shading.

Animating Non-Humanoid Characters with Human Motion Data
In this work, we present a method for generating animations of non-humanoid characters from human motion capture data. Characters considered in this work have proportion and/or topology significantly different from humans, but are expected to convey expressions and emotions through body language that are understandable to human viewers. Keyframing is most commonly used to animate such characters. Our method provides an alternative for animating non-humanoid characters that leverages motion data from a human subject performing in the style of the target character. The method consists of a statistical mapping function learned from a small set of corresponding key poses, and a physics-based optimization process to improve the physical realism. We demonstrate our approach on three characters and a variety of motions with emotional expressions.
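The statistical mapping step can be illustrated with a deliberately simplified stand-in: a least-squares affine map learned from a handful of corresponding key poses. The paper's actual mapping is more sophisticated; this toy (with hypothetical names, assuming NumPy) only conveys the flavor of learning a pose-to-pose function from sparse correspondences.

```python
import numpy as np

def learn_pose_mapping(human_keys, char_keys):
    """Least-squares affine map from human poses (rows) to character
    poses, learned from a few corresponding key poses. A simplified
    stand-in for the paper's statistical mapping function."""
    # append a column of ones so the learned map is affine, not just linear
    H = np.hstack([human_keys, np.ones((human_keys.shape[0], 1))])
    W, *_ = np.linalg.lstsq(H, char_keys, rcond=None)
    return W

def retarget(W, human_pose):
    """Map a new human pose through the learned affine map."""
    return np.append(human_pose, 1.0) @ W
```

Given enough well-spread key poses, the map reproduces any affine relationship between the two pose spaces exactly on new inputs.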

Artist Friendly Hair Shading
Rendering hair in motion pictures is an important and challenging task. Despite much research on physically based hair rendering, it is currently difficult to benefit from this work because physically based shading models do not offer artist friendly controls. As a consequence, much production work so far has used ad hoc shaders that are easier to control, but often lack the richness seen in real hair.

Augmenting Hand Animation with Three-dimensional Secondary Motion
Secondary motion, or the motion of objects in response to that of the primary character, is widely used to amplify the audience's response to the character's motion and to provide a connection to the environment. These three-dimensional (3D) effects are largely passive and tend to be time-consuming to animate by hand, yet most are very effectively simulated in current animation software. In this paper, we present a technique for augmenting hand-drawn animation of human characters with 3D physical effects to create secondary motion. In particular, we create animations in which hand-drawn characters interact with cloth and clothing, dynamically simulated balls and particles, and a simple fluid simulation. The driving points or volumes for the secondary motion are tracked in two dimensions, reconstructed into three dimensions, and used to drive and collide with the simulated objects. Our technique employs user interaction that can be reasonably integrated into the traditional animation pipeline of drawing, cleanup, inbetweening, and coloring.

Content Retargeting Using Parameter-Parallel Facial Layers
Facial motion retargeting approaches often transfer expressions by establishing correspondences between shared units of motion, such as action units, or spatial correspondences of landmarks between the source actor and target character faces. When the actor and character are structurally dissimilar, shared units of motion or spatial landmarks may not exist, and subtle styles of performance may differ. We present a method to deconstruct the content of an actor’s facial expression into three parameter-parallel layers using a composition function, transfer the content to equivalent parameter-parallel layers for the character, and reconstruct the character’s expression using the same composition function. Our algorithm uses the same parameter-parallel layered model of facial expression for both the actor and character, separating the content of facial expressions into emotion, speech, and eye-blink layers. Facial motion in each layer is embedded in simplicial bases, each of which encodes semantically significant configurations of the face. We show the transfer of facial motion capture and video-based tracking of the eyes and mouth of an actor to a number of faces with dissimilar facial structure and expressive disposition.

Data-Driven Estimation of Cloth Simulation Models
Progress in cloth simulation for computer animation and apparel design has led to a multitude of deformation models, each with its own way of relating geometry, deformation, and forces. As simulators improve, differences between these models become more important, but it is difficult to choose a model and a set of parameters to match a given real material simply by looking at simulation results. This paper provides measurement and fitting methods that allow nonlinear models to be fit to the observed deformation of a particular cloth sample. Unlike standard textile testing, our system measures complex 3D deformations of a sheet of cloth, not just one-dimensional force--displacement curves, so it works under a wider range of deformation conditions. The fitted models are then evaluated by comparison to measured deformations with motions very different from those used for fitting.
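As a toy illustration of the fitting principle, consider a material law that is nonlinear in strain but linear in its parameters; such models can be fit by plain least squares. This hypothetical 1D spring law is not the paper's cloth model, merely a sketch of parameter fitting against measured data (assuming NumPy):

```python
import numpy as np

def fit_nonlinear_stiffness(strain, force):
    """Fit f(s) = k1*s + k3*s^3 to measured strain/force pairs.
    The model is nonlinear in strain but linear in (k1, k3), so a
    linear least-squares solve recovers the parameters directly."""
    A = np.column_stack([strain, strain**3])
    (k1, k3), *_ = np.linalg.lstsq(A, force, rcond=None)
    return k1, k3
```

Fitting a full cloth model to 3D deformations, as in the paper, replaces this one-dimensional residual with the mismatch between simulated and observed cloth geometry.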

Data-Driven Procedural Landscape Modeling
Improvements in computer graphics continue to make worlds look and feel more believable and more realistic. This advance, however, comes at a huge price: financial investment in artists and designers to carry out increasingly time-consuming and detailed tasks.

This work presents several distinct projects that address the problems involved in partially automated reconstruction of virtual environments, with particular focus on landscapes and natural terrain features. Multi-spectral stereo image capture is investigated with the aim of extracting useful information about a natural scene from a small database of images. Aerial and satellite footage is then investigated in order to create natural-looking environments that incorporate the statistical distributions of different types of vegetation and can then be edited (importantly, in an invertible procedural model), amplifying the artist's creativity. The random forest classifier allows us to retain connectivity of regions separated by stochastic patterns (typically observed in vegetation distribution). We are also investigating image synthesis techniques to analyze image patterns and reapply them to target landscapes with procedural models. Presently, promising results are arising from the use of pyramid histograms, which capture the frequency profile of the input terrain reasonably effectively.

We are incorporating this work with an interactive height-field editing system for large-scale landscape design. Using interviews and further feedback from artists, we are steering the development with the goal of amplifying the artist's workflow efficiency. Procedural terrain-editing features under development include: procedural copy and paste of similar features to target locations, ridge-feature drawing, mountain shaping, and a library of distinctive terrain feature maps whose variations can be applied in place on the map.

Deformable Objects Alive
We present a method for controlling the motions of active deformable characters. As an underlying principle, we require that all motions be driven by internal deformations. We achieve this property by dynamically adapting rest shapes in order to induce deformations that, together with environment interactions, result in purposeful and physically-plausible motions. Rest shape adaptation is a powerful concept and we show that by restricting shapes to suitable subspaces, it is possible to explicitly control the motion styles of deformable characters. Our formulation is general and can be combined with arbitrary elastic models and locomotion controllers. We demonstrate the efficiency of our method by animating curve, shell, and solid-based characters whose motion repertoires range from simple hopping to complex walking behaviors.

Differential Blending for Expressive Sketch-Based Posing
Caricatured poses and expressive movement are the hallmark of hand-drawn animation, where pencil and paper afford the artist full creative freedom when crafting the shape and movement of animated characters. In three-dimensional animation, however, highly expressive and caricatured poses are more difficult to achieve, because the artist interacts with the character indirectly via the character's rigging controls. As a result, expressive poses that come naturally from the fluid interaction of paper and pencil can be cumbersome or impossible to achieve using modern 3D animation tools.

Our research addresses these shortcomings in 3D animation by proposing a novel blending method for skeletal deformations and illustrating how it can be used to transfer the concept of "line-of-action" curves from 2D hand-drawn animation for creating highly expressive poses with intuitive sketch-based controls in 3D animation. In 2D animation, these curves serve as a guide to convey the composition, balance, energy, and dynamics of the character's pose. By interpreting these curves for 3D skeletal deformations, our system allows fast and intuitive creation of highly expressive poses that are notoriously difficult to obtain with complex classical rigs.

The core technical challenge of developing such a system lies in blending skeletal transformations. Because highly expressive poses involve large bends and twists, the rigging system is forced to blend large, disparate rotations in complex regions such as the shoulder where vertices are influenced by multiple portions of the skeleton. Due to ambiguities inherent in the representation of rotations, blending algorithms used by existing rigging systems fail to give smooth and intuitive results in this case. To solve these problems, we propose a new blending technique specifically designed for large and disparate transformations, as our main contribution. Our "differential blending" method represents all transformations in a differential manner and computes averages of the differential transformations, which are then composed to get the final blended transformation.
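A minimal sketch of the underlying idea, assuming NumPy: rotations are mapped to the axis-angle (log) domain, averaged there, and mapped back. The full differential blending method composes many such small steps so blending also stays well-behaved beyond 180 degrees; this single-step version only conveys the core principle.

```python
import numpy as np

def exp_so3(w):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    t = np.linalg.norm(w)
    if t < 1e-12:
        return np.eye(3)
    k = w / t
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(t) * K + (1 - np.cos(t)) * (K @ K)

def log_so3(R):
    """Inverse of Rodrigues' formula (valid for angles below pi)."""
    t = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if t < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return t / (2 * np.sin(t)) * w

def blend_rotations(Rs, weights):
    """Weighted blend of rotations in the log domain: one differential
    step. Unlike naive matrix averaging, the result stays a rotation,
    and large same-axis rotations interpolate exactly."""
    w = sum(wi * log_so3(Ri) for Ri, wi in zip(Rs, weights))
    return exp_so3(w)
```

For example, blending the identity with a 2-radian twist at equal weights yields exactly the 1-radian twist, where direct matrix averaging would produce a non-rotation.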

Discrete Bending Forces and Their Jacobians
Computation of bending forces on triangle meshes is required for numerous simulation and geometry processing applications. In particular it is a key component in cloth simulation. A common quantity in many bending models is the hinge angle between two adjacent triangles. This angle is straightforward to compute, and its gradient with respect to vertex positions (required for the forces) is easily found in the literature.
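As a concrete reference, the hinge angle can be evaluated robustly with an atan2 formulation. This sketch assumes NumPy; the triangle/edge orientation convention is one common choice, not prescribed by the paper.

```python
import numpy as np

def hinge_angle(x0, x1, x2, x3):
    """Signed dihedral (hinge) angle at the edge (x0, x1) shared by
    triangles (x0, x1, x2) and (x0, x3, x1). Zero when the two
    triangles are coplanar; sign follows the chosen orientation."""
    e = x1 - x0                        # hinge edge
    n1 = np.cross(x1 - x0, x2 - x0)    # (unnormalized) normal of face 1
    n2 = np.cross(x3 - x0, x1 - x0)    # (unnormalized) normal of face 2
    # atan2 form is numerically robust near 0 and pi, unlike acos of
    # the normalized dot product
    sin_t = np.dot(np.cross(n1, n2), e / np.linalg.norm(e))
    cos_t = np.dot(n1, n2)
    return np.arctan2(sin_t, cos_t)
```

The force computation then differentiates a bending energy of this angle with respect to the four vertex positions; the Jacobians of those forces are the paper's subject.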

Dynamic Visemes
We present a new method for generating dynamic, concatenative units of visual speech that can produce realistic visual speech animation. We redefine visemes as temporal units that describe distinctive movements of the visual speech articulators. Traditionally, visemes have been defined as the set of static mouth shapes representing clusters of contrastive phonemes (e.g., /p, b, m/ and /f, v/). In this work, the motion of the visual speech articulators is used to generate discrete, dynamic visual speech gestures. These gestures are clustered, providing a finite set of movements that describe visual speech: the visemes. Dynamic visemes are applied to speech animation by simply concatenating viseme units. We compare to static visemes using subjective evaluation and find that dynamic visemes produce more accurate and visually pleasing speech animation given phonetically annotated audio, reducing the amount of time that an animator needs to spend manually refining the animation.

Efficient Elasticity for Character Skinning with Contact and Collisions
We present a new algorithm for near-interactive simulation of skeleton driven, high resolution elasticity models to create soft tissue deformation in character animation. The algorithm is based on a novel discretization of corotational elasticity over a hexahedral lattice. Within this framework, we enforce positive definiteness of the stiffness matrix to allow efficient quasistatics and dynamics. In addition, we present a multigrid method that converges with very high efficiency.
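A standard building block of any corotational formulation (not necessarily this paper's exact hexahedral discretization) is extracting the rotation from each element's deformation gradient, for example via SVD-based polar decomposition:

```python
import numpy as np

def corotational_rotation(F):
    """Extract the rotational part R of a deformation gradient F via
    SVD (polar decomposition). The smallest singular direction is
    flipped when needed so that det(R) = +1, which keeps the
    corotational frame a proper rotation even under inversion."""
    U, s, Vt = np.linalg.svd(F)
    R = U @ Vt
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1
        R = U @ Vt
    return R
```

Elastic forces are then measured in this rotated frame, which is what lets corotational elasticity handle large rotations with a linear material model.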

Efficient Simulation of Example-Based Materials
We present a new method for efficiently simulating art-directable deformable materials. We use example poses to define subspaces of desirable deformations via linear interpolation. As a central aspect of our approach, we use an incompatible representation for input and interpolated poses that allows us to interpolate between elements individually. This key insight enables us to bypass costly reconstruction steps and we thus achieve significant performance improvements compared to previous work. As a natural continuation, we furthermore present a formulation of example-based plasticity. Finally, we extend the directability of example-based materials and explore a number of powerful control mechanisms. We demonstrate these novel concepts on a number of solid and shell animations including artistic deformation behaviors, cartoon physics, and example-based pose space dynamics.

Efficient Simulation of Secondary Motion in Rig-Space
We present an efficient method for augmenting keyframed character animations with physically-simulated secondary motion. Our method achieves a performance improvement of one to two orders of magnitude over previous work without compromising on quality. This performance is based on a linearized formulation of rig-space dynamics that uses only rig parameters as degrees of freedom, a physics-based volumetric skinning method that allows our method to predict the motion of internal vertices solely from deformations of the surface, as well as a deferred Jacobian update scheme that drastically reduces the number of required rig evaluations. We demonstrate the performance of our method by comparing it to previous work and showcase its potential on a production-quality character rig.

Expressing Animated Performances through Puppeteering
An essential form of communication between the director and the animators early in the animation pipeline is a rough cut at the motion (a blocked-in animation). This version of the character's performance allows the director and animators to discuss how the character will play his/her role in each scene. However, blocked-in animation is also quite time-consuming to construct, with short scenes requiring many hours of preparation between presentations.

Facial Performance Enhancement Using Dynamic Shape Space Analysis
The facial performance of an individual is inherently rich in subtle deformation and timing details. Although these subtleties make the performance realistic and compelling, they often elude both motion capture and hand animation. We present a technique for adding fine-scale details and expressiveness to low-resolution art-directed facial performances, such as those created manually using a rig, via marker-based capture, by fitting a morphable model to a video, or through Kinect reconstruction using recent faceshift technology. We employ a high-resolution facial performance capture system to acquire a representative performance of an individual in which he or she explores the full range of facial expressions. From the captured data, our system extracts an expressiveness model that encodes subtle spatial and temporal deformation details specific to that particular individual. Once this model has been built, these details can be transferred to low-resolution art-directed performances. We demonstrate results on various forms of input; after our enhancement, the resulting animations exhibit the same nuances and fine spatial details as the captured performance, with optional temporal enhancement to match the dynamics of the actor. Finally, we show experimentally that our technique compares favorably to the current state-of-the-art in example-based facial animation.

Improved Failsafe for Cloth Simulation
Robust treatment of complex collisions is a challenging problem in cloth simulation. The collision response framework of Bridson et al. [2002] is widely adopted in the industry for its efficiency, versatility, and proven ability to solve practical problems. Its efficiency is partly due to a built-in fail-safe: when facing a cluster of interacting simultaneous collisions, the framework extracts the rigid body motion of the cluster, as presented by Provot [1997].

Inferring Artistic Intention in Comic Art through Viewer Gaze
Comics are a compelling, though complex, visual storytelling medium. Researchers are interested in the process of comic art creation in order to automatically tell new stories and also, for example, to summarize videos and catalog large collections of photographs.

Interactive Region-Based Linear 3D Face Models
Linear models, particularly those based on principal component analysis (PCA), have been used successfully on a broad range of human face-related applications. Although PCA models achieve high compression, they have not been widely used for animation in a production environment because their bases lack a semantic interpretation. Their parameters are not a natural set for animators to work with. In this paper we present a linear face modelling approach that allows intuitive click-and-drag interaction for animation.
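The click-and-drag interaction can be sketched as a small regularized least-squares solve for the model coefficients. This toy ignores the region-based structure of the full approach, and all names are illustrative (assuming NumPy):

```python
import numpy as np

def drag_solve(B, mean, idx, target, lam=1e-2):
    """Click-and-drag on a linear face model: find coefficients c so
    that vertex `idx` of (mean + B @ c) reaches `target`, with Tikhonov
    regularization (lam) keeping the result near the model mean.
    B: (3n, k) deformation basis, mean: (3n,), target: (3,)."""
    rows = slice(3 * idx, 3 * idx + 3)
    Bi = B[rows]                 # the three basis rows of the picked vertex
    rhs = target - mean[rows]
    # min ||Bi c - rhs||^2 + lam ||c||^2  =>  (Bi^T Bi + lam I) c = Bi^T rhs
    k = B.shape[1]
    c = np.linalg.solve(Bi.T @ Bi + lam * np.eye(k), Bi.T @ rhs)
    return mean + B @ c
```

Because the whole face is reconstructed from the solved coefficients, dragging one vertex moves correlated regions plausibly, which is the appeal of direct manipulation over raw parameter sliders.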

Joint Importance Sampling of Low-Order Volumetric Scattering
Central to all Monte Carlo-based rendering algorithms is the construction of light transport paths from the light sources to the eye. Existing rendering approaches sample path vertices incrementally when constructing these light transport paths. Paths should ideally be constructed according to a joint probability density function proportional to the integrand, yet current incremental sampling strategies only locally account for certain terms in the integrand. The resulting probability density is thus a product of the conditional densities of each local sampling step, constructed without explicit control over the form of the final joint distribution of the complete path. We analyze why current incremental construction schemes often lead to high variance in the presence of participating media, and reveal that such approaches are an unnecessary legacy inherited from traditional surface-based rendering algorithms. We devise joint importance sampling of path vertices in participating media to construct paths that explicitly account for the product of all scattering and geometry terms along a sequence of vertices instead of just locally at a single vertex. This leads to a number of practical importance sampling routines to explicitly construct single- and double-scattering subpaths in anisotropically-scattering media. We demonstrate the benefit of our new sampling techniques, integrating them into several path-based rendering algorithms such as path tracing, bidirectional path tracing, and many-light methods. We also use our sampling routines to generalize deterministic shadow connections to connection subpaths consisting of two or three random decisions, to efficiently simulate higher-order multiple scattering. Our algorithms significantly reduce noise and increase performance in renderings with both isotropic and highly anisotropic, low-order scattering.
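For contrast, the legacy incremental strategy analyzed above can be sketched in a few lines: each propagation distance is drawn proportionally to transmittance alone, with no knowledge of the rest of the path. Names are ours; this illustrates the baseline, not the joint sampling routines.

```python
import math
import random

def sample_distance(sigma_t, rng=random):
    """Sample a propagation distance in homogeneous media proportional
    to transmittance: pdf(t) = sigma_t * exp(-sigma_t * t). This is a
    purely local decision - it ignores all later path terms, which is
    exactly what joint importance sampling improves upon."""
    t = -math.log(1.0 - rng.random()) / sigma_t
    pdf = sigma_t * math.exp(-sigma_t * t)
    return t, pdf
```

The expected sampled distance is the mean free path 1/sigma_t, regardless of where the light source or next vertex actually lies.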

Leveraging the Talent of Hand Animators to Create Three-Dimensional Animation
The skills required to create three-dimensional animation using computer software are quite different from those required to create hand animation with paper and pencil. The three-dimensional medium has several advantages over the traditional medium—it is easy to relight the scene, render it from different viewpoints, and add physical simulations. We present a method to leverage the talent of traditionally trained hand animators to create three-dimensional animation of human motion, while allowing them to work in the medium that is familiar to them.

Medusa Performance Capture System
The Medusa Performance Capture system, developed by Disney Research in Zurich, consists of a mobile rig of cameras and lights coupled with proprietary software that can reconstruct actors' faces in full motion, without using traditional motion-capture dots. The technology comes as the result of many years' worth of research and scientific advances in capturing and modeling of human faces.

Modeling and Animating Eye Blinks
Facial animation often falls short in conveying the nuances present in the facial dynamics of humans. We investigate the subtleties of the spatial and temporal aspects of eye blinks.

Modeling and Estimation of Internal Friction in Cloth
Several researchers have identified internal friction as the source of large hysteresis in force-deformation measurements in real cloth, yet it has not been incorporated into computer animation models of cloth. Even if the elastic parameters are chosen to fit the average of loading and unloading behaviors, given observed hysteresis as high as 50% of the average force, ignoring internal friction may induce deformation errors of up to 25% for a given load. Internal friction also plays a central role in the formation and dynamics of cloth wrinkles. We have observed that internal friction may induce the formation of ‘preferred’ wrinkles and folds.

In this project, we developed a model of internal friction based on a reparameterization of Dahl's model, and validated that this model provides a good match to important features of cloth hysteresis even with a minimal set of parameters. We also provide novel parameter estimation procedures based on sparse, easy-to-acquire data. In contrast to previous work, which relies on complex force-deformation measurement systems with uniform strain, controlled deformation velocity, and dense data acquisition, the hardware used for acquisition is extremely simple.

Finally, we provide an algorithm for the efficient simulation of internal friction using implicit integration methods. We demonstrate it on cloth simulation examples that show disparate behavior with and without internal friction.
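In its simplest form (exponent one), Dahl's model is a first-order ODE in displacement, dF/dx = sigma * (1 - (F/F_c) * sgn(dx)). A toy explicit integration (parameter names ours, not the reparameterized production model) shows the friction force saturating at +/-F_c and tracing the measured hysteresis loop:

```python
def dahl_step(F, dx, sigma, Fc):
    """One explicit displacement step of Dahl's friction model with
    exponent one: dF/dx = sigma * (1 - (F / Fc) * sign(dx)).
    Under sustained positive sliding F tends to +Fc, under negative
    sliding to -Fc, producing rate-independent hysteresis."""
    s = 1.0 if dx >= 0 else -1.0
    return F + sigma * (1.0 - (F / Fc) * s) * dx
```

Driving a strain cycle through this update and plotting force against displacement reproduces the loop whose width is the hysteresis observed in real cloth.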

Modular Radiance Transfer
Modular Radiance Transfer is an approach for interactively computing approximate direct-to-indirect transfer by warping and combining transport from a library of simple shapes. Incorporating precomputed light transport into authoring pipelines for large scenes incurs long preprocessing times, generates large datasets, hinders artistic iteration workflows, and often results in only modest run-time performance. We observe that using a prior on the distribution of incident lighting enables accurate low-rank approximations to the light transport operator for simple canonical shapes, which can be precomputed off-line. An implicit lighting environment induced from the low-rank approximation is then used to model the flow of light volumetrically in the scene and through interface lightfields between shapes. These interfaces enable coupling between shapes and act as aggregation points for distant propagation, increasing the runtime performance and minimizing the required memory. We replace the scene dependent precomputation with a light-weight, artist driven mapping between the complex scene and the dictionary of shapes. High frame rates are produced on target platforms ranging from cell-phones to high end GPUs.
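The low-rank precomputation at the heart of this idea can be sketched as a truncated SVD of a shape's transport matrix; this is an illustrative stand-in (assuming NumPy), not the prior-based construction described above.

```python
import numpy as np

def low_rank_transport(T, rank):
    """Truncated-SVD factorization of a direct-to-indirect transport
    matrix T (rows: receiver samples, columns: lighting basis).
    Returns thin factors (A, B) with T ~= A @ B, so applying transport
    at runtime costs two skinny matrix products instead of one dense one."""
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

# runtime use: indirect = A @ (B @ direct)
```

When the transport operator is effectively low-rank under the lighting prior, the truncated factors reproduce it almost exactly at a fraction of the storage.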

Multi-linear Data-Driven Dynamic Hair Model with Efficient Hair-Body Collision Handling
We present a data-driven method for learning hair models that enables the creation and animation of many interactive virtual characters in real-time (for gaming, character pre-visualization and design). Our model has a number of properties that make it appealing for interactive applications: (i) it preserves the key dynamic properties of physical simulation at a fraction of the computational cost, (ii) it gives the user continuous interactive control over the hairstyles (e.g., lengths) and dynamics (e.g., softness) without requiring re-styling or re-simulation, (iii) it deals with hair-body collisions explicitly using optimization in the low-dimensional reduced space, (iv) it allows modeling of external phenomena (e.g., wind). Our method builds on the recent success of reduced models for clothing and fluid simulation, but extends them in a number of significant ways. We model motion of hair in a conditional reduced sub-space, where the hair basis vectors, which encode dynamics, are linear functions of user-specified hair parameters. We formulate collision handling as an optimization in this reduced sub-space using fast iterative least squares. We demonstrate our method by building dynamic, user-controlled models of hair styles.

Novel Toolset for 2D Drawing and Animation
This project investigates a set of novel digital tools for 2D Animation addressing the shortcomings of current digital support, representations and algorithms. Our goal is to produce animation tools that are intuitive to use, allow full control over the resulting drawing when desired, and provide the artist with immediate visual feedback of the animation as it progresses.

In contrast to previous work, where automation has often been the goal, we focus on building tools that keep the artist as a central agent. We target automation of the most tedious tasks where the need for artistic interpretation is minimal, and otherwise aim for computer-assisted solutions geared to provide a similar experience as with traditional workflows augmented with algorithmic computation.

We start by investigating the problem of representing drawings digitally, analyzing existing representations, and highlighting their major shortcomings. We then present a hybrid representation that combines the advantages of vector and raster images, and propose the use of a novel vector description for lines and areas.

We then address the problem of vectorization of line drawings. This problem is challenging due to ambiguities in regions where lines are drawn close to each other or intersect. We propose a two-step, topology-driven approach that first exploits the pixel gradient information in a clustering process to generate an initial stroke graph from which the topology of the drawing is learned, and then applies a "reverse drawing" procedure where plausible junction configurations are considered and a heuristic optimum is selected.

Segmentation is a key step in organizing digital drawings into semantic groups ready for editing and animation. Done manually, this can be a very labor intensive task. We propose a scribble-based interface that guides a novel energy minimization resulting in the labeling of the drawing strokes. In contrast to previous methods, we exploit both geometric and temporal information available with modern drawing devices.

In the realm of applications, we address the task of inbetweening, which is the creation of animation frames between pairs of key frames in order to create the illusion of a continuous animation. Drawings are represented as stroke graphs. Given two input key frames, a mapping between the graphs is derived, and spiral trajectories for graph nodes and additional salient points are computed. Strokes are then interpolated, leading to an initial set of inbetween frames. We propose a set of tools to modify the mapping, deal with simple topological mismatches, and redraw animation trajectories.

Finally, we propose a technique to control temporal noise in sketchy animation. Sequences of sketches typically present notable temporal artifacts in the form of visual flickering due to the lack of temporal consistency in the way sketched lines vary from the visually perceived boundaries and interior lines. We propose a two-step method that applies a temporal filter bi-linearly. By combining motion extraction, stroke correspondence, and inbetweening, temporal consistency can be enforced at the stroke level. We first apply this to selected key frames in the input animation to generate a so-called "noise free" sequence, and then to pairs of frames from the input sequence and the noise free sequence to obtain the desired temporal noise level specified by the user.

We present a technique to generalize the 2D painting metaphor to 3D that allows the artist to treat the full 3D space as a canvas. Strokes painted in the 2D viewport window must be embedded in 3D space in a way that gives creative freedom to the artist while maintaining an acceptable level of controllability. We address this challenge by proposing a canvas concept defined implicitly by a 3D scalar field. The artist shapes the implicit canvas by creating approximate 3D proxy geometry. An optimization procedure is then used to embed painted strokes in space by satisfying different objective criteria defined on the scalar field. This functionality allows us to implement tools for painting along level set surfaces or across different level sets. Our method gives the artist the power of fine-tuning the implicit canvas using a unified painting/sculpting metaphor.

In OverCoat, scenes are rendered by projecting each brush stroke onto the current view plane and rasterizing it as one or more fragments for every pixel that the stroke overlaps. This rendering method exposes a technical dilemma about the order in which the fragments should be composited. In the 2D painting metaphor, when the artist places a new paint stroke, it obscures all previous paint strokes that it overlaps. Such behavior is achieved by compositing in stroke order. From a 3D point of view, however, strokes that are closer to the viewer should obscure those that are farther away, which amounts to compositing in depth order.

Compositing purely in stroke order negates much of the benefit of 3D painting, as the sense of tangible objects is lost when the view is changed. Compositing purely in depth order, on the other hand, leads to Z-fighting, precluding the artist from painting over existing strokes, and thus ignores an important part of the 2D painting metaphor. In this work, we formalize this idea for the first time and design a new mixed-order rendering algorithm that addresses these challenges.

Photon Beam Diffusion: A Hybrid Monte Carlo Method for Subsurface Scattering
We present photon beam diffusion, an efficient numerical method for accurately rendering translucent materials. Our approach interprets incident light as a continuous beam of photons inside the material. Numerically integrating diffusion from such extended sources has long been assumed computationally prohibitive, leading to the ubiquitous single-depth dipole approximation and the recent analytic sum-of-Gaussians approach employed by Quantized Diffusion. In this paper, we show that numerical integration of the extended beam is not only feasible, but provides increased speed, flexibility, numerical stability, and ease of implementation, while retaining the benefits of previous approaches. We leverage the improved diffusion model, but propose an efficient and numerically stable Monte Carlo integration scheme that gives equivalent results using only 3–5 samples instead of 20–60 Gaussians as in previous work. Our method can account for finite and multi-layer materials, and additionally supports directional incident effects at surfaces. We also propose a novel diffuse exact single-scattering term which can be integrated in tandem with the multi-scattering approximation. Our numerical approach furthermore allows us to easily correct inaccuracies of the diffusion model and even combine it with more general Monte Carlo rendering algorithms. We provide practical details necessary for efficient implementation, and demonstrate the versatility of our technique by incorporating it on top of several rendering algorithms in both research and production rendering systems.
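A heavily simplified sketch of the numerical idea is shown below: importance-sample depths along the beam from its exponential attenuation and average classical diffusion monopoles. This omits the dipole mirroring, Fresnel and directional terms, the single-scattering term, and MIS described above; constants follow classical diffusion theory, and names are ours.

```python
import numpy as np

def beam_diffusion_monopole(r, sigma_s, sigma_a, n_samples=5, rng=None):
    """Toy Monte Carlo integration of diffusion monopoles along an
    extended beam source. Depths z are sampled with pdf
    sigma_t * exp(-sigma_t * z), which exactly cancels the beam's
    attenuation in the estimator. r: lateral distance from beam entry."""
    rng = rng or np.random.default_rng(0)
    sigma_t = sigma_s + sigma_a
    D = 1.0 / (3.0 * sigma_t)            # classical diffusion coefficient
    sigma_tr = np.sqrt(sigma_a / D)      # effective transport coefficient
    total = 0.0
    for _ in range(n_samples):
        z = rng.exponential(1.0 / sigma_t)       # depth along the beam
        d = np.sqrt(r * r + z * z)               # source -> surface distance
        # estimator: (integrand / pdf) = albedo * monopole fluence at d
        total += (sigma_s / sigma_t) * np.exp(-sigma_tr * d) / (4.0 * np.pi * D * d)
    return total / n_samples
```

Even this stripped-down estimator shows why a handful of well-placed samples along the beam suffices: the importance sampling removes the exponential variation, leaving a smooth integrand.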

Photon Beams
We present two contributions to the area of volumetric rendering. First, we develop a novel, comprehensive theory of volumetric radiance estimation that leads to several new insights and includes all previously published estimates as special cases. This theory allows for estimating in-scattered radiance at a point, or accumulated radiance along a camera ray, with the standard photon particle representation used in previous work. Second, building on this theory, we generalize photon points to photon beams, a more expressive representation of volumetric lighting.

Practical Hessian-Based Error Control for Irradiance Caching
This paper introduces a new error metric for irradiance caching that significantly outperforms the classic Split-Sphere heuristic. Our new error metric builds on recent work using second-order gradients (Hessians) as a principled error bound for the irradiance. We add occlusion information to the Hessian computation, which greatly improves the accuracy of the Hessian in complex scenes, and this makes it possible for the first time to use a radiometric error metric for irradiance caching. We enhance the metric by basing it on the relative error in the irradiance and by making it robust in the presence of black occluders. The resulting error metric is efficient to compute, numerically robust, and supports elliptical error bounds and arbitrary hemispherical sample distributions; unlike the Split-Sphere heuristic, it does not require arbitrary clamping of the computed error thresholds. Our results demonstrate that the new error metric outperforms existing error metrics based on the Split-Sphere model and occlusion-unaware Hessians.
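For context, the Split-Sphere baseline that the paper improves upon reuses a cached irradiance record according to a purely geometric weight (Ward's heuristic); the record is reused when the weight exceeds the reciprocal of a user error threshold. A minimal sketch of that baseline follows; the paper's Hessian-based radiometric metric itself is more involved and not reproduced here.

```python
import math

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def split_sphere_weight(x, n, xi, ni, Ri):
    """Classic Split-Sphere weight of a cached irradiance record.

    x, n   -- query position and surface normal
    xi, ni -- record position and normal
    Ri     -- harmonic-mean distance to surfaces seen from the record
    The weight is geometric only: it ignores the actual irradiance,
    which is what a radiometric (e.g. Hessian-based) metric fixes.
    """
    d = math.dist(x, xi) / Ri
    dn = math.sqrt(max(0.0, 1.0 - dot(n, ni)))
    denom = d + dn
    return float('inf') if denom == 0.0 else 1.0 / denom
```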

Programmable Motion Effects
Although animation is one of the most compelling aspects of computer graphics, the possibilities for depicting the movements that make dynamic scenes so exciting remain limited. In our work, we experiment with motion depiction as a first-class entity within the rendering process. We extend the concept of a surface shader, which is evaluated on an infinitesimal portion of an object’s surface at one instant in time, to that of a programmable motion effect, which is evaluated with global knowledge about all portions of an object’s surface that pass in front of a pixel during an arbitrarily long sequence of time.

Real-Time Volumetric Shadows using 1D Min-Max Mipmaps
Light scattering in a participating medium is responsible for several important effects we see in the natural world. In the presence of occluders, computing single scattering requires integrating the illumination scattered towards the eye along the camera ray, modulated by the visibility towards the light at each point. Unfortunately, incorporating volumetric shadows into this integral, while maintaining real-time performance, remains challenging.

In this paper we present a new real-time algorithm for computing volumetric shadows in single-scattering media on the GPU. This computation requires evaluating the scattering integral over the intersections of camera rays with the shadow map, expressed as a 2D height field. We observe that by applying epipolar rectification to the shadow map, each camera ray only travels through a single row of the shadow map (an epipolar slice), which allows us to find the visible segments by considering only 1D height fields. At the core of our algorithm is the use of an acceleration structure (a 1D min-max mipmap) which allows us to quickly find the lit segments for all pixels in an epipolar slice in parallel. The simplicity of this data structure and its traversal allows for efficient implementation using only pixel shaders on the GPU.
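The 1D acceleration structure can be sketched on the CPU as follows. Here `ray` gives the camera ray's height within the epipolar slice as a function of texel coordinate and is assumed linear (so its extrema over a node lie at the endpoints), and the height field length is assumed to be a power of two; the real implementation performs this traversal in pixel shaders on the GPU.

```python
def build_minmax_mipmap(h):
    """Build (min, max) levels over a power-of-two 1D height field."""
    levels = [[(v, v) for v in h]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(min(prev[i][0], prev[i + 1][0]),
                        max(prev[i][1], prev[i + 1][1]))
                       for i in range(0, len(prev), 2)])
    return levels

def lit_texels(levels, ray, lo=0, level=None):
    """Count texels where the ray passes above the height field (i.e. is
    lit), skipping whole subtrees using the min-max bounds."""
    if level is None:
        level = len(levels) - 1
    nmin, nmax = levels[level][lo]
    width = 1 << level
    x0, x1 = lo * width, lo * width + width - 1
    r0, r1 = ray(x0), ray(x1)
    if min(r0, r1) >= nmax:          # ray entirely above node: fully lit
        return width
    if max(r0, r1) <= nmin:          # ray entirely below node: in shadow
        return 0
    if level == 0:                   # single texel: direct comparison
        return 1 if r0 > nmin else 0
    return (lit_texels(levels, ray, 2 * lo, level - 1) +
            lit_texels(levels, ray, 2 * lo + 1, level - 1))
```

Large lit or shadowed runs are resolved at coarse mipmap levels in O(1) per run, which is what makes the traversal fast in practice.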

Rig-Space Physics
In animated films, believable and compelling character animation requires careful consideration of the complex physical forces involved in movement in order to give weight and substance to an otherwise empty and weightless shape. In current animation pipelines, the disconnect between the workflow used by artists (manually keyframing a small set of rig parameters) and the output of physics-based simulations (a large number of independent degrees of freedom) limits the effectiveness of physical simulation in the animation pipeline. To date, artists must choose between laboriously keyframing physical effects or employing physics-based tools that offer limited control and may not respect the character's range of meaningful deformations.

We present a method that brings the benefits of physics-based simulations to traditional animation pipelines. We formulate the equations of motion in the subspace of deformations defined by an animator's rig. Our framework fits seamlessly into the workflow typically employed by artists, as our output consists of animation curves that are identical in nature to the result of manual keyframing. Artists are therefore capable of exploring the full spectrum between hand-crafted animation and unrestricted physical simulation. To enhance the artist's control, we provide a method that transforms stiffness values defined on rig parameters to a non-homogeneous distribution of material parameters for the underlying FEM model. In addition, we use automatically extracted high-level rig parameters to intuitively edit the results of our simulations, and also to speed up computation. Our method treats all rigging controls in a unified manner and thus works with skeletons, blend shapes, spatial deformation fields, or any other rigging procedure. Moreover, the output of our system consists of animation keyframes for the rig parameters, making editing convenient for the artist. To demonstrate the effectiveness of our method, we create compelling results by adding rich, secondary motions to coarse input animations.
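The key formulation, dynamics restricted to the rig subspace, can be sketched with a finite-differenced rig Jacobian: projecting the full-space mass matrix and forces through the Jacobian yields generalized equations of motion in the rig parameters. The names `rig`, `mass`, and `forces` are illustrative stand-ins, and the explicit Euler step below is far simpler than the paper's solver.

```python
import numpy as np

def rig_space_step(p, p_dot, rig, mass, forces, dt=1.0 / 24.0, eps=1e-4):
    """One explicit dynamics step in rig-parameter space (toy sketch).

    p, p_dot -- rig parameters and their velocities
    rig(p)   -- stacked vertex positions x, shape (3n,)
    mass     -- per-DOF masses, shape (3n,)
    forces(x)-- per-DOF forces, shape (3n,)
    """
    x = rig(p)
    # J[i, j] = d x_i / d p_j via central differences
    J = np.column_stack([
        (rig(p + eps * e) - rig(p - eps * e)) / (2.0 * eps)
        for e in np.eye(len(p))])
    M = np.diag(mass)
    Mp = J.T @ M @ J            # generalized mass in the rig subspace
    fp = J.T @ forces(x)        # generalized forces in the rig subspace
    p_ddot = np.linalg.solve(Mp, fp)
    p_dot = p_dot + dt * p_ddot
    return p + dt * p_dot, p_dot
```

Because the state lives entirely in the rig parameters, every simulated frame is, by construction, a pose the rig can represent and a keyframe the artist can edit.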

Sketch-Based Generation and Editing of Quad Meshes
Extremely coarse quad meshes are the preferred representation for animating characters in movies and video-games. In these scenarios, artists want complete control of the mesh, i.e. the edge flow and the placement of singularities, and existing automatic algorithms are not yet able to match the quality of manually modeled meshes.

We propose an algorithm to quadrangulate 3D shapes that supports explicit control over the placement of edge loops and singularities and arbitrary local subdivision of the patch edges. The algorithm enables a novel user interface specifically designed to assist the interactive creation of coarse quad meshes. The UI is based on a combination of sketch-based tools to define the edge flow, and an autocompletion algorithm that helps the user fill the gaps between quadrangulated parts. The UI responds to the user sketches in real time and provides full control of the geometry and topology of the generated quad mesh, while significantly simplifying the process by removing repetitive, tedious tasks.

We show that with our method, artists can efficiently retopologize triangle meshes or modify quadrangulations generated with existing automatic methods.

Stable Spaces for Real-time Clothing
We present a technique for learning clothing models that enables the simultaneous animation of thousands of detailed garments in real-time. This surprisingly simple conditional model learns and preserves the key dynamic properties of a cloth motion along with folding details. Our approach requires no a priori physical model, but rather treats training data as a ‘black box’. We show that the models learned with our method are stable over large time-steps and can approximately resolve cloth-body collisions. We also show that within a class of methods, no simpler model covers the full range of cloth dynamics captured by ours. Our method bridges the current gap between skinning and physical simulation, combining benefits of speed from the former with dynamic effects from the latter. We demonstrate our approach on a variety of apparel worn by male and female human characters performing a varied set of motions typically used in video games (e.g., walking, running, jumping, etc.).
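As a toy illustration of the black-box idea (not the paper's actual model), one can fit a conditional linear predictor of the next cloth state from the current pose and the previous cloth state by least squares; once fitted, each frame costs a single matrix product, which is what makes such models attractive for real-time use.

```python
import numpy as np

def fit_cloth_model(poses, cloth):
    """Fit a toy conditional-linear cloth model by least squares.

    poses: (T, dp) body pose features per frame
    cloth: (T, dc) cloth state (e.g. stacked vertex offsets) per frame
    Returns step(pose, prev_cloth) -> predicted next cloth state.
    No physics is used; the dynamics are learned from data alone.
    """
    T = len(poses)
    X = np.hstack([poses[1:], cloth[:-1], np.ones((T - 1, 1))])
    Y = cloth[1:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def step(pose, prev_cloth):
        x = np.concatenate([pose, prev_cloth, [1.0]])
        return x @ W

    return step
```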

Style and Abstraction in Portrait Sketching
We use a data-driven approach to study both style and abstraction in sketching of a human face portrait. We gather and analyze data from a number of artists that sketch a human face from a reference photograph. To achieve different levels of abstraction in the sketches, decreasing time limits were imposed – from 4.5 minutes to 15 seconds. We analyzed the data at two levels: strokes and geometric shape. At each level, we create a model that captures both the style of the different artists and the process of abstraction. These models are then used for a portrait sketch synthesis application. Starting from a novel face photograph, we can synthesize a sketch in the various artistic styles and in different levels of abstraction.

Video-Based 3D Motion Capture Through Biped Control
We demonstrate our approach by capturing sequences of walking, jumping, and gymnastics. We evaluate the results through qualitative and quantitative comparisons to video and motion capture data.

Virtual Ray Lights for Rendering Scenes with Participating Media
We present an efficient many-light algorithm for simulating indirect illumination in, and from, participating media. Instead of creating discrete virtual point lights (VPLs) at vertices of random-walk paths, we present a continuous generalization that places virtual ray lights (VRLs) along each path segment in the medium. Furthermore, instead of evaluating the lighting independently at discrete points in the medium, we calculate the contribution of each VRL to entire camera rays through the medium using an efficient Monte Carlo product sampling technique. We prove that by spreading the energy of virtual lights along both light and camera rays, the singularities that typically plague VPL methods are significantly diminished. This greatly reduces the need to clamp energy contributions in the medium, leading to robust and unbiased volumetric lighting not possible with current many-light techniques. Furthermore, by acting as a form of final gather, we obtain higher-quality multiple-scattering than existing density estimation techniques like progressive photon beams.
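The coupling between a virtual ray light and a camera ray is a double line integral over both segments. The uniform Monte Carlo sketch below illustrates it for an isotropic phase function and homogeneous transmittance; the paper's product sampling instead places samples proportionally to the 1/d² peak, which this naive estimator handles poorly. All names here are illustrative.

```python
import math
import random

def vrl_contribution(vrl_a, vrl_b, cam_a, cam_b, sigma_t, n=64, seed=1):
    """Naive MC estimate of the VRL-to-camera-ray double line integral
    (isotropic phase, homogeneous medium, unit radiance along the VRL)."""
    rng = random.Random(seed)

    def lerp(a, b, t):
        return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

    len_l = math.dist(vrl_a, vrl_b)
    len_c = math.dist(cam_a, cam_b)
    total = 0.0
    for _ in range(n):
        p = lerp(vrl_a, vrl_b, rng.random())   # point on the ray light
        q = lerp(cam_a, cam_b, rng.random())   # point on the camera ray
        d = math.dist(p, q)
        if d == 0.0:
            continue   # the singularity that product sampling tames
        # phase functions (1/4pi each) and transmittance along the connection
        total += math.exp(-sigma_t * d) / (d * d) / (4.0 * math.pi) ** 2
    return total * len_l * len_c / n
```

Because the energy is spread along both segments rather than concentrated at points, the 1/d² singularity is integrable, which is why VRLs need far less clamping than VPLs.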