Tactile Rendering of 3D Features on Touch Surfaces
In this project, we develop and apply a tactile rendering algorithm to simulate rich 3D geometric features (such as bumps, ridges, edges, protrusions, and textures) on touch-screen surfaces. The underlying hypothesis is that when a finger slides over an object, minute surface variations are sensed by friction-sensitive mechanoreceptors in the skin. Thus, modulating the friction forces between the fingertip and the touch surface can create the illusion of surface variations. We propose that the perception of a 3D “bump” is created when local gradients of the virtual bump are mapped to lateral friction forces.
To validate our approach, we used an electrovibration-based friction display to modulate the friction forces between the touch surface and the sliding finger. We first determined a psychophysical relationship between the voltage applied to the display and the subjective strength of the friction forces, and then used this function to render friction forces directly proportional to the gradient (slope) of the surface being rendered. In a pairwise comparison study, we showed that users are at least three times more likely to prefer the proposed slope model over other commonly used models. Our algorithm is concise, lightweight, and easily applicable to static images and video streams.
Our algorithm has three main steps: 1) calculate the gradient of the virtual surface to be rendered, 2) compute the dot product of the surface gradient and the velocity of the sliding finger, and 3) map the dot product to a voltage using the psychophysical relationship.
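The three steps can be sketched as follows. This is a minimal illustration, not the actual implementation: the psychophysical voltage-to-friction function is assumed here to be a Stevens-type power law with made-up fit parameters `K` and `A`, and `V_MAX` is an assumed actuator limit.

```python
import numpy as np

# Assumed psychophysical mapping (hypothetical parameters, not measured):
# perceived friction f = K * V**A, inverted below to get a drive voltage.
K, A = 0.05, 1.7
V_MAX = 120.0  # assumed voltage limit of the friction display

def voltage_for_friction(f):
    """Invert the assumed power law; negative friction demands clamp to zero."""
    return min(V_MAX, (max(f, 0.0) / K) ** (1.0 / A))

def render_voltage(height, x, y, vx, vy):
    """One update of the three-step algorithm on a 2D height map.

    height   : 2D array of the virtual surface (rows = y, cols = x)
    (x, y)   : finger position in pixel coordinates
    (vx, vy) : finger velocity
    """
    # Step 1: gradient of the virtual surface (np.gradient returns
    # derivatives along rows then columns, i.e. gy then gx).
    gy, gx = np.gradient(height)
    # Step 2: dot product of the local gradient and the finger velocity;
    # positive when the finger slides "uphill", where friction should rise.
    dot = gx[y, x] * vx + gy[y, x] * vy
    # Step 3: map the dot product to a drive voltage.
    return voltage_for_friction(dot)
```

For example, on a Gaussian bump a finger moving toward the peak yields a positive dot product and hence a non-zero voltage, while moving down the far slope yields zero.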
To render a real object, such as the one shown in (a), we extend our basic algorithm. The input to the algorithm is a depth map of the object, either measured using, for example, a Kinect sensor or extracted from a 3D model, see (b). From the depth field, we calculate the gradient field (c) and render haptic feedback as the finger moves over the 2D image of the object (d).
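The depth-map extension can be sketched as a one-time precomputation plus a cheap per-frame lookup. This is an assumed structure, not the paper's implementation; the nearest-pixel sampling shown here is a simplification (a real system might interpolate bilinearly).

```python
import numpy as np

def gradient_field(depth):
    """Precompute the gradient field of a depth map (e.g. captured by a
    Kinect-like sensor or rendered from a 3D model). Done once per image,
    so the per-frame work reduces to two array lookups."""
    gy, gx = np.gradient(depth.astype(float))
    return gx, gy

def sample_gradient(gx, gy, x, y):
    """Nearest-pixel lookup of the precomputed gradient at the finger
    position (x, y), clamped to the image bounds."""
    r = int(round(np.clip(y, 0, gy.shape[0] - 1)))
    c = int(round(np.clip(x, 0, gx.shape[1] - 1)))
    return gx[r, c], gy[r, c]
```

The sampled gradient then feeds the same dot-product-to-voltage mapping used for synthetic surfaces.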
The picture on the right (a) is augmented with user-defined 2D ellipsoidal bumps (b). Fine details of the picture are rendered by analyzing the gray levels of the image.
Depth maps extracted from Kinect-like sensors are used to render fine features of visual images that are neither touchable nor reachable.
Data extracted from digital elevation models are overlaid on navigation maps to provide elevation and depth information to users.
One main feature of our algorithm is that it is lightweight and can easily be implemented in real time. (a) A 3D model of an object can be zoomed and panned in real time to sense its fine edges and protruding features. (b) Similarly, the algorithm scales to render fine tactile features on live video streams.