Human-Computer Interaction

We’re interested in the many ways computer interfaces can span the digital and tangible worlds, giving rise to qualitatively new experiences. Our agenda takes advantage of technologies that are relatively new in the commercial world, and whose interactions have not yet been fully explored. Our researchers invent new technologies for sensing touch and pose, and create new sensory experiences such as haptic illusions.

Projects

(in alphabetical order)

3D Printed Interactive Speakers
We propose technology for designing and manufacturing interactive 3D printed speakers. With the proposed technology, sound reproduction can easily be integrated into various objects at the design stage, and little assembly is required. The speaker can take the shape of anything from an abstract spiral to a rubber duck, opening new opportunities in product design. Furthermore, both audible sound and inaudible ultrasound can be produced with the same design, enabling the identification and tracking of 3D printed objects in space using common integrated microphones. The design of 3D printed speakers is based on electrostatic loudspeaker technology, first explored in the early 1930s but not broadly applied until now. These speakers are simpler than common electromagnetic speakers, while allowing for sound reproduction at 60 dB levels with arbitrary directivity ranging from focused to omnidirectional. Our research on 3D printed speakers contributes to the growing body of work exploring functional 3D printing in interactive applications.

A Passively Safe and Gravity-Counterbalanced Anthropomorphic Robot Arm
When designing a robot for human safety during direct physical interaction, one approach is to size the robot’s actuators to be physically incapable of exerting damaging impulses, even during a controller failure. With actuators sized this way, however, merely lifting the arms against their own weight may consume the entire available torque budget, preventing the rapid and expressive movement required for anthropomorphic robots. To mitigate this problem, gravity-counterbalancing of the arms is a common tactic; however, most designs adopt a shoulder singularity configuration which, while favorable for simple counterbalance design, has a range of motion better suited to industrial robot arms. In this paper we present a shoulder design using a novel differential mechanism that counterbalances the arm while preserving an anthropomorphically favorable singularity configuration and natural range of motion. Furthermore, because the motors driving the shoulder are completely grounded, counterbalance masses or springs are easily placed away from the shoulder and low in the torso, improving mass distribution and balance. A robot arm using this design is constructed and evaluated for counterbalance efficacy and backdrivability under closed-loop force control.

A Tongue Input Device for Creating Conversations
We present a new tongue input device, the tongue joystick, for use by an actor inside an articulated-head character costume. Using our device, the actor can maneuver through a dialogue tree, selecting clips of prerecorded audio to hold a conversation in the voice of the character.

Aireal: Interactive Tactile Experiences in Free Air
AIREAL is a low-cost, scalable haptic technology that delivers rich tactile sensations in mid-air. AIREAL enables users to feel virtual objects without wearing or touching a physical device. To deliver these tactile sensations, AIREAL uses a vortex, a ring of air that can impart a significant force the user can feel at a distance. The AIREAL technology is almost entirely 3D printed, including an enclosure, a flexible nozzle, and a pan-and-tilt gimbal structure that allows a vortex to be precisely delivered to any location in 3D space. We envision using AIREAL in numerous applications, including gaming and storytelling, location-based entertainment, and simulation.

Botanicus Interacticus: Interactive Plant Technology
Botanicus Interacticus is a technology for designing highly expressive interactive plants, both living and artificial. Driven by the rapid fusion of our computing and living spaces, Botanicus Interacticus takes interaction from computing devices and places it anywhere living plants are present. The interaction is rich and extensive: it goes beyond simple touch detection to allow complex gestural interaction, such as sliding fingers along the stem of a plant, detecting touch and grasp location, tracking proximity between the user’s hand and a plant, and estimating the amount of touch contact.

Capacitive Fingerprinting
We propose a novel human identification and differentiation technique based on Touché sensing technology. Touché measures the impedance of a user to the environment by sweeping across a range of AC frequencies. Different people have different bone densities and muscle mass, wear different clothes and footwear, and so on. This, in turn, yields different impedance profiles, which makes it possible to attribute touch events and multitouch gestures to a particular user and to modify interaction and system response accordingly. Note that this technology does not require any instrumentation of the user or the environment. We refer to this technology as Capacitive Fingerprinting.
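
As an illustrative sketch (not the published Touché implementation), identification can be framed as matching a measured swept-frequency impedance profile against enrolled per-user profiles. The user names, frequency count, and impedance values below are all hypothetical:

```python
# Hypothetical sketch: identify a user by nearest-profile matching
# over swept-frequency impedance measurements.

def profile_distance(a, b):
    """Euclidean distance between two impedance profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(measured, enrolled):
    """Return the enrolled user whose stored profile is closest."""
    return min(enrolled, key=lambda user: profile_distance(measured, enrolled[user]))

# Enrolled profiles: impedance magnitude sampled at five swept frequencies.
enrolled = {
    "alice": [1.00, 0.82, 0.65, 0.51, 0.40],
    "bob":   [1.10, 0.95, 0.88, 0.70, 0.62],
}
touch = [1.02, 0.85, 0.63, 0.50, 0.41]
print(identify(touch, enrolled))  # → alice
```

A deployed system would of course use a trained classifier with many more frequency samples per sweep; nearest-profile matching just makes the underlying idea concrete.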

Electric Flora: An Interactive Energy Harvesting Installation
Electric Flora is an interactive, human-powered energy harvesting system that converts a person’s movement into light. The installation explores the interaction of bodies in space, movement, materials, and electrostatic energy.
Electric Flora relies on a person’s interactions with a polyester-covered floor, hanging acrylic rods with embedded LEDs, and garments. The electromechanical conversion, from a user’s movements to electricity, is based on the triboelectric effect, in which rubbing and contact between different insulators cause them to exchange electric charge.

Electrostatic Vibration
We present Electrostatic Vibration (formerly “TeslaTouch”), a new technology for enhancing touch interfaces with tactile sensations. Electrostatic Vibration is based on the electrovibration phenomenon and does not use any moving parts. Our technology provides a wide range of tactile sensations to fingers sliding across surfaces of any shape and size, from small mobile displays to curved or wall-sized screens. Electrostatic Vibration can be easily combined with a wide range of touch sensing technologies, including capacitive, optical and resistive touch screens. When combined with an interactive display and touch input, our tactile technology enables the design of interfaces that allow the user to feel virtual elements through touch. It can be used to enhance a wide range of applications with rich tactile feedback, such as feeling properties of interface elements in graphical user interfaces, maps and characters in video games, textures and colors in graphical painting applications, and many more.

Feel Effects: Enriching Storytelling with Haptic Feedback
Despite a long history of use in communication, haptic feedback is a relatively new addition to the toolbox of special effects. Unlike artists who use sound or vision, haptic designers cannot simply access libraries of effects that map cleanly to media content, and they lack even guiding principles for creating such effects. In this paper, we make progress toward both capabilities: we generate a foundational library of usable haptic vocabulary and do so with a methodology that allows ongoing additions to the library in a principled and effective way. We define a feel effect as an explicit pairing between a meaningful linguistic phrase and a rendered haptic pattern. Our initial experiment demonstrates that users who have only their intrinsic language capacities, and no haptic expertise, can generate a core set of feel effects that lend themselves via semantic inference to the design of additional effects. The resulting collection of more than forty effects covers a wide range of situations (including precipitation, animal locomotion, and striking and pulsating events) and is empirically shown to produce the named sensation for the majority of our test users in a second experiment. Our experiments demonstrate a unique and systematic approach to designing a vocabulary of haptic sensations that are related in both the semantic and parametric spaces.

FeelCraft
FeelCraft is a media plugin that monitors events and states in the media and associates them with expressive tactile content using a library of feel effects (FEs). A feel effect (FE) is a user-defined haptic pattern that, by virtue of its connection to a meaningful event, generates dynamic and expressive effects on the user’s body. We compiled a library of more than fifty FEs associated with common events in games, movies, storybooks, etc., and used them in a sandbox-type gaming platform. The FeelCraft plugin allows a game designer to quickly generate haptic effects, associate them to events in the game, play them back for testing, save them and/or broadcast them to other users to feel the same haptic experience. Our demonstration shows an interactive procedure for authoring haptic media content using the FE library, playing it back during interactions in the game, and broadcasting it to a group of guests.

HideOut
How can projected imagery traverse the digital-physical divide to interact with physical objects and surfaces in the environment? HideOut explores how mobile projectors can enable new forms of interaction with digital content projected on everyday objects such as books, walls, game boards, tables, and many others.
We enable seamless interaction between the digital and physical worlds using specially formulated infrared-absorbing markers, hidden from the human eye but visible to a camera embedded in a compact mobile projection device. Digital imagery directly augments and responds to the physical objects it is projected on, such as an animated character interacting with printed graphics in a storybook.

Interactive Light Field Painting
Since Sutherland's seminal Sketchpad work in 1963, direct interaction with computers has been compelling: we can directly touch, move, and change what we see. Direct interaction is a major contribution to the success of smartphones and tablets, but the world is not flat. While existing technologies can display realistic multi-view stereoscopic 3D content reasonably well, interaction within the same 3D space often requires extensive additional hardware. This project presents a cheap and easy system that uses the same lenslet array for both multi-view autostereoscopic display and 3D light-pen position sensing. The display provides multi-user, glasses-free autostereoscopic viewing with motion parallax. A single near-infrared camera located behind the lenslet array is used to track a light pen held by the user. Full 3D position tracking is accomplished by analyzing the pattern produced when light from the pen shines through the lenslet array. This light pen can be used to directly draw into a displayed light field, or as input for object manipulation or defining parametric lines. The system has a number of advantages. First, it inexpensively provides both multi-view autostereoscopic display and 3D sensing with 1:1 mapping. A review of the literature indicates that this has not been offered in previous interactive content-creation systems. Second, because the same lenslet array provides both 3D display and 3D sensing, the system design is extremely simple, inexpensive, and easy to build and calibrate. The demo at SIGGRAPH 2012 shows a variety of interesting interaction styles with a prototype implementation: freehand drawing, polygonal and parametric line drawing, model manipulation, and model editing.

Interactive Mobile Projectors
Mobile projectors are small projectors that can be embedded in handheld computing devices such as smartphones and tablets. Market research predicts that as many as 39 million devices with embedded projectors will be on the market by 2014. Developing new interfaces for mobile projectors opens up a range of possibilities for interactive experiences in both work and play. (Projects included: MotionBeam; SideBySide)

Ishin-Den-Shin: Transmitting Sound Through Touch
Ishin-Den-Shin explores the use of the human body as a sound transmission medium. The technology turns an audio message into an inaudible signal that is relayed by the human body. When the communicator’s finger lightly rubs an object, this physical interaction creates an ad-hoc speaker that makes it possible to hear the recorded sounds. A special case of Ishin-Den-Shin interaction happens when the communicator touches another person’s ear. In this case, a modulated electrostatic field creates a very small vibration of the ear lobe; the finger and the other person’s ear together form a speaker, which makes the signal audible only to the person being touched.

MotionBeam
The MotionBeam project explores new forms of character interaction using handheld projectors. With our prototype system, users interact and control projected characters by moving and gesturing with the projection device. This creates a unified interaction style where sensor input and projector output are tied together within a single device.

Our character and racing game applications show how MotionBeam can be used with mobile games. It can also be utilized for augmented reality interaction by linking projected content to physical objects in the environment. We envision MotionBeam as a key component in a new ‘game projector’ platform where the real world becomes a playground and users interact directly with each other and the environment.

One Man Band: A Touch Screen Interface for Producing Multi-Camera Sports Broadcasts
Generating broadcasts of live sporting events requires a coordinated crew of camera operators, directors, and technical personnel to control and switch between multiple cameras to tell the evolving story of a game. In this paper, we present a unimodal interface concept that allows one person to cover live sporting action by controlling multiple cameras and determining which view to broadcast. The interface exploits the structure of sports broadcasts, which typically switch between a zoomed-out game-camera view (which records the strategic team-level play) and a zoomed-in iso-camera view (which captures the animated adversarial relations between opposing players). The operator simultaneously controls multiple pan-tilt-zoom cameras by pointing at a location on the touch screen, and selects which camera to broadcast using one or two points of contact. The image from the selected camera is superimposed on top of a wide-angle view captured from a context-camera, which provides the operator with peripheral information useful for ensuring good framing while controlling the camera. We show that by unifying directorial and camera operation functions, we can achieve broadcast quality comparable to that of a multi-person crew, while reducing cost, logistical, and communication complexities.

Paper Generators: Harvesting Energy from Touching, Rubbing and Sliding
We present a new energy harvesting technology that generates electrical energy from a user’s interactions with paper-like materials. The energy harvesters are flexible, light, and inexpensive, and they utilize a user’s gestures such as tapping, touching, rubbing and sliding to generate energy. The harvested energy is then used to actuate LEDs, e-paper displays and other devices to create interactive applications for books and other printed media.

Papillon
PAPILLON is a technology for 3D printing highly expressive animated eyes for interactive characters, robots and toys. Expressive eyes are essential in any form of face-to-face communication, and designing them has been a critical challenge in robotics, as well as in interactive character and toy development. Crucially, the traditional animatronics approach is not applicable to fictional characters from animated movies, comics and cartoons, whose eye expressions are non-realistic, highly exaggerated and can take any size and shape to communicate a character’s emotions and intentions, e.g. “dollar signs” for greed or a “heart” for romance. [Projects include: Expressive Eyes for Interactive Characters (ACM SIGGRAPH 2013) and Designing Curved Display Surfaces with Printed Optics (ACM UIST 2013)]

Printed Optics
Printed Optics is a new approach to creating custom optical elements for interactive devices using 3D printing. Printed Optics enables sensing, display, and illumination elements to be directly embedded in the body of an interactive device. Using these elements, unique display surfaces, novel illumination techniques, custom optical sensors, and robust embedded components can be digitally fabricated for rapid, high-fidelity, customized interactive devices.

Printing Teddy Bears: A Technique for 3D Printing of Soft Interactive Objects
This paper considers the design, construction, and example use of a new type of 3D printer which fabricates three-dimensional objects from soft fibers (wool and wool blend yarn). This printer allows the substantial advantages of additive manufacturing techniques (including rapid turn-around prototyping of physical objects and support for high levels of customization and configuration) to be employed with a new class of material. This material is a form of loose felt formed when fibers from an incoming feed of yarn are entangled with the fibers in layers below it. The resulting objects recreate the geometric forms specified in the solid models which define them, but are soft and flexible – somewhat reminiscent of hand-knitted materials. This extends 3D printing from typically hard and precise forms into a new set of forms which embody a different aesthetic of soft and imprecise objects, and provides a new capability for researchers to explore the use of this class of materials in interactive devices.

Revel: Programming the Sense of Touch
REVEL is a new wearable tactile technology that modifies the user’s tactile perception of the physical world. Current tactile technologies enhance objects and devices with various actuators to create rich tactile sensations, limiting the experience to the interaction with instrumented devices. In contrast, REVEL can add artificial tactile sensations to almost any surface or object, with very little if any instrumentation of the environment. As a result, REVEL can provide dynamic tactile sensations on touch screens as well as everyday objects and surfaces in the environment, such as furniture, walls, wooden and plastic objects, and even human skin.

SideBySide
SideBySide is a novel interactive system that allows multiple people to play and work together using handheld projectors anytime and anywhere. The system is immediate and simple: users simply project onto a surface, and their projection becomes aware of and responsive to other projections nearby. Interaction can range from projector-based games, such as boxing with projected characters, to everyday tasks such as exchanging contact information by ‘dragging and dropping’ it onto another user’s projection.

Importantly, SideBySide does not require any fixed sensing in the environment and can be used anywhere: at home, at the office, or even inside the car during long road trips. The system consists of a hybrid mobile projector that outputs both visible and invisible projections at the same time. The invisible projection contains tracking data that can be recognized by the device camera, allowing accurate location tracking of multiple projections and lightweight communication between devices.

Social Sensing for Human-Robot Interaction
We explore the idea of a sidekick from a human-robot interaction perspective. A sidekick is closely associated with another, primary character, and regarded as a subordinate or partner. Sidekicks are popular in various forms of narrative, where they are often used as comic relief or to introduce an accessible character to increase audience engagement. Likewise, sidekicks can act as a vehicle for raising an obvious concern to the primary character from the audience. For example, a sidekick may yell, “Look out!” to the hero when a villain appears on screen.

Surround Haptics
Surround Haptics is a new tactile technology that uses a low-resolution grid of inexpensive vibrating actuators to generate high-resolution, continuous, moving tactile strokes on human skin. It is based on a carefully designed and thoroughly evaluated algorithm that uses tactile illusions to create and move virtual actuators anywhere on the grid. The technology has applications in video games, movies, rides, toys, consumer products, medical and assistive devices, sporting equipment, and more.

Surround Haptics: Immersive Tactile Experiences
The Surround Haptics technology is integrated with a wide variety of entertainment and media content, so that the content is not only seen and heard but also felt. The tactile content is carefully created and synchronized with visual and auditory cues to create effective and immersive experiences and increase user engagement while playing video games, watching movies, and more. The technology can be integrated into theater seats, gaming chairs and vests, rides, gloves, shoes, hand-held devices and controllers, and clothing to add another dimension of sensory feedback. For example, while playing an intense driving simulation game, users feel road conditions, gravel, traction, acceleration, braking, explosions, collisions, and more.

Surround Haptics: Tactile Brush Algorithm
Tactile Brush is an algorithm that produces smooth, two-dimensional moving tactile strokes with varying frequency, intensity, velocity and direction of motion. The design of the algorithm is derived from the results of psychophysical investigations of tactile illusions such as apparent tactile motion (the phi phenomenon), phantom sensations (the funneling illusion), saltation, and others. Combined, these illusions allow for the design of high-density two-dimensional tactile displays for the body using sparse vibrating arrays. In a series of experimental evaluations we demonstrate that Tactile Brush is robust and can reliably generate a wide variety of moving tactile sensations for a broad range of applications, actuation technologies, body sites and embodiments.
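
One building block behind such virtual actuators, the phantom (funneling) sensation, can be sketched with a commonly used energy-summation model: a virtual actuator between two physical ones is rendered by splitting intensity so that the total vibration energy is preserved. This is a simplified illustration of one illusion, not the full Tactile Brush algorithm (which also schedules actuator onset times to produce apparent motion):

```python
import math

def phantom_amplitudes(beta, a_virtual):
    """Split a virtual actuator's intensity between two physical
    actuators. Under the energy-summation model, perceived intensity
    tracks the sum of squared amplitudes, so each physical amplitude
    is scaled by the square root of its positional weight:
      a1 = sqrt(1 - beta) * Av,  a2 = sqrt(beta) * Av
    where beta in [0, 1] is the virtual actuator's normalized
    position between actuator 1 and actuator 2."""
    a1 = math.sqrt(1.0 - beta) * a_virtual
    a2 = math.sqrt(beta) * a_virtual
    return a1, a2

# Sweep the virtual actuator from actuator 1 (beta=0) to actuator 2 (beta=1);
# total energy a1^2 + a2^2 stays constant at a_virtual^2 throughout.
for beta in (0.0, 0.25, 0.5, 0.75, 1.0):
    a1, a2 = phantom_amplitudes(beta, 1.0)
    print(f"beta={beta:.2f}  a1={a1:.2f}  a2={a2:.2f}")
```

Stepping `beta` smoothly over time moves the felt vibration point continuously between the two physical actuators, which is what lets a sparse grid render a dense stroke.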

Tactile Rendering of 3D Features on Touch Surfaces
In this project, we develop and apply a tactile rendering algorithm to simulate rich 3D geometric features (such as bumps, ridges, edges, protrusions, and textures) on touch screen surfaces. The underlying hypothesis is that when a finger slides on an object, minute surface variations are sensed by friction-sensitive mechanoreceptors in the skin. Thus, modulating the friction forces between the fingertip and the touch surface can create the illusion of surface variations. We propose that the perception of a 3D “bump” is created when the local gradients of the virtual bump are mapped to lateral friction forces.
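
A minimal sketch of this gradient-to-friction mapping, assuming a hypothetical 1-D Gaussian bump and a proportional gain `k` (both illustrative choices, not the project's actual rendering pipeline):

```python
import math

def bump_height(x, center=0.5, width=0.1, amp=1.0):
    """A hypothetical 1-D Gaussian bump on the virtual surface."""
    return amp * math.exp(-((x - center) ** 2) / (2 * width ** 2))

def friction_command(x, k=1.0, eps=1e-4):
    """Map the local height gradient under the fingertip at position x
    to a lateral friction modulation, per the hypothesis that
    gradient-proportional friction forces evoke the illusion of a bump.
    The gradient is estimated by a central finite difference."""
    grad = (bump_height(x + eps) - bump_height(x - eps)) / (2 * eps)
    return k * grad

# Sliding left-to-right across the bump: the commanded friction rises on
# the leading slope, passes through ~zero at the crest, and reverses sign
# on the trailing slope.
for x in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"x={x:.1f}  friction={friction_command(x):+.3f}")
```

A 2-D screen would use the partial derivative along the finger's velocity direction, but the one-dimensional case already shows the shape of the mapping.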

Take or Wait? Learning Turn-Taking from Multiparty Data
We build turn-taking models for autonomous characters in language-based interactions with small groups of children. Two models explore the use of support vector machines given the same multimodal features, but different methods for collecting turn-taking labels.
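
At inference time, a trained linear SVM reduces to a simple decision function, sign(w·x + b). The sketch below illustrates that rule for a take-or-wait decision; the feature names, weights, and bias are invented for illustration and are not values learned from the multiparty data:

```python
def take_turn(features, weights, bias):
    """Decision rule of a (hypothetical) trained linear SVM: take the
    turn when the decision value w.x + b is positive, else wait."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "take" if score > 0 else "wait"

# Illustrative multimodal features: [silence_duration, gaze_at_character, speech_energy]
weights = [1.5, 2.0, -1.0]  # hypothetical learned weights
bias = -1.0                 # hypothetical learned bias

print(take_turn([0.8, 1.0, 0.2], weights, bias))  # → take
print(take_turn([0.1, 0.0, 0.9], weights, bias))  # → wait
```

The two models in the project differ in where the training labels for this kind of classifier come from, not in the form of the decision rule itself.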

The Magical Wooden Stick: Enchanting the Sense of Touch
In this installation, a magical wooden stick places its holder under a spell that modifies the feel of objects and images of the environment. It reveals a parallel world in which dynamic textures are hidden in a colorful world of smooth and silent images and shapes. This installation is based on the Revel Technology.

Touché: Touch and Gesture Sensing for the Real World
Touché is a new sensing technology based on a Swept Frequency Capacitive Sensing technique that can not only detect a touch event, but simultaneously recognize complex configurations of the human fingers, hand and body during touch interaction. This makes it possible to significantly enhance touch interfaces in a broad range of applications, from conventional touchscreens to novel interaction scenarios for unique contexts and materials, such as the human body and liquids.