Robotics


In this arena, we’re addressing a portfolio of research problems whose applications range from short-term improvements to long-term challenges. Ultimately, we envision a future in which robots interact with humans in complex, unpredictable environments. We’re working toward this vision by addressing constituent problems in computer graphics, control techniques for humanoid robotics, and human-robot interaction. We also pursue opportunities of immediate, short-term interest intended to reduce operational costs and improve maintainability.

Projects

Ballwalker

This project investigates optimization and control techniques that allow a biped robot to maintain balance and walk on a rolling ball. We design a balance controller for a simplified linear model of the robot, which comprises a foot connected to a lumped mass through an ankle joint and a translational spring and damper. We also derive a collision model for the system consisting of the cylinder, supporting leg, and swing leg. The control framework consists of two primary components: a balance controller and a footstep planner.
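As a rough illustration of the balance-controller layer, the sketch below applies a standard LQR design to a hypothetical linearized model. The A and B matrices are placeholders, not the project’s actual ball-and-biped dynamics.

```python
# Minimal sketch: LQR balance controller for a linearized biped-on-ball model.
# The A, B matrices below are illustrative placeholders, not the paper's model.
import numpy as np
from scipy.linalg import solve_continuous_are

# State: [ball angle, ball rate, body lean, body lean rate] (hypothetical)
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [2.0, 0.0, -1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [-1.0, 0.0, 9.0, 0.0]])
B = np.array([[0.0], [0.5], [0.0], [-1.0]])

Q = np.diag([10.0, 1.0, 10.0, 1.0])  # state cost
R = np.array([[0.1]])                # ankle-torque cost

# Solve the continuous-time algebraic Riccati equation for the gain K.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

def ankle_torque(x):
    """Stabilizing feedback u = -K x about the upright equilibrium."""
    return float(-K @ x)
```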

Controlling Humanoid Robots with Motion Capture Data

Motion capture is a good source of data for programming humanoid robots because it contains the natural styles and synergies of human behaviors. However, captured motion data is difficult to use directly because the kinematics and dynamics of humanoid robots differ significantly from those of humans. In this work, we develop a controller that allows a robot to maintain balance while tracking a given reference motion. The controller consists of a balance controller based on a simplified robot model and a tracking controller that performs local joint feedback, with an optimization process that computes the joint torques needed to realize balancing and tracking simultaneously. We have implemented the controller on a full-body, force-controlled humanoid robot and demonstrated that the robot can track captured human motion sequences.
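To make the two-layer structure concrete, here is a minimal sketch of how a local PD tracking term and a balance objective can be blended in a single least-squares problem. The gains, the matrix A mapping torques to the balance quantity, and the function names are all hypothetical placeholders, not the paper’s formulation.

```python
# Minimal sketch: blend PD tracking torques with a balance objective in one
# least-squares problem. Gains, dimensions, and the mapping A from joint
# torques to the balance quantity are hypothetical placeholders.
import numpy as np

def tracking_torque(q, qd, q_ref, qd_ref, kp=200.0, kd=20.0):
    """Local joint-space PD feedback toward the captured reference motion."""
    return kp * (q_ref - q) + kd * (qd_ref - qd)

def blended_torque(tau_track, A, f_balance, w_track=1.0, w_bal=10.0):
    """Least-squares trade-off between tracking and balance:
       minimize w_track*||tau - tau_track||^2 + w_bal*||A tau - f_balance||^2.
    """
    n = tau_track.size
    H = np.vstack([np.sqrt(w_track) * np.eye(n),
                   np.sqrt(w_bal) * A])
    b = np.concatenate([np.sqrt(w_track) * tau_track,
                        np.sqrt(w_bal) * f_balance])
    tau, *_ = np.linalg.lstsq(H, b, rcond=None)
    return tau
```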

Display Swarm

Display Swarm is a new kind of display composed of a swarm of mobile robots. Each robot acts as an individual pixel with controllable color. We use the swarm to make representational images and animated movies. Our first prototype system had 14 robots, sufficient to generate basic graphics, and provided a test-bed for research on robot collision avoidance and localization. This research also addressed the unusual requirement of achieving visually appealing motion of the robots. The latest prototype system has 75 robots, with magnetic wheels for deployment on a vertical surface to provide better visibility. Swarm images are a novel concept that raises basic questions about how best to represent an image with a finite number of movable pixels, and current research is investigating swarm graphics and interaction.
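One simple way to pose the robots-as-pixels question is as an assignment problem: give each robot a target pixel location so that total travel is small. The sketch below uses a standard Hungarian assignment purely as an illustration; it is not the project’s actual planner and ignores collision avoidance entirely.

```python
# Minimal sketch: assign each swarm robot to a target "pixel" position so
# total travel distance is small, one ingredient of appealing swarm motion.
# Illustrative only -- not the project's planner; collision avoidance omitted.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_robots_to_pixels(robot_xy, pixel_xy):
    """robot_xy: (n, 2) current positions; pixel_xy: (n, 2) image targets."""
    # Cost matrix of squared distances between every robot/target pair.
    diff = robot_xy[:, None, :] - pixel_xy[None, :, :]
    cost = np.sum(diff**2, axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cols  # cols[i] is the target pixel index for robot i
```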

Electromagnetic Eye

We have designed and prototyped an Electromagnetic Eye for animatronic applications. The Eye consists of a clear solid-acrylic inner sphere surrounded by a transparent outer plastic shell. The inner sphere is painted to look like a human eye, with the pupil area and a section at the back left clear. The inner sphere floats in a liquid that index-matches it to the outer shell, and in which it is neutrally buoyant. Permanent magnets are embedded at the North and South poles and the East and West equatorial poles. Electromagnetic coils mounted on the outer shell can move the Eye at saccade speeds exceeding those of the human eye. Because of the index matching, the entire eye acts as a non-moving lens for a bare CCD chip mounted at the back of the Eye, allowing it to be used as a video camera. In addition, the inner eye is magnified by the structure so as to appear to be the outer surface, allowing the Eye to “rotate” even when pressed against animatronic skin. Most recently, we have begun exploring how the Eye could not only satisfy animatronic needs for more human-appearing eyes with vision capability, but actually be used as a cosmetic human-eye prosthesis driven by signals derived from a person’s remaining (functioning) eye.

Humanoid Robot Calibration

This project presents methods and experimental results for identifying the kinematic and dynamic parameters of force-controlled biped humanoid robots. The basic idea is to solve an optimization problem built around a kinematic constraint that can be easily enforced, such as placing both feet flat on the floor.
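A minimal sketch of that idea, assuming a hypothetical forward-kinematics routine: treat the joint offsets as unknowns and minimize, over many recorded poses, the predicted violation of the feet-flat-on-the-floor constraint.

```python
# Minimal sketch of the calibration idea: find joint-offset parameters that
# best explain an easily enforced constraint (both feet flat on the floor)
# across many recorded poses. forward_foot_heights is a hypothetical stand-in
# for the robot's actual kinematic model.
import numpy as np
from scipy.optimize import least_squares

def forward_foot_heights(joint_angles, offsets):
    """Placeholder kinematics: returns the two feet's heights and tilts
    given measured joint angles corrected by candidate offsets."""
    corrected = joint_angles + offsets
    # ... a real implementation would run the leg kinematic chains here ...
    return np.zeros(4)

def residuals(offsets, poses):
    # In every pose the feet were physically flat on the floor, so any
    # nonzero predicted height/tilt is modeling error to be minimized.
    return np.concatenate([forward_foot_heights(q, offsets) for q in poses])

# poses: list of measured joint-angle vectors recorded with both feet flat.
# result = least_squares(residuals, x0=np.zeros(n_joints), args=(poses,))
```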

Playing Catch and Juggling with a Humanoid Robot

Robots in entertainment environments typically do not allow for physical interaction and contact with people. However, catching and throwing back objects is one form of physical engagement that still maintains a safe distance between the robot and participants. Using an animatronic humanoid robot, we developed a test bed for a throwing and catching game scenario. We use an external camera system (ASUS Xtion PRO LIVE) to locate balls and a Kalman filter to predict ball destination and timing. The robot’s hand and joint space are calibrated to the vision coordinate system using a least-squares technique, such that the hand can be positioned at the predicted location. Successful catches are thrown back two and a half meters forward to the participant, and missed catches are detected to trigger suitable animations that indicate failure. Human-to-robot partner juggling (a three-ball cascade pattern, one hand per partner) is also achieved by speeding up the catching/throwing cycle. We tested the throwing/catching system on six participants (one child and five adults, including one elderly adult), and the juggling system on three skilled jugglers.
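As an illustration of the prediction step, the sketch below runs a linear Kalman filter under a gravity-only ballistic model and rolls the state forward to a catch plane. The frame rate, noise covariances, and catch-plane height are assumed values, not the system’s tuned parameters.

```python
# Minimal sketch: predict a thrown ball's landing point with a linear Kalman
# filter under a gravity-only ballistic model. Noise levels, frame rate, and
# the catch-plane height are illustrative assumptions.
import numpy as np

DT, G = 1.0 / 30.0, 9.81  # assumed camera frame period and gravity

# State [x, y, z, vx, vy, vz]; constant velocity plus gravity on z.
F = np.eye(6)
F[:3, 3:] = DT * np.eye(3)
u = np.array([0, 0, -0.5 * G * DT**2, 0, 0, -G * DT])  # gravity input
H = np.hstack([np.eye(3), np.zeros((3, 3))])           # we observe position
Q = 1e-4 * np.eye(6)                                   # process noise
R = 1e-3 * np.eye(3)                                   # measurement noise

def kf_step(x, P, z):
    """One predict/update cycle given a new ball detection z (3-vector)."""
    x, P = F @ x + u, F @ P @ F.T + Q          # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # update
    P = (np.eye(6) - K @ H) @ P
    return x, P

def predict_catch(x, z_catch=1.0, max_steps=300):
    """Roll the ballistic model forward until the ball reaches the assumed
    catch-plane height z_catch; returns predicted (x, y) and time-to-go."""
    for k in range(max_steps):
        if x[2] <= z_catch:
            return x[0], x[1], k * DT
        x = F @ x + u
    return x[0], x[1], max_steps * DT
```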

Operational Space Control of Constrained and Underactuated Systems

The operational space formulation (Khatib, 1987), applied to rigid-body manipulators, describes how to decouple task-space and null-space dynamics, and how to write control equations that correspond only to forces at the end-effector or, alternatively, only to motion within the null space. We would like to apply this useful theory to modern humanoids and other legged systems for manipulation and similar tasks; however, these systems present additional challenges due to their underactuated floating bases and contact states that can change dynamically.
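For reference, this is roughly what the classical decomposition looks like for a fixed-base, fully actuated manipulator; the floating-base, contact-switching extension this project pursues requires additional machinery not shown here.

```python
# Minimal sketch of Khatib's operational space decomposition for a fixed-base,
# fully actuated manipulator. M (joint-space inertia), J (task Jacobian),
# F_task, and tau_posture are placeholders supplied by the caller.
import numpy as np

def operational_space_torque(M, J, F_task, tau_posture):
    """tau = J^T F + N^T tau_posture, with the dynamically consistent
    null-space projector N = I - Jbar J, where Jbar = M^-1 J^T Lambda."""
    Minv = np.linalg.inv(M)
    Lambda = np.linalg.inv(J @ Minv @ J.T)   # task-space inertia
    Jbar = Minv @ J.T @ Lambda               # dynamically consistent pseudoinverse
    N = np.eye(M.shape[0]) - Jbar @ J        # null-space projector
    return J.T @ F_task + N.T @ tau_posture
```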

Sensor Robots

This project captures 3D models of outdoor and indoor environments, as well as the activity happening within them, using cameras mounted on mobile robots. Traditional computer vision applications have used fixed-installation or hand-held cameras. There has been limited use of mobile cameras (e.g., Google’s Street View camera trucks, or plane- and helicopter-mounted cameras that capture imagery for city models), but these have been special-purpose deployments not available to the ordinary user. A new mode of deploying computer vision is now appearing as autonomous robots become commonplace for everyday applications.

Sit to Stand

In this work, we address the challenging task of making a humanoid robot stand up from a chair. First, we recorded demonstrations of sit-to-stand motions from normal human subjects, as well as from actors performing stylized standing motions (e.g., imitating an elderly person). Ground contact force information was also collected for these motions in order to estimate the human’s center-of-mass trajectory. We then mapped the demonstrated motions to the humanoid robot via an inverse kinematics procedure that tracks the human’s kinematics as well as their center-of-mass trajectory. To estimate the robot’s center-of-mass position accurately, we additionally used an inertial parameter identification technique that fits mass and center-of-mass link parameters from measured force data. We demonstrate the resulting motions on the Carnegie Mellon/Sarcos hydraulic humanoid robot.
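As a sketch of the identification step: in static poses the measured center of pressure equals the ground projection of the center of mass, which is linear in the unknown per-link parameters (each link’s mass m_i and mass-weighted offset m_i·c_i), so they can be fit by least squares. The link_frames routine below is a hypothetical stand-in for the robot’s kinematics, not the paper’s implementation.

```python
# Minimal sketch of inertial parameter identification: in static poses the
# force-plate center of pressure equals the ground projection of the CoM,
# which is linear in the per-link unknowns (m_i, m_i * c_i). link_frames is
# a hypothetical kinematics routine; the regressor is the key structure.
import numpy as np

def com_regressor(q, link_frames, n_links):
    """Builds A(q) such that A(q) @ theta = total_mass * com_xy, where
    theta stacks [m_i, m_i * c_i (3 entries)] for each link."""
    rows = []
    for axis in range(2):  # x and y ground-plane coordinates
        row = []
        for i in range(n_links):
            R, p = link_frames(q, i)  # world rotation/origin of link i
            # m_i contributes p[axis]; m_i * c_i contributes R's row.
            row.extend([p[axis], *R[axis, :]])
        rows.append(row)
    return np.array(rows)

# Stack regressors over many static poses against (measured total mass) *
# (measured CoP), then solve with np.linalg.lstsq for the link parameters.
```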