In this arena, we’re addressing a portfolio of research problems whose applications range from short-term improvements to long-term challenges. Ultimately, we envision a future in which robots interact with humans in complex, unpredictable environments. We’re working toward this vision by addressing constituent problems in computer graphics, control techniques for humanoid robotics, and human-robot interaction. We also pursue work of immediate, short-term interest intended to reduce operational costs and improve maintainability.
(in alphabetical order)
A Low-Friction Passive Fluid Transmission and Fluid-Tendon Soft Actuator
We present a passive fluid transmission based on antagonist pairs of rolling diaphragm cylinders. The transmission fluid working volume is completely sealed, forming a closed, passive system, ensuring input-output symmetry and complete backdrivability. Rolling diaphragm-sealed cylinders provide leak-free operation without the stiction of a traditional sliding seal. Fluid pressure preloading allows for bidirectional operation and also serves to preload the gears or belts in the linear-to-rotary output coupler, eliminating system backlash end-to-end. A prototype transmission is built and tested for stiffness, bandwidth, and frictional properties using either air or water as working fluids. Torque transmission is smooth over the entire stroke and stiction is measured to be one percent of full-range torque or less. We also present a tendon-coupled design where the rolling diaphragm is inverted from its normal orientation; this design does not require shaft support bushings, tolerates misalignment, and can be made out of substantially soft materials. Actuator units and a passive transmission are demonstrated using this new soft cylinder design.
A Message-Passing Algorithm for Multi-Agent Trajectory Planning
We describe a novel approach for computing collision-free global trajectories for multiple agents with specified initial and final configurations, based on an improved version of the alternating direction method of multipliers (ADMM) algorithm. Compared with existing methods, our approach is naturally parallelizable and allows for incorporating different cost functionals with only minor adjustments.
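The paper's algorithm is a message-passing variant of ADMM; as a minimal illustration of the consensus-ADMM idea behind it (not the authors' formulation), the sketch below plans smooth 2-D trajectories for two point agents by alternating between an unconstrained smoothness solve and a projection enforcing a minimum separation. All parameters and the toy scenario are illustrative.

```python
import numpy as np

def plan_two_agents(starts, goals, T=30, d_min=0.5, rho=1.0, iters=300):
    """Consensus-ADMM sketch: smooth 2-D trajectories with a minimum
    pairwise separation.  Returns the constraint-satisfying copy z."""
    starts = np.asarray(starts, float)
    goals = np.asarray(goals, float)
    # Second-difference operator; smoothness cost is ||D2 x||^2 per coordinate.
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    A = 2.0 * D2.T @ D2                       # Hessian of the smoothness term
    inner = slice(1, T - 1)
    H = A[inner, inner] + rho * np.eye(T - 2)

    # Trajectories x[k]: (T, 2), endpoints pinned to start/goal of agent k.
    x = np.array([np.linspace(starts[k], goals[k], T) for k in range(2)])
    z, u = x.copy(), np.zeros_like(x)
    for _ in range(iters):
        # x-update: quadratic solve per agent and coordinate, endpoints fixed.
        for k in range(2):
            for c in range(2):
                v = z[k, :, c] - u[k, :, c]
                rhs = (rho * v[inner]
                       - A[inner, 0] * x[k, 0, c]
                       - A[inner, T - 1] * x[k, T - 1, c])
                x[k, inner, c] = np.linalg.solve(H, rhs)
        # z-update: project each time step onto {||z_a - z_b|| >= d_min}.
        w = x + u
        for t in range(T):
            diff = w[0, t] - w[1, t]
            n = np.linalg.norm(diff)
            if n < d_min:
                diff = diff * (d_min / n) if n > 1e-12 else np.array([d_min, 0.0])
                mid = 0.5 * (w[0, t] + w[1, t])
                w[0, t], w[1, t] = mid + 0.5 * diff, mid - 0.5 * diff
        z = w
        u = u + x - z
    return z
```

Because each cost term (smoothness per agent, separation per time step) is handled by its own simple sub-step, the iteration parallelizes naturally across agents and time steps, which is the property the abstract highlights.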
A Passively Safe and Gravity-Counterbalanced Anthropomorphic Robot Arm
When designing a robot for human safety during direct physical interaction, one approach is to size the robot’s actuators to be physically incapable of exerting damaging impulses, even during a controller failure. However, merely lifting the arms against their own weight may then consume the entire available torque budget, preventing the rapid and expressive movement required for anthropomorphic robots. To mitigate this problem, gravity-counterbalancing of the arms is a common tactic; however, most designs adopt a shoulder singularity configuration which, while favorable for simple counterbalance design, has a range of motion better suited to industrial robot arms. In this paper we present a shoulder design using a novel differential mechanism to counterbalance the arm while preserving an anthropomorphically favorable singularity configuration and natural range of motion. Furthermore, because the motors driving the shoulder are completely grounded, counterbalance masses or springs are easily placed away from the shoulder and low in the torso, improving mass distribution and balance. A robot arm using this design is constructed and evaluated for counterbalance efficacy and backdrivability under closed-loop force control.
This project investigates balance control for a biped robot standing and walking on a rolling ball, and presents a control framework for the task. We design a balance controller for a simplified linear model of the biped, which comprises a foot connected to a lumped mass through an ankle joint and a translational spring and damper. We also derive a collision model for the system consisting of the cylinder, supporting leg, and swing leg. The control framework consists of two primary components: a balance controller and a footstep planner.
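The project's actual model includes the translational spring-damper and the rolling ball; as a much rougher stand-in for the balance-controller component, the sketch below stabilizes a linearized inverted pendulum with a PD ankle-torque law. The masses, lengths, and gains are illustrative, not taken from the paper.

```python
def simulate_balance(theta0=0.1, dt=0.01, steps=500,
                     m=1.0, l=1.0, g=9.81, kp=30.0, kd=10.0):
    """PD ankle-torque balance on a linearized inverted pendulum.

    Linearized dynamics about upright: m*l^2 * theta_dd = m*g*l*theta - tau.
    Stability requires kp > m*g*l (here 30 > 9.81).
    """
    theta, theta_d = theta0, 0.0
    for _ in range(steps):
        tau = kp * theta + kd * theta_d              # PD ankle torque
        theta_dd = (m * g * l * theta - tau) / (m * l * l)
        theta_d += theta_dd * dt                     # semi-implicit Euler
        theta += theta_d * dt
    return theta, theta_d

final_theta, final_rate = simulate_balance()         # decays toward upright
```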
Controlling Humanoid Robots with Motion Capture Data
Motion capture is a good source of data for programming humanoid robots because it contains the natural styles and synergies of human behaviors. However, it is difficult to directly use captured motion data because the kinematics and dynamics of humanoid robots differ significantly from those of humans. In this work, we develop a controller that allows a robot to maintain balance while tracking a given reference motion. The controller consists of a balance controller based on a simplified robot model and a tracking controller that combines local joint feedback with an optimization process to obtain joint torques that simultaneously realize balancing and tracking. We have implemented the controller on a full-body, force-controlled humanoid robot and demonstrated that the robot can track captured human motion sequences.
Display Swarm is a new kind of display composed of a mobile robot swarm. Each robot acts as an individual pixel and has controllable color. We use the swarm to make representational images and animated movies.
Our first prototype system had 14 robots, sufficient to generate basic graphics and providing a test-bed for research on robot collision avoidance and localization. This research also addressed the unusual requirement of achieving visually appealing motion of the robots. The latest prototype system has 75 robots, with magnetic wheels for deployment on a vertical surface to provide better visibility.
Swarm images are a novel concept that raises basic questions about how to best represent an image with a finite number of movable pixels, and current research is investigating swarm graphics and interaction.
Humanoid Robot Calibration
This project presents methods and experimental results for identifying the kinematic and dynamic parameters of force-controlled biped humanoid robots. The basic idea is to solve an optimization problem derived from a kinematic constraint that can be easily enforced in practice, such as placing both feet flat on the floor.
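As a toy illustration of that idea (not the paper's formulation), the sketch below recovers joint-encoder offsets for a planar two-link leg by Gauss-Newton, using only the constraint that the foot point lies on the floor in every recorded pose. The link lengths, hip height, and offsets are all invented.

```python
import numpy as np

L1, L2, H = 0.5, 0.5, 0.8            # link lengths and hip height (illustrative)

def foot_height(q):
    """Foot height above the floor for joint angles q = [q1, q2]."""
    q1, q12 = q[0], q[0] + q[1]
    return H - (L1 * np.cos(q1) + L2 * np.cos(q12))

def calibrate(q_meas, iters=15):
    """Gauss-Newton fit of encoder offsets d so foot_height(q_meas + d) == 0."""
    d = np.zeros(2)
    for _ in range(iters):
        q = q_meas + d                                    # corrected angles
        r = np.array([foot_height(qi) for qi in q])       # constraint residuals
        # Analytic Jacobian of foot_height w.r.t. the two offsets.
        J = np.stack([L1 * np.sin(q[:, 0]) + L2 * np.sin(q[:, 0] + q[:, 1]),
                      L2 * np.sin(q[:, 0] + q[:, 1])], axis=1)
        d -= np.linalg.lstsq(J, r, rcond=None)[0]
    return d

# Synthetic data: poses whose true foot point lies exactly on the floor,
# observed through encoders with a hypothetical offset.
delta_true = np.array([0.05, -0.03])
q_true = []
for q1, s in zip(np.linspace(-0.3, 0.3, 6), [1, -1, 1, -1, 1, -1]):
    q12 = s * np.arccos((H - L1 * np.cos(q1)) / L2)       # enforce foot on floor
    q_true.append([q1, q12 - q1])
q_meas = np.array(q_true) - delta_true                    # encoders read true - offset
d_hat = calibrate(q_meas)
```

With consistent data the residual is exactly zero at the true offsets, so Gauss-Newton converges to them in a handful of iterations.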
Operational Space Control of Constrained and Underactuated Systems
The operational space formulation (Khatib, 1987), applied to rigid-body manipulators, describes how to decouple task-space and null-space dynamics, and how to write control equations that correspond only to forces at the end-effector or, alternatively, only to motion within the null space. We would like to apply this useful theory to modern humanoids and other legged systems for manipulation and similar tasks; however, these systems present additional challenges due to their underactuated floating bases and contact states that can change dynamically.
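For reference, the standard fixed-base equations being generalized here (these are Khatib's textbook relations, not this project's extension to floating-base, contact-switching systems) are, for joint-space dynamics M(q)q̈ + h(q, q̇) = τ and task Jacobian J(q):

```latex
% Task acceleration: \ddot{x} = J\ddot{q} + \dot{J}\dot{q}
\Lambda  = \left( J M^{-1} J^{\top} \right)^{-1}
           % task-space inertia
\bar{J}  = M^{-1} J^{\top} \Lambda
           % dynamically consistent generalized inverse
F        = \Lambda \, \ddot{x}_{\mathrm{des}}
           + \Lambda \left( J M^{-1} h - \dot{J}\dot{q} \right)
           % task-space force
\tau     = J^{\top} F
           + \left( I - J^{\top} \bar{J}^{\top} \right) \tau_{\mathrm{null}}
           % torques: task part plus null-space part
```

The projector I − JᵀJ̄ᵀ is what guarantees that the secondary torques τ_null produce no acceleration at the end-effector; the floating base and changing contacts break the assumptions behind M⁻¹ being freely usable, which is the difficulty this project addresses.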
Pixelbots is a new kind of display composed of a mobile robot swarm. Each robot acts as an individual pixel and has controllable color. We use the swarm to make representational images and animated movies. The robotics research is in collision avoidance for a swarm of mobile robots. The graphics research is in creating visual effects using a small number of mobile pixels.
Playing Catch and Juggling with a Humanoid Robot
Robots in entertainment environments typically do not allow for physical interaction and contact with people. However, catching and throwing back objects is one form of physical engagement that still maintains a safe distance between the robot and participants. Using an animatronic humanoid robot, we developed a test bed for a throwing and catching game scenario. We use an external camera system (ASUS Xtion PRO LIVE) to locate balls and a Kalman filter to predict ball destination and timing. The robot’s hand and joint-space are calibrated to the vision coordinate system using a least-squares technique, such that the hand can be positioned to the predicted location. Successful catches are thrown back two and a half meters forward to the participant, and missed catches are detected to trigger suitable animations that indicate failure. Human-to-robot partner juggling (three-ball cascade pattern, one hand for each partner) is also achieved by speeding up the catching/throwing cycle. We tested the throwing/catching system on six participants (one child and five adults, including one elderly), and the juggling system on three skilled jugglers.
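The prediction step can be sketched as a constant-gravity Kalman filter in the vertical plane: filter a few frames of ball positions, then roll the ballistic model forward to the catch height. This is a minimal illustration, not the deployed system; the 2-D state, noise levels, and catch geometry are all assumptions.

```python
import numpy as np

DT, G = 0.01, 9.81
# Discrete ballistic model, state s = [x, z, vx, vz]; gravity enters as input.
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
B = np.array([0.0, -0.5 * G * DT**2, 0.0, -G * DT])
Hm = np.array([[1.0, 0, 0, 0],
               [0, 1.0, 0, 0]])               # the camera observes (x, z) only

def predict_landing_x(measurements, catch_z=1.0):
    """Filter (x, z) ball positions, then forward-simulate to predict the
    x coordinate at which the ball crosses the catch height."""
    s = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.diag([1e-4, 1e-4, 10.0, 10.0])     # velocities start very uncertain
    Q, R = 1e-8 * np.eye(4), 1e-6 * np.eye(2)
    for m in measurements[1:]:
        s, P = F @ s + B, F @ P @ F.T + Q                    # predict
        K = P @ Hm.T @ np.linalg.inv(Hm @ P @ Hm.T + R)      # Kalman gain
        s, P = s + K @ (np.asarray(m) - Hm @ s), (np.eye(4) - K @ Hm) @ P
    while s[1] > catch_z:                     # roll the model to the catch plane
        s = F @ s + B
    return s[0]

# Synthetic throw generated with the same model, so the filter can lock on.
s_true = np.array([0.0, 1.5, 3.0, 2.0])       # thrown toward the robot
meas = [s_true[:2].copy()]
for _ in range(14):
    s_true = F @ s_true + B
    meas.append(s_true[:2].copy())
pred_x = predict_landing_x(meas)
while s_true[1] > 1.0:                        # true landing x, same model
    s_true = F @ s_true + B
true_x = s_true[0]
```

In the real system the predicted (x, timing) pair would be mapped through the least-squares hand calibration to position the hand before the ball arrives.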
This project is on capturing 3D models of environments, outdoor and indoor, and capturing activity that is happening in environments, using cameras mounted on mobile robots. Traditional computer vision applications have used fixed-installation or hand-held cameras. There has been limited use of mobile cameras (e.g. Google's Street View camera trucks, or plane/helicopter cameras to capture imagery for city models), but these have been special-purpose deployments that are not available to the ordinary user. A new mode of deploying computer vision is now appearing as autonomous robots become commonplace for everyday applications. This is the result of converging technology trends: affordable robot hardware, more powerful on-board computation for mobile robots, longer battery life, and the maturing of algorithms to support autonomous robot operation using vision.
This project is on robot sensors: robots carrying cameras and other sensors that are deployed on an ad-hoc basis to perform a task in an environment, but which are not a permanent installation. The three components of the project are (a) robot-mounted cameras/sensors, (b) intelligent infrastructure (wireless and VLC devices to support the deployment of mobile robots), and (c) modeling of the environment so that robots are aware of physical context. This work is a platform intended to support a wide range of applications that have traditionally been done with fixed or hand-held cameras.
In this work, we address the challenging task of a humanoid robot standing up from a chair. First, we recorded demonstrations of sit-to-stand motions from normal human subjects as well as from actors performing stylized standing motions (e.g., imitating an elderly person). Ground contact force information was also collected for these motions in order to estimate the human’s center-of-mass trajectory. We then mapped the demonstrated motions to the humanoid robot via an inverse kinematics procedure that attempts to track the human’s kinematics as well as their center-of-mass trajectory. To estimate the robot’s center-of-mass position accurately, we additionally used an inertial parameter identification technique that fits link mass and center-of-mass parameters from measured force data. We demonstrate the resulting motions on the Carnegie Mellon/Sarcos hydraulic humanoid robot.
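The identification step fits per-link parameters; as a minimal single-body illustration of the same least-squares idea, the sketch below recovers a lumped body-frame CoM offset from center-of-pressure readings taken at several known body pitches. The statics model and all numbers are synthetic.

```python
import numpy as np

def fit_com_offset(pitches, cop_x):
    """Least-squares fit of a body-frame CoM offset (cx, cz) from the
    world-frame CoP x position measured at known body pitch angles.

    Statics model (base at origin): cop_x = cos(pitch)*cx - sin(pitch)*cz,
    which is linear in the unknowns, so a single lstsq call solves it.
    """
    A = np.stack([np.cos(pitches), -np.sin(pitches)], axis=1)
    c, *_ = np.linalg.lstsq(A, cop_x, rcond=None)
    return c

# Synthetic measurements from a hypothetical true offset.
true_c = np.array([0.02, 0.15])
pitches = np.linspace(-0.4, 0.4, 9)           # vary posture for identifiability
cop_x = np.cos(pitches) * true_c[0] - np.sin(pitches) * true_c[1]
c_hat = fit_com_offset(pitches, cop_x)
```

Varying the posture is what makes the two unknowns separable; with a single pose the regressor matrix would be rank-1.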
Synthesizing Object Receiving Motions of Humanoid Robots with Human Motion Database
This project presents a method for synthesizing motions of a humanoid robot that receives an object from a human. We focus on a natural object-passing scenario in which the human initiates the passing motion by moving the object toward the robot, and the robot continuously adapts its motion to the observed human motion in real time.
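A bare sketch of the database-lookup idea (not the paper's method, which adapts motion continuously): store (observed human hand position → robot pose) pairs from recorded clips, then at runtime return the robot pose whose stored hand position is nearest to the current observation. All names and data here are hypothetical.

```python
import numpy as np

def build_database(human_hand_paths, robot_frames):
    """Flatten per-clip (hand position -> robot pose) pairs into two arrays."""
    keys = np.concatenate(human_hand_paths)    # (N, 3) observed hand positions
    values = np.concatenate(robot_frames)      # (N, D) matching robot poses
    return keys, values

def lookup(keys, values, observed_hand):
    """Return the robot pose whose stored hand position is nearest."""
    d = np.linalg.norm(keys - observed_hand, axis=1)
    return values[np.argmin(d)]
```

A real-time system would smooth between successive lookups rather than jump; this sketch only shows the retrieval core.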
Towards Automatic Discovery of Agile Gaits for Quadrupedal Robots
Developing control methods that allow legged robots to move with skill and agility remains one of the grand challenges in robotics. In order to achieve this ambitious goal, legged robots must possess a wide repertoire of motor skills. A scalable control architecture that can represent a variety of gaits in a unified manner is therefore desirable. Inspired by the motor learning principles observed in nature, we use an optimization approach to automatically discover and fine-tune parameters for agile gaits. The success of our approach is due to the controller parameterization we employ, which is compact yet flexible, therefore lending itself well to learning through repetition. We use our method to implement a flying trot, a bound and a pronking gait for StarlETH, a fully autonomous quadrupedal robot.
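The actual system tunes its compact controller parameterization through repeated rollouts on the robot; the sketch below shows only the bare "learning through repetition" loop, using a deterministic compass (pattern) search and a synthetic stand-in for the gait-cost rollout. The cost function, parameter names, and the hypothetical optimum are invented for illustration.

```python
import numpy as np

def gait_cost(params):
    """Stand-in for a rollout on the robot or simulator: pretend some
    (frequency, duty factor, amplitude, phase offset) setting is optimal."""
    best = np.array([2.0, 0.6, 0.25, 0.5])    # hypothetical optimum
    return float(np.sum((np.asarray(params) - best) ** 2))

def compass_search(cost, x0, step=0.25, min_step=1e-4):
    """Deterministic pattern search: try +/- step on each coordinate,
    keep any improvement, and halve the step when none is found."""
    x, fx = np.asarray(x0, float), cost(x0)
    while step > min_step:
        improved = False
        for i in range(len(x)):
            for sgn in (+1.0, -1.0):
                cand = x.copy()
                cand[i] += sgn * step
                fc = cost(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5                        # refine when stuck
    return x, fx

x_opt, f_opt = compass_search(gait_cost, [1.0, 0.5, 0.1, 0.0])
```

On hardware each `gait_cost` evaluation is an expensive, noisy trial, which is why the paper's choice of a compact, flexible parameterization (few parameters to search over) matters so much.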