This project concerns capturing 3D models of environments, both outdoor and indoor, and capturing the activity taking place in those environments, using cameras mounted on mobile robots. Traditional computer vision applications have used fixed-installation or hand-held cameras. There has been limited use of mobile cameras (e.g. Google’s Street View camera trucks, or plane/helicopter cameras capturing imagery for city models), but these have been special-purpose deployments not available to the ordinary user. A new mode of deploying computer vision is now appearing as autonomous robots become commonplace for everyday applications. This is the result of converging technology trends – affordable robot hardware, more powerful on-board computation for mobile robots, longer battery life, and the maturing of algorithms that support autonomous robot operation using vision.
The project centers on robot sensors – robots carrying cameras and other sensors that are deployed on an ad-hoc basis to perform a task in an environment, rather than installed permanently. The project has three components: (a) robot-mounted cameras and sensors; (b) intelligent infrastructure – wireless and visible light communication (VLC) devices that support the deployment of mobile robots; and (c) modeling of the environment so that robots are aware of their physical context. Together these form a platform intended to support a wide range of applications that have traditionally relied on fixed or hand-held cameras.