Semi-Situated Learning of Verbal and Nonverbal Content for Repeated Human-Robot Interaction


Iolanda Leite (Disney Research Pittsburgh)
André Pereira (Disney Research Pittsburgh)
Allison Funkhouser (Disney Research Pittsburgh)
Boyang Albert Li (Disney Research Pittsburgh)
Jill F. Lehman (Disney Research Pittsburgh)

ICMI 2016

November 12, 2016


Content authoring of verbal and nonverbal behavior is a limiting factor when developing agents for repeated social interactions with the same user. We present PIP, an agent that crowdsources its own multimodal language behavior using a method we call semi-situated learning. PIP renders segments of its goal graph into brief stories that describe future situations, sends the stories to crowd workers who author and edit a single line of character dialog and its manner of expression, integrates the results into its goal state representation, and then uses the authored lines at similar moments in conversation. We present an initial case study in which the language needed to host a trivia game interaction is learned pre-deployment and tested in an autonomous system with 200 users "in the wild." The interaction data suggest that the method generates both meaningful content and variety of expression.
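The loop described in the abstract can be sketched in code. The sketch below is purely illustrative: the class names, prompt wording, and line-selection policy are assumptions, not the paper's actual goal-graph representation or crowdsourcing pipeline, and the crowd-worker step is stubbed with hand-written lines.

```python
# Hypothetical sketch of semi-situated learning: goal-graph segments are
# rendered as short story prompts, crowd workers author a line of dialog
# for each, and the agent reuses the authored lines when it reaches the
# matching goal state. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class GoalNode:
    """One situation in the agent's goal graph."""
    state: str                                  # e.g. "greet_returning_user"
    context: str                                # rendered into the story prompt
    lines: list = field(default_factory=list)   # crowd-authored dialog


def render_story(node: GoalNode) -> str:
    """Turn a goal-graph segment into a brief story for crowd workers."""
    return (f"PIP is a robot hosting a trivia game. {node.context} "
            "Write one line PIP could say next.")


def integrate(node: GoalNode, authored_line: str) -> None:
    """Store an authored (and edited) line under its goal state."""
    node.lines.append(authored_line)


def choose_line(graph: dict, state: str, visit: int) -> str:
    """At run time, pick an authored line for the current goal state,
    cycling through alternatives to vary expression across visits."""
    lines = graph[state].lines
    return lines[visit % len(lines)]


# Pre-deployment: author content for one situation.
node = GoalNode("greet_returning_user",
                "A player PIP met yesterday has just walked up again.")
graph = {node.state: node}
prompt = render_story(node)   # in the paper, this goes to crowd workers
integrate(node, "Hey, welcome back! Ready for round two?")
integrate(node, "You again! I was hoping you'd return.")

# Deployment: reuse authored lines at similar moments in conversation.
first = choose_line(graph, "greet_returning_user", visit=0)
second = choose_line(graph, "greet_returning_user", visit=1)
```

Cycling through alternatives is one simple way to obtain the "variety of expression" the abstract reports; the deployed system's selection policy may differ.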

Download: "Semi-Situated Learning of Verbal and Nonverbal Content for Repeated Human-Robot Interaction" (paper) [PDF, 1.36 MB]

Copyright Notice

The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.