Teaching Robots the Complexities of Human Social Interactions
Matthias Scheutz, PhD, joined the Department of Computer Science in 2010. His main research areas are human–robot interaction and computational modeling of complex systems. He approaches these areas from his broad background in philosophy (PhD, University of Vienna), formal logic (MS, University of Vienna), computer engineering (MS, Vienna University of Technology), and cognitive science and computer science (joint PhD, Indiana University). Scheutz is an associate professor of cognitive and computer science, director of the Human–Robot Interaction Laboratory, and program director for the joint PhD in cognitive science (approved by Tufts in May 2011).
Scheutz focuses on social robots, which recognize and respond to human social cues with appropriate behaviors. His human–robot interaction work has three interconnecting components: robots as models, robots as tools, and robots as technology. Scheutz and his research group model human behavior in robots. “Our focus is on natural language,” says Scheutz, “so we’re really interested in dialogue and the kinds of natural language processes that people use when they engage in normal coordinating activities.” The work involves videotaping people interacting, transcribing the interactions, and using the data to derive algorithms that will instruct the robot to model the behavior. It is very complicated work. “For example, people need acknowledgments, and the acknowledgments have to come at the right time,” says Scheutz. The robot needs to know when to acknowledge with “OK” and when to say “Yes” or “No”; it needs to know when to nod its head and when to look at something.
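The timing-sensitive acknowledgment behavior described above can be sketched as a simple decision rule. This is only an illustration of the idea, not the lab's actual algorithm; the utterance categories, the pause threshold, and the function name are all hypothetical.

```python
# Illustrative sketch: choosing an acknowledgment in dialogue.
# The categories and the 200 ms threshold are hypothetical, not DIARC's logic.

def choose_acknowledgment(utterance_type, pause_ms):
    """Pick an acknowledgment only at an appropriate pause boundary."""
    if pause_ms < 200:                        # speaker still talking: stay silent
        return None
    if utterance_type == "yes_no_question":
        return "Yes"                          # (or "No", depending on task state)
    if utterance_type == "instruction":
        return "OK"                           # confirm receipt of the instruction
    if utterance_type == "statement":
        return "nod"                          # nonverbal backchannel
    return None

print(choose_acknowledgment("instruction", 350))  # OK
```

Even a toy rule like this makes the research problem concrete: the hard part is learning, from transcribed human interactions, what the categories and timing windows should actually be.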
Once a robot can model a human behavior, researchers can use the robot as a tool to study that behavior. Human–human interactions are difficult to study because of the constantly changing variables, such as the verbal, facial, and bodily expressions of each person. “For us to be able to understand the timing here and study those aspects, we need some measurement tool that can adapt in the same way that you’re adapting to my speech in real time,” says Scheutz. “We can use robots for that.” For example, Scheutz and his research group have used robots as tools to investigate joint attention patterns, such as directing someone’s attention to an object. “If I want to attract your attention and direct it someplace, first I look at your eyes and watch to see if you look at me, then I can look at the target object, and then I watch you look there as well,” says Scheutz, who can test the time course of that interaction by having a robot interact with a person. “And then we can manipulate what the robot does and see what effect it has on the initiator of the attempt to direct attention.”
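The joint-attention sequence Scheutz describes (look at the partner's eyes, establish mutual gaze, shift to the target, watch the partner follow) can be written down as an ordered series of steps whose latencies a robot could log. The step names, event format, and timestamps below are invented for illustration; they are not the lab's measurement protocol.

```python
# Illustrative sketch of the joint-attention sequence described above,
# with hypothetical step names and millisecond timestamps.

JOINT_ATTENTION_STEPS = [
    "look_at_partner_eyes",    # initiator seeks eye contact
    "detect_mutual_gaze",      # wait until the partner looks back
    "look_at_target",          # shift gaze to the target object
    "confirm_partner_follows", # watch the partner look there too
]

def run_trial(observed_events):
    """Return per-step latencies (ms) if events occur in the expected order."""
    latencies, t_prev, idx = [], 0, 0
    for step, t in observed_events:
        if idx < len(JOINT_ATTENTION_STEPS) and step == JOINT_ATTENTION_STEPS[idx]:
            latencies.append(t - t_prev)
            t_prev = t
            idx += 1
    return latencies if idx == len(JOINT_ATTENTION_STEPS) else None

trial = [("look_at_partner_eyes", 200), ("detect_mutual_gaze", 650),
         ("look_at_target", 900), ("confirm_partner_follows", 1500)]
print(run_trial(trial))  # [200, 450, 250, 600]
```

Because the robot's side of the interaction is fully controlled, manipulating any one step (say, delaying the gaze shift to the target) changes exactly one variable, which is what makes the robot useful as a measurement tool.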
Robot technology is another component of the human–robot interaction work. “We are interested in autonomous robots,” says Scheutz, “so all the intelligence is on the robots—the robot makes all the decisions. You can instruct the robot in natural language to do something new that it has not done before; it can acquire new knowledge.” For such tasks as natural language processing and action representation, Scheutz and his research group use existing component algorithms and develop their own. “DIARC [distributed integrated affect reflection cognition] is the complex architecture that we’re building,” says Scheutz. “There’s middleware for robots and grid-based simulations that we use here as well—that’s called ADE [architecture development environment].” They have also started looking at robot ethics, which Scheutz says is needed; robots that don’t feel sympathy or empathy might be more likely to act in unethical ways. The group has begun human–robot experiments that examine ethical behavior and decision making.
The Scheutz lab also focuses on computational models and complex systems. The group works with neural network models, which are simulations of brain activity that are run on a computer or computer network. The models are based on empirical data and allow the group to test and refine hypothesized behavior mechanisms. They model both human and animal activities, and have used models to simulate groups of animals in specific environments, undertaking specific tasks, such as food collection. “Simulations allow you to do something that you can oftentimes not do in reality, and we look at a great variety of different conditions and contexts that you may not actually find anywhere,” says Scheutz. “You can run lots of simulations on high-performance computer clusters.” Scheutz has been working on a new ADE for grid-based simulations. “This is a massive enabling technology,” says Scheutz, “because typically modelers don’t know how to program, and even if they know how to program, they don’t know how to operate the clusters. The software that we developed basically takes all of that complication away.” Scheutz has posted this software on http://sourceforge.net, a repository of open source software.
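A group simulation of the kind mentioned above, such as animals collecting food in a specific environment, can be sketched as a minimal agent-based model on a grid. Everything here (grid size, agent count, the random-walk movement rule) is made up for illustration and is not the lab's software.

```python
# Minimal agent-based sketch of a grid-world foraging simulation, in the
# spirit of the group simulations described above. All parameters are
# illustrative; the lab's actual models are derived from empirical data.
import random

def simulate_foraging(width=10, height=10, n_agents=5, n_food=20,
                      steps=200, seed=42):
    rng = random.Random(seed)  # seeded so a run is reproducible
    food = {(rng.randrange(width), rng.randrange(height)) for _ in range(n_food)}
    agents = [(rng.randrange(width), rng.randrange(height)) for _ in range(n_agents)]
    collected = 0
    for _ in range(steps):
        moved = []
        for (x, y) in agents:
            dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            x = min(max(x + dx, 0), width - 1)   # random walk, clipped to grid
            y = min(max(y + dy, 0), height - 1)
            if (x, y) in food:                   # agent collects food at its cell
                food.discard((x, y))
                collected += 1
            moved.append((x, y))
        agents = moved
    return collected

print(simulate_foraging())
```

Sweeping such a model over many parameter combinations, and many random seeds, is exactly the kind of workload that the cluster software Scheutz describes is meant to make routine for modelers who don't program.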
“I’m always open to collaborators,” says Scheutz. “Right now I’m looking for people who want to collaborate on the modeling part—somebody who has an interesting problem, and I can think about it, and develop a model for it—that’s always something that I like to do.” He would also like to collaborate with a computational linguist to improve natural language processing, and with a specialist in linguistic pragmatics to develop principles of social language that could be implemented in robots. Scheutz says he is always interested in new students, and he expects the new interdisciplinary major in cognitive and brain sciences to further stimulate interest in the field.
For more information, please go to http://hrilab.cs.tufts.edu.