At the joint ACM/IEEE Human-Robot Interaction (HRI '24) conference in Boulder, CO, in March, a surprising answer to this behavioral problem was revealed: large language models (LLMs), researchers at Google DeepMind in Silicon Valley told delegates, should be able to provide the "rich social context" that allows robots to express themselves with appropriate human-friendly behaviors. As a corollary, they said, people will be more accepting of such a robot, making its mission all the more effective.
These "expressive behaviors," as DeepMind called them, are something that roboticists have actually been attempting to program into social robots for decades. Those attempts have largely failed, either because they tried to use exhaustive rules-based methods to predict and code up templates for every type of human-robot interaction, or because they tried to implement application-specific datasets for every kind of social situation the robot could be expected to encounter.
Neither approach has worked well, said Fei Xia, senior research scientist at DeepMind's Mountain View, CA, lab. Previously, he said, "Robot behaviors weren't exactly trained; rather, they were determined by predefined rules from professional animators or specialized datasets. These methods were extremely limiting, as the rules and data used to program robot behaviors couldn't be effectively transferred across environments and therefore required significant manual effort for each new environment."
So if a robot entered a new type of social situation, such as a noisy, crowded room where verbal communication did not work, the code could not generalize and "scale" to the new situation, requiring new code to be written. However, LLMs, trained on vast amounts of human knowledge from the Internet, hold the potential to work around that social problem. DeepMind's idea is to "leverage" the social context available from large language models, using it not only to generate appropriately expressive robot behaviors, but also to make those behaviors adaptive to new conditions.
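To make the idea concrete, here is a minimal Python sketch of what "leveraging" an LLM's social context for behavior selection might look like. Everything here is hypothetical, not DeepMind's actual system: the function names (`build_prompt`, `choose_behavior`), the behavior list, and especially `stub_llm`, which stands in for a real language model call with a crude keyword heuristic purely for illustration.

```python
# Hypothetical sketch: ask an LLM to pick a socially appropriate expressive
# behavior from context, rather than hand-coding rules for every situation.

def build_prompt(social_context: str, behaviors: list[str]) -> str:
    """Compose a prompt asking the LLM to choose an expressive behavior."""
    options = ", ".join(behaviors)
    return (
        "You are controlling a social robot.\n"
        f"Situation: {social_context}\n"
        f"Available expressive behaviors: {options}.\n"
        "Reply with the single most socially appropriate behavior."
    )

def stub_llm(prompt: str) -> str:
    """Stand-in for a real LLM endpoint; keyword heuristic for illustration."""
    if "noisy" in prompt or "crowded" in prompt:
        return "wave_arm"  # gesture instead of speaking
    return "speak_greeting"

def choose_behavior(social_context: str) -> str:
    """Map a described social situation to one of the robot's behaviors."""
    behaviors = ["speak_greeting", "wave_arm", "nod_head", "flash_lights"]
    prompt = build_prompt(social_context, behaviors)
    choice = stub_llm(prompt)
    # Guard against responses outside the robot's actual repertoire.
    return choice if choice in behaviors else "nod_head"

print(choose_behavior("a noisy, crowded room where speech cannot be heard"))
# → wave_arm
```

The point of the sketch is the shift the researchers describe: the social reasoning lives in the prompt-and-model loop rather than in hand-written rules, so a new situation only changes the `social_context` string, not the code.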