Making eye contact with a robot: CMU investigates how to make robots understand social cues


Movie robots – R2-D2, WALL-E – are adorably empathetic to humans. Real robots, however, may be whizzes on the assembly line, but not so great around the water cooler.
 
Now a Carnegie Mellon Robotics Institute team is investigating how to make robots more socially intuitive. “The way we interact with machines and computers is as tools – the way we interact with a hammer,” says Yaser Sheikh, a CMU professor.
 
Sheikh, Hyun Soo Park, a Ph.D. student in mechanical engineering, and Eakta Jain, a recent robotics Ph.D. graduate, gathered data from head-mounted video cameras worn by groups of people and developed algorithms to detect precisely where their gazes converged. What the individuals in a group are looking at typically identifies something of interest or helps delineate social groups – insights that could someday allow vision-aided robots designed to interact with humans to evaluate a variety of social cues, such as facial expressions and body language.
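The geometry behind gaze convergence is straightforward to illustrate. Below is a minimal Python sketch – not the team's actual algorithm – that assumes each wearer's 3D head position and gaze direction have already been recovered, and estimates the shared point of attention as the 3D point nearest, in the least-squares sense, to all of the gaze rays:

```python
import numpy as np

def gaze_convergence_point(origins, directions):
    """Least-squares estimate of the 3D point closest to a set of gaze rays.

    origins:    (N, 3) array of head/camera positions
    directions: (N, 3) array of gaze direction vectors
    """
    # Normalize gaze directions to unit length.
    units = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    # For each ray, the projector (I - u u^T) extracts the component of
    # (p - o) perpendicular to the ray, i.e. the point-to-ray distance.
    # Summing the normal equations over all rays yields a 3x3 linear system.
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, u in zip(origins, units):
        P = np.eye(3) - np.outer(u, u)
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Hypothetical example: three viewers all looking toward the point (1, 1, 0).
origins = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 2.0, 1.0]])
directions = np.array([1.0, 1.0, 0.0]) - origins
print(gaze_convergence_point(origins, directions))  # ~ [1. 1. 0.]
```

In practice a single ray per person is noisy, and a group may split its attention among several targets, so a working system would also need to cluster the rays and handle outliers; the sketch shows only the core triangulation step.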
 
The team expects that its research could have applications in the study of social behavior, such as group dynamics and gender interactions, and in the research and diagnosis of behavioral disorders such as autism – “imaging” behavior in much the way that X-rays and MRIs image the physical body. Other potential applications include search-and-rescue operations, surgical teams and even sports. (If team members all wore head-mounted cameras, it might be possible to reconstruct a game from their collective point of view.)
 
The work, Sheikh says, ties into that of CMU professor Takeo Kanade and the anticipated release of Google's computerized eyeglasses. Several companies have shown interest in commercializing the research, which has so far been supported by the Samsung Global Research Outreach Program, Intel and the National Science Foundation.
 
Sources: Yaser Sheikh and Hyun Soo Park, CMU Robotics Institute

Writer: Elise Vider
