Social Intelligence in Humans and Robots
RSS 2023 Workshop - July 9, US Eastern Time / July 10, Korean time
In-person location: 322B
Video recordings are available on our YouTube channel: link
Time (US Eastern Time, GMT-4) | Time (Korean time, GMT+9) | Session |
---|---|---|
07:50 pm - 08:00 pm, July 9 | 08:50 am - 09:00 am, July 10 | Organizers: Introductory Remarks |
08:00 pm - 08:35 pm, July 9 | 09:00 am - 09:35 am, July 10 | Mark Ho, "Cognitive Science as a Source of Design Principles for Interactive Machine Learning". Abstract: Computational cognitive science provides fundamental insights into how humans learn, think, decide, and interact, and serves as a key bridge between the behavioral sciences and engineering disciplines. There are two broad ways in which cognitive science can inform the development of machines with human-like social intelligence. The first is to reverse-engineer human social intelligence itself, for example by characterizing socio-cognitive processes such as theory of mind, communication, and cooperation. The second is to develop more accurate and principled theories of general cognition and decision-making, for instance by incorporating notions of limited computational capacity into models of rational planning and action. I will discuss how these two approaches are complementary and how research on explaining and engineering social intelligence can be informed by their interaction. |
08:35 pm - 09:10 pm, July 9 | 09:35 am - 10:10 am, July 10 | Scott Niekum, "Models of Human Preference for AI Alignment". Abstract: Human preference elicitation is now at the core of some of the most successful contemporary approaches to reinforcement learning, imitation learning, and AI alignment, in applications ranging from robotics to language modeling. However, such approaches typically rely on highly questionable assumptions about the meaning of human preferences, for example that a preference between two trajectories implies that one has a higher ground-truth return (sum of rewards) than the other. If the fundamental interpretation of preferences is flawed, then all of these promising approaches will also fall short, or even mislead practitioners into overestimating the alignment of AI agents. This talk will highlight some of these poor assumptions, as well as recent approaches for improving them, enabling a better understanding of human values and of AI alignment with those values. |
09:10 pm - 09:45 pm, July 9 | 10:10 am - 10:45 am, July 10 | Yang Wu, "Emotion as Information". Abstract: Adults' interactions with young children are full of emotion: we smile, laugh, frown, and act surprised and delighted. How do children understand these expressions, and what roles do they play in human learning? Although emotional expressions are commonly considered indicators of how people feel, I propose that they are a powerful source of information that even very young children can use to reason about the world. Based on a computational framework, I will first show that children can harness emotional expressions to recover hidden aspects of the physical world, such as how a toy works, guiding early learning and exploration. Second, I will demonstrate that children can use emotional expressions to draw inferences about what others think and want, supporting a sophisticated understanding of the social world. Finally, building on the findings that humans think and learn by using emotion as information, I will discuss the implications of this research for artificial intelligence. |
09:45 pm - 10:00 pm, July 9 | 10:45 am - 11:00 am, July 10 | Coffee Break |
10:00 pm - 10:35 pm, July 9 | 11:00 am - 11:35 am, July 10 | Henny Admoni, "Eye Gaze as Indicator of Mental States for Human-Robot Collaboration". Abstract: In robotics, human-robot collaboration works best when robots are responsive to their human partners' mental states. Human eye gaze has been used as a proxy for one such mental state: attention. While eye gaze can be a useful signal, for example enabling intent prediction, it is also a noisy one. Gaze serves several functions beyond attention, and thus recognizing what people are attending to from their eye gaze is a complex task. In this talk, I will discuss our research on modeling eye gaze to understand human attention in collaborative tasks such as shared manipulation and assisted driving. |
10:35 pm, July 9 - 12:30 am, July 10 | 11:35 am - 01:30 pm, July 10 | Lunch Break |
12:30 am - 01:05 am, July 10 | 01:30 pm - 02:05 pm, July 10 | Yukie Nagai, "Predictive Coding Theory for Social Intelligence". Abstract: Predictive coding is a neuroscience theory that suggests the human brain functions as a predictive machine. The brain continually generates predictions about the world and minimizes prediction errors by updating internal models and taking actions in the environment. My research group has been exploring the potential of neural network models based on predictive coding. Our primary objective is to investigate the extent to which these models enable robots to acquire social intelligence. In this presentation, we will showcase our robot experiments, which demonstrate the successful development of social cognitive functions such as imitation, intention and emotion recognition, and altruistic behavior. Neural networks with multimodal predictive processing have enabled robots to acquire internal models for goal-directed actions and emotion expression, which can be further used for interpreting the internal states of others. Furthermore, our experiments have revealed that modified predictive processing can lead to individual diversity, mirroring observations in humans. We will discuss how our neuro-inspired approach contributes to understanding human cognitive development and how it enhances interactions between humans and robots. |
01:05 am - 02:00 am, July 10 | 02:05 pm - 03:00 pm, July 10 | Contributed Talks |
02:00 am - 02:05 am, July 10 | 03:00 pm - 03:05 pm, July 10 | Organizers: Concluding Remarks |
02:05 am - 02:30 am, July 10 | 03:05 pm - 03:30 pm, July 10 | Coffee Break & Poster Session |
02:30 am - 03:00 am, July 10 | 03:30 pm - 04:00 pm, July 10 | Poster Session (Gather.Town virtual poster room: Link) |