Social Intelligence in Humans and Robots
Workshop RSS 2022 - July 1st (Hybrid)
In-person location: 545 Mudd
Video recordings are available on our YouTube channel: link
Time (ET) | Session | Abstract |
---|---|---|
09:00 am - 09:15 am | Organizers: Introductory Remarks | |
09:15 am - 09:50 am | Victoria Southgate: Uniquely infant social intelligence | The classic view of very early cognition is that it is egocentric, and that sufficient cognitive control is required to overcome an egocentric bias. However, this view is difficult to reconcile with data accumulated over the last decade, indicating that infants readily adopt others’ perspectives, and do so despite limited cognitive control. In this talk, I will present a radically different view of infant cognition in which infants are predominantly altercentric and biased to encode information that is the focus of others’ attention, even at the expense of their own perspective. I argue that this is possible, in part, due to an initial absence of self-representation and propose that this bias will constrain and facilitate infant learning by allowing them to exploit others’ information selection at a time when their own ability to act on the world is limited. I will present recent empirical studies from my lab in which we have been testing the various hypotheses derived from this account. |
09:50 am - 10:25 am | Jakob Foerster: Zero-Shot Coordination and Off-Belief Learning | There has been a large body of work studying how agents can learn communication protocols in decentralised settings, using their actions to communicate information. Surprisingly little work has studied how this can be prevented, yet this is a crucial prerequisite from a human-AI coordination and AI-safety point of view. The standard problem setting in Dec-POMDPs is self-play, where the goal is to find a set of policies that play optimally together. Policies learned through self-play may adopt arbitrary conventions and implicitly rely on multi-step reasoning based on fragile assumptions about other agents' actions, and thus fail when paired with humans or independently trained agents at test time. To address this, we present off-belief learning (OBL). At each timestep, OBL agents follow a policy pi_1 that is optimised assuming past actions were taken by a given, fixed policy pi_0, but assuming that future actions will be taken by pi_1. When pi_0 is uniform random, OBL converges to an optimal policy that does not rely on inferences based on other agents' behaviour. OBL can be iterated in a hierarchy, where the optimal policy from one level becomes the input to the next, thereby introducing multi-level cognitive reasoning in a controlled manner. Unlike existing approaches, which may converge to any equilibrium policy, OBL converges to a unique policy, making it suitable for zero-shot coordination (ZSC). OBL can be scaled to high-dimensional settings with a fictitious transition mechanism and shows strong performance both in a toy setting and on Hanabi, the benchmark human-AI and ZSC problem. |
10:25 am - 10:40 am | Coffee Break | |
10:40 am - 11:15 am | Julian Jara-Ettinger: Institutional representations for machine social intelligence | Humans' unprecedented success in constructing and navigating complex social worlds is typically associated with our ability to build nuanced predictive models of others' minds: a mentalistic stance. I will argue that humans are endowed with a second intuitive theory: an institutional stance that structures social interactions in terms of institutional representations. In contrast to the mentalistic stance, which helps us predict and understand unconstrained behavior, the institutional stance shapes and regulates behavior so as to make it intelligible and predictable. I will argue that precursors of the institutional stance help explain non-human social behavior, supporting complex social interaction between conspecifics that lack complex models of each other's cognitive states. At the same time, the institutional stance is unique to humans in its generative capacity, supporting rapid construction, tracking, and inference of the institutional structures that shape our social world. This view suggests that uniquely human social cognition is best understood as an interplay between mechanisms for predicting others' behavior and institutional structures that make others' behavior predictable. |
11:15 am - 11:50 am | Contributed Talks 1 (Spotlight) | |
11:50 am - 01:20 pm | Lunch Break | |
01:20 pm - 01:55 pm | Georgia Chalvatzaki: Towards AI robotic assistants that learn from and for humans | Embodied AI robotic assistants are at the epicenter of modern robotics and AI research, with applications spanning domestic environments, hospitals, warehouses, and agriculture. Societal trends such as the growing elderly population and the unprecedented circumstances of the COVID-19 pandemic make intelligent robotic assistants more urgent than ever. In this talk, I will focus on problems I have addressed in recent years regarding human-centered robotic assistants that learn from humans and for humans. I will cover methods for human behavior understanding and intention prediction, and explain how we can learn skills from a few human demonstrations and improve them with experience. Moreover, I will discuss learning adaptive human-robot interactions that take human intentions into account, and finally, I will present our work on adding active safety constraints in HRI. |
01:55 pm - 02:30 pm | Julian De Freitas: Stigma Against AI Companions | Amid a ‘loneliness pandemic’, there has been a rise in ‘companion chatbot’ applications, designed for free-form social conversation that is non-judgmental and available 24/7. Yet we find a robust barrier to their adoption: stigma against friendships and relationships with AI, rooted in the intuition that these relationships are one-sided because AI companions cannot truly understand you. We explore interventions to overcome this barrier. |
02:30 pm - 03:05 pm | Claudia Pérez D’Arpino: Robot learning and planning for social navigation | |
03:05 pm - 03:20 pm | Coffee Break | |
03:20 pm - 03:55 pm | Contributed Talks 2 (Spotlight) | |
03:55 pm - 04:45 pm | Panel Session | |
04:45 pm - 04:55 pm | Organizers: Concluding Remarks | |
04:55 pm - 06:00 pm | Poster Session | In person: CS Lounge; Virtual: On Gather.Town [Link] |