“Failing at Human: Kinship Between Humanoid Robots and Autistics” – Chris van der Vegt
The third session of Transmission in Motion on the topic of robots in art and performance was opened with a short introduction by Professor Maaike Bleeker, in which she noted that developers have long aimed to make robots mimic human movement, speech, and behaviour as closely as possible. She proposed that, rather than have a robot pretend to be something it is not, it might be interesting to lean into the robot’s unique qualities. In this blog post, I want to question what we actually mean when we say ‘human-like’ and explore a kinship between robots and autistics in their struggle to perform ‘human’ behaviour.
When we talk about humanoid robots or artificial intelligence, we bring with us preconceived notions of what humanity and human behaviour look like. Among the criteria for ‘socially interactive robots’, as formulated by Hegel and colleagues (2009), are the abilities to express and understand emotions, hold eye contact, and understand social cues (3), all things that autistic people are less inclined to do. It is therefore not uncommon for autistics who don’t display these behaviours to be compared to robots. When we say ‘human-like behaviour’, we actually imply something narrower than the diversity of behaviours that naturally occurs in humans.
I was 22 when it was first brought to my attention that I might be a robot: a professional told me that I’m “a bit of a spectrum-girl”. I was diagnosed late in part because I’ve been fairly good at ‘masking’, hiding my autistic traits, even without knowing it. Regardless, I still feel like a malfunctioning robot sometimes. When I get tired or overwhelmed, I get glitchy. I struggle to make eye contact and my sentences become monotone and choppy. Sometimes social interactions feel like a Turing test, where I have to keep proving myself human enough.
While it may be dehumanising to some, I find joy in comparing myself to a robot, similar to how Susan Stryker (2013) found power in comparing herself to Frankenstein’s monster. Calling yourself the thing that has been used against you releases you from ‘normal’. It gives you permission to simply be. In that regard, I felt an incredible kinship with the ‘misbehaving’ robots shared by Ruowen Xu. There was one robot in particular whose performance resonated with me. It was programmed to imitate a human’s movements as closely as possible. When the robot felt like it didn’t do an adequate job, it would stop mirroring and go to ‘sleep’. I found it incredibly moving when we watched a video wherein the robot ‘gave up’ and the person who was supposed to be instructing the movements instead began to imitate the robot. It felt as if they were shifting authority to the non-human and letting them set the standard for that moment. Watching this interaction made me feel like robots like me may also be met with compassion, regardless of our ability to perform ‘human’.
Hegel, Frank, Claudia Muhl, Britta Wrede, Martina Hielscher-Fastabend, and Gerhard Sagerer. 2009. “Understanding Social Robots.” In 2009 Second International Conferences on Advances in Computer-Human Interactions, 1–7.
Stryker, Susan. 2013. “My Words to Victor Frankenstein above the Village of Chamounix: Performing Transgender Rage.” In The Transgender Studies Reader, edited by Susan Stryker and Stephen Whittle, 244–256. Routledge.