Transmission in Motion

Saturday 25th May – Panels and Demonstrations

“Viewpoints as Vocabulary for Robots” – Michael Rau

There have been many instances of dancing robots and humans dancing with robots. Most of these performances have involved laborious programming or the mapping/mimicry of a human performer, and offer no room for improvisation. The modern dance system “the Viewpoints” is “a set of names given to certain principles of movement through space and time; [they] are points of awareness that a performer or creator uses while working” [1]. This methodology for generating a performance could also be repurposed as a framework for programming autonomous robots to improvise and interact as performers.

The Viewpoints was initially developed by Mary Overlie as a non-hierarchical system for improvised movement. There has been one exploration of a Viewpoints AI, but that work relies primarily on a human performer interacting with a Kinect and a projected image [2]. There have been no developments using an autonomous robot working within the Viewpoints framework. In this talk, I will outline the Viewpoints as defined by Overlie, Bogart, and Landau and provide a theoretical framework for their implementation in autonomous robots. Integrating the Viewpoints within an autonomous robot would allow human performers to generate improvised performances with robot performers, and could even open the possibility of fully improvised performances performed solely by robots.
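To make the proposed framework concrete, the sketch below shows one way a handful of the Viewpoints of time and space could be encoded as sampling parameters for an improvising robot. The parameter names, ranges, and sampling strategy are illustrative assumptions added for this program note, not an implementation drawn from the talk.

```python
import random

# Hypothetical sketch: a few Viewpoints of time and space (Bogart/Landau)
# encoded as parameter ranges an autonomous robot could sample from while
# improvising. Names and ranges are assumptions, not the speaker's system.
VIEWPOINTS = {
    "tempo": (0.2, 2.0),                 # speed multiplier for a movement
    "duration": (0.5, 6.0),              # seconds to sustain a movement
    "repetition": (0, 4),                # times to echo the previous gesture
    "spatial_relationship": (0.3, 3.0),  # preferred distance to a partner (m)
}

def improvise_step(partner_distance):
    """Sample one improvised 'move' as a dictionary of Viewpoint choices."""
    move = {
        "tempo": random.uniform(*VIEWPOINTS["tempo"]),
        "duration": random.uniform(*VIEWPOINTS["duration"]),
        "repetition": random.randint(*VIEWPOINTS["repetition"]),
    }
    # Kinesthetic response: react to the human partner's current proximity.
    preferred = random.uniform(*VIEWPOINTS["spatial_relationship"])
    move["approach_partner"] = partner_distance > preferred
    return move

if __name__ == "__main__":
    for _ in range(3):
        print(improvise_step(partner_distance=1.5))
```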

Notes

[1] Bogart, Anne, and Tina Landau. The Viewpoints Book: A Practical Guide to Viewpoints and Composition. New York, NY: Theatre Communications Group, 2005.

[2] Jacob, M., Coisne, G., Gupta, A., Sysoev, I., Verma, G., & Magerko, B. (2013). Viewpoints AI. In Proceedings of the Ninth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment.

Michael Rau is a live performance director specializing in new plays, opera, and digital media projects. He has worked internationally in Germany, Brazil, the UK, Ireland, and the Czech Republic. He has created work in New York City at Lincoln Center, The Public Theater, and PS122. He has been an associate director for Anne Bogart, Robert Woodruff, and Ivo Van Hove. He is a New York Theater Workshop Usual Suspect and an assistant professor of directing at Stanford University.


“Scripted or Improvised: Imitative Interactions with Therapeutic Robots” – Gohar Vardanyan

The paper addresses therapeutic robots and their ability to enhance social interactions and to adapt easily to changing surroundings. While therapeutic robots can be pre-programmed to mimic humans, in some cases researchers choose a more improvisational approach in which robots learn how to interact with their human counterparts. This means that robots are not only pre-programmed for specific actions but also change the flow of the interaction, causing the imitative patterns to shift from machine-human to human-machine. The question remains: do these robots actually “learn” during the process of interaction, or do they rather “teach” the human how to adapt to them? In other words, do robots imitate humans, or do humans imitate robots? When do robots follow the script, and when do they “write” a new one while improvising on the spot? Lucy Suchman even questions machines’ mimicking ability, asking whether “one day they might successfully mimic the capacity of the autonomous human subject” [1]. While several therapeutic robots will be presented, the main discussion will revolve around dance. In this context, the performances of actual robots such as NAO, Keepon, and Leonardo will be compared. Although these robots are pre-programmed to facilitate productive communication with people in need, their ability to perform can also be used and modified in other spaces and on other platforms. The change of space makes a significant difference in their responsive mechanisms, as programmed actions researched in laboratories do not necessarily fit the new environments of health care systems, research facilities, schools, or even theaters.

[1] Lucy Suchman, “Robot Futures,” Robot Futures, https://robotfutures.wordpress.com/.

Gohar Vardanyan is a student in the Media Arts Cultures joint MA program and is currently writing her master’s thesis at Aalborg University, Denmark. Her thesis is about therapeutic robots, which she is deeply interested in observing from a media art perspective.


“Reading the Air and Speaking Noises: Weak Humanly Communication in Hirata Oriza’s Robot Theatre” – Kyoko Iwaki

From the time of Greek tragedies, the ability to communicate logically has been considered one of the indispensable criteria for humanness. Man is a zôïon logicon, a rational animal, and all other zôïa are condemned as illogical. Already in the nineteenth century, however, Nietzsche warned against the mistaken tendency to assume that language reflects our ontological reality. In fact, as Karen Barad deduces from Nietzsche, the representationalist belief that words and language ‘reflect the underlying structure of the world is a continuing seductive habit of mind [that is merely] a Cartesian by-product’ (Barad, 2003: 801, 806).

Japanese theatre director Hirata Oriza, who is known as the co-creator of the world’s first robot theatre, agrees with Barad and argues that humanly communication consists mainly of two components: apt handling of contextual information and the ability to control the rate of redundancy in different dialogues, conversations, and chatters (Hirata, 2012: 62-106). From this premise, this paper specifically focuses on the pre/linguistic aspect of HRI and argues that, perhaps, if one wishes to design an anthropomorphized robot, one should aim not for logic and coherence but, paradoxically, for noise and redundancy. In other words, as robot scientist Okada Michio affirms, we should aim for an HRI that is more ‘inefficient’ and ‘weak’ in lieu of perfectly-pitched communication. To substantiate the argument, the paper will assess, in detail, Hirata’s two robot theatre productions: I, Who Works (Hataraku watashi, 2008) and The Cherry Orchard Android Version (2012).

Kyoko Iwaki is a JSPS Post-Doctoral researcher affiliated with Waseda University. She obtained a PhD in Theatre from Goldsmiths, University of London in 2017 and subsequently became a Visiting Scholar at The Graduate Center, CUNY. Her recent publications include chapters in Fukushima and the Arts: Negotiating Nuclear Disaster (Routledge, 2016), A History of Japanese Theatre (CUP, 2016), and The Routledge Companion to Butoh Performance (Routledge, 2018).

“The Algorithm as Intimate, Invisible Robot” – Sarah Lucie

Popular images of the robot as propagated by science fiction are now performing in real life, and on live stages. The most commonly staged robot, such as those performing in Robot Theater Project’s I, Worker (2008) and Lincoln Center’s After the Blast (2017), calls upon traditional science fiction images and definitions. This robot is commonly understood as an object that functions as an amalgam of technologies in order to interact with its environment automatically, either autonomously in real time or with some human manipulation enacted remotely in time and space. However, less visible robots, entities intangible and widely divergent from these popular images, are also already at work in the world. I argue that algorithms are robots—objects as defined through the lens of object-oriented ontology (OOO), and most certainly acting to great effect in the causal realm. Including less visible conceptions of robots such as the algorithm in our understanding of the robot genre will offer insight into the human relationship with these performing objects. The nature of the algorithm highlights the vast potentiality of this performing object, inviting us to glimpse objects’ potentiality beyond the human senses of time, cause and effect, and control. Does human subjectivity shift when we enter into intimate relationships with these intangible robots? This paper will consider these questions through an interrogation of Annie Dorsen’s “algorithmic theatre” in conjunction with writings on the posthuman, 21st-century media, and ecocritical theory.

Sarah Lucie is a Ph.D. Candidate in Theater and Performance at the Graduate Center, CUNY, and has an MA in Performance Studies from New York University. Her research interests include object performance and the nonhuman environment, ecocritical theory, and contemporary performance. Sarah is also General Manager of East Coast Artists.


“Homo Sentient AI: Investigating moments of shifting consciousness” – Petra Ardai, Rinske Bosma, Lisa Rombout, Ecaterina Grigoriev, Andrius Penkauskas & Bo Doan

Petra Ardai is an expert in immersive collaborative storytelling. Her conversational pieces dissect ‘wicked problems’ through the politics of the personal and by embedding the dilemmas in a parallel imaginary world. The participants are a group of people with markedly diverse expertise. Together they imagine their lives and choices in the realm of a shared fantasy.

The storytelling sessions usually catalyze extraordinary interactions. Moments where someone suddenly opens up, steps out of the comfort zone of pre-conditioned thinking patterns, makes a statement, provokes a discussion, or shows altruism are common in the safety of the imaginary. In these moments, affection, empathy, visionary thinking, and a high level of engagement arise between the participants and the theme.

Petra has been practicing interactive theatre and immersive collaborative storytelling for years. She has extensive experience in conducting in-depth interviews and designing culture-specific, flexible scenarios. Eager to gain a better understanding of all the information and experience she had gathered on human interaction and storytelling, she approached Tilburg University with the following questions: Can we measure and classify the diffuse knowledge gathered while people interact and engage in immersive collaborative storytelling? What can we learn about ourselves (humans) with the help of machines? Can AI teach us to become better humans? How can science and art join forces and accumulate deeper insights? Could we measure and raise impact?

From these questions, a unique collaboration emerged with Tilburg University. In this presentation, our interdisciplinary team will demonstrate and explain how we are researching the possibility of an art/science methodology that focuses on cathartic and catalyzing moments in inter-human communication. We will focus on the results of our first experiment and share the outlines of the full trajectory.

We are in the first stage of our performative research. At some point, we would like to involve machine intelligence in the storytelling by building a classification system for human social experiences. In the far future, perhaps an AI can join our conversations and catalyze magic moments of shifting consciousness. How does a social machine ‘behave’ if it is ‘raised’ or ‘socialized’ in the imaginary? How does it ‘think’ if it is fed only with the facts of fiction? What would a representation of AI in performance look like?
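Purely as an illustration of what such a classification system might eventually look like, the sketch below trains a standard classifier on windowed features of the kind gathered by the team listed below (heart rate, skin conductance, proximity, vocal characteristics). The data, labels, and feature choices are stand-ins, not results or methods from the project.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical sketch: classify annotated "magic moments" in a storytelling
# session from per-window physiological and behavioural features. All values
# below are synthetic placeholders.
rng = np.random.default_rng(0)
n_windows = 200
X = np.column_stack([
    rng.normal(75, 10, n_windows),    # mean heart rate (bpm)
    rng.normal(5, 2, n_windows),      # skin conductance level (microsiemens)
    rng.normal(1.2, 0.4, n_windows),  # distance to nearest participant (m)
    rng.normal(180, 40, n_windows),   # mean vocal pitch (Hz)
])
y = rng.integers(0, 2, n_windows)     # 1 = window annotated as a "magic moment"

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```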

Interdisciplinary team. From SPACE, Digest the Future project: Petra Ardai, artist and performer; Rinske Bosma, developer and musician. From Tilburg University, Department of Cognitive Science and Artificial Intelligence: Ngoc (Bo) Ðoàn, student (proximity sensing and vocal characteristics); Andrius Penkauskas, student (heart rate and skin conductance data); Ecaterina Grigoriev, student (psychometrics); Lisa E. Rombout, PhD candidate and lecturer (social embodiment); Dr. Maryam Alimardani, assistant professor (human-robot interaction); Paris Mavromoustakos Blom, PhD candidate (player behavior modeling); Dr. Martin Atzmüller, associate professor (social network analysis and wearables).


“Robot Challenges: Performing Artificial Intelligence” – Ulf Otto

In science as well as in art, in contemporary robotics and machine learning, robot challenges have recently proliferated: spectacular setups in which quasi-autonomous machines have to perform the feasibility of their algorithms. The background is the physical turn that led from ›classic‹ to ›modern‹ Artificial Intelligence and to a new ideal of research in the late ’80s. Instead of the chess player, as an incarnation of the rational Cartesian ego, the soccer player and his ability to deal with unknown, imprecise, and human environments became the new objective for the mechanical double. In consequence, machines have started to enter our lifeworld in unprecedented fashion, slowly taking over tasks that, on moral grounds, have so far been reserved for humans: playing with the children, killing the presumed enemy, or taking care of the elderly. Unlike the appearances of robots in past science fiction, the contemporary entrances of robots in the spectacular setups of science and art are not solely metaphorical manifestations of human fears and desires in the face of industrial civilization; they have to be discussed as attempts to enforce, negotiate, or challenge a new communality of humans and machines. Drawing on examples from art as well as science, the paper discusses the performativity of the robot in the context of digital cultures.

Ulf Otto, Dr. phil., is Professor of Theatre Studies and Intermediality at Ludwig-Maximilians-University Munich and holds a BSc in Computer Science. His areas of research include the interconnections of theater history and the history of technology, the theatricality of digital cultures, gestures and genealogies of reenactments, and media performances in contemporary theatre. Recent publications: Theater als Zeitmaschine. Zur performativen Praxis des Reenactments, ed. w. J. Roselt (2012); Internetauftritte. Eine Theatergeschichte der neuen Medien (2013); Auftritte. Strategien des In-Erscheinung-Tretens in Künsten und Medien, ed. w. M. Matzke u. J. Roselt (2015). Current research projects deal with the electrification of theatre and the theatricality of electricity at the end of the 19th century, the politics of representation in German theater, and the art of rehearsal.

“Mediating Human-Robot Performance through Virtual Environments” – John McCormick, Adam Nash, Stephanie Hutchison, Kim Vincs

In technology-based performance, it is not unusual for multiple media to contribute to the liveness of the interactions occurring on stage. Projections, soundscapes, and interactions between human and robot performers, which may be pre-recorded or generated in real time, are typically integrated to form complex multi-modal systems. In the case of real-time interaction and generation of movement, sound, and image, the array of programs in use, and the coordination and synchronization of these elements, can be a great challenge. In this presentation, we demonstrate and reflect upon examples of the use of virtual environments as a means of collating the actions of human and robot performers, and of generating unique soundscapes, movement, and 3D projections that emerge from the interactions of the performers.

The use of simulated environments to manage robot behavior is not new. However, with the maturing of virtual and augmented environment development platforms such as Unity and Unreal Engine, real-time digital environments can be readily incorporated into live performance, extending the actions of the performers. Examples of theatrical works including Pinoke, Child in the Wild, Eve of Dust, City of Androids and Neuron Conductor will be used to illustrate the use of virtual environments as a mediating platform for human/robot performance.

These practices also extend the boundaries of the performance arena. When this kind of virtual environment is used, all elements of the environment can reach out to the internet for contextual information and contact with remote performers and cloud-based AI, to give the robots greater scope in their artistic expression.
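As a rough illustration of this mediating role, the sketch below models a minimal ‘virtual scene’ that collates the states of human and robot performers and derives a robot target and a soundscape parameter from them. In the works described above this role is played by an engine such as Unity or Unreal; the class and function names here are hypothetical, not the authors’ system.

```python
from dataclasses import dataclass, field

@dataclass
class PerformerState:
    name: str
    position: tuple  # (x, y, z) in shared scene coordinates

@dataclass
class VirtualScene:
    """Collates human and robot states; other systems read from it."""
    performers: dict = field(default_factory=dict)

    def update(self, state: PerformerState):
        self.performers[state.name] = state

    def target_behind(self, human_name, offset=0.8):
        """Place the robot a fixed offset behind the tracked human."""
        x, y, z = self.performers[human_name].position
        return (x - offset, y, z)

    def soundscape_params(self):
        """Derive a simple sound parameter from how spread out performers are."""
        xs = [p.position[0] for p in self.performers.values()]
        return {"reverb": max(xs) - min(xs) if xs else 0.0}

scene = VirtualScene()
scene.update(PerformerState("dancer", (2.0, 0.0, 1.0)))  # e.g. from motion capture
scene.update(PerformerState("robot", (0.0, 0.0, 1.0)))   # e.g. from robot odometry
print(scene.target_behind("dancer"), scene.soundscape_params())
```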

John McCormick is a technology-based artist with a major interest in movement. John is currently a lecturer and researcher at the Swinburne University of Technology where he investigates artistic practice in mixed reality environments, robotics, artificial intelligence, and human action. Current work explores human-robot interactions in mixed reality environments.

Adam Nash is an artist, composer, programmer, performer, and writer who works in virtual environments, distributed audiovisual performance, data/motion capture, generative platforms, and artificial intelligence. His work has been presented all over the world, including at SIGGRAPH, ISEA, ZERO1SJ, and the Venice Biennale. He is Associate Dean of Digital Design in the School of Design, RMIT University.

Professor Kim Vincs is Co-Director of the Centre for Transformative Media Technologies at the Swinburne University of Technology. She is a choreographer, digital artist, and dance/technology researcher with six Australian Research Council grants, more than 40 industry partnerships, and commercial motion capture credits for several computer games, television commercials, and film projects.

Steph Hutchison is a choreographer, performer, and artist-researcher. At QUT, she is a dance academic and a leader of Experimental Creative Practice research for the Creative Lab. Her practice is driven by dance and by collaborations with robotics, motion capture, animation, haptics, and artificially intelligent performance agents.


“A State of Interdependence – Robots on the Contemporary French Stage” – Julia Dobson

As we approach the hundredth anniversary of the first use of the word ‘robot’, which occurred in Čapek’s modernist play ‘Rossum’s Universal Robots’ (1920), it is timely to address the evolving presence of robots in theatre and performance. A shift of regard moves us from classic science fiction tropes of alienated labor and the monstrous ‘other’ towards a recognition of co-presence and of the assistive and other robots expected to arrive within the next decade.

Within the context of French performance, this paper will address the diverse presence of robots on the contemporary stage, from the use of a production-line robot in Aurélien Bory’s ‘Sans objet’ to the use of an android in Aurélie Ivan’s work L’Androïde [HU#1]. Evoking parallels between the practices and research questions presented by performers, puppeteers, and roboticists in different configurations of performance, including the constraints of material and processes of experimentation, the paper will echo Julie Sermon’s call for recognition of a more dialectical relationship between human actors and robots on stage, one which foregrounds co-action, delegation, and relational agencies [1]. The paper will focus on the co-presence and interdependence of humans and robots on stage.

Notes

[1] Julie Sermon, ‘I, Robot…You Puppet’, Artpress 38 (2015): La marionnette sur toutes les scènes, pp. 19-24 (p. 20).

Julia Dobson has published widely on contemporary French film and performance. Her current book project, ‘Performing Objects’, addresses the impact of performing objects (puppets, dolls, robots, machines) on stage. In a world of smart objects, artificial intelligence, and enhanced bodies, the uncanny materiality and ambiguous agency of performing objects reveal contemporary renegotiations of what it means to be (human).


“‘O Romeo, Romeo, then Let Us Cull Thy Numbers!’ and Other Thoughts about the Displacement of Human Creativity” – Dennis Jansen

An unexpected encounter with a robotic production of Romeo and Juliet has led me to questions about the purpose of robots in theatre performance, and perhaps about the involvement of robots in creative processes in general. In the scene, three Romeos and three Juliets perform a twisted and alienating version of the famous “Wherefore art thou Romeo?” passage, which results in a machinic bloodbath (PlatinumGames 2017; cf. Gerrish 2018). Even though the actors and audience are all digital simulations of robots, a vision of humanity remains clearly present in this scene—the human seems inerasable from artistic practice. My concern is therefore not with a perceived ‘replacement’ of human creativity by nonhuman creativity, but with the function of robotic involvement in performances and other artforms. What is the goal of using robots in theatre and other performances? Are they just there to demonstrate or comment on our posthuman intertwinement with technology (cf. Klich 2012; Gemeinboeck and Saunders 2013), or is there more to them? I draw on a number of other examples, including Shimon, the interactive robotic marimba player (Hoffman and Weinberg 2011), and the final chapter of the recent Dutch re-release of Asimov’s I, Robot, which was co-authored by a human and a robot (Giphart and Asibot 2017). I argue that such robocentric reinterpretations of human culture offer a deliberate defamiliarization from ‘human’ artistic practice by giving their own interpretation of anthropocentric stories, and thereby effectively destabilize and displace human creativity as the only form of ‘true’ creativity.

References

  • Gemeinboeck, Petra, and Rob Saunders. 2013. “Creative Machine Performance: Computational Creativity and Robotic Art.” In Proceedings of the Fourth International Conference on Computational Creativity, edited by Mary Lou Maher, Tony Veale, Rob Saunders, and Oliver Bown, 215–19. Sydney: University of Sydney.
  • Gerrish, Grace. 2018. “NieR (De)Automata: Defamiliarization and the Poetic Revolution of NieR: Automata.” In DiGRA Nordic ’18: Proceedings of 2018 International DiGRA Nordic Conference. http://www.digra.org/wp-content/uploads/digital-library/DiGRA_Nordic_2018_paper_32.pdf.
  • Giphart, Ronald, and Asibot. 2017. “De robot van de Machine is de mens.” In Ik, robot, by Isaac Asimov, translated by Leo H. Zelders, 256–71. Amsterdam: Meulenhoff Boekerij.
  • Hoffman, Guy, and Gil Weinberg. 2011. “Interactive Improvisation with a Robotic Marimba Player.” Autonomous Robots 31 (2–3): 133–53. https://doi.org/10.1007/s10514-011-9237-0.
  • Klich, Rosemary. 2012. “The ‘unfinished’ Subject: Pedagogy and Performance in the Company of Copies, Robots, Mutants and Cyborgs.” International Journal of Performance Arts and Digital Media 8 (2): 155–70.
  • PlatinumGames. 2017. NieR:Automata. [PC]. Square Enix.

Dennis Jansen is an RMA student of Media, Art, and Performance Studies at Utrecht University. His research interests are mainly in game studies and cultural analysis, with additional interests in media fandom and digital culture. He currently focuses on embodiment and social justice in videogames and other digital media.

Altera: Affect and the Non-Human Agent – David McLellan

In this experimental workshop, visitors are invited to interact with a flexible robotic toroid form which is placed on the participant’s body. The robot senses the participant’s physiological changes through biosensors and responds with its own language, writhing and pulsating, creating an intimate exchange in which the participant becomes integral to the robot’s performance. Through this human-robot symbiosis, the participant’s primal psychological states, such as fear, joy, anxiety, and revulsion, can influence the robot’s behaviour. This evolving interplay reveals the body’s capacity to affect and to be affected.
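As a hedged illustration of the kind of mapping such an exchange implies, the sketch below turns two common biosensor readings into a writhing/pulsating actuation signal. The signal names, ranges, and weighting are assumptions; the installation’s actual sensing and control pipeline is not described in this note.

```python
import math

def arousal_index(heart_rate_bpm, skin_conductance_us):
    """Combine two roughly normalized signals into a 0-1 arousal estimate."""
    hr = min(max((heart_rate_bpm - 60) / 60, 0.0), 1.0)      # 60-120 bpm -> 0-1
    sc = min(max((skin_conductance_us - 2) / 10, 0.0), 1.0)  # 2-12 uS -> 0-1
    return 0.5 * hr + 0.5 * sc

def actuator_command(t, heart_rate_bpm, skin_conductance_us):
    """Pulsation rate and writhe amplitude both scale with estimated arousal."""
    a = arousal_index(heart_rate_bpm, skin_conductance_us)
    pulsation_hz = 0.5 + 2.0 * a
    amplitude = 0.2 + 0.8 * a
    return amplitude * math.sin(2 * math.pi * pulsation_hz * t)

print(actuator_command(t=0.1, heart_rate_bpm=95, skin_conductance_us=7.5))
```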

Equally, through this transference of affect the robot could be considered haunted or possessed. This in turn leads to the ‘uncanny’: animate and inanimate objects become confused when objects behave in a way which imitates life. Drawing on Sigmund Freud’s notion of the uncanny and Jacques Lacan’s theory of the mirror stage, the representation of the self outside itself, the robot becomes a double whose mirroring causes a loss of distinction between self and other. This witnessing of the self as other also connects to the idea of the abject through Georges Bataille and Julia Kristeva.

This symbiosis brings to light questions surrounding hybrid embodiment, alternative modalities of being in the world, and notions of the cyborg, providing ways of imagining different ecologies and modes of interconnectedness between the nonbiological, the nonhuman, and the human.

David McLellan is a practice-research PhD candidate at the University of Plymouth. His main areas of research are concerned with artistic practices that employ robotic technologies to explore novel forms of human-robot interaction. His doctoral research centres on the use of theatre and performance art as a site for investigating perceptions of ‘aliveness’ and agency within non-anthropomorphic interactive robotic installations and performances in relation to affect and the uncanny.

Pottery Bot

Do you remember the famous pottery scene from the 90s movie ‘Ghost’, in which Demi Moore and Patrick Swayze mold their love together into a slowly coalescing pot? Whether you think it is the climax of nineties romance or find it to be shallow drivel, the scene is easily recognizable by an entire generation. SETUP is going to put this famous scene into action in our next installation.

How will we do this? Pottery 2.0 – traditional pottery, together with a robot. The user takes the role of either Demi or Patrick and will either learn from the robot how to mold a vase or teach the robot to do so. Not only will they make something together, but the process will also redefine the relationship between human and machine.

The fusion between human and machine has become more fact than fiction, for example through companies such as Elon Musk’s Neuralink, which are building a future as wild as the famous sci-fi stories of the past. Most media attention rests on the assumption that machines are becoming more like us. The often-ignored antithesis is that we are slowly becoming more and more like machines.

This installation will be created in conjunction with Casper de Jong, an artist who specializes in installations and robotics, and who tries to discuss heavy subjects through theatrical narratives that raise a smile.

SETUP is a Utrecht-based media lab, established in 2010. Combining academic backgrounds with artistic practice, we strive to foster a more enriched, critical stance towards emerging technologies. Our main goal is to educate a wide audience, providing them with the tools necessary to design this brave new world and infuse it with human values and new-found creativity.