Transmission in Motion


“With a little help from animated friends” – Max Peters

One of the main challenges in coordinating robot behavior, as shown by Roos van Berkel and Emilia Barakova in their lecture, centers on letting robots respond to movement and emotion, both essential to human behavior. They demonstrated a variety of approaches, ranging from gaze matching to Laban Movement Analysis, and it quickly became clear how meticulously, precisely, and carefully their work is executed. The patience to constantly program, test, and report on both robotic development itself and its interaction with humans is something to be greatly admired. In one of their articles, Van Berkel and Barakova experiment with Laban Movement Analysis and try to connect it to different ways of hand-waving and playing the guitar. In all cases, human actors emulate and act out an emotion in their movement, which the robots are supposed to recognize and subsequently respond to appropriately. In the concluding section of the article, they remark that

“an important design aspect of each humanoid robot is how closely human movement can be emulated on it. The emulation is restricted by the understanding of the physical limitations of the robots, and the mechanisms that cause the movement behavior”.

What captures me in this research, taking into account my non-existent knowledge of its technicalities, is how they observe limitations in the research based on the degree to which the actual people participating in the experiments are able to convey the emotions that the robots are supposed to recognize. These people are performers: they are acting and in fact ‘emulating’ emotions and movements themselves. Spontaneous action is harder to capture, mostly because ethical considerations prevent the creation of an environment in which the registered behavior is purely genuine rather than acted out. Therefore, I wonder whether state-of-the-art animation technology could assist in this process. Motion-capture technology, for example, is capable of capturing weight, posture, and movement in a very lifelike way, while at the same time providing reference points that could, possibly, be connected to the robot’s computed ‘brain’. The same digital connection could cross over from computer-generated characters, where an immense range of anchor points controls facial expressions. I am well aware that this suggestion adds another layer of artificiality and illusion, yet I think that the possibility of connecting digital animation processes to the programming of a robot could lead to a clearer framework of emotional recognition. After all, a robot is programmed and does not act out of emotional intention. Therefore, the recognition of emotional states could, in my view, just as well come from studying complex animated simulations. Or at least, it could lead to interesting experiments…
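To make the connection between motion-capture reference points and a robot’s physical limitations a little more concrete, here is a minimal sketch of what such a bridge might look like: captured human joint angles are “retargeted” onto a robot whose joints move through a narrower range. All joint names and limit values below are illustrative assumptions of mine, not taken from any real robot, mocap system, or from Van Berkel and Barakova’s work.

```python
# Hypothetical sketch: retargeting motion-capture joint angles onto a robot.
# The robot's joints cannot match the full human range, so each captured
# angle is clamped to the robot's physical limits -- echoing the point that
# emulation is restricted by the robot's mechanics.

# Per-joint limits of the (hypothetical) robot, in degrees.
ROBOT_LIMITS = {
    "shoulder_pitch": (-90.0, 90.0),
    "elbow_flex": (0.0, 120.0),
    "wrist_roll": (-45.0, 45.0),
}

def retarget(mocap_angles):
    """Clamp captured human joint angles to the robot's reachable range."""
    targets = {}
    for joint, angle in mocap_angles.items():
        lo, hi = ROBOT_LIMITS[joint]
        targets[joint] = max(lo, min(hi, angle))
    return targets

# A human wave can exceed what the robot's arm can physically do:
frame = {"shoulder_pitch": -110.0, "elbow_flex": 95.0, "wrist_roll": 60.0}
print(retarget(frame))
# {'shoulder_pitch': -90.0, 'elbow_flex': 95.0, 'wrist_roll': 45.0}
```

In a real pipeline, a frame like this would arrive many times per second from the capture system, and the clamped (or otherwise distorted) result is precisely where some of the expressive quality of the original movement can be lost, which is one way to read the limitation the authors describe.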