December
“Essentialism and free labour – the other side of neural networks” – Irene Alcubilla Troughton
In his talk Using Neural Networks to Study Conceptual Shifts in Text and Image, Melvin Wevers presented a type of research that, relying on large corpora of advertisement data, employs computational tools in order to carry out its analysis. As numerous scholars have noted, the sheer amount and accessibility of data today poses new questions for academia and, especially, for humanities researchers. As our objects of study grow larger and larger in number, our work becomes increasingly intertwined with technological developments.
Nonetheless, as was mentioned during the lecture, such collaborations between the humanities and computer science are necessary not only in practical terms but also for the critical assessment of these tools. Wevers focused on neural networks that recognise and categorise images in archives of advertisements. These digital tools provide him with a fast and efficient way of detecting large-scale patterns in selected clusters of images, thereby contributing to his research.
This type of research, however, raised two questions during the lecture and the subsequent discussion that caught my interest. The first one relates to semantics. Computational tools, in their so-to-speak invisible speed, risk obscuring the cultural basis that underlies any kind of configuration. In this sense, the algorithms used for image categorisation rely on a deeply biased conception of what something is or is not. Essentialist semantics play a central role here, treating certain aspects as fundamental and others as arbitrary in order to recognise an image and assign it a particular meaning. Needless to say, such a method is efficient but inaccurate: our perception of reality falls along a continuum that exceeds simplistic categorisations, which are themselves embedded in strong cultural and social assumptions. As an example, Melvin Wevers mentioned how, because media representations privilege Caucasian faces, a neural network is more prone to have difficulty recognising an Asian face as such.
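To make this concrete, here is a minimal sketch of the mechanism at stake, assuming a hypothetical classifier with an invented three-label ontology (it illustrates discrete categorisation in general, not Wevers's actual tool): whatever the image, the output must be one of the labels decided in advance.

```python
# A minimal sketch: an image classifier can only answer with one of the
# labels it was given beforehand. The label set and the scores below are
# hypothetical, purely for illustration.
import math

LABELS = ["car", "bicycle", "person"]  # a fixed, pre-decided ontology

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    """Force a single discrete label, however ambiguous the input is."""
    probs = softmax(scores)
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    return LABELS[best], probs[best]

# An ambiguous image: the model is barely more confident in "car",
# yet the output collapses the continuum into one essential category.
label, confidence = classify([1.02, 1.00, 0.10])
print(label, round(confidence, 2))  # -> car 0.42
```

The point is structural: the continuum is lost before the model ever sees an image, at the moment the label set is fixed.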
The second problematic around this digital tool resides in its creation. The emergence of a useful neural network requires not only time but labour. Algorithms need to be trained on labelled data, and labelling is an activity that human beings must do manually. This obviously connects with the previous point concerning biases, but it also brings forth another question: who does this work? Part of the training of these algorithms relies on image classification that online users perform for free and almost unknowingly, as a gateway to other activities: for example, when someone is asked to select the pictures containing a car before continuing to the page they wanted to visit. One could still argue that by choosing to use the Internet, people voluntarily expose themselves to such an activity, which in any case does not seem compulsory. However, we should reconsider the invisibility under which these uses of free labour operate, as well as how the impossibility of skipping that step to reach one's destination makes the question of obligation a more complex one.
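The mechanism behind this free labelling can itself be sketched in a few lines, again with invented file names and answers rather than any real platform's data: users' answers to a challenge are aggregated into labels, and those labels become exactly the kind of supervised training set the previous paragraph describes.

```python
# A minimal sketch of how users' free clicks become training data. The
# challenge format and the answers are invented for illustration; this is
# the general mechanism, not any specific platform's implementation.
from collections import Counter

# Each entry: answers different users gave to "is there a car here?"
user_answers = {
    "img_001.jpg": ["car", "car", "car", "no car"],
    "img_002.jpg": ["no car", "no car", "car"],
}

def majority_label(answers):
    """Aggregate unpaid human judgements into a single 'ground truth' label."""
    return Counter(answers).most_common(1)[0][0]

# The resulting labelled set is what supervised training consumes: the
# manual work has been done, unpaid and largely unnoticed, by users.
training_set = {img: majority_label(ans) for img, ans in user_answers.items()}
print(training_set)  # -> {'img_001.jpg': 'car', 'img_002.jpg': 'no car'}
```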