Fear of the Artificial Other: A Dystopian-Speculative Approach to AI – Chris van der Vegt


In the fourth Transmission in Motion seminar on ethics and artificial intelligence (AI), we were asked to pick one of four positions on AI governance, laid out along two axes: utopian-dystopian and pragmatic-speculative. Based on where we placed ourselves on this grid, we were sorted into groups to discuss our perspectives. In my group, the fourth quadrant became the main point of discussion. This position, titled ‘System Critique’, was the dystopian-speculative one: it saw AI as embedded in unequal power structures and therefore in need of structural critique, if not outright abolition. The people who chose ‘System Critique’ made a strong argument that AI has to be viewed in its broader societal context, with careful consideration of who benefits from the technology but also who might be victimised by it. One point we ran up against was the abolitionist approach that the presenters of the session had incorporated into this quadrant. Those who chose position four did not argue that AI had to be eradicated, but primarily that the potential harms should be considered before anything else. I found myself wondering what unexplored arguments there might be for stopping the development of AI altogether.

I was reminded of an article I had read during my BA for a course on the ethics of technology. In “The Vulnerable World Hypothesis” (2019), philosopher Nick Bostrom considers the possible invention of a catastrophically destabilising technology. Bostrom compares human creativity to an urn from which we draw balls, the balls being feasible ideas, discoveries, and technological inventions. He poses that up until this point, most of the balls have been white (beneficial), while some have been grey (causing harm to some). His concern lies with the possibility of a black ball: “a technology that invariably or by default destroys the civilization that invents it” (455). Bostrom posits that if there is a black ball in the urn, we are likely to stumble upon it at some point in the future, and that once it is out, there will be no way to put it back. This possibility is what he calls the ‘Vulnerable World Hypothesis’.

I tried to imagine AI as a black ball: the invention that we lose control of and that will inevitably destroy us. There are plenty of sci-fi stories that explore this fear, the eldritch horror fantasy that we may create something that regards us as insignificant and bothersome, merely a hiccup on the AI’s path toward actualising its goal. It was a scenario so far along the dystopian axis that we did not discuss it in our session. No one argued that AI should be abolished, or that its development should be halted, despite the risks attached.

I have to admit that I don’t particularly share the fear that AI may one day wipe us out. It seems more likely to me that AI will be another collection of grey balls, benefiting some while harming others. That is why I ended up in the System Critique quadrant despite starting out as a pragmatic utopian. There is no denying that AI is being developed in an imperfect world and will produce imperfect outcomes, particularly for those who have less power to begin with. Even if AI won’t be the end of us all, it is important to consider the potential risks. Even the small catastrophes are worth preventing.


References

Bostrom, Nick. 2019. “The Vulnerable World Hypothesis.” Global Policy 10 (4): 455-476. https://doi.org/10.1111/1758-5899.12718.