The Ethics of Ambiguous AI – Dominique Ubbels
Two weeks before TIM’s fourth session, “Social imaginaries of ethics and AI”, my friends and I started to obsess over ChatGPT, a chatbot recently launched by OpenAI. Our discussions tended towards the “dystopian-speculative” view, one of the common attitudes towards AI discussed by the speakers Sonja Rebecca Rattay, Irina Shklovski and Marco Rozendaal. The bot writes surprisingly adequate essays about almost every subject and seems to combine varying concepts creatively. What will happen to thought and writing in cultural and social studies if AI produces essays many times faster than the speed with which we take things in, transform them, and type them out as texts? We’ll lose our jobs, perhaps, but maybe worse: theory could lose the ambivalence inherent to the thought, bodies, and environments from which it emerges – becoming the product of an echo chamber that devours its own data, ruminates, and spits out its uncontestable digits. With this in mind, I wondered what essay ChatGPT would write about ethical AI. In 300 words, it summarized societal concerns about “biased algorithms”, “displacement of jobs”, and AI potentially serving “malicious purposes” (e.g. weapons development). To produce ethical AI – meaning “ensuring that it is used for the benefit of humanity and does not cause harm” – AI developers, companies, and governments should provide the means to halt biases, unemployment, and misuses of the technology. “By following ethical principles,” the bot concludes, “we can ensure that AI is a positive force for good in the world.”
ChatGPT perfectly describes the global discourses but obscures what “the benefit of humanity”, “harm”, or “good in the world” entail. When confronted, the bot explains tautologically that “AI should be developed in ways that have positive impacts on society as a whole and individual people”. Big implications are at play here: humanity/society consists of individuals that belong to one shared whole; AI is individuals’/society’s other but enacts positive/negative influence on them; though human actors, ultimately, are at the helm of AI’s effects. The claim simultaneously implies an ethics that presumes an absolute goodness, to be progressively materialized by all who work on and shape technologies towards that same end.
But isn’t it precisely the friction between our different bodies, environments, and attachments that makes ethics feasible? Though it does not, of course, account for the blurry influxes and effluxes between humans and technology so crucial today, this is what Simone de Beauvoir argued in The Ethics of Ambiguity. What makes us “free” is exactly that we’re not (nor are we the same); we’re, instead, limited by singular physiologies and socialities; constellations of being and change, moved by the constitutive limits of our projects and desires while tying us to others. Ethics, to de Beauvoir, is not about protecting the abstract contents of “goodness”, “individuals”, or “societies”, but an embrace of existence’s ambiguous and contingent infrastructures that destabilize selves along with their environments (6-14). When we talk about ethical AI, we usually don’t address the ambivalence within its functions and interactions with the world. Often, it’s designed to serve the same good – or rejected because it disturbs what was good. What is often neglected, though, is that whatever makes up the good is already socially mutated and translated into AI; and some beings leave the process more susceptible or vulnerable than others. What if we stopped looking at what “good” we want to produce or sustain and started designing AI infrastructures that neither shy away from nor fix, but transition from, errors? Maybe then AI becomes just a tool (e.g. for writing), instead of a displaced fantasy of “perfected” human intelligence – or of its opposite.
 “Influx and efflux” are concepts coined by Jane Bennett in her eponymous book, which “explores the experience of being continuously subject to influence and still managing to add something to the mix” (xiii); “the ‘float’ between impression and expression” (xvi); “streams within a more-than-human process” (xvii). They resist the conflation or total collapse of forms just as much as their fixed distinctiveness.
 Here I invoke (and add to de Beauvoir’s ethics) Lauren Berlant’s definition of “infrastructures” as used in On the Inconvenience of Other People. “Infrastructures […] appear as generative, multiple, and often contested processes involved in the substantive connections among people and lifeworlds” (19); “infrastructure is the living mediation of what provides the consistency of life in the ordinary; infrastructure is the lifeworld of structure” (20); “the […] mediation of the ongoingness of the ordinary, and the constant copresence of its intelligibility and creative generativity” (21); “[…] infrastructure as a loose convergence that lets a collectivity stay bound to the ordinary even as some of its forms of life are fraying, wasting, and developing offshoots […]” (25).
de Beauvoir, Simone. The Ethics of Ambiguity. 1948. Philosophical Library, 2015.
Bennett, Jane. Influx and Efflux: Writing Up with Walt Whitman. Duke University Press, 2020.
Berlant, Lauren. On the Inconvenience of Other People. Duke University Press, 2022.