Transmission in Motion

Documentation

AI and the Accessibility Tool – Pauline Munnich


In the seminar “Social Imaginaries of Ethics and AI” we discussed and explored the ethics of AI, focusing on four positions that tend to be taken when imagining future possibilities for AI. The four positions were constructed around two axes: a dystopian-utopian axis and a pragmatic-speculative axis.

During the first round of discussions, my group struggled to place ourselves in any one of these positions. One member of my group even wished, initially, to stand at the centre of the axes, able to identify with each position. That being said, when we were given a specific case study, the development of emotion recognition on Zoom, we quickly agreed that the technology should not be used, arguing that it was too invasive. We all placed ourselves in the position of systematic critique, agreeing that there was no need for such an AI and that all it did was encourage new forms of surveillance.

As we all seemed to agree that nothing good could come of this sort of AI, Sonja urged us to come up with potential benefits of using it. When discussing these potential benefits, however, we briefly entered another discussion that was unfortunately cut short due to time. One of my groupmates noted that emotion recognition could potentially benefit people with disabilities or neurodivergences that make it harder to read the emotions of others. Although we all agreed it could hold potential as an accessibility tool, we also noted that the same problem could just as well be solved through clear communication, and that relying on the tool would assume the technology works equally well for everyone.

Nevertheless, Loftis (2015) argues that disability can be seen as “one form of human diversity and that disability is only problematic in the context of a society that is not designed to accommodate difference” (18). Thus, although the technology could be adapted into an accessibility tool, this would not necessarily make technology more inclusive, nor would it resolve the normative discourse that coins labels such as “disability”. By refusing to accept difference as inherently human, that discourse creates the need for accessibility tools in the first place, through a lack of accommodation of these needs.

Perhaps this can be drawn out into a more general question of who is designing our AI and technology. What could we gain by making the discussion around AI ethics and design more inclusive? Do the assumptions about what counts as intelligent, and about who benefits from these technologies, already reflect and perpetuate the ideologies of the normative group? And do we fall into solutionism when we try to solve societal issues with AI and technology, potentially obscuring those issues by focusing on their symptoms instead?


Reference

Loftis, S. F. (2015). Imagining autism: Fiction and stereotypes on the spectrum. Indiana University Press.