After Explainability: AI Metaphors and Materialisations Beyond Transparency
Event Details
Date: 17–18 June 2024
Venue: ITeG, Pfannkuchstr. 1, 34121 Kassel & online
Registration: Please register online to participate.
Contact: Goda Klumbytė
The venue details and the virtual link will be sent to registered participants by email.
The workshop language is English.
Workshop Description
In the last few decades, the explainability and interpretability of AI and machine learning systems have become important societal and political concerns, exemplified by negative cases of “black box” algorithms. Often predicated on the ideal of transparency of algorithmic decision-making (ADM) systems and their mechanisms of inference, explainability and interpretability become features of ADM systems to be designed incrementally during development or addressed post hoc through various explanation methods and techniques. These features are also implicitly or explicitly connected to the ethics of algorithms: as the logic goes, more transparent, interpretable, and explainable systems enable the humans who use them to make better decisions and lead to more trustworthy AI applications. In this sense, the capacity for ethical interaction with AI rests on the understandability of such systems, which in turn requires these systems to be transparent and/or interpretable enough to be explainable.
Even though the explainability and transparency of AI can indeed contribute to greater agency on the users’ side, both are often defined in narrow, technical terms. Moreover, explanations might illuminate how a system generates inferences (for instance, by demonstrating which variables contribute most to a decision), yet they might not address the broader social, political, and environmental effects of such systems. Furthermore, explainability and transparency design is often geared towards engineers themselves or the direct users of the systems (as opposed to broader audiences or those negatively affected) and relies heavily on natural language explanations and visualisations as the main modalities of communication, appealing to universal, disembodied reason as the main form of perception.
While these more conventional research areas are important and valuable, this workshop calls for more exploratory approaches to ethical interactions with/in AI beyond the concepts of transparency and explainability, particularly by engaging the rich knowledges of the humanities and social sciences. We are especially interested in how the goals of explainability, transparency, and ethics could be re-thought in the face of other epistemic traditions, such as feminist, Black, Indigenous, and post/de-colonial thought, new materialisms, critical posthumanism, and other critical theoretical perspectives. How might concepts such as opacity/poetics of relation (Glissant, Ferreira da Silva), embodied/carnal knowledges (Bolt and Barrett), affective resonances (Paasonen), reparation (Davis et al.), response-ability (Haraway), friction (Hamraie and Fritsch), holistic futuring (Sloane et al.), and other terms rooted in critical perspectives generate different articulations of the framework of interpretability/explainability in AI? How might theoretical premises such as relational ethics (Birhane et al.), care ethics (de la Bellacasa), cloud ethics (Amoore), and other conceptual apparatuses offer alternative ways of engaging in interactions with ADM systems? Participants are welcome from a diverse spectrum of disciplines, including media studies, philosophy, history of technology, the social sciences and humanities more broadly, as well as arts, design, and machine learning.
Organisation
DFG Research Network Gender, Medien und Affekt and the Participatory IT Design group (Fachgebiet Partizipative IT-Gestaltung), Universität Kassel. The workshop is also part of the AI Forensics project, funded by the Volkswagen Foundation.
Programme
Monday, 17 June
Time | Agenda |
---|---|
10:30 – 11:00 | Introduction – Goda Klumbytė, Universität Kassel |
11:00 – 11:30 | Eugenia Stamboliev, Universität Wien, “Post-Critical AI Literacy” |
11:30 – 12:00 | Alex Taylor, University of Edinburgh, “Flows and Scale” |
12:00 – 12:30 | Discussion |
12:30 – 14:00 | Lunch break |
14:00 – 14:30 | Katrin Köppert, Hochschule für Grafik und Buchkunst Leipzig, “Inexplicability” |
14:30 – 15:00 | Simon Strick, ZeM Brandenburgisches Zentrum für Medienwissenschaften, “Overwhelming/Amplification” |
15:00 – 15:30 | Discussion |
15:30 – 16:00 | Coffee break |
16:00 – 16:30 | Arif Kornweitz, HfG Karlsruhe, “Accountability” |
16:30 – 17:00 | Fabian Offert, Paul Kim, Qiaoyu Cai, University of California, Santa Barbara, “XAI as Science” |
17:00 – 17:30 | Discussion |
Tuesday, 18 June
Time | Agenda |
---|---|
10:30 – 11:00 | Recap and summary |
11:00 – 11:30 | Rachael Garrett, KTH Royal Institute of Technology, “Felt Ethics” |
11:30 – 12:00 | Goda Klumbytė, Universität Kassel, and Dominik Schindler, Imperial College London, “Experiential Heuristics” |
12:00 – 12:30 | Discussion |
12:30 – 14:00 | Lunch break |
14:00 – 14:30 | Conrad Moriarty-Cole, Bath Spa University, “The Machinic Imaginary” |
14:30 – 15:00 | Nelly Yaa Pinkrah, TU Dresden, “Opacity” |
15:00 – 15:30 | Discussion |
15:30 – 16:00 | Closing |