Complexities, Explainability and Method: Media Philosophy and Artificial Intelligence

panel / roundtable
Authorship
  1. David M. Berry

     University of Sussex

  2. M. Beatrice Fazi

     University of Sussex

  3. Michael Dieter

     University of Warwick

  4. Ben Roberts

     University of Sussex

  5. Caroline Bassett

     University of Sussex

  6. Andrew Salway

     University of Sussex

  7. Nathaniel Tkacz

     University of Warwick



Abstract
This panel addresses the explanatory complexities involved in understanding computational knowledge and method. One of the perennial debates raised by computation is that knowledge and its processing are enveloped within a black-boxed structure which obscures or hides the internal workings of the machine. This has implications not just for programmers, but also for those who rely on computational techniques, such as the digital humanities. Moreover, as algorithms continue to penetrate broader society, major difficulties arise when important decisions are made which cannot be understood or checked – this has democratic implications. But not only are decisions and the decision-making process often obscured; the forms of knowledge and modes of thought involved are equally veiled. These debates have been given greater intensity by the rise of machine-learning systems that are able to automate far more complex decision-making processes than the previous generation of algorithms. Not a day goes by without a news report detailing a new front in the automation of labour, the creation of a new form of robot, or a warning of mass unemployment unless re-training and re-skilling begin in earnest. These concerns have also given rise to renewed intellectual debate about the future of work and to proposed remedies for automation-induced underemployment, such as UBI (Universal Basic Income). But such debates need to be extended to the fundamental problem of understanding automated computational decision-making and algorithmic forms of knowledge. In this panel, we explore these questions by probing a number of different ways of thinking about and representing this thematic. Are there ways in which we can use new digital methods to uncover these systems in order to present them in a human-readable form? Do we need new theoretical frameworks for understanding these complex issues? What are the broader implications for the ways in which the digital humanities can contribute to these important questions over complexities in knowledge, thought and data?

Explainable Digital Humanities
Berry, David M.
In the UK, the Data Protection Act 2018, the enabling legislation for the European GDPR (General Data Protection Regulation), has come into force. It has been argued that Article 22 creates a new right in relation to automated algorithmic systems, requiring the "controller" of the algorithm to supply the user (or "data subject") with an explanation of how a decision was made – a social right to explanation. Meeting this right has come to be known, within artificial intelligence research, as the problem of explainability, or Explainable Artificial Intelligence (XAI). Although limited to the European Union, the GDPR is likely in practice to have global effects, as global companies find it necessary to standardise their software products and services and to respond to growing public disquiet over these systems (see Darpa 2018, Sample 2017). In this paper I want to explore the implications of explainability for the digital humanities, and particularly the concept of explainability it gives rise to. This is increasingly relevant given the growing public visibility of digital humanities projects and the potential use of machine learning within them. The discussion I wish to open in this paper is largely speculative. It seems to me that there are two issues worth considering. Firstly, the GDPR might require digital humanities projects to be explainable in some sense, and therefore subject to the same data protection regime as other algorithmic systems; this may mean they are required to provide descriptions of their processing under this "right to explanation". Secondly, the interpretation problem faced by algorithm programmers seems to me to call for exactly the kind of expertise that digital humanists possess, and they could therefore help inform the debate over explainability. Digital humanists tend to be familiar with technical systems and with the questions raised by understanding and interpretation more generally, for example in the discussions over close and distant reading. There is now an emergent field of Explainable AI (XAI), also sometimes known as transparent AI, which attempts to design AI systems whose actions can be easily understood by humans. These third-wave AI systems are designed to produce more “explainable models, while still maintaining a high level of learning performance” and prediction accuracy, helping humans to “understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners” (Gunning 2018). This means that XAI systems will need to be able to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future, in order to strengthen their public accountability. In this paper, I explore this discussion and argue that the digital humanities can enrich the debate by extending the notion of explainability with what I want to call “understandability”: a move from the domain of formal, technical and causal models of explanation to that of understanding, given through a notion of comprehension modelled on the humanities and stressing complexity, interpretability, hermeneutics and critique.

Beyond Human Knowledge: Explainability and Representation in Deep Learning
Fazi, M. Beatrice
This paper addresses some of the philosophical implications of computer programs being no longer constrained by and to the limits of human knowledge. I will interpret this freedom from human knowledge as an exemption from human modes of abstraction. In the paper, I will develop this argument by relating this claim to epistemological questions about representation, as these have been developed within both philosophy and computer science. I will do so in order to argue for the necessity of engaging philosophically with the contemporary success of computational procedures for which there are no adequate human representations, and for which human representation is also no longer necessary. Artificial intelligence research in machine learning will be my main case study. More specifically, I will look at the case of deep learning. The latter expression denotes a set of machine-learning techniques that rely on artificial neurons to process information, somewhat analogously to what a biological brain is understood to do. A lower layer of neurons carries out a computation and transmits its result to the layer above, contributing to the final outcome output by the layer at the top. Although they have been around for decades, these deep learning techniques have today capitalized upon increases in computational power and the availability of vast amounts of data. Because of their recent successes in pattern recognition, for example, deep learning techniques are today much talked about, and have attracted interest from the general public, academia, government and business alike. In this paper, I will address this condition and this debate in order to discuss how deep learning operates in computational ways that are opaque and often illegible. I will thus consider the black-box aspect of deep learning, and I will characterize it in terms of a technical condition. My claim is that this technical condition requires us to reconsider and reassess the abstractive capacities of these AI technologies. The aim of the paper is precisely to offer such a reflection by entering the multifaceted debate about explainability in AI, and thus to assess the ways in which technoscience and technoculture, alongside the digital humanities, are addressing the possibility of re-presenting algorithmic procedures to the human mind. By engaging with examples from the field of explainable AI (or XAI), I will argue that deep learning is not only transforming the epistemic spaces and scopes of what we consider to be a valid explanation: computationally automated techniques such as deep learning are changing the meaning and reach of abstraction too. I will thus conceptualize explainability in AI as a problem that asks us to surpass a strictly phenomenological analysis of machine representation. The challenge for philosophy and for the humanities, I will claim, is that of advancing a theory of knowledge that would be able to account for the epistemic specificities of the artificial cognitive agents that populate our present.
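To make the layered computation described above concrete, the following is a minimal illustrative sketch, not any of the systems discussed in the paper, of how a stack of artificial neurons passes results upward: each lower layer computes a weighted sum and hands its output to the layer above. The weights here are random and purely hypothetical, whereas real deep-learning models learn them from vast amounts of data.

```python
import numpy as np

def relu(x):
    # Non-linear activation applied to each layer's output.
    return np.maximum(0, x)

def forward(x, layers):
    """Pass an input through a stack of layers: each lower layer computes
    a weighted sum plus bias and transmits its result to the layer above."""
    for weights, bias in layers:
        x = relu(weights @ x + bias)
    return x

# Hypothetical three-layer network with random, unlearned parameters,
# purely for illustration of the layered structure.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), np.zeros(8)),
          (rng.normal(size=(8, 8)), np.zeros(8)),
          (rng.normal(size=(2, 8)), np.zeros(2))]
print(forward(rng.normal(size=4), layers))
```

The opacity at issue in the paper arises once such parameters number in the millions and are learned rather than designed: no layer's intermediate values then correspond to a readily human-readable representation.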

Walking Through the Security Apparatus: On Verification, Explainability and Apps
Dieter, Michael; Tkacz, Nathaniel

Apps appear as bounded digital objects, but operate through multiple data flows that vary according to diverse infrastructural settings. Indeed, the coordination of such flows not only makes apps functional, but also makes them economically viable. Studying the embeddedness of apps within these wider situations, however, presents a number of methodological challenges, particularly in developing approaches that are able to delve into their ‘multi-situatedness’. This presentation aims to contribute to the study of these complexities by focussing on walkthroughs, specifically by exploring banking apps using a form of hybrid walkthrough analysis. While walkthrough methods are well-established across a number of research fields and commercial practices, including user-focused walkthroughs (Light et al. 2016), ‘cold’ walkthroughs that focus on data (Weltevrede et al. 2017), and recent ‘post-phenomenological’ approaches from human geography (Ash et al. 2018), we apply a more interdisciplinary approach with an emphasis on how designerly ways of knowing and doing are interwoven with specific techno-economic and cultural niches. Practically speaking, this involves not only systematically documenting a normative scenario of app use, but also thickening the approach along technical dimensions: decompiling apps as software packages to perform diagnostics on the codebase; identifying points of user engagement by reflexively ‘tracing’ interface design patterns; and capturing data-flows to cloud and platform infrastructures through dynamic network analysis. From a critical perspective, this hybrid approach also means paying attention to the heterogeneous kinds of expertise that intersect with apps, including the legislative frameworks, epistemologies and practices of software development, strategies from data-driven marketing and other established ways of doing and knowing within a specific sector. In this presentation, we demonstrate the novelty of this approach with a focus on European digital-only ‘challenger’ or ‘neo-banks’ – that is, new actors in banking whose financial services are delivered exclusively as apps. Importantly, these banking apps are distinct in that they rely on relatively high levels of security. Secure identity verification processes for banking are closely configured according to a raft of judicial and regulatory frameworks broadly referred to as ‘Know Your Customer’ (KYC). In general, the transaction thresholds for mandatory KYC in Europe have in recent years been significantly reduced, while KYC requirements have been extended into emerging contexts such as cryptocurrencies. Many of the latest regulatory reforms incorporate new actors, while requiring more frequent and intensive monitoring. Framed in terms of risk and operationalized through digital technologies and techniques – including smartphones and apps, but also machine-learning systems and social media profiling – these forms of securitization seek to explain who users are through new technical and automated means. In this paper, we discuss how hybrid walkthrough analysis can be used to study these ways of knowing app users and provide a context for critically reflecting on how these ‘appified’ situations drive the habituation of security as convenience.
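As an illustration of the step of capturing data-flows to cloud and platform infrastructures, the sketch below is not the authors' tooling; it assumes an app's traffic has already been recorded during a walkthrough session (e.g. via an intercepting proxy) and exported to a hypothetical capture.har file, and it simply tallies the hosts the app contacts.

```python
import json
from collections import Counter
from urllib.parse import urlparse

# Hypothetical HAR export of an app's captured network traffic,
# recorded during a walkthrough session.
with open("capture.har", encoding="utf-8") as f:
    har = json.load(f)

# Count the hosts the app contacts: a crude proxy for the cloud and
# platform infrastructures its data flows depend on.
hosts = Counter(urlparse(entry["request"]["url"]).hostname
                for entry in har["log"]["entries"])

for host, count in hosts.most_common(10):
    print(f"{count:5d}  {host}")
```

Such a tally is only a starting point; relating hosts to specific vendors, SDKs and regulatory obligations is where the interpretive work of the walkthrough lies.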

Automation Now and Then: Interpreting Automation Fevers, Anxieties and Utopias
Roberts, Ben; Bassett, Caroline; Salway, Andrew
Automation is currently investigated primarily either in terms of the technological developments themselves, or in relation to its implications for the world of work (Frey and Osborne, 2013). Our approach is different: we analyse the ways in which automation is socially justified, imagined, understood, designed, and critiqued. We call such social and cultural concerns automation anxiety. Such anxiety is long-standing and demands urgent and sustained attention in its own right. In this paper we probe the earlier automation scares of the 1950s and 1960s. Automation anxiety in this period produced a plethora of gray literature from government agencies and committees, labour organisations, political parties, and corporations, among others. This includes discussions over automation between labour, civil rights activists, left public intellectuals, and emerging industrial figures, particularly over the question of 'who benefits/when?', which remain highly pertinent today. In our own time there is also a burgeoning collection of similar literature produced in relation to the new application of machine learning to more complex tasks and its implications for the future of work. This paper reports the findings of a pilot project conducted with a small set of this literature, including UK government and US Congress reports on automation from 1956. It informs a wider project that will use computational approaches, including topic modelling and corpus text analysis, to compare corpora of gray literature from the 1950s and 1960s with material from the present. Corpus linguistic techniques such as keywords and n-grams, along with topic modelling, will be used to identify technology-specific terminology and a set of automation concepts. The varying uses, meanings and connotations of these automation concepts over time will be explored with concordance and collocation analysis. Keyness analysis will also be used to contrast the UK and US cases. Our goal in this research is twofold: firstly, to better understand and account for the cyclical nature of automation anxiety, as reflected in the language of gray literature, and the recurrence of automation debates in culture, particularly with reference to the 1960s and today. Of interest here is the media-archaeological concept of topos, drawn from Erkki Huhtamo, as a way we might think about the return of automation anxieties (and fevers). We are also interested in the idea of revived 'salience' (Traub) applied to the way in which tropes evident in these debates are reproduced and re-embedded today. The second issue explored in this paper concerns the forms of knowledge the adopted methods can deliver. We address the tension between interpretive critical-theoretical methods and empirical DH methodologies, which brings into question the forms of knowledge the historical 'probes' we are deploying can deliver. A mode of 'complex interpretation', we suggest, may be usefully developed to take us beyond Moretti's sense of operationalization, in which the test of big data methods is that they 'change theory', in thinking about how to engage critical theorisation and 'data'.
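As a minimal sketch of the corpus techniques mentioned above, and not the project's actual pipeline, the following assumes a hypothetical corpus/ directory of plain-text gray-literature documents and extracts simple unigram and bigram frequency lists; full keyness analysis would additionally compare these counts against a reference corpus.

```python
import re
from collections import Counter
from pathlib import Path

def tokens(text):
    # Lowercase word tokens; a real pipeline would also handle OCR noise,
    # stopwords and lemmatisation.
    return re.findall(r"[a-z]+", text.lower())

def ngrams(words, n):
    # Yield overlapping n-word sequences from a token list.
    return zip(*(words[i:] for i in range(n)))

# Hypothetical directory of digitised gray literature (e.g. 1950s-60s reports).
corpus = [tokens(p.read_text(encoding="utf-8", errors="ignore"))
          for p in Path("corpus").glob("*.txt")]

unigrams = Counter(w for doc in corpus for w in doc)
bigrams = Counter(b for doc in corpus for b in ngrams(doc, 2))

print(unigrams.most_common(20))
print(bigrams.most_common(20))
```

Concordance and collocation analysis would then examine the contexts in which candidate automation terms appear, tracing their shifting connotations across the two periods.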

Bibliography
Berry, David, Fazi, Beatrice, Roberts, Benjamin and Webb, Alban (2019). No signal without symbol: decoding the digital humanities. In: Gold, Matthew K. and Klein, Lauren F. (eds.), Debates in the Digital Humanities. University of Minnesota Press, Minneapolis.

Berry, David M. and Fagerjord, Anders (2017). Digital Humanities: Knowledge and Critique in a Digital Age. Polity Press, Cambridge.

Berry, David (2017). Prolegomenon to a media theory of machine learning. Media Theory, 1(1).

Berry, David M. (2014). Critical Theory and the Digital. Critical Theory and Contemporary Society. Bloomsbury, New York.

Dieter, Michael and Gauthier, David (2019). On the politics of chrono-design: capture, time and the interface. Theory, Culture & Society, 36(2): 61-87.

Fazi, M. Beatrice (2018). Contingent Computation: Abstraction, Experience, and Indeterminacy in Computational Aesthetics. Media Philosophy. Rowman & Littlefield International, London.

Fazi, M. Beatrice (2018). Can a machine think (anything new)? Automation beyond simulation. AI & Society.

Conference Info


ADHO 2019: "Complexities"

Hosted at Utrecht University

Utrecht, Netherlands

July 9, 2019 - July 12, 2019

Organizers: ADHO