The Case for DH in Literary Scholarship

Long paper

Author: Elena Pierazzo, University of Tours, France

Abstract


Not all the Humanities have been equally touched by the digital. For textual scholarship, history, and linguistics, for instance, there is a substantial number of scholarly contributions, particularly when we include experiences embodied in projects and resources. Comparatively speaking, however, digital literary criticism has had few followers. An exception is Computational Literary Studies (CLS), which applies quantitative methods to large amounts of literary and bibliometric data. Linked to the methods of distant reading [Moretti, 2005], this approach enjoys great success today, while web resources like Voyant, software like Gephi, and programming environments like R have made text mining accessible even to those with limited computer skills. Linked to this approach, stylometry and authorship attribution are also thriving; particularly mediatized are the research initiatives that led to the "unmasking" of Robert Galbraith, a pseudonym of J.K. Rowling, and of Elena Ferrante [Juola, 2015; Tuzzi and Cortelazzo, 2018]. However, literary criticism connected to close reading seems almost absent from the DH radar. The CATMA tool, designed to define personalized tagsets for (mainly) literary analysis [Meister, 2020], represents a bright exception. Meister, in fact, is one of the few scholars who have engaged with digital literary criticism and digital hermeneutics; the latter has also been explored by Van Zundert (2016) and Ramsay (2011), but from a quantitative perspective. Relatively few scholars in DH have addressed literary criticism with qualitative approaches, which are, conversely, among the most important for non-digital literary scholars.
The reasons for this absence are probably to be found in the controversies about the use of markup within texts that have inflamed the scholarly community since the Eighties. The act of adding explicit markers to the text has been subjected to scrutiny, as it is perceived (rightly) as a harbinger of interpretation, and this has been (and, to a certain extent, still is) perceived as an invasion, a disfigurement of the text; Cummings (2008) gives a vivid account of the debate and reflects on how it has limited the use of TEI for literary criticism. The argument goes that once a text is marked up, it cannot be reused by others because the interpretation added by the encoder would make it unusable. According to this vision, digital texts must be made available in their most neutral and objective form, and any form of annotation, including editorial annotation, must be avoided. Sperberg-McQueen (1991) and Cummings (2008), amongst others, have tried to address the issue, and I have argued elsewhere about the hermeneutic fallacy of the category of objectivity [Pierazzo, 2015]; but these arguments remain far from impacting "the Humanities at large" and in particular literary scholars [Meister, 2020]. However, in order to contextualize this debate, one should go back to when the controversy was born. The urgency of those years was to put texts online, to create literary corpora for concordances and the study of word frequencies; at the time, the digital acquisition of texts, the transformation of print into sequences of characters to be analyzed by computers (Machine Readable Form), was mostly done by hand, with an enormous expenditure of time and energy. The emphasis was therefore on making texts available and on the need to avoid repeating work. Researchers did not want to work with texts full of manually added codes which then had to be removed just as manually in order to reuse the texts.
It is worth noting, though, how this discourse hides a conception of DH as a service: the goal was thought to be producing resources for others to do "real" research. This argument is not only dangerous, condemning DH to a mere service role, but also wrong: a text, any text, can only be the result of a dialectical compromise between the source documents that contain it and the scholars who interpret it (even when they "only" transcribe it), and therefore no text can ever be considered objectively neutral [Pierazzo, 2015]. Today conditions have changed: most literary texts are digitally available in many versions, not to mention the plethora of tools and methods to "get rid of" markup; the old objections therefore no longer stand in the same way.
Another obstacle to the uptake of DH in literary studies is the conviction that close reading and critical interpretation only require a reader, a text, and a (printed) essay, and that computers, in this context, are therefore useful only as typewriters [Kirschenbaum, 2016]. Yet the lack of experimentation and engagement of the scholarly community in DH for literary analysis does not allow for a clear assessment of the epistemological added value of using computers for one or a few texts at a time. But shouldn't this be the moment for rethinking Digital Literary Studies? Couldn't we at least try to use markup, ontologies, and other methods to understand a text, or to answer questions about interpretation?
The paper will present some experiences at the University of Tours using TEI markup for the history of ideas, and ontologies and databases for the analysis of fictional entities (people and places). We have applied these methods to works by Boccaccio, to the Vite by Vasari, and to a small corpus of seventeenth-century librettos. These experiments are showing promising results, not only in literary terms but also from a methodological perspective, with colleagues and researchers finding themselves challenged and enticed by DH heuristics.
Conditions are ripe for experiments and discussions to evaluate the impact of DH in literary studies, particularly in light of the advancements in HTR and other types of CLS, which have the potential to bring a large number of unknown and understudied texts into the literary arena. This could truly change our perspectives on and understanding of literature, but we need to sharpen our hermeneutical tools first.

Bibliography

Cummings, J., 2008. The text encoding initiative and the study of literature. In A Companion to Digital Literary Studies (pp. 451-476). Blackwell.

Juola, P., 2015. The Rowling case: A proposed standard analytic protocol for authorship questions. Digital Scholarship in the Humanities, 30(1): 100-113.

Kirschenbaum, M.G., 2016. What is digital humanities and what's it doing in English departments? In Defining Digital Humanities (pp. 211-220). Routledge.

Pierazzo, E., 2015. Digital Scholarly Editing: Theories, Models and Methods. Routledge.

Ramsay, S., 2011. Reading Machines: Toward an Algorithmic Criticism. University of Illinois Press.

Sperberg-McQueen, C.M., 1991. Text in the electronic age: Textual study and text encoding, with examples from medieval texts. Literary and Linguistic Computing, 6(1): 34-46.

Tuzzi, A. and Cortelazzo, M.A., 2018. What is Elena Ferrante? A comparative analysis of a secretive bestselling Italian writer. Digital Scholarship in the Humanities, 33(3): 685-702.

Van Zundert, J.J., 2016. Screwmeneutics and hermenumericals: The computationality of hermeneutics. In A New Companion to Digital Humanities (pp. 331-347). Wiley-Blackwell.


Conference Info


ADHO - 2022
"Responding to Asian Diversity"

Tokyo, Japan

July 25, 2022 - July 29, 2022

361 works by 945 authors indexed

Held in Tokyo and remote (hybrid) on account of COVID-19

Conference website: https://dh2022.adho.org/

Contributors: Scott B. Weingart, James Cummings

Series: ADHO (16)

Organizers: ADHO