Starting the Conversation: Literary Studies, Algorithmic Opacity, and Computer-Assisted Literary Insight

Long paper

Aaron Louis Plasek, New York University
David L. Hoover, New York University


1. Probing questions of literary interpretation with computer-aided text analysis
Thirty-seven years after Milic (1966) rejected the untenable fear of many literary scholars that “the study of literature may become mechanical if it is processed by a computer” (4), Ramsay (2003) observes that while computers can successfully pursue “empirical validation of ‘impressionistic’ or ‘serendipitous’ critical readings” (173), such uses “have not penetrated the major journals of literary study,” and that, “in general, our methods are perceived as some sort of positivistic last stand” (168). Wanting computer-assisted text analysis to have a prominent place in the “wider community of humanist scholars” (2003: 173), Ramsay argues that all literary interpretation is more arbitrary than scholars acknowledge (2003: 170), and that interpretation “must present its alternative text as a legitimate counterpart—even a consequence—of the original” (Ramsay 2011: 56). But, à la McGann (2001), “[a] true critical representation does not accurately (so to speak) mirror its object; it consciously (so to speak) deforms its object” (173). Reading Machines echoes this: “The critic who endeavors to put forth a “reading” puts forth not the text, but a new text” (Ramsay 2011: 16). How we put forth “new” texts, McGann argues, deeply depends on our “idiosyncratic relation to the work” (116). Computers help us explore these idiosyncrasies by expanding the number of deformed texts we can produce; reading these “new” texts brings us “to a critical position in which we can imagine things about the text that we didn’t and perhaps couldn’t have otherwise known” (McGann 2001: 116).

Ellis & Favat (1966) also employ rearrangement-as-interpretative procedure and provide a useful counterpoint to McGann and Ramsay by emphasizing that computers merely “enhance” what literary critics already do (1966: 637). Ellis & Favat use the “opportunity for the different examination and reordering of [Huckleberry Finn]” (637) to ask questions about Huck’s speech regarding the relation of “death” and “family” by generating concordances (630-33), but concordances have been used since at least the 13th century. What is new, Ellis & Favat contend, is that the critic can now “ask questions that previously he could have only wished to ask” (638) because the computational efficiency of evidence-gathering allows us to pursue questions previously pragmatically impossible (637). Yet Ellis & Favat suggest that certain kinds of reordering are not valid: “The fact that the computer has aided the scholar does not mean that this critical procedure has been violated” (637). Understanding what would constitute a violation positions us to better appreciate what is new in Ramsay’s algorithmic criticism. When does “the different examination and reordering of data” and the “grouping of [text]” cease to be valid “evidence” (637)?

2. Algorithmic criticism as literary chatbot
Ramsay’s textual analysis, grounded in deformance, is a mechanism for destabilizing linguistic and cultural assumptions when reading texts. Although deformance has been a source of contention (Hoover 2005, 2007), many new DH scholars seem to underestimate its importance and implications. We argue that the practice of deformance arises, in part, from a broader conception of the humanities that emphasizes multiplying possible solutions and facilitating interesting and unpredictable discussions rather than finding particular solutions. Thus Ramsay insists that literary criticism should aim not to arrive at the meaning, but to ask “how do we ensure that [a text] keeps on meaning?” (2003: 170). He reaffirms this eight years later: “conclusions are evaluated not in terms of what propositions the data allows, but in terms of the nature and depth of the discussions that result” (2011: 9), echoing McGann’s claim that “the critical and interpretative question is not ‘what does a poem mean?’ but ‘how do we release or expose the poem’s possible meaning?’” (2001: 108). More recently, surprised by the difference in the words used by male and female speakers in The Waves, first discussed in Reading Machines (2011), Ramsay asks,

Do we imagine that such further experiments would resolve long-standing questions about gender and language? Do we really want those questions resolved? That last question may seem slightly perverse, but I believe that, in the end, what is most distinct about humanistic inquiry is its resistance toward final answers. It is the goal of the seminar to answer questions, but mostly by proposing them more fruitfully. The humanities wants for itself a world that is more complex than we thought . . . We are in search of a conversation[.] (2012: 11-12)
Ramsay’s question is “how successful the algorithms were in provoking thought and allowing insight” (Ramsay 2003: 173), and he is ultimately “more concerned with evaluating the robustness of the discussion that a particular procedure annunciates” (Ramsay 2011: 17). Unfortunately, emphasizing the “robustness of the discussion” that a deformed text promotes may de-emphasize the algorithms used to generate it. Yet if the algorithm that deforms the original text is to facilitate our interpretive insights, knowing what the algorithm does seems crucially important.

3. Continuing the conversation: getting algorithms out of the black box
Deformance, like computational stylistics, necessarily focuses our attention on certain narratives of meaning, drawing our attention to specific words, phrases, images, and patterns at the expense of others. Lotaria in Calvino’s If on a Winter’s Night a Traveler (1979) “reads” a novel by looking at the words that appear 19 times—namely, “blood, cartridge, belt, commander, do, have, immediately, it, life, seen, sentry, shots, spider,” and so forth—and observes that “it’s a war novel, all action, brisk writing, with certain underlying violence” (182). She answers the literary equivalent of a factual question using a simple and relatively transparent method. However, when Ramsay lists the most characteristic words used by characters in The Waves in order to “participate in [a] literary critical endeavor beyond fact-checking” through the use of the tf-idf algorithm (Ramsay 2011: 10), the resulting “deformed text” he discusses cannot be reproduced by the procedure he describes. This is partly because he does not use the precise tf-idf equation he presents, but (more importantly) because of some interpretative decisions that are by no means obvious. Our point, however, is less to quarrel with Ramsay’s decisions than to interrogate his method.
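Since Ramsay's results cannot be reproduced from the equation he presents, it is worth being concrete about what the textbook formulation of tf-idf computes. The sketch below shows only that standard formulation, not Ramsay's actual procedure; the toy "monologues" are invented for illustration.

```python
import math
from collections import Counter

def tf_idf(term, doc_tokens, corpus):
    """Textbook tf-idf: raw term frequency in one document, scaled by the
    log inverse document frequency of the term across the corpus."""
    tf = Counter(doc_tokens)[term]
    df = sum(1 for doc in corpus if term in doc)
    return tf * math.log(len(corpus) / df) if df else 0.0

# Toy corpus standing in for tokenized monologues (invented data).
corpus = [
    ["wave", "sea", "light", "sea"],
    ["wave", "garden", "leaf"],
    ["sea", "light", "wave"],
]
# "wave" occurs in every document, so its idf (log 3/3) is zero;
# "garden" is unique to the second document and scores highest there.
```

A word shared by every speaker therefore scores zero no matter how often it is used, which is one reason seemingly small tokenization and weighting choices can change which words surface as "characteristic."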

Ramsay (2011) does not explain the exact algorithm that produces his men-only and women-only words, but they are simple enough to identify. Rather than 90 men-only and 14 women-only words, however, we found 117 and 10. The mismatches are caused mostly by Bernard’s final retrospective chapter, which Ramsay has (quite reasonably) omitted from his analysis (though he confirms by email that he should have noted this). Re-analyzing without the final chapter reveals a few remaining discrepancies. For example, Ramsay lists “banker” and “Brisbane” as men-only words, but they appear in Neville’s and Bernard’s monologues only as imagined quotations from Louis. Discussing this kind of decision, we suggest, could deepen and engage a conversation about The Waves.
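Because the gender-exclusive word lists are "simple enough to identify," one plausible reconstruction is a pair of plain set differences over pooled vocabularies. This is our guess at the shape of the procedure, not Ramsay's code; the text fragments are invented stand-ins for the monologues.

```python
def exclusive_words(male_texts, female_texts):
    """Words used by at least one male speaker and no female speaker,
    and vice versa, computed as plain set differences."""
    male_vocab = {w for text in male_texts for w in text.lower().split()}
    female_vocab = {w for text in female_texts for w in text.lower().split()}
    return male_vocab - female_vocab, female_vocab - male_vocab

# Invented fragments in place of the six monologues.
men = ["the sentry fired a shot", "blood on the cartridge belt"]
women = ["the garden at dawn", "a shot rang out"]
men_only, women_only = exclusive_words(men, women)
# "shot" occurs in both pools, so it appears in neither exclusive list.
```

Even this naive whitespace tokenization exposes the kind of unstated decision flagged above: whether quoted speech attributed to another character, like "banker" and "Brisbane," counts toward the quoting or the quoted speaker.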

More significantly, Ramsay’s provocative lists of 90 men-only and 14 women-only words rest problematically on the amounts of text by the two genders. Even without the final chapter, there are about 35,000 words by the men and only 20,000 by the women, a discrepancy that “explains” the preponderance of male-only words. To “prove” this one could simply cut each male monologue to the length of its corresponding female monologue. (Corresponding how? Matching longest to longest, shortest to shortest? Why?) We chose a different “deformation,” randomizing the monologue lines and cutting each monologue to the length of the shortest, equalizing each character’s contribution. (Why?) This deformed text produces 31 women-only and 29 men-only words. Ramsay is right that the algorithm merely begins the argument, but the “provocative” revelation that the men share more words than the women seems deceptively and inappropriately provocative: it rests merely on the lengths of the monologues. (Why are the male monologues longer?) The nature of the male and female words remains provocative and suggestive for a conversation about gender in The Waves:

But our deformation’s doubling of women-only words raises interesting questions and warns against over-interpreting the lists. Obviously, men use some of our women-only words in the parts we left out, a problem exacerbated by the rarity of these only-words: the highest frequency above is 4. All this suggests reconsidering the initial decision to use tf-idf in the first place.

We use this opportunity to argue both that it is imperative that text analysis researchers carefully outline their procedures so that others can reproduce the original results, and that more attention be given by the community at large to reexamining earlier results. Framing literary questions algorithmically is beneficial because stating our assumptions in computable terms may reveal our own hidden assumptions about the text we are examining, our assumptions regarding literary interpretation, or both. However, transforming our own literary methods into algorithms is always somewhat imperfect: foregrounding these interpretative decisions rather than hiding them allows the critical conversation to continue in a more valuable fashion by incorporating the difficulties of the algorithm into the act of literary interpretation itself.
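The equalizing deformation described above — shuffle each monologue's lines, then cut every monologue to the word-length of the shortest — can be sketched as follows. The line-level representation, the fixed seed, and the toy monologues are assumptions for illustration, not the exact procedure used on The Waves.

```python
import random

def equalize(monologues, seed=0):
    """Shuffle each monologue's lines, then truncate every monologue to the
    word-length of the shortest, so each speaker contributes equally."""
    rng = random.Random(seed)
    shuffled = []
    for lines in monologues:
        lines = list(lines)
        rng.shuffle(lines)
        shuffled.append([w for line in lines for w in line.split()])
    target = min(len(words) for words in shuffled)
    return [words[:target] for words in shuffled]

# Invented stand-ins for monologues of unequal length.
monologues = [
    ["a b c", "d e"],  # 5 words
    ["f g"],           # 2 words
    ["h i j k"],       # 4 words
]
equalized = equalize(monologues)
# Every speaker is cut to 2 words, the length of the shortest monologue.
```

Because the cut is applied after shuffling, which words survive depends on the random seed, so exclusive-word counts will vary from run to run; averaging over many random deformations would be a natural refinement of the single-run figures reported above.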

Ellis, A. and Favat, F. (1966). “From Computer to Criticism: An Application of Automatic Content Analysis to the Study of Literature.” In Stone, P., Dunphy, D., Smith, M., and Ogilvie, D. (ed.), The General Inquirer: A Computer Approach to Content Analysis. Cambridge: MIT Press, 628-38.

Hoover, D. (2007). “The End of the Irrelevant Text: Electronic Texts, Linguistics, and Literary Theory.” DHQ: Digital Humanities Quarterly 1(2).

————. (2005). “Hot-Air Textuality: Literature after Jerome McGann.” TEXT Technology 2: 71-103.

McGann, J. (2001). Radiant Textuality: Literature after the World Wide Web. New York: Palgrave.

Milic, L. (1966). “The Next Step.” Computers and the Humanities 1(1): 3-6.

Ramsay, S. (2012). “Textual Behavior in the Human Male.” (Revised March 2012 transcript, 14 pages.) Journal of Digital Humanities 1(1): 32.

————. (2011). Reading Machines: Toward an Algorithmic Criticism. Urbana: University of Illinois Press.

————. (2003). “Toward an Algorithmic Criticism.” Literary and Linguistic Computing 18(2): 167-74.


Conference Info


ADHO - 2014
"Digital Cultural Empowerment"

Hosted at École Polytechnique Fédérale de Lausanne (EPFL), Université de Lausanne

Lausanne, Switzerland

July 7, 2014 - July 12, 2014

377 works by 898 authors indexed

Attendance: 750 delegates according to Nyhan 2016

Series: ADHO (9)

Organizers: ADHO