Interfaces for Crowdsourcing Interpretation

poster / demo / art installation
Authorship
  1. Gwendolyn Nally
     Libraries - University of Virginia
  2. Chris Peck
     Libraries - University of Virginia
  3. Shane Lin
     Libraries - University of Virginia
  4. Cecilia Márquez
     Libraries - University of Virginia
  5. Claire Maiers
     Libraries - University of Virginia
  6. Brandon Walsh
     Libraries - University of Virginia
  7. Praxis Program Team
     Libraries - University of Virginia
  8. Jeremy Boggs
     Libraries - University of Virginia

Work text

Our research will detail a number of approaches to crowdsourcing interpretation, especially as these approaches relate to the ongoing development and design of Prism, a tool that facilitates crowdsourced interpretation of texts. We take up the challenge posed by Ramsay and Rockwell (2012): that the activity of building provides affordances as rich and informed as writing, and that it is important to be aware of the nature and quality of the intervention that happens through building. Drucker (2009) and Ruecker et al. (2011) demonstrate the importance of speculative prototyping as a way to explore humanities questions and make arguments through prototypes. In that spirit, our research will inform the creation of several interfaces that address problems related to both individual and crowdsourced interpretation.

Background
In 2011, the Praxis team at the University of Virginia created Prism as a digital realization of the “Patacritical Demon” imagined by Drucker (2009), McGann (2004), and Nowviskie (2012). In its current form, Prism allows multiple users to highlight a text according to certain predetermined categories; the tool then creates an aggregate visualization of the individual responses. In this way Prism diverges from typical uses of crowdsourcing: where crowdsourced projects have traditionally asked users to compile data or perform other mechanistic tasks, Prism asks individuals to mark up a text with categories of meaning, so that trends in the way a larger group of users reads the text can be discerned. Although Prism promises to bring individual experience into the fold, Bethany Nowviskie (2012) notes that Prism “is not a device for rich, individual exegesis”; its usefulness lies instead in overlapping and visualizing the responses of all contributors, “generating spectra of similarity and difference.”
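As a rough, hypothetical sketch of the aggregation step (the names below are invented for illustration; this is not Prism's actual implementation), the following Python fragment tallies, for each word position in a shared text, how many users highlighted it under each predetermined category:

from collections import Counter, defaultdict

# Hypothetical sketch of Prism-style aggregation, not the tool's own code.
# Each user's marking maps word positions to the predetermined category
# they highlighted that word under.
def aggregate(markings):
    """Combine per-user highlights into per-word category counts."""
    totals = defaultdict(Counter)
    for user_marking in markings:
        for position, category in user_marking.items():
            totals[position][category] += 1
    return totals

# Three users highlight a short text; word 2 splits the crowd.
users = [
    {0: "rhetoric", 2: "imagery"},
    {2: "imagery", 3: "rhetoric"},
    {2: "rhetoric"},
]
for position, counts in sorted(aggregate(users).items()):
    # The dominant category at each position could drive the
    # visualization's coloring or opacity.
    print(position, counts.most_common())

Note how the aggregate deliberately flattens individual contributions into frequencies; the sections below consider how the individual might be recovered from that aggregate.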

Existing Approaches to Crowdsourcing Interpretation
Owens (2012) distinguishes between two approaches that are commonly lumped under the heading of crowdsourcing: “human computation” and “the wisdom of crowds.” Human computation projects ask participants to solve problems for which computational solutions are comparatively expensive to develop or perform. Such problems include transcription (Transcribing Bentham, Old Weather, reCAPTCHA), protein folding (fold.it), and image metadata tagging (ESP Game). Crowd-wisdom projects, on the other hand, engage participants in open-ended, socially negotiated tasks that may go beyond processing information to create new knowledge. (This is the mode of Wikipedia or of any website with comment or discussion forum functionality.)

Directions for Research
We have begun to consider other ways to crowdsource interpretation, especially approaches that attend more closely to individual responses. In particular, we have identified two areas for exploration:

Computation and Crowd Wisdom: Owens (2012) suggests that both existing modes of crowdsourcing are worth designing for and can work in tandem in Digital Humanities projects (as in Galaxy Zoo). In that spirit, Prism presents the user with a task that is both constrained, in such a way that it produces information, and somewhat open-ended and socially negotiated through the user’s engagement with a text.
Wisdom of the Individual: Crowdsourcing interpretation might also place an increased focus on individuals as part of the collaborative process. Preserving the marks of each participant would better respect the human element at the core of crowdsourcing. It would also make Prism a more powerful pedagogical tool, since an instructor could single out an individual student’s marks to generate discussion, and it would be useful to the social sciences, which are inherently concerned with the individual as a member (and perhaps representative) of a particular group.
Future design and development of Prism must account for the many potential roles of individuals in crowdsourcing interpretation. For example, in order to better serve projects concerned with the wisdom of particular individuals, Prism would need to store user markings and interpretations in a way that makes them extractable and easily viewed in relation to the markings of the crowd. Similarly, in order to better facilitate social science research, Prism might include a way to separate out user interpretations according to demographic information (such as class, gender, or age) or according to specific user responses, as sketched below. The poster will detail how the design and development of Prism has been, and continues to be, influenced by the different roles individuals might play within the crowd.
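A minimal sketch of what such individual-preserving storage might look like, assuming a hypothetical schema rather than Prism’s actual data model: each marking keeps its author and optional demographic fields, so the crowd view becomes one aggregation over records that remain individually retrievable and filterable.

from dataclasses import dataclass, field
from collections import Counter

# Hypothetical schema for illustration only; Prism's real data model may differ.
@dataclass
class Marking:
    user_id: str
    word_position: int
    category: str
    demographics: dict = field(default_factory=dict)  # e.g. {"age_group": "18-24"}

def crowd_view(markings, **filters):
    """Aggregate category counts per word, optionally restricted to a demographic slice."""
    totals = {}
    for m in markings:
        if all(m.demographics.get(k) == v for k, v in filters.items()):
            totals.setdefault(m.word_position, Counter())[m.category] += 1
    return totals

def individual_view(markings, user_id):
    """Recover a single participant's marks, e.g. for classroom discussion."""
    return [m for m in markings if m.user_id == user_id]

records = [
    Marking("alice", 2, "imagery", {"age_group": "18-24"}),
    Marking("bob", 2, "rhetoric", {"age_group": "35-44"}),
]
print(crowd_view(records))                     # the whole crowd
print(crowd_view(records, age_group="18-24"))  # a demographic slice
print(individual_view(records, "alice"))       # one student's marks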

References
Drucker, J. (2009). SpecLab: Digital Aesthetics and Projects in Speculative Computing. Chicago: University of Chicago Press.
Ford, P. (2011). The Web Is a Customer Service Medium. http://www.ftrain.com/wwic.html. (accessed 6 January 2011).
Galey, A., S. Ruecker, and the INKE team (2010). How a Prototype Argues. Literary and Linguistic Computing 25(4): 405–24.
McGann, J. (2001). Radiant Textuality: Literature After the World Wide Web. New York: Palgrave MacMillan.
McGann, J. (2004). What is a Text? In Schreibman, S., R. Siemens, and J. Unsworth (eds). A Companion to Digital Humanities. Oxford: Blackwell. http://digitalhumanities.org/companion/view?docId=blackwell/9781405103213/9781405103213.xml&chunk.id=ss1-3-4
Meister, J. C. (2012). Crowd Sourcing ‘True Meaning’: A Collaborative Markup Approach to Textual Interpretation. In Deegan, M. and W. McCarty (eds). Collaborative Research in the Digital Humanities. Ashgate. 105–122.
Nowviskie, B. (2012). A Digital Boot Camp for Grad Students in the Humanities. Chronicle of Higher Education. http://chronicle.com/article/A-Digital-Boot-Camp-for-Grad/131665/ (29 April 2012).
Owens, T. (2012). Human Computation and Wisdom of Crowds in Cultural Heritage. http://www.trevorowens.org/2012/06/human-computation-and-wisdom-of-crowds-in-cultural-heritage/
Ramsay, S., and G. Rockwell (2012). Developing Things: Notes toward an Epistemology of Building in the Digital Humanities. In Gold, M. K. (ed). Debates in the Digital Humanities. Minnesota: University of Minnesota Press.
Ruecker, S., M. Radzikowska, and S. Sinclair (2011). Visual Interface Design for Digital Cultural Heritage: A Guide to Rich-Prospect Browsing. Burlington, VT: Ashgate.
Surowiecki, J. (2004). The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations. New York: Doubleday.


Conference Info


ADHO - 2013
"Freedom to Explore"

Hosted at University of Nebraska–Lincoln

Lincoln, Nebraska, United States

July 16, 2013 - July 19, 2013

243 works by 575 authors indexed

XML available from https://github.com/elliewix/DHAnalysis (still needs to be added)

Conference website: http://dh2013.unl.edu/

Series: ADHO (8)

Organizers: ADHO