Toward a Dynamic, Generative Evaluation Toolbox: a Roundtable

panel / roundtable
Authorship
1. Lisa Antonille

    Department of English - University of Maryland, College Park

  2. LeeEllen Friedland

    National Digital Library Program - Library of Congress

  3. Kenneth Price

    University of Nebraska–Lincoln, Libraries - University of Virginia

  4. Susan Schreibman

    Royal Irish Academy, Digital Humanities Observatory - Royal Irish Academy, Trinity College Dublin, University Libraries - University of Maryland, College Park

  5. Lara Vetter

    Maryland Institute for Technology and Humanities (MITH) - University of Maryland, College Park

  6. George Williams

    University of Maryland, College Park, Maryland Institute for Technology and Humanities (MITH) - University of Maryland, College Park, Department of Languages, Literature, and Composition - University of South Carolina Upstate, English - University of South Carolina Upstate

Work text


ACH paper proposals specify that papers "that concentrate on the
development of new computing methodologies should make clear how the
methodologies are applied to research and/or teaching in the
humanities, and should include some critical assessment of the
application of those methodologies in the humanities." Granting
agencies such as the Fund for the Improvement of Postsecondary
Education (FIPSE) are quite insistent that evaluation play a major
role in any sponsored project, and evaluation is seen as increasingly
important by all major funders. In 2000, the importance of measuring
results in education was not merely a convenient U.S. presidential
campaign slogan: reports regularly appear in the Chronicle of Higher
Education about the value of and need for meaningful assessment tools;
in fact, the Chronicle recently featured a story about two major
foundations "collaborating on a national survey [National Survey of
Student Engagement] aimed at measuring the extent to which students
are learning and colleges are performing" (13 November 2000). Indeed,
in all its various forms of outcomes and process assessment,
evaluation is a crucial component of humanities computing, both for
the development of high-quality projects and for the development of
audiences. Drawing on a
variety of experiences in humanities computing projects, this
roundtable seeks to probe the present state of evaluation tools and,
via critical discussion among the roundtable participants and with the
audience, propose dynamic methods of critical assessment especially
suited to new media. The participants will approach the problem of
evaluation mechanisms from three major project-type perspectives:
digital research projects designed to produce knowledge; digital
pedagogical endeavors designed to distribute research results and
develop skills for rigorous critical inquiry in new media; and library
projects designed to facilitate and maintain access to resources.

A scholarly website dedicated to the study of the British Romantic
period, Romantic Circles features research and pedagogical work:
scholarly editions, critical and theoretical articles, teaching
components, and art and music relevant to the literature that brought
the site into being. Site Manager Lisa Antonille will describe
processes of evaluation for Romantic Circles: dynamic, iterative,
constantly revisionary processes that examine the site's content and
interface, methods the project uses in evaluating and improving itself
as an academic resource and community. In terms of content,
contributions to the Romantic Circles Praxis Series are peer-reviewed to
guarantee the highest scholarly standards. Antonille will discuss the
changing nature of peer review in a wired, networked environment.
Additionally, the interface, including the site's usability,
maintenance, and accessibility, is equally important to the site's
integrity. As a result, Romantic Circles performs a variety of
usability studies to ensure that its interface and content send a
unified message to its audience, and this presentation will highlight
the opportunities and difficulties of rigorously evaluating a site's
usability and content as it exists in a medium that is always in
medias res.

With the changing nature of peer review and questions of how best to
assess usability on the table, Dickinson Electronic Archives Associate
Editor and Project Manager Lara Vetter will begin by outlining the
project's partially implemented plans to bring conscientious users
formally into the critical review process, as complements to the
experts relied upon to evaluate the project's quality, rather than
relying on user feedback in an ad hoc fashion or not at all. She will
thus also address how innovations in assessment might effectively
extend critical review beyond peer review and incorporate usability
measures. Vetter will then focus on
the importance of contextual presentations as crucial enhancements of
online scholarly editions, which are not bound by the physical
bibliographical constraints that limit the presentation of, for
example, a variorum, and on how such presentations are crucial to
evaluation processes: presentations of Dickinsonian writings
unavailable for the past century; of biographical and historical
materials lost to literary history (whether through active suppression
or through the accidents of transmission); of contemporary poets
commenting upon Dickinson's legacy; and of postsecondary, secondary,
and primary students and classes using the Dickinson Electronic
Archives in poetry, literature, writing, and history courses and
producing their own critical papers, websites, and journal
reflections. What, if any, are the effects of vastly different levels
of user expertise, both technical and intellectual,
on evaluation? Vetter will conclude by analytically describing the
project's work in 3-D computer modeling with a theater professor at
the University of Maryland and the curators of both Emily Dickinson's
and her brother and sister-in-law's houses, nineteenth-century
cultural and architectural treasures, to develop virtual learning
centers, and how that work has unexpectedly begun to serve as part of
the evaluation mechanism for the Dickinson Electronic Archives'
critical editions.

A FIPSE-sponsored project, The Classroom Electric: Dickinson, Whitman,
& American Culture is a collaborative venture between the Dickinson
Electronic Archives and the Whitman Hypertext Archive that brings
together 11 faculty members from around the United States to develop
pedagogical applications of these major research archives. The
project participants are from different levels of institutions (from
the largest public universities on each coast to mid-size public and
private colleges to a small private college whose mission is to
educate the poorest members of rural America) and are, by the
project's design, not humanities computing specialists but American
literature specialists who have agreed to devote part of their
intellectual energies and resources to developing practical
applications of these scholarly new media productions. FIPSE requires
each of its projects to formulate evaluation strategies that do
something much more than record subjective impressions from students
and teachers, and Ken Price will reflect on the first three years of
this project and will begin to outline plans to enhance The Classroom
Electric's evaluation tools. In the first three years, evaluation
became synonymous with frustration: the professors involved were
resistant to control groups, and some humanists voiced skepticism
verging on hostility toward the whole enterprise of evaluation. Price
will open up the discussion to questions of what happens when the
better types of evaluation enjoyed by a project (anecdotal evidence
from teachers and students, the archived email discussion list of the
project participants and of the classes using the resources produced,
examples of electronic work produced by students, journals kept by
participants, and critically reflective essays written by
participants) are those least sanctioned by the project's major
granting agency.
How can these more subjective evaluative tools be used responsibly and
effectively? How can the more objective evaluative tools be
incorporated into a process whose participants are actively resistant?
And what hybrid evaluative tools might be forged to analyze more
comprehensively the project's goals and outcomes and to contribute
directly to improving project performance and future achievements?

All three of the projects discussed by this point in the roundtable
receive some of their validation from the mechanisms of publication:
Romantic Circles is published by the University of Maryland and
partners with Cambridge University Press, and the Dickinson Electronic
Archives, its partner the Whitman Hypertext Archive, and their mutual
endeavor The Classroom Electric are published via one of the most
respected humanities computing centers in the world, the Institute for
Advanced Technology in the Humanities (IATH) at the University of
Virginia. Using these as beginning examples, Susan Schreibman will
reflect on and analyze publication standards and present a heuristic for
formulating more extensive critical review of dynamic publication.
Peer review in the bibliographical world usually focuses on content
rather than on the mechanisms of deliverability that, for example,
shape the printed page; it also focuses on work that is static, in
that it is not readily and easily updatable but must rely on the
production of an entirely new edition to make substantive changes in
any part. Critical review in digital
media examines the methods of deliverability and design as
constitutive of the content of digital productions, and should also
develop methods that will judiciously evaluate dynamic productions.
Schreibman's discussion will also reflect on evaluation of text
encoding, imaging, and preservation standards as important components
of any review process. Complications of evaluating institutional
support and what that actually means in terms of intellectual rigor
and quality will also be discussed.

The topics broached so far on the roundtable have had primarily to do
with project quality. MITH Program Associate George Williams will
extend these discussions to focus more directly on evaluating students
by analytically describing the University of Maryland's involvement in
administering Tek.Xam, an Information Technology Certification Exam
that explores students' aptitudes in operating technology and in
retrieving, interpreting, and presenting information in digital forms,
as well as their awareness of legal and ethical issues in information
technology. The goal of the
exam is to provide liberal arts students with a means to demonstrate
the technical proficiency necessary for success in an increasingly
digital world, and Williams will raise questions about evaluating the
exam itself and its goals and will offer perspectives on how to build
on Tek.Xam's
successes and learn from its limitations. Issues of evaluating
e-scholarship in the undergraduate classroom will also be examined,
including the reward systems (or lack thereof) for work such as
teacher-student and student-student email exchanges that, in their
immediacy and electronic transmission and archiving, seem ephemeral.
Steps MITH is taking to measure visual literacy and MITH's work with
disability studies experts and concepts of universal design will
deepen the roundtable's considerations of how best to formulate
dynamic evaluation tools.

Having examined evaluations of digital research projects, pedagogical
applications of digital research, and the work of a humanities
computing institute, the roundtable will then turn to considerations
of digital libraries. Senior specialist LeeEllen Friedland of the
Library of Congress's National Digital Library Program will begin by
outlining basic ways in which the library perspective differs from
research and pedagogical points of view. Digital libraries build on
traditions of access and management, and the library perspective is
usually broader and deeper than that of research and teaching projects
because librarians are thinking about the underpinnings and
mechanisms, the structure and architecture, of digital library
projects that allow all of the pieces to work and be usable.
Librarians think about resource discovery, maintenance, integrated
access, and public service in ways that individual project developers
may well not, and thus libraries go to great lengths to develop
standards and to foster community practices that optimize the
potential for interoperability.
Friedland's presentation will build on the previous observations of
the other roundtable participants and will open up the discussion to
involve the audience as we collectively attempt to formulate dynamic
methods of critical assessment especially suited to new media. Such
methods will undoubtedly involve extending critical (peer) review to
include users, and may well build on the evaluation work of usability
studies, focus groups, and control groups in order to improve outcome
measures and process assessment.


Conference Info

In review

ACH/ALLC / ACH/ICCH / ALLC/EADH - 2001

Hosted at New York University

New York, NY, United States

July 13, 2001 - July 16, 2001

94 works by 167 authors indexed

Series: ACH/ICCH (21), ALLC/EADH (28), ACH/ALLC (13)

Organizers: ACH, ALLC
