McGill University, McMaster University, Department of Languages, Literatures & Cultures - University of Alberta
The production of peer reviewed scholarship is the single most important activity for professional
advancement in academe, including tenure, promotion, and salary increases. The development of software for
Humanities Computing has been identified as a crucial need in our field1, and yet, because no peer
review mechanism exists for software, computing humanists lack an important incentive to engage fully in
programming activities. Why divert valuable time and effort to software development when the payoffs are
generally much greater for the production of articles and books? We believe that this conundrum is a
dominant factor in the current dearth of specialised text analysis tools.
This panel will examine the issues surrounding software development in Humanities Computing, and
explore possible mechanisms for peer review.
In addition to encouraging software development by computing humanists—those best suited to
understand the needs of the community—a peer review process would establish best practices and guidelines
that would likely be useful to most developers. A significant number of developers in the Humanities are
programming autodidacts who have had little or no formal training in software development and who would
especially benefit from guidelines established specifically for the circumstances of Humanities Computing.
Guidelines would also be an extremely valuable resource for instructors who wish to teach programming to
Liberal Arts students (computer science courses tend to focus on business, scientific and engineering
problems using lower-level, strongly-typed languages). Though guidelines for software development could be
formulated independently, a peer review mechanism would provide a strong impetus for doing so.
Peer review would also promote a sustained discussion on the software needs of the Humanities
Computing community, and future directions for development. Assessing the value of a particular piece of
software assumes a reasonable notion of what already exists and what would be most useful to have. A review
process would be an opportunity for both reviewers and developers to examine critically the current state of
affairs and reflect on where efforts would best be concentrated.
The theoretical and practical challenges to formulating a peer review process for software
development are numerous. In order to decide whether software should even be eligible for peer
review—as Humanities Computing scholarship—we need to recognise that software packages can be
complex objects consisting of multiple dynamic parts, including:
• code (new, modified, or integrated)
• interface (design, frameworks for various platforms or languages)
• documentation (as comments in code, APIs or instructions for developers, and help for
end-users)
• other documents (research statements, technical articles, representative results)
Assessment of each of these components, together or separately, can be further complicated by the
circumstances of the development team: project leaders who may not have done any coding, research
assistants who may not have contributed to more recent versions, components that may have been developed
by external contractors, etc. Software resources tend to be organic entities that resist the fixity to which the
review of published materials is accustomed, and on which it perhaps depends.
Meshed with the challenges of defining the nature of the object(s) to be reviewed is the question of
who would be qualified and appropriate to do the reviewing of which parts, and based on what general
principles and criteria. The relatively small number of developers in Humanities Computing poses a
heightened risk of biased reviews, at least for the code component. However, professional programmers (in
the public or private sectors) may not be an appropriate alternative, since their concerns and priorities would
likely be quite different.
There are useful materials that can provide some guidance as we work through these issues, such as
guidelines produced by the MLA2, and “The Stoa: A Consortium for Electronic Publication in the
Humanities” <http://www.stoa.org/>.
The panel will begin with a summary (by Sinclair) of the more prominent issues surrounding peer
review of software in the humanities. Then each of the panel participants will deal individually with
one or more of the following questions:
1. Can software tools be considered original contributions to the field comparable to other
contributions?
2. Practically speaking, can software be reviewed? Can peers be found who can review software
and would they do it? What would they be expected to do when reviewing?
3. What exactly would be reviewed? Functionality, appropriate ease of use for humanists, new
algorithms, multimedia...
4. Peer review is usually part of a larger process of publication. How would peer review
integrate into a publication cycle? What outcomes would there be for a community that
supported this?
Following each speaker, there will be an opportunity for discussion on specific points raised. After all
of the panelists have contributed, there will be a general discussion. Finally, ten minutes will be reserved at
the end to formulate a plan of action for future progress.
Despite the challenges involved, the Humanities Computing community is in urgent need of software
that is worthy of the sophisticated text encoding schemes that exist for editing and publication. In order to
spark a concerted and sustained effort of development, we need to establish mechanisms to recognise
institutionally the time and effort that are required, and the valuable contribution to Humanities Computing
scholarship that well developed software represents.
NOTES
1. For enlightening details on the history and function of peer review, see “Peer Review and Imprint”
in The Credibility of Electronic Publishing: A Report to the Humanities and Social Sciences Federation of
Canada, Ray Siemens (Project Co-ordinator), http://web.mala.bc.ca/hssfc/Final/PeerReview.htm.
2. See especially the 2000 MLA report entitled “Guidelines for Evaluating Work with Digital Media
in the Modern Languages,” http://www.mla.org/reports/ccet/ccet_guidelines.htm.
Hosted at University of Georgia
Athens, Georgia, United States
May 29, 2003 - June 2, 2003
83 works by 132 authors indexed
Conference website: http://web.archive.org/web/20071113184133/http://www.english.uga.edu/webx/