University of Nebraska–Lincoln
The MONK Project (http://www.monkproject.org/) has for the
last year been developing a Web-based system for undertaking
text analysis and visualization with large, full-text literary
archives. The primary motivation behind the development of
this system has been the profusion of sites like the Brown
Women Writers Project, Perseus Digital Library, Wright
American Fiction, Early American Fiction, and Documenting
the American South, along with the vast literary text corpora
offered by organizations like the Text Creation Partnership and
Chadwyck-Healey. Every one of these collections represents
fertile ground for text analysis. But if they could be somehow
combined, they would constitute something considerably
more powerful: a literary full-text corpus so immense as to be
larger than anything that has yet been created in the history
of computing.
The obstacles standing in the way of such a corpus are well
known. While all of the collections mentioned above are
encoded in XML and most of them are TEI-conformant, local
variations can be so profound as to prohibit anything but the
most rudimentary form of cross-collection searching. Tagset
inclusions (and exclusions), local extensions, and local tagging
and metadata conventions differ so widely among archives that
it is nearly impossible to design a generalized system that can
cross boundaries without a prohibitively cumbersome set of
heuristics. Even with XML and TEI, not all texts are created
equal.
TEI Analytics, a subset of TEI in which varying text collections
can be expressed, grew out of our desire to make MONK
work with extremely large literary text corpora of the sort
that would allow computational study of literary and linguistic
change over broad periods of time.
TEI Analytics
Local text collections vary not because archive maintainers
are contemptuous toward standards or interoperability, but
because particular local circumstances demand customization.
The nature of the texts themselves may require specialization,
or something about the storage, delivery, or rendering
framework used may favor particular tags or particular
structures. Local environments also require particular metadata
conventions (even within the boundaries of the TEI header).
This is in part why the TEI Consortium provides a number of
pre-fabricated customizations, such as TEI Math and TEI Lite,
as well as modules for Drama, transcriptions of speech, and
descriptions of manuscripts. Roma (the successor to Pizza
Chef) similarly allows one to create a TEI subset, which in turn
may be extended for local circumstances.
TEI Analytics, which is itself a superset of TEI Tite, is designed
with a slightly different purpose in mind. If one were creating a
new literary text corpus for the purpose of undertaking text
analytical work, it might make the most sense to begin with
one of these customizations (using, perhaps, TEI Corpus). In
the case of MONK, however, we are beginning with collections
that have already been tagged using some version of TEI with
local extensions. TEI Analytics is therefore designed to exploit
common denominators in these texts while at the same time
adding new markup for common analytical data structures
(like part-of-speech tags, lemmatizations, named entities,
tokens, and sentence markers). The idea is to create a P5-
compliant format that is designed not for rendering, but for
analytical operations such as data mining, principal component
analysis, word frequency study, and n-gram analysis. In the
particular case of MONK, such documents have a relatively
brief lifespan; once documents are converted, they are read in
by a system that stores the information using a combination
of object-relational database technology and binary indexing.
But before that can happen, the texts themselves need to be
analyzed and re-expressed in the new format.
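To make the target concrete, a tokenized sentence in such a format might look something like the following sketch. The element names are drawn from the TEI analysis module, but the attribute names and part-of-speech values shown here are illustrative rather than the exact TEI Analytics tagset:

  <s>
    <w lemma="call" pos="vvb">Call</w>
    <w lemma="i" pos="pno11">me</w>
    <w lemma="ishmael" pos="np1">Ishmael</w>
  </s>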
Implementation
Our basic approach to the problem involves schema harvesting.
The TEI Consortium’s Roma tool (http://tei.oucs.ox.ac.uk/
Roma/) was first used to create a base W3C XML schema for
TEI P5 documents, which we then extended using a custom
ODD file.
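An ODD customization of this kind is, roughly speaking, a selection of TEI modules plus project-specific adjustments. The following fragment is a simplified sketch of what such a file might contain, not the actual MONK ODD:

  <schemaSpec ident="tei-analytics" start="TEI">
    <!-- core infrastructure, header, and document structure -->
    <moduleRef key="tei"/>
    <moduleRef key="core"/>
    <moduleRef key="header"/>
    <moduleRef key="textstructure"/>
    <!-- the analysis module supplies <s>, <w>, and related markup -->
    <moduleRef key="analysis"/>
    <!-- rendering-oriented elements can be dropped for analysis -->
    <elementSpec ident="graphic" mode="delete"/>
  </schemaSpec>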
With this basis in place, we were able to create an XSLT
“meta-stylesheet” (MonkMetaStylesheet.xsl) that consults
the target collection’s W3C XML schema to determine the
form into which the TEI P4 files should be converted. This
initial XSLT stylesheet is a meta-stylesheet in the sense that it
programmatically authors another XSLT stylesheet. This second
stylesheet (XMLtoMonkXML.xsl), which is usually thousands
of lines long, contains the conversion instructions to get from
P4 to TEI Analytics’s custom P5 implementation. Elements
that are not needed for analysis are removed or re-named
according to the requirements of MONK (for example,
numbered <div>s are replaced with un-numbered <div>s).
Bibliographical information is critical for text analysis, and both
copyright and responsibility information must be maintained,
but much of the information contained in the average
<teiHeader> (like revision histories and records of workflow)
is not relevant to the task. For this reason, TEI Analytics uses
a radically simplified form of the TEI header.
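A header pared down along these lines might keep little more than the file description. The following is an illustrative sketch, not the actual TEI Analytics header:

  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>The Scarlet Letter</title>
        <author>Nathaniel Hawthorne</author>
      </titleStmt>
      <publicationStmt>
        <publisher>MONK Project</publisher>
        <availability>
          <!-- copyright and responsibility information is retained -->
          <p>Available for research purposes.</p>
        </availability>
      </publicationStmt>
      <sourceDesc>
        <p>Converted from a TEI P4 source; revision history omitted.</p>
      </sourceDesc>
    </fileDesc>
  </teiHeader>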
Here is a sample template in the meta-stylesheet (in a somewhat abbreviated form):
<xsl:template match="xs:element[@name = $listOfAllowableElements/*]">
  <!-- elements are identified in the MONK schema and narrowed
       to the list of allowable elements -->
  <xsl:element name="xsl:template">
    <!-- begins writing of 'xsl:template' elements in the
         final XMLtoMonkXML.xsl stylesheet -->
    <xsl:attribute name="match">
      <!-- writes the (approximately 122 unique) match
           attributes on the 'xsl:template' elements -->
      <xsl:choose>
        <xsl:when test="$attributeName = $attributeNameLowercase">
          <xsl:value-of select="@name"/>
        </xsl:when>
        <xsl:otherwise>
          <xsl:value-of
            select="concat(@name, ' | ', lower-case(@name))"/>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:attribute>
    <xsl:element name="{$attributeName}">
      <!-- writes the unique contents of each 'xsl:template'
           element in the XMLtoMonkXML.xsl stylesheet -->
      <xsl:for-each
        select="$associatedAttributeList/list/item[string-length(.) > 0]">
        <!-- only non-empty strings in the dynamically generated
             list of associated attributes are processed;
             zero-length strings are discarded -->
        <xsl:element name="xsl:copy-of">
          <!-- writes an 'xsl:copy-of' that copies the element's
               attributes, constrained to the list of attributes
               desired by MONK -->
          <xsl:attribute name="select">
            <xsl:value-of select="concat('@', .)"/>
          </xsl:attribute>
        </xsl:element>
      </xsl:for-each>
      <!-- writes an 'xsl:apply-templates' so the generated
           template processes child content -->
      <xsl:element name="xsl:apply-templates"/>
    </xsl:element>
  </xsl:element>
</xsl:template>
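When the meta-stylesheet is run, each such template writes a concrete template into XMLtoMonkXML.xsl. For the numbered-division case mentioned above, the generated output might look roughly like this; the attributes retained are illustrative:

  <xsl:template match="div1 | div2">
    <!-- numbered divisions are rewritten as un-numbered <div>s -->
    <div>
      <xsl:copy-of select="@type"/>
      <xsl:copy-of select="@n"/>
      <xsl:apply-templates/>
    </div>
  </xsl:template>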
All processes are initiated by a program (called Abbot) that
performs, in order, the following tasks:
1. Generates the XMLtoMonkXML.xsl stylesheet.
2. Edits the XMLtoMonkXML.xsl stylesheet to add the proper
schema declarations in the output files.
3. Converts the entire P4 collection to MONK’s custom P5
implementation.
4. Removes any stray namespace declarations from the output
files.
5. Parses the converted files against the MONK XML schema.
These steps are expressed in BPEL (Business Process Execution
Language), and all source fi les are retained in the processing
sequence so that the process can be tuned, adjusted, and rerun
as needed without data loss. The main conversion process
takes, depending on the hardware, approximately 30 minutes
for roughly 1,000 novels and yields files that are then analyzed
and tagged using MorphAdorner (a morphological tagger
developed by Phil Burns, a member of the MONK Project at Northwestern University). Plans are underway for a plugin
architecture that will allow one to use any of the popular
taggers (such as GATE or OpenNLP) during the analysis
stage.
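As a sketch of the orchestration, the five Abbot steps might be expressed in BPEL along the following lines; the process name, partner link, and operation names here are all hypothetical:

  <process name="AbbotConversion"
      targetNamespace="http://example.org/abbot"
      xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
    <sequence>
      <!-- 1. generate the XMLtoMonkXML.xsl stylesheet -->
      <invoke partnerLink="abbot" operation="generateStylesheet"/>
      <!-- 2. add the schema declarations to the stylesheet -->
      <invoke partnerLink="abbot" operation="addSchemaDeclarations"/>
      <!-- 3. convert the P4 collection to the custom P5 format -->
      <invoke partnerLink="abbot" operation="convertCollection"/>
      <!-- 4. remove stray namespace declarations -->
      <invoke partnerLink="abbot" operation="cleanNamespaces"/>
      <!-- 5. validate the output against the MONK schema -->
      <invoke partnerLink="abbot" operation="validateConverted"/>
    </sequence>
  </process>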
Conclusion
We believe that TEI Analytics performs a useful niche function
within the larger ecology of TEI by making disparate texts
usable within a single text analysis framework. Even without
the need for ingestion into a larger framework, TEI Analytics
facilitates text analysis of disparate source fi les simply by
creating a consistent and unified XML representation. We also
believe that our particular approach to the problem of XML
conversion (a small stylesheet capable of generating massive
stylesheets through schema harvesting) may be useful in other
contexts, including, perhaps, the need to convert texts from
P4 to P5.