Digital Text Resources for the Humanities - Legal Issues

panel / roundtable
Authorship
  1. Georg Rehm

    Universität Tübingen (University of Tübingen)

  2. Andreas Witt

    Universität Tübingen (University of Tübingen)

  3. Erhard Hinrichs

    Universität Tübingen (University of Tübingen)

  4. Timm Lehmberg

    Universität Hamburg (University of Hamburg)

  5. Christian Chiarcos

    University of Potsdam

  6. Felix Zimmermann

    Hannover University

  7. Heike Zinsmeister

    Universität Tübingen (University of Tübingen)

  8. Johannes Dellert

    Universität Tübingen (University of Tübingen)


The session "Digital Text Resources for the Humanities –
Legal Issues" consists of three papers that address the
legal aspects connected to several crucial phases of handling
text resources: collecting, compiling, curating, analysing,
distributing, and archiving text resources such as corpora, are
tasks carried out on a day-to-day basis by people involved in
fields such as, for example, humanities computing,
computational and corpus linguistics, information retrieval and
text mining. Despite the ubiquity of document collections, the
legal issues that are intrinsically tied to virtually all texts created
and published by third parties (most importantly, their
copyright, as well as privacy issues), do not typically attract a
lot of interest. Though these issues are acknowledged, they are
often regarded as rather insignificant for the research question
at hand, or a project does not have any jurisprudential expertise
to deal with legal issues in an adequate way. As a consequence,
distributing a corpus (for example, to other interested researchers) whose provenance is unknown or questionable, or
publishing excerpts from a document collection on a website,
may become next to impossible from a legal point of view. This
is why scholars often decide not to publish their collections (or
parts thereof) online at all, in order to avoid any potential legal
problems. The session aims to provide an overview of the
following legal aspects:
• The first contribution, "Language Corpora – Copyright –
Data Protection: The Legal Point of View" (Timm Lehmberg
and Felix Zimmermann), highlights the legal requirements that
hold with regard to the construction of digital text resources;
special emphasis is given to copyright and data protection (for
example, potential reasons for the need to anonymise text
corpora).
• The second presentation, "Collecting Legally Relevant
Metadata by Means of a Decision-Tree-Based Questionnaire
System" (Timm Lehmberg, Christian Chiarcos, Erhard
Hinrichs, Georg Rehm, and Andreas Witt), consists of two
parts: first, a web-based questionnaire is introduced that
was developed to capture the requirements research projects
have with regard to the archiving and distribution of their
corpora; second, initial results from a study that spans three
large research centres and more than 60 individual research
projects are reported.
• The final paper, "Corpus Masking: Legally Bypassing
Licensing Restrictions for the Free Distribution of Text
Collections" (Georg Rehm, Andreas Witt, Heike
Zinsmeister, and Johannes Dellert), introduces the idea of
masking an annotated text corpus whose original source
text collection is copyright-protected, so that the masked
version can be distributed without any restrictions;
furthermore, a fully working tool for masking an
XML-annotated corpus is presented.
The authors of the three papers are associated with a joint
project situated in three Collaborative Research Centres (SFB,
Sonderforschungsbereich) that are sponsored by the German
Research Foundation (DFG, Deutsche
Forschungsgemeinschaft): SFB 441 (Linguistic Data Structures,
Tübingen University), SFB 538 (Multilingualism, Hamburg
University), and SFB 632 (Information Structure, Potsdam
University). Each of these three research centres consists of
about 15 to 20 research projects. Most projects work with digital
text collections; in practically all cases these collections and
corpora are constructed by the respective researchers themselves.
A problem that people in digital humanities or computational
linguistics are often confronted with is that the sustainability
and reusability of corpora receive too little attention – or, in
the worst case, are ignored completely. Corpora are
often created for an application or for a project that has a very
specific research question, but when the project is finished it
becomes next to impossible (especially for third parties) to gain
access to the resource that took several months or maybe even
years to create. The joint project Sustainability of Linguistic
Data was therefore established to provide the conceptual,
technical and infrastructural basis for a solution to the problem
of sustainably archiving these digital text collections, addressing
issues as diverse as, for example, annotation and metadata
frameworks, best practice guidelines, legal issues of distributing
text collections, and unifying diverse tag sets by means of an
ontology.
Session Chairs: Georg Rehm and Andreas Witt
Language Corpora – Copyright – Data Protection:
The Legal Point of View
Felix Zimmermann and Timm Lehmberg
1 Introduction
Creating comprehensive and sustainable archives of linguistic
data and making them (or parts of them) accessible to the
research community leads to a number of essential legal
questions being raised by different aspects of law. Like any
discipline handling large amounts of data, the digital humanities
are confronted with a complex system of authorities and
restrictions. From acquisition, through storing and processing
to the annotation and finally publication of the data, there are
a number of rights as well as duties each participant in this
process has to consider. Additionally, some legal systems
provide special rules for the use of data for scientific purposes.
On the one hand the opacity of the legal position leads to the
assumption that, in many cases, linguistic data are used and
transferred in a way that does not comply with legal
requirements. On the other hand there is a noticeable tendency
not to transfer linguistic data for fear of breaking the law (see
Jüttner 2000 and Patzelt 2003).
2 Relevant Areas of Law
Two different areas of law play an important role in the use of
linguistic data for research purposes:
• "Intellectual Property Rights" provide legal protection of
non-material goods which are any kind of intellectual
property of a third party. This includes, amongst others,
literary works as well as databases, software and utility
patents. In terms of law language corpora are defined as
databases.
• "Privacy and Personal Data Protection Law" imposes
restrictions for the processing of any personal data, i.e., any
data that can be linked to an individual. In the face of
linguistic data processing any audio and video recordings (and their transcriptions) as well as metadata that contain
personal information on speakers are covered by this law.
Both areas are relevant to the complete process of data
processing and have to be considered from the initial step of
the data-based work (normally the acquisition of the data) to
the time of publication.
3 Aspects of National and International Law
In everyday legal practice a particularly relevant role is played
by those legislative rulesets that are based on constitutional
norms. Within these, interests and entitlements of other involved
individuals and institutions, which are worthy of protection,
are often outlined in minute detail in relation to the procurement,
processing, and transfer of linguistic primary data.
Federal states, which contain individual member states with
their own legislative authority (such as the US, Germany,
Switzerland, Austria, Spain) may have enacted specific member
state rules. This leads to the possibility that there may be
complex and potentially internally conflicting legislation within
a state in a federation.
It is not just, however, the original national legal situation which
regulates the use of linguistic data. International obligations
may, through direct or indirect applicability, have considerable
impact. In 2007, the 27 member states of the European Union
adhere to European legal instruments (such as directives and
regulations) in relation to the national and international use of
data. Pursuant to the doctrine of direct applicability enshrined
in Article 10 of the Treaty establishing the European
Communities, these norms have priority in relation to potentially
conflicting national norms. What needs to be borne in mind is
that the individual member states have some leeway in the
implementation of the instruments, which may lead to minute
differences in the level of protection.
Finally, public international treaties which oblige their
signatories to adhere to certain minimal standards need to be
taken into consideration. In relation to linguistic data and the
problem of copyright, the Copyright Treaty of the World
Intellectual Property Organisation (WIPO, 1996) is to be
considered as particularly relevant. The question of personality
rights with a view to individuals whose data are processed is
addressed in the Convention on Human Rights and Fundamental
Freedoms (1950). Additionally, the Convention for the
Protection of Individuals with regard to Automatic Processing
of Personal Data (1981) provides further normative guidance
for the member states of the European Union.
4 The Legal Impact of Intellectual Property
Copyright protection of language corpora is provided by
different aspects of applicable law. In order to simplify the
presentation, the focus will be on the rules harmonised by the
European Communities within the framework of the World
Intellectual Property Organisation (WIPO).
4.1 Directive 91/250/EC on the Legal Protection of Computer
Programs
The different tasks of linguistic data processing (transcription
as well as annotation etc.) require a considerable number of
software tools. For this purpose, apart from commercial
development, software is written by the research establishment's
employees. The participants in this process rarely bother with
legal protection of their work. As a result of the implementation
of Directive 91/250/EC, computer programs are protected by
copyright law in all Member States of the European Community. In
accordance with Article 1.3 of the Directive 91/250/EC, a
computer program is protected, if it is original in the sense that
it is the author's own intellectual creation. Ideas and principles
of a computer program are not protected by this Directive. The
term of protection is the author's lifetime plus a period of 50
years. The author owns the exclusive rights to reproduce,
translate, adapt and publicly distribute his computer program.
If a computer program has been created by an employee, in
accordance with Article 2.3 of the Directive 91/250/EC, the
employer is, unless otherwise provided by contract, the
copyright holder of the resource. In the case of software
developed within a research project, the copyright is, from this
point of view, held by the respective research establishment
(university etc.).
4.2 Directive 96/9/EC on the Legal Protection of Databases
In accordance with Article 1.2 of the Directive 96/9/EC, a
database is defined as a "collection of independent works, data
or other materials arranged in a systematic or methodical way
and individually accessible by electronic or other means".
Without exception, linguistic corpus data come under this
protection. This Directive makes two significant stipulations.
First, it offers protection by copyright to databases which, by
reason of the selection or arrangement of their contents,
constitute the author's own intellectual creation. Thereby the
author owns the exclusive right to carry out or authorise the
reproduction, alteration and distribution. Secondly, the Directive
creates an exclusive sui generis right for makers of databases,
independent of the degree of innovation. This protection of the
investment allows the makers of databases to prevent
unauthorised extraction and/or re-utilisation.
4.3 Copyright Directive 2001/29/EC
The Copyright Directive 2001/29/EC adapts European
Community legislation on copyright and related rights to reflect
technological developments. In this process, it harmonises the
rights of reproduction, communication and distribution.
Concerning linguistic
research data, attention should be paid to Article 5.3(a) of the Copyright Directive. It gives freedom to Member States in
supporting non-commercial science by making copyright less
restrictive for academic use of copyrighted work.
5 The Legal Impact of Data Protection
Directive 95/46/EC on the protection of individuals with regard
to the processing of personal data imposes strict restrictions on
the collection and utilisation of personal data. Personal data are
pieces of information which can be linked to a specific person.
The processing of personal data is only permitted by law if
there is a clear and lawful purpose at the time of data
procurement and if the respective person has expressed his/her
consent. Further restrictions are imposed if racial, national or
ethnic origin, political opinions, or religious or philosophical
beliefs are apparent. The same applies to the disclosure of health
conditions or sexual life. If personal data are transferred to
countries outside of the European Union (Transborder Dataflow
to third countries), a level of protection has to be guaranteed
that is equivalent to the European level, for example by means
of the Safe Harbour principles. The respective person may
enforce his/her rights by means such as disclosure and deletion
of the data. Article 6.2, Article 11.2 and Article 13.2 of the
Data Protection Directive contain privileges for academic
research. An escape strategy in respect of data protection law
problems is complete anonymisation (disguising by removing
personal information by abbreviating names, locations etc.) or
pseudonymisation (disguising by aliasing individuals, locations,
etc.) of the personal data. However, it remains unresolved which
level of abstraction constitutes sufficient anonymisation,
particularly if it is possible to draw conclusions by joining the
data with other resources.
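Purely as an illustration of the difference between these two strategies, the following Python sketch removes or aliases speaker names in a transcript. The name list, alias scheme, and example sentence are invented for this sketch and are not part of the tools discussed in this session.

```python
# Illustrative sketch only: the names and aliasing scheme are hypothetical.
PERSONAL_NAMES = ["Anna Schmidt", "Peter Meier"]

def anonymise(text):
    """Remove personal names entirely (irreversible)."""
    for name in PERSONAL_NAMES:
        text = text.replace(name, "XXX")
    return text

def pseudonymise(text):
    """Replace each personal name with a consistent alias."""
    aliases = {}
    for i, name in enumerate(PERSONAL_NAMES, start=1):
        aliases[name] = f"Speaker_{i:02d}"
        text = text.replace(name, aliases[name])
    # The alias table must be stored separately and protected,
    # otherwise the pseudonymisation can be reversed.
    return text, aliases

if __name__ == "__main__":
    transcript = "Anna Schmidt erzählt, dass Peter Meier in Hamburg wohnt."
    print(anonymise(transcript))
    print(pseudonymise(transcript)[0])
```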
Figure 1 gives an overview of the different types of right
holders to a database.
Figure 1: The different types of right holders
Legal Competence by Trusted Third Parties
An additional option is given by the use of a trusted third party
hosting the information that has been disguised by
anonymisation or pseudonymisation. It may act as a trustee,
passing the aliased or anonymised data from its origin to a
requesting research institution. The trusted party is not required
by law, but it has the ability to provide a high level of data
security, integrity and protection during the whole data
transaction process (Kilian et al. 1995, p. 63). Additionally, a
trusted party can provide specialist advice in technical and
copyright matters. Further, we suggest procedures to increase
legal certainty when creating and using linguistic databases.
Bibliography
Jüttner, Irmtraud. "Mannheimer Korpus und Urheberrecht. Die
Einbeziehung zeitgenössischer digitalisierter Texte in die
computergespeicherten Korpora des IDS und ihre juristischen
Grundlagen." Sprachreport 3 (2000).
Kilian, Wolfgang. "Daten für die Forschung im
Gesundheitswesen." Gutachten II. Toeche-Mittler Verlag, 1995.
57-76.
Patzelt, Johannes. "Unter juristischem Blickwinkel: Textkorpora
und Urheberrecht." Korpuslinguistik deutsch: synchron –
diachron – kontrastiv: Würzburger Kolloqium 2003,. Ed.
Werner Wegstein and Johannes Schwitalla. Würzburg, 2003.
Collecting Legally Relevant Metadata by Means
of a Decision-Tree-Based Questionnaire System
Timm Lehmberg, Christian Chiarcos, Erhard
Hinrichs, Georg Rehm, and Andreas Witt
1 Introduction and Overall Concept
Most metadata standards used for corpus linguistic purposes
(TEI, OLAC, IMDI etc., for a complete overview see Lehmberg
and Wörner 2007) require elements that contain legal
information about the rights holder to the particular resource
and/or its accessibility. Normally these metadata elements are
kept very abstract and neither distinguish between the different
types of personal rights nor consider the option of multiple
copyright holders.
The legal situation upon which the evaluation of linguistic data
to be used for scientific purposes is based is clearly defined,
but too complex to be understood completely by non-experts.
Furthermore, it varies from one country to the other and is in
a constant state of flux. In the framework of our joint sustainability initiative (see the
introduction to this session), a large number of heterogeneous
corpora have been acquired from multiple sources and multiple
projects, and processed with regard to different individual
requirements (Schmidt et al. 2006). This heterogeneity is
responsible for the problem that the legal metadata that need
to be collected strongly vary with regard to the respective corpus
and data situation. Only for a small number of the projects
associated with our sustainability initiative are detailed sets of
legal metadata readily available that inform a potential user of
the corpus about, for example, stipulations or copyright holders.
For the majority of projects and corpora, this task has to be
performed retroactively.
Figure 1: A concept map visualising the query structure
Facing the complexity of the legal context (see Zimmermann
and Lehmberg, in this session), it is almost impossible for
non-experts to evaluate the situation of their language data and
to extract the relevant metadata without professional advice.
To reduce the complexity of this task, concept maps were
created with the goal of making the legal situation as well as
the legal terminology transparent and understandable to
non-professionals. Unlike mind maps, which are primarily used
for the (often spontaneous and intuitive) mapping of ideas and
processes, the technique of concept mapping is intended more
for knowledge modelling: concepts are represented by nodes,
and links represent the relations between them.
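As a rough illustration of this representation (not the project's actual legal concept maps), a concept map can be modelled in Python as a set of concept nodes plus labelled links; the concepts and relations below are invented examples.

```python
# Minimal sketch: concepts are nodes, labelled links represent relations.
# The concepts and relations are invented for illustration.
concept_map = {
    "nodes": {"corpus", "personal data", "data protection law", "copyright"},
    "links": [
        ("corpus", "may contain", "personal data"),
        ("personal data", "is regulated by", "data protection law"),
        ("corpus", "is protected by", "copyright"),
    ],
}

def neighbours(cmap, concept):
    """Return all (relation, concept) pairs directly linked to a concept."""
    return [(label, target) for source, label, target in cmap["links"]
            if source == concept]

print(neighbours(concept_map, "corpus"))
```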
As a utility to create the concept maps modelling the legal
situation within our joint sustainability initiative we used
CmapTools, a program distributed by the Institute for Human
and Machine Cognition (IHMC). IHMC CmapTools provides
a client/server architecture that allows users at different
locations to work collectively on concept maps and to discuss
their structure and content online.
Based on these schemata and following the principles of
decision-trees, we built an additional concept map representing
the query structure of a questionnaire. Digressing from the
original principles of concept mapping mentioned above, in
this map queries are represented as nodes whereas responses
are represented as links between them. The primary query given
in the centre node (see figure 1) corresponds to two central
aspects of law (data protection and copyright, see Zimmermann
and Lehmberg, in this session). Each response leads to a large
number of additional queries that, depending on the user's
response, in turn have subordinate queries. Further sections of
the concept map deal with the accessibility of the data as well
as the respective principles and standards of data processing.
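A minimal sketch of how such a decision-tree questionnaire might be traversed is given below. The queries, responses, and node names are invented placeholders rather than the wording actually used in the project.

```python
# Each query node carries its question text and a mapping from possible
# responses to follow-up nodes (None marks the end of a branch).
# All questions below are invented placeholders.
QUERIES = {
    "start": ("Does the corpus contain personal data?",
              {"yes": "consent", "no": "copyright"}),
    "consent": ("Did the subjects give written consent?",
                {"yes": "copyright", "no": "anonymise"}),
    "anonymise": ("Has the corpus been anonymised or pseudonymised?",
                  {"yes": "copyright", "no": None}),
    "copyright": ("Were the source texts produced by third parties?",
                  {"yes": None, "no": None}),
}

def run(queries, node="start"):
    """Walk the decision tree interactively, recording every answer."""
    answers = {}
    while node is not None:
        question, branches = queries[node]
        response = input(f"{question} {sorted(branches)} ").strip().lower()
        answers[node] = response
        node = branches.get(response)   # unknown answers simply end the walk
    return answers
```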
In the same manner we modelled the query structure that surveys
the meta-information that ideally has been collected during the
compilation of the corpora. It therefore contains queries asking
for established metadata standards (TEI, DC, OLAC, IMDI etc.)
that may have been used and, if necessary, asks for additional
information.
Since IHMC CmapTools can export concept maps into an
XML-based format, the content and structure of a concept map
can be processed automatically to create the web-based
questionnaire that is described in the following section.
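The exact schema of the CmapTools export is not described here, so the following sketch assumes a simplified, hypothetical XML format with <concept> and <connection> elements; it only illustrates the general idea of turning an exported map into query records for the questionnaire.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified stand-in for a concept map export;
# the real CmapTools export format may differ.
EXPORT = """<map>
  <concept id="q1" label="Does the corpus contain personal data?"/>
  <concept id="q2" label="Were the source texts produced by third parties?"/>
  <connection from="q1" to="q2" label="no"/>
</map>"""

def extract_queries(xml_string):
    """Turn concept nodes into query records and connections into response edges."""
    root = ET.fromstring(xml_string)
    queries = {c.get("id"): {"text": c.get("label"), "responses": {}}
               for c in root.iter("concept")}
    for edge in root.iter("connection"):
        queries[edge.get("from")]["responses"][edge.get("label")] = edge.get("to")
    return queries

print(extract_queries(EXPORT))
```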
The complete concept map structures will be demonstrated in
conjunction with example scenarios in our presentation.
2 Implementation
As the questionnaire has to be accessible from different research
project locations, it has been implemented using an XAMP (any
operating system, Apache, MySQL and PHP) architecture to
create a user-friendly, web-based interface. The conceptual
structure represented by the concept map is transformed into a
relational database model. Accordingly, it is possible both to
model the tree structure of the queries (Celko, 2004) and to
save responses to these questions within the database.
Additionally, the database includes user data (as well as user
access control data) and links them to the metadata sets of the
resource being acquired by the questionnaire.
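The paper does not spell out the database schema; a minimal sketch of one possible relational model (a query tree stored as an adjacency list, plus per-corpus responses) is given below, with all table and column names assumed for illustration.

```python
import sqlite3

# Assumed minimal schema: queries form a tree via parent_id (adjacency list);
# responses link a registered corpus to a query and the answer given.
SCHEMA = """
CREATE TABLE query (
    id        INTEGER PRIMARY KEY,
    parent_id INTEGER REFERENCES query(id),  -- NULL for the root query
    text      TEXT NOT NULL
);
CREATE TABLE corpus (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL,
    owner TEXT NOT NULL                      -- user who registered the corpus
);
CREATE TABLE response (
    corpus_id INTEGER REFERENCES corpus(id),
    query_id  INTEGER REFERENCES query(id),
    answer    TEXT,                          -- NULL: not yet answered
    PRIMARY KEY (corpus_id, query_id)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)

# Queries without an answer for a given corpus could feed the progress overview.
unanswered = """
SELECT q.text FROM query q
LEFT JOIN response r ON r.query_id = q.id AND r.corpus_id = ?
WHERE r.answer IS NULL
"""
print(conn.execute(unanswered, (1,)).fetchall())
```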
Figure 2: A web-based wizard guides the user through the questionnaire
The user interface is generated by a script that parses the
database and guides the user through the questionnaire tree with the help of a web-based wizard (see figure 2). This architecture
has several advantages:
Figure 3: An overview page gives information about the data collection progress
• Subordinate queries that refer to specific details of some
legal aspects can automatically be skipped if they become
superfluous. For instance, there is no need to query
contractual agreements with subjects if the corpus does not
contain any personal data.
• The data model provides users with the option of registering
multiple corpora and running the questionnaire wizard
individually for each of them. Furthermore, users can share the data they
entered into the system with other registered users so that
it is possible to edit the data across project locations (for
example, queries can be skipped, answered later, or left to
other users).
• Should the structure or content of the questionnaire tree be
changed, the database will be modified accordingly. If the
change leads to unanswered queries, this will be indicated
to the user in a status page. For this reason, every user
account has an overview page that gives information about
the state of progress of every registered resource (see figure
3).
• The questionnaire includes queries about metadata content
and standards that already have been applied to the
registered corpora, so that users do not have to insert
redundant information already contained in existing
metadata sets.
• Administrator users have unlimited access to all data in the
database, so that users can be provided with support, if
needed.
We are currently in the process of collecting legally relevant
metadata from about 60 different research projects with the aid
of the questionnaire system described in this paper. The content
and structure of the concept maps are available on our project
homepage at <http://www.sfb441.uni-tuebingen.de/c2/>.
Bibliography
Celko, Joe. Trees and Hierarchies in SQL for Smarties. San
Mateo: Morgan Kaufmann, 2004.
Lehmberg, Timm, and Kai Wörner. "Annotation Standards."
Corpus Linguistics, Handbücher zur Sprach- und
Kommunikationswissenschaft (HSK). Ed. Anke Lüdeling and
Merja Kytö. Berlin: de Gruyter, In press.
Schmidt, Thomas, Christian Chiarcos, Timm Lehmberg, Georg
Rehm, Andreas Witt, and Erhard Hinrichs. "Avoiding Data
Graveyards: From Heterogeneous Data Collected in Multiple
Research Projects to Sustainable Linguistic Resources."
Proceedings of the E-MELD 2006 Workshop on Digital
Language Documentation: Tools and Standards – The State of
the Art, East Lansing, Michigan, June 2006. 2006.
Corpus Masking: Legally Bypassing Licensing
Restrictions for the Free Distribution of Text
Collections
Georg Rehm, Andreas Witt, Heike Zinsmeister,
and Johannes Dellert
1 Introduction
Though XML-annotated text collections are commonplace in
humanities computing, the value of the annotation is often
underestimated, as interesting applications can be realised by
ignoring the content and considering the annotation exclusively.
At the same time, the distribution of text collections (e. g.,
linguistic resources) is often restricted by rigid licence
agreements. Usually, a corpus consists of a source text
collection (STC) acquired from third parties such as web sites
or publishers, and annotation layers that refer to, for example,
structural or linguistic properties. In practically all cases the
STC is a copyrighted property, so that it is up to the copyright
holder to decide if, and under which conditions, the corpus - a
crucial part of which is the STC - can be made available to the
public or to the research community.
The example we use in this paper is TüBa-D/Z ("Tübingen
Treebank of Written German" (Telljohann et al, 2004 &
Telljohann et al, 2006)). This manually annotated treebank is
based on a CD ROM that contains an archive of the issues the
newspaper die tageszeitung (taz) has published since 1986. If
a researcher (the licencee) wants to obtain TüBa-D/Z, available
for academic purposes free of charge, he or she has to sign a
licence agreement with Tübingen University's Linguistics
Department (the licencer) which states that the licencer is the
copyright holder of the annotation and that the STC, as
published on the taz CD ROM, is copyrighted by contrapress media GmbH. The licencee has to certify that he, she or the
institution the person works for has a valid licence for this CD
ROM.1
Figure 1: Masking linguistic corpora by example of the TüBa-D/Z treebank
We propose the notion of corpus masking, i. e., obfuscating the
STC, but not the annotation layer(s): the STC is "removed", so
that the original licensing restrictions no longer hold for the
"new" resource. The advantage is that the valuable annotation
information can be made available for free (see figure 1).2
2 Corpora – Licence Restrictions – Sustainability
When linguists have created a corpus it can become quite
difficult to gain access to the corpus once the project is finished.
In an ideal world, academics can turn to a sustainability
initiative in order to archive their datasets and to make them
available to other researchers, e. g., by means of a web-based
corpus platform (Dipper et al, 2006 & Schmidt et al 2006).
Apart from issues such as providing standardised markup
languages and metadata sets (Chiarcos et al, 2006 & Wörner
et al, 2006), sustainability initiatives have to take the copyright
of the original data into account.
We developed a tool that is able to mask corpora on the fly.
Should someone who is interested in a corpus that is available
under a rigid licence model not have a valid STC licence, he
or she can still receive the corpus, albeit in masked form. A
corpus can potentially be associated with several accessibility
regulations: full access to TüBa-D/Z requires a licence for the
taz CD ROM, whereas masked versions can be placed under,
say, the GNU Free Documentation or a Creative Commons
Licence. Therefore, a sustainability initiative has to come up
with a flexible system of representing the relationships and
dependencies between the STC and the different annotation
layers and their individual licence restrictions.
3 How to Mask Linguistic Resources
The easiest option to obfuscate an annotated corpus is to remove
the text. A less radical solution substitutes every STC character
with, for example, "x" and every digit with "0". In addition to
preserving word length, this process retains information on
upper and lower case by substituting these with "x" and "X"
(Toms & Campbell, 1999).
We developed CorpusMasker, a Java-based tool for the
parameterised masking of linguistic resources represented as
XML documents. The XML element(s) or attribute(s) that
comprise the actual words or tokens to be masked (in the case
of TüBa-D/Z, the <orth> element) can be specified to handle
arbitrary annotation schemes. CorpusMasker features a
dictionary approach: after collecting all word forms, every word
is mapped onto a randomly generated string and replaced by
that string. Word length can be retained, as well as information
on the distribution and positioning of vowels and consonants.
If a word is usually written with an initial lower case character
and that word appears with an initial upper case character, the
same randomised word is used (e. g., "dort" -> "kulp", "Dort"
-> "Kulp"). CorpusMasker performs an affix analysis that is
similar to morphology induction. The algorithm analyses certain
words, masks the roots, but retains the affixes, so that the text
is masked but valuable linguistic information that in itself is
insufficient to reconstruct the source text or even to interpret
the masked text, is kept intact for further analysis. Parameterised
masking can be performed with several different degrees of
retaining linguistic information, from the complete removal of
the STC to a rather light but sufficient masking that keeps, e.
g., closed word classes unchanged (see table 1; affixes are
marked in italics).3
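The following simplified sketch illustrates the dictionary approach (stable random replacements that preserve word length, vowel/consonant positions, and initial capitalisation); it omits the affix analysis and is not the actual CorpusMasker implementation.

```python
import random

# ASCII-only vowel/consonant model; a real tool would also cover umlauts etc.
VOWELS = "aeiou"
CONSONANTS = "bcdfghjklmnpqrstvwxyz"

def random_word(model):
    """Generate a pseudo-word with the same length and the same
    vowel/consonant pattern as the (lower-cased) model word."""
    return "".join(random.choice(VOWELS if ch in VOWELS else CONSONANTS)
                   for ch in model)

def mask_tokens(tokens):
    """Dictionary approach: every distinct word form receives one stable
    random replacement; initial capitalisation is carried over, so that
    'dort' and 'Dort' share the same masked form (e.g. 'kulp'/'Kulp')."""
    mapping = {}
    masked = []
    for token in tokens:
        if not token.isalpha():          # keep punctuation, numbers etc.
            masked.append(token)
            continue
        key = token.lower()
        if key not in mapping:
            mapping[key] = random_word(key)
        word = mapping[key]
        masked.append(word.capitalize() if token[0].isupper() else word)
    return masked

print(mask_tokens(["Dort", "steht", "ein", "Haus", ",", "dort", "steht", "es", "."]))
```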
Linguistic corpora often contain POS information so that the
randomisation process results in a list that could act as a key
to unlock the masked corpus, i. e., to reconstruct the STC. As
publication of this complete list would contradict the purpose
of the tool, we will only provide a reduced version of the file
so that the randomly generated words can be mapped onto POS
tags.
Notes
2. The institution that created the annotation holds its copyright and
can decide the distribution conditions. As modern corpora may
comprise several annotation layers created by more than one
research group, each group can be considered the creator of its
annotation layer and can decide its terms of distribution (as a
consequence, every annotation layer should potentially comprise
a complete metadata record). Commercially available software
tools that were used in the annotation process might restrict the
terms of distribution of the resulting data set as well.
3. After DH 2007, a downloadable version of CorpusMasker will be
available on our web site under an Open Source licence
(<http://www.sfb441.uni-tuebingen.de/c2/>).
4. For centuries, typographers and graphic designers have used the
"Lorem ipsum dolor sit amet" text fragment to evaluate new layouts
without resorting to writing actual text. The blind text gives the impression
of a natural distribution of characters and whitespace without
distracting the reader by conveying any meaning that could be
interpreted intuitively. This approach might be useful for
visualising masked corpora by means of XML to SVG
transformations (Piez, 2004).
5. In a message posted to Corpora-List on Aug 19th, 2006, Péter
Halácsy suggested an interesting method to distribute a copyrighted
corpus under "fair use" conditions. Part of the copyright notice
Halácsy et al. apply to the Creative Commons-based licence of
the "Hunglish" corpus (D. Varga et al, 2005) reads: "We prevented
the illegal use of copyrighted material by shuffling the texts at
sentence level. This form is still useful for research purposes, while
it does not infringe upon the rightholders' interests."


Conference Info

ADHO - 2007

Hosted at University of Illinois, Urbana-Champaign

Urbana-Champaign, Illinois, United States

June 2, 2007 - June 8, 2007

106 works by 213 authors indexed

Series: ADHO (2)

Organizers: ADHO
