Integration of Distributed Text Resources by Using Schema Matching Techniques

Eckart, Thomas, Natural Language Processing Group, Institute of Computer Science, University of Leipzig, Germany, teckart@e-humanities.net
Pansch, David, Natural Language Processing Group, Institute of Computer Science, University of Leipzig, Germany, dpansch@eaqua.net
Büchler, Marco, Natural Language Processing Group, Institute of Computer Science, University of Leipzig, Germany, mbuechler@e-humanities.net
Scattered Landscapes
The world of the humanities has seen enormous growth in available digital text resources in the last decade. This development was driven by many factors: advances in relevant technologies such as OCR, increasing competence in the field of digital encoding and publication, and the spread of widely accepted encoding formats. It is by now widely understood (among both researchers and funders) that publications and created resources have to be standardized to ensure their relevance for future work and their (re-)use in a linked data environment (Bizer et al., 2009).

More and more projects, some with very specific research questions, are working on encoding their results in various (mostly XML-based) formats. Encoding standards and formats have been established that are widely used and supported by various tools and an active community. In the field of encoding textual resources, the standards of the Text Encoding Initiative (TEI) and their various dialects are well represented. To cover a wide range of data it is common to create a specific dialect that best fits a project's own data without losing all compatibility with other projects. As a consequence, various encoding variants exist that deal with similar data but use different schemata to represent them. A drawback of this specialization is the difficulty of aggregating existing data stocks into one global resource: even combining only the metadata of editions of the same work becomes an expensive (because labor-intensive) task. Creating true hypertextuality in digital libraries (Berti et al., 2009), which will massively connect resources in a distributed infrastructure, will intensify this problem. Hence data integration will gain relevance in the field of distributed resources.

Since data storage solutions like relational database management systems are often used (for example, when fast access to the data or complex queries are necessary), this issue does not only apply to the XML data model. These systems often use ETL procedures (Extract, Transform, Load) to obtain uniform and comparable data. The key problem remains the same: a lot of time is spent obtaining a clean stock of data and consistent metadata. Experience shows that, especially in projects using quantitative approaches with a demand for large (and as homogeneous as possible) data sets, up to one third of all human resources is needed to overcome different kinds of heterogeneity.

This paper concentrates on the question of how existing schema matching techniques, established in data warehousing and information integration, can be used to identify identical structures in different editions of the same source material. A high similarity of the content can therefore be assumed, whereas structural and semantic heterogeneity prevents fast integration. As all modern storage solutions rely on schema definitions, the task was not to identify corresponding element instances of two documents but to identify correspondences between collections of documents. For this reason, in the following the term 'element' will be used for the set of all elements having the same position in one schema (for example, all TEI/teiHeader/fileDesc/titleStmt/title elements in a set of XML files, or all values for one column work-name in a relational database table).
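As an illustration, the following minimal sketch (Python; the file layout and function names are hypothetical, not the authors' code) collects such element sets from a collection of XML files by mapping every root-to-node path to all text values found at that path:

```python
# Sketch: gather the set of all elements sharing the same path
# across a collection of XML files (names are illustrative).
import glob
import xml.etree.ElementTree as ET
from collections import defaultdict

def collect_element_sets(xml_glob):
    """Return a dict: path -> list of text values across all files."""
    element_sets = defaultdict(list)

    def walk(node, path):
        if node.text and node.text.strip():
            element_sets[path].append(node.text.strip())
        for child in node:
            walk(child, path + "/" + child.tag)

    for filename in glob.glob(xml_glob):
        root = ET.parse(filename).getroot()
        walk(root, root.tag)
    return element_sets

# e.g. sets["TEI/teiHeader/fileDesc/titleStmt/title"] would hold
# every title occurring in the collection.
sets = collect_element_sets("corpus/*.xml")
```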

This approach has two advantages: generic profiles for every schema element can be created (thus minimizing the effects of outliers) and computational time is reduced by minimizing the number of comparisons that have to be made to find useful schema mappings.

To illustrate the procedure, different versions of the Duke Databank of Documentary Papyri (DDbDP) were used: the Perseus version of the DDbDP, its EpiDoc-encoded equivalent (Epiduke), and an extraction from the latter, stored in a flat relational schema. These data sets serve only as a first testing environment; further evaluation on other text types is in progress.

General Approach
The whole process of finding corresponding elements or larger element structures can be separated into three major working steps:

Fingerprinting: By using various features a fingerprint is created for every element, taking different element properties into account.
Linking: Pairs of elements from both schemata that are likely matching candidates are selected.
Scoring: Every linked pair is scored by a similarity measure.
To identify corresponding elements of two schemata, various features are computed for every addressable element. These features address different types of similarity, such as structural similarity (with a focus on schema information) or semantic similarity (with a focus on the elements' content). Most of the used features do not depend on specific structures or access methods; hence every addressable element can be used and compared. As a consequence, XML or SGML documents, columns in a relational database, or any other (semi-)structured input schema can be used. Existing work has shown that using only a single feature is not sufficient to identify similarity (Algergawy et al., 2009). For this reason, all measures are combined and normalized by a weighted sum.
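A minimal sketch of this weighted combination (the measure functions and weights are placeholders, not the authors' tuned configuration):

```python
# Sketch: combine several similarity measures by a weighted sum,
# normalized back into [0, 1].
def combined_similarity(elem_a, elem_b, measures, weights):
    """measures: name -> function(elem_a, elem_b) returning a value
    in [0, 1]; weights: name -> non-negative weight."""
    total = sum(weights.values())
    score = sum(weights[name] * measure(elem_a, elem_b)
                for name, measure in measures.items())
    return score / total
```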

As there is a wide range of syntactic and semantic ambiguities, a fully automated matching is unlikely to be achieved. Hence the goal is to establish an integration procedure that allows a more efficient handling of new data resources and minimizes the effort of integrating them into an existing data stock.

Methodology
Used Features
A wide range of different features is known in the field of data integration. Some of these make use of structural schema information (schema-based), while others use the elements' content (instance-based). A constraint-based approach checks the type and limitations of the data, e.g. the domain of numbers or differences in the cardinality or uniqueness of elements. These features work on different levels: some concentrate on the combination of elements, their hierarchy, or their number of child nodes (structure level), while others focus on individual elements and their attributes (element level).

The following features were tested for their usefulness for the described problem. Table 1 gives a short overview of the used approaches and their classification (based on Rahm and Bernstein, 2001); minimal implementation sketches of the measures follow after the table.

Name similarity uses the Levenshtein distance to compute the string similarity of database column names or XML element names. For example, an element named authors has a distance of 1 from an element named author, but a distance of 5 from the string work.
Path similarity compares the structural depth of elements, under the assumption that similar elements have similar positions (and therefore similar distances to the root element) in their respective schemata.
Cosine similarity uses the Vector Space Model, representing the content (i.e. the occurring terms) of every element as a vector in a high-dimensional vector space. To reflect the different importance of terms (for example, stop words versus domain-specific keywords), all terms are weighted using the tf-idf measure (Salton and Buckley, 1988). The resulting vectors are compared using the cosine similarity: $sim_{cos}(p_1, p_2) = \frac{v_{p_1} \cdot v_{p_2}}{|v_{p_1}| \cdot |v_{p_2}|}$.
Dice coefficient calculates the ratio of words that appear in both compared elements to all occurring words: $sim_{dice}(p_1, p_2) = \frac{2 |W_{p_1} \cap W_{p_2}|}{|W_{p_1}| + |W_{p_2}|}$. For example, an element that contains the words {bank, money, account, credit} is similar to an element containing the words {bank, money, account, financial} (Dice coefficient = 0.75), but less similar to an element containing the words {bank, river, water} (Dice coefficient ≈ 0.29).
Frequency similarity is based on the assumption that similar content is encoded by a similar number of elements. This measure therefore produces a high value if the numbers of occurrences of the compared elements are similar.
Content type compares the ratio of numbers to letters. Hence an element containing mostly numbers becomes dissimilar to an element containing mostly textual data.
All results were normalized to the interval [0, 1] (where necessary), '0' corresponding to no similarity and '1' to identity.
Features that address the elements' content use all available data: for example, the union of all text addressable with the same XPath expression in a collection of XML files, or all data in a column of a relational schema.

Table 1: Overview of used measures and their classification

Similarity measure | Schema-based | Instance-based | Constraint-based | Structure level | Element level
Name               |      x       |                |                  |                 |      x
Path               |      x       |                |                  |        x        |
Cosine             |              |       x        |                  |                 |      x
Dice               |              |       x        |                  |                 |      x
Frequency          |              |       x        |        x         |        x        |      x
Content type       |              |       x        |        x         |                 |      x
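The following sketches (Python; names and signatures are illustrative, not the authors' implementation) show minimal versions of the measures described above, operating on element profiles, i.e. all text values gathered for one path or column. For brevity, the cosine sketch uses raw term frequencies where the paper weights terms with tf-idf:

```python
# Minimal sketches of the similarity measures; all return values
# in [0, 1], with 1 meaning identity.
import math
import re
from collections import Counter

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def name_similarity(name_a, name_b):
    """Edit distance normalized by the longer name's length."""
    if not name_a and not name_b:
        return 1.0
    return 1.0 - levenshtein(name_a, name_b) / max(len(name_a), len(name_b))

def path_similarity(path_a, path_b):
    """Compare structural depth (distance to the root element)."""
    da, db = path_a.count("/"), path_b.count("/")
    return 1.0 - abs(da - db) / max(da, db, 1)

def dice_coefficient(texts_a, texts_b):
    """Ratio of shared word types to all word types in both elements."""
    wa = set(w for t in texts_a for w in t.split())
    wb = set(w for t in texts_b for w in t.split())
    if not wa and not wb:
        return 0.0
    return 2 * len(wa & wb) / (len(wa) + len(wb))

def cosine_similarity(texts_a, texts_b):
    """Cosine of term vectors (tf-idf weighting omitted here)."""
    va = Counter(w for t in texts_a for w in t.split())
    vb = Counter(w for t in texts_b for w in t.split())
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def frequency_similarity(count_a, count_b):
    """High if both elements occur a similar number of times."""
    return min(count_a, count_b) / max(count_a, count_b, 1)

def content_type_similarity(texts_a, texts_b):
    """Compare the ratio of digits to letters in both elements."""
    def digit_ratio(texts):
        joined = "".join(texts)
        digits = len(re.findall(r"\d", joined))
        letters = len(re.findall(r"[^\W\d_]", joined))
        return digits / (digits + letters) if digits + letters else 0.0
    return 1.0 - abs(digit_ratio(texts_a) - digit_ratio(texts_b))
```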
Linking
Experiments have shown that many elements in the chosen XML collections occur very rarely. To reduce the computation time, only elements occurring in at least 50 percent of all documents of the respective collection were therefore considered. All remaining elements of the two compared data sets were linked pairwise.
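A sketch of this pre-linking filter (the 50-percent threshold is taken from the text; function names are illustrative):

```python
# Keep only paths occurring in at least half of the documents, then
# pair every surviving path of schema A with every surviving path
# of schema B (Cartesian product).
import itertools

def frequent_paths(docs_per_path, total_docs, threshold=0.5):
    """docs_per_path: path -> number of documents containing it."""
    return {path for path, n in docs_per_path.items()
            if n / total_docs >= threshold}

def candidate_links(paths_a, paths_b):
    return list(itertools.product(paths_a, paths_b))
```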

In general, a more sophisticated approach would be useful to minimize the number of comparisons. This holds especially true in a distributed environment, where network response and transmission times are a limiting factor. It was not considered in this work, as the focus was on identifying useful features and all resources were locally available.

Scoring and Results
The values of all similarity measures are combined by a weighted sum, yielding a similarity value between '0' (no similarity) and '1' (identity). Starting from identical weights for all measures, the weights were iteratively adjusted to improve the matching precision.
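The paper does not spell out the adjustment procedure; one plausible realization is a simple coordinate search that perturbs one weight at a time and keeps changes that improve precision against a small hand-verified gold mapping (precision_fn is hypothetical):

```python
# Illustrative weight tuning, not the authors' method: hill-climb
# each weight by a fixed step, keeping improvements.
def tune_weights(weights, precision_fn, step=0.1, rounds=10):
    """weights: measure name -> weight; precision_fn(weights) returns
    the matching precision in [0, 1] on a gold mapping."""
    best = precision_fn(weights)
    for _ in range(rounds):
        for name in weights:
            for delta in (step, -step):
                trial = dict(weights)
                trial[name] = max(0.0, trial[name] + delta)
                score = precision_fn(trial)
                if score > best:
                    weights, best = trial, score
    return weights, best
```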

The results show that especially the instance-based approaches (Dice and cosine) were successful in identifying matching elements. These measures worked well on both XML-XML and relational-database-XML comparisons. All other measures turned out to be strongly dependent on the compared formats.

Especially structural differences between optimized (and redundancy-free) relational schemata and XML documents prevent good results when relying solely on schema information: in these cases matching precision drops significantly. Only instance-based measures (Dice, Cosine, and content type similarity) achieved good results, whereas all other measures could be ignored (hence weighted with 0).

The XML-XML analysis shows a more balanced result: as both document collections are based on subsets of TEI formats, many relevant elements are similar with regard to their name and position in the XML structure tree. Therefore, name similarity in particular turned out to be a useful indicator for matching elements. Nonetheless, instance-based measures again performed best in these comparisons.

Visualization
A graphical user interface was created to visualize the pairwise similarity of elements. In a matrix (cf. Figure 1), every row stands for an element of document collection 1 and every column for an element of document collection 2 (or a database column in the case of a relational schema). Each cell contains the weighted sum of similarity values for the two respective elements. High values are emphasized with a strong green color. A tooltip on each cell gives additional information about the comparison results (used features and similarity values).
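A minimal sketch of such a matrix view (matplotlib stands in for the original custom interface; tooltips are omitted):

```python
# Render a similarity matrix as a green heatmap: rows are elements
# of collection 1, columns are elements of collection 2.
import numpy as np
import matplotlib.pyplot as plt

def plot_similarity_matrix(scores, row_labels, col_labels):
    """scores: 2-D array of weighted similarity sums in [0, 1]."""
    fig, ax = plt.subplots()
    ax.imshow(np.asarray(scores), cmap="Greens", vmin=0.0, vmax=1.0)
    ax.set_xticks(range(len(col_labels)))
    ax.set_xticklabels(col_labels, rotation=90)
    ax.set_yticks(range(len(row_labels)))
    ax.set_yticklabels(row_labels)
    fig.tight_layout()
    plt.show()
```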

Figure 1 shows an excerpt of a result matrix in which a relational database is compared to a collection of XML files. Every column represents a database column of the extracted version of the Epiduke; every row represents a path in Perseus' DDbDP XML files. As an example, the element placeName from the collection of XML files has a strong similarity to the geography column in the database, even though different element names were used. Analyzing their content makes it obvious that there is a strong extensional overlap between these elements' content. On the other hand, there are no similar XML elements for the database columns female (information about the author's gender) and work_id (a database-specific identifier for every work). This is because both columns were created during the extraction process (or the subsequent post-processing) and therefore have no corresponding elements in the original material.

Figure 1: Graphical output for the comparison of epigraphic data held in Perseus' DDbDP XML and an extracted version of the Epiduke.


Conclusions and Further Work
Various comparisons have shown that especially the semantic approaches are promising for identifying similar elements. Obviously, these measures' results will degrade when data with a very small semantic overlap is compared (like editions from different domains or languages). As a consequence, structural information could be taken into account. For the analyzed data, these measures proved useful where complex structures exist, but failed on flat relational schemata.

The existing system is only to be seen as a prototype that will be extended in the future. The further focus will be on adding new features and analyzing an extended set of input data. It is expected that more efficient and domain-specific profiles can be created that will be the basis for useful weights and combinations of features. Additionally, it is expected that the results can be improved by a more in-depth view of the data (like the identification of keywords) or further exploitation of structural information (by including schema definitions in the classification process).

References:
Algergawy, A., Nayak, R., and Saake, G. (2009). "XML Schema Element Similarity Measures: A Schema Matching Context." In On the Move to Meaningful Internet Systems: OTM 2009 (Confederated International Conferences CoopIS, DOA, IS, and ODBASE), Vilamoura, Portugal, pp. 1246-1253.

Berti, M., Romanello, M., Babeu, A., and Crane, G. (2009). "Collecting fragmentary authors in a digital library." In Proceedings of the 9th ACM/IEEE-CS Joint Conference on Digital Libraries, Austin, TX, USA, pp. 259-262.

Bizer, C., Heath, T., and Berners-Lee, T. (2009). "Linked Data - The Story So Far." International Journal on Semantic Web and Information Systems, 5(3): 1-22.

Epiduke. Online publication, accessed 1 November 2010 (link).

LaQuAT: Linking and Querying Ancient Texts. Accessed 28 November 2010 (link).

Pansch, D. (2010). Data Integration Methods for Structural Heterogeneous Data in an eHumanity Context. Leipzig.

Rahm, E., and Bernstein, P. A. (2001). "A survey of approaches to automatic schema matching." The VLDB Journal, 10(4): 334-350.

Salton, G., and Buckley, C. (1988). "Term-weighting approaches in automatic text retrieval." Information Processing and Management, 24(5): 513-523.

TEI: Text Encoding Initiative. Accessed 23 November 2010 (link).
