Digital Humanities in Cultural Areas Using Texts That Lack Word Spacing

paper, specified "long paper"
Authorship
  1. Kiyonori Nagasaki

    International Institute for Digital Humanities

  2. Toru Tomabechi

    International Institute for Digital Humanities

  3. A. Charles Muller

    University of Tokyo

  4. Masahiro Shimoda

    University of Tokyo

Work text


As most modern and pre-modern Western writing systems explicitly mark the division of words in a sentence by spaces or breaks, it has been easy to use computers to analyze texts based on each word and its meanings. However, several modern and pre-modern writing systems do not explicitly indicate word separation in texts; that is, all words in a sentence are contiguous. A major contemporary representative of this kind of writing is found in the languages of East Asia. Moreover, a popular Japanese pre-modern writing style called kuzushi-ji (cursive-style characters) was often presented with undivided characters, even in typesetting, until the late nineteenth century (Fig. 1, Fig. 2). As the lack of word separation has given rise not only to ambiguity but also to multiple interpretations, it has formed an aspect of the richness of Japanese culture. As a result, however, Japanese texts present intrinsic difficulties, not only for textual analysis but also for both manual and automatic transcription in the digital era. This presentation will discuss problems in these writing systems and the current state of attempts to resolve them through the methods of digital humanities.

Fig. 1: Typeset printing (Ise-monogatari (Tales of Ise). Saga-bon. 1608.
http://dl.ndl.go.jp/info:ndljp/pid/1287963/6 )

Fig. 2: Woodcut printing (Yamamoto Shunsho ed. Eiri-Genji-Monogatari (Pictorial Tales of Genji). 1654.
http://base1.nijl.ac.jp/~anthologyfulltext/ )

Difficulties of transcription
Recent Japanese texts do not pose a serious problem for OCR, thanks not only to the separation of each character but also to the accuracy and clarity of their printing. However, it is difficult to OCR books printed even ten decades ago, for two reasons: most of them use characters that are relatively complicated for OCR, and they embed parallel small-font texts (called ruby in HTML5) that explain the pronunciation of a word and sit too close to the explained word for OCR (Fig. 3), even though such books were printed with metal typesetting. Three more decades back, characters were sometimes connected, and the writing style was partially cursive (Fig. 4). Recently, some researchers have been attempting to develop tools for recognizing kuzushi-ji based not on the shapes of individual characters but on continuous sequences of character shapes. For both technical and intrinsic reasons, these tools have not yet reached the stage where they can transcribe all characters accurately, but the technology can nonetheless assist in reading such texts by showing candidate characters (Fig. 5).

Hashimoto, Yuta, et al. "The SMART-GS Project: An Approach to Image-based Digital Humanities." Digital Humanities 2014: 476-477. 2014.

One reason why this kind of machine recognition of character sequences from images works in many cases is that many resources written in kuzushi-ji are woodcut prints, in which the continuous cursive characters are more or less normalized within a single book.
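
As a rough illustration of this kind of image-based assistance (a generic sketch in the spirit of, but not taken from, SMART-GS), a sliding-window template match can surface page regions resembling a clipped character image. File names and the threshold below are placeholders:

    # Hypothetical sketch of image-based candidate search for kuzushi-ji,
    # in the spirit of tools such as SMART-GS (not their implementation).
    # Assumes OpenCV (cv2) and NumPy; file names are placeholders.
    import cv2
    import numpy as np

    page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)        # scanned page
    query = cv2.imread("character.png", cv2.IMREAD_GRAYSCALE)  # clipped character shape

    # Normalized cross-correlation of the query against every window of the page.
    scores = cv2.matchTemplate(page, query, cv2.TM_CCOEFF_NORMED)

    # Keep every window scoring above an arbitrary threshold as a candidate.
    threshold = 0.7
    ys, xs = np.where(scores >= threshold)
    h, w = query.shape
    candidates = [(x, y, float(scores[y, x])) for y, x in zip(ys, xs)]

    # Show the reader the best-scoring regions as transcription hints.
    for x, y, score in sorted(candidates, key=lambda c: -c[2])[:10]:
        print(f"candidate at ({x}, {y}), size {w}x{h}, score {score:.2f}")

Because woodcut printing normalizes character shapes within a single book, even simple correlation of this kind can yield usable candidates there; handwritten manuscripts vary far more.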

Fig. 3: Ruby close to the text body (Ohashi Matatarou. Jituyou-ryouri-hou (A Guidebook to Practical Cooking). Hakubunkan. 1895.
http://dl.ndl.go.jp/info:ndljp/pid/849051/19 )

Fig. 4: Continuous characters (Ryusuitei Tanekiyo ed. Shiranui-monogatari (Tales of Shiranui). Vol. 68a. Enju-dou. 1885.
http://dl.ndl.go.jp/info:ndljp/pid/884924/8 )

Fig. 5: A result of image search in SMART-GS

However, special difficulties arise when a needed character is not encoded in Unicode. The situation is similar to that addressed by the Medieval Unicode Font Initiative (http://folk.uib.no/hnooh/mufi/), but the number of unencoded characters is much larger in the Japanese case, which is part of East Asian culture. In particular, as Japanese culture has absorbed foreign writing cultures and developed them in its own contexts, several writing systems are preserved in its cultural resources, including Kanji, Hiragana, Katakana, Hentaigana, and the Siddham script. The Siddham script, with its variant characters, was encoded in Unicode 8.0 through the efforts of the Script Encoding Initiative, international experts, and the SAT project

Pandey, Anshuman. Proposal to Encode the Siddham Script in ISO/IEC 10646. ISO/IEC JTC1/SC2/WG2 N4294. 2012. http://www.unicode.org/L2/L2012/12234r-n4294-siddham.pdf .
Kawabata, Taichi, Toshiya Suzuki, Kiyonori Nagasaki and Masahiro Shimoda. Proposal to Encode Variants for Siddham Script. ISO/IEC JTC1/SC2/WG2 N4407. 2013. http://std.dkuug.dk/JTC1/SC2/WG2/docs/n4407.pdf .
Anderson, Deborah, et al. 2013-11-22 Siddham Script (梵字) Meeting @ Tokyo, JAPAN, Earth. ISO/IEC JTC1/SC2/WG2 N4523. 2013. http://std.dkuug.dk/JTC1/SC2/WG2/docs/n4523.pdf .

Over 80,000 Kanji (CJK unified ideographs) are already registered, but this number will continue to increase. Hentaigana (including over 200 glyph shapes) was proposed to the ISO committee in October 2015

ITSCJ SC2 Committee, IPSJ, Japan. Proposal of Japanese HENTAIGANA. ISO/IEC JTC1/SC2/WG2 N4674. 2015. http://unicode.org/wg2/docs/n4674-Japan_Hentaigana_Proposal-a.zip .

In order to make easy-to-use digital scholarly editions of Japanese texts, especially classics, this process will continue.
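
Once such characters are encoded, ordinary Unicode tooling can handle them. The following is a minimal sketch using Python's standard unicodedata module; it assumes a Python build whose bundled Unicode database is version 8.0 or newer (e.g. Python 3.5+), since the Siddham block sits at U+11580-U+115FF:

    import unicodedata

    # The Siddham block was added in Unicode 8.0 at U+11580..U+115FF.
    # A Python whose Unicode database is older than 8.0 will report
    # these code points as unassigned.
    for cp in range(0x11580, 0x11585):
        ch = chr(cp)
        print(f"U+{cp:04X} {ch} {unicodedata.name(ch, '<unassigned>')}")

    # Expected on a recent Python: U+11580 SIDDHAM LETTER A, and so on.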

Alongside these transcription efforts, digital image databases have also grown in Japan, owing to the commoditization of high-resolution digitization of textual materials. In particular, the National Diet Library (http://dl.ndl.go.jp/) has been publishing its digitized collection, including over 300,000 books, for over a decade, and recently stated that most of them are to be released into the public domain. Some institutions, such as the Kyoto Prefectural Library and Archives (http://hyakugo.kyoto.jp/) and the University of Tokyo Library (http://dzkimgs.l.u-tokyo.ac.jp/utlib_kakouzou.php), are publishing their digitized collections under open licenses. The Art Research Center at Ritsumeikan University and the National Institute of Japanese Literature (http://www.nijl.ac.jp/) have released many digitized textual resources under academic licenses. The latter plans to distribute part of its contents under an open license this year through its new comprehensive digitization project (http://www.nijl.ac.jp/pages/cijproject/index_e.html). Needless to say, these resources enhance the convenience of humanities research. In particular, in the Japanese context, many humanities researchers note that validation of research results has become much more efficient thanks to the increased availability of digitized images.

Crowdsourced transcription has also recently emerged in Japan. The Transcribe JP project has been conducted as a SIG of the Japanese Association for Digital Humanities. It provides a Web service for transcription (Hondigi2014: http://lab.ndl.go.jp/dhii/omk2/) built on Omeka with the Scripto plugin. Moreover, in October 2015 it started a microtask crowdsourcing project (翻デジ@JADH×Crowd4U: http://www.jadh.org/transcribejp) in cooperation with the Crowd4U project (Crowd4U: http://crowd4u.org/en/). Contributors judge whether a character has been OCRed correctly by comparing a candidate character with an image snippet, using only a single click. The first experiment finished in a much shorter time than we expected. Further results will be reported at DH2016.
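
Conceptually, each one-click contribution is a boolean judgment on a pair of an image snippet and an OCR candidate. The sketch below is our own illustration of aggregating such judgments by majority vote; it is not the actual Crowd4U or 翻デジ schema, and all names are hypothetical:

    from collections import Counter
    from dataclasses import dataclass

    # Illustrative data model for a one-click verification microtask.
    @dataclass
    class Judgment:
        task_id: str    # identifies one (image snippet, OCR candidate) pair
        worker_id: str
        accepted: bool  # True if the worker confirmed the OCR candidate

    def decide(judgments, min_votes=3):
        """Majority vote per task; tasks with too few votes stay undecided."""
        votes = {}
        for j in judgments:
            votes.setdefault(j.task_id, Counter())[j.accepted] += 1
        return {task: counter[True] > counter[False]
                for task, counter in votes.items()
                if sum(counter.values()) >= min_votes}

    judgments = [
        Judgment("t1", "w1", True), Judgment("t1", "w2", True),
        Judgment("t1", "w3", False), Judgment("t2", "w1", True),
    ]
    print(decide(judgments))  # {'t1': True}; 't2' has too few votes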

Difficulties in Word Separation
Despite the difficulties of transcription, there are many digitized texts in Japanese. Aozora Bunko (http://www.aozora.gr.jp/), a public-domain repository of Japanese texts similar to Project Gutenberg, provides over 10,000 texts on its Web site and on GitHub. The National Institute for Japanese Language and Linguistics (NINJAL, http://www.ninjal.ac.jp/) publishes on the Web several encoded historical Japanese texts with POS tags, as well as Web services for textual analysis of modern Japanese texts comprising 100 million words, each word POS-tagged in its own format. The SAT project (http://21dzk.l.u-tokyo.ac.jp/SAT/) also provides on the Web digital texts of Buddhist scriptures consisting of 100 million characters, mainly in Chinese and Japanese, with some philological tags.

NINJAL's texts consist of separated words with POS tags, but most of the other resources do not use this method. Consequently, two methods of textual analysis are common in Japan. The first is n-gram analysis that treats a single character as one unit of the n-gram. The second is the development of tools for automatic word segmentation, sometimes with a POS tagger, such as MeCab (http://taku910.github.io/mecab/), ChaSen (http://chasen.naist.jp/hiki/ChaSen/), and Kuromoji (http://www.atilika.com/ja/products/kuromoji.html). These tools achieve a high degree of precision but sometimes produce errors, in which case one has to correct their output manually if sharing exactly processed texts is necessary. Moreover, even when a segmentation is not mistaken, it may privilege one interpretation among several. Such cases can also occur in word-separated corpora, but this type of writing system carries these issues intrinsically.
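
To make the two approaches concrete, the sketch below contrasts dictionary-free character n-grams with automatic segmentation through MeCab, here via the mecab-python3 bindings; it assumes MeCab and a dictionary are installed, and the segmentation it returns is dictionary-dependent:

    # Character n-grams: no dictionary needed; every index point is kept,
    # so ambiguous divisions are not decided in advance.
    def char_ngrams(text, n=2):
        return [text[i:i + n] for i in range(len(text) - n + 1)]

    print(char_ngrams("いづれの御時にか", n=2))
    # ['いづ', 'づれ', 'れの', 'の御', '御時', '時に', 'にか']

    # Automatic segmentation: the tool commits to one division of the text.
    # Assumes MeCab plus a dictionary (e.g. via the mecab-python3 package);
    # the result is one possible reading, not the only one.
    import MeCab
    wakati = MeCab.Tagger("-Owakati")
    print(wakati.parse("いづれの御時にか").strip().split(" "))

The n-gram route defers interpretation to the analysis stage, while the segmentation route bakes one interpretation into the shared text, which is exactly the trade-off described above.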

Rendering of texts
In XML-formatted texts, such as those maintained in TEI or JATS (http://jats.nlm.nih.gov/), popular stylesheets seem to treat a line break in the source XML file as a space separating words. In the case of non-separated texts, however, this causes problems such as unnecessary separation. The XSLT-processed Japanese text in Fig. 7 must exclude spaces between characters despite the line breaks in the XML source (Fig. 6). Conversely, because a Japanese semi-governmental open-access journal system adopting JATS ignores line breaks even in English, words become concatenated in cases like those of Fig. 6 and Fig. 7. This problem seems to be recognized in EPUB, with a solution in CSS according to the target language (http://www.idpf.org/epub/30/spec/epub30-overview.html). While this must already have been discussed in DH contexts, since non-spacing texts have been produced in various times and places, differences in the treatment of line breaks in XML source files should be handled carefully, with regard not only to the presentation but also to the analysis of texts.
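
The stylesheet decision described above reduces to a small rule, sketched here in Python (our illustration, not the behavior of any particular TEI or JATS stylesheet): when flattening line breaks inside an element, insert a space for space-separated languages and nothing for Japanese, keyed off an xml:lang-style code:

    import re

    # Hypothetical line-break normalization for rendering, keyed on a
    # language code as an xml:lang attribute would supply it.
    NO_SPACE_LANGS = {"ja", "zh"}  # scripts written without word spacing

    def flatten_breaks(text, lang):
        joiner = "" if lang.split("-")[0] in NO_SPACE_LANGS else " "
        return joiner.join(re.split(r"\s*\n\s*", text.strip()))

    en = "It is a truth\nuniversally acknowledged"
    ja = "いづれの御時にか\n女御更衣あまた"
    print(flatten_breaks(en, "en"))  # line break becomes a word space
    print(flatten_breaks(ja, "ja"))  # characters join with no spurious space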

Fig. 6: An example contrasting word separation in XML format

Fig. 7: The result of an XSL transformation of Fig. 6

In the current context of DH, vast humanities resources still lie dormant. As they are awakened, these kinds of issues will gradually be revealed and will need to be solved from both practical and theoretical viewpoints. By addressing them earnestly through global communication, DH will come to better fruition.


Conference Info

Complete

ADHO - 2016
"Digital Identities: the Past and the Future"

Hosted at Jagiellonian University, Pedagogical University of Krakow

Kraków, Poland

July 11, 2016 - July 16, 2016

454 works by 1072 authors indexed

Conference website: https://dh2016.adho.org/

Series: ADHO (11)

Organizers: ADHO