Encoding Ancient Greek Music

paper, specified "short paper"
Authorship
  1. Zachary David Sowerby

    College of the Holy Cross



1. Introduction

This paper presents a project to encode and analyze ancient Greek music.
Notated music of Ancient Greece survives in only about 40 sources, many of them quite fragmentary, spanning a chronological range of 500-600 years. Only one song can be considered complete. The information preserved in these documents, however, can be quite rich: it informs us about lyrics, pitch, rhythm, meter, section breaks, dynamics, and instrumentation. Documents containing notated ancient Greek music survive primarily in the form of papyrus fragments. A smaller but significant number appears in inscriptions, while a limited amount of the music of Hadrian's court musician, Mesomedes of Crete, survives in a manuscript tradition. Owing to the accidents of survival, the majority of these compositions date from the Roman period, although a significant number of Classical and Hellenistic fragments, such as a fragment from the Orestes of Euripides, survive as well. They range geographically from Greece to Anatolia to Egypt, and include tragedies, paeans, hymns, comedies, instrumental works, theoretical exercises, and one early Christian hymn. The greatest challenges for the study of this corpus are its fragmentary nature and its geographical, chronological, and topical extent.
The goal of this project is to implement a design that permits digital study of this corpus and to provide a framework for research and pedagogy. The digital corpus must encode the same kinds of information across a variety of source types while also taking the documents' different histories into account, so that they can be understood in context.
2. The Digital Corpus
Our digital editions should capture the semantics of ancient Greek musical documents. They should support computational analysis and presentation directly from the information recorded in the ancient documents. To do this, they must address two fundamental challenges: recording ancient Greek musical notation, and coordinating musical notation, lyrics, and other analytical data sets.
It is not possible to record ancient Greek musical notation using existing systems for the digital encoding of music such as MusicXML and the Music Encoding Initiative (MEI). MEI (https://music-encoding.org/), for example, is excellently suited to encoding modern Western music notation, and its XML scheme can be extended with modules for particular kinds of notation. However, MEI assumes that music can be encoded in terms of pitches as notated in familiar staffed systems. The limits of this assumption are most evident in MEI's guidelines for encoding neumes, which must avoid unstaffed neumes entirely. For this reason, Waller et al. ("Encoding the Oldest Western Music," https://dh2018.adho.org/en/encoding-the-oldest-western-music/) developed the Virgapes notation for encoding medieval plainchant, which they presented at DH2018.
We need instead an encoding scheme that expresses the semantics of ancient Greek musical documents. The Unicode block for Ancient Greek Musical Notation, for example, does not satisfy this requirement, since its codepoints focus on graphemes rather than sememes, with the result that visually identical but semantically distinct symbols are collapsed into a single codepoint. We therefore define an original encoding establishing a one-to-one correspondence between the symbols used in Greek musical texts and their digital encoding.
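To give a concrete sense of what a one-to-one, semantics-first encoding can look like, the following minimal Python sketch is illustrative only: the class, sign codes, and glosses are hypothetical placeholders, not the project's actual scheme.

# Minimal sketch of a semantics-first encoding for ancient Greek musical signs.
# Each sememe gets its own code, even when two signs share an identical glyph;
# all names and values here are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class MusicalSign:
    code: str        # project-internal identifier, one per sememe
    system: str      # "vocal" or "instrumental" notation system
    gloss: str       # human-readable description

SIGNS = {
    "V-01": MusicalSign("V-01", "vocal", "vocal-notation sign (hypothetical example)"),
    "I-01": MusicalSign("I-01", "instrumental", "instrumental sign with an identical glyph shape"),
}

def decode(token: str) -> MusicalSign:
    """Resolve an encoded token from an edition to its semantic description."""
    return SIGNS[token]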
Most surviving Greek musical documents are vocal, so our editions of the musical notation must be coordinated with editions of the lyrics. We adopt the model of Waller et al. mentioned above: musical notation and lyrics are transcribed in separate TEI XML documents aligned by canonical citation using Canonical Text Services (CTS) URNs.
We extend the model of Waller et al. by adding further editions. Any aspect of analysis that cannot easily be captured in an existing edition is simply recorded in another aligned edition. For example, Greek texts had a pitch accent that is written explicitly in modern printed editions and can be reconstructed by the editor for a given source document. Similarly, meter can be inferred from the length of syllables in the text. Both accent and meter are recorded in separate documents and aligned by CTS URN with the diplomatic editions of the lyrics.
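As a rough illustration of how such aligned editions can be coordinated in software, the sketch below joins the content of several editions on a shared canonical citation. The URN, passage numbers, sign codes, and Greek word are invented placeholders, not data from the corpus.

# Sketch of coordinating separate editions (lyrics, notation, accent) of the
# same fragment by canonical citation. All identifiers and values are invented.

WORK = "urn:cts:demoMusic:frag1"        # hypothetical work-level CTS URN

# Each edition keys its content by passage reference within the work.
lyrics_edition   = {"1": "χαῖρε"}               # diplomatic lyrics
notation_edition = {"1": ["V-01", "V-02"]}      # encoded musical signs
accent_edition   = {"1": "χαῖρε"}               # accented (reconstructed) form

def aligned(passage: str) -> dict:
    """Gather every edition's content for one canonically cited passage."""
    return {
        "urn": f"{WORK}:{passage}",
        "lyrics": lyrics_edition.get(passage),
        "notation": notation_edition.get(passage),
        "accent": accent_edition.get(passage),
    }

print(aligned("1"))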
3. Systematic Data Extraction, Computational Analysis, and Presentation
Along with its digital editions, the project includes a suite of programs to extract and analyze data from those editions. The project follows the principle that the corpus and the code that manipulates its data remain entirely separate.
Data within the corpus can be systematically extracted for computational analysis or presentation. Since all editions of a single document are aligned, correlated patterns across different types of edition can be identified and analyzed. For example, accent information and pitch information can be compared in order to study the known phenomenon in ancient Greek music by which melodic pattern tends to correlate with accent. Instances of agreement and of conflict can then be analyzed separately in relation to the other editions, to understand how and why agreement or conflict occurs.
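One simple way such a comparison could be operationalized is sketched below; the syllable records and the agreement criterion (the accented syllable carries the word's highest pitch) are assumptions made for demonstration, not the project's published method.

# Illustrative sketch of one possible agreement check between accent and melody.
# Each record: (syllable label, accented?, pitch as a relative step number).
word = [("syl-1", False, 0), ("syl-2", True, 3), ("syl-3", False, 2)]

def accent_agrees_with_melody(syllables) -> bool:
    """True if an accented syllable carries the highest pitch in the word."""
    pitches = [pitch for _, _, pitch in syllables]
    accented = [pitch for _, is_accented, pitch in syllables if is_accented]
    return bool(accented) and max(accented) == max(pitches)

print(accent_agrees_with_melody(word))  # True for this invented example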
For presentation, for example, composite information from the aligned editions can be exported as a text file, which can then be read by a custom-coded application created with the music program Max MSP. This aural analysis lets the listener realize the music according to a variety of tuning systems championed by competing ancient theorists, such as Aristoxenus and Ptolemy, for which the program is specially equipped.
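A rough sketch of what the export step might look like follows; the column layout, file name, and values are assumptions for illustration, and the actual format consumed by the Max MSP application may differ.

# Sketch: export composite, aligned information as a plain-text file that an
# external application (such as a custom Max MSP patch) could read line by line.
rows = [
    # (citation URN, syllable, encoded pitch sign, accented?)  -- invented data
    ("urn:cts:demoMusic:frag1:1", "chai", "V-01", True),
    ("urn:cts:demoMusic:frag1:1", "re",   "V-02", False),
]

with open("composite_export.txt", "w", encoding="utf-8") as out:
    for urn, syllable, sign, accented in rows:
        out.write(f"{urn}\t{syllable}\t{sign}\t{int(accented)}\n")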
4. Open Source
All material from the project will be available in a public repository on GitHub. New editions and code from contributors can expand the project, creating an ongoing initiative to advance the study of this topic.
This framework's explicit modeling and coordination of a large and complex corpus of information underscores the conference theme of "Complexities." The automated generation of aural analysis with Max MSP takes a novel approach to audio synthesis driven by systematic data extraction, and introduces a range of options for presentation and teaching.


Conference Info

In review

ADHO - 2019
"Complexities"

Hosted at Utrecht University

Utrecht, Netherlands

July 9, 2019 - July 12, 2019

436 works by 1162 authors indexed

Series: ADHO (14)

Organizers: ADHO