Paderborn University, Center for Music, Edition, Media (ZenMEM)
Beethoven-Haus Bonn, Research Centre “Beethoven-Archiv”
Oxford e-Research Centre - Oxford University
Répertoire International des Sources Musicales (RISM) Digital Center
Oxford e-Research Centre - Oxford University
Modeling work in the digital humanities has traditionally focused on written texts; music, however, requires data models that can capture the varied, overlapping layers that are characteristic of its structure. Most encoding models for notated music—i.e., those presenting a precise representation of what can be performed by a musician—provide good coverage of the layers on the immediate music ‘surface,’ like measures, staves, and notes. Other, more analytical and less apparent structures are often not as well addressed. This includes formal objects such as musical themes and motifs, as well as properties that often lack explicit means for symbolic notation, such as texture and timbre. Our paper describes a model that includes a component to specifically address these types of musical structure, using different arrangements of the same musical work as examples.
Musical figures require descriptions that include both the beginning and end points that mark their extent, as well as a specification of the individual parameters, lying at different structural layers, that comprise the figure. For example, early arrangements of Beethoven’s Eighth Symphony contain a replacement of the novel triple forte dynamic marking with a simple fortissimo, or double forte, at the moment of the first movement’s recapitulation. We can point to the respective measures and state that one has a different dynamic marking than the other, but this does not include our identification of the measures as different expressions of the same music-theoretical structure: a recapitulation. Although digital annotations can reference passages in multiple works, these references do not make sense unless they are linked together as separate expressions of the same analytic object. In order to present such a comparison, we need to posit a distinct, abstract class to model it. Such parallelism is entailed in our common understanding of what an arrangement is, though as Flanders and Jannidis (2019) note, the modeling already inherent in such a term is ‘invisible through [its] very familiarity.’ Before beginning work on a data model, therefore, we are obliged to examine what it is we are looking for when examining ‘versions.’
Our model builds on Linked Data principles demonstrated in projects using the Music Encoding and Linked Data (MELD) framework and consists of three modules: one for identifying resources, one for scholarly annotation, and, at the core, a framework for categorizing, labeling, and comparing user-selected structural features along with their formal analogues in different arrangements. After considering other standards, we have aligned our framework with the Functional Requirements for Bibliographic Records (FRBR), adding specialized subclasses of the standard FRBR entities for use in music comparison. Our model's music component introduces a class at the FRBR Expression level so that targeted commentary can be attached not simply to a contiguous block of music, but also, for example, to symbols that indicate the manner in which an accompanying melodic figure is to be played, or to the repetition and variation of a certain theme within a single piece.
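To illustrate how such a class might sit at the FRBR Expression level, the following sketch uses Python dataclasses. All class, field, and property names here are hypothetical illustrations of the idea, not the project's actual vocabulary: an analytic structure such as a recapitulation is modeled as abstract music that realizes a Work, independent of any particular score file.

```python
from dataclasses import dataclass, field

# Hypothetical FRBR-style entities; names are illustrative only.
@dataclass
class Work:
    title: str

@dataclass
class Expression:
    realizes: Work
    label: str

# A specialized Expression-level subclass for an analytic structure
# (a recapitulation, a theme, an accompaniment figure): it names the
# structural parameters the comparison targets, rather than a region
# of any one printed or encoded score.
@dataclass
class MusicalStructure(Expression):
    structure_type: str = "unspecified"            # e.g. "recapitulation"
    parameters: list = field(default_factory=list) # e.g. ["dynamics"]

symphony = Work(title="Symphony No. 8 in F major, Op. 93")
recap = MusicalStructure(
    realizes=symphony,
    label="First-movement recapitulation",
    structure_type="recapitulation",
    parameters=["dynamics"],  # the fff vs. ff comparison targets this layer
)
print(recap.structure_type)
```

Because the structure is a first-class object, commentary on the fff/ff substitution can attach to it directly, rather than to measure numbers in one file.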
The data model has further been designed to accommodate source materials in different formats, including standardized methods to refer to segments (e.g., Media Fragments, EMA, and IIIF): by collecting individuated classes at the manifestation level, a specific portion of music can be treated independently of its realization in different media. This way, an annotation can target a semantically meaningful musical selection across file formats, rather than a set of resource-specific IDs.
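As a rough sketch of this manifestation-level grouping, the helpers below build a W3C Media Fragments temporal reference for an audio recording and an EMA-style measures/staves/beats reference for a notation file, then collect both under one selection. The hostnames, the EMA resolver address, and the exact path shapes are placeholder assumptions, not the project's actual endpoints.

```python
from urllib.parse import quote

def media_fragment(audio_uri: str, start: float, end: float) -> str:
    # W3C Media Fragments: a temporal fragment (#t=start,end) on an
    # audio or video resource.
    return f"{audio_uri}#t={start},{end}"

def ema_selector(mei_uri: str, measures: str, staves: str, beats: str) -> str:
    # EMA (Enhancing Music Notation Addressability): address a selection of
    # measures, staves, and beats in a notation document via a resolver URL.
    # The resolver host here is a placeholder.
    encoded = quote(mei_uri, safe="")
    return f"https://ema.example.org/{encoded}/{measures}/{staves}/{beats}"

# One selection grouping realizations of the same passage in different media:
selection = {
    "notation": ema_selector("https://example.org/op93-mvt1.mei",
                             "190-198", "all", "@all"),
    "audio": media_fragment("https://example.org/op93-mvt1.mp3", 312.0, 327.5),
}
print(selection["audio"])
```

An annotation can then target `selection` as a whole, leaving the choice of medium to the consuming application.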
A prototype using the model has been successfully developed for a digital musicology study of 19th-century arrangements of orchestral works: https://github.com/DomesticBeethoven/bith-annotator. This application lets users view scores encoded as MEI files side by side. They can then select a group of measures from two different versions of the same work and mark them as corresponding excerpts of the same passage of music. These excerpts are then saved as a single object—a parallel passage—ready for scholarly annotation.
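A parallel-passage object of this kind could be serialized in Web Annotation style, with two targets pointing at corresponding measures in the two MEI encodings. The sketch below is an assumption about the shape of such an object, not the application's actual output; the source URIs and selector values are invented for illustration.

```python
import json

# Illustrative "parallel passage" in Web Annotation (JSON-LD) style.
parallel_passage = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "motivation": "linking",
    "body": {
        "type": "TextualBody",
        "value": "Recapitulation: fff replaced by ff in the arrangement",
    },
    # Two targets: the same passage in the orchestral score and the
    # domestic arrangement (placeholder URIs and measure selectors).
    "target": [
        {"source": "https://example.org/op93-full-score.mei",
         "selector": {"type": "FragmentSelector", "value": "m190-m198"}},
        {"source": "https://example.org/op93-piano-trio.mei",
         "selector": {"type": "FragmentSelector", "value": "m188-m196"}},
    ],
}

doc = json.dumps(parallel_passage, indent=2)  # valid JSON-LD serialization
print(len(parallel_passage["target"]))
```

Keeping both targets in one annotation is what turns two resource-specific references into a single scholarly object that can itself be annotated.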
Targeting musicological objects in an encoding involves more than capturing the symbols present in a particular region of a printed score. It entails the selection of parameters that are constitutive of the object, yet cannot be specified in advance. By introducing a class that incorporates these features into a single object with a musicologically meaningful label, this data model allows such abstract structures to be compared to one another in multiple versions and multiple files. In addition, because the model is compatible with Linked Data formats, these objects can be reused in future research, thus providing digital musicology projects with the potential to have a greater, longer-lasting contribution to scholarship in the field.
Funding: This research was undertaken by the project ‘Beethoven in the House: Digital Studies of Domestic Music Arrangements,’ supported by a UK-Germany funding initiative: in the UK by the Arts and Humanities Research Council (AHRC) [project number AH/T01279X/1], and in Germany by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) [project number 429039809].
Bibliography
Flanders, J. and Jannidis, F. (2019). Data Modeling in a Digital Humanities Context: An Introduction. In Flanders, J. and Jannidis, F. (eds.), The Shape of Data in the Digital Humanities: Modeling Texts and Text-based Resources. New York: Routledge, pp. 26–96.
International Image Interoperability Framework (IIIF). (2020). API Specifications. https://iiif.io/api/ (accessed April 28, 2022).
Lewis, D., Page, K. and Dreyfus, L. (2021). Narratives and Exploration in a Musicology App: Supporting Scholarly Argument with the Lohengrin TimeMachine. In 8th International Conference on Digital Libraries for Musicology (DLfM ’21). New York: Association for Computing Machinery, pp. 50–58.
Music Encoding and Linked Data (MELD). (2021). Overview. https://meld.web.ox.ac.uk/ (accessed April 28, 2021).
Music Encoding Initiative (MEI). (2019). https://music-encoding.org/ (accessed April 28, 2022).
Viglianti, R. (2015). Enhancing Music Notation Addressability (EMA). https://music-addressability.github.io/ema/ (accessed April 28, 2022).
Weigl, D., et al. (2021). Notes on the Music: A Social Data Infrastructure for Music Annotation. In 8th International Conference on Digital Libraries for Musicology (DLfM ’21). New York: Association for Computing Machinery, pp. 23–31.
W3C Media Fragments Working Group. (2012). Media Fragments URI 1.0. https://www.w3.org/TR/media-frags/ (accessed April 28, 2022).
In review
Tokyo, Japan
July 25–29, 2022
361 works by 945 authors indexed
Held in Tokyo and remote (hybrid) on account of COVID-19
Conference website: https://dh2022.adho.org/
Contributors: Scott B. Weingart, James Cummings
Series: ADHO (16)
Organizers: ADHO