Form and Format: Towards a Semiotics of Digital Text Encoding

Authorship
  1. Wendell Piez

    Mulberry Technologies, Inc.


Theories of the sign
This consideration begins in the theories of signification
proposed in Structuralism, particularly as elucidated by
Ferdinand de Saussure and interpreted by Roland Barthes
(drawing on Hjelmslev and others). It will further be informed
by the general theory of “autopoiesis” as articulated by
Maturana and Varela (1987) and by other recent studies of
language and signification influenced by systems theory.
Of particular importance to this treatment are the following
Structuralist principles:
• Composite nature of the sign - a sign is an event or a relation
between two components, a signifying part (signifier,
expression, or token) and a signified part (the content or
meaning of the sign).
• Arbitrary nature of the sign - the relation between signifier
and signified in a sign is arbitrary, not inevitable or given
by nature. Thus slippage is possible in principle and, in
some systems, is common. This potential for slippage is
what accounts for much of the flexibility, adaptability and
power of sign systems.
• Signs work in combination: individual words, for example,
have their significations, but when words are combined into
sentences they become more useful and more expressive.
• Complete signs (considered as signifier/signified pairs) can
also enter into signifying relations. For example, a
metalanguage is a system of signs that describes another
system of signs; expressions in the metalanguage represent
signs in the signified sign system. Metalanguages provide
channels of regulation and feedback that are conducive to
the development of the sign systems they describe, and
eventually (when formalized sufficiently) enable automated
processes to manipulate signs systematically. Conversely,
a connotative system is a system of signs in which the
signifying parts of signs are themselves signs (one might
think of literary texts). Connotative systems demonstrate
the reciprocal relations between layers in a sign system and the way significations at higher levels can condition and
affect signification at lower levels, even apart from the
application of metalanguages. While metalanguages abstract
and formalize the sign systems they describe, connotative
systems work by deploying signs as they are used concretely
in other contexts, bringing alternative significations into
play. Likewise, while metalanguages systematize and
formalize, and thus indicate where automated rules-based
processing is possible, connotative systems draw on and
reflect particular significations available only in specific
local contexts, and indicate where automation by traditional
means is difficult or impossible.
Figure 1: In both natural and artificial systems, a combination of simplicity
of design with complexity, versatility and adaptability of application is achieved
through a layered structure in which components with distinctive functions are
made by combining simpler components.
The Semantics of Layers and the
Layering of Semantics
A very useful distinction can be made between two different
modes of signification, which we can call operational
and representational semantics. The representational semantics
of a sign corresponds to the conventional notion of what a sign
is and how it functions, that a sign “stands for” something,
reflecting and naming an actually existent “thing” in a real or
imagined world. In contrast, the operational semantics describes
not what the sign refers to in a purported world (which may or
may not be present to the senses), but rather simply how it
operates within the signifying system — which generally
includes the world, or at least the present and active
circumstances of the sign's use. (This in itself is the major
difference between “human” and “machine” semantics [Piez
2002].) The rules of combination that allow any given set of
signs to be assembled into a higher-order organization, which
itself has signifying potential, are part of their operational
semantics; but so also are any disambiguating “incidentals”.
At all layers, operational semantics may be conditioned or
constrained by their context of operation: the “meaning” of a
sign is not built into the sign, but is established ad hoc, by
means of those distinctions between it and possible alternatives
that result in particular outcomes in application.
Interestingly, careful consideration suggests that while we often
consider that the representational aspect of a sign is fundamental
to its operation and provides part of its operational semantics,
in general the opposite is the case: operational semantics are
primary, and representational semantics can only be established
once operational semantics are set and perhaps codified. Once
a sign's operation has been established repeatedly it can start
to “carry its context with it” — that is, its context can begin to
include prior contexts, implicitly recognized — and this is the
beginning of representation. (A compelling description of this
process in human language development as well as in the more
rudimentary linguistic capabilities of chimpanzees, bonobos
and gorillas, may be read in [Greenspan and Shanker 2004].)
Thus, what any given sign represents must be inferred from its
context of operation — which may include the traces of other
occasions of its use — and much energy is given, in the
application of sign systems to doing useful work in the world,
to negotiating these inferences.
The construction of layered sign systems, in which complex
representations may be reliably constructed by a rules-based
assembly from simpler components, has the precise advantage
of managing and reducing this expenditure of energy.
Metalanguages (such as an orthography, formal logic, or the
grammars of natural or artificial languages), which describe
sign systems themselves and stipulate rules for their operation,
are more than sterile intellectual exercises. By abstracting and
codifying the application of rules to signs, they reduce the need
to rely on sheer brute force methods (memorization or
negotiation) to support the assembly of signs into sign systems,
thus reducing overall complexity and enabling more
sophisticated operations at higher levels.
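To make this concrete, consider a DTD (a miniature vocabulary invented here purely for illustration, not one proposed in the paper): it is precisely such a metalanguage, describing another system of signs and stipulating rules for their combination:

```dtd
<!-- A DTD is a metalanguage: each declaration describes and
     constrains signs in the system it governs (here, a toy
     "article" vocabulary used only as an example). -->
<!ELEMENT article (title, p+)> <!-- an article is a title, then paragraphs -->
<!ELEMENT title   (#PCDATA)>
<!ELEMENT p       (#PCDATA)>
```

Because the rules of combination are codified in this declarative form, a validating parser can enforce them mechanically — the "automated processes" that sufficiently formalized metalanguages enable.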
Accordingly, media can be built out of building blocks
constituted of other media. Within a (relatively highly evolved)
language, utterances take the form of sentences or statements,
composed out of words; words are composed of phonemes (or
of letters, in the case of written words). When sentences or
statements or propositions are combined, they constitute yet
another medium at a higher level, which might be identified
with argument or narrative. Thus, media are built in layers,
each layer a hierarchy of subsystems on a lower layer. This is
characteristic of complex systems generally in both the natural
and artificial worlds (see [Simon 1996]).
In general, this layering is characterized by two related
phenomena:
• The distinctions between the layers become clearer over
time: media formats evolve and become more systematic
and coherent in composition. More and more complex and
comprehensive structures become progressively easier to
realize, at the cost of a certain kind of expressive power
characteristic of early or individual experiments. The higher
layers, as they solidify and consolidate, induce a process of
simplification and rationalization at lower layers.
• Likewise, as you go up the stack, the distinction between
layers becomes less clear. The difference between a letter
and a word is almost always clear, but the distinction
between a statement and an argument is less so.
As media ascend in complexity, each layer is capable of more
“meaningfulness” than the layer it is built on; or at least it has
greater representational power, due to the consolidation of
operational and (on that basis) even of representational
semantics at lower layers. In turn, the context provided by the
combination of any set of tokens at any layer provides
information relevant to the construing of the local meaning of
each of the constituent tokens. This interpenetrating influence
between layers, familiar to all students of language and
literature, is an important feature of the entire system.
The most complex and “highest” of these layers currently might
go by the name of information architectures. Until recently we
have not really needed to have a name for this layer or members
of it (signs that appear in it), as they have for the most part been
identified with print media: we have spoken of indexes,
cross-references, tables of contents, abstracts, summaries and
bibliographies without being aware that these things might not
have to be printed on paper. This is because traditionally, in
order to stabilize complex bodies of information, particular
features of print media have been indispensable — temporal
persistence and asynchrony; “random” or holistic access;
conventions of citation; graphic and typographic cues
representing organization and providing for navigability;
economies of scale afforded by mass production; and so on.
Now that these features are available in the formats provided by
digital media — along with elaborations of them such as the
hyperlink, dynamic display and device independence — it
becomes useful to consider this higher layer without identifying
it with print technology.
Figure 2: Various different forms of communications media are formed by
layering.
Figure 3: At lower layers, alignments between constituent parts are generally
possible, leading to representational correspondences not only between media
and the world, but across media themselves. At higher layers, media find their
own particular strengths, constituting 'worlds of their own' whose
representational capabilities are both presumably greater, and more
problematic.
The Emergence of the Digital
Like all media (cf. Marshall McLuhan), digital formats
begin as representations (operational refigurations) of
prior media. But just as alphabetic literacy, following the logic
of its own peculiar operational semantics, quickly assumes
forms distinct from the oral forms it begins by mediating, digital
media soon become something other than the print counterparts
that were to be “transmitted” by telecommunications
technologies. Again, they achieve these forms by layering.
Despite the highly developed form of the processing stack in
the computer's own operations, when it comes to digital media
we are still at a point where these layers are inchoate — although the standardization
efforts of the last decade (especially as regards HTML, CSS,
XML and XSLT) are providing a level of metalinguistic control
conducive to their development and maturation. Yet for the
most part, applications of digital media are still either derivative
of other forms (in the sense that a page on the web may be
almost entirely analogous to the same document in print) or
directly in service to them; digital media have not fully come
into their own.
Nonetheless, the usefulness and power of layering in this
context too has long been recognized: we only need to recall
the familiar dogma (and the discussion that has surrounded it)
of the “separation of format from content” in the design and
deployment of markup-based publication systems (see
[Sperberg-McQueen and Burnard, 1994], [Durand et al. 1996],
[Piez 2001], [Sperberg-McQueen et al. 2002], [Piez 2002]).
This tradition recognizes that so-called “descriptive encoding”
works by anticipating and expressing at a lower layer (in what
we call source code), the rationale for structures to be expressed
at higher layers through site organization, page layout,
typography, screen widgetry, linking and all the apparatus of
a full-blown architecture. In this respect, the tagging of an XML
document whose encoding is descriptive and data-oriented
rather than prescriptive and application-oriented proves to be
a connotative system, as the signifiers (the element names “title”
and “p”) that describe the data (this chunk of text is a nominal
paragraph; that one a title) are themselves signs, to be
transformed by a heuristic and rules-based process into
renditions that will themselves signify to readers that they are
paragraphs or titles. This anticipation or prolepsis by markup
of further signification elucidates the confusion as to whether
we consider descriptive encoding to be at a “higher” or “lower”
layer, as indeed it is paradoxically both. Within a classical
three-tiered architecture, the XML encoding is “below” its
HTML (or print, or SVG, or ODF) rendition; yet we also
describe the conversion from descriptive XML into a
presentational format such as HTML as a “down-conversion”
(since it “loses information”), by rendering only indirectly in
presentational features, if at all, what is explicit in the source.
The reason we can, in effect, go down to go up is that here the
lower layer achieves its aim of scalability and reusability by
working to describe a higher layer that it is not yet practical (if
it ever will be) for the computer to infer on its own: it provides,
in representational form by a kind of "sleight of hand" (the
operational semantics being invisible to the machine and left
to the stylesheet designer), information that would ordinarily
be available only by a processing context not yet available —
the act of reading itself. In fact, the transformation from
descriptive XML to HTML is not actually taking it “up” the
ladder towards richer information design; it is merely
transposing the data into another stack altogether, where, since
the operational semantics of HTML are more tightly bound to
standard processing contexts (the browser), its implicit
functionalities can be elaborated, while the representational
aspect can be deferred to where it makes more sense — where,
because the tagging now signifies “large and bold”, the reader
can be trusted to infer “title”.
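This "down-conversion" can be sketched in miniature. The following Python fragment is a toy stand-in for an XSLT stylesheet (the tag names and rendition rules are invented for illustration): it maps descriptive tags by rule to presentational HTML, so that what the source marks as "title" comes to signify only "large and bold":

```python
# A minimal sketch of descriptive-to-presentational down-conversion.
# The rule table plays the stylesheet designer's role: the operational
# semantics of "title" and "p" are invisible to the machine except as rules.
import xml.etree.ElementTree as ET

SOURCE = """<article>
  <title>Form and Format</title>
  <p>Signs work in combination.</p>
</article>"""

# Descriptive tag -> (presentational HTML tag, rendition attributes)
RULES = {
    "title": ("h1", 'style="font-size:2em;font-weight:bold"'),
    "p": ("p", ""),
}

def down_convert(xml_text):
    """Render each child of the root as a presentational HTML fragment."""
    root = ET.fromstring(xml_text)
    out = []
    for child in root:
        tag, attrs = RULES[child.tag]
        open_tag = f"<{tag} {attrs}>" if attrs else f"<{tag}>"
        out.append(f"{open_tag}{child.text}</{tag}>")
    return "\n".join(out)

print(down_convert(SOURCE))
```

Note that the conversion is lossy in exactly the sense described above: the output records only the rendition, and it is the reader, not the machine, who must infer "title" from "large and bold".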
Yet the same discussion has also masked deeper problems and
issues (see [Buzzetti 2002], [Caton 2005]) stemming from the
limitations in current markup systems, which can gracefully
handle only a single organizational hierarchy at a time, and thus
lack the representational and expressive power necessary to
take full advantage of the computer's capabilities for useful
automated processing of complex textual artifacts.
Nevertheless, understanding digital text encoding technologies
as complex sign systems elucidates how and why they function
without resorting to the metaphysical appeal that “good”
encoding should be designed to describe the “thing itself”.
Especially given the limitations inherent in XML's design, such
a position proves soon to be untenable; yet XML systems
succeed in doing useful work notwithstanding, and provide the
foundations for sustainable, scalable and navigable information
resources, whose presentational features (interfaces) can be
improved over time. This in itself represents a major advance
over what was ever possible in the past.
More generally, a consideration of digital text encoding as a
distinctive semiotic system, with its own metalanguages and
its own relation to media artifacts, suggests why the humanistic
study of digital media remains so foreign to traditional
disciplines in the humanities. Likewise, it points the way to the
future, as it becomes clear what, and how much, still remains
to be done.
Figure 4: While metalanguages can be expensive to maintain, they provide
means not only to describe but also to maintain and control the communications
media they describe.

Bibliography
Barthes, Roland. Elements of Semiology. Trans. Annette Lavers
and Colin Smith. New York: Hill and Wang, 1967.
Buzzetti, Dino. "Digital Representation and the Text Model."
New Literary History 33.1 (2002): 61-88.
Caton, Paul. "Markup's Current Imbalance." Markup
Languages: Theory and Practice 3.1 (2001).
Caton, Paul. "LMNL Matters?" Proceedings of Extreme Markup
Languages 2005, Montréal, Québec, August 2005. <http://www.idealliance.org/papers/extreme/proceedings/author-pkg/2005/Caton01/EML2005Caton01.zip>
Durand, David, Steven J. DeRose, and Elli Mylonas. "What
Should Markup Really Be? Applying Theories of Text to the
Design of Markup Systems." ACH/ALLC 1996. Available from
<http://cs-people.bu.edu/dgd/ach96_talk/Redefining_long.html>
Greenspan, Stanley I., and Stuart G. Shanker. The First Idea:
How Symbols, Language, and Intelligence Evolved from our
Primate Ancestors to Modern Humans. Da Capo Press, 2004.
Maturana, Humberto R., and Francisco Varela. The Tree of
Knowledge: The Biological Roots of Human Understanding.
1987. Shambhala, 1992.
Piez, Wendell. "Beyond the 'Descriptive vs. Procedural'
Distinction." Proceedings of Extreme Markup Languages 2001,
Montréal, Québec, August 2001. <http://www.idealliance.org/papers/extreme/proceedings/html/2001/Piez01/EML2001Piez01.html>
Piez, Wendell. "Human and Machine Sign Systems."
Proceedings of Extreme Markup Languages 2002, Montréal,
Québec, August 2002. Ed. B. T. Usdin and S. R. Newcomb.
2002. <http://www.idealliance.org/papers/extreme/proceedings/html/2002/Piez01/EML2002Piez01.html>
Renear, Allen. "The Descriptive/Procedural Distinction is
Flawed." Extreme Markup Languages 2000, Montréal, Québec,
August 2000. Reprinted in Markup Languages: Theory and
Practice. 2000.
Renear, Allen H., David Dubin, C. M. Sperberg-McQueen, and
Claus Huitfeldt. "Towards a Semantics for XML Markup."
Proceedings of the 2002 ACM Symposium on Document
Engineering, McLean, VA, November 2002. Ed. Richard Furuta,
Jonathan I. Maletic and Ethan V. Munson. New York:
Association for Computing Machinery, 2002. 119-126.
Saussure, Ferdinand de. Course in General Linguistics. Trans.
Wade Baskin. 1916. New York: McGraw-Hill, 1966.
Simon, Herbert. The Sciences of the Artificial. Cambridge, MA:
MIT Press, 1996.
"A Gentle Introduction to SGML." Guidelines for Electronic
Text Encoding and Interchange. Ed. C. M. Sperberg-McQueen
and Lou Burnard. 1994. Chicago: TEI Consortium, 1997. 13-36.
<http://www.isgmlug.org/sgmlhelp/g-index.htm>
Sperberg-McQueen, C. M., David Dubin, Claus Huitfeldt, and
Allen H. Renear. "Drawing Inferences on the Basis of Markup."
Proceedings of Extreme Markup Languages 2002, Montréal,
Québec, August 2002. Ed. B. T. Usdin and S. R. Newcomb.
2002. <http://www.idealliance.org/papers/extreme/proceedings/html/2002/CMSMcQ01/EML2002CMSMcQ01.html>


Conference Info

ADHO - 2007

Hosted at University of Illinois, Urbana-Champaign

Urbana-Champaign, Illinois, United States

June 2, 2007 - June 8, 2007