Featured Abstract: March 12

Check in frequently this week to view featured abstracts, leading up to the symposium! We welcome your comments.

Featured Abstract: “Tagging in the cloud. A data model for collaborative markup”

Jan Christoph Meister, University of Hamburg

This paper discusses the data model underlying CLÉA, short for “Collaborative Literature Exploration and Annotation”, a Google DH Award-funded project based on the CATMA software developed at Hamburg University. The goal of CLÉA is to build a web-based annotation platform supporting multi-user, multi-instance, non-deterministic (and, if required, even contradictory) markup of literary texts in a TEI-conformant approach. Apart from technical considerations, this approach to markup has some more fundamental consequences: First, when one and the same text is marked up from different functional perspectives, markup itself starts to become fluid, allowing researchers to aggregate markup just as we aggregate other meta-texts, namely according to their specific research interest. Second, and in addition to the functional enhancement, there is also a social aspect to this new approach: the production of markup becomes a team effort. This paradigm shift from individual expert annotation to an “open workgroup”, crowd-sourced approach is based on what we call a “one-to-many” data model that can be implemented using “cloud” technology. “Tagging in the cloud”, therefore, combines three new aspects of text markup – the social, the technological, and the conceptual.
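As a rough illustration of what a “one-to-many” model might look like in practice, here is a minimal sketch in which many tag instances, created by many users, all point into a single shared text by character offsets. The class and field names are invented for the example and do not reproduce CLÉA’s or CATMA’s actual data model.

```python
# Minimal sketch of a "one-to-many" standoff annotation model, assuming
# (as one plausible reading of the abstract) that many tag instances from
# many users all point into a single shared source text by character
# offsets. Class and field names are invented; they are not CLEA's or
# CATMA's actual API.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Tag:
    user: str    # who created the annotation
    name: str    # e.g. "metaphor", "narrator", "irony"
    start: int   # offset into the shared text (inclusive)
    end: int     # offset into the shared text (exclusive)


@dataclass
class AnnotatedText:
    text: str
    tags: list[Tag] = field(default_factory=list)

    def add(self, tag: Tag) -> None:
        # Tags may overlap or even contradict one another;
        # no single hierarchy is imposed on the text.
        self.tags.append(tag)

    def by_user(self, user: str) -> list[Tag]:
        # Aggregate markup per annotator (or, analogously, per research interest).
        return [t for t in self.tags if t.user == user]


doc = AnnotatedText("Call me Ishmael. Some years ago ...")
doc.add(Tag("alice", "narrator", 0, 15))
doc.add(Tag("bob", "irony", 0, 33))   # overlapping, possibly contradictory
print([t.name for t in doc.by_user("alice")])
```

Because every tag is only a reference into the shared text, annotations from different users can coexist without forcing the text into one hierarchy, and they can be filtered or aggregated after the fact.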

Featured Abstract: March 8

Each Monday and Thursday, an abstract from one of the symposium participants will be posted to facilitate discussion.  We welcome your comments!

Featured Abstract: “On the Value of Comparing Truly Remarkable Texts”

Gregor Middell, University of Würzburg

Looking at the comparatively short history of editions in the digital medium, one notices that those projects which engage most critically with the edited text invariably end up pushing the boundaries of established practices in text modeling and encoding. This has been the case, for example, for the HyperNietzsche edition, which developed its own genetic XML markup dialect, and for the Wittgenstein edition, which went as far as developing its own markup language. The ongoing genetic edition of Goethe’s Faust is no different inasmuch as it makes use of common XML-based encoding practices and de facto standards such as the guidelines of the Text Encoding Initiative, but at the same time has felt the need to transcend them in order to cope with the inherent complexity of modeling its subject matter. Rooted in the tradition of German editorial theory, the Faust edition strives for a strict conceptual distinction between the material evidence of the text’s genesis as found in the archives on the one hand and the interpretative conclusion drawn from this evidence on the other, the latter eventually giving rise to a justified hypothesis of how the text came into being. These two perspectives on the edited text, though complementary, are structured very differently and, moreover, cannot be modeled in their entirety via context-free grammars. It is therefore already hard to encode, validate and process a single perspective concisely and efficiently via XML, let alone both of them in an integrated fashion. Given this problem and the need to solve it in order to meet the expectations that scholarly users have of an edition which, in the end, claims to be “historical-critical”, the Faust project turned to multiple, parallel encodings of the same textual data, each describing the textual material from one of the desired perspectives. The different encodings then necessarily have to be correlated, resulting not in the common compartmentalized model of an edited text but in an integrated, inherently more complex one. In the work of the Faust project, this crucial task of correlating perspectives on a text is achieved semi-automatically by means of computer-aided collation and a markup document model supporting arbitrarily overlapping standoff annotations. The presentation of this editorial workflow as well as its underlying techniques and models might not only be of interest in its own right; it might also contribute to answering a broader question: can we gradually increase the complexity of our notion of “what text really is” while still being able to rely on encoding practices widely endorsed by the DH community today?
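To make the idea of correlating parallel encodings a little more tangible, here is a deliberately simplified sketch: two encoding perspectives are reduced to standoff ranges over one shared base text, and they are “correlated” by checking where their ranges coincide. The names, offsets, and the trivial overlap test are illustrative assumptions; the Faust project’s actual document model and collation workflow are far richer.

```python
# A rough sketch, under stated assumptions, of correlating two parallel
# encodings by anchoring both as standoff ranges on one shared base text
# and pairing ranges that cover the same material. Names, offsets, and
# the trivial overlap test are illustrative; they do not reproduce the
# Faust project's actual document model or collation pipeline.
from dataclasses import dataclass


@dataclass(frozen=True)
class Annotation:
    layer: str   # e.g. "documentary" (material evidence) or "textual" (interpretation)
    name: str    # e.g. "ms_line", "verse"
    start: int   # offset into the shared base text (inclusive)
    end: int     # offset into the shared base text (exclusive)


base = "Habe nun, ach! Philosophie, Juristerei und Medizin ..."

documentary = [Annotation("documentary", "ms_line", 0, 27)]
textual = [Annotation("textual", "verse", 0, 50)]


def overlapping(a: Annotation, b: Annotation) -> bool:
    # Ranges from the two perspectives may cross freely; two ranges
    # overlap if neither ends before the other starts.
    return a.start < b.end and b.start < a.end


# Correlate the two perspectives by pairing ranges that share text.
pairs = [(a, b) for a in documentary for b in textual if overlapping(a, b)]
print(pairs)
```

The point of the sketch is only that neither perspective has to be squeezed into the other’s hierarchy: both reference the same base text, and their relationship is computed rather than hard-wired into a single tree.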

Featured Abstract: March 5

Each Monday and Thursday, an abstract from one of the symposium participants will be posted to facilitate discussion.  We welcome your comments!

Featured Abstract: “Comparing representations of and operations on overlap”

Claus Huitfeldt, University of Bergen

Overlapping document structures have been studied by markup theorists for more than twenty years. A large number of solutions have been proposed. Some of the proposals are based on XML, others not. Some are proposals for the use of alternate serial forms or data models, and some for stand-off markup. Algorithms for transformations between the different forms have also been proposed. Even so, there are few systematic comparative studies of the various proposals, and there seems to be little consensus on which approach is best.
The aim of the MLCD Overlap Corpus (MOC) is to make it easier to compare the different proposals by providing concrete examples of documents marked up according to a variety of proposed solutions. The examples are intended to range from small, constructed documents to full-length, real texts. We believe that the provision of such different parallel representations of the same texts in various formats may serve a number of purposes.
Many of the proposals for markup of overlapping structures are not fully worked out, are poorly documented, or are known only from scattered examples. Encoding a larger body of different texts according to each of the proposed solutions may help to resolve unclarities or shed new light on difficulties with the proposals themselves.
Running or developing software to perform various operations on the same data represented in different forms may also help in finding out which forms are optimal for which operations. Some operations, even though well understood for non-overlapping data, may turn out not to be clearly defined for overlapping data.
Finally, a parallel corpus may serve as reference data for work on translations between the various formats, for testing conversion algorithms, and for developing performance tests for software.
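To give a concrete, if toy-sized, flavour of what such parallel representations involve, the sketch below holds one small overlapping structure (phrases crossing a verse-line boundary) both as a flat stream of start/end events and as standoff ranges, with a trivial conversion from one form to the other. The event and range formats here are invented for illustration and are not any of MOC’s actual encodings.

```python
# A toy instance of parallel representations of the same overlapping
# structure: phrases that cross a verse-line boundary, held both as a
# flat stream of start/end events and as standoff ranges, with a small
# conversion between the two forms. The tuple formats are invented for
# the example; they are not MOC's actual formats.

text = ("Scorn not the sonnet; Critic, you have frowned, "
        "Mindless of its just honours")

# Representation 1: start/end events at character offsets. The second
# phrase starts inside line 1 and ends with line 2, so the two element
# types overlap rather than nest.
events = [
    (0, "start", "line"), (0, "start", "phrase"),
    (21, "end", "phrase"), (22, "start", "phrase"),
    (47, "end", "line"), (48, "start", "line"),
    (76, "end", "line"), (76, "end", "phrase"),
]

# Representation 2: standoff ranges, derived from the event stream.
def events_to_ranges(events):
    open_at, ranges = {}, []
    for offset, kind, name in events:
        if kind == "start":
            open_at.setdefault(name, []).append(offset)
        else:
            ranges.append((name, open_at[name].pop(), offset))
    return sorted(ranges, key=lambda r: (r[1], r[2]))

for name, start, end in events_to_ranges(events):
    print(f"{name:6s} {start:2d}-{end:2d}  {text[start:end]!r}")
```

Even in this miniature case, the event-to-range conversion already has to decide how start and end events are paired when an element type reopens, which is exactly the kind of question a shared corpus would make comparable across proposals.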

Featured Abstract: March 1

Each Monday and Thursday, an abstract from one of the symposium participants will be posted to facilitate discussion.  We welcome your comments!

Featured Abstract: “Modeling Collaboration”

Julia Flanders, Brown University

If collaboration, in practical terms, is predicated on the compatibility of data (expressed variously and debatably as interoperability or interchange), then we can also say that it requires a kind of meta-modeling: that is, a clear expression of the differences and similarities between models. Tools like the TEI customization mechanism offer one approach to this kind of meta-modeling, but many questions require more detailed consideration: the level of precision at which this meta-modeling must take place, the specific vectors of similarity to be expressed, and the meaning or motivations of our customizations. Is it possible to use a mechanism of this kind in a rigorous way to support more effective collaboration?
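One very reduced way to picture what such meta-modeling could mean in practice is to compare, mechanically, what two customized schemas allow and where they diverge. The sketch below does this for nothing more than element names; it is an illustrative assumption, not the TEI customization mechanism itself, which expresses much richer information (content models, attributes, documented intent), and the two element sets are invented rather than drawn from real projects.

```python
# A deliberately reduced sketch of one "vector of similarity" between two
# encoding models: comparing which elements each customization allows.
# Real TEI customizations (ODD files) also constrain attributes, content
# models and documentation; the element sets below are invented stand-ins,
# not actual project schemas.

project_a = {"p", "div", "persName", "placeName", "said"}
project_b = {"p", "div", "persName", "seg", "quote"}

shared = project_a & project_b   # markup the two projects can exchange directly
only_a = project_a - project_b   # distinctions that would be lost going A -> B
only_b = project_b - project_a   # distinctions that would be lost going B -> A

print("shared:   ", sorted(shared))
print("only in A:", sorted(only_a))
print("only in B:", sorted(only_b))
```

The harder questions the abstract raises, such as the level of precision required and the motivations behind a customization, are precisely what a purely mechanical comparison of this kind cannot capture on its own.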