Featured Abstract: March 14

Check in frequently this week to view featured abstracts, leading up to the symposium! We welcome your comments.

Featured Abstract: “A theoretically-rich approach to teaching to model”

Elena Pierazzo, King’s College London

Modelling is at the heart of most of my teaching: when teaching XML, XSLT, and TEI within an MA in Digital Humanities, you need to provide the students with intellectual challenges as well as technical skills. In fact, modelling can be seen as the intellectual activity which lies at the base of any computational effort, namely the methods and the languages we invent to communicate our understanding of a particular cultural object (such as a text, a statue, a piece of music) to the computer and, via the computer, to the users. Effective modelling depends on a deep analysis and understanding of the object to be modelled, so it is also essential to encourage and train students’ analytical skills as part of introducing them to modelling; the provision of theoretical frameworks within which to conduct the analysis and subsequent modelling has proven to be a highly successful approach with MA and PhD students. The case study to be presented here will be the modelling of texts of manuscripts and of the transmission of texts through centuries, materials and people. The transmission of texts can be seen as an act of communication, and so communication and linguistic theories (particularly those of Shannon-Weaver 1948/63, Berlo 1960, Saussure 1961 and Jakobson 1960) can cast new light on the way we analyse, model and understand the texts contained in manuscripts, as well as their relationship with the author’s intentions and the reader’s experience. The use of such a complex theoretical framework has proven to help students move conceptually from the empirical to the abstract, a process that is fundamental for modelling. My talk will present some considerations and examples of analytical and modelling activities applied to text transmission that have been used in the classroom at King’s College London.
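
As a purely illustrative sketch (not drawn from the talk itself), the communication-theoretic framing can be made concrete by treating each manuscript witness as a stage in a noisy channel between author and reader; all names and values below are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch: the Shannon-Weaver view applied to text transmission.
# The author is the information source, each act of copying is a channel,
# and divergent readings are the "noise" the channel introduces.

@dataclass
class Witness:
    siglum: str                     # conventional label of the manuscript
    carrier: str                    # e.g. "parchment codex"
    exemplar: Optional["Witness"]   # the witness it was copied from, if known
    variants: List[str] = field(default_factory=list)  # readings diverging from the exemplar

archetype = Witness("A", "parchment codex", None)
copy_b = Witness("B", "paper codex", archetype, ["omits line 12", "gloss added in margin"])

def transmission_chain(w: Witness) -> List[str]:
    """Follow the channel back to its source, listing witnesses oldest first."""
    chain = []
    node: Optional[Witness] = w
    while node is not None:
        chain.append(node.siglum)
        node = node.exemplar
    return list(reversed(chain))

print(transmission_chain(copy_b))  # ['A', 'B']
```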

Featured Abstract: March 14

Check in frequently this week to view featured abstracts, leading up to the symposium! We welcome your comments.

Featured Abstract: “Taking Modeling Seriously”

Allen Renear, University of Illinois, Urbana-Champaign

There are many kinds of modeling. I am concerned here with the sort of modeling that emphasizes theoretical or epistemic objectives, modeling, that is, that purports to provide an account of how things are in a domain of interest. The demands of this sort of modeling are exacting, and not to everyone’s taste. But the rewards in insight and understanding are worth the effort. This sort of modeling may sound like straightforward philosophical ontology development, and of the usual naïve and realistic sort. Perhaps in a sense it is. However, my focus throughout will not be on self-declared ontologies or ontology design, but rather on examples that have more practical objectives (such as systems design) and are typically carried out in familiar graphic conceptual modeling languages, such as entity relationship diagrams and UML class diagrams. It is these ordinary modeling efforts I will be taking seriously, and in doing that I will thereby be doing some serious modeling of my own. In my experience the stresses and paradoxes latent in familiar, unpretentious conceptual modeling give us a natural, manageable start in thinking through some of the hardest problems in developing a formal understanding of cultural objects and relationships. In the end, however, I will argue, as you probably suspect, that taking modeling seriously requires specific logic-based formal methods. Serious modeling takes modeling more seriously than it takes itself.
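
To make “logic-based formal methods” slightly more concrete, here is a small invented example of the kind of move the abstract gestures at (the constraint is mine, not Renear’s): an ordinary ER/UML association, say “every Edition is an edition of exactly one Work”, restated as first-order axioms whose consequences can then be examined.

```latex
% Invented illustration: an ER/UML cardinality constraint as first-order axioms.
\forall x \,\bigl( \mathit{Edition}(x) \rightarrow
    \exists y \,( \mathit{Work}(y) \wedge \mathit{editionOf}(x, y) ) \bigr)
\qquad
\forall x \,\forall y \,\forall z \,\bigl( \mathit{editionOf}(x, y) \wedge
    \mathit{editionOf}(x, z) \rightarrow y = z \bigr)
```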

Featured Abstract: March 13

Check in frequently this week to view featured abstracts, leading up to the symposium! We welcome your comments.

Featured Abstract: “Modeling: Perspectives, Objectives, and Context”
Daniel Pitti, Institute for Advanced Technology in the Humanities, University of Virginia

Humanists, both scholars and cultural heritage professionals (archivists, librarians, museum curators, and keepers of sites and monuments), share a common focus on artifacts: objects created by humans that provide the historical evidence for our understanding of what it means to be human. Cultural heritage professionals focus on preserving and facilitating access to selected artifacts, while scholars study the artifacts from a variety of perspectives and attempt to analyze and understand them.

Humanists have turned to (and become increasingly comfortable with) information technologies for practical reasons: the technologies allow them to achieve particular professional or scholarly objectives. Broadly speaking, those are preservation and access for the cultural heritage professionals, and analysis and understanding for the scholars. To facilitate achieving their objectives, the scholars and professionals seek to represent an artifact or class of artifacts, whether through descriptive representation (for example, a catalog record) or content representation (for example, a TEI-encoded text). The ways in which any given artifact or class of artifacts can be represented are unlimited, but the mission of the cultural heritage professional and the disciplinary perspective of the scholar narrow the possible representations.
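
As an illustration only (the examples and values below are mine, not Pitti’s), the two kinds of representation can be contrasted in a few lines: a descriptive representation records facts about an artifact, while a content representation encodes the artifact itself.

```python
# Hypothetical examples of the two representational strategies named above.

descriptive_record = {            # roughly what a catalog record captures
    "title": "Leaves of Grass",
    "creator": "Whitman, Walt",
    "date": "1855",
    "holding_institution": "Example Library",   # invented value
}

content_representation = """\
<text xmlns="http://www.tei-c.org/ns/1.0">
  <body>
    <lg>
      <l>I celebrate myself,</l>
      <l>And what I assume you shall assume,</l>
    </lg>
  </body>
</text>
"""  # a fragment of a TEI-style encoding of the artifact's content

# Either representation narrows the unlimited ways the artifact could be
# described to those that serve a particular mission or disciplinary question.
```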

Modeling or representing artifacts digitally involves philosophical issues (metaphysical, epistemological, and even ethical ones) as well as quite practical issues. A particular technology’s capacity to represent artifacts will limit or direct its application, making it a better or worse servant to our objectives. Economy and efficiency must be factors, both in the processing efficiency of a chosen technology and in the financial and administrative economy of creating and maintaining the representation data. Social context and objectives also have an impact on the modeling design and process. Representations created and maintained by a lone scholar, by a small group of two or three working together closely, or by a large distributed community each present their own specific design challenges. Finally, for both scholars and cultural heritage professionals, the desired audience must be an important social factor to be considered in the modeling process.

Featured Abstract: March 13

Check in frequently this week to view featured abstracts, leading up to the symposium! We welcome your comments.

Featured Abstract: “Analyzing linguistic variation: From corpus query towards feature discovery”

Elke Teich, Universität des Saarlandes, Saarbrücken, Germany

In the study of linguistic variation (dialect, sociolect, register), we can distinguish two types of analytical situation: the feature-centric and the variable-centric. In a feature-centric perspective, we start from a given feature (or set of features) and want to derive the variables (e.g., place, social group, user group) associated with that feature or set of features. This is a typical situation in dialect studies, where we are interested in the geographical distribution of a particular phonetic realization (e.g., +/- rhoticity and British dialect areas). In a variable-centric perspective, we start from a given variable (e.g., a register) and want to determine the features that are typically associated with that variable. In the feature-centric perspective, we obtain the necessary information (typically a frequency distribution of a feature) by employing a corpus query approach, by means of which we extract the instances of a given feature from an appropriate set of language data. In a variable-centric perspective, we are faced with the problem that we may not know a priori what the relevant features are; instead, we have to find ways of discovering linguistic features potentially suitable for analysis.
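
A minimal sketch of the feature-centric direction, with a toy corpus and a toy feature pattern (both invented here, standing in for a real tagged or parsed corpus): extract instances of the feature and derive its frequency distribution over a variable such as register.

```python
import re
from collections import Counter

# Toy corpus: (register, sentence) pairs, invented for illustration.
corpus = [
    ("scientific", "We can therefore conclude that the effect is significant."),
    ("scientific", "It can be shown that the model converges."),
    ("fiction",    "She said that she would come back tomorrow."),
]

# Toy lexico-grammatical feature: modal + passive ("can be shown", "can be tested", ...).
feature = re.compile(r"\bcan be \w+ed\b|\bcan be shown\b")

freq_by_register = Counter()
for register, sentence in corpus:
    freq_by_register[register] += len(feature.findall(sentence.lower()))

print(freq_by_register)  # Counter({'scientific': 1, 'fiction': 0})
```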

In my presentation, I will illustrate these two perspectives with examples from the study of register variation in the scientific domain, looking at selected lexico-grammatical features and the variables of discourse field (scientific discipline) and time (diachronic evolution of registers) (cf. Teich & Fankhauser, 2010; Degaetano et al., 2011; Degaetano & Teich, 2011).

Featured Abstract: March 13

Check in frequently this week to view featured abstracts, leading up to the symposium! We welcome your comments.

Featured Abstract: “Modelling as a centre of Practice and Pedagogy”
Susan Schreibman, Trinity College Dublin

Last year I designed an MPhil in Digital Humanities and Culture for Trinity College Dublin. We are now in the second semester of the first year. Unlike teaching a single DH course that centres on a specific area (from introductory courses to more specialised ones in text encoding, digital scholarly editing, web technologies, etc.), teaching over the longer timescale of a full year allows for significant cross-fertilisation in understanding how disparate technologies, methodologies, and theories interrelate to comprise the salient core of this new and somewhat abstract discipline of humanities computing.

Many of us have been in this field long enough to know that the technologies we teach our students today will be surpassed and replaced by the yet-to-be-invented. What is more lasting, however, is an understanding of how to model the objects of our contemplation: from the narrative arc of a thematic research collection, to a TEI-encoded document, to a relational database. Models serve as an abstraction of an analogue object and its relationships to other objects, as well as a representation of what we think is important about it. By teaching our students how to model, we give them the tools to represent and re-present the yet-to-be-encountered.
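
A small invented sketch of that claim: of everything that could be said about an analogue object, a model records only the properties and relationships judged important for the purpose at hand. All names and values here are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

# Invented example: one way to model a letter in a thematic research collection.
# The chosen fields are the abstraction; everything not listed (paper stock,
# ink colour, folds, marginalia) is deliberately left out of this model.

@dataclass
class Letter:
    identifier: str
    sender: str
    recipient: str
    date: str
    mentions: List[str] = field(default_factory=list)  # relationships to other objects

l1 = Letter("L001", "Jane Doe", "John Smith", "1916-04-20", mentions=["L002"])
print(l1.mentions)  # the modelled relationship, ready to be represented and re-presented
```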

The question then remains: how do we teach this interrelatedness of things and their properties? Should there be a knowledge representation course which covers modelling more abstractly, or should it be covered in context, when teaching subjects such as relational databases, TEI encoding, or virtual world construction? Should knowledge representation be the theme that binds the disparate strands of DH together, or merely a theme among others? If knowledge representation, as Willard McCarty suggests, is the coherent or cohesible practice that binds all of DH together, then it follows that KR must be at the centre of our pedagogy.

Featured Abstract: March 12

Check in frequently this week to view featured abstracts, leading up to the symposium! We welcome your comments.

Featured Abstract: “Text, Intertext, and Context: Modeling a Map of Medieval Scholarly Practices”

Malte Rehbein, University of Würzburg

Medieval scholarly practices can be fully understood “erst im Zusammenspiel verschiedener methodischer Zugänge und nur unter Berücksichtigung der Gesamtüberlieferung” (only through the interplay of various methodological approaches and with consideration of the whole written tradition), as Claudine Moulin has pointed out in her analysis of early medieval vernacular glosses [1, p. 76]. Understanding them hence requires a holistic approach that takes into account not only texts but also intertextual relations and contextual information: historical, paleographic, codicological, and bibliographical, together with an appreciation of the processes of textual production and usage. Representing such comprehensive information is challenging, as it requires the interplay of different data models. This presentation discusses this challenge through the case study of the 8th-century “Würzburg Saint Matthew” [2]: a gospel text with glosses and commentaries compiled from more than forty different sources, making it an intriguing, though complicated, object of study [3]. The presentation further outlines considerations on the development of models for presenting this information within a research environment and concludes with an outlook toward a comprehensive visual map of medieval scholarly practices.
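
As a hedged sketch only (the structures and values are invented here and are not the project’s actual data model), the interplay of text, intertext and context might be pictured as a base text span, glosses anchored to it, and links from each gloss to its source and to paleographic context.

```python
from dataclasses import dataclass

@dataclass
class TextSpan:
    folio: str       # e.g. "12r" (invented locator)
    line: int
    content: str     # the gospel text on that line

@dataclass
class Gloss:
    target: TextSpan  # intratextual anchor
    content: str
    source: str       # intertextual relation to one of the forty-odd sources
    hand: str         # contextual, paleographic attribution

base = TextSpan("12r", 3, "liber generationis iesu christi")          # Mt 1:1
g1 = Gloss(base, "id est euangelium", source="patristic commentary",  # invented gloss
           hand="Hand B")
print(f"{g1.content!r} glosses {g1.target.content!r}, drawing on {g1.source}")
```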

[1] Moulin, Claudine (2009): Paratextuelle Netzwerke: Kulturwissenschaftliche Erschließung und soziale Dimensionen der althochdeutschen Glossenüberlieferung. In: Gerhard Krieger (ed.): Verwandtschaft, Freundschaft, Bruderschaft. Soziale Lebens- und Kommunikationsformen im Mittelalter, 56–77.
[2] Rehbein, Malte (2011): “A Data Model for Visualising Textuality — The Würzburg Saint Matthew”. In: Digital Humanities 2011: Conference Abstracts. Ed. The Alliance of Digital Humanities Organizations. Stanford: Stanford University Library, 204–205.
[3] Cahill, Michael (2002): The Würzburg Matthew: Status Quaestionis. In: Peritia 16, 1–25.

Featured Abstract: March 12

Check in frequently this week to view featured abstracts, leading up to the symposium! We welcome your comments.

Featured Abstract: “It’s all about Integration and Conceptual Change”

Elisabeth Burr, Universität Leipzig, Germany

In Romance Philology, the ordering of information extracted from sources into knowledge domains has always been part of the research process. However, the ordering of index cards containing such information according to certain categories, and the setting up of relations between them, has never been regarded as knowledge modelling or as the building of ontologies. Similar things could be said about ‘data’, ‘filtering sources for information’ or ‘markup’. Still today, doing research and presenting research results tend to be seen more as mental processes than as disciplined activities which need to be made explicit and taught. This state of affairs has serious implications for students. Not only is doing research and writing a ‘disciplined’ academic paper not widely taught, but methods of research and of good academic practice are also not conceived of as being of epistemological interest. Instead, they are considered to be mere skills which can be acquired in courses offered by non-academic service centres. Furthermore, as the computer is still looked upon and used as if it were just a modern form of the typewriter, no need is seen to teach students, in academic courses, a meaningful way of exploiting computer technologies for their research and writing. If anything, students are encouraged to believe that writing an academic paper is all about ideas, creativity and genius, and that the structure and layout of a paper, the consistency of citations or the integrity of bibliographies, among other things, are mere formalities.
In order to change the concept of doing research and of academic paper writing, and to foster a meaningful exploitation of computer technologies, I have implemented in one of my courses in Romance linguistics a project-oriented approach in which methods and tools of, and questions posed by, the Digital Humanities play an important role. By involving students in the creation of a digital version of a text and showing them how to apply a TEI schema with the help of an XML editor like oXygen, or by getting them to archive information about sources in a database like the one EndNote provides, students can actually learn a lot about texts, about data and styles, and about systematization and consistency. Furthermore, the linking of information and the building of ontologies makes many aspects of scholarly work explicit to them. If the work they are doing contributes, moreover, to a portal like the one which has been created within the framework of the project (see “Von Leipzig in die Romania”, http://www.culingtec.uni-leipzig.de/JuLeipzigRo/) and which can itself be used to write TEI-compliant academic papers, they not only get the chance to develop a different concept of doing research and writing academic papers, but also to conceive of the computer as a device for the manipulation of systematized data, and thus for the modelling of knowledge, rather than as a mere high-tech typewriter.
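
A tiny illustrative sketch (not part of the course materials themselves) of the point about systematization and consistency: once information about sources is archived as structured data rather than typed by hand, citations and bibliographies can be generated consistently from it. The record and the citation style below are invented.

```python
# Invented example record; the field names follow no particular database schema.
source = {
    "author": "Saussure, Ferdinand de",
    "year": 1916,
    "title": "Cours de linguistique générale",
    "place": "Paris",
    "publisher": "Payot",
}

def cite(rec: dict) -> str:
    """Render one structured record in a simple author-year citation style."""
    return f'{rec["author"]} ({rec["year"]}): {rec["title"]}. {rec["place"]}: {rec["publisher"]}.'

print(cite(source))
# Saussure, Ferdinand de (1916): Cours de linguistique générale. Paris: Payot.
```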

Featured Abstract: March 12

Check in frequently this week to view featured abstracts, leading up to the symposium! We welcome your comments.

Featured Abstract: “Tagging in the cloud. A data model for collaborative markup”

Jan Christoph Meister, University of Hamburg

This paper discusses the data model underlying CLÉA, short for “Collaborative Literature Exploration and Annotation”, a Google DH Award-funded project based on the CATMA software developed at Hamburg University. The goal of CLÉA is to build a web-based annotation platform supporting multi-user, multi-instance, non-deterministic (and, if required, even contradictory) markup of literary texts in a TEI-conformant approach. Apart from technical considerations, this approach to markup has some more fundamental consequences. First, when one and the same text is marked up from different functional perspectives, markup itself starts to become fluid, allowing researchers to aggregate markup just as we aggregate other meta-texts, namely according to their specific research interest. Second, in addition to the functional enhancement, there is also a social aspect to this new approach: the production of markup becomes a team effort. This paradigm shift from individual expert annotation to an “open workgroup”, crowd-sourced approach is based on what we call a “one-to-many” data model that can be implemented using “cloud” technology. “Tagging in the cloud”, therefore, combines three new aspects of text markup: the social, the technological, and the conceptual.
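
A hedged sketch of the “one-to-many” idea as described in the abstract (this is my simplification, not the actual CLÉA/CATMA schema): a single source text and any number of independent, user-owned tag instances pointing into it by character offsets, so that annotations may overlap or even contradict one another without touching the text or each other.

```python
from dataclasses import dataclass
from typing import List, Tuple

TEXT = "The quick brown fox jumps over the lazy dog."

@dataclass
class TagInstance:
    user: str
    tag: str
    start: int   # character offsets into TEXT
    end: int

markup = [
    TagInstance("annotator_1", "metaphor",     4, 19),  # "quick brown fox"
    TagInstance("annotator_2", "not-metaphor", 4, 19),  # contradictory reading, kept as-is
    TagInstance("annotator_2", "event",       20, 30),  # overlapping freely with others
]

def annotations_for(span: Tuple[int, int]) -> List[TagInstance]:
    """Aggregate all users' markup that intersects a span of the one text."""
    s, e = span
    return [t for t in markup if t.start < e and t.end > s]

print([f"{t.user}:{t.tag}" for t in annotations_for((4, 19))])
# ['annotator_1:metaphor', 'annotator_2:not-metaphor']
```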

Featured Abstract: March 8

Each Monday and Thursday, an abstract from one of the symposium participants will be posted to facilitate discussion.  We welcome your comments!

Featured Abstract: “On the Value of Comparing Truly Remarkable Texts”

Gregor Middell, University of Würzburg

Looking at the comparatively short history of editions in the digital medium, one notices that those projects which are highly critical about the edited text invariably end up pushing the boundaries of established practices in text modeling and encoding. For example, this has been the case for the HyperNietzsche edition, which developed its own genetic XML markup dialect, or for the Wittgenstein edition, which went as far as developing its own markup language. The ongoing genetic edition of Goethe’s Faust is no different, inasmuch as it makes use of common XML-based encoding practices and de facto standards like the guidelines of the Text Encoding Initiative, but at the same time felt the need to transcend them in order to cope with the inherent complexity of modeling its subject matter. Rooted in the tradition of German editorial theory, the Faust edition strives for a strict conceptual distinction between the material evidence of the text’s genesis as found in the archives on the one hand and the interpretative conclusions drawn from this evidence on the other, the latter eventually giving rise to a justified hypothesis of how the text came into being. These two perspectives on the edited text, though complementary, are structured very differently and, moreover, cannot be modeled via context-free grammars in their entirety. It is therefore already hard to encode, validate and process a single perspective via XML concisely and efficiently, let alone both of them in an integrated fashion. Given this problem, and the need to solve it in order to meet the expectations of scholarly users towards an edition which in the end claims to be “historical-critical”, the Faust project turned to multiple, parallel encodings of the same textual data, each describing the textual material from one of the desired perspectives. The different encodings then necessarily have to be correlated, resulting not in the common compartmentalized model of an edited text but in an integrated, inherently more complex one. In the work of the Faust project, this crucial task of correlating perspectives on a text is achieved semi-automatically by means of computer-aided collation and a markup document model supporting arbitrarily overlapping standoff annotations. The presentation of both this editorial workflow and its underlying techniques and models might not only be of interest in its own right; it might also contribute to answering a broader question: can we gradually increase the complexity of our notion of “what text really is” while still being able to rely on encoding practices widely endorsed by the DH community today?
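
As a hedged, much-simplified sketch (this is not the Faust project’s actual model; the range names are invented), the core idea of correlating perspectives as standoff annotations can be pictured as two ranges over one character stream that are allowed to cross, because neither the documentary nor the textual hierarchy contains the other.

```python
from dataclasses import dataclass

TEXT = "Habe nun, ach! Philosophie, Juristerei und Medizin"

@dataclass
class Range:
    perspective: str   # "textual" or "documentary"
    name: str
    start: int         # character offsets into TEXT
    end: int

ranges = [
    Range("textual",     "verse",           0, 27),  # "Habe nun, ach! Philosophie,"
    Range("documentary", "later_ink_layer", 15, 38), # hypothetical revision span
]

def cross(a: Range, b: Range) -> bool:
    """True if the ranges intersect without one containing the other."""
    intersect = a.start < b.end and b.start < a.end
    contains = (a.start <= b.start and b.end <= a.end) or \
               (b.start <= a.start and a.end <= b.end)
    return intersect and not contains

print(cross(ranges[0], ranges[1]))  # True: the two perspectives genuinely overlap
```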

Featured Abstract: March 5

Each Monday and Thursday, an abstract from one of the symposium participants will be posted to facilitate discussion.  We welcome your comments!

Featured Abstract: “Comparing representations of and operations on overlap”

Claus Huitfeldt, University of Bergen

Overlapping document structures have been studied by markup theorists for more than twenty years. A large number of solutions have been proposed. Some of the proposals are based on XML, others not. Some are proposals for the use of alternative serial forms or data models, and some for stand-off markup. Algorithms for transformations between the different forms have also been proposed. Even so, there are few systematic comparative studies of the various proposals, and there seems to be little consensus on what the best approach is.
The aim of the MLCD Overlap Corpus (MOC) is to make it easier to compare the different proposals by providing concrete examples of documents marked up according to a variety of proposed solutions. The examples are intended to range from small, constructed documents to full-length, real texts. We believe that the provision of such different parallel representations of the same texts in various formats may serve a number of purposes.
Many of the proposals for markup of overlapping structures are not fully worked out, or not well documented, or known only from scattered examples. Encoding a larger body of different texts according to each of the proposed solutions may help resolve unclarities or shed new light on difficulties with the proposals themselves.
Running or developing software to perform various operations on the same data represented in different forms may also help in finding out which forms are optimal for which operations. Some operations, even though well understood for non-overlapping data, may turn out not to be clearly defined for overlapping data.
Finally, a parallel corpus may serve as reference data for work on translations between the various formats, for testing conversion algorithms, and for developing performance tests for software.
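
As a hedged, invented example of the kind of parallel representation MOC aims to collect (the text and markup below are mine, not corpus material), the same overlap between verse lines and sentences can be expressed once as standoff ranges and once as a fragmented single-hierarchy encoding, and software can then be run against either form.

```python
TEXT = "The day is done. The night has come at last."

# Representation 1: standoff ranges (character offsets); overlap is unrestricted.
standoff = {
    "line_1": (0, 26), "line_2": (27, 44),
    "sentence_1": (0, 16), "sentence_2": (17, 44),   # the sentence crosses the line break
}

# Representation 2: one hierarchy kept intact, the crossing sentence split into
# virtual parts (a common workaround for overlap in a single XML hierarchy).
fragmented = ('<l n="1"><s>The day is done.</s> <s part="I">The night</s></l> '
              '<l n="2"><s part="F">has come at last.</s></l>')

def overlaps(a: str, b: str) -> bool:
    """Do two standoff ranges cross without one containing the other?"""
    (a1, a2), (b1, b2) = standoff[a], standoff[b]
    crossing = a1 < b2 and b1 < a2
    contained = (a1 <= b1 and b2 <= a2) or (b1 <= a1 and a2 <= b2)
    return crossing and not contained

print(overlaps("sentence_2", "line_1"))  # True: the structures genuinely overlap
```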