C.M. Sperberg-McQueen, Closing Keynote

Closing Keynote Address (March 16):

C.M. Sperberg-McQueen (video)


[Sperberg-McQueen] The talk, of course, has a structure…. We’ve been here for two days, two and a half days, going on three days. What have we learned about modeling? What’s the state of modeling in the digital humanities? And I thought I would start by talking about the issue of the uses of modeling, and then go on to remarks connected to the form our models take, and then talk about issues of completeness. And of course, as you will have been expecting after Wendell’s beginning, the talk has another structure. We’re going to start with remarks connected, more or less tenuously, to the Sapir-Whorf hypothesis and to Marx’s theory of base and superstructure; and then we will move to some issues related to the paradox of analysis, and we will end with a barrage–a farrago!–of quotations that I think, well, they start with lightness. [Intermittent laughter.]

[Audience member] Goes downhill from there, huh? [Intermittent laughter.]

[Sperberg-McQueen] Yeah, pretty much….


In the second beginning of this talk, I take up Paul Caton’s challenge to talk about the transcription of the carpet. Those of you who aren’t physically here need to know–maybe you can pan the camera down and look at the carpet. The carpet is made of large and small rectangles with letters of the Latin alphabet on them–various forms of the Latin alphabet: They’re not all, strictly speaking, letters used in writing Latin. Some of them are used in writing other languages written with the Latin script. There’s at least one e-acute over there…. And this is actually a wonderful challenge, Paul, because the carpet illustrates a lot of problems that we have. If you wanted to–you have to, of course, be patient, and accept the idea that the transcription of a thing is in some sense a model of the thing. So it’s really the transcription that faces these problems. But, of course, the way we write the transcription faces these problems as well. The carpet is defiantly, ineluctably two-dimensional, and our usual representation of text … is based on language, all of our writing systems, essentially, in the world being glottographic and not ideographic. And language is ineluctably one-dimensional, because I’ve only got one mouth. Conversation can be multidimensional, but our way of thinking about language essentially takes a lot from the fact that I can only pronounce one segment at a time, and, therefore, there is a sequence of segments that form my linguistic utterances. And this [carpet] is two-dimensional. If you wanted to transcribe them in a single order … if you just transcribed them in a single order, you’d lose information. Well, here’s our first challenge: which piece of information do we lose? Do we lose the fact that this U is connected to a U, to a Y, to an O, to an O? Or, do we lose the adjacency in this direction? You’re all familiar with this kind of problem.
You’re all familiar with it either from the formatting of tables, where you must decide–or your markup language decides for you–whether you transcribe the table in row-major form or column-major form. Do you transcribe one row at a time, so that the table is a series of rows and the rows are a series of cells? Or, do you transcribe one column at a time? It’s six of one, half a dozen of the other.
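The row-major/column-major trade-off can be made concrete in a few lines. A minimal sketch in Python (the grid, its letters, and the function names are invented for illustration, not a transcription of the actual carpet):

```python
# A toy 3x2 "carpet" of letters, stored with full two-dimensional
# coordinates so that neither adjacency is thrown away.
grid = [
    ["U", "Y"],
    ["O", "S"],
    ["E", "O"],
]

def row_major(g):
    """Serialize one row at a time: horizontal adjacency survives in the
    stream; vertical adjacency must be reconstructed from the row width."""
    return [cell for row in g for cell in row]

def column_major(g):
    """Serialize one column at a time: the opposite privileging."""
    return [g[r][c] for c in range(len(g[0])) for r in range(len(g))]

print(row_major(grid))     # ['U', 'Y', 'O', 'S', 'E', 'O']
print(column_major(grid))  # ['U', 'O', 'E', 'Y', 'S', 'O']
```

Either serialization loses nothing so long as the width is recorded alongside it, but each makes one set of adjacencies cheap to read off and the other awkward, which is exactly the privileging of one view over another described above.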


Programming language implementers face the same problem and solve it in exactly the same way. And language specs that don’t mind exposing things under the hood will tell you whether implementations are expected to be row-major or column-major, so that you can do dirty things with addresses. But if you do keep track of the two-dimensional coordinates, then you can at least record the information about adjacencies, even if they’re not necessarily terribly convenient to get to. If you look more closely, you will see that–well, of course, this wasn’t woven on a single loom. As far as I can tell it’s woven on a loom about 70 inches wide, and I’m pretty sure, at least I reached the conclusion, that in this part of the room it’s … which direction is this? east-west? … it’s east-west major … north-south, north-south–sorry, local information is occasionally incorrect!–and over there it’s east-west major. So you might want to respect that major orientation. It’s a difficult choice because, of course, which form/which way you transcribe it will have consequences. It will privilege one view over another, and it will make other views harder to get to, even if you record the information. You can of course transcribe it twice, in which case, you have more work if anyone updates it. It’s also interesting, because, in many cases, what you want to say is underdetermined. All right: This is a U, and its top is in that direction; and that’s an S, and its top is in one of those directions. I’m going to guess that those O’s are probably mostly either in one direction or another, but you could imagine O’s in which the number of possible orientations is either four or infinite. So there’s a certain underdetermination, and it would be nice to be able to capture that. There’s also … it’s wonderful–the commentary on the carpet (on the wall next to the door) illustrates a problem that has come up many times within the past couple of days.
It contains factual statements, some of which some of us will want to distance ourselves from. Yes, the Latin alphabet is ancient and honorable, but it is … I would not be willing to put my name to a statement that [says] it is the oldest writing system still in use, with something like its current form and meaning. That’s crazy talk. But it’s there so if you want to record that information, too, you have to record information that you don’t believe, and you have to be willing to tolerate contradiction. And, also, the artist is concerned, in the same way that many of us as humanists are concerned, with the problem … with the fear of foreclosing possibilities. So there is a reassuring statement about the Latin alphabet. While it is not neutral, the Latin alphabet is a field that forms one of the linguistic traditions from which the research programs of the institutions here both arise. It has infinite possibilities, because it has infinite possibilities for recombination. All of those things we’ve heard before, and many of them we’ll hear again in the next couple of minutes.


The third beginning to the talk begins with a quotation from one of the grand old men of our field, Wilhelm Ott of the University of Tübingen, who wrote a characterization of computing in the humanities or digital humanities many decades ago that still strikes me as being a useful definition to come back to. He’s speaking of electronic data processing and says “Ihr [d.i. der EDV] Einsatz ist überall dort möglich, wo Daten irgendwelcher Art — also auch Texte — nach eindeutig formulierbaren und vollständig formalisierbaren Regeln verarbeitet werden müssen.” “Data processing can be applied wherever data of any kind — including texts — must be processed according to unambiguously formulatable and completely formalizable rules.” Remember those two properties: “unambiguously formulatable,” “completely formalizable.” Now notice he doesn’t derive the necessity of formalization from the computer. He observes that it’s a property of a great deal of humanistic research. As Stephen Ramsay said, and as various people have echoed in various forms, data modeling of the kind we’ve been talking about here is not new. In one recognizable form or another, it’s been a property of humanistic studies since humanistic studies got their name in the Renaissance. It’s always been part of what we taught our students, at least up until the New Criticism and after the [Second World] War. And as Elli Mylonas pointed out the other day, you can read many of the results of humanist scholars through the centuries as being exactly the kind of things we do, with one proviso: quite often they have errors; in particular, they are not consistent. And, well, I think there are two ways to interpret that. One is to say, “Well, really maybe they were doing something slightly different.” And the other is to say, “Well, maybe they were human, and maybe there were mistakes.” So what is different about what we’re doing from what they were doing?
At some essential level, “Nothing,” is my firm belief. This is … it’s a topic that keeps coming up, sometimes in disguised form, because it’s tied to the question, “Is digital humanities a new discipline or not?” And I’ll go ahead and make my confession: “No, it’s not.” The argument that it is essentially says, “Ah, but digital humanities has to make things explicit, it has to clarify concepts, it has to see how things work.” And I want to say, “And philology, and art history, and history don’t? What kind of teachers did you have if you don’t think that it’s part of the foundations of any scholarly work to be clear about your concepts and to check their implications?” But there is one huge difference–I hardly need to tell any of you–we have computers today, and they didn’t have them then. Now for some of us–and Wendell [Piez] said this on the first day–that changes the way we approach our humanities disciplines. We’re not in this room, because we wanted to do digital humanities; we’re in this room, because we wanted to do humanistic work. And we do it digitally, because we live when we live! How can you be serious about certain kinds of work and have a tool like the computer available to you and not want to apply it? Well, if you do apply it, of course, you have easier opportunities to discover the dirt and the inconsistency in your data than some of our forebears had; and our responsibilities are correspondingly higher.


Now in the fourth beginning of this talk, I thank the organizers for a small mercy: You did not begin the symposium with a panel discussion on the definition of the term “modeling.” Thank you! [Audience laugh.] We would still be here–we are still here–but we wouldn’t have covered anything else. But I will at least … I should make clear, because Fotis [Jannidis], for example, has several times pointed out that, you know, “Modeling [vs] data modeling aren’t necessarily the same thing.” The term “data modeling” has broad and narrow connotations, and so forth. So let me say this briefly: What I understand by the term … this is partly descriptive but mostly prescriptive … You see: I was trained as a philologist and not as a capital-L Linguist, and, therefore, I am perfectly happy being prescriptive. I do not have the tattoo “Descriptivism only” on my arm the way people who went into the linguistics department got [one]. So I’m happy to be prescriptive. One of the problems with defining modeling is, of course, that the word has a huge range of meaning, and it’s a great example to use if you want to persuade people that Wittgenstein was right, and you should just give up. But there do seem to be some things in common over a huge variety of uses of the word, and even more in common in the range of meanings that I think are applicable to us. The best definition I have seen is one quoted by Willard McCarty in his essay on modeling in the Blackwell’s Companion to Digital Humanities of a few years ago. He quotes Marvin Minsky as saying, “To an Observer B, an Object A* is a model of an Object A if B can use A* to answer questions that interest him about B … sorry, about A!” which is beautiful: It has the wonderful advantage of covering every example I can think of.
It has the disadvantage, as a definition, of the fact that it’s not a definition: it doesn’t distinguish modeling from writing books; it doesn’t distinguish a model of the German language from Fotis, on the one hand, or Elke [Teich], on the other, or even those of us who aren’t native speakers but can answer a few questions about German. So as a definition it’s no good at all, but it does capture something interesting and crucial about the relation of the model to the thing modeled: the sort of displacement–the lack of identity. And of course it raises the question, “Well, why not just look at … why not answer your questions about A by looking at A?” And the usual, or a common, answer is that “A is hard to look at; A is complicated, so we use models to make our life simpler.” There is a … Or [another answer might be], “Because we’re using a model to check some things before we commit ourselves.” Maybe we’re building A, and we build a model of A to say, “Well, what’s this going to look like,” kind of, sort of. So we build a model of a building and we can say, “Oh no, we like that facade a little wider. Can we lower the roof? Can we increase the fenestration? Can we do this, or that, or the other?” And in this context modeling is a means to an end; it is purely utilitarian. There are other reasons you might use models, and we’ll come to those.


There’s an essay on modeling by [Arturo] Rosenblueth und [and] Wiener–Norbert Wiener at MIT–that describes this utilitarian view: “Sometimes the relation between a material model and the original system may be no more than a change of scale in space or time. At any proving ground [Sperberg-McQueen: You can see what kind of a consultant Wiener was!] … At any proving ground experiments on shells will be carried out not with large, expensive, and unwieldy calibers but with handy, cheaper, small calibers. Another example is the use of small animals instead of large animals for biological experiments.” They’re smaller, they’re cheaper, they’re quicker, they’re easier to work with. So answer as many questions as you can with those models. Now, for uses of models of that kind the incompleteness of the model makes no difference at all. The fact that a mouse is not really a good model for an elephant doesn’t matter. They both have hearts, [and] we can answer general questions about the circulatory system with the mouse as well as we can with the elephant.


And one of the important uses of models in digital humanities is in the service of building things. You gotta build things. If you don’t build systems, if you don’t produce tools that other people can use, what are we doing? One of the things that characterizes this field is that we produce tools–sometimes for ourselves, sometimes for others. And in building systems, we commit ourselves to things. Jan-Christoph Meister’s use of a model–his explicit modeling of the kind of annotation he wanted to support–struck me as a perfect example of this kind of purely utilitarian model. And there are a lot of concerns that have been aired here that are of concern to us because we’re interested in utilitarian models. The question of how to maintain models over time and how to track changes in models over time is really, I think, maybe not exclusively, [… important, because] we’re using those models; we’re not just admiring them. The tension that we see continuously between the needs of customization and standardization only exists because we’re conscious that, from a utilitarian point of view, standardization has virtues for the community as a whole that customization does not have. And from a utilitarian point of view, customization has advantages for the individual project that standardization frequently doesn’t have. So there’s a trade-off of the good of the many versus the good of the few, and it’s no wonder that the scale comes down so often on the side of customization, given that the people making the decision are the few, and it’s their rent money that we’re concerned with. So those of us who wish everybody would do it the same way, so that we could reuse their data, should just remember that the shoe is sometimes on the other foot: others wish that we would do it another way so that they could reuse our data. And we may then have a little more patience with those other people. The mapping between models seems often a utilitarian concern.


But there’s a second reason to care about modeling or to be interested in modeling. To cite another important figure in the history of our discipline–someone to whom we owe the TEI: Nancy Ide’s first book was a book about [Blaise] Pascal; it was a textbook, Pascal for the Humanities, and in her foreword she addresses the question: Do humanists need to learn to program at all? And she says, “No, humanists don’t need to write their own programs…. [Exact quotation starts here.] But experience with computer programming provides an understanding of fundamental computing concepts, familiarity with the principles of algorithmic thought, and a grasp of the ways in which information is stored, accessed, and manipulated. So, yes, humanists need to learn how to program–not in order to write their own software, but to learn how to look at the materials within their discipline in new ways, and intelligently utilize and perhaps develop tools that help them do it.” I don’t now remember who it was who mentioned information modeling as the modern equivalent of Greek and Latin. General intellectual training–it’s right there in [the words of] one of the important figures of our field. And it’s true. [If] someone doesn’t know anything about information modeling, I’m tempted to think there’s part of their brain that they haven’t learned yet to exercise. So, that hasn’t been terribly well described here, although I guess it came up to a certain extent in the session on pedagogy.


But there’s a third use that we have for models, and that is to see the models as ends in themselves. The model is not something we build as a way to build something else; it is what we’re working on. Allen [Renear]’s talk [and] Paul’s talk are sort of pure examples of this. And, again, here’s a quotation not from someone in digital humanities, for a change, but perhaps the best formulation I know of this view of models as intellectual constructs in their own right. It’s from a guy who works up north from here a little bit, Noam Chomsky, who wrote in the foreword to Syntactic Structures (fifty-five years ago): “Precisely constructed models for linguistic structure can play an important role, both negative and positive, in the process of discovery itself. By pushing a precise but inadequate formulation to an unacceptable conclusion, we can often expose the exact source of this inadequacy and consequently gain a deeper understanding of the linguistic data. More positively, a formalized theory may automatically provide solutions for many problems other than those for which it was explicitly designed.” So modeling can be more than just an epitome, a model of clear thinking; it can be the kind of thinking we’re aiming at–it can be an end in itself. This connects with the characterization of XML, and before [that], SGML, that I owe to Lou Burnard, who always liked to begin our workshops by saying, “Markup is a way of making explicit our interpretation of a text.” And that’s what I want to say modeling is. Modeling is a way for us to make explicit our assumptions and beliefs, our premises–the premises for our work. Now, when models are an end in themselves, are a serious intellectual work product in and of themselves, apart from any systems they might help us build, then Wendell’s second workflow becomes the natural way to do humanistic work. You want to work on the data, and you need to work on your understanding of the data.
You capture your understanding of the data by refining the model. And that leads to what Julia [Flanders] described as “schemas as reflections of our convictions rather than as recipes for things to do.” Now, if you think of the model as a way of clarifying things we’re interested in, then one obvious question is: “What exactly do we need to model?” I was trained as a philologist, so my bias and my assumptions are heavily textual, and I realized as I was coming up here, you know, this is not a picture of things humanists are interested in; this is a picture of Roman Jakobson’s model of text. And it’s not really everything that exists in the world, but I don’t have any other way to organize these remarks, so please bear with me.


What are we modeling? Well, you can do worse in any moment of uncertainty about what to do with texts than think back on Roman Jakobson’s essay “Linguistics and Poetics,” from a 1960 conference on semiotics, where he puts up a model of communication (mmm, there’s that word again!) … First model of communication: You know, well, you have your message; it comes from a sender; a sender sends a message to a recipient. And then he observes this is hopelessly [too] simple, and he ends up adding three more components to it. If you want to understand communication, it’s not enough to have a sender, a message, and a recipient; they must be connected. So there must be contact, and they have to share a code, so there must be a code commanded by both sender and recipient, and typically the message is about something, so there is typically a referent. And there are functions of language devoted to all of those things. There’s obviously declarative communication talking about the referent. There is expressive language talking about the state of the sender. There is conative language, instructing the recipient to do something. There is metalanguage talking about the code. And there is phatic language establishing contact or checking on the contact–things like, “Is this mike on?” And, of course, those of you who read this in your proseminar in the first year of graduate school will remember the triumphant final function that Jakobson identifies: the “poetic function” that focuses on the text itself. I always thought that was kind of hokey, but … the model of constituents I find, I continue to find, helpful, and sure enough, we have models for many of those things. We have–we’ve heard about many of them here today–we have…. (No idea where those cards are!) We have plenty of talk about the TEI and how you model the structure of the message itself.
We heard indirectly through Paul about DigiPal, which is a way of trying to encode some information about the carriers of the text, and thus not the context. We have Kari Kraus’s frightening description of just what gets entailed when you really do care about the code, and it’s something for which the culture as a whole is not responsible so you are responsible. So, yes, we have the program, and we have the runtime libraries that program requires, and we have the runtime libraries that those libraries require. And you go down, and eventually you’re going to need some chips or an emulator for those chips. And the boundaries are very very slippery. And we have descriptions: Daniel Pitti’s and Stefan Gradmann’s and Alexander Czmiel’s about demographic descriptions of people and places, which can be used to describe the sender and the recipient and the people mentioned to the extent that we’re talking about nonfiction text. I don’t think any one of those person databases [mentioned] talks about literary figures, do you? Anyone? Any records for literary fiction? Yes. Oh, okay! So, thank you!

[Daniel Pitti.] He’s got millions.

[Sperberg-McQueen.] He’s got millions of fictive characters.

[Pitti.] It’s an equal opportunity description.

[Sperberg-McQueen.] That’s good. Now this distinction between models of purely utilitarian character and models that are an end in themselves helps us, I think, recognize some of the discussions … or some of the topics that have occasionally come up as inescapable–maybe not worth worrying about too much. I know I’m not the only person who thought this, because I heard at least one other person say, “Oh my God! Are we still talking about overlap?!” on the first day. And whether that’s a distressing proposition or not depends entirely on whether you’re interested in modeling in a utilitarian sense–in which case, you really would like to see some progress, and you do not want to think that the community has spent … thirty years discussing the same damn topic without noticeable movement at the front. Although this is the way the First World War went, and you don’t always have a choice about making progress. But if you think about modeling as a way to force ourselves to a better understanding, then it’s not only natural for us to be still talking about overlap but essential. As Allen said the other day, there are some, you know … for some things you’re doing, if you’re taking the modeling seriously, you don’t go past that problem, because there’s nothing to go past it to. That is the problem that you’re here to solve. If you don’t get through it, what are you going to do? Go home, tail between your legs? Say “I failed”? There are some problems where it appears that the only way to get through the brick wall is to keep banging your head against the wall until either you find the soft place in the brick wall or your head gets harder. [Some audience members laugh.] This is connected also to the question of whether we’re modeling for ourselves or for others….
And I think that the humanists had utilitarian … I think that one reason parts of our situation seem new to us is that humanists didn’t really use to have to worry about utilitarian modeling, because we didn’t build things; we didn’t…. Computers have given us that possibility and that responsibility. It also connects with the difference between a priori modeling and a posteriori modeling that was mentioned earlier today. Most of the time, a utilitarian model will be an a priori model. If it’s not, you may have some workflow issues … and project planning issues that you should be dealing with. I have the impression, I don’t know for sure, but my layman’s understanding is that the difference between utilitarian modeling and modeling as an end in itself may have been … may be easier to negotiate for engineers and physicists, in part because it’s sort of discipline-based. Engineers do the utilitarian model; physicists are interested in natural law. Computers, however, changed the game there, because of a phenomenon that was first noticed in our community by people involved with markup under the name “tag abuse.”


Tag abuse, some of you will be striving to remember, is the use of SGML or XML elements in semantically inappropriate ways, typically motivated by an attempt to attain a particular kind of processing. Classic example: In the early days of the web, if you wanted to get bold italic, the most reliable way to do it in some browsers was to tag the words as an H5. Now, they weren’t headings, so there was a sense in which you were lying to the processor. But if the only processor you cared about was that browser, then you were getting bold italic. It was only when you moved to a different processor and got different results, or when you applied a different process–you applied, for example, a processor that reads an HTML file and produces a list of headings, a table of contents–that you saw these phrases in the middle of your paragraphs showing up as headings; they’re not really headings. Well, the program can’t do anything about that. You lied. That’s the cost you pay. It turns out that lying to the processor is not just ethically problematic; it’s bad engineering. So in computing, in the creation of objects in this ethereal realm of computers, a utilitarian model and a model whose primary goal is to be true don’t have nearly as much distinction between them as they used to. It now becomes primarily an issue of when we’re willing to stop and when we’re willing to cut corners. It’s not an issue of difference of kind in the way that it may be between physics and engineering. The result is that … at least with the advent of descriptive markup and, I think more generally, as we see in the discussion of modeling in other areas here, computers force us to think again and in new ways and with new urgency about the things … of our discipline. And as Doug Knox pointed out, if there’s something there that you don’t have a name for, it’s harder to think about it; it’s almost impossible to ask questions about it.
You may find you have to name things; you may have to reify them–not in the social science sense of making them … beneath attention, but in the AI sense of making them into things that you can address and talk about and think about. That’s one of the reasons for modeling in general. Why do we care about making our assumptions and fundamental beliefs explicit? So that we can look at them and say, “Well, actually I don’t like that one! Could we do without that one? Can I build a system that doesn’t rely on that assumption?” If you don’t ever surface your assumptions, you’re never going to be able to think about building that system, let alone build it. And you might want to build it, because our assumptions turn out to have teeth.
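The H5 example of tag abuse can be sketched in a few lines. The toy document, tags, and `table_of_contents` function below are hypothetical stand-ins for a real browser and HTML toolchain, not actual HTML processing:

```python
# A toy document as (tag, text) pairs. The h5 below is tag abuse: it was
# chosen to coax bold italic out of one browser, not because the phrase
# is a heading.
doc = [
    ("h1", "Chapter 1"),
    ("p",  "Some ordinary prose."),
    ("h5", "really important"),  # a phrase, lied about as a heading
    ("h2", "Section 1.1"),
]

def table_of_contents(nodes):
    """A second processor that trusts the markup: every hN element
    is treated as a heading."""
    return [text for tag, text in nodes if tag.startswith("h")]

print(table_of_contents(doc))
# ['Chapter 1', 'really important', 'Section 1.1']
# The lie told to the first processor surfaces as a bug in the second.
```

The second processor is not broken; it did exactly what the markup licensed it to do. That is the sense in which lying to the processor is bad engineering, not just bad ethics.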


Now, Marx taught us years ago, and Sapir-Whorf … argued in a different way, that our thoughts are not always completely free. They’re predetermined–they aren’t necessarily predestined, but they lean toward running in certain ways. They’re influenced by things, and they’re certainly influenced by our notations and by our assumptions. And when we reify those assumptions in the form of computer programs, we can really see what they were talking about. We can really see that a particular ideology has consequences. There is a sense in which doing digital humanities–doing humanities with computers–is playing humanities for keeps, because you can see the costs of certain assumptions, and you can, in terms of data structures, even calculate them in terms of the complexity of the calculations and the Big O [notation] algorithmic complexity of doing certain operations on certain kinds of representations. Range algebras are faster for some operations, and they are slower for others, and you have to decide which of those operations counts more to you. If you say, “Oh, they all count equally, I love them all equally” [A member of the audience laughs.], your programmer is going to make his own decisions … or her own decisions. So you really really want to … you don’t want to leave them alone. Now, the difference in use (in the telos) of modeling, utilitarian on the one hand, end in itself on the other, is, I think, a difference that we have to live with and we should be glad to live with. There’s no need for us to say that the purpose of modeling in digital humanities is one or the other. It should be both, I think. It should be both, because theory and practice need to interpenetrate each other and need to inform each other, and if you don’t worry about getting your utilitarian models correct, you’re … going to build systems that are not as useful.
And if you don’t worry about making your true models concrete enough to sustain implementation, then there’s a sense in which you’re not really able to put them to the test in the way that you can put things to the test when you’re going to implement them. But there’s another … area in which we vary where I’m going to be a little more prescriptive, and that’s the form … our models take. There’s a whole range of … I don’t want to prescribe a particular notation for models. But I do think that we do well to control our temptation to get by with informal models. Informal models are … it’s too easy for them to provide an excuse for hand-waving. The more formal we can make our models, the easier it is to put them to the test, to see what their consequences are, and to force them to conclusions which may show us, by the absurdity of the conclusion, that the model has a flaw and may help us, therefore, understand the thing being modeled better. So I’m going to argue that when we are … taking modeling seriously as a way to make our assumptions and beliefs explicit, it’s better to be formal rather than informal.
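The earlier point that representations have calculable costs can be illustrated with a deliberately simplified stand-in for a range algebra: the same annotation stored once as a short list of ranges and once as a flat per-character array. The names, offsets, and text length here are invented for illustration:

```python
import bisect

# One annotation, two representations: "characters 10-19 and 40-44 are
# emphasized" in a 50-character text.
N = 50
ranges = [(10, 20), (40, 45)]                 # compact; one tuple per range
flags = [any(s <= i < e for s, e in ranges)   # flat; O(N) space
         for i in range(N)]

def annotated_range(pos):
    """Membership test against the sorted range list: O(log k) for k ranges."""
    i = bisect.bisect_right([s for s, _ in ranges], pos) - 1
    return i >= 0 and pos < ranges[i][1]

def annotated_flat(pos):
    """Membership test against the flat array: O(1), but extending a
    range costs O(N) writes here instead of one tuple change."""
    return flags[pos]

# Both representations answer the same question, at different Big O costs.
assert all(annotated_range(p) == annotated_flat(p) for p in range(N))
```

Which representation wins depends entirely on which operation you perform more often; that is precisely the decision the programmer will make alone if nobody else makes it.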


Now, sometimes bringing in the whole heavy machinery, wheeling in the engines of logical inference and so forth, seems like overkill, because really we have pairs, we have named pairs, we have a name and a value. Do you really want to call that a “data model”? Various people have said “No, no, that’s not a data model. That’s too simple to be a data model.” And I say to you, “It embodies your assumptions, and if you don’t identify them explicitly as assumptions, it will betray you. You … have to be conscious of them. You have to surface them. So yes, by all means, call even as simple a thing as a set of attribute-value pairs a ‘data model’, because it does embody your assumptions.” There’s no need to reserve the term “data model” for things that are big and complicated and intended to be permanent, any more than there’s a need to restrict grammars to things that have five hundred productions and can produce complicated, English language-like sentences. Any grammatical system that cannot formally define the set of all strings consisting of any symbol in any order has a gap, and none of the serious grammatical formalisms has that gap. There’s a reason that, in conventional regex notation, the expression “.*” counts as a regular expression. It’s important to have that be part of the model of regular expressions. And, similarly, we should not say, “Oh, that’s too simple. That’s not really a grammar. That’s not really a regular expression.” No, even things as simple and unconstraining as that are regular expressions, and even things as simple and unconstraining as a bunch of attribute-value pairs are data models, because they’re embodiments of our assumptions. So what do we use? Well, my own instinct is to say the closer we are to first-order predicate calculus, the better. Or some symbolic logic. And I say that for a particular reason.
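The claim about “.*” can be checked directly. A tiny Python demonstration (the DOTALL flag is added so that the dot also matches newlines):

```python
import re

# The maximally permissive regular expression: it constrains nothing and
# matches every string, including the empty one. It is still a perfectly
# good regular expression, just as an unconstraining set of
# attribute-value pairs is still a data model.
anything = re.compile(r".*", re.DOTALL)

for s in ["", "carpet", "U Y O O", "line\nbreak"]:
    assert anything.fullmatch(s) is not None
print("'.*' accepts every string tried")
```

The degenerate case belongs in the formalism for the same reason the empty set belongs in set theory: excluding it would leave a gap in the model.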


Chomsky says, “The search for rigorous formulation has a much more serious motivation than mere concern for logical niceties or the desire to purify well-established methods of linguistic analysis.” Although frankly I think those are perfectly good reasons, he wants formal models so that he can show what consequences they have and test them by their consequences. And if you don’t have formal models, it’s extremely difficult to test their consequences, because it’s even…. Well, I’ll come back to this in a moment…. One reason to be formal in our models is that we want our models–or we should want our models–to be ways in which we can disagree with each other. Probably the biggest single indictment I saw of FRBR in Allen’s incandescent indictment of FRBR’s ontology is not really … I mean, yes, there were plenty of problems, but my biggest problem is that I’m not entirely certain that after all his effort, Allen succeeded in disagreeing with those people, because he said, “Oh look, they say it has inheritance. [But] it doesn’t have inheritance.” I immediately said, “Are you guys talking about the same thing?” And because they [the formulators of FRBR] didn’t formalize that part of it … I [i.e. Sperberg-McQueen] don’t know. If you point to an error in the XPath–point to a logical problem–and report it to the working group, there is a risk that someone will say, “Oh, you know who we mean.” And it’s very difficult to sustain the task of saying, “No, I don’t”; or, “Even if I do, there will be readers who don’t.” You can’t really … you don’t really want to restrict the standard to the group of people who already know what it means. The whole point of externalizing information like a standard is to make it possible for people who don’t know what it means a priori to learn and to interoperate. But formalization is hard, takes a long time, and working groups have other problems on their mind.


A second reason to be formal is precisely that if something is formal enough, either in its original form or after we have formalized it, we can use logical inference to show what consequences the assumptions have. And we can test those consequences for their plausibility. And oddly enough, we can be surprised by that. This is a little surprising in itself, because as philosophers–many philosophers–have noted, Allen told me today, this is called the paradox of analysis: You know, when you have something you’re trying to explain, and you say, “All right, I’m going to explain this by providing another description,” one of the most obvious constraints is, well, the meaning of the two things has to be the same–you have to preserve the meaning, right? But if you preserve the meaning, in what sense can you say that we learned anything? We already knew exactly what you just said, because we said that other thing. This is a paradox, because in fact we do that all the time. I think this is just another instance of the limitation Allen pointed out: there are things that we don’t know how to say very well in first-order predicate calculus. We have been working as a species on formalizing our logic for hundreds of years, and there are still things we don’t know how to do very well. There are things we don’t know how to do at all well. So even while I say, “The most important new development for modeling in the digital humanities that I can think of, second only to the invention of the computer itself, is the development in recent years of better tools for model-checking and satisfiability analysis and the consequent development of tools like Alloy at MIT or various theorem-provers, of which I’ll happily give you a list and bend your ear later [coughs], that allow us to check the consequences of our assumptions in ways that were never possible before.”
We can write models that do not have glaring logical holes in them today, because we can check them in ways that were not possible even ten years ago. And if we’re not using those tools, I want to know why not? If our models are at all complicated, it’s worthwhile using those tools. It’s worth using those tools in part because the devil is in the details. In the prospectus for the conference the organizers say, “Modeling is very important. It has fundamental implications.” And yet in pedagogy and in the literature it’s always treated as a technical topic. And unspoken here I hear the wish that we could treat it as a sort of non-technical topic. And I think that’s not possible. It’s necessary to treat it as more than a technical topic, but it is inescapably technical. Never trust someone talking to you about the intellectual implications of models and how assumptions have teeth if they’ve never built a model themselves–just the way you’d never hire a gardener who doesn’t have dirt under their fingernails. [Audience murmur a laugh.] So there are certain technicalities that we really can’t get by, can’t afford to skip … except I am going to skip. I was going to talk about the … well, there’s one–I have a favorite complaint about RDF but I’ll spare you that one. [An audience member coughs loudly.]
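Tools like Alloy and the theorem-provers work at far greater scale, but the core move they mechanize, enumerating the models of your assumptions and inspecting what those assumptions jointly entail, can be sketched in miniature. This toy brute-force satisfiability check is an illustration of the idea only; the propositions and their names are invented for the example, not drawn from any real tool:

```python
from itertools import product

# Toy model-checking: brute-force the truth assignments of three
# propositional assumptions and see what survives.
# p = "it is explicitly marked up", q = "it can be identified",
# r = "it can be ignored"  (names are purely illustrative).
assumptions = [
    lambda p, q, r: (not p) or q,   # if marked up, it can be identified
    lambda p, q, r: (not q) or r,   # if identifiable, it can be ignored
    lambda p, q, r: p,              # it is explicitly marked up
]

def satisfying_assignments(assumptions):
    """Return every truth assignment consistent with all assumptions."""
    return [
        (p, q, r)
        for p, q, r in product([True, False], repeat=3)
        if all(a(p, q, r) for a in assumptions)
    ]

models = satisfying_assignments(assumptions)

# Every surviving assignment has r = True: ignorability is a consequence
# of the assumptions, whether or not we noticed it when writing them down.
assert all(r for _, _, r in models)
```

Three boolean variables are trivial to exhaust by hand; the reason the new tools matter is that they make the same check feasible for models with thousands of interacting constraints.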


But there is an interesting parallel between the intellectual moves that several people have addressed, including Julia today, talking about the potential move from naming counties to always naming county-time pairs, and atomizing things into ever smaller units so that they can be recombined. Well, as you can see, I’ve been experimenting with atomization myself. [Audience murmur a laugh.] Recombination is not always easy, but you already knew that, too. The thing that strikes me at first glance about that move is, ultimately, if we do that consistently, we’re committing ourselves to precisely the theory of atomic facts that is the least plausible part of Section One of the Tractatus, right? No one reads Section One of the Tractatus without saying, “Wait, every fact is atomic? What on earth are you talking about? I can’t imagine a world in which facts are atomic.” I mean, I can imagine that it would be nice for purposes of formal reasoning; I can see … the motive, the emotional motive. But as an account of the world we’re trying to reason about? No. Unfortunately I don’t have any better solution, so, as with many things, my only advice is: Do it and be really really careful; do it and don’t get caught.


Final note on the formality: There is one thing missing from many of our models. And that is, I think we would do better if we emulated our friends the linguists as illustrated by … in Elke Teich’s talk in thinking harder about pulling predictions out of our models and making them testable. Max[imilian] Schich several times raised the issue of … as model assessment. That’s important for utilitarian models, but it’s also important for models taken as ends in themselves. And we’re not … as humanists we typically don’t have a lot of practice pushing conclusions out of premises. But that’s something … that’s a skill we need to learn to cultivate.


There’s a third cluster of issues that comes up in connection with modeling, and here too there’s a gap in the population here. I want to identify these issues around the axis of attitudes toward simplification and incompleteness. Several people–several of us–mentioned on the first day that one of the immediate reactions of scholars in the humanities fields to almost any data model you show them is, “That’s too simple. You’re losing nuance. You’re losing information. You’re losing things.” And this is a very serious, serious reaction. It’s a very serious sociological fact. It’s also an important intellectual fact. One of the few things that is common to almost every attempt to define models is the observation that a model is almost always simpler than the thing being modeled. If it’s not, what was the point? Now, to a certain extent that stems … that comes from the utilitarian approach, as illustrated by Wiener and Rosenblueth. Right? We have a model instead of … because it’s cheaper and faster. We would use the original system if we could. Of course, if you’re an astronomer, that’s a little harder, or if you’re a cosmologist, experimental verification is just hard. But simplification is not only motivated by desire to save cost either in time or in money. It can be motivated pedagogically. We build models of the DNA molecule to help students visualize what a double-helix is, because if you just wave your hands like this, they don’t understand; besides, your arms get twisted up. But when we treat models as ends in themselves, simplification is an important tool. It’s part of the goal. And abstraction comes with that. There’s a lot of resistance to abstraction among traditional humanists as well.
What Wiener says is that, “No substantial part of the universe is so simple that it can be grasped and controlled without abstraction.” One reason that I know we’re all geeks in this room is no one here was fighting desperately against the abstraction inherent in modeling. That we’re all still in touch with non-geek humanists is visible in the many reports of people who bang on their doors and complain. But even hugely reductive models, like the delta distance that Fotis was showing earlier, can be enlightening. And in some ways, the more reductive the model is, the more astonishing it is that it should … retain any information at all, and, therefore, the more useful it is. So although from the TEI and before onwards, many digital humanists have thought about modeling primarily in terms of preserving the nuances of the mess, we should also keep in mind that sometimes radical reduction is useful. And of course those of us building resources for other people to use need to bear in mind that quite often what they’re going to want to do is strip out much of the nuance. Steve Ramsay told me the other day, “Oh, I love the TEI header. I love the TEI header, because it has a start tag and an end tag, and, therefore, the first thing I can do when I get a TEI text is delete the damn thing. Because it’s marked, I can delete it.” I’m horrified, but … [Audience laugh.] … but that’s why … explicit markup marks things up so that they can be ignored. That’s why some old SGML hands and XML hands react with a shrug when people say, “Oh, but, you know, I don’t want all that interpretative stuff in there,” because we say, “Well, turn it off in your browser then, for crying out loud. It’s … a switch. Learn how to use CSS: display: none. It’s not hard; it’s really not hard. Just learn it. Do it.” You gotta build things. You gotta be concrete.
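Ramsay’s point, that explicit delimitation is exactly what makes wholesale deletion trivial, is easy to demonstrate. A minimal sketch with Python’s standard XML tooling, using an invented two-element TEI document (the sample content is made up; only the TEI namespace and element names are real):

```python
import xml.etree.ElementTree as ET

TEI = "http://www.tei-c.org/ns/1.0"

# A minimal invented TEI document: a header plus the text proper.
doc = ET.fromstring(
    f'<TEI xmlns="{TEI}">'
    "<teiHeader><fileDesc/></teiHeader>"
    "<text><body><p>The text itself.</p></body></text>"
    "</TEI>"
)

# Because the header is explicitly delimited by its start and end tags,
# removing it is a single operation; the rest of the document is untouched.
header = doc.find(f"{{{TEI}}}teiHeader")
doc.remove(header)

assert doc.find(f"{{{TEI}}}teiHeader") is None
assert doc.find(f"{{{TEI}}}text") is not None
```

The same property is what lets a stylesheet suppress any marked element with `display: none`: the markup names the thing, so any consumer can choose to ignore it.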
Incompleteness shows up in another way as well, of course, and that is that humanists often have incomplete data and modeling incomplete information is hard. Modeling information we have is hard enough. Modeling the absence of information is even harder, and that’s a place where first-order predicate calculus may be letting us down.


But logic remains the best way we have for saying things in a formal way for which we can see what the consequences are, and for which we can be responsible. Even though it doesn’t handle non-discursive discourse or non-denotative statements, and it has problems with all sorts of illocutionary acts other than statement. It’s now, what, 2012? So it’s been 300, almost 300, no … 200-odd years since Leibniz imagined a language [of logic] not terribly dissimilar to the languages we have available to us today, in which you could formulate things and then check their consequences in the same reliable and mechanical way as you could use the calculus to calculate speeds and distances. And he imagined a world in which, when you had a serious disagreement with someone, you could formulate that disagreement in this language, and you could invite your interlocutor to the calculating machine and say, “Calculemus: let’s work it out [more literally, ‘let us compute’]!” Now, a little while later, Leibniz’ successor as librarian at the Herzog August Bibliothek [HAB]–which I’m glad to see represented here today–Leibniz’ successor, Lessing, described in his great play, Nathan the Wise, how some things that matter a lot to us, maybe we want not to put in the realm of things that we plan to decide mechanically and correct each other on. Some things may be beyond the scope of calculable inferences. For those, we must … cultivate tolerance. And then, a long time later, Kurt Gödel pointed out that even within the restricted realm of formal languages it is possible to formulate sentences for which no mechanical proof within the framework of that system is possible. Now it’s interesting. Many people cite Gödel’s [Incompleteness] Theorem as a dark and ominous symbol, [a] reminder of the limitations and the inadequacy of the human endeavor. They always make me want to start singing Brecht: Der Mensch lebt durch den Kopf, der Kopf reicht ihm nicht aus [Man lives by his head; his head is not enough for him].
But that’s not the way Gödel saw it. Gödel saw it as rescuing mathematics from the deadening influence of a purely mechanical proof theory. There are always things that are going to require human intervention and human ingenuity. It’s the machines that are doomed to fail, not humans. Of course, some people think humans are doomed to fail, too, but maybe that’s not too bad. As Beckett is quoted as saying, “Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.” 

[Audience applaud.]

[Sperberg-McQueen and a member of audience exchange a few words.]

[Julia Flanders] That was beautiful…. Anyone dare ask questions? [Laughter.] Okay, in lieu of questions, let me just say, and I think Fotis probably joins me in this sentiment, what an incredible honor and privilege it’s been to gather this group together here and to listen to everything that’s been said and to have the opportunity now to digest it all and turn it into something that can continue to live and generate discussion and hopefully generate more work that we can do and more events that we can have. So thank you all very very much! Safe travels.
