Open discussion–Key themes: (video)
[Julia Flanders] So, I think it’s probably prudent for us to get going on the final leg of the marathon. I know a number of you have flights which are leaving, or which would require you to leave, slightly before the scheduled ending time for the event today. However, Fotis [Jannidis] and I talked about this and we’re thinking that we may be able to slightly compress and invigorate the event today, so that instead of having the, sort of, “We have to make it to 5:30” feeling we would have the kind of “sprinting” feeling. So what we’re going to do this morning is, depending on how the discussion goes, and so forth, we’re thinking perhaps we only really need half an hour for the opening conversation, in which we would like to talk a little bit about sort of the agenda for further work. In other words: Where does this get us? What have we achieved so far and what remains to be done? And obviously the answer is “A whole lot.” But, in concrete terms, if we were going to write, for example, a grant proposal for another larger event or something like that, what would be the key topics that would belong in the agenda for that event? So, if we could sort of spend a little bit of time talking about that, that would be great. And then, that might enable us to move lunch up a little bit. And then there might also turn out to be other ways, depending on how things are going, that we could compress a little bit in the afternoon. So anyway, in that spirit, is there anything that you would like to add to my random thoughts about housekeeping?
Okay, in that case, I think what we’ll do is, Fotis will lead the discussion on this agenda for further work thing and I’m going to take dynamic notes in this Google Doc. If you also want to take notes in the Google Doc, there is the URL. It’s just a kind of sketch pad place where we can put stuff. So feel free to put anything you like in there. In the meantime, I will sit down so I can type.
[Indistinct speaking while Fotis Jannidis gets set up at microphone.]
[Fotis Jannidis] So the basic idea is that we wanted to talk with you about this set of ideas. What we have tried to come up with are topics which we think form, obviously, a cluster of ideas or problems which are specific to the topic we have been discussing the last two days. So, if you were to write a book about this, or try to sub-structure a longer conference on this, you would use, or could use, these as aspects which have to be treated. So, don’t think of it as a hierarchy or whatever, it’s just a loose set of ideas. And we would like to have your input. The first question, actually, is: are there more? Do you see things on this list…oh, sorry, do you see things that are not on it? Are there things absent which should be there? And, knowing it is a bit difficult to see what’s not there, as soon as some idea pops into your head during the coming discussion, please let us know.
[Thomas Stäcker] Identification issues.
[Fotis Jannidis] Identification issues. Okay. Can you describe exactly what it is you are talking about?
[Thomas Stäcker] Just think of, like, triples, which you research, and you have to decide what kind of identification you use. I think there is some need of, well, maybe, global identification tools. And I think we should discuss this in the context of RDF.
[Fotis Jannidis] So identification of specific objects, and persons as well. Yeah. So this comes together with the talk of Stefan [Gradmann] and others, and, so, how to identify…and I mean there are some practical issues here…a lot of practical issues…I wonder if Daniel Pitti will explain some of these. Are there also conceptual problems to be solved? Yeah.
[Stefan Gradmann] Closely related to identification was granularity issues—on what level of granularity are we actually trying to model. […]
[Fotis Jannidis] Okay, thank you.
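[Editor’s note: the identification discussion above—global identifiers for persons and objects, in the context of RDF—can be made concrete with a small sketch. This is only an illustration: the URIs and predicate names below are hypothetical placeholders, not real authority records or vocabularies, and triples are shown as plain tuples rather than through an RDF library.]

```python
# RDF-style statements as (subject, predicate, object) triples.
# All URIs here are hypothetical placeholders for globally unique identifiers.
PERSON = "http://example.org/person/goethe"
WORK = "http://example.org/work/faust"

triples = [
    (PERSON, "http://example.org/vocab/name", "Johann Wolfgang von Goethe"),
    (PERSON, "http://example.org/vocab/wrote", WORK),
    (WORK, "http://example.org/vocab/title", "Faust"),
]

def objects_of(subject, predicate, graph):
    """Return every object asserted for a given subject and predicate."""
    return [o for s, p, o in graph if s == subject and p == predicate]

# Because the identifiers are global, triples from different projects
# about the same person can simply be concatenated and queried together.
print(objects_of(PERSON, "http://example.org/vocab/wrote", triples))
```

The point of the global identifier is visible in the final comment: merging two datasets is trivial exactly when both use the same URI for the same person, which is why the choice of identification scheme is a modeling decision rather than a technical detail.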
[Maximilian Schich] I think there’s an extension in the same direction. What’s the relation between a priori models and ex post models. So how does the model emerge from the data by itself, and how do we treat and find it, because obviously we cannot start with a blank slate. And that’s a very important kind of thing because it takes you to this kind of thing where if there is something like a gradient between, say, a schema or model and the instances—there is something between and that is very, very undefined.
[Fotis Jannidis] So this would be the whole topic, actually, of how do I create a model and refine it? And it would touch with the whole thing of assessment and validation too. Because responsible validation could mean that I have to refine my model […] Syd.
[Syd Bauman] So, I think you already have this in your minds under defining data modeling, but I think it might be something a little more explicit. We’ve kind of danced around the issue here, but when we talk about data modeling, are we talking about using data to model things in the real world, or are we talking about the models we use to model…to use data to model things in the real world?
[Fotis Jannidis] Can you repeat the second part?
[Syd Bauman] Are we talking about using data to model things in the real world? Like, creating surrogates of real things using data of some sort. Or are we talking about the models we use to generate data surrogates of real-world things and call them models?
[Fotis Jannidis] Okay.
[Wendell Piez] I think, picking up on that but also on Max’s, one other question that I’ve noticed is the adequacy of the models. Because, I believe that one of the things that’s important, that’s going to be discovered is that the tools we’re using are actually not adequate to the tasks which we’ve set ourselves. So that presents us with certain problems: do we devise workarounds? Do we try to formalize solutions? Do we compromise our goals? Do we actually look again at the underlying models to see whether they should be extended or set aside in favor of something else? There’s a set of open questions there, which I think most humanities projects face in some way.
[Fotis Jannidis] So we’re talking about a process model of data modeling where you have at some point to validate and assess how good your model is and then you take further steps.
[Wendell Piez] Right, but you see this happens at more than one level. Because Syd [Bauman] is identifying and reminding us that (7:42) we have models, and then we have tools and methods and technologies that we build models out of, which are themselves models at one level. The relation there, for example, of an XML tree versus the TEI system of labels on nodes, or elements in the tree. And that inadequacy can be in either place.
[Fotis Jannidis] I think there is a useful distinction to make: we have data models, meta models, and meta meta models. That could be the instance of a text encoded in TEI, TEI as a schema, and XML as a schema language. And a way to instantiate the whole thing…
[Wendell Piez] Well, right but you can also look at the stack the other way. In the other direction. Because, the thing is that—you talk about the XML as being a meta meta model, which is describing the thing that describes the thing that you’re describing. But the only reason that you need to describe something is because it’s actually lower down. Because everything’s built on top of that.
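[Editor’s note: the three-level stack discussed above—a TEI-encoded instance, TEI as a schema, XML as a schema language—can be sketched in miniature. This is a toy, not real TEI validation: actual TEI schemas are expressed in RELAX NG or similar, and the dictionary below is a hypothetical stand-in for the schema level.]

```python
import xml.etree.ElementTree as ET

# Level 1, the data model: a TEI-like instance. The element names follow
# TEI conventions, but this fragment is illustrative, not valid TEI.
instance = "<TEI><text><body><p>Hello, world.</p></body></text></TEI>"

# Level 2, the meta model: a toy stand-in for a schema, listing which
# child elements each element allows. Real TEI uses RELAX NG for this.
schema = {"TEI": {"text"}, "text": {"body"}, "body": {"p"}, "p": set()}

def conforms(element, schema):
    """Check recursively that every child element is allowed by the schema."""
    allowed = schema.get(element.tag, set())
    return all(child.tag in allowed and conforms(child, schema)
               for child in element)

# Level 3, the meta meta model: XML itself, embodied here by the parser,
# which enforces well-formedness regardless of any particular schema.
root = ET.fromstring(instance)
print(conforms(root, schema))
```

The point about reading the stack in both directions shows up here too: the parser (level 3) runs before and underneath the schema check (level 2), so an inadequacy can sit in either layer.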
[Fotis Jannidis] Just a way to distinguish between…I’m sorry, I lost track—Elena.
[Elena Pierazzo] I wanted to discuss the tension between customization and standardization. Because the models—we talk about the importance of shared models, but we also have particular […] our tag set is completely different from anybody else’s. And this is a big tension. And these are the basic problems of building tools, and of real interchange…so I do think this is crucial. And TEI has a solution for that but is that working? I’m not sure that it is. You know this problem of tool building, tools for digital humanities, which has come to seem almost impossible because of these tensions. Because the two extreme poles…
[Laurent Romary] I want to jump on this because it relates to a point I wanted to make concerning the process of defining models. What is the toolbox we’ve got at hand, which relates to customization—and this toolbox has to be agnostic with regard to the TEI and what-have-you background you have. Because, how do you model a person? The characteristics of a person? We’ve seen things in EAD. We’ve got things in TEI. We’ve seen things related to the Europeana model. Same for bibliographic data. How do we model that? We discuss the issue of the corpus. How do we model this corpus? And we need to be able to—from the point of view of training, but also from the point of view of community—to share those basic components and say look: If you’re at this framework and you want to model a person, start with this or this and you’ll be compatible with what model people are doing. The Lego brick, sorry Wendell [Piez] maybe you’ll have nightmares seeing Lego bricks falling on me…
[Fotis Jannidis] I think there are two main metaphors for this conference: Lego, and food…
[Kari Kraus] I would just pick up, too, on really emphasizing what a couple of other people have brought up, which is the creation or the design of models, and their accuracy or their usefulness. Something I mentioned on day one was that in certain branches of information science, you approach the design of almost anything, whether it is a new tool, a new interface, a new ontology, whatever, by first creating an experimental design, where you’re actually studying the community you’re trying to serve in a very systematic fashion, where you’re applying, in terms of data collection and analysis, […] modes of analysis, whether you’re applying grounded theory or whatever. So, thinking about different approaches to design. And then also, we’ve really concentrated on data modeling, but, of course, the initial title of the conference was Knowledge Organization and Data Modeling. So it’s interesting that knowledge organization has largely been absent. (11:55) But I’m also thinking, from the design or creation perspective on classification systems, you often see from a diachronic perspective—I think this touches on some of the presentations from yesterday—that systems themselves develop in a very ad hoc, almost sedimentary fashion over time. So the two main kinds of classification systems you have are faceted and hierarchical. And something like the Dewey Decimal System has gone through twenty-two different editions. It started out as a hierarchical system, but it’s become increasingly the inverse, a faceted system, over time. The inventor claims to have been sitting, listening to a church sermon when he had his eureka moment for what the Dewey Decimal System should be like. So, thinking about that diachronic dimension: we don’t simply scrap things and start over when we start to see their flaws, but rather we use this accretion process where we build on what we’ve already done, and we should ask what that means for the model.
[Fotis Jannidis] It kind of goes with what Julia [Flanders] was talking about […]
[Kari Kraus] Yes, exactly. Absolutely.
[Fotis Jannidis] Yeah.
[Stefan Gradmann] I’d like to take up a few words from what Syd was saying; he was mentioning real-world versus models, you also see that in nature versus artifact. My question would be regarding defining data modeling, whether there is a real world accessible to us without modeling.
[Syd Bauman] What???!!
[Stefan Gradmann] Yeah. Maybe there is just modeling. And there is messy, confused modeling and there is high-level modeling, which is what we are talking about. But I think all we do is modeling, […]
[Fotis Jannidis] So I think this catches up on a point we come back to repeatedly: whether data modeling is just another word for theory, or for conceptualizing, or for having classes and concepts and ordering the world, or whether it is actually a specific sub-genre or sub-activity of this general activity. We had, I think, different stances, and so I’m really eager to hear your input on this.
[Maximilian Schich] I think, that said, one should always keep in mind that […] may be different. So, think about how we represent […]: in the brain they are not syntactic, necessarily, right? Can something which is highly [dynamic?], four-dimensional or whatever […] be modeled the way we think of models? And there may be a model, but it is something which is not congruent. It’s not—you know, my theory is that it would be great if there were a dictionary for geometries, so you could translate one to the other, but it’s not necessary that that exists; we don’t really know if that exists… We should also always keep in mind that our models are always increments. And that’s the interesting part, because we can reach something new, a new model, if we find that the old one actually has a new addition which we now have to…
[Fotis Jannidis] Agreed. The fact that there are things out there which are not compatible with our models isn’t an argument for […], because there are many things out there which have nothing to do with my models. For example, if I’m modeling persons and there’s a stone out there, well, good for the stone. So, in movements and modeling things, it doesn’t really catch on each other…
[Maximilian Schich] That’s what I meant, right. So basically if you say that everything is a model, right… But definitely most of the world is probably out of scope of any model…
[Julia Flanders] I’m going to intercede here and just say that this is really an agenda-setting exercise. So I think that identifying topics for thought is very important. But trying to work out the details of how that thought would go may be something that we could save for a later discussion.
[Fotis Jannidis] Yeah.
[Douglas Knox] Sometimes when we refer to the ‘real world’ in an academic, humanities context, we mean the non-academic, non-humanities part of it. I’m wondering why that is. The politics, the strategy, the social context of modeling: there are models of persons that don’t come from the humanities that are of interest or use; book digitization, at the scale of tens of millions, is not driven by the humanities; Wikipedia is a big player; linked data, let’s say—so where is all that?
[Fotis Jannidis] Yeah, the politics of the humanities—that’s a good point. I was hoping we could leave politics out but you’re right, we have to include it.
[Unidentified speaker: Stephen Ramsay?] There are people who verb and people who noun.
[Paul Caton] To have a subject and a verb and an object, if you want to think in those terms, then maybe let’s define some verbs or setups—what are the verbs that we’re interested in? What are the nouns that go on each side and what are—what is one state of affairs at one side of a verb and what is the state of affairs on the other side of a verb. So that you have: wolf eats sheep. We know what a wolf is, we know whether the act of eating has successfully taken place or not, because there’s a defined state of affairs at the end where the sheep is gone. It would be nice to know what an object at one side was and what the verb was and whether that verb had successfully happened or not by saying what it is we expect at the other side. And I’m clearly not trying to define these things but to have some sense of what they are. I mean, to just say that there’s nouns and there’s verbs. But what are they? That are important to us…
I’m trying to understand you. Is it an argument that we have to have clear definitions of when a model is a good model? Or is this too general?
It’s one step further, more concrete than that. It’s—you’re modeling what you model and you’re modeling something that involves a verb, but if it involves a verb there must also be nouns associated with that verb so what are they? It could be transcription. That could be your verb: transcribe. Well okay, what does that—what’s the state of affairs before “transcribe,” what’s the state of affairs after?
Can you tell me—I’m trying to take notes here, is this kind of a phrase at all getting at anything like what you mean: where it’s not the substitute of the equally if-y phrase.
It’s as good as I can make it.
I just wanted to say something in response to the idea of theory. That could be a very interesting way of approaching it for people who are non-digital humanists: a way in that’s like the work we do in the rest of the humanities.
Okay, yeah. That would be having people understand—
Put it under politics and strategy [on notes]
And having people outside of the digital humanities…
If you frame it within a theory-based paradigm, then in fact it’s like what we understand what we do already in the humanities, not something different.
One of the tricks Elena [Pierazzo] was talking about.
Yes. But I wanted to say that the pedagogy—competencies in pedagogy may be one of the things that could be a framework for teaching data modeling, because you don’t teach it the same way in all the different situations. So if I have a Digital Humanities Master’s, it would be maybe appropriate to have a whole course or a module. If I just have one course I’m teaching in an English department, it might be different. So, what would it look like in different situations possibly? What kind of framework for that?
And maybe the interesting aspect too would be if you just have fifteen minutes for talking about data modeling, what are you talking about for these fifteen minutes compared to the whole course?
Absolutely. I think that’s a fantastic idea, and I would say that, in terms of pedagogy, one thing I think is that in literature we teach knowledge organization abstractly all the time—Aristotle, Francis Bacon, I mean, we’re always talking about these concepts. And so in that sense it’s very easy, I think, for me and for people like me to make that bridge. What I think is less intuitive are practical applications: okay, now we see that this is an ancient problem, and we see that people are working on this problem in a new context, so how can our students have access to that? One idea that I had was: what if there were project-specific pedagogical tools or exercises or idea-starters, what have you. So for example the Women Writers Project could have something like: “If I’m teaching Aphra Behn and I would love to show some data from this project, what should my students know about the structure of this data, and how can I introduce them to that without getting into all of the specifics of markup that I might not have time for in my class?” I think that would be something extremely useful and concrete for people.
I think we just have to make markup a prerequisite.
Of course. [Laughter]
I was thinking that may be related both to politics and strategy and also to pedagogy; I can’t remember the category […] The word “performative” has come up several times; maybe it’s performative, maybe it’s rhetoric, maybe it’s narrative, but it applies both to thinking about doing modeling, so that’s where politics and strategy may come up, and to actually doing modeling, which may be where competencies and pedagogy apply. But I think the performative aspect of it is a way of thinking about how you do modeling and why we do modeling, so that may be another way of phrasing things that have already been said.
I think when we’re developing a data model and developing a data structure, the data structure must correspond to functions in a one-to-one mapping; if we declare there is a certain element and we want to do something with that element, if we want to inter-operate with it, we want to make a very clear one-to-one mapping. If we want to declare something to be a data structure, then we need to have an intention of actually implementing it in some way. And what interoperability means is that this mapping has been declared and agreed, and there is some kind of testing procedure that can verify that all the documents conforming to this data model can be processed in a particular piece of software which conforms to that. It’s very strict; something like SVG, for example, is a format that conforms to that, and the various programs which read SVG files work. I think we need to work towards something like that, not necessarily every single subjective tag or anything, but we need to have some kind of interoperable format that we can load and reuse.
In digital humanities in general, or do you think for specific areas or for specific mediums?
You mean more general?
Yeah, I mean you could say “I have the subject form for texts between 1800 and 1900” or you could say “I have this for texts in general—”
I don’t think it should be at that level. I think that’s already subjective, but as soon as you have different kinds of documents, even in one period, you have vast arrays of different kinds of tags you want to use. So if you want to make it interoperable you have to take that subjective judgment out of it and interoperate with the technology, in the same way that XML, in fact, is a platform that’s interoperable, isn’t it? There’s a total standard; documents conform to that standard; lots of programs can read XML. It is interoperable in itself, and the problem is that if one defines tags within XML then there’s a problem about getting interoperability, unless you perform these very strict kinds of controls, which I don’t think we’re prepared to do anyway. As I was saying before, if you just remove the tags from the text you can interoperate with the text, because it’s Unicode text, and the tags can be interoperated with at the level of the technology; you can process tags in various ways similar to the way you do it now. That would increase the amount of interoperation that is happening in the various components of the system, rather than merging it all together and confusing the issue and making it harder to build interoperability in. Is that clearer?
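[Editor’s note: the suggestion above—removing tags so that the plain Unicode text interoperates while the markup is handled separately at the level of the technology—can be sketched as a standoff conversion. This is a minimal illustration with a hypothetical tag syntax (simple, attribute-free tags), not a proposal for a real format.]

```python
import re

def to_standoff(source):
    """Split inline markup into plain text plus standoff annotations
    that record each tag's span as character offsets into that text.
    Assumes simple <tag>...</tag> pairs without attributes."""
    text, annotations = "", []
    pos = 0
    for m in re.finditer(r"<(/?)(\w+)>", source):
        text += source[pos:m.start()]
        if m.group(1):  # closing tag: complete the most recent open span
            for ann in reversed(annotations):
                if ann["tag"] == m.group(2) and ann["end"] is None:
                    ann["end"] = len(text)
                    break
        else:           # opening tag: start a new span at the current offset
            annotations.append({"tag": m.group(2), "start": len(text), "end": None})
        pos = m.end()
    text += source[pos:]
    return text, annotations

text, anns = to_standoff("The <hi>quick</hi> fox.")
print(text)  # the plain, interchangeable Unicode text
print(anns)  # the markup, now processable separately from the text
```

The plain text comes out identical no matter whose tag set produced the annotations, which is the interoperability being argued for here; the subjective encoding judgments live in the standoff layer.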
I think this comes to the whole topic of standardization. Elena [Pierazzo] was talking about this tension between expressing something in a way that makes it interoperable and, on the other hand, expressing the individual features of the artifact, and very often it is practically impossible to do one without giving up the other. And most humanists tend to go in the direction of expressing the individuality of the artifact, while we obviously have, for many reasons, to go in the other direction to make things more interoperable. And I think that—
We can just change the format and you can alleviate the interoperability problem. That’s all I’m saying.
Have I at all captured—it’s a little hard to hear…Is this roughly about what you’re getting at? Or have I missed it..?
Here on the screen.
[Participants talking over each other while looking at the screen.]
Under politics and strategy: the kind of so-what question. So what? Why should we care? We’ve been managing to do a kind of muddling along where people not in digital humanities aren’t even aware of it. Why should they care? Why should people care about this? No, I mean really.
Yeah, yeah. True, true. But at the moment we are in a good position. At least in Germany we have the alternative—the other side of the discussion where humanities people come to us and say “people care so much about you, what can I do to be one of you?” [Laughter] Yeah, just give it a few years and it will change.
But they might care about digital humanities, but not data modeling.
I think that very often applies—
I think there’s another in-between thing which is often left out. If we look at the practice of projects, there is the data modeling proper, which is more often on the agenda than, say, the workflow: so there is an extended data modeling, like workflow planning, or how to implement the model in the user interface, how to set up queries in order to access the model in the right way, and all that stuff. From the projects I’ve worked with over fifteen years, that’s basically the thing where, yeah, we do all that, but nobody writes it down, nobody evaluates it, nobody is really interested in it. But it’s something, you know—a lot of you guys have worked with Scrum, for example; it’s something regular people in the humanities have never heard about, even though it’s so essential for your day-to-day life. So it’s something which would be interesting: these kinds of workflows should also at least be taught in a way that people know that they exist, and how they work, and what kind of productivity they bring. Because it’s very different from going to the library, reading a book, and writing an article.
So I think there’s a tension actually between two different approaches here, one being data modeling is a very concrete process and ability and knowledge. And on the other hand to say: actually, you can learn this by yourself. You have to have basic concepts and the grounding concepts and obviously in practice we are going to move from one to the other. In my experience, it helps to have a more abstract approach but I always stay at a university so, again, I’m the last person to ask—
Yeah, I don’t want to contradict that, but I do, I mean, the thing is, at the same time, I think that one of the points that we’ve made, and actually this does address Susan’s [Schreibman] question also, is that people are already doing data modeling. They don’t necessarily call it data modeling; they don’t necessarily have formal methods, or maybe they do and they don’t even know that they’re formal methods, right? And this also goes to Kari’s [Kraus] point about knowledge organization. You know, “data modeling” is a label that we’re using to describe an activity which is part of what we do, but is also part of what other people do, and part of what the humanities have done for a very long time, without computers. And I think even as we accept that, there are these blurry boundaries, where it’s not a situation where something is data modeling because you call it data modeling. On the contrary, this is actually an activity which we’re labelling, and people are also doing it without the label. And understanding the actual activity itself is how to make the links that address the questions about the relevance and nature of the activity and how it […]
Yeah, thank you.
I’m gonna kind of support that by saying Susan’s question points to the problem of translatability. In terms of data modeling: the digital humanities is hard enough for people to get hold of and understand what that might mean. Data modeling: I get the face, the glassy-eyed stare, the fear. And I think if we could think about ways to make all of these processes and cultural practices about knowledge making more tangible for people, it would be so useful. A few years ago, Wordle, do you guys remember that thing? That was so enormously helpful as a kind of snapshot. Oh, here I can dump in a text and get an immediate visual representation of one way that text is organized. And people could really grasp onto that in a way that was so helpful and useful. And I think it would be great if we could think of more tangible examples of that kind of thing. So, I think there’s a really big gap.
I’m just sort of observing this tension the last couple months, between—the relationship of data modeling to our sort of projects and workflows and getting things done and getting those practical applications underway. And this other tension of wanting to use data modeling to say something true that advances the intellectual discourse in a particular way, which I think is coming out of some of the things that you’ve said and Paul [Caton] has said. And figuring out a way to reconcile those two things seems to be an important bit of work to do.
Can we say then…I’m not sure how to unpack that.
I’ll just work in the Google Doc.
One thought driven by […] In the very beginning we started to talk about whether there’s anything specific to data modeling in the humanities. What I hear after two days of discussion could be—which is probably greater here than I have to admit—that it’s the content, but more specifically: obviously we are working on basic concepts of the humanities all the time, like book, like text, like transcription. And we are analyzing them to make them more formally tractable. And this is collaborative work, going on for some time now. This could be one of the central essences, actually, of data modeling in the humanities: something like the whole reflection on the FRBR model, which we have to translate to the digital world. Things like ethics and thoughts about transcription and data and so on. Do you think this would work? Does this sound in any way sensible? Or is it just a snapshot from the last days? And how do you think of the work of data modeling? It really can relate to and change your attitude toward the central concepts of your work. Most of you are coming from a humanist background, and so becoming digital has been something in your life; so, did it change your attitude to these concepts?
I think the answer’s definitely yes. But that’s not only in the humanities. So what is it about “going digital,” as it were, that makes this change—you have to become expressive about things. So making things more explicit will change things. Also another process is, there are other areas of data experience, so, yes.
So it comes down to this: because we never had to describe things so explicitly, in contrast to the natural sciences, and now we do, it changes us more than other fields of study.
I think that’s true, but my first answer was Gertrude Stein’s answer when she was asked whether Americans had changed: “How could they change? They are only more alike than they were already.”
And I think the point actually speaks to this also because making more explicit is also coming into consciousness, right? It’s meaning that we’re noticing and admitting something, and owning something that was already there implicitly. So, in that respect, one of the interesting things about this, and we’ve talked about this also in the last couple of days, is that part of what’s interesting about modeling is the way in which it forces us to come to terms with the residuum, right, with the part that isn’t in the model, which is always interesting to the humanities, right? I mean this is why Frankenstein is that ur-text because somehow by creating something we also exile something and therefore then we have to come to terms with what the consequence of that is. At our end, I think that’s actually something that’s really very important and very momentous about what’s happening now with digital technologies in the humanities because the humanities, in some ways we’ve sort of been coasting. For example on the stage we talk about the “two cultures” problem—how the humanities are here and scientists are over there and now you can’t really get away with that any longer and that’s actually a really great opportunity.
I just want to really quickly and directly respond to what Fotis was saying about “How has data modeling changed your work?” I’m glad you added the bit about those of you coming from the humanities: “Data modeling is my work, what do you mean?!” With that geek mind on a little bit, one thing that might be on our agenda (and may be there already, I haven’t read the whole thing) is how do our models, our particular metamodels, affect our ability to do the processing? How much do we have to worry about having modeling systems that our processor can handle overnight or less, that type of problem? We only have limited resources; do the modeling systems we choose affect that? Or are we in such a utopian universe that we just don’t care?
I think the changes go deeper, at least for me. For example, my whole view of language changed since I started using corpora, and that has to do with going digital too. And I think something else has changed too: the concept of text has changed, at least text as we produce it now. So perhaps there’s a difference between the texts which were produced before and the texts we produce now, because there’s much more of a process where you can actually move one bit from here to another in a much more flexible way than was possible when you did handwriting. But it would be nice to reflect on it, I think.
I think this kind of “two cultures” thing is a very interesting thing because it’s also data modeling, if you look at the disciplines and how they’re [affected?], right? And everybody says you can’t get away with separating the natural sciences from the humanities. But if you look at the statistics, actually you can’t get away with doing both, because people will just not follow, right? Because you’re in between things, which is different from what you’re describing, which is really bad. And I think what data models can do in that kind of situation is you can actually put the model on a piece of paper. It’s not like somebody says “Yeah, humanities are like this-or-that”—we have something we can put on a table to collectively think and discuss about, and then we can put in the data, and then we can say, you know, “Your picture is really disconnected, but if we look closely there’s a lot of connections and maybe it will make sense to, you know, [pass?] the lines, it becomes a model, and then it becomes the grooved kind of ski-run,” so you can get back and forth from physics to art history in that way, because it makes sense. I think that’s the interesting part of it. We can use these data models to actually blur that part of it, or ask how the data reflects that part of it, and I think it’s always this back and forth. (40:00) You cannot do data models without looking at the data, and you cannot work with data without having a model.
Just to echo some of that, one of the things that really helps with collaboration, of course, is having a shared vocabulary and shared research questions. But one of the things we’re talking about here is, what do you do when in fact the data models are sometimes isolating you from those you could in fact be collaborating with? So being explicit, analyzing the data models in ways that help expose some of the implicit assumptions that are built in, moving to that meta level, can actually help get the work done on the ground.
[very unclear] Actually, […] I was going to say, allowing for contradictions, [mental?] disruption, ambiguities, these kinds of things—then when you brought up the cultures I was thinking again that these are probably not specific to the humanities, because I share this kind of need with quantum physicists, for instance, much more than with quantitative linguists. […] humanities-specific […] of the world… it is interpretation, it is measuring; it is these kinds of oppositions. So maybe the term “digital humanities” is a useful term for branding; it is not very useful in terms of explaining what we’re actually doing, and what we do in data modeling.
Do you think that data modeling in the humanities is basically just a collection of different activities which have no more to do with each other than with other activities, so we are just constructing something? [Nods]
[Kari Kraus] Just to build on Stefan [Gradmann]’s statements about some of those values he was expressing, which have come up in a number of different presentations—so Stefan’s point that we want RDF triples that can do more than express computations, that can also express affective information or statements, contradictory markup, and so forth. This relates to the question of what in humanities data modeling might be of interest to others. So, your point about physicists who might be interested in these things: I’ve been trying to wrap my mind around how much of these values have actually been realized in some of the models and ontologies and how much is still aspirational. I think for Stefan, in terms of the RDF triples, it is largely still aspirational, but in terms of contradictory markup, the community has made some headway. So I think actually taking the time to highlight some of the ways we’ve made progress on tools or models or ontologies that do try to instantiate those values would be really useful for that advocacy and outreach work.
[Fotis Jannidis] [To Stefan Gradmann] I would like to contradict you, actually, because you’re saying that what we’re doing as digital humanities data modeling has nothing more in common than data modeling in other fields. I think what we have been talking about over these last days may show that there are many commonalities, because the objects we are talking about have these commonalities, and Allen [Renear] was trying to point to this by using this somehow misleading concept of intentionality—but we all understand, I think, now, what he meant by that—and this really makes a difference: whether we’re talking about stones or particles at an atomic level. So I think there is a point in looking at that and saying that there’s a group of activities which is somehow more closely related than all activities of data modeling.
[Thomas Stäcker] I would actually like to support this view. I think you mentioned the text; the text is something specific to the humanities. That doesn’t mean we don’t also apply methods from statistics, for instance, which we share with the natural sciences or the physicists, but there is a specific object within the humanities, which is text in a very broad sense. Our tools are related to that text and characterize the models we create for describing it. Maybe the very term “data” is problematic here, because as a humanist I wouldn’t speak about “data” but rather “document”, “text”, “books”, whatever. There are data, of course, but “data” is not a key term of our profession, and so this is a bit ambiguous.
[Maximilian Schich] I think I can understand that in the current state of the humanities as a concept and […] But I very much have the opinion that archeologists, palaeographers, and epigraphers who work with objects, people working with [media?], art historians who work with visual things, historians […] We model and use texts, but the argument is always with images in the background. And if you look at how […] say, actually there is not only the text as an argument; particle physicists have conferences where you only have differential equations, and then there are people using plots and people using visualizations. And I think the central kind of thing, both as a way of communicating and […], can mean something totally not text—
[Fotis Jannidis] But may I interject with that’s, I think—
[Thomas Stäcker] The very product for research is the text. It’s not an image, so as an art historian you don’t create an image as a result of your research, but you create a text—
[People talking over one another]
[Fotis Jannidis] I think it’s good to remember what Julia said before. We know now—
[Julia Flanders] That is clearly an issue that needs to be addressed. Let’s express it as a question.
[Elke Teich] There might be another aspect in the area of disciplines, which is specificity. It seems to me, when we talk about modeling contradictions, ambiguity, indeterminacy, things like that—it’s true they all sound a bit negative, no? Isn’t it also about diversity? That we want diversity and we want modeling to represent the diversity in the models. We want to represent our different opinions in our texts and our different models—other kinds of models—on texts or on some other artifact. I think that’s very distinctive.
[Syd Bauman] You’re reminding me of The Hitchhiker’s Guide [to the Galaxy]: we demand rigidly defined areas of uncertainty and doubt. [Laughter]
[Fotis Jannidis] Any more input? Then we can finish in good time. Thanks for your lively discussion.
[Julia Flanders] This has been extremely valuable, I think, and this is a good artifact for us to use in the report and white paper, but also as possible next steps. And thanks to Anonymous Users 46 and 195 [on Google Drive].