Never Mistake the Tool for the Task

I’ve really enjoyed reading the “syuzhet” debate, but with all due respect to both Jockers and Swafford, the post that grabbed me the most was Scott Enderle’s–not so much for his specific observations, but because he pulled back a bit from the back-and-forth and asked a slightly more fundamental question, beyond the technical limitations Swafford pointed out: is the Fourier curve a good model to apply to “sentiment” at all?

It seemed to me that the more Swafford and others pointed out larger conceptual problems, the more Jockers dug in and defended the fundamental workings of his tool, but he never took a step back and defended the ultimate logic of applying it to the problem in the first place. In other words–arguments over whether or not the 85% accuracy rate that the Stanford tool could produce was “good enough” were interesting, but absolutely nothing in this entire exchange–with all due apologies to the late Mr. Vonnegut–ever convinced me the whole enterprise has any inherent value. I’m just not convinced that even a more accurate plot curve or a more detailed and nuanced sentiment curve would TELL us that much.

I didn’t say that they would tell us nothing, mind you–there is some value in being able to establish certain universal or common themes or plot structures. I can think of plenty of possible insights one might gain by being able to rather quickly graph the general emotional arc of thousands of novels from a particular genre, time, or place; or to verify certain universals in plot construction. Jockers, I’m certain, has since come up with a number of such findings.

However…he seems to have begun his project with the notion that there are “six or seven” basic plots, and while he later acknowledges that there are arguments that that estimate might be a little low, it’s low by a matter of degree, not kind. In other words–Yes, according to Syuzhet, there are indeed only a handful of plots in all of modern fiction.

Well…OK then! So what? I don’t mean to be glib, but while Jockers had a ready answer for every one of Swafford’s objections, I think he completely overlooks the implicit objection that underlies her entire argument–that his version of “distant reading” is ultimately ONLY interesting “from a distance”, so to speak. The closer you get, the less informative it seems. A program which has to “smooth out” a model of a real thing to such a degree that others find the exceptions and misreadings more indicative than the overall interpretation might simply be TOO distant to be of much use.
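A concrete way to see what “smoothing out” means here: the technique at issue is, roughly, a low-pass Fourier filter. What follows is not Jockers’ actual code–it’s Python rather than the R of the syuzhet package, and the sentiment values are invented for illustration–but it sketches how aggressively discarding high-frequency components flattens a jagged series into a tidy arc:

```python
import numpy as np

# A toy "sentiment trajectory": one score per narrative chunk.
# (Invented values for illustration -- not the output of any real sentiment tool.)
raw = np.array([0.3, -0.1, 0.4, -0.6, -0.2, 0.1, 0.5, 0.2,
                -0.4, -0.7, -0.3, 0.0, 0.4, 0.6, 0.3, 0.5])

def fourier_smooth(series, keep_components=3):
    """Low-pass filter: keep only the lowest-frequency Fourier components."""
    coeffs = np.fft.rfft(series)
    coeffs[keep_components:] = 0  # discard all higher-frequency detail
    return np.fft.irfft(coeffs, n=len(series))

smooth = fourier_smooth(raw)
print(np.round(smooth, 2))
```

The fewer components you keep, the smoother–and the more “universal”-looking–the resulting arc, and the more local detail, including exactly the exceptions critics pointed to, gets erased.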

I don’t know if I even believe that, to be honest. In a different mood, I might take Jockers’ side in this argument. Ultimately, I find the entire exchange to be mostly a reminder that one of the LEAST true cliches I’ve ever heard is “necessity is the mother of invention.” Edison thought the phonograph would be a teaching tool. He didn’t think the motion picture had commercial applications. Alexander Graham Bell thought he was merely improving the telegraph. I doubt Berners-Lee had internet trolls in mind a few decades ago.

It’s OK to find unexpected uses for new tools; it can be exciting to play with a new technology just to see what it can do. But if we’re going to be digital historians, then we need to go looking for tools that will help us do useful things, rather than look for things just to put our tools to use.

This Mapping thing is all over the place

Our readings this week suggest that there is a wide range of possibilities for mapping in the digital realm. Last week, I suggested that there is a strong link between networking and mapping. After reading “Hypercities”, I am even more convinced that this is the case.

Hypercities is not a single product or database; rather, it’s an ongoing collaborative project which utilizes a number of different resources (including common commercial products such as Twitter and Google Earth) along with data-mining to create “deep maps” of places (such as Berlin or Los Angeles) and events (such as the Arab Spring and the 2011 Japanese tsunami).

“Deep” maps, the authors go on to explain, are not the same as “dense” maps; ‘deepness’ implies an ability to provide multiple layers in temporal, thematic, and contextual ways. A “deep” map of Berlin has the capacity to look at Berlin historically; to consider ways in which the landscape of the city has changed over time.

But that is a rather straightforward approach to the possibilities of deep mapping. A more interesting approach is possible when creators think more fundamentally about what it means to map a “place”; why do standard maps include geographical features, landmarks, buildings, transportation routes (be they railroads, streets, or pedestrian bridges), and political demarcations, but not human beings? Why are the people who live in those buildings, alter those geographic features, and utilize those transportation routes not considered part of the landscape as well?

I don’t have my copy of the book with me as I’m typing this, so apologies for not remembering the man’s name, but there is a section on an African-American man who worked for decades as a sweeper (tasked with maintaining the cleanliness of a busy intersection/circle) in early 19th century New York City. Because of his striking appearance as well as his visibility as a full-time custodian of a very public space, there are numerous images of that intersection/circle over several decades which include him, if not feature him. The authors’ point is that a true map of NYC as experienced by inhabitants of the time might very well include him. Because if a map is a representation of a ‘place’, then shouldn’t it reflect the place as it actually was/is, rather than a mere geographic abstraction?

It’s an intriguing notion, one which leads into further discussions of the use of Twitter to recreate or visually conceptualize the real-time aggregate experiences of thousands of people who were participants in the Egyptian protests against the Mubarak regime in Cairo, and many who were survivors of the Japanese tsunami of 2011. These projects create visual networks in which hashtags are nodes, portrayed at sizes relative to their “weight”–the number of connections to other nodes (hashtags).
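A sketch of what such a hashtag network amounts to under the hood (the hashtags and tweets below are invented, and this is plain Python, not whatever tooling the HyperCities team actually used): hashtags that co-occur in a tweet are linked, and a node’s “size” is its weighted degree.

```python
from collections import Counter
from itertools import combinations

# Toy tweets, each reduced to its set of hashtags (invented, purely illustrative).
tweets = [
    {"#jan25", "#tahrir", "#egypt"},
    {"#jan25", "#egypt"},
    {"#tahrir", "#egypt", "#mubarak"},
    {"#jan25", "#mubarak"},
]

# An edge links two hashtags that appear in the same tweet;
# the edge weight counts how many tweets they share.
edges = Counter()
for tags in tweets:
    for a, b in combinations(sorted(tags), 2):
        edges[(a, b)] += 1

# A node's "size" is its weighted degree: total co-occurrences across all edges.
degree = Counter()
for (a, b), w in edges.items():
    degree[a] += w
    degree[b] += w

print(degree.most_common())  # hashtags ranked by weighted degree
```

Feed the `edges` and `degree` tables to any graph-drawing tool and you have the visualization described above: bigger, more central nodes for the hashtags that bound the most conversations together.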

All of this suggests something which, again, I mentioned last week–that there is a fundamental difference between “digital mapping” on the one hand, and “using digital tools for cartography” on the other. Digital maps have the ability to ignore or transcend issues of scale–a fundamental concern of traditional cartography, in which deciding what scale is appropriate is not only a crucial step that must be made early in the creative process, but also, from that point on, a static and unalterable decision. With digital tools, that cartographic truism is no longer necessarily true–and the ability of digital maps to transcend or “play with” scale means that they can perform functions beyond the original intent of their creators.

We don’t value visual evidence? Since when?

A frequent complaint/observation in many of our readings for this week is some variation of the idea that historians don’t appreciate or value information presented in visual or graphic form. As David J. Staley puts it in his introduction to Computers, Visualization, and History:

Many historians view images as intrinsically inferior to words. In fact, historians often equate “serious history” with “written history.”

My first reaction was to consider the origins of the historical profession in the late 19th century; the emphasis on written records from authoritative sources in (implicit or explicit) support of the nation-state almost certainly hard-wired a preference for written documentation into our field from the beginning. For that matter, ask the average person what a good “history of [X]” would be, and they would almost certainly assume you are inquiring about a book, if not a multi-volume monograph.

But that explanation seems inadequate, for the implication in these statements isn’t just that our profession has a preference for-slash-implicit bias towards written primary sources; rather, the argument is that we don’t much care for visual representations in the secondary literature. “Real” historiography, according to this bias, might be supplemented with images, but the text is where the meat of the historical argument is found.

On the one hand, even as a part-time grad student who experiences the profession and the “academy” on a somewhat semi-removed basis, I do have a sense of this bias, and perhaps have even subconsciously internalized it to some degree. I know that I will regard a “proper” monograph differently than a text which prominently features visuals, or which plays with typeface and design. I don’t dismiss the latter, but I probably read it with different (lower?) expectations.

As an aside–I am a frequent participant on a soccer-themed message board, and I’ve noticed that I and many other posters my age (mostly men, though I’m not sure if this is a gender thing, as the posters tend to be overwhelmingly male) shy away from or totally avoid using “emojis”; just today I had an exchange over Facebook Messenger with my wife (who is currently overseas) where at one point she was teasing me; I made a deadpan response which she initially thought was serious, so she asked to make sure I wasn’t upset. My response–after assuring her I wasn’t the slightest bit perturbed–was to say that “I should have used an emoji”. I followed that up with the near-universal 😉 just to clear things up.

So why am I so reluctant to use emojis and smiley-icons in text-only conversations? Is it the suspicion that they are “silly” and that my words should be sufficient–even though in this situation they weren’t, since we were kidding about something we had had a low-level disagreement over fairly recently, and we haven’t seen each other in nearly a month? Conversations in person rely on tone, body language, and facial expressions not just to “supplement” the spoken word, but to convey information as well. There is a reason why a script for a movie reads differently on the page than dialogue from a novel or short story–the latter uses prose and context to fill in the “information” that the actors (and the camera work) convey in the former. The same is true in conversation–emojis, then, aren’t merely “cutesy” accessories to written information; they are also visual cues to the meaning the sender intends. When my wife teased me about something we had bickered about shortly before she left for an extended trip, my typed response “Whatever you say”, shorn of any indication that I was smiling when I typed it, seemed brusque and even defensive if not dismissive. As I admitted to her, a simple “winky-face” smiley would have spared her a few moments of concern from 6000 miles away.

Back to history–although I somewhat agree with Staley and others, I also wonder if this attitude isn’t something which has to be learned in the first place–surely, I can’t be the only nascent historian who first developed an interest in the past by poring over maps of historic battles, or coffee-table books loaded with pictures of knights, or American Indians, or Matthew Brady’s Civil War photography? Surely many of us were originally drawn to this profession by the very visual evidence that we later learned to denigrate and mistrust once we got “serious”?

When did we stop poring over maps as if they were talismans of ancient knowledge? When did we decide that the pure pleasure of SEEING the past in artifacts, pictures, and historic sites was somehow suspect, maybe too “easy” to be rich in meaning?

Perhaps we don’t need to teach the history profession to trust visual evidence so much as we need to stop teaching each other to stop finding value in it in the first place.

[Networking] + [Mapping] = [Kirk might get this digital thing yet]

As I was paging through different chapters of Mark Newman’s Networks: An Introduction, it occurred to me that there is a great deal of potential crossover between networking and mapping. Given that my interest is in transportation, I had always assumed that mapping would be the aspect of DH I would find most relevant and useful. I had assumed that the ability to make maps would allow me to illustrate some of my findings as well as to help myself conceptualize my research.

After reading some of the opening chapters, however, I began to realize that “networking” is a broader subject than the computer networks I admit I assumed this week’s readings would be about. And then I read Scott Weingart’s two-part blog post, and it hit me that a “transportation network” is a network, and that it was at least possible that some network modeling or analysis techniques might be appropriate for using a map as a dynamic tool, or at the very least networking would provide a way to make a map of a transportation network more meaningful. Given that I am hoping to look at shifting patterns of transport and sale over a regional transportation network, a simple map might be too static and too lacking in a temporal dimension to represent my subject in a meaningful way. Multiple maps would be better–and I had always assumed they would be necessary–but given that I haven’t gotten to the point of needing those maps yet, I hadn’t yet considered the problem of how I would make them. Or, more specifically, how I would come up with the information in a cohesive, visually meaningful way.

And then, I read Elijah Meeks’ Visualization of Network Distance, and right off the bat there I was looking at a visualization of the transportation network of the Roman Empire at its height. The graphic was recognizable as a map, but later examples formatted the same basic network in different ways–to emphasize travel time, particularly–breaking away from the geographic “realism” of the first version. All the same, it struck me that the divide between a “network” and a “map” is a lot more porous and conditional than I had previously believed.
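A minimal sketch of that idea: weight the edges of a transportation graph by travel time rather than distance, and the “map” that falls out is organized by how far apart places effectively are, not where they sit geographically. The place names below are real, but the routes and hour-counts are invented for illustration–this is not drawn from Meeks’ data.

```python
import heapq

# Toy travel-time graph (hours between nodes) -- invented numbers.
# Note that a sea route can make a geographically distant place "close".
travel_time = {
    "Rome":       {"Ostia": 2, "Capua": 20},
    "Ostia":      {"Rome": 2, "Carthage": 60},
    "Capua":      {"Rome": 20, "Brundisium": 30},
    "Carthage":   {"Ostia": 60},
    "Brundisium": {"Capua": 30},
}

def fastest(graph, start):
    """Dijkstra's algorithm: fastest travel time from start to each reachable node."""
    best = {start: 0}
    queue = [(0, start)]
    while queue:
        t, node = heapq.heappop(queue)
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, dt in graph[node].items():
            if t + dt < best.get(nxt, float("inf")):
                best[nxt] = t + dt
                heapq.heappush(queue, (t + dt, nxt))
    return best

print(fastest(travel_time, "Rome"))
```

Plot each node at a radius proportional to its fastest time from a chosen origin, instead of its latitude and longitude, and you get a travel-time “map” of exactly the sort Meeks shows–a network wearing a map’s clothes, or vice versa.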

Given the relative paucity of actual data I have to work with so far, I’m not yet sure that I would need more than some of the simpler programs available to handle the project I am beginning to think about. But the potential to think about my topic as a network is exciting, and I think potentially very fruitful.

DH: You Gotta “Do” to “Be”

I’m having a very busy July. My wife is out of town for the entire month visiting friends and family, so I’m holding down the fort on my own. Of course, our son is 17 and very self-sufficient, the dog gets tired easily and doesn’t need that much exercise, and the cat is a cat. Still, on top of my job, I’ve got this minor field readings course on Digital History, I’m teaching a six-week course at NVCC, and some other things going on. So, busy. But while trying to stay on top of things, I was also hoping to find some time to take care of a few minor household repairs I’ve been putting off for some time. Particularly, the shower I share with my wife could really use re-caulking. A few boards on the back patio need replacing. One porch light needs to be replaced. And so on.

Home improvement projects are not really in my comfort zone. I don’t get a whole lot of pleasure out of them, for one thing, but more importantly I didn’t do any of that kind of work growing up, so with anything of the sort I quite often end up feeling a lot more anxiety about getting started. What if I screw up? What if I end up wasting too much time–time being something I don’t have a whole lot of these days–on a task which in theory should not take nearly as long? What if I find that I don’t own all the tools I need, so that the only way I could finish the task would be to spend more money, thus negating the financial advantage of doing it myself?

That sort of thinking will keep you from trying anything–and then you’re just a homeowner with good intentions who ends up paying somebody else to perform simple repairs and projects that you could have done yourself if you had only allowed yourself to stop second-guessing yourself. Rather than letting “I don’t know how to do this, and I don’t know if I want to invest the time learning how” become a reason to stay within the confines of your existing skill set, it’s better to simply dive in, knowing that plenty of other homeowners just like you have tackled similar projects and more, and trust yourself to figure it out on the fly.

OK, I get it. Theory is good and interesting. But talking about DH is not what DH is; you’re not really a digital historian until you’re DOING digital history somehow. I know I came into this with the self-imposed caveat that I’m a “digitally literate historian” rather than a digital historian. Perhaps. But that “explanation” is really, to be honest, something of a crutch. Or an excuse to avoid feeling obligated to learn new skills and acquire new competencies, and to instead spend that time bolstering what I already know and refining what I already know how to do.

So, I’ve downloaded R, and R Studio. I’m paging through Jockers’ manual. I have no idea what I’m doing and I’m still not convinced this is what I WANT to do, but I’ve run out of excuses not to try. So I’ll dive right in.

I just hope I do a better job caulking our shower than I did on the bathtub in our son’s bathroom last year.

“Is it working?”

Bethany Nowviskie, at one point in her online article Evaluating Collaborative Digital Scholarship (or, Where Credit is Due), asks a very fundamental question about the development of a digital project: “Does it work?” This question is the crucial one for all the readings this week.

I re-phrased her question for the title of this blog post because I wanted to apply it even more broadly than Nowviskie does–I believe that fundamentally, every reading this week somehow dealt with the question of “is digital [scholarship; citation; publishing] working?” Whether it is William G. Thomas III recounting the process of refining “The Differences Slavery Made” in light of several rounds of feedback from colleagues in both the digital world and the history realm, or Edward L. Ayers wondering what future, if any, digital scholarship has–every one of our readings was either implicitly (as in Ayers’ case) or explicitly (as in the case of Nowviskie, as well as just about any of the readings which include the word “evaluate” in the title) about that very basic question.

This issue is important for a couple of reasons. First, it is very easy to get tempted by the promise of new technology, and to assume that ‘doing history digitally’ is self-evidently a good thing because, hey, the world is online now. That’s “where people live” so obviously we should meet them there. Problem solved!

Or is it? We don’t know as much as we should know about how much–or how WELL–our scholarship is being read and used once it’s out there in the Great Pixel Wilderness. Nowviskie in particular makes a strong argument that creators of digital scholarship need to do more to find out what, how, and how much our work is being read, cited, downloaded–USED.

Secondly, this issue has professional consequences. Programs need to be mindful that while they are training future practitioners of a new and evolving field, these students and future academics are currently entering a field with some very deeply entrenched, ‘analog’ ways of doing things–specifically, of evaluating the work of others. While it’s an interesting exercise to consider how peer review might be little more than a relic of older methodologies, or how the granting of tenure should be premised on alternate routes of knowledge creation–newly minted graduates of MA and PhD programs must contend with departments with older, often digitally-averse or at least skeptical senior faculty, as well as time-honored methods of validating academic work and research. If we are going to expect newer scholars to pursue digital scholarship methodologies, then we need to come up with more tools and metrics such as Thomas’ typologies to justify and defend the value of the work they are creating.

New digital scholarship deserves respect; part of gaining respect is being able to demonstrate value and effectiveness. We need to be able to answer in the affirmative when asked “is it working?”

Teaching digital humanities starts with the students

Ever since my first day in Clio I back in the Fall of 2013, one question I’ve grappled with is “What comes first–the ‘digital’ or the ‘humanities’?” I hope that doesn’t sound glib, because this imagined dichotomy was simply my way of framing a larger issue–is DH primarily a phenomenon of the digital world and therefore a distinct field, or is it rather the humanities being “done” digitally, and therefore simply a subset of an existing field?

I am still not sure about the answer to that question–in fact, I don’t think there IS an answer to be had (nor is one necessarily desirable)–but this week’s readings suggest to me that, despite the fact that my primary motivation to get a PhD in history is so I can teach at the college level, I was missing the bigger picture. The most important reason to embrace the move toward DH–wherever you think the emphasis is–is that going digital will help practitioners and educators find the audience “where they live”. And for those of us who wish to become history educators, the audience is our students. They should come first.

Mills Kelly’s book touches on many aspects of the issue of pedagogy in digital history, but one theme that runs through his book is his deep respect for students and their capacity to learn IF we meet them “where they live.” Good educators must have respect for their students as the foundation of how they teach. “Respect” is not the same thing as deference, and it goes without saying that it’s anything but pandering.

What it is, rather, is a willingness to trust that students can learn; and furthermore, to recognize that the pedagogical methods must fit the students, not the other way around.

Kelly’s book is a call to rise to the challenge of using digital methods in teaching, and not to be intimidated by the seeming digital literacy of our students. As he puts it:

“As mentioned earlier, today’s students are adept users of technology, but they are only rarely adept learners with the technology.”

I also found a lot of value in Ryan Cordell’s blog entry “How Not To Teach Digital Humanities”. He writes:

“In such an environment, digital humanities remains a useful banner for gathering a community of scholars doing weird humanities work with computers. And I suspect it will continue to be useful for awhile yet, long after the current wave of DH mania subsides, I hope, into a more productive rapprochement with the larger humanities fields.”

The message here is that humanities students are here for the humanities, so no matter how digitally literate they are, we should never drape DH in so much techno-talk and futurist hype that we scare those students away or lead them to believe that the heart of the humanities–the kernel of their passion–is being swamped with tools, methodologies, and media-studies theorizing. We should regard DH as a continuation and enrichment of the humanities.

And going back to Kelly, I recognize that I have an obligation to maintain this dialogue beyond the parameters of this class. One thing I intend to do is to link this blog to my own website and leave it there–hopefully, the knowledge that I’m making this public will motivate me to log in from time to time when something has struck me or I have something to articulate. I want to avoid the fate he illustrates in Chapter 4:

“That class blogs die at the end of the semester should be no surprise, because students so rarely see any benefit to a class blog beyond the grade they earn in that class.”