Never Mistake the Tool for the Task

I’ve really enjoyed reading the “syuzhet” debate, but with all due respect to both Jockers and Swafford, the post which grabbed me the most was Scott Enderle’s. Not so much for his specific observations, but because he pulled back a bit from the back-and-forth and asked a question more fundamental than the technical limitations Swafford pointed out: is the Fourier curve a good model to apply to “sentiment” at all?

It seemed to me that the more Swafford and others pointed out larger conceptual problems, the more Jockers dug in and defended the fundamental workings of his tool; he never took a step back and defended the ultimate logic of applying it to the problem in the first place. In other words, arguments over whether or not the 85% accuracy rate the Stanford tool could produce was “good enough” were interesting, but absolutely nothing in this entire exchange (with all due apologies to the late Mr. Vonnegut) ever convinced me the whole enterprise has any inherent value. I’m just not convinced that even a more accurate plot curve or a more detailed and nuanced sentiment curve would TELL us that much.

I didn’t say that they would tell us nothing, mind you; there is some value in being able to establish certain universal or common themes or plot structures. I can think of plenty of possible insights one might gain by being able to rather quickly graph the general emotional arc of thousands of novels from a particular genre, time, or place, or to verify certain universals in plot construction. Jockers, I’m certain, has since come up with a number of such findings.

However…he seems to have begun his project with the notion that there are “six or seven” basic plots, and while he later acknowledges that there are arguments that this estimate might be a little low, it’s low by a matter of degree, not kind. In other words: yes, according to Syuzhet, there are indeed only a handful of plots in all of modern fiction.

Well…OK then! So what? I don’t mean to be glib, but while Jockers had a ready answer for every one of Swafford’s objections, I think he completely overlooks the implicit objection that underlies her entire argument: that his version of “distant reading” is ultimately ONLY interesting “from a distance,” so to speak. The closer you get, the less informative it seems. A program which has to “smooth out” a model of a real thing to such a degree that others find the exceptions and misreadings more indicative than the overall interpretation might simply be TOO distant to be of much use.
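(For anyone who, like me, is only now getting acquainted with R: the pipeline being argued over is short enough to sketch. Below is a minimal example using functions documented in the syuzhet package; the file path is a placeholder, and the parameter values are illustrative rather than Jockers’ own settings.)

```r
# A minimal sketch of the syuzhet pipeline under discussion. "my_novel.txt"
# is a placeholder path; low_pass_size = 3 is illustrative, not Jockers' value.
library(syuzhet)

novel_text <- get_text_as_string("my_novel.txt")            # load the full text
sentences  <- get_sentences(novel_text)                     # split into sentences
raw_vals   <- get_sentiment(sentences, method = "syuzhet")  # per-sentence scores

# The contested "smoothing" step: a low-pass Fourier filter keeps only the
# first few frequency components, collapsing sentence-level noise into a
# smooth plot "arc."
smoothed <- get_transformed_values(raw_vals, low_pass_size = 3, scale_vals = TRUE)

plot(smoothed, type = "l", xlab = "Narrative time", ylab = "Smoothed sentiment")
```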

I don’t know if I even believe that last objection, to be honest. In a different mood, I might take Jockers’ side in this argument. Ultimately, I find the entire exchange to be mostly a reminder that one of the LEAST true cliches I’ve ever heard is “necessity is the mother of invention.” That, I believe, is rarely true. Edison thought the phonograph would be a teaching tool. He didn’t think the motion picture had commercial applications. Alexander Graham Bell thought he was merely improving the telegraph. I doubt Berners-Lee had internet trolls in mind a few decades ago.

It’s OK to find unexpected uses for new tools; it can be exciting to play with a new technology just to see what it can do. But if we’re going to be digital historians, then we need to go looking for tools that will help us do useful things, rather than look for things just to put our tools to use.

This Mapping thing is all over the place

Our readings this week suggest that there is a wide range of possibilities for mapping in the digital realm. Last week, I proposed that there is a strong link between networking and mapping. After reading “HyperCities,” I am even more convinced that this is the case.

HyperCities is not a single product or database; rather, it’s an ongoing collaborative project which utilizes a number of different resources (including common commercial products such as Twitter and Google Earth) along with data-mining to create “deep maps” of places (such as Berlin or Los Angeles) and events (such as the Arab Spring and the 2011 Japanese tsunami).

“Deep” maps, the authors go on to explain, are not the same as “dense” maps; ‘deepness’ implies an ability to provide multiple layers in temporal, thematic, and contextual ways. A “deep” map of Berlin has the capacity to look at Berlin historically; to consider ways in which the landscape of the city has changed over time.

But that is a rather straightforward approach to the possibilities of deep mapping. A more interesting approach is possible when creators think more fundamentally about what it means to map a “place”: why do standard maps include geographical features, landmarks, buildings, transportation routes (be they railroads, streets, or pedestrian bridges), and political demarcations, but not human beings? Why are the people who live in those buildings, alter those geographic features, and utilize those transportation routes not considered part of the landscape as well?

I don’t have my copy of the book with me as I’m typing this, so apologies for not remembering the man’s name, but there is a section on an African-American man who worked for decades as a sweeper (tasked with maintaining the cleanliness of a busy intersection/circle) in early 19th century New York City. Because of his striking appearance as well as his visibility as a full-time custodian of a very public space, there are numerous images of that intersection/circle over several decades which include him, if not feature him. The authors’ point is that a true map of NYC as experienced by inhabitants of the time might very well include him. Because if a map is a representation of a “place,” then shouldn’t it reflect the place as it actually was/is, rather than a mere geographic abstraction?

It’s an intriguing notion, one which leads into further discussions of the use of Twitter to recreate or visually conceptualize the real-time aggregate experiences of thousands of people who were participants in the Egyptian protests against the Mubarak regime in Cairo, and many who were survivors of the Japanese tsunami of 2011. These discussions culminate in the creation of visual networks in which hashtags are nodes, portrayed at sizes relative to their “weight”: the number of connections to other nodes (hashtags).
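(A toy sketch of how such a network might be assembled, using R’s igraph package; the hashtags and co-occurrence counts are invented for illustration, not harvested from Twitter.)

```r
# A toy hashtag co-occurrence network. The edge list is invented, standing in
# for pairs of hashtags that appeared together in the same tweets.
library(igraph)

edges <- data.frame(
  from   = c("#jan25",  "#jan25", "#tahrir", "#egypt"),
  to     = c("#tahrir", "#egypt", "#egypt",  "#mubarak"),
  weight = c(40, 25, 15, 5)   # hypothetical co-occurrence counts
)

g <- graph_from_data_frame(edges, directed = FALSE)

# Draw each node at a size proportional to its weighted degree ("strength"),
# so heavily connected hashtags appear larger, and each edge at a width
# proportional to its weight: the visual convention described above.
plot(g,
     vertex.size = strength(g) / 2,
     edge.width  = E(g)$weight / 10)
```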

All of this suggests something which, again, I mentioned last week: that there is a fundamental difference between “digital mapping” on the one hand, and “using digital tools for cartography” on the other. Digital maps have the ability to ignore or transcend issues of scale, a fundamental concern of traditional cartography. Deciding what scale is appropriate for a traditional map is not only a crucial step which must be settled early in the creative process; it is also, from that point on, a static and unalterable decision. With digital tools, that cartographic truism is no longer necessarily true, and the ability of digital maps to transcend or “play with” scale means that digital maps can perform functions beyond the original intent of their creators.

We don’t value visual evidence? Since when?

A frequent complaint/observation in many of our readings for this week is some variation of the idea that historians don’t appreciate or value information presented in visual or graphic form. As David J. Staley puts it in his introduction to Computers, Visualization, and History:

Many historians view images as intrinsically inferior to words. In fact, historians often equate “serious history” with “written history.”

My first reaction was to consider the origins of the historical profession in the late 19th century; the emphasis on written records from authoritative sources in (implicit or explicit) support of the nation-state almost certainly hard-wired a preference for written documentation into our field from the beginning. For that matter, ask the average person what a good “history of [X]” would be, and they would almost certainly assume you are inquiring about a book, if not a multi-volume monograph.

But that explanation seems inadequate, for the implication in these statements isn’t just that our profession has a preference for (and an implicit bias toward) written primary sources; rather, the argument is that we don’t much care for visual representations in the secondary literature. “Real” historiography, according to this bias, might be supplemented with images, but the text is where the meat of the historical argument is found.

On the one hand, even as a part-time grad student who experiences the profession and the “academy” on a somewhat removed basis, I do have a sense of this bias, and perhaps have even subconsciously internalized it to some degree. I know that I will regard a “proper” monograph differently than a text which prominently features visuals, or which plays with typeface and design. I don’t dismiss the latter, but I probably read it with different (lower?) expectations.

As an aside: I am a frequent participant on a soccer-themed message board, and I’ve noticed that I and many other posters of my age (mostly men, though since the posters are overwhelmingly male I’m not sure if this is a gender thing) shy away from or totally avoid using emojis. Just today I had an exchange over Facebook Messenger with my wife (who is currently overseas) where at one point she was teasing me; I made a deadpan response which she initially thought was serious, so she asked to make sure I wasn’t upset. My response, after assuring her I wasn’t the slightest bit perturbed, was to say that “I should have used an emoji.” I followed that up with the near-universal 😉 just to clear things up.

So why am I so reluctant to use emojis and smiley-icons in text-only conversations? Is it the suspicion that they are “silly” and that my words should be sufficient? In this situation they weren’t: we were kidding about something we had had a low-level disagreement over fairly recently, and we haven’t seen each other in nearly a month. Conversations in person rely on tone, body language, and facial expressions not just to “supplement” the spoken word, but to convey information in their own right. There is a reason why a script for a movie reads differently on the page than dialogue from a novel or short story: the latter uses prose and context to fill in the “information” that the actors (and the camera work) convey in the former. The same is true in conversation. Emojis, then, aren’t merely “cutesy” accessories to written information; they are visual cues to the meaning the sender intends. When my wife teased me about something we had bickered about shortly before she left for an extended trip, my typed response “Whatever you say,” shorn of any indication that I was smiling when I typed it, seemed brusque and even defensive, if not dismissive. As I admitted to her, a simple “winky-face” smiley would have spared her a few moments of concern from 6,000 miles away.

Back to history. Although I somewhat agree with Staley and others, I also wonder if this attitude isn’t something which has to be learned in the first place. Surely I can’t be the only nascent historian who first developed an interest in the past by poring over maps of historic battles, or coffee-table books loaded with pictures of knights, or American Indians, or Matthew Brady’s Civil War photography? Surely many of us were originally drawn to this profession by the very visual evidence that we later learned to denigrate and mistrust once we got “serious”?

When did we stop poring over maps as if they were talismans of ancient knowledge? When did we decide that the pure pleasure of SEEING the past in artifacts, pictures, and historic sites was somehow suspect, maybe too “easy” to be rich in meaning?

Perhaps we don’t need to teach the history profession to trust visual evidence so much as we need to stop teaching one another to stop finding value in it in the first place.

[Networking] + [Mapping] = [Kirk might get this digital thing yet]

As I was paging through different chapters of Mark Newman’s Networks: An Introduction, it occurred to me that there is a great deal of potential crossover between networking and mapping. Given that my interest is in transportation, I had always assumed that mapping would be the aspect of DH I would find most relevant and useful: the ability to make maps would allow me to illustrate some of my findings as well as help me conceptualize my research.

After reading some of the opening chapters, however, I began to realize that “networking” is a broader subject than the computer networks I admit I assumed this week’s readings would be about. And then I read Scott Weingart’s two-part blog post, and it hit me that a “transportation network” is a network, and that it was at least possible that some network modeling or analysis techniques might be appropriate for using a map as a dynamic tool; at the very least, networking would provide a way to make a map of a transportation network more meaningful. Given that I am hoping to look at shifting patterns of transport and sale over a regional transportation network, a simple map might be too static and too lacking in a temporal dimension to represent my subject in a meaningful way. Multiple maps would be better (I had always assumed they would be necessary), but since I haven’t gotten to the point of needing those maps yet, I hadn’t yet considered the problem of how I would make them. Or, more specifically, how I would present the information in a cohesive, visually meaningful way.

And then I read Elijah Meeks’ Visualization of Network Distance, and right off the bat I was looking at a visualization of the transportation network of the Roman Empire at its height. The graphic was recognizable as a map, but later examples format the same basic network in different ways (to emphasize travel time, particularly) which break away from the geographic “realism” of the first version. All the same, it struck me that the divide between a “network” and a “map” is a lot more porous and conditional than I might have previously believed.
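(Meeks’ point can be reproduced in miniature: the same network returns different “shortest” routes depending on which edge attribute you treat as distance. The rail network below is entirely hypothetical, with invented town names, mileages, and travel times, just to show the mechanics in igraph.)

```r
# A hypothetical rail network measured two ways: geographic miles vs. travel
# hours. All town names and figures are invented for illustration.
library(igraph)

rail <- data.frame(
  from  = c("Millport", "Millport",  "Junction",  "Junction"),
  to    = c("Junction", "Eastfield", "Eastfield", "Hilltown"),
  miles = c(30, 55, 20, 45),
  hours = c(5.0, 1.5, 0.6, 1.2)  # a slow mountain line vs. a fast express
)

g <- graph_from_data_frame(rail, directed = FALSE)

# The same pair of towns, two different "distances": measured in miles, the
# direct Millport-Junction-Hilltown route wins (75 miles); measured in hours,
# the longer express route via Eastfield does (3.3 hours). A static map only
# shows the first answer.
distances(g, v = "Millport", to = "Hilltown", weights = E(g)$miles)
distances(g, v = "Millport", to = "Hilltown", weights = E(g)$hours)
```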

Given the relative paucity of actual data I have to work with so far, I’m not yet sure that I would need more than some of the simpler programs available to handle the project I am beginning to think about. But the potential to think about my topic as a network is exciting, and I think potentially very fruitful.

DH: You Gotta “Do” to “Be”

I’m having a very busy July. My wife is out of town for the entire month visiting friends and family, so I’m holding down the fort on my own. Of course, our son is 17 and very self-sufficient, the dog gets tired easily and doesn’t need that much exercise, and the cat is a cat. Still, on top of my job, I’ve got this minor field readings course on Digital History, I’m teaching a six-week course at NVCC, and some other things going on. So, busy. But while trying to stay on top of things, I was also hoping to find some time to take care of a few minor household repairs I’ve been putting off for some time. Particularly, the shower I share with my wife could really use re-caulking. A few boards on the back patio need replacing. One porch light needs to be replaced. And so on.

Home improvement projects are not really in my comfort zone. I don’t get a whole lot of pleasure out of them, for one thing, but more importantly, I didn’t do any of that kind of work growing up, so with anything of the sort I end up feeling a lot of anxiety about getting started. What if I screw up? What if I end up wasting too much time (time being something I don’t have a whole lot of these days) on a task which in theory should not take nearly as long? What if I find that I don’t own all the tools I need, so that the only way I could finish the task would be to spend more money, thus negating the financial advantage of doing it myself?

That sort of thinking will keep you from trying anything, and then you’re just a homeowner with good intentions who ends up paying somebody else to perform simple repairs and projects you could have done yourself if you had only stopped second-guessing yourself. Rather than letting “I don’t know how to do this, and I don’t know if I want to invest the time learning how” become a reason to stay within the confines of your existing skill set, it’s better to simply dive in, knowing that plenty of other homeowners just like you have tackled similar projects and more, and trust yourself to figure it out on the fly.

OK, I get it. Theory is good and interesting. But talking about DH is not what DH is; you’re not really a digital historian until you’re DOING digital history somehow. I know I came into this with the self-imposed caveat that I’m a “digitally literate historian” rather than a digital historian. Perhaps. But that “explanation” is really, to be honest, something of a crutch. Or an excuse to avoid feeling obligated to learn new skills and acquire new competencies, rather than spending that time bolstering and refining what I already know how to do.

So, I’ve downloaded R and RStudio. I’m paging through Jockers’ manual. I have no idea what I’m doing and I’m still not convinced this is what I WANT to do, but I’ve run out of excuses not to try. So I’ll dive right in.

I just hope I do a better job caulking our shower than I did on the bathtub in our son’s bathroom last year.

“Is it working?”

Bethany Nowviskie, at one point in her online article Evaluating Collaborative Digital Scholarship (or, Where Credit is Due), asks a very fundamental question about the development of a digital project: “Does it work?” This question is the crucial one for all the readings this week.

I re-phrased her question for the title of this blog post because I wanted to apply it even more broadly than Nowviskie does. I believe that, fundamentally, every reading this week somehow dealt with the question of “is digital [scholarship; citation; publishing] working?” Whether it is William Thomas III recounting the process of refining “The Difference Slavery Made” in light of several rounds of feedback from colleagues in both the digital world and the history realm, or Edward L. Ayers wondering what future, if any, digital scholarship has, every one of our readings was either implicitly (as in Ayers’ case) or explicitly (as in the case of Nowviskie, as well as just about any of the readings which include the word “evaluate” in the title) about that very basic question.

This issue is important for a couple of reasons. First, it is very easy to be tempted by the promise of new technology, and to assume that ‘doing history digitally’ is self-evidently a good thing because, hey, the world is online now. That’s “where people live,” so obviously we should meet them there. Problem solved!

Or is it? We don’t know as much as we should about how much, or how WELL, our scholarship is being read and used once it’s out there in the Great Pixel Wilderness. Nowviskie in particular makes a strong argument that creators of digital scholarship need to do more to find out what, how, and how much our work is being read, cited, downloaded: USED.

Secondly, this issue has professional consequences. Programs need to be mindful that while they are training future practitioners of a new and evolving field, these students and future academics are entering a field with some very deeply entrenched, ‘analog’ ways of doing things, specifically of evaluating the work of others. While it’s an interesting exercise to consider how peer review might be little more than a relic of older methodologies, or how the granting of tenure should be premised on alternate routes of knowledge creation, newly minted graduates of MA and PhD programs must contend with departments of older, often digitally averse (or at least skeptical) senior faculty, as well as time-honored methods of validating academic work and research. If we are going to expect newer scholars to pursue digital scholarship methodologies, then we need to come up with more tools and metrics, such as Thomas’ typologies, to justify and defend the value of the work they are creating.

New digital scholarship deserves respect; part of gaining respect is being able to demonstrate value and effectiveness. We need to be able to answer in the affirmative when asked “is it working?”

Teaching digital humanities starts with the students

Ever since my first day in Clio I back in the Fall of 2013, one question I’ve grappled with is “What comes first: the ‘digital’ or the ‘humanities’?” I hope that doesn’t sound glib, because this imagined dichotomy was simply my way of framing a larger issue: is DH primarily a phenomenon of the digital world, and therefore a distinct field, or is it rather the humanities being “done” digitally, and therefore simply a subset of an existing field?

I am still not sure about the answer to that question. In fact, I don’t think there IS an answer to be had (nor is one necessarily desirable). But this week’s readings suggest to me that, despite the fact that my primary motivation for getting a PhD in history is to teach at the college level, I was missing the bigger picture. The most important reason to embrace the move toward DH (wherever you think the emphasis lies) is that going digital will help practitioners and educators find the audience “where they live.” And for those of us who wish to become history educators, the audience is our students. They should come first.

Mills Kelly’s book touches on many aspects of pedagogy in digital history, but one theme that runs through it is his deep respect for students and their capacity to learn IF we meet them “where they live.” Good educators must have respect for their students as the foundation of how they teach. “Respect” is not the same thing as deference, and it goes without saying that it’s anything but pandering.

What it is, rather, is a willingness to trust that students can learn; and furthermore, to recognize that the pedagogical methods must fit the students, not the other way around.

Kelly’s book is a call to rise to the challenge of using digital methods in teaching, and not to be intimidated by the seeming digital literacy of our students. As he puts it:

As mentioned earlier, today’s students are adept users of technology, but they are only rarely adept learners with the technology.

I also found a lot of value in Ryan Cordell’s blog entry “How Not To Teach Digital Humanities”. He writes:

“In such an environment, digital humanities remains a useful banner for gathering a community of scholars doing weird humanities work with computers. And I suspect it will continue to be useful for awhile yet, long after the current wave of DH mania subsides, I hope, into a more productive rapprochement with the larger humanities fields.”

The message here is that humanities students are here for the humanities, so no matter how digitally literate they are, we should never drape DH in so much techno-talk and futurist hype that we scare those students away or lead them to believe that the heart of the humanities–the kernel of their passion–is being swamped with tools, methodologies, and media-studies theorizing. We should regard DH as a continuation and enrichment of the humanities.

And going back to Kelly, I recognize that I have an obligation to maintain this dialogue beyond the parameters of this class. One thing I intend to do is to link this blog to my own website and leave it there–hopefully, the knowledge that I’m making this public will motivate me to log in from time to time when something has struck me or I have something to articulate. I want to avoid the fate he illustrates in Chapter 4:

That class blogs die at the end of the semester should be no surprise, because students so rarely see any benefit to a class blog beyond the grade they earn in that class.

Who are the public and why can’t they be just like us?

All of our readings for this week were pretty compelling. While reading Gelfand’s article on archival exhibits, I couldn’t help but reflect on the fact that I completed my MLIS in Archives and Records Management only a decade ago, yet I don’t recall studying, discussing, or even considering the possibility of incorporating exhibitions and curating into the profession I thought I was preparing for. I’m really out of the loop (I allowed my student-level membership in the Society of American Archivists to expire right around the time I completed my studies and recognized that I was already embarked on a career in public librarianship), so I have no idea if things are any better now.

The reading which really resonated with me, however, was Sheila Brennan’s “The Public is Dead, Long Live the Public.” Given that I have been a public librarian for a little over a decade, the issues Brennan addresses are particularly relevant to how I currently spend my office hours and earn a living. Public libraries presume to serve the entire public without making any distinction as to worthiness or entitlement (which often means, in practice, explaining to nervous parents that, no, we won’t be asking the smelly homeless man in periodicals to leave the building); therefore, public librarians often carry on our work, both in our facilities and through our online presence, as if who “the public” is were self-evident: it’s EVERYBODY. As a result, we often put little work into determining which members of “the public” actually use the library, and how.

This is an especially important issue online. Studies and surveys have shown that library websites are used for basic information (hours, locations, phone numbers, program listings) and for requesting and renewing items for checkout. What they are NOT used for is as portals for online research. Although our subscription databases get some use, they don’t get nearly enough. Part of the reason is that our websites reflect the same outdated modes of information searching that the architecture of our older buildings sometimes reflects: separate desks for circulation and reference, and “pages” moving through the stacks pushing cartloads of books which they are allowed to re-shelve even though they are somehow not trusted to direct patrons toward those same books. In a world of Google and Wikipedia, we still insist on maintaining gatekeeper status both in form and in function. Our web presence reflects this: too much verbiage, too much jargon, too many hoops, all justified by the fading belief that our information is “better” than that which is much easier and more convenient to find.

I, personally, love libraries. I love archives. I love browsing through JSTOR, for that matter. But I also know that “the public” isn’t some amorphous mass of undifferentiated outsiders who need to be taught how to search for information the “correct” way. That doesn’t mean I believe we should lower our standards or “dumb it down.” But I do recognize that we have to first, metaphorically, get “the public” to cross the threshold and then STAY HERE for a little bit.

Bringing the Archives along for the ride

Before I say anything else, there’s something I must get off my chest: I cannot abide Nicholson Baker. Years ago, when I was in library school, I heard about, and then read, “Double Fold,” his seemingly well-researched but actually thoughtless attack on the ways in which libraries and archives were trying to deal with the physical and logistical challenges of newspaper back-files, outdated card catalogs, and superfluous monographs. His attack on my then-new profession was superficially grounded in research, but dig a little deeper and it became clear that the man simply didn’t understand the scope or scale of the physical holdings which archives and libraries would have been tasked with keeping, had he gotten his way.

A few years later, when I was interested in the Yugoslav Wars of the 1990s and some of the attendant issues, I read his pacifist history of World War II, “Human Smoke.” What a moral abomination that book is. I find pacifism troubling on a philosophical basis, but Baker just made it too easy: the man wrote a book in which Adolf Hitler was the passive victim of Winston Churchill, and in which the Holocaust was practically forced on a hapless Third Reich by the vengeful, bloodthirsty Allies. I was interested in Baker’s arguments because they echoed those of so many apologists for Serbian ultra-nationalism in Croatia, Bosnia, and Kosovo, but I had no idea how morally corroded his worldview is.

So, suffice it to say–reading anything in which Baker is being quoted as any kind of authority is tough going for me.

Beyond that…

The readings for this week were very provocative. The first piece, by Manovich, stakes out the territory to be explored further, but beyond a general sense that databases are a new kind of discourse in competition with, or rather as an alternative to, narrative, it was hard for me to get a sense of how this idea would play out in our field. I don’t think that is the fault of the author; rather, even as I followed his arguments, I was still looking at the issue through the idea of the database (broadly conceived) as an entity distant from the historical profession. I would also say that I saw it as static. Obviously, I understand that databases are dynamic within their own parameters, by which I mean that they are tools capable of producing ever-changing outputs. However, I still thought of them as a distinct thing to which the user turns; as a “fixed point” on a scholarly journey, so to speak.

Subsequent readings broadened my horizons in two ways; to begin with–and as alluded to above–I began to see the database as a dynamic component of the research process, rather than a tool which exists apart from that process. Secondly, it became clear from the readings that we were being asked to think of the “database” as part of a larger institution, what I would call a ‘reference infrastructure’ that also includes libraries and archives.

This blurring of the line between databases, libraries, and archives makes particular sense when you consider that when we talk about databases, we might be talking about tools created by or for researchers for the purpose of storing, processing, and analyzing data/information, but we might also be talking about products like JSTOR, which provide remote, searchable access to professional publications and secondary literature. The latter essentially serves as a digital, “virtual” library; BUT it can also function as a research tool itself, given the capability of full-text searching across its holdings.

The report on historians’ research practices by Rutner and Schonfeld really brought this home to me; many of the findings were compelling and a lot of the quotes were intriguing, but my main takeaway was a broader one: how, by and large, the scholars seemed to regard the world of research as one big process that mixes digital, analog, on-site, and face-to-face reference and research methodologies without a great deal of friction. The fact that libraries are still important sources for secondary research co-existed with the fact that many of the same researchers will, say, use Google Books to find citations even in monographs they own a paper copy of. The authors could very well have subtitled the study “Whatever Works.” That sort of attitude reflects a dynamic, ongoing research process which does not make hard distinctions between sources of information, or between the work and the archive/database.

Finally, Lara Putnam’s piece on digital searching and transnational history brought up an interesting point: the rise of digitized sources has made the research and writing of microhistories more possible and more likely, but the initial discoveries of many of these potential microhistories could only be made at the macro level, which, again, was made possible by the same resources. The deep, granular level of detail is, of course, NOT transnational in scope, but it was the transnational scope of the resources she was using that allowed her to make a connection, and therefore gain a new insight, that a narrower point of view would have missed. And rather than fret over this “contradiction,” Putnam embraced it, and recognized this inherent polarity as a new vista made possible by digital research.

The archive (and the library, and the database), in other words, is no longer just a place we go to learn more ABOUT something; it’s a place we can go looking for new “abouts.”

Are we overthinking this, or not thinking about it enough?

Our readings for this week focus on issues both practical and theoretical regarding various facets of the digitization of “analog” documents, whether text, image, or other media (although audio really didn’t get much ink, make that pixels, this week). From my perspective, there were two broad sets of questions:

1) Issues both technical and even theoretical concerning how digitization is carried out (whether through OCR, ‘crowd-sourcing’, or some other method);

2) Methodological questions and concerns stemming from the fact that a digitized document/artifact is both a distinct object in its own right, rather than merely a facsimile of the ‘real’ one, and a document which will be viewed, used, or experienced differently than the original (which might have been a newspaper, a microfilm OF a newspaper, etc.).

These are intriguing questions. It is not surprising that these readings took us deep into the world of media studies, as many of these issues relate to the idea of a document as being inextricably part of the medium in which it is produced, stored, distributed, and used. Other questions relate to technical and administrative concerns: what sort of parameters and control mechanisms need to be in place in order to execute a successful digitization process driven by optical character recognition (OCR)? How does one manage crowd-sourced contributions to a project, which appears to be a balancing act between maintaining academic standards and honoring the enthusiasm and willingness of members of the larger public to participate?
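(To anchor that first question in something concrete: the mechanical core of an OCR pass is a single function call, and everything the readings worry about happens around it. A minimal sketch using R’s tesseract package; the filename is a placeholder.)

```r
# A bare-bones OCR pass with the tesseract package. "page_scan.png" is a
# placeholder filename. The quality controls the readings describe (image
# pre-processing, confidence thresholds, human review) wrap around this call.
library(tesseract)

eng  <- tesseract("eng")                    # English-language model
text <- ocr("page_scan.png", engine = eng)  # raw, uncorrected transcription
cat(text)

# Per-word confidence scores are one such "control mechanism": flag
# low-confidence words for human review instead of trusting the engine.
words <- ocr_data("page_scan.png", engine = eng)
head(words[words$confidence < 75, ])
```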

But while intriguing, are all of these questions as urgent as the authors believe them to be? Or are they merely cautionary? I was of two minds while reading, say, Ian Milligan’s piece on the (selective) digitization of certain major Canadian newspapers. It’s not that I think he’s wrong, mind you. His insight that academics might be unthinkingly over-relying on the newspapers and sources that digitization has made readily available, without considering the bias (or at least the skewed perspective) that doing so entails, is a good one. However, I don’t think we need to set off alarm bells; we can acknowledge the issue and move on. There has never been a time when historians have had ideal access to sources; the rise of digitization will likely continue to impose and create new asymmetries, but beyond acknowledging that, I don’t know what else scholars should do.

Likewise with the concern Nicole Maurantonio expresses regarding the text-only presentation of newspapers in the Newsbank database. I am aware of Newsbank’s limitations (as a librarian responsible for managing database subscriptions for Prince William County, I have first-hand experience with how they run their business: on a shoestring, in short), and Maurantonio certainly has a point that their approach is less than ideal. The gap between “material” and “content” is particularly wide here. The lack of visual information (not only the pictures themselves, but also the layout and the overall impact of the way the Philadelphia newspapers presented the MOVE story) certainly would hinder interpretation. So I accept her argument. But, again, by her own admission she was only able to access the original images because she lives in Philadelphia, where the physical originals are housed. In other words, what digitization takes away with one hand, it gives back with the other: prior to digitization, most people far from Philly would not have been able to research this story at all.

I am not trying to be cynical or reductionist; I truly appreciate such concerns on a theoretical level. But to my mind these are concerns to be honored and kept in mind, not ones that should become paralyzing. The perfect should not be the enemy of the good, to dust off a hoary (if apt) cliche. For “deep” reading, yes, these issues are important. However, I’d argue that for every scholar for whom the materiality of a digitized newspaper might present a barrier to nuanced reading and interpretation, there will be at least a dozen others for whom a “good enough” transcription of the content (Platonic or otherwise) will be good enough.