Moveable Types of Information Literacy: Emerging Electronic Genres and the Deconstruction of Peer Review
Vannevar Bush, writing in 1945, lamented that the volume of scientific knowledge being published each year forced researchers to spend unprecedented time and energy searching for relevant information (and choosing what to ignore). His solution, the Memex, was a photocopier crossed with a microfilm storage and access device. A Memex user would theoretically create links between documents, annotate those links, add those annotations to the filing system, and share the resulting “trails” with other researchers. In some sense, what Vannevar Bush was trying to accomplish with his annotated “trails” has been implemented through the weblog genre (specifically, the research blog or “edublog”).
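As a rough illustration (a toy Python sketch of my own devising, since Bush of course specified no data structures), a “trail” can be modeled as an ordered sequence of annotated links that a researcher files and shares as a single unit:

```python
# Toy model of a Memex-style "trail": an ordered sequence of annotated
# links between documents, shareable as a unit. Purely illustrative.

from dataclasses import dataclass, field

@dataclass
class Link:
    source: str      # identifier of the document linked from
    target: str      # identifier of the document linked to
    annotation: str  # the researcher's note on why the link matters

@dataclass
class Trail:
    owner: str
    links: list[Link] = field(default_factory=list)

    def add(self, source: str, target: str, annotation: str) -> None:
        """File a new annotated link at the end of the trail."""
        self.links.append(Link(source, target, annotation))

# A researcher builds a trail, then shares the whole object with a colleague.
trail = Trail(owner="V. Bush")
trail.add("memex-essay", "microfilm-specs", "storage medium assumed in 1945")
for link in trail.links:
    print(f"{link.source} -> {link.target}: {link.annotation}")
```

The point of the sketch is that the annotations travel with the links: the trail itself, not any single document, is the shareable scholarly artifact.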
Traditional textual scholarship aims to construct a specific, ideal, “correct” text. But computer science — the discipline that generates the technology that drives (or hampers) information literacy — aims instead for abstraction. In the open source software development model, particularly as described in Eric Raymond’s “The Cathedral and the Bazaar,” individual programmers contribute their labor freely to a common project whose results are then made available to the general public at no cost.
Given the financial pressures journal publishers exert upon libraries, and the brewing rebellion against what some activists characterize as a cabal of print publishers, some emerging electronic forms have radically altered the dynamics of the scholar-publisher relationship, without necessarily sacrificing the filtering value provided by peer review. Electronic journals such as First Monday offer cutting-edge, peer-reviewed scholarship on a timeline of weeks. Even more radical is the Wiki, a form of electronic authorship that decentralizes authority and encourages all readers to annotate, expand, edit, or completely revise a common text.
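To make the Wiki’s “everyone may revise a common text” model concrete, here is a minimal sketch (an invented structure, not how any particular wiki engine actually works): every edit appends a new revision rather than overwriting the old text, so the full editing history stays visible.

```python
# Minimal sketch of a wiki's revision model: anyone may revise a page,
# and every revision is kept, so review of the text remains visible.
# (Invented for illustration; real wiki engines differ.)

from datetime import datetime, timezone

pages: dict[str, list[dict]] = {}  # page title -> list of revisions

def edit(title: str, editor: str, text: str) -> None:
    """Append a new revision rather than overwrite the old text."""
    pages.setdefault(title, []).append({
        "editor": editor,
        "text": text,
        "when": datetime.now(timezone.utc),
    })

def current(title: str) -> str:
    """The 'live' text is simply the most recent revision."""
    return pages[title][-1]["text"]

edit("Peer review", "reader-a", "Peer review happens before publication.")
edit("Peer review", "reader-b", "On a wiki, review happens after publication.")
print(current("Peer review"))                        # reader-b's revision
print(len(pages["Peer review"]), "revisions kept")   # 2
```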
In such genres, peer review (in the form of inbound links, e-mailed or posted corrections and refutations, revision, or even deletion) is expected to happen after a text is published, thus making the process of peer review visible, instead of merely the product. Popularly edited texts online typically summarize general knowledge rather than offer a forum for the presentation of new knowledge or controversial opinion; further, emerging electronic genres also typically over-represent the opinions of technorati who manipulate the system (an effect that inspired the term “Googlewashing,” and one illustrated by the recent online prank that now causes a Google search for “miserable failure” to point first to George Bush’s official biography on the White House web site). Developing strategies to compensate for these anomalous effects is a vital skill for 21st-century information literacy.
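The mechanics behind that prank are easy to sketch. In the toy ranking below (my own drastic simplification; Google’s actual algorithm is far more involved), a page that attracts many inbound links carrying a given anchor phrase rises to the top for that phrase, which is exactly what a coordinated group of bloggers can exploit:

```python
# Toy link-based ranking: count inbound links whose anchor text matches
# the query. A deliberate simplification (not Google's actual algorithm),
# but enough to show why coordinated linking works.

from collections import Counter

# (source page, anchor text, target page) -- invented example data
links = [
    ("blog1", "miserable failure", "whitehouse.gov/bio"),
    ("blog2", "miserable failure", "whitehouse.gov/bio"),
    ("blog3", "miserable failure", "whitehouse.gov/bio"),
    ("news1", "miserable failure", "some-movie-review"),
]

def rank(query: str) -> list[tuple[str, int]]:
    """Rank targets by how many inbound links use the query as anchor text."""
    votes = Counter(target for _, anchor, target in links if anchor == query)
    return votes.most_common()

print(rank("miserable failure"))
# [('whitehouse.gov/bio', 3), ('some-movie-review', 1)]
# Three cooperating bloggers outvote everyone else: a "Googlebomb."
```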
Will writes: “…the much older code is written in a completely different language than the modern ones we’re used to.”
Much the same can be said of literature! :)
And Will, don’t the annually updated textbooks you have to buy, with updates on the publisher’s website, come pretty close to the scenario you describe?
Dr. Jerz, to be honest, I’m not entirely sure exactly what you’re trying to get across about programming, so I’m not sure how much of a help I can be. And, as you’ve noted many times, it’s damn hard to read a longer piece of text on a computer screen.
First, I think you are both right. Consider this – a computer program starts out very specific and exact about what it can do. A program that runs already has very specific and exact behavior, so “exact and specific” is the one thing we programmers get for free. We aim for abstraction because it is exactly what we are missing – it would be awfully nice if we didn’t have to be so exact and specific all the time. Writing, on the other hand, starts out very abstract and fluid – so much so that without discipline on the part of the writer, the writing will ramble off one way, then another, then another, ending up with nothing that means anything. Writers aim for specific, correct, and ideal text precisely because they start off with so much abstraction.
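To make that concrete, here’s a toy Python sketch (an invented example, nothing from a real project) of the move from specific to abstract:

```python
# Toy example of the move from "exact and specific" to abstract.
# (Invented purely for illustration.)

# A program starts out painfully specific: it does exactly one thing.
def double_three_numbers(a, b, c):
    return [a * 2, b * 2, c * 2]

# We abstract because we're tired of being that specific:
# now any list, any operation.
def apply_to_all(items, operation):
    return [operation(x) for x in items]

print(double_three_numbers(1, 2, 3))           # [2, 4, 6]
print(apply_to_all([1, 2, 3], lambda x: x * 2))  # same result, less specificity
print(apply_to_all(["a", "b"], str.upper))       # the abstraction pays off
```

The second function is what we’re reaching for: once the operation and the data are parameters, we never have to write the painfully specific version again.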
Now, programmers do, as Dennis mentioned, sometimes aim for exact and ideal. And writers certainly do aim for a certain level of abstraction.
That’s all I have to say specifically to what you said. I think the medium also plays a large part in how things are looked at. Writers aim for exact, specific, and ideal text because once it is printed, it is impossible to make changes – so wouldn’t it be great if they could just get it right? While you might think the same would be true of computer programs, programs have upgrades. Can you imagine asking readers of your books to upgrade their books every few years? ;-)
Dennis points out that programmers write more new code than they review old code, while authors review more old works than they write new ones. But I think this has more to do with the age of the fields. Writing has centuries of previous work behind it; programming has only decades. PLUS, the much older code is written in a completely different language than the modern ones we’re used to.
I could go on, but I’m not sure how useful what I’m writing is, and of course I have other things to do. ;-)
Point taken — I could have defined those terms more clearly; I had a certain “lovers of dusty tomes” thing in mind when I meant “traditional textual scholarship,” colored perhaps by the po-mo sendup of new historicism that Nabokov put in Pale Fire. And obviously, hypertext scholarship has a different approach to this issue. I was thinking more in terms of textual efforts to use an author’s later drafts to reconstruct earlier drafts, reconstructing the ur-biblical text, the lost Canterbury Tales manuscript, etc. That’s part of a metaphor I was developing in contrast to the programmer’s technique of eliminating errors, inefficiencies, and redundancies in order to arrive at a set of coding instructions that, for instance, sorts a list in a minimum number of steps, using a minimum amount of memory, or in a minimum amount of time. And, to echo your point about terminology, I am thinking of software rather than hardware. (Will, if you’re following along, can you help me out, or at least point out what I’m botching in my attempt to explain programming to a fellow humanist?)
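To show the kind of code I have in mind (a standard textbook insertion sort, offered only as an example of minimizing steps and memory, not anything of my own invention):

```python
# Illustration of the programmer's goal described above: sort a list
# in place, using no extra memory beyond the list itself.

def insertion_sort(items: list) -> None:
    """Sort `items` in place; O(1) extra memory, O(n^2) worst-case steps."""
    for i in range(1, len(items)):
        key = items[i]
        j = i - 1
        # Shift larger elements right instead of allocating a new list.
        while j >= 0 and items[j] > key:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = key

data = [5, 2, 4, 1, 3]
insertion_sort(data)
print(data)  # [1, 2, 3, 4, 5]
```

There is no single “correct” text here to reconstruct — only a behavior to achieve with fewer steps and less memory.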
I’d surmise that there are probably far more CS people professionally involved in the creation of new code than in the historical/cultural reconstruction and re-interpretation of existing code. Conversely, I surmise that there are probably far more literary professionals gainfully employed in the investigation of existing literature than in the production of new literature.
And yes, by releasing a dense chunk of prose onto the Internet without first running it past an expert, I risk exposing my ignorance about certain assumptions; but hey, this post has attracted a comment, woo hoo! More seriously, the conversations and new points of inquiry that open up along the way will, I hope, be worthwhile.
I think this is a super topic. I might take issue with the wording of this claim: “Traditional textual scholarship aims to construct a specific, ideal, ‘correct’ text. But computer science — the discipline that generates the technology that drives (or hampers) information literacy — aims instead for abstraction”. When I first read this, I thought it was counter-intuitive and that, indeed, the opposite is the case. I don’t think textual scholarship aims to construct a specific, ideal, ‘correct’ text. Scholarship itself can deconstruct a text — whether peer reviewed or not. So maybe it’s a matter of your terminology. Perhaps editorial conventions do this, or the function (in the Foucauldian sense) of “authorship” does this, but not “scholarship.” And certainly computer science on the face of it is about anything BUT abstraction (which is a human cognition, not a matter of programming). In fact, isn’t the converse true: that instruments like computer tech are inherently material, non-abstract, and programmatic, whereas scholarship is a somewhat fluid subject under perpetual revision?

In any case, your point about “peer review made visible” is, of course, the crux of things and sounds like a great topic to explore. There’s more of a cultural “power move” at work in all this, in my view, than anything substantially unique about information literacy, but perhaps your terms just need clearer definitions for this argument to fly. It’s all about discourse communities for me, in the end: here there’s open access to conversations that have heretofore occurred behind closed doors. This disempowers the cultural elite while at the same time empowering the masses; but what is at stake in this for publishing?