Context for Hayles, My Mother Was a Computer

Hayles is an established authority on a humanities-centered approach to human-computer interactions, and My Mother Was a Computer (2005) is her third book on the topic.

At times she writes with the expectation that her readers already know foundational topics that she introduced at greater length in her previous books.

In the mid-1990s, when the first graphical web browsers were released, early HTML authors began to learn, mostly through trial and error, how embedded images and hyperlinks introduced new kinds of reading and writing experiences. At that time, even laptops had to be wired to connect to the internet, so going online was a deliberate choice, and since it was all new to us, we were conscious of how “being online” challenged our perceptions of the world. There was much talk of “virtual reality” as a kind of digital version of the offline world.

Recognizing the Blurred Boundaries between Online and Offline Culture

Hayles was one of the early humanists who noticed that greater access to the internet was blurring the lines between the online and offline world.

To anyone born around 1990, it is a truth universally acknowledged that the online world is something you consult regularly and often take along with you, in order to enhance and shape and give meaning to your daily experiences; but in the early 1990s this was a challenging concept for humanists who saw technology as dehumanizing and disembodying us.

Hayles strongly argues that we cannot understand technology unless we recognize that we experience technology through the subjective, emotional, physical vehicle of our senses; likewise, we cannot understand human experience if we do not explore how the tools we create — including spoken language, the written word, the printing press, and computer code — define, shape, and form our interactions with the world.

Steve Mann of the University of Toronto has, since the late 1970s, experimented with wearable computers. (Image: Wikipedia)

Just last week, Google employees were spotted walking around wearing ubernerdy square wire glasses like these…

Then this video appeared on YouTube.

Whatever our reaction to the version of life Google presents in the video, we should remember that just 10 years ago, only on-duty police officers (with radios), on-call doctors (with one-way pagers), and Starfleet officers cared about personal communicators.

In 2012, it would be somewhat bizarre to schedule your life around access to a pay phone, but that was mainstream behavior just 10 years ago.

(Image: PoorlyDressed)

Hayles starts with the assumption that technology has so enmeshed itself with our daily lives that those of us who think deep thoughts about words are doing ourselves a disservice if we focus mainly on how the internet changes the act of reading and writing (a concept known as “remediation”). She prefers to use the term “intermediation,” a term that emphasizes the blurred boundaries between new media and old media. For instance, the printed books of today are written, edited, typeset, marketed, distributed, and purchased with the aid of computers, so we cannot usefully discuss book culture without also considering e-book culture.

Analog vs Digital

Hayles explores what she sees as a natural progression from speech, to writing, to code, as humans seek more effective methods for making sense of the world. Just as the invention of writing did not destroy speech, Hayles does not present code as superior to writing or speech; instead, she calls our attention to the complex interplay between the analog world (a subjective field of continuous curves and lines — think of the sweeping motion of the second hand on an old-fashioned clock, or the mercury in a thermometer, or the tiny blobs of ink adhering to tiny fibers of paper in an old-fashioned photograph) and the digital world (a world of precise, divided values — zero and one, true and false, a display that reads precisely 98.6 or 98.7 degrees, the blocks of precise color that you see when you zoom in on a digital image).

An analog clock tells the time through the sweeping circular motions of clock hands. The hour hand is moving all the time, just very slowly; we can barely perceive the motion of the minute hand, but it too is moving. We can digitize the analog motion of clock hands by reporting a precise number (e.g. 12:08) that represents a discrete state; each hand’s position is reported not as the smooth motion of a curve in space, but as an unchanging, standardized “digit” (a word that also means a finger or toe, reminding us of the physiological reason why we have a base 10 number system).
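To make that analog-to-digital jump concrete, here is a minimal Python sketch (my illustration, not Hayles’s): it takes a continuously varying quantity, how far through the day the hands have swept, and chops it into a discrete, standardized reading.

```python
# Minimal sketch (not from Hayles): "digitizing" the continuous sweep of a
# clock's hands by rounding a smoothly varying value into discrete digits.

def digitize_time(fraction_of_day: float) -> str:
    """Convert a continuous fraction of a day (0.0 to 1.0) into a discrete HH:MM reading."""
    total_minutes = int(fraction_of_day * 24 * 60)  # chop the smooth sweep into whole minutes
    hours, minutes = divmod(total_minutes, 60)
    return f"{hours:02d}:{minutes:02d}"

# The analog value 0.50556 (a point somewhere in the hands' continuous sweep)
# becomes the unchanging, standardized reading "12:08".
print(digitize_time(0.50556))
```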

Since computers deal strictly with zeros and ones, you might think that digitization is something that pertains only to computers, but Hayles points out that long before computers came around, humans started chopping up streams of experiences and actions into discrete chunks, a process she calls “digitization.”

Just as we might have divided up a clock face into 10 or 24 pie slices instead of 12, we might very well — and other cultures certainly do — divide up human sounds into different discrete chunks. To my ears, when someone from the South asks for “tape” I hear the word “type,” so the distinction between what I recognize as “long A” and what I recognize as “long I” is not universal. Likewise, the Japanese language features a single “liquid consonant” sound that is properly performed with the mouth positioned somewhere between the way native English speakers would produce an “L” and an “R.”

The Computational Universe

Just as complex clockwork gave 18th-century philosophers the metaphor of a clockwork universe, a useful way to explore the implications of scientific discoveries that were neatly compartmentalizing and defining the observable universe, Hayles introduces the concept of the “Computational Universe” — the viewpoint that the universe acts as a computational engine, hosting complex processes that operate according to discernible rules that include feedback loops. Computers are far more complicated machines than any clock, and the programs we write for computers have their own beautiful complexity and beautiful utility.

Hayles says she’s not interested in discerning whether the universe really IS a computational engine, or whether that’s just another useful metaphor; either way, she points out that humans are discovering incredibly complex processes in Nature that are impossible to define according to even the most complex mathematical equations, but that are possible to describe with a fairly simple set of rules — what she calls “code.”
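A classic toy example of this rules-versus-equations idea (my illustration, not drawn from Hayles) is an elementary cellular automaton: a single row of cells, each on or off, updated by a lookup table with only eight entries. The sketch below uses Rule 110, a standard example whose tiny rule set produces intricate, hard-to-summarize patterns.

```python
# A simple rule set producing complex behavior: an elementary cellular automaton.
# (Illustrative sketch, not from Hayles; Rule 110 is a textbook example.)

RULE_110 = {  # new cell value as a function of (left, self, right) neighbors
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply the rule to every cell simultaneously (the row wraps around at the edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start with a single live cell and watch structure emerge from eight tiny rules.
row = [0] * 60 + [1] + [0] * 19
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```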

DNA is a strand of four different bonded molecules that arrange themselves in such a way as to store huge amounts of data, to retrieve and act on that data when certain conditions exist, and to make copies of itself, including making a copy of the instructions for using, and duplicating, DNA. (Image: Wikipedia)

Emergent Behavior

Numerous times, Hayles uses the term “emergent,” which describes complex behavior that results from the application of a simple set of rules.

In biology, we know that an individual ant is a very simple organism, while the colony is capable of acting in very complex ways. Only the queen is capable of reproducing, but she cannot get food on her own, and she does not make decisions or give orders the way a human regent would. How do ants know when to forage, when to fight off attackers, when their queen is ready to reproduce or dying, and how do they know what to do in each of those cases?

The individual ant is so simple it is incapable of learning or remembering anything; all ants are, however, programmed with certain behaviors. For instance, if an ant is foraging for food, it secretes a certain pheromone; if it encounters lots of other ants who are also secreting the “foraging” pheromone, chances are that if there is any food in the area, the other ants will get it, so it switches to another job — cleaning the nest, or digging new tunnels, etc.

If, let’s say, all the ants that are currently out foraging get washed away in a flood, then the surviving ants will notice the lack of the “foraging” pheromone in their environment, and even before anyone starts getting hungry, the right proportion of the surviving ants will switch on their “foraging” mode.

Each individual ant is pretty dumb and clueless; there is no way we can predict, based on the environment, what any individual ant will do; however, thanks to the rules of anty behavior that are hard-wired into each rather dumb and clueless ant, the colony can thrive. The complex communal behavior emerges from the very simple set of behaviors that define anthood.
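Here is a toy Python simulation of the pheromone rule described above (the sample size, thresholds, and colony size are my illustrative assumptions, not figures from Hayles or from entomology): each ant consults only a few randomly encountered neighbors, yet a stable share of foragers re-emerges after the flood.

```python
import random

# Toy model of the pheromone rule described above. Each ant follows one dumb,
# local rule: sniff a handful of nearby ants; if foragers seem scarce, start
# foraging; if foragers seem plentiful, do other work. No ant sees the whole colony.

SAMPLE_SIZE = 10      # how many neighbors an ant "smells" when it re-decides
SCARCE = 0.2          # below this observed foraging rate, switch to foraging
PLENTIFUL = 0.4       # above this observed foraging rate, switch to other work
ACTIVITY = 0.2        # fraction of ants that happen to re-decide each turn

def step(colony):
    """One turn: a random subset of ants re-decides its job from local samples."""
    new_colony = list(colony)
    for i in range(len(colony)):
        if random.random() > ACTIVITY:
            continue  # this ant just keeps doing whatever it was doing
        observed = sum(random.sample(colony, SAMPLE_SIZE)) / SAMPLE_SIZE
        if observed < SCARCE:
            new_colony[i] = 1      # too few foragers around: go forage
        elif observed > PLENTIFUL:
            new_colony[i] = 0      # plenty of foragers: clean, dig, etc.
    return new_colony

# A "flood" wipes out every forager, yet a stable share of foragers re-emerges
# within a handful of turns, purely from local decisions.
colony = [0] * 1000   # 1 = foraging, 0 = other work
for turn in range(10):
    print(f"turn {turn}: {sum(colony) / len(colony):.0%} foraging")
    colony = step(colony)
```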

Conway’s Game of Life (Cellular Automaton)

A classic example of how a set of simple rules can result in complex behavior is called The Game of Life. (Not the board game with the rainbow spinner.)

Imagine an infinite two-dimensional grid. Each square can be empty (“dead”) or occupied (“live”). Each turn, we check every square in the grid against these rules:

  • If an empty space has exactly three living neighbors, it comes to life (“birth”).
  • If a living cell has two or three living neighbors, it stays alive (“survival”).
  • If a living cell has fewer than two living neighbors (loneliness) or more than three (overcrowding), it disappears (“death”).

That’s it…
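For readers who would rather read the rules as code than click through a demo, here is a minimal Python sketch of those three rules (my own implementation, not tied to any of the versions linked below); it runs the classic “glider” pattern on a small wrapping grid rather than an infinite one.

```python
# Minimal sketch of Conway's Game of Life (my own illustration; the linked demos
# implement the same three rules). The grid wraps around at the edges rather
# than extending infinitely, which is enough to watch simple patterns evolve.

def step(grid):
    """Apply the birth/survival/death rules to every cell simultaneously."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))

    new_grid = []
    for r in range(rows):
        new_row = []
        for c in range(cols):
            n = live_neighbors(r, c)
            if grid[r][c]:
                new_row.append(1 if n in (2, 3) else 0)   # survival or death
            else:
                new_row.append(1 if n == 3 else 0)        # birth
        new_grid.append(new_row)
    return new_grid

# A "glider": five live cells that crawl diagonally across the grid forever.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1

for _ in range(4):
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))
    print()
    grid = step(grid)
```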

Try the Game of Life (Mochimedia)

(other versions: a Java version courtesy of math.com; or an iOS version of Conway’s Life, from the App Store).

Saussure on Speech, Derrida on Writing, Hayles on Code

Hayles, being an English literature professor, draws heavily on literary theory to explain her take on code. Here’s a quick rundown of the major points.

Ferdinand de Saussure (Swiss, 1857-1913) is frequently invoked by linguists, philosophers, and cultural theorists for his development of semiotics — the study of how signs, including spoken words, represent concepts.

Hayles gives a good account in the section “Saussure and Material Matters,” where she quotes him as saying “A language and its written form constitute two separate systems of signs. The sole reason for the existence of the latter is to represent the former.”

Thus, Saussure sees written language as secondary: unnecessary in itself, and impossible without spoken language. Indeed, until the invention of the printing press, the written word was so labor-intensive that writing served more as a cold archive for records, a place to consult what you might otherwise forget; medieval monks would encounter the written word mostly indirectly, as when one monk read aloud from an important text during meals while the rest of the community listened.

A famous painting by René Magritte (its caption translates as “This is not a pipe”) reminds us that there is a difference between a pipe and a painting of a pipe.

This Is Not A Pipe by René Magritte

The sequence of letters p-i-p-e does not bring about the physical existence of the pipe; nor does any physical property of a pipe contribute to our communal decision to connect the word “pipe” to whatever object happens to carry that name.

In order to create meaning, humans make the connection between the signifier (in this case, the word “pipe”) and the signified (the concept of a pipe that the word calls to mind).

Some words, like “buzz” or “crash,” do have some relationship to the objects they signify, but speakers of different languages choose different combinations of sounds to represent the same concepts.

Ultimately, Saussure holds that language is arbitrary, because the complex meaning that we ascribe to the signifier/signified pairs is artificial, holding meaning only because enough people share the same understanding of those pairs.

This reminds us that language has no independent meaning. The philosophical and literary theory school called “structuralism” looks closely at how we apply meanings and values to words, gestures, buildings, and images, and how those meanings are interpreted and subverted (frequently by artists, and occasionally by nature).

Bird on No Birds Sign

 

Derrida and the Deferred Meaning of Code

While Saussure saw writing as a byproduct of speech, Jacques Derrida (French, 1930-2004) sees the human ability to recognize meaning as a kind of writing, something that was necessary before speech; thus for Derrida, we had to be able to mentally inscribe meanings before we could utter them in any form.

Derrida’s approach has the merit of unraveling the neat packages that Saussure seems to give us when he (Saussure) sees writing as springing from speech. Yet Derrida insists that every meaning conveyed by a spoken utterance must itself depend on some prior meaning. This can be infuriating because there seems to be no origin to any idea; because we are surrounded by ideas, logic tells us they must have come from somewhere, which suggests that ideas must have some independent meaning before anyone can utter them. Unlike Saussure, who is content to say that the pairing of signifier and signified is arbitrary, Derrida is not satisfied with the fact that the pairs exist, and (as Hayles puts it) “Derrida transforms difference into différance, a neologism suggesting that meanings generated by differential relations are endlessly deferred.”

In other words, to make his point that we can’t really know the origins of any words, he made up a brand new word to describe the unknowability of words.

Derrida is most commonly associated with deconstructionism — another word Derrida invented — an extension of Saussure’s semiotics that insists “there is nothing outside the text,” by which he means that when we try to interpret and understand the meaning of a text (or really any human artifact), all the words and concepts and ideas we can compare it to, or define it by, are themselves texts that are open to (and insist upon) further interpretation.

Critical theories are not just names and facts to memorize; they are tools that we can use. You don’t really know what it’s like to dance a waltz or skate on the ice unless you actually DO it, so let’s take a moment to DO deconstructionism.

Let’s “deconstruct” Batman. Not “Batman” in general, but his portrayal in a particular incarnation, such as a specific movie or a specific comic book. We would explore how the dialogue, artwork, action, and tone challenge our understanding of traditional categories such as “good vs. evil” or “hero vs. villain.” We might demonstrate that the character Batman is so scarred by the murder of his parents that he suffers a twisted compulsion to protect Gotham City’s citizens by any means necessary, often including illegal, violent acts of vigilantism. To what extent can we blame the criminals, who were themselves scarred by their environments… were they not compelled and twisted by their own circumstances to carry out their own kinds of violent acts? A successful artistic portrayal of Batman must, of necessity, leave this good/evil tension unresolved, because there would be no Batman if it were not for the interaction between the categories of innocent victim and heartless criminal, both in Batman’s environment and in his own psyche.

Literary criticism already uses the word “code” in the sense of values that are “encoded” into a genre, rules of gender that “program” a young woman’s behavior, “codes” of honor that define male culture, etc. So to some extent, Hayles may be said to be borrowing existing uses of “code,” the way a century ago we saw new machine-age metaphors like “a loose screw” or “mental breakdown.”

But if we apply deconstruction to the word “code,” we find that it’s the same word from which we get “codex,” which in turn means “tree trunk” or “block of wood.”  (The first bound books were actually hinged wooden tablets. And today, we still call pages “leaves.”)

So the word “code,” when applied to a series of procedures for a machine to follow, is actually a metaphor, chosen to echo the “moral codes” of religious books and “codes of law” that defined civil society.  Thus, “code” in the sense that literary critics use it, and “code” in the sense that programmers use it, are both applied metaphorically — by different communities, for different purposes, and semiotics reminds us that any signifier/signified pairing is arbitrary and pragmatic, rather than truly definitive. (“This is not code,” we might be tempted to say.)

A Word about Critical Theory

When I was first exposed to Saussure and Derrida as a graduate student, my responses ranged from being amused by the wordplay, to feeling terrified because it seemed so far above my head, to being mad at my professor for not just telling me “the right answer,” to being depressed because all this indeterminacy stuff seemed to be saying that none of the literature I was studying had any real meaning.

But as I’ve taught critical theory over the years, I’ve settled on seeing deconstructionism as a creative, playful, even empowering approach. If we have nothing but words, and words point to no external meaning outside the meanings that we can describe in words, that means all that we have, everything we can name, understand, believe in, reject, or transform, all that we are, we are because of words.

“There is nothing outside the text,” says Derrida. But there are, as our senses and our intellect tell us, many wonderful and amazing and complex things, and we perceive and understand them, not despite words, but because of words.

Deconstructionism has been a favorite method of exploring the “différance” that breaks down the apparent opposition between concepts such as male/female, master/slave, and (as we will see as we keep reading Hayles) human/machine.

 
