Hello world! How does it feel to be posthuman?

Interview with Katherine Hayles by Arjen Mulder, published in The Art of the Accident, 1998.

Arjen Mulder: What is good about accidents?

Katherine Hayles: Accidents are, by definition, events that have not been planned. Often these events lead to annoyance or even catastrophe, but sometimes they open windows on new understandings. Many technical developments came about because of accidents. For example, while a chemist is not watching, a batch of rubber boils over and, under the direct heat of the burner, becomes much harder. The chemist realizes that as a result of this accident, he can use heat deliberately to harden rubber, a process now called vulcanization that is essential for most modern uses of rubber. Or another example: a chemist uses a recipe that doesn't turn out as he expects, yielding a glue that isn't very sticky. Rather than simply label this event an accident, he starts thinking about possible applications for not-very-sticky glue and eventually comes up with Post-it notes. Plans are absolutely essential, of course, but when plans turn out exactly as we expect, we don't learn anything new. Accidents have the capacity to reveal things undreamed-of and paths unknown.

AM: From what point of view can an event be seen as an accident, and from what point of view as an orderly event?

KH: This is an interesting question because it suggests that point of view is essential in deciding whether an event is an "accident" or not. When a plan is in place, and an event happens which has not been anticipated, we call that event an "accident" because it does not coincide with our expectations. From other points of view, the "accident" may well be seen as an orderly event. For example, we now know that rubber hardens under direct heat; what first appeared as an accident has become an expectation. The "accidental" is not so much a fixed category as the boundary between the known and unknown, the expected and the unexpected; the "accidental" happens where waves break on the beach of knowledge. Before a wave breaks, it is part of the undifferentiated mass of things about which we have no knowledge; as it breaks, it comes within the horizon of our experience but has not yet solidified into terra cognita.

AM: So all accidents, whether creative or catastrophic, are only accidents in the eye of the beholder? But now the accident has entered its age of digital reproducibility. How do you reckon (creative) accidents can be planned?

KH: The problem with plans of any kind, of course, is that they are limited by our intentions, and our intentions are limited by what we (already) know. How to intentionally escape intention, without having our escape plans contaminated by precisely that which we are trying to elude? Which is to say, how to open ourselves to what we cannot imagine? John Cage's strategy was to invent "chance operations" - operations that proceeded according to rules he had devised but whose outcomes he could not predict. Moreover, once he had set these rules in play, he would go to astonishing lengths to carry them out precisely, lest his intention re-enter and contaminate the process. In many different ways, he tried to convey the message that the most interesting messages aren't messages at all but noise. As Vilém Flusser said, and Claude Shannon before him, information is what we don't expect.
Artificial life programs follow much the same strategy, and for much the same reasons. Following simple rules, the programs inject chance by using, for example, pseudo-random number generators to replace expected code sequences with unanticipated bits. If artificial life "creatures" (that is, programs that carry out the basic biological functions of reproduction and information coding) are allowed to mutate in this way and are placed in an (artificial) environment that exerts selective pressures on them, the result can be an explosion of creative adaptation. In Tom Ray's "Tierra" program, for example, an entire ecology developed from an original 82-byte "ancestor." The mutated creatures included parasites that preyed on the ancestor by running their own reproductive programs on the ancestor's time, so to speak, and hyperparasites that preyed on the parasites. One 42-byte creature evolved that used programming tricks so clever that no human had ever thought of them. After analyzing the creature's program, Ray issued a challenge on the Internet to human programmers to accomplish the same task in the same amount of code. After three weeks, he got a response from an MIT hacker who accused him of perpetrating a fraud. The hacker proclaimed that it couldn't be done - and yet an entirely unconscious creature, devoid of any kind of intention, had accomplished it merely through the blind force of creative evolution.
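The mutation-and-selection loop described here can be pictured in a few lines of code. The sketch below is a deliberately toy illustration, not Ray's Tierra implementation: the bit-string genome, the mutation rate, and the fitness function are all assumptions chosen only to show how pseudo-random mutation plus selective pressure yields adaptation no one designed.

```python
import random

random.seed(42)  # the pseudo-random number generator that injects "chance"

GENOME_LEN = 16

def mutate(genome: list[int], rate: float = 0.05) -> list[int]:
    """Flip bits at random, replacing expected code with unanticipated bits."""
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def fitness(genome: list[int]) -> int:
    """Toy selective pressure: this environment happens to favor 1-bits."""
    return sum(genome)

# Start from identical "ancestors" and let selection act on mutated copies.
population = [[0] * GENOME_LEN for _ in range(20)]
for generation in range(50):
    offspring = [mutate(g) for g in population for _ in range(2)]
    # Keep the fittest individuals; the rest are discarded.
    population = sorted(offspring, key=fitness, reverse=True)[:20]

print(max(fitness(g) for g in population))  # adaptation emerges without design
```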
Accidents always have a double edge of danger and creativity. Once evolution gets going, it may not always produce results that humans can live with (sometimes in a literal sense). The "Tierra" program illustrates this aspect of accidents as well. Ray built "Tierra" using a "virtual computer" inside the physical computer. Only the virtual computer understood the topological coding scheme Ray used for his creatures' reproduction. One reason for creating a virtual computer was to make the software robust enough that it wouldn't crash when presented with new sequences of code defining the mutated creatures. Another reason was to keep whatever creatures evolved contained within the virtual computer, so they could not escape and contaminate larger computer systems. If a creature did get into the physical computer, Ray argued, its code would be read as data sequences rather than as programming instructions. Within the virtual computer, a program called the "reaper" executed (deleted) creatures that had lived a certain number of computer hours. In one programming run, however, creatures evolved that were able to take over the reaper program and prevent it from killing them. If the creatures were able to get into the programs overseeing their reproduction, it seems probable that they might also find a way to escape from the virtual computer into the actual computer, from where they could spread to computers throughout the world. To escape human intention can also mean to outrun or subvert human intention. Which leads to the conclusion that it may be an illusion to think we can control the evolution of artificial life forms. To get them to evolve, some degree of control must be abdicated, and it may not always be possible to wrest control back if we don't like the way things are going.
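The containment logic can be sketched in the same spirit. Again a toy under assumed names (Creature, reaper, MAX_AGE), not Ray's code; what it illustrates is that the reaper is itself just a program, and whatever evolves under a program can in principle evolve around it.

```python
from dataclasses import dataclass

MAX_AGE = 100  # assumed lifetime limit, in simulated "computer hours"

@dataclass
class Creature:
    code: list    # executable inside the virtual computer; mere data outside it
    age: int = 0

def reaper(soup: list) -> list:
    """Deletes creatures that have lived past their allotted time."""
    return [c for c in soup if c.age <= MAX_AGE]

def step(soup: list) -> list:
    for c in soup:
        c.age += 1
    # If a mutation let creatures intercept or disable this call, they would
    # escape deletion - essentially what happened in the run described above.
    return reaper(soup)
```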

AM: What makes a person an individual?

KH: What a question! If viewpoint is important in how one constructs an answer to this huge question (and it certainly is), then there will be almost as many answers as there are viewpoints, and almost as many viewpoints as there are people to hold them. Counting people alive and dead, that gives us an order of magnitude of about seven billion or so possible answers. To narrow the field a little, let me speak about some research that I have undertaken for my latest book, "How We Became Posthuman." According to Western liberal philosophy as it has been explicated by C. B. Macpherson, someone counts as an individual when he has the capacity to own himself, specifically his own body and the products of his labor. The stress placed here on ownership is of course characteristic of a capitalistic society; in Britain and Europe during the seventeenth and eighteenth centuries, liberal philosophy defined personhood through a set of interrelated assumptions that Macpherson calls "possessive individualism." Because someone owns himself or herself, it is possible for him or her to engage in market relations and thus to enter society as an individual. Macpherson recognized there is a chicken-and-egg problem here, for this view of individualism is certainly bound up with a society based on market relations, but philosophically the individual is said to pre-date those relations. In today's terminology, we might say that possessive individualism and capitalistic societies were co-emergent phenomena. Each required and catalyzed the appearance of the other. In any event, around the notion of the possessive individual accreted other qualities associated with this ability to engage in market relations. If the market was considered to be self-regulating, for example, that implied that the individuals participating in the market could also be self-regulating. Through such networks of associations, a number of qualities also came to be attached to the liberal humanist subject, including rationality, free will, independent agency, and the mind as the seat of identity. What I see happening today is a complication of the assumptions of possessive individualism coming from such fields as cognitive science, artificial intelligence, artificial life, computational theory, and mobile robotics. In these versions of the individual, the subject is seen not as a self-regulating, self-aware conscious subject with the full power of agency, but rather as a collection of semi-autonomous agents, each of which runs its own relatively simple program. Far from being the seat of identity, consciousness in this view is a late evolutionary event and much less important than consciousness thinks it is. I am reminded of a remark by comedian Emo Philips: "I used to think the brain was the most important organ in the body, but then I thought, who's telling me this?" For these researchers, consciousness becomes an epiphenomenon, an emergent property that needs only to provide a reliable interface with reality, not necessarily an accurate one ("accurate" is itself problematic in this context, for one would need to specify accurate according to which viewpoint). To demonstrate, Rodney Brooks, a mobile roboticist working at the MIT Artificial Intelligence Laboratory, points out that we all go through life with a large blank spot in the middle of our visual field, and remain for the most part happily unaware of it.
The posthuman individual, then, is not so much a single identity as a collection of agents working together. Agency still exists, but it is complicated because different agents have different agendas. Consciousness does not set these agendas; rather, it kicks in only at certain times to adjudicate conflicts between various subprograms. Moreover, consciousness remains largely unaware of the real nature of subjectivity, which is fractured, conflictual, and ultimately reducible to simple programs. Because the most interesting phenomena are often emergent (that is, properties that appear at the global level of a system and cannot be predicted from the individual parts), "accidents" enter into this world view as an intrinsic part of evolutionary processes. Although they remain unpredictable, in a sense they have been expected. They are also highly valued, for it is through such accidents that complexity, including life and human consciousness, characteristically comes into being. It is no "accident," I think, that this vision of human individuality allows the (post)human to be seamlessly articulated together with intelligent machines. Essentially this research is aimed toward understanding human agency, subjectivity, and identity as the result of computer modules running relatively simple programs. It comes about through what may in retrospect be seen as a second "grand synthesis" of the twentieth century (the first grand synthesis was the union of genetics and evolution). In the second synthesis, evolution as it is currently understood (an understanding that is itself a hybrid offspring of genetics and Darwinian selection) joins with computational theory. The individual, as he or she is emerging from this second synthesis, is seen as a creature forged by evolutionary programs that run on bioware. The proof for this claim is taken to be the successful simulation of such programs in intelligent machines. Rather than the human being the measure of all things, as the Greeks thought, increasingly the computer is taken as the measure of all things, including humans.
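One crude way to picture this "society of semi-autonomous agents" in code, purely as an illustration of the architecture described here rather than any researcher's actual model: simple agents each propose an action, and a consciousness-like arbiter is consulted only when the proposals conflict. All names and rules below are illustrative assumptions.

```python
# A toy "society of agents": each agent runs a simple program and proposes
# an action; an arbiter (the "consciousness" of the story) adjudicates
# only when the subprograms disagree.

def hunger_agent(state: dict) -> str:
    return "eat" if state["hunger"] > 5 else "idle"

def fatigue_agent(state: dict) -> str:
    return "sleep" if state["fatigue"] > 5 else "idle"

def arbiter(proposals: list[str], state: dict) -> str:
    """Steps in only when the agents' proposals conflict."""
    active = [p for p in proposals if p != "idle"]
    if len(set(active)) <= 1:  # no conflict: consciousness stays out of it
        return active[0] if active else "idle"
    # Conflict: side with the more urgent drive.
    return "eat" if state["hunger"] >= state["fatigue"] else "sleep"

state = {"hunger": 7, "fatigue": 9}
proposals = [hunger_agent(state), fatigue_agent(state)]
print(arbiter(proposals, state))  # -> "sleep"
```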

AM: How can my avatar be an individual?

KH: If one thinks of the "individual" as a collection of semi-autonomous agents, then it becomes clear that the avatar is a typical posthuman individual, for it is composed of many interacting parts: the person operating the program; the intelligent machines that run the various programs; the software that creates the avatar; the CRT screen that displays the avatar's actions; etc. When these parts all work together seamlessly, one may have the illusion that the avatar is a mere extension or expression of a self-contained, autonomous, agential subject. As soon as some glitch happens, however, the user becomes uncomfortably aware that the avatar is not under his or her control alone.

AM: How can my avatar get a body, with a lot of (normal, funny) actions and reactions? That would make it interesting to me.

KH: There's been a lot of ballyhoo about the "disembodiment" of cyberspace, but to my way of thinking, this kind of talk is possible only when one does not attend closely to the practices that produce virtual and real bodies. Many researchers now working in the social studies of science are looking at bodies not as pre-existing objects but as collectives produced through various kinds of practices. This has the effect of erasing (or better, bracketing) the body as an ontological entity and focusing on the processes through which bodies enter our perceptual horizons. In a sense, this approach follows a line of thought similar to the one Jorge Luis Borges explores in "Tlön, Uqbar, Orbis Tertius," when he imagines Tlön as a world with a language that can speak only verbs, never nouns. How would we know the body if it were a verb, a process, a constantly fluctuating and contextualized series of interactions rather than a noun, an object? Since practices are multiple, it follows that the bodies produced through practices are also going to be multiple, in many senses. As an example, say I am at my computer, reading a hypertext fiction such as Shelley Jackson's marvelous "Patchwork Girl." To access the text, I have to perform various physical actions - turn on the computer, call up the program, open it, etc. My body-as-a-verb can be understood, then, as the incorporated practices through which I interact with the computer and CRT screen. We can call this body the enacted body, because it is produced through the actions I perform. On the screen is an image of a body, which we can call the represented body. This body exists, however, only because of the intelligent machine that is mediating between the represented and enacted bodies. It is important not to underestimate the importance of the intelligent machine as an active agent in this process; exactly what kind of machine it is, and what kinds of programs it runs, will have very significant impacts on my reading experience, from the way the represented body is imaged to how long it takes for the programs to load. My reading, then, is really a collective action performed through complex interactions between the enacted body, the intelligent agents running various programs in the machine, and the represented bodies in the text I am reading. Where is the "I" in this process? If we don't assume identity as a pre-existing entity, then the "I" cannot be located in the body alone but rather is a production that emerges through various kinds of complex processes, of which this is only one example. Is this interesting? I don't know if you would consider it so. What I do know is that it embodies a very different way of looking at identity, subjectivity, and reality than traditional liberal humanism does.

AM: What would you like VR to become?

KH: VR will soon be populated by a variety of intelligent agents - some of them human, some of them synthetic. As the environment becomes richer in this respect, more complex interactions with it will be possible. Moreover, I think the days of the bulky helmets are numbered; in the future, VR will exist both in immersive forms, easily accessible through lightweight glasses, and in what Marcos Novak calls "eversive" projections that come out into the real world and with which we interact. These environments will offer fluid exchanges between simulated and real contexts, with flexible and relatively non-obtrusive interfaces of various kinds, from motion and infra-red sensors to more specialized interfaces constructed through screens or glasses. To some extent these environments already exist, but we are only beginning to theorize them as integrated complex adaptive systems. For example, in the U.S. most supermarkets and department stores have door sensors that open the doors when you get within a certain range; this is an instance of a low-level and fairly ordinary eversive environment. Another instance is the cursor on the computer screen, a visual point that serves as a minimal kind of avatar, although we may not be accustomed to thinking of it as such.

Moreover, intelligent agents will take increasingly active roles in constructing and filtering information for human users. In an information-rich environment, as Richard Lanham has pointed out, the scarce commodity is not information; rather, it is human attention. It's no "accident," I think, that just when the information economy really begins to take off, in the late 1970s and 1980s, the medical profession identifies a syndrome it calls "attention deficit disorder" (ADD). It is not difficult to imagine that when human attention is in short supply, there will be pressure to develop intelligent agents that can take over tasks that may not require our active attention. For example, there already exist intelligent agent programs designed to filter your email. The agent, built along lines similar to neural nets, begins by observing your habits in answering your email. It notices that you always read messages from Arjen first, but that you systematically delete messages from Roger. So it starts putting Arjen's messages at the top of the queue, and shunts Roger's messages off into a "low priority" file that will be automatically deleted after 30 days. In this vision of how information-rich environments will operate, human intelligence will ride on top of a highly articulated ecology, with many mundane and routine tasks being done by the intelligent agents that are part of that ecology. The idea is to conserve human intelligence for the tasks where it really counts. Of course, such a scheme also inevitably means that the human is delegating some measure of control to intelligent agents. Although many people find this scenario scary (notably Joseph Weizenbaum, who cautioned against precisely such developments twenty years ago, in Computer Power and Human Reason), the delegation of control to technological objects is scarcely new; humans have been doing it for thousands of years, starting with stone axes and flints for making fire. While it implies increasing interdependence with synthetic intelligence (and soon, perhaps, synthetic sentience), in my view this is scary only if we persist in thinking of humans as autonomous beings who can control their environments. In fact, of course, this has always been more or less an illusion. Humans have always been interdependent with the ecologies within which they live. Maybe it's time we recognized this interdependence and gave up the fantasy of complete and total control.
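A habit-learning mail filter of the kind described can be sketched in miniature. This is an assumed design, not any particular product of the period: the per-sender score is a crude stand-in for the neural-net-style learning mentioned above, and the sender names come from the interview's own example.

```python
from collections import defaultdict

class MailAgent:
    """Learns per-sender priorities by watching which messages the user
    reads and which the user deletes (a crude stand-in for a neural net)."""

    def __init__(self):
        self.score = defaultdict(float)

    def observe(self, sender: str, action: str) -> None:
        # Reinforce senders whose mail is read; penalize senders whose mail
        # is systematically deleted.
        self.score[sender] += 1.0 if action == "read" else -1.0

    def sort_inbox(self, inbox: list[tuple[str, str]]) -> list[tuple[str, str]]:
        # High-scoring senders rise to the top of the queue; low-scoring
        # senders sink toward the "low priority" file.
        return sorted(inbox, key=lambda msg: self.score[msg[0]], reverse=True)

agent = MailAgent()
for _ in range(5):
    agent.observe("Arjen", "read")
    agent.observe("Roger", "delete")

inbox = [("Roger", "meeting notes"), ("Arjen", "interview questions")]
print(agent.sort_inbox(inbox))  # Arjen's message now heads the queue
```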

© 1998 Arjen Mulder / V2_
