Life, Intelligence and other Dangerous Constructions of Mind and Matter

Text by Kerstin Dautenhahn from the 1997 publication "TechnoMorphica."

The research agenda of Artificial Life (AL) is to synthesize "life-like" systems: complex structures emerging from silicon instead of carbon. This bottom-up, synthetic approach towards complexity and intelligent behavior stands in strong opposition to classical Artificial Intelligence (AI) and its definition of intelligence as computing and problem solving. However, this does not necessarily mean that AL contributions are more relevant to issues of life and intelligence than those of classical AI. The scientific methodology of AL is to use natural and artificial systems as part of a "comparative study." AL and AI are both based on the assumption of an objective human designer, needed to detect analogies and to evaluate the quality of the model and its implications. But following radical constructivism, objectivity itself is an illusion, created as a social convention in order to communicate analogous experiences. So, how can I, as a designer of artifacts, learn about life and intelligence while being myself part of the system? As an answer, my own interactive conception of intelligence and life-like behavior is based on studying the social system comprising the artifact and the humans who are involved as experimenters, designers and/or observers. Discussing the way humans interpret technology is one of my main concerns. Instead of assuming a clear distinction between the domain of the observer on the one hand and that of the "object" on the other, object and observer can be seen within one domain of the cognitive construction of social "realities" and life.

There is no "right answer" to how life can be defined. In our everyday lives we have numerous examples of how "artifacts" are created and of what they can mean to humans. From early childhood on, humans are experts at taking various abstract or fictional things for real entities and becoming emotionally engaged with them (comic, television, or video game characters, football teams, as well as political theories and religion). What counts, in my view, is the question of what is real to the embodied mind of an individual person. Following the argumentation of radical constructivism, it is more useful to discuss the individual's constructed conception of reality than an objective reality. The meaning, not the technological basis, is central. In this way, experiences which are important to the life of an individual should be taken seriously, no matter whether they are real, simulated, virtual or fictional. Humans are social beings, and they want technology to be "familiar" to them. If the meaning of robots, their social and cultural "embeddedness," were studied as thoroughly as their navigation or problem-solving abilities, then even robots could become "familiar" and "socially adapted" to us.


Recently the concept of "believability" has gained attention in the autonomous and intelligent agents community. The concept originated in the arts and was first discussed for software agents. Believability is not necessarily dependent on exhibiting intelligent, complex or realistic behavior: "believability" is in the eye of the observer. Humans' attitudes towards artifacts have mainly been studied by artists, and the focus of interest of a scientific researcher is somewhat different from that of an artist. But in the case of constructing artifacts, intelligence is expressed (and measured) by the behavior of the artifact. At this point the observer's individual personality, naive psychology and empathy mechanisms enter the stage and can hardly be separated from what are supposed to be "objective performance parameters." Examples of believable characters are found in "Toy Story" and "Luxo Jr." (by John Lasseter, Pixar Animation Studios) or in the computer game "Creatures" (by Millennium). Doesn't any kind of artifact which is evaluated by human observers face this aspect of believability? Believability is not about fooling, that is, a shallow way of attaching features to the artifact so that a human observer finds it appealing. You can attach a tail to the back of a robot, paint two big eyes on its front end, and cover it with cat fur, but this does not make it a believable cat robot. We cannot build believable robots by a shallow "imitation" of animals and their behavior, because humans are very sensitive to what a plausible, coherent, "good" story is. Neither is believability about cheating, about exploiting the typical human way of perceiving and interpreting the world in terms of intentionality. We enjoy a love song, or a movie which makes us cry or laugh. This is not a sign of exploiting the human preference for romance or tragedy; we may like it or not. But it is a sign of a good story, a story which provokes engagement.


In AI, and more generally in computer science, the concept of memory has been dominated by the database metaphor. This metaphor has also had a strong influence on concepts of human memory in cognitive science. In contrast, Roger C. Schank and Robert P. Abelson argue for a more dynamic account of memory. They point towards the relation of stories to knowledge and memory, and to the role of stories in individual and social understanding processes. 1

1. R. S. Wyer (ed.), Knowledge and Memory: The Real Story, Lawrence Erlbaum Associates, 1995.

They argue that stories about one's own experiences and the experiences of others form the fundamental basis of human memory, knowledge, and social communication. New experiences are interpreted in terms of old stories. Reconstituted memories form the basis of the individual's remembered self, and in the same way, shared story memories within social groups should define social selves. Stories do not comprise only humans. Machines and technology have always been part of them, in the same way as animals (as pets, livestock, or mythical characters like wolves and bears). We have to be prepared for intelligent, autonomous artifacts becoming more and more part of our individual and cultural memory, characters and actors in our social stories.


How do "stories," which seem to play an important role in how we perceive and interpret the world, influence scientific research? To give an example: the technology which we use to build a robot, the experiments we design, the methods we use to quantify and evaluate the robot's behavior, the language and style we use to talk about and present this work: all this is part of what the robot is; it is the "scientific story" about this particular system. If the scientific work concerns the construction of an artifact which exhibits "life-like" behavior, i.e. one which has a body or behavior similar to that of a natural living system, then believability (particularly in public opinion) plays an important role. The story can develop an existence of its own: even when the robot has long been disassembled, the "story" still exists, in the minds of humans and in the "cultural media" (journals, video, et cetera). The chances that the story persists increase with its degree of plausibility, i.e. with whether the story is "good" or not. In the scientific realm, plausibility is related to the degree to which the story can be added to already existing chapters in the book of scientific discourse, i.e. whether it can be linked to existing knowledge and conceptions about what is "right" and "reasonable." We are aware that technological, historical and social aspects can influence scientific thinking, but these insights do not seem to influence ongoing scientific work. Plausibility is what Gerhard Roth considered the "only" goal of scientific research in the age of constructivism, in which we are about to learn that the truth is not out there! However, for some people, questioning objectivity in scientific investigation represents a "danger" to scientific work, since it threatens its methodological and epistemological foundations; e.g. it would completely abolish the hallmark of science, namely the notion of reproducible results.
But we can still do experiments and strive for reproducible results while at the same time not believing that we can find the one and only truth.


But how can artifacts cross the boundary between conceptions of understanding in computationalism and phenomenology and become "social minds"? In my view the phenomenological dimension of social understanding is closely coupled to empathy as an experiential, bodily phenomenon of internal dynamics, and to a second process, biographic re-construction, which enables the empathizing agent to relate a concrete communication situation to a complex biographical "story" that helps to interpret and understand social interactions. Biographic re-construction, which I regard as important for elaborated kinds of empathic understanding of another person, can be thought of as creating a plausible story about the person's context, the biography, including aspects of past, present and future. I believe that stories are not only a fundamental structure of our remembering processes; they are inherent to our (social) understanding of and interaction with the world. This creative aspect of story-telling, telling autobiographic stories about oneself and making biographic re-constructions of other persons, is linked to the empathic, experiential way of relating other persons to oneself. I hypothesize that this is the central mechanism for social understanding, which constitutes what we call "social intelligence." This general idea of experiential understanding is not limited to physically embodied agents; we can think of a common "social interface," i.e. of constructing means of interaction and communication in which all kinds of agents can become engaged. In my view engagement is a key concept for meaningful, embodied interactions. Software agents and physical agents (robots) need not necessarily have a "natural" form of social behavior, communication and interaction; they can build up social structures within their own communities. But interactions with humans (e.g. in communication situations or co-operative task solving) create a need for all these creatures to behave "naturally," i.e. in a way which is acceptable and comfortable to humans. Embodiment need not be limited to a physical body; it can be realized in the embeddedness and coupling of an agent with its environment. We will get used to intelligent, personal agents which navigate computer networks, assisting us in both business and private matters, giving instructions to intelligent machines, negotiating with autonomous robots, and spinning the web of a world-wide, cross-technological and cross-species society of physical and virtual creatures, behaving "life-like," made of metal, silicon and carbon.


In my view scientific investigation and artistic "creation" (generally considered opposite and/or independent areas) can be combined in the goal of creating complexity within the AL framework. I distinguish two AL research directions. Probably the most widespread direction is driven by the underlying assumption that objective criteria exist for developing an objective "logic of life" which is independent of the matter in which the life-form is realized. This "logic of life" is regarded as a language which produced biological life, which can produce artificial life (robots, software creatures), and which potentially also applies to other, unknown, yet-to-be-discovered alien life forms on other planets. This underlying "philosophy of thinking" is closely related to AI and its quest for a definition of intelligence, and tends to develop criteria and a Turing test for life. Along this direction the issue of "objective" criteria for life is discussed, and how tests (analogous to the Turing Test for human intelligence) can separate living from non-living artifacts. I do not see a need for scientists to give a specification of life and to define mechanisms of "living" which apply across species and across the physical basis. Conceptions of life and living are personal, subjective, individual. But then, if not specifying life, what can AL be about?

Why not consider AL in the creative "story-telling" framework, so that the goal is to find forms of complexity in artificial media which appear to be natural, and which give us plausible explanations about life? Here, we do not assume the existence of a "logic of life." The starting point is the physical matter, not an abstract theory. The challenges are rather to explore and investigate what forms complexity can adopt in artificial media, how it can evolve and/or how it can be designed. We need not study only evolution in order to investigate complexity; the design of complex systems can give us plausible explanations as well. Each material can have its own inherent "logic," i.e. properties and mechanisms which form complex structures. In addition, why is it necessary to define life? Life (and intelligence) are in the eye of the beholder (i.e. in the mind of the observer); these concepts characterize the way human beings interpret the world. Why should scientists define life, if every individual human being has his or her own conceptions of life and living? Nevertheless, the research agenda is defined as finding believable "scientific stories" which can give us plausible explanations and interpretations of life. The difference between these two AL approaches is not identical to the weak/strong interpretation of AL. Neither is the difference due to the use of particular architectures or mechanisms, e.g. dynamic systems approaches versus logic-oriented approaches. The crucial difference is rather whether one believes that something is "out there" which has to be discovered (the logic behind the things, the logic of life), or whether it is the very process of creating, constructing, exploring and designing systems which helps us to learn about life. This is my answer to Simon Penny's question: "Why do we want our machines to seem alive?" 2

2. Simon Penny, "The pursuit of the living machine," Scientific American, 1995.

This creative, "story-telling" direction of AL is in my view what makes AL fundamentally "special" and can potentially guarantee its survival as a genuinely interdisciplinary research field.


What will the relationships between robots and humans be like in my "believable story-telling" approach to AL? We know quite well what a human is like, because we are human beings. We do not know exactly what humans "are for," but we have an idea of what they can do, should do, and normally are doing. Our picture of humans and their "function" is not consistent, and need not be. But the picture is colorful and structured; it is full of stories. What is an artifact for, e.g. a robot?
It can be a machine, working and solving tasks for us, an intelligent vacuum cleaner for example. When it has finished its work we switch it off, or it goes to its "nest." In any case, it shall not bother us. It can be a very complex, unpredictable, "intelligent" machine, solving complex tasks, surviving in sewage pipes or on the Moon. Such machines should "function," by whatever means and techniques this function can be realized. Do we care about them? Well, they are somehow "life-like"; they mimic some aspects which remind us of animals. But they are not like us, so why should we care about them? We should be able to control them; they could become dangerous. They are not adapted to us. If they are really good, they can be competitors for resources. Will they be able to entertain us, to please us, to tell us stories? I believe not. They are enacting the stories they have been told (by the human designer). Humans will be "better," maybe physically weaker and incredibly slow, but they keep the role of the story-teller. The complex human embodied mind will still be the source of creativity. Robots can be our "companions," our personal robots, helping us in daily life, interacting with us in an individual way, keeping us comfortable, helping us survive, entertaining us. They are adapted to us as individual persons, to our human society, to human life. They can play these funny imitation games with us and make us laugh when they desperately try to flip-flap but fail, their wheels jamming every time. They learn during their lifetime, about themselves and about us. They sit with us in the garden, watching the crows frolicking in the air. We say how much we wish to be able to fly. The robot replies: "I know what you mean." They can listen. And create their own stories. Do we care about them? Do we care about our pet dogs? Of course; they are a bit like us, somehow family members.
They have a meaning to us, in our "world," the mental world created inside our mind, the only world we have access to. The only "real thing." We don't care what species the robots belong to or what kind of material they are made of. They are our friends.


© 1997 Kerstin Dautenhahn / V2_
