Version: 3.0 (Tue Apr 25 1995)
Word Count: 8083
Brian L. Keeley
Philosophy Laboratory, Department of Philosophy (0302), University of
California at San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0302; email: email@example.com
Current address: Philosophy Field Group, Pitzer College, 1050 N. Mills Ave., Claremont, CA 91711. EMAIL: firstname.lastname@example.org
Against the Global Replacement:
On the Application of the Philosophy of Artificial
Intelligence to Artificial Life1
This paper is a complement to the recent wealth of literature suggesting a strong philosophical relationship between artificial life (A-Life) and artificial intelligence (AI). I seek to point out where this analogy seems to break down, or where it would lead us to draw incorrect conclusions about the philosophical situation of A-Life. First, I sketch a thought experiment (based on the work of Tom Ray) that suggests how a certain subset of A-Life experiments should be evaluated. In doing so, I suggest that treating A-Life experiments as if they were just AI experiments applied to a new domain may lead us to see problems (like Searle's “Chinese room”) which do not exist. In the second half of the paper, I examine the reasons for suggesting that there is a philosophical relationship between the two fields. I characterize the strong thesis for a translation of AI concepts, metaphors, and arguments into A-Life as the “global replacement strategy.” Such a strategy is only fruitful inasmuch as there is a strong analogy between AI and A-Life. I conclude the paper with a discussion of two areas where such a strong analogy seems to break down. These areas relate to eliminative materialism and the lack of a “subjective” element in biology. I conclude that the burden of proof lies with the person who wishes to import a concept from another discipline into A-Life, even if that other discipline is AI.
In many ways, Artificial Life (A-Life) has long been the poor, younger sibling of Artificial Intelligence (AI). The two fields share many superficial similarities: Where AI can be seen as the synthetic, engineering side of the more analytic theoretical psychology, A-Life can be seen as the synthetic, engineering side of the more analytic theoretical biology. Both fields make extensive use of the modern digital computer, currently only as models, but also (practitioners in both fields hope) potentially as instances or examples of the phenomena they study. The philosophical literature of A-Life is littered with concepts, metaphors and arguments taken from AI. Variously, there is mention of A-Life Turing tests, A-Life dualism, A-Life functionalism, A-Life Chinese rooms, etc., all of which are concepts familiar from decades of discussion in AI.
Some, like Elliott Sober, have even gone so far as to point to a strong analogy between AI and A-Life; an analogy that seems to vindicate wholesale philosophical looting of traditional positions in AI. But it is the nature of analogies-- even strong analogies-- that there are differences between the two related entities. A-Life is not AI. On the basis of these differences, I argue that artificial life would be best served by originating new philosophical positions and metaphors of its own, rather than haphazardly borrowing such constructions from artificial intelligence.
The spirit of this paper is to act as a complement to the growing pool of literature which either documents or implies similarities between A-Life and AI. Instead, I highlight some dissimilarities between the two endeavors. In particular, I wish to point out areas where these differences are actually advantageous for A-Life, and where looking at A-Life through “AI-colored glasses” will lead one to see problems that may not exist.
Following this introduction, Section 2 begins with a thought-experiment that is meant to capture an idealized picture of one of the goals of A-Life: to create life in a computer. Based loosely on the work of Tom Ray, this thought experiment is intended to explore the relationship between natural systems and A-Life systems that purport to exhibit biological phenomena. I hope to determine the basis on which we should decide whether a given A-Life system is a genuine example of artificial life. In doing so, I suggest the basis for this judgment is different from that traditionally involved in determining whether a system is an example of artificial intelligence. I conclude that treating A-Life as if it were just AI applied to different natural phenomena leads one to grapple with “Chinese room” objections to A-Life. However, I argue that the proper evaluation of A-Life experiments is sufficiently different to allow them to escape such considerations.
In Section 3, I turn to the more abstract issue of the proposed analogy between AI and A-Life. What are the arguments in its favor? More importantly, given such an analogy, what license does it give when deciding which concepts and metaphors from AI should be taken up in A-Life? I argue on the side of caution when “translating” the philosophy of AI into a philosophy of A-Life, pointing out that doing this properly requires a familiarity with both what is analogous and what is disanalogous between the fields. I introduce the concept of an extreme position relative to the application of concepts from AI to A-Life, which I call the “Global Replacement Strategy.” This strategy would have us import into A-Life most or all of the philosophical framework developed within AI. But such an extreme position is only warranted inasmuch as there is a strong analogy between the two fields. With this in mind, I end the paper with a discussion of two strong disanalogies between AI and A-Life: the lack of a viable eliminative materialist position within A-Life, and the lack of anything analogous to the “problem of consciousness” in A-Life.
2.1 Blob World vs. Blip World: an A-Life metaphor
Let us now turn to that old chestnut of philosophical methodology, the thought experiment. In what follows, I will consider an idealized example of an A-Life experiment in order to examine where the epistemological priorities lie, and whether they lie in places suggested by a strong relationship to AI.
Imagine, if you will, a medium that exhibits some phenomena of interest to biology (Figure 1, left side). Unfortunately, these phenomena are microscopic-- invisible to the naked eye-- requiring the use of some kind of “visualizer” which can magnify the behavior (in a way that preserves any regularities) so that it may be seen on a CRT screen. We see on that screen an image consisting of slowly moving circles and some darker masses, all embedded within a heterogeneous medium. As we watch, some of the circles envelop the dark masses, while other circles occasionally split into two more-or-less identical circles. Let us call this medium and its phenomena “Blob World.”
[Figure 1 about here – Figures unavailable in Web version]
Now imagine a second medium, which also exhibits some interesting behavior (Figure 1, right side). It too is very small and otherwise invisible to the naked eye, so a second “visualizer” is required to make the phenomena visible on a CRT screen. What we see on this screen is a column of letters: “0080-aaa,” “0045-aab,” “0135-aaa,” etc., next to which are some horizontal bars that are hectically pulsing out and back across the screen. As we watch, new alpha-numeric combinations come into existence, while others disappear. Though the appropriateness of doing so is not yet apparent, let us call this second medium and its phenomena “Blip World.”
It should be no surprise when a microbiologist comes around and tells us that Blob World is a group of microscopic single-celled organisms feeding and multiplying in a petri dish. And, as she has been recently reading up on research in A-Life, she also tells us that the Blip World output looks a lot like the real-time output of Tom Ray's Tierra simulator. (Blip World is not identical to Tierra in all its details-- Blip World is simplified for ease of presentation-- but they are meant to be identical in their philosophical status. That is, Tierra is one of many possible Blip Worlds. The “avida” system is another, related Blip World.)
In Blip World, each alpha-numeric string identifies an artificial “species.” A “species” is defined as a collection of artificial “organisms” in the memory (a portion of the “Random Access Memory” (RAM)) of the computer which are made up of identical strings of instructions. The numeric portion of the identifier indicates the length of the string. The letters differentiate between species of code of the same length. For example, the identifier “0005-aaa” would refer to the first instance of a species whose members all consist of the same pattern of five instructions. The next five-instruction species would be designated “0005-aab”.
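The naming convention just described can be sketched in a few lines of Python. This is a hypothetical illustration, not code from Tierra or avida; it assumes only that same-length species receive letter suffixes in order of first appearance.

```python
# A minimal sketch (not Ray's actual code) of the Blip World species-naming
# scheme: identifiers pair genome length with a letter suffix that
# distinguishes same-length species in order of discovery.
import itertools
import string

def suffixes():
    """Yield 'aaa', 'aab', 'aac', ... in order."""
    for combo in itertools.product(string.ascii_lowercase, repeat=3):
        yield "".join(combo)

class SpeciesNamer:
    def __init__(self):
        self._by_length = {}  # genome length -> suffix generator
        self._names = {}      # genome (tuple of instructions) -> identifier

    def name(self, genome):
        genome = tuple(genome)
        if genome not in self._names:
            gen = self._by_length.setdefault(len(genome), suffixes())
            self._names[genome] = f"{len(genome):04d}-{next(gen)}"
        return self._names[genome]

namer = SpeciesNamer()
print(namer.name(["nop"] * 5))               # → 0005-aaa
print(namer.name(["nop"] * 4 + ["inc"]))     # → 0005-aab
```

A repeated lookup of the same genome returns the same identifier, so the scheme tracks species identity as well as length.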
The bar next to the identifier represents the proportion of memory occupied by instances of that species-- the individual “organisms.” These individuals are essentially pieces of self-replicating code: code that contains the instructions required to replicate itself in the medium of RAM. They are patterns of instructions which can successfully manipulate the operating system of the computer into producing numerous copies of themselves. As such, the pattern of instructions is both genotype and phenotype; it is both the instructions for replicating and what is replicated.
In this thought experiment, I imagine we begin by placing a hand-engineered organism in RAM. If allowed, this “Ancestor” would soon replicate itself to the point that it filled up the entire memory with little copies of itself. However, this is prevented by two mechanisms. First, the Ancestor (and its descendants) is not allowed to replicate itself perfectly every time. Every now and then the operating system intentionally makes an error and writes a “0” instead of a “1” or vice versa, thereby changing its string of instructions and introducing a new species into Blip World. In this way, mutations of the initial seed code enter the population. Just as with natural organisms, most of these mutations are fatal, in that they do not lead to code capable of self-replication, but some do turn out to be viable in this sense. Second, in order to keep the successfully-replicating code from filling up the memory of the computer, a proportion of organisms is culled each generation. The way Blip World is set up, the smaller an organism is, the more quickly it can reproduce. To stay ahead of the “reaper” which is constantly removing a portion of the population, a particular species must generate as many copies of itself as it can, as quickly as it can.
Blip World exhibits the same interesting behavior as Tierra, including “parasites” which locate themselves next to “hosts” and trick these hosts into copying the parasite's code instead of their own, and “hyperparasites” which play a similar trick on the parasites. We also see extended periods of stasis in the diversity of species interspersed with spurts of tumultuous change as new species compete with and occasionally replace the old.
Blip Worlds like Tierra are A-Life simulations. We are called upon to evaluate the claim that what is going on in these worlds is similar enough to what is going on in real biological systems, such as the petri dish, that the predicates “alive” or “biological” ought to be applied to each with equal force. In essence, the claim is that Blip World contains life, just as biologists agree that Blob World does. The only potentially relevant difference, so goes the claim, is that Blip World exhibits man-made or artificial life, whereas natural life is going on in Blob World. In other words, the only important difference between the two situations is one of origins. To evaluate this claim, we need to examine the two kinds of systems in detail to determine whether any relevant dissimilarities or asymmetries between the two exist. It should be kept in mind that the task here is not to determine whether the claim is true, but to say in virtue of what it is or is not true. This latter task is the philosophical one that we must confront. Only after we have determined on just what criteria the decision of “life” or “not life” is to be made, can we turn to details of a specific system (like Tierra) and attempt to make that decision.
One difference between the two scenarios can be found on their respective display screens. With Blob World, we see a picture of the petri dish, whereas with Blip World all we see is some kind of data chart. It is like the difference between seeing William S. Burroughs through the lens of a video camera and reading his biography. Clearly, one feels, the situation in the two scenarios must be markedly different. In Blob World, real biological phenomena (eating, reproducing, etc.) are going on. We can actually see them on the screen. But in Blip World, all that is going on is some kind of symbol manipulation and we are treated to the results of these computations on the output screen. At best, only simulated-- as if-- biological phenomena are going on.
However, this conclusion is hasty. The behavior of the two worlds is indeed differently visualized, but this is due primarily to the different temporal scales of the two situations. Let us call the representation given in Blob World a window representation (WR): a representation that depicts, as accurately as possible, the appearance of the world under scrutiny. It provides the viewer with a “window” on the medium. It is what we imagine we would see if we were miniaturized, or if the petri dish and all of its inhabitants were magically enlarged to the size of a swimming pool. Let us call the representation of Blip World a dynamic time-course representation (DTCR): a representation of the long-term, gross dynamics of the world, presented in aggregate, statistical form.
If these representations were somehow uniquely and exclusively tied to the worlds at hand, this would indeed be an important difference between them. But that one of these representations is commonly and preferentially used with each respective world is just an artifact of what we find most informative about each world and which is easiest to generate. For instance, a DTCR of Blob World could be generated by identifying the individual organisms and keeping track of their movement and reproduction (Figure 2, left side). In other words, if we were patient enough, we could keep track of lineages (perhaps by chemical tagging) and the percentage of the Blob World each lineage occupied. It would admittedly be difficult to generate such data (especially in real time), but there is no reason in principle why it could not be done.
[Figure 2 about here]
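The lineage-tracking procedure just described amounts to a simple aggregation, which can be sketched hypothetically in Python: given a census of tagged organisms at each timestep, a DTCR is just the fraction of the world each lineage occupies over time.

```python
from collections import Counter

def dtcr(observations):
    """Build a dynamic time-course representation from raw censuses.

    observations: a list of timesteps, each a list of lineage tags
    (one tag per observed organism). Returns, for each timestep, a dict
    mapping lineage -> fraction of the population it occupies."""
    series = []
    for snapshot in observations:
        counts = Counter(snapshot)
        total = sum(counts.values())
        series.append({tag: n / total for tag, n in counts.items()})
    return series

snapshots = [
    ["a", "a", "b"],        # lineage a dominates early...
    ["a", "b", "b", "b"],   # ...then lineage b takes over
]
for t, fractions in enumerate(dtcr(snapshots)):
    print(t, fractions)
```

The same function would serve for either world; only the labor of collecting the censuses differs, which is the point made above about the petri dish.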
That the dynamics displayed in the DTCRs of Blip World and Blob World show similarities is a crucial point. I claim that it is on the basis of this kind of similarity alone that we are led to believe that current A-Life research is worth taking seriously. Artificial life's biggest claim to fame is that computer models of biological systems are often remarkably good at capturing the gross, high-level dynamics of biological systems. The literature is packed with computer models which capture population dynamics, the evolution of cooperative behavior, speciation, learning, etc. Often, such examples of the “look and feel” of biological systems are all these models capture, but some systems-- in particular, Blip Worlds-- purport to capture more.
Just as a DTCR of Blob World can be produced, it is fairly trivial to produce a WR of Blip World (Figure 2, right side). It would be a bit map of memory; a plane of 1's and 0's blinking on and off at a very high rate. These numbers represent the patterns of high and low voltages present in the memory of the computer. Where the WR of the petri dish is made up of collections of “blobs,” the WR of this alleged “electronic petri dish” would be made up of collections of “blips.” A Blip World WR would be pretty meaningless to most viewers, and this is why this representation is rarely used to display the behavior of Blip World systems like Tierra.
But the patterns are there to be seen, if one could train oneself to see them. Properly trained, one would see that certain strings of bits are more numerous than others, as the more successful codes (and their descendants) copied themselves. The more fit would be the more populous. If one watched closely enough, new types would be seen arising in the population, as mutation and selection occur. Some of these new types would spread and take over the world, whereas others would die out immediately. The existence of these patterns is another crucial similarity between Blip World and Blob World, and, as discussed below, this similarity is lacking or unimportant for similarly constructed AI models.
The asymmetry in the representational forms is then an accident of the combined effects of the dynamics of the systems involved, the limits of our perceptual capabilities, and our level of familiarity with the types of representations involved. It is more informative to see the time course data of Blip World, and they are relatively easy to generate. With life in petri dishes, such aggregate data are hard to produce. Also, familiarity with Blob World WRs makes it easier to see the behavior in which we are interested using that kind of representation.
2.2 What ought to be made from this metaphor?
In many ways then, the situations with Blip World and Blob World are analogous. But at the same time, there are indeed differences between them: a) the species of Blip World seem to have a much faster generation time, and b) behavior in Blip World is more easily quantified than that of Blob World. We might also add c) the form of energy used by both systems is different-- Blip World organisms use electricity whereas Blob World organisms use sugars and sunlight. However, these differences are not necessarily relevant to the question of whether Blip World is truly biological. We can imagine genuine living systems whose metabolisms and life-cycles occur at a much higher rate than that of life on Earth. We can also imagine developing the technology to produce from petri dishes the kinds of DTCRs we can generate so easily for Blip World. Similarly, the details of how living systems convert energy into useful behavior also seem to be accidental and not an essential property of life. That Blip World is different on these counts only illustrates that, if it is truly biological, it is a biology different from life-as-we-know-it.
Some might say that I have so far overlooked an important difference between the two systems. It might be argued that Blip World is “merely a simulation,” that all that is going on in Blip World systems is mere symbol manipulation. The crux of this complaint can be traced to John Searle's now classic 1980 paper, “Minds, Brains, and Programs”, in which he claims to refute what he calls “strong artificial intelligence.” Strong artificial intelligence is the claim that an appropriately programmed computer can be an instance of a truly intelligent system.
Searle carries out this refutation through the use of his “Chinese room” thought experiment, which purports to show that mere rule following and symbol-manipulation is not sufficient for true understanding, meaning, or intentionality. A conclusion Searle draws from his arguments is that the best an AI program could ever be is a model or simulation of meaningful behavior, but never an instance of it. For present purposes, I am going to accept a major conclusion of Searle's argument: a system cannot be said to exhibit a property such as “intelligence” (or in the case of A-Life, “life”) by virtue of its computational properties alone. As Searle might put it, computational properties are not the proper kind of causal properties to instantiate real intelligence (or life). “Real intelligence” (or life) requires a different set of causal properties to bring it about.
Stevan Harnad has suggested just such an application of Searle's argument to the endeavor of artificial life. Specifically, his claim is that, unless it is grounded (hooked up to the world with sensors and effectors), the best an A-Life computer program could ever be is a simulation of life, never an instance of it. It would seem that such a criticism applies to Blip World programs like Tierra. (However, it is difficult to be sure, as he never mentions any specific A-Life research by name.) Blip World is not hooked up to the world outside the computer in any way significantly different from the way in which traditional AI models are. We are invited to draw conclusions about the reality of such A-Life models similar to those which Searle and Harnad draw about such AI models.
However, to draw this quick conclusion is to fall into the trap of looking at A-Life models as if they are simply AI models applied to a different domain. The two situations certainly look alike: a computer is crunching away on a program and throwing data up on a screen that bears a striking resemblance to what some natural phenomenon would throw up on a screen and, if that resemblance is close enough, some people conclude that the computer is instantiating that natural phenomenon as well. If Searle has refuted this argument in AI, surely he has done so in A-Life, as well?
Wrong. To see why, consider how the position known as “functionalism” is generally put to use in AI. Particularly in its original Turing Test form, functionalism embodies the claim that, to some degree of abstraction, what a system is made of does not matter in determining whether it is “intelligent,” “conscious,” “intentional,” etc. What matters is whether it behaves in the correct way.2 This claim about the irrelevancy of the material substrate of cognition is referred to by philosophers as the multiple realizability thesis. The Turing Test  sets out a strict procedure for determining what is legitimate evidence for making judgements about intelligence: written answers to questions input to the system via a teletype. Modern versions of functionalism substitute other behaviors in place of those of the Turing Test, such as Harnad's suggestion that an AI system be able to sense the world and execute robotic behaviors. However, the multiple realizability thesis is maintained by limiting the level of detail concerning evidence based directly upon how the system produces its behavior. For example, that a system processes procedural memory in one subsystem and episodic memory in another might well be an acceptable level of detail about how a system produces the behavior it does. However, to note that the memory subsystem works by storing patterns of high and low voltages in RAM (rather than by, say, changing the strength of synaptic connections between neurons) is to evaluate the production of behavior at too fine a level of detail. Functionalism calls upon us first to determine whether a system's behavioral output meets some criterion (or set of criteria), and then to determine whether the system produces that behavior in a way that meets some functional description. If a system meets these criteria then an attribution (“intelligent,” “conscious”) is projected down onto the specific physical system that generated the behavior.
However, I want to stress that this is not how the claim of “life” is decided in the case of Blip Worlds. Whether Blip World contains living things is not determined on the basis of what is displayed in the DTCR, or on the basis of some high-level, functional description, as would be the case if Blip World were an AI system. Blip World is evaluated as living or not on the basis of what behavior it exhibits in the medium (as revealed to us in the WR). If the behavior of the medium is sufficiently like that of the petri dish, then we call it biological, or “living.” Given that neither medium is directly observable, and given confidence that the production of the WR does not introduce any artifacts, the evaluation will, in practice, be carried out by comparing the behavior of the WRs. But it should always be clear that both the WR and the DTCR can be considered “life-like” only in virtue of the life-like behavior of the medium which gives rise to them. In A-Life, the physical medium should be judged to be lifelike or not, and then that attribution is projected upward (not downward, as in AI) to the representations that system generates. A-Life attempts to rein in some of the extreme liberalism of the traditional multiple realizability thesis by arguing that the physical substrate which constitutes an alleged biological system must be evaluated, not just the gross, high-level “functional” properties and behavior.
In the case of Blip World, this evaluation would involve noticing that there is some physical pattern of high and low voltages in the RAM of the machine which physically manipulates the rest of the machine into producing identical (or, when mutations occur, almost identical) copies of that pattern. Other patterns arise, some of which are more successful at manipulating the rest of the machine.
In the end, we may well decide that what is going on in the RAM of a specific Blip World like Tierra is just not similar enough to natural life to warrant the claim of artificial life. For instance, the Tierra organisms lack both development (they lack anything that resembles morphogenesis) and metabolism, and biologists may decide that these features are indeed crucial for characterizing something as a true biological system. And perhaps, as Michael Dyer (personal communication) has suggested, the physics of the internal world of a computer is just too simple and regular, compared to that of the Terrestrial world, to support the complex entities typically associated with life. However, such a decision must be made primarily on the basis of continuing work within theoretical biology.
In any case, I have proposed an answer to the philosophical question I set out when I introduced the Blip World vs. Blob World thought experiment: When deciding whether a particular Blip World program is truly biological, in virtue of what is that decision made? I have argued that it is in virtue of Blip World's physical properties (not its computational properties) that it exhibits relevantly biological behavior. While it is true that the medium in which this behavior is found is a “computer,” we should never forget that our computer is not some kind of Platonic “purely computational system.” It is a very down-to-earth physical system: a machine. Not everything that a computer does is “computational” in nature. My NeXT computer workstation can not only simulate a paperweight, it can actually instantiate one as well. It can not only simulate the heat output of a NeXT workstation; as a NeXT workstation, it also produces real (not simulated) heat.3
The claim here is that what is going on inside a computer running a Blip World program is not a computational simulation of life. It is instead an automated physical procedure for seeding the computer's RAM with appropriate physical patterns of high and low voltages, and for appropriately visualizing the resulting dynamics. The fact that all this is going on in a medium that is typically used to perform operations that are systematically interpretable in computational terms is irrelevant.
This claim is highly counter-intuitive. I am suggesting that if Blip World is judged to be alive, it will be so on the basis of its physical, not its computational, properties. The Blip World I have described exhibits the property of self-replication in the same way my workstation exhibits the property of producing heat. Real, physical self-replication is going on inside the computer's RAM, as certain patterns of high and low voltages manipulate neighboring locations until they exhibit an identical pattern of high and low voltages. This is not simulated or as if self-replication, this is instantiated self-replication.4
I hope that I have shown that the application of traditional AI philosophical analysis to prima facie similar situations in A-Life can be misleading, saddling A-Life with problems and concerns (like the Chinese room) that it can do well without. However, the application of AI thinking to A-Life is an appealing one, and presumably has its utility. To what extent, and in which situations, is such a comparison fruitful? This question is the topic of the second half of this paper.
3.1 Analogies and Strategies
In “Learning from Functionalism-- The Prospects for Strong Artificial Life,” Elliott Sober explores the following analogy: “Artificial intelligence is to psychology as artificial life is to biology.” With this analogy (which I call the “Sober analogy”) he sketches a variety of positions and concerns from traditional philosophy of AI as they would appear in the philosophy of A-Life.5 He discusses “strong” and “weak” A-Life, biological dualism and identity theory, biological multiple realizability, etc. Sober eventually comes to argue for a functionalist approach to biology and A-Life which parallels the prominent philosophical position of the same name found in psychology and AI.
Sober is not alone in seeing parallels between AI and A-Life. In his seminal essay introducing the first A-Life proceedings, Chris Langton follows a similar path (see, in particular, Figure 11, p. 40). He notes a similarity between connectionist AI (where relatively complicated “intelligent” behavior is generated using a relatively simple structural substrate) and A-Life modeling (where relatively complicated “living” behavior is generated using a relatively simple substrate, as in cellular automata). Similarly, Pattee notes that “It is clear from this workshop [Artificial Life I] that artificial life studies have closer roots in artificial intelligence and computational modeling than in biology itself.”
While this evidence indicates a connection between A-Life and AI, what Sober is arguing for is a close relationship between the philosophical situations in which each field finds itself. This position also seems to have support in the A-Life literature. Not only does Sober argue for an A-Life version of functionalism, there are discussions of an A-Life “Turing test”, an A-Life “Chinese room”, and an A-Life hardware-software distinction. Given that A-Life is generally free of philosophical discussion (some would say refreshingly free), these examples suggest that Sober is not alone in pointing out a deep philosophical connection between AI and A-Life.
The Sober analogy is an appealing one, and there is no doubt a lot of truth in it. Where AI is the synthetic, engineering counterpart of the more analytic science of theoretical psychology, A-Life is the synthetic, engineering counterpart of the more analytic science of theoretical biology. Both AI and A-Life make extensive use of the digital computer and computer models of their respective phenomena. Both AI and A-Life argue that we can, in principle, build artificial examples of what have to this day been phenomena of purely natural origin.
However, analogies are not very useful by themselves. They just suggest that there are similarities (and differences) between things. What would be more useful would be a methodology based on the analogy which makes the analogy do some work. In other words, one wants to turn a simple logical relationship into a methodology. The “work” such a methodology could do for a new endeavor like A-Life might include setting out the set of important philosophical metaphors, positions and distinctions to be used in that endeavor. I feel that the above examples of the application of traditional AI distinctions to A-Life imply just such a methodology. In its most extreme form, this strategy (which I call the Global Replacement Strategy, “GRS” for short) is to take the thirty years of avid discussion in the philosophy of AI and translate it into what will then be the “philosophy of A-Life.” This strategy gives A-Life a way of generating a complete and well-worked-out philosophical landscape, merely by taking the canon of the philosophy of AI and (stealing a concept from word processing) globally replacing all occurrences of the word “intelligence” with the word “life.”
This extreme application of the Sober analogy is not without its merits. It allows the still-embryonic A-Life to take advantage of the large philosophical armory that AI has struggled to develop over the better part of three decades. A-Life can dispense with doing any of this hard work for itself. In the mere five years since its inception, so goes the GRS argument, A-Life has been handed a rich and varied philosophical tapestry of positions, arguments, and metaphors to rival that of any other, more established, endeavor.
However, no matter how appealing it might seem, GRS is not the best course for the A-Life community to take. There is good reason to believe that there is much to be gained by originating a novel philosophy of A-Life, with little derivation from traditional philosophy of psychology and AI. As illustrated in the Blip World vs. Blob World example, thinking of A-Life in traditional AI terms can lead one astray. This example is one illustration of the dangers of the GRS, but a more general account of its hazards is needed.
3.2 Dis-analogies between life and mind
Like all analogies, the Sober analogy does not claim that the central phenomena of psychology (“mind”) and biology (“life”) are identical, but it does suggest that the ways these phenomena are (or should be) handled in their respective domains are significantly parallel. GRS is calculated to make use of that parallelism, turning the analogy into a constructive strategy for defining the proper philosophical problem space of A-Life. While attractive, there is a problem with this picture. The existence of important dis-analogies between the domains of psychology and biology points to large areas of concern that would resist the simple translation of one field into the other. In the remainder of this paper, I will consider what I believe are the two most important differences between the phenomena of life and mind: the lack of a strong eliminative materialist position in biology and the lack of a strictly biological concern with the subjective.
3.3 Folk biology and eliminative materialism
When looking at the arguments of those who wish to allege a strong analogy, it is often more instructive to note what the author fails to mention than what he does. Among the positions traditionally available to the philosopher of AI (and, mutatis mutandis, to the would-be philosopher of A-Life), Sober mentions dualism, identity theory, and functionalism. But one position he fails to mention is eliminative materialism (EM). Originally argued by Paul Feyerabend and currently championed by Stephen Stich and Paul M. Churchland, EM is primarily a thesis about proper scientific explanation. In particular, it rejects the notion that scientific explanation must be carried out in terms of our folk scientific conception of ourselves. A “folk theory” is just another name for our common-sense notions about a particular domain. For example, Aristotelian physics might be considered an explication of ancient Greek folk physics: a physics in which rocks fall because they desire to return to the place of their origin, and in which heavier objects fall faster than lighter ones. Folk psychology would consist of the myriad rules of behavior humans use in their everyday relations with one another. (See Churchland for a sketch of these rules.) Central to this folk theory is the liberal attribution of “beliefs,” “desires,” “moods,” etc. to the entities that make up the domain of psychology: people, pets, fictional characters, etc. The issue with folk theories is not whether they are useful abstractions or whether they are important to our day-to-day dealings with the world. (They are essential. Just reflect on the central role folk psychological attribution plays in our justice system, for instance.) The issue is whether these common-sense theories have any special status within science. In the case of contemporary scientific physics, it is accepted that folk physics has no special status.
If physicists can explain the motion of bodies without anthropomorphizing them, then physics should do so (and it does).
The status of folk psychology is very different. As mentioned above, Paul Churchland has argued that not only can folk psychology be banished from a mature scientific psychology, but that the time has come to actually do so. In making his case against folk psychology, he mounts a three-pronged attack: First, he reminds us that we should assess a theory not only on its successes, but also on its failings. There is a large inventory of presumably psychological phenomena that folk psychology simply fails to address adequately, including the nature and dynamics of mental illness, creative imagination, sleep, perceptual illusions, and learning, to name just a few. Second, he argues that the history of folk psychology does not give one reason to hope for the future of the endeavor. Churchland writes that “the story [of folk psychology] is one of retreat, infertility, and decadence.” It is a paradigm case of a degenerating research programme. Finally, Churchland outlines reasons for believing that folk psychology cannot easily be integrated with the rest of scientific explanations. Particularly, it seems to be very much at odds with the one field with which it would presumably have the closest associations: neuroscience. Citing these three deficits, Churchland argues that the days of folk psychology in scientific psychology are numbered.
However, Churchland's eliminative materialism in psychology is not without its objectors. Indeed, it is probably safe to say that it is still a minority view amongst philosophers of psychology. Some, like Dan Dennett, and Terence Horgan and James Woodward, have argued that folk notions such as “belief” and “desire” should or must play a role in our scientific psychological explanations. For years, Dennett has argued for the importance of the concept of an “intentional system” for psychological explanations. An intentional system is one which is “reliably and voluminously predicted” via the attribution of “beliefs,” “desires,” and other common-sense notions to that system. And Dennett cogently argues that humans and many other animals are just such systems. This being the case, a scientific psychology must employ concepts from folk psychology.
Horgan and Woodward take a slightly different approach. They argue that the case against folk psychology is overstated: that folk psychology is actually quite a good scientific explanation of psychology, regardless of the failings EM sees in it. They also argue that EM places overly stringent restrictions on how folk psychology should be integrated with our other scientific beliefs. That neuroscience cannot capture the basic notions of folk psychology in its theory is no reason to reject folk psychology in favor of neuroscience.
But for all this heated debate over the importance of folk theory to psychology, we do not find anything even vaguely similar to this going on in contemporary biology. On the face of it, it is not clear whether such a debate is even possible. The primary problem is determining whether a folk theory of biology even exists in the first place. And, if a “folk biology” can be rounded up for the purpose, will its fate be more like that of folk psychology or folk physics?
The first place one might look for a folk biology is in the lore of the “common person,” that general framework of common sense and rules of thumb which has served our species so well through the ages. Aside from common-sense psychological knowledge about natural phenomena (e.g., “Always avoid contact with female bears when they are with their cubs, as mother bears are prone to protective violence when they believe their young are threatened,” “My dog is standing next to the door because he wants to go out.”), there seems to be little in the way of what might be called specifically biological knowledge.
There is a good deal of folk knowledge of breeding, such as the old maxim that “like breeds like.” The dangers of inbreeding, and the fact that animals will only mate with their own kind, have apparently been well known to breeders for centuries. Our first candidate for a folk biology, then, would be some version of the science of breeding. Indeed, part of the inspiration for Charles Darwin's Origin of Species was the great diversity of types of pigeon that breeders had been able to bring about (even without knowledge of Mendelian genetics).
It is appropriate that Darwin's ground-breaking work should be mentioned, as its title names that which would arguably be the central notion of any possible folk biology: the concept of a “species.” The notion that the biological world is made up of distinct kinds of creature is probably the first principle of common-sense biology. The Old Testament, Native American mythology, and many other creation stories share the common feature that distinct kinds of creature were created separately. And perhaps the biggest job of a scientific biology, from Aristotle onwards, has been the Herculean task of simply cataloging all the kinds of creature found within our incredibly diverse ecosystem. The notion of distinct species is so central to our notion of what biology should be that this was the central fact which Darwin felt called upon to explain with his theory of natural selection.
Along with this notion that there are different kinds of creature in the biological realm, perhaps another central notion of folk biology would be that the biological world constitutes a fundamentally different set of things, i.e., that there is something distinct and special about biological entities which separates them from the rest of the furniture of the universe. This notion of an essential difference between living and non-living things is perhaps best captured in the concept of the “vital spirit,” that substance which is the essence of the living. Possession of this spirit is supposed to be what makes a living cell different from a non-living collection of the same chemicals. Though the popularity of the belief in some kind of nonmaterial animating “spirit” has declined in this century, the crux of the issue survives in the demands that society places on biologists and physicians to come up with reliable criteria of “life” and “death.”
We are beginning to see that it is at least possible that there is something which answers to the name “folk biology.” It would have an ontology (that the world consists of the “biological” and the “non-biological,” and that the biological world is made up of distinct kinds or “species”). It would also have rules for the behavior between the elements of this ontology (like the laws of breeding). Folk biology might not seem to have the richness typically attributed to folk psychology (most of the breeding rules would seem to delineate all the things with which a given species cannot breed), but that might be because I simply have not adequately characterized it in the small space here. But one can imagine that some such likely story might be put together.
However, even if the existence of folk biology is granted, it must be noted that, unlike the situation in psychology, there does not seem to be anybody interested in arguing for folk biology as the necessary or appropriate language of biological explanation. Where there is vociferous debate in the philosophy of psychology, there is only silence in the philosophy of biology.
If the preceding discussion has any cogency, it indicates that the current state of biology on the issue of eliminative materialism and the role of folk theory is different from that of psychology. This in turn indicates an area of dis-analogy within the Sober analogy. However, this is not the most striking difference between the study of the mind and the study of life, as discussed next.
3.4 Lack of the subjective in biology
The most striking difference between psychology and biology is one which may underlie many of the other differences I have sketched above. Psychological explanation has to explain more than just the behavior of psychological systems. One of the things that makes psychology such a difficult endeavor is that in addition to the straightforward behavioral, third-person phenomena which stand in need of explanation, in the case of humans at least, there seem to be additional experiential, first-person phenomena. Part of the burden of psychology is to explain (or explain away) phenomena related to the prima facie claim that psychological systems exhibit attention, intentionality, consciousness, self-consciousness, a “point-of-view,” the property of there being “something-it-is-like-to-be” that entity, qualia, or any other of the constellation of concepts relating to the subjective nature of the psychological. Indeed, it seems plausible that it is this element of the psychological which makes it so resistant to mechanistic or reductionistic explanation. It is the difficulty of even conceiving a conscious mechanism which hampers the would-be psychological mechanist. Whatever consciousness is, it seems the sort of thing about which no collection of third-person facts would ever be complete; that after science has done its best, there will still remain first-person facts inaccessible to the traditional scientific method.
It is not our place here to assess or take sides on the role or nature of consciousness in psychology. It need only be noted that there is no concern in biology analogous to this psychological concern with consciousness and the subjective. Perhaps we should be thankful, for this is one less obstacle for theoretical biology to overcome, or for A-Life to worry about. Biological phenomena, unlike their psychological counterparts, seem to be exclusively of the behavioral, third-person variety. There is no worry that, after describing all there is to measure of the physical nature of the system, there will be “something else” at which science cannot get. Now, determining what the correct parameters actually are and understanding exactly how biological systems produce the relevant behavior is a tough enough job on its own, but at least the phenomena in question are there -- waiting to be measured, probed, and replicated. (I should note that Stevan Harnad makes many of these same points.)
A summary of the discussion so far: Sober proposed that the relationship between A-Life and biology is analogous to that between AI and psychology. I noted that this seems to be a prominent point of view within the A-Life community. In fact, there seems to be support for the even stronger claim that the philosophy of A-Life should be the philosophy of AI translated into biological terms, a strategy I call the Global Replacement Strategy. However, we have seen that a variety of issues and debates endemic to the philosophy of AI -- those relating to eliminative materialism and the subjective nature of mind -- have no counterpart in biology. These issues cannot be discarded as minor side-issues within the philosophy of AI. Quite to the contrary, if the amount of ink spilled over them is any indication, they are among the most central philosophical issues of that endeavor. But, if these important issues cannot be translated into the philosophy of A-Life, what does this indicate about the general usefulness of GRS? It indicates that, whatever the alleged validity and usefulness of translating concepts, problems, and metaphors from AI into A-Life as a constructive strategy, the GRS is clearly too extreme. For all the similarities between AI and A-Life, as indicated by the Sober analogy, the phenomena of intelligence and life are sufficiently different to preclude any kind of straightforward relationship between the two sciences.
In this paper I have suggested that the relationship between artificial intelligence and artificial life is not as useful as it might at first seem. To date, the philosophical discussion within A-Life has been littered with references to positions, metaphors, and arguments made popular within the history of AI. However, with the notable exception of Sober's paper, we have seen little discussion specifically of the methodology of importing concepts from AI into A-Life. By and large, the justification for this procedure has been accepted simply on the basis of the close intellectual ties between the two fields and their respective practitioners. This paper is intended not as a refutation of that methodology, but as a caution against its unreflective overuse.
We should not be surprised that there are concepts from AI that will be of use to A-Life. However, contrary to the Global Replacement Strategy, a concept is not necessarily useful to A-Life in virtue of its being a concept from AI. One needs to make an argument for its usefulness beyond that which is provided by the Sober analogy. It is the conclusion of this paper that the burden of proof lies with anyone wishing to make use of a concept from AI in A-Life. The Sober analogy merely indicates a relationship between the two disciplines, and what one should expect is a sharing of ideas between them, not an eclipse of one by the other.
I would like to thank Aaron Sloman, Marcus Peschl, Derek Smith, Tom Ray, Ron Chrisley, and Inman Harvey for enlightening discussion on the topics of this paper. Georg Schwarz, Sandra Mitchell and Jim Murray all read drafts and, in the process of disagreeing with most of what I had to say, offered valuable criticism. Important feedback was also received during presentations of this material to UCSD's Experimental Philosophy Lab, the Comparative Approaches to Cognitive Science Summer School (1992, Aix-en-Provence), and the University of Birmingham School of Computing Science.
1. This is an improved version of my paper for Artificial Life III.
2. Modern versions of functionalism add the additional criterion that a system must instantiate the right kind of functionally-defined (generally, computational) description, but this description is made at such a level of abstraction that the physical details of the system's constitution are still largely irrelevant. Just as many physically different systems could, in principle, pass the Turing Test, many physically different systems can, in principle, instantiate a given functional description.
3. It might be argued that all properties are functional properties, including the ones I am calling “physical” properties, and that the distinction I draw is not between functional (computational) and non-functional (physical) descriptions, but between functional descriptions of differing degrees of physical specificity. In response to this suggestion, I would recast the conclusion of my argument in this way: A-Life differs from AI in that A-Life requires much more fine-grained functional descriptions than AI requires. The evaluation of Blip Worlds requires understanding much more about the physical details of the systems than analogous evaluations of AI systems would require (at least, according to functionalism).
4. Note that I am not claiming that the simulation is not occurring on a computer (this is obvious), I am only claiming that such a simulation is not making use of the well-known computational properties of the computer. Similarly, the use of my NeXT computer to determine the heat output of an identical make of computer depends on using a computer as a non-computational simulation. Perhaps the term “model” (as in the scale models used by architects, or the scale models of warplanes built by children) is a better term in this situation.
5. Traditionally, the philosophy of AI has been seen as a specialization of the more general set of concerns of the philosophy of psychology. Similarly, one would expect that a “philosophy of A-Life” would be a specialization of the philosophy of biology. In lieu of the cumbersome phrasing “philosophies of psychology and AI” and “philosophies of biology and A-Life,” I will refer to only the “philosophy of AI” and the “philosophy of A-Life” for the sake of brevity.
Figure 1: A first look at Blob World (left side) and Blip World (right side). Both worlds consist of a medium (where the microscopic phenomena of interest take place) which is appropriately visualized by some mechanism. These mechanisms create a macroscopic representation of the respective behaviors of the two media.
Figure 2: A second look at Blob World (left side) and Blip World (right side). As in Figure 1, there are separate media that are transformed into representations by their respective visualizing mechanisms. However, there are two kinds of representation. A window representation presents a continual series of “snapshots” of the system. A dynamic time-course representation presents summary data of the dynamics of the system in a continually updating fashion.
 Adami, C., and Brown, C. T. “Evolutionary learning in the 2D artificial life system ‘avida’.” In Artificial Life IV. Cambridge, MA: MIT Press, 1994.
 Bedau, M. A., and Packard, N. H. “Measurement of evolutionary activity, teleology, and life.” In Artificial Life II, edited by C. Langton, C. Taylor, J.D. Farmer, & S. Rasmussen. SFI Studies in the Sciences of Complexity, Proc. Vol. X. Redwood City, CA: Addison-Wesley, 1992: 431-461.
 Churchland, P. M. Scientific Realism and the Plasticity of Mind. Cambridge Studies in Philosophy. Cambridge: Cambridge University Press, 1979.
 Churchland, P. M. A Neurocomputational Perspective: The Nature of Mind and Structure of Science. Cambridge, MA: MIT Press, 1989.
 Darwin, C. On the Origin of Species (First Edition). London: John Murray, 1859.
 Dennett, D. C. Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge, MA: MIT Press, 1978.
 Dennett, D. C. The Intentional Stance. Cambridge, MA: MIT Press, 1987.
 Farmer, J. D., and Belin, A. d'A. “Artificial Life: the coming evolution.” In Artificial Life, edited by C. G. Langton. SFI Studies in the Sciences of Complexity, Proc. Vol. VI. Redwood City, CA: Addison-Wesley, 1989: 815-840.
 Feyerabend, P. “Explanation, reduction and empiricism.” In Scientific Explanation, Space and Time, edited by H. Feigl and G. Maxwell. Minnesota Studies in the Philosophy of Science, vol. 3. Minneapolis: University of Minnesota Press, 1962: 28-97.
 Feyerabend, P. “Materialism and the mind-body problem.” Review of Metaphysics 17 (1963): 49-66.
 Feyerabend, P. “Mental events and the brain.” The Journal of Philosophy 60 (1963): 295-296.
 Harnad, S. “Artificial Life: synthetic vs. virtual.” In Artificial Life III, C. G. Langton (ed), SFI Studies in the Sciences of Complexity, Proc. Vol. XVII. Redwood City, CA: Addison-Wesley, 1994: 539-552.
 Harnad, S. “Levels of functional equivalence in reverse bioengineering.” Artificial Life 1 (1994): 293-301.
 Horgan, T., and Woodward, J. “Folk psychology is here to stay.” The Philosophical Review, XCIV (April 1985): 197-226.
 Keeley, B. L. “Against the global replacement: on the application of the philosophy of artificial intelligence to artificial life.” In Artificial Life III, C. G. Langton (ed), SFI Studies in the Sciences of Complexity, Proc. Vol. XVII. Redwood City, CA: Addison-Wesley, 1994: 569-587.
 Laing, R. “Artificial organisms: history, problems, directions.” In Artificial Life, edited by C. G. Langton. SFI Studies in the Sciences of Complexity, Proc. Vol. VI. Redwood City, CA: Addison-Wesley, 1989: 49-61.
 Langton, C. G. “Artificial Life.” In Artificial Life, edited by C. G. Langton. SFI Studies in the Sciences of Complexity, Proc. Vol. VI. Redwood City, CA: Addison-Wesley, 1989: 1-47.
 Pattee, H. H. “Simulations, realizations, and theories of life.” In Artificial Life, edited by C. G. Langton. SFI Studies in the Sciences of Complexity, Proc. Vol. VI. Redwood City, CA: Addison-Wesley, 1989: 63-77.
 Ray, T. “An approach to the synthesis of life.” In Artificial Life II, edited by C. Langton, C. Taylor, J.D. Farmer, & S. Rasmussen. SFI Studies in the Sciences of Complexity, Proc. Vol. X. Redwood City, CA: Addison-Wesley, 1992: 371-408.
 Ray, T. “An evolutionary approach to synthetic biology: Zen and the art of creating life.” Artificial Life 1 (1994): 179-209.
 Searle, J. R. “Minds, brains and programs.” In M. A. Boden (ed.), The Philosophy of Artificial Intelligence. Oxford: Oxford University Press, 1990: 67-88.
 Sober, E. “Learning from functionalism: prospects for strong artificial life.” In Artificial Life II, edited by C. Langton, C. Taylor, J.D. Farmer, & S. Rasmussen. SFI Studies in the Sciences of Complexity, Proc. Vol. X. Redwood City, CA: Addison-Wesley, 1992: 749-765.
 Stich, S. From Folk Psychology to Cognitive Science: The Case Against Belief. Cambridge, MA: MIT Press, 1983.
 Turing, A. M. “Computing machinery and intelligence.” In M. A. Boden (ed.), The Philosophy of Artificial Intelligence. Oxford: Oxford University Press, 1990: 40-66.