Functionalism and the Number of Minds
Alexander R. Pruss
January 27, 2004
Abstract. I argue that standard functionalism leads to absurd conclusions as to the number of minds that would exist in the universe if persons were duplicated. Rather than yielding the conclusion that making a molecule-by-molecule copy of a material person would result in two persons, it leads to the conclusion that three persons, or perhaps only one person, would result. This is absurd, and standard functionalism should be abandoned. Social varieties of functionalism fare no better, though an Aristotelian variety of functionalism that accepts irreducible finality escapes this particular reductio.
According to functionalism, a mind is a particular kind of functional system. A functional system is composed of a number of subsystems considered as black boxes, each with inputs, outputs and states, together with patterns of behavior that describe, for a box in a given state, what outputs result from the arrival of a given set of inputs on its input channels and how the state changes in the process; the output channels of some boxes are connected to some of the input channels of others. Each channel carries input/output values. Some input channels of boxes will not be connected to the output channel of any other box, and some output channels will not be connected to the input channel of any other box; these I will call “global input channels” and “global output channels”, respectively. Moreover, for added realism, we should think of each box as having a special state that I will call its broken state, in which its outputs are undefined. For convenience I will count a system that has or has had any box in a broken state as itself completely dead. Thus, a broken state is indicative of very serious damage, sufficient to count as death for the system.
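To fix ideas, here is a minimal sketch of this picture in Python. It is only an illustration of the definitions just given, not part of the argument; the names Box, System and dead, and the particular finite representation, are my own choices.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Tuple

@dataclass
class Box:
    """A black box: a set of states, a designated broken state, and a
    deterministic rule giving (new state, outputs) from (current state, inputs)."""
    states: frozenset
    broken: Any
    step: Callable[[Any, Tuple], Tuple[Any, Tuple]]
    state: Any = None

@dataclass
class System:
    """Boxes plus wiring. A wiring entry (b, i) -> (c, j) connects output
    channel i of box b to input channel j of box c; channels with no wiring
    entry are the system's global input and output channels."""
    boxes: Dict[str, Box]
    wiring: Dict[Tuple[str, int], Tuple[str, int]]
    was_broken: bool = False

    def dead(self) -> bool:
        # By the convention adopted above, a system that has or has had any
        # box in its broken state counts as completely dead.
        self.was_broken = self.was_broken or any(
            b.state == b.broken for b in self.boxes.values())
        return self.was_broken
```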
I will assume the patterns of behavior are deterministic—whether a similar argument to mine can be constructed in the indeterministic case is an open question. More will be said about this assumption later.
Then, two systems are (functionally) isomorphic if there is a one-to-one map between their respective black boxes, input/output values, input/output channels, and the states of the black boxes, which respects the patterns of behavior of boxes and their interconnections, and which maps broken states onto broken states. Functionalism, then, holds that any system isomorphic to a mind is itself a mind.
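The isomorphism condition can likewise be made vivid with a toy test. The sketch below assumes a simplified finite representation in which each box carries an explicit transition table, and it omits the clause about matching channel interconnections; it checks only that the proposed bijections preserve each box's behavior and map broken states onto broken states.

```python
def is_isomorphism(sys1, sys2, box_map, state_map, value_map):
    """sysN: {box_name: {'broken': state,
                         'table': {(state, input_tuple): (new_state, output_tuple)}}}
    box_map, state_map, value_map: proposed bijections from sys1's boxes,
    states and values to sys2's."""
    for name, spec in sys1.items():
        spec2 = sys2[box_map[name]]
        if state_map[spec['broken']] != spec2['broken']:     # broken maps to broken
            return False
        mapped = {(state_map[s], tuple(value_map[v] for v in ins)):
                  (state_map[t], tuple(value_map[v] for v in outs))
                  for (s, ins), (t, outs) in spec['table'].items()}
        if mapped != spec2['table']:                         # behavior is preserved
            return False
    return True
```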
Given a functional system, we can talk about its history. The history of a functional system is a complete record of the sequence of states, inputs and outputs of all of its boxes over its lifetime, i.e., while it is not dead. We can say that two isomorphic functional systems have “isomorphic histories” if the isomorphism between the two systems also maps the history of one onto the history of the other.
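In the same toy representation, a history is just a time-indexed list of snapshots, and two histories are isomorphic when the maps making up the system isomorphism carry one list of snapshots onto the other. A sketch, with the snapshot format an assumption of mine:

```python
def map_snapshot(snap, box_map, state_map, value_map):
    """A snapshot records, for each box, its state and the values on its
    input and output channels at one moment of the system's lifetime."""
    return {box_map[b]: (state_map[s],
                         tuple(value_map[v] for v in ins),
                         tuple(value_map[v] for v in outs))
            for b, (s, ins, outs) in snap.items()}

def histories_isomorphic(hist1, hist2, box_map, state_map, value_map):
    return (len(hist1) == len(hist2) and
            all(map_snapshot(s1, box_map, state_map, value_map) == s2
                for s1, s2 in zip(hist1, hist2)))
```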
The beauty of functionalism is that it does not matter how the black boxes are implemented. They may be made of neurons, or computer chips, or giant brass cogwheels. Only functional features of the system matter, i.e., features preserved under functional isomorphism. Any system isomorphic to a mind is a mind, and any system whose history is isomorphic to the history of a mind that is consciously thinking is itself a mind that consciously thinks. I do not commit the functionalist to the further claim that two systems with isomorphic histories think the same thoughts. One might, for instance, be a content-externalist, so that whether a person is thinking about H2O or of XYZ depends not only on the functional interconnections in place, but also on whether it was actually H2O or XYZ that caused a certain sensory experience. Or one might insist that two people thinking “I am a person” think different thoughts because of the indexicality of “I”. Nor do I commit the functionalist to the claim that two systems with isomorphic histories have experienced the same qualia: on some functionalist accounts of qualia, the identity of a quale associated with a conscious experience is dependent on the particular implementation of the black boxes in the system. It is highly plausible that if functionalism is true, then two minds with isomorphic histories are thinking in some sense “the qualitatively same” thoughts, but I do not want to have to spell out this notion in detail.
Attractive as functionalism may be, I will argue that in its standard materialist incarnation, according to which it is possible for at least some minds to be material objects, it is false.
There is also a generalization of functionalism which I will call social functionalism, which may be plausible to those who think, perhaps on private language argument grounds, that there cannot be a thinking being that exists on its own. On this generalization, we are to think of our functional systems as embedded in societies, so that the outputs of some of the systems are linked to black boxes in the environment and to the inputs of the other systems. We can think of a society, then, as a large functional system, with its black boxes distinguished into a number of “individuals”, so that an individual is a set of black boxes, as well as into a bunch of black boxes collectively labeled “environment”. Note that environment boxes don’t have a broken state. Two such functional systems are socially isomorphic if there is a functional isomorphism between them that maps the environment boxes of one system onto the environment boxes of the other and the boxes of individuals onto the boxes of individuals, and that respects the boundaries between individuals in the sense that the boxes of any one individual in one system are always mapped onto the boxes of some one individual in the other system. We can also talk about social histories, which include the histories of the individuals and of the environment. Social functionalism then holds that if we have a society that contains minds, then another society socially isomorphic to this one contains just as many minds. Moreover, if the two societies have isomorphic social histories, both contain equal numbers of actually thinking minds.
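The two extra conditions on a social isomorphism can be stated mechanically. The following sketch (mine; it assumes the box mapping is already known to be a functional isomorphism and a bijection over all boxes) checks only that environment boxes go to environment boxes and that each individual's boxes land within a single individual.

```python
def respects_social_structure(box_map, individuals1, individuals2, environment1, environment2):
    """individualsN: {individual_name: set of box names}; environmentN: set of box names.
    Assumes individualsN and environmentN together cover all of system N's boxes."""
    if {box_map[b] for b in environment1} != set(environment2):
        return False                                   # environment must map onto environment
    owner2 = {b: person for person, boxes in individuals2.items() for b in boxes}
    for person, boxes in individuals1.items():
        if len({owner2[box_map[b]] for b in boxes}) != 1:
            return False                               # one individual's boxes scattered
    return True
```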
I am not going to say very much about social functionalism, because my arguments against standard functionalism also apply to social functionalism. One will just have to replace individuals in the constructions with societies, and functional isomorphism with social isomorphism.
Suppose the universe—the aggregate of all spatiotemporally located entities—contains only one mind with a history of conscious thought. Duplicate the mind, its history and its immediate environment far enough away in the universe so that there is no functionally relevant causal interaction between the two minds. We have a very strong intuition that in doing so we have done nothing more and nothing less than doubled the number of conscious minds in the universe. The universe now contains exactly two minds with conscious histories.
Functionalism certainly appears committed to the claim that there are at least two conscious minds in the universe now. After all, the two minds, call them A and A*, together with their histories, are duplicates of one another, and hence, a fortiori, they are isomorphic and have isomorphic histories, where the isomorphism is the straightforward one induced by the duplication, which I will indicate with a *. Thus, if x is a box, a state, etc., in A, I will use x* to indicate the corresponding duplicate item in A*.
The problem now is that functionalism multiplies minds beyond necessity. For consider a third functional system, A**. Each box b** of A** consists of the aggregate (i.e., mereological sum) of a box b of A and its duplicate b* from A*. The box b** has exactly the same number of states as b (or b*) does, and I will denote a state of it as s** where s is a state of b. Specifically, b** counts as being in state s** just in case b is in state s and b* is in the corresponding state s*; should b and b* ever be in non-corresponding states, b** counts as being in its broken state, broken**.
Furthermore, the input/output channels of A** are the aggregates of pairs of corresponding input/output channels of A and A*, and the values carried in these channels can be taken to be pairs of values. The two paired values will always correspond as long as A** is alive, given how A**’s broken states were defined.
And we’re done. We have a new functional system, A**. Moreover, A**’s history is isomorphic to the history of A under the obvious isomorphism that takes a box b to b** and a state s to s** and that maps input/output channels in the natural way. Hence, A** is a mind thinking conscious thoughts. Moreover, plainly A** is distinct from A and from A*. Thus, if functionalism holds, there are at least three minds thinking consciously, whereas we have seen that there are only two. Thus, functionalism is false.
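The construction of A** is entirely mechanical, as the following sketch is meant to bring out. The function below (an illustration of mine; the name pair_box and the convention of representing broken** by a sentinel are assumptions of the sketch) turns the transition rule of a box b into the rule of the aggregate box b**: states and channel values become pairs, the two halves run in lockstep, and any loss of synchrony puts b** into broken**.

```python
BROKEN = "broken**"          # sentinel for the broken state of an aggregate box

def pair_box(step, broken):
    """step, broken: the transition rule and broken state of a box b (its
    duplicate b* behaves identically by hypothesis). Returns the rule of b**."""
    def step2(state2, inputs2):
        if state2 == BROKEN:
            return BROKEN, None                    # outputs undefined once broken
        s, s_star = state2
        ins, ins_star = inputs2
        if s == broken or s != s_star or ins != ins_star:
            return BROKEN, None                    # out of sync: b** is broken**
        new_s, outs = step(s, ins)                 # both halves evolve identically,
        return (new_s, new_s), (outs, outs)        # so compute once and pair up
    return step2
```

As long as A and A* never desynchronize, each b** simply shadows b, which is why A**’s history is isomorphic to A’s.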
But perhaps functionalism will not let us say that A** is a mind distinct from A. After all, A is physically contained in A**. How can one have two minds, one of which is contained within the other? But such considerations will not help our functionalist. For suppose that A’s containment in A** removes A’s status as a mind on its own. Then by parity of reasoning, A* is also not a mind on its own. Hence, we have only one mind thinking conscious thoughts, contrary again to our clear intuition that we have two.
However, perhaps it is not A’s status as a mind on its own that is destroyed by the fact of A’s containment in A**, but it is A**’s status that is thus destroyed. The idea seems to be that if there is a smaller mind within what would otherwise be a larger mind, the larger mind does not exist. But this is surely contrary to standard functionalist intuitions. Imagine, for instance, that you take some very simple and tiny black box in my brain (assuming for the sake of the argument that my mind is material) and replace it with another system that does the same job. I will presumably remain a thinking thing. Indeed, this kind of an intuition is a standard argument for functionalism—one typically imagines doing the replacement box-by-box to conclude that functionalism is true. So take the tiny black box and replace it with a conscious homunculus genetically pre-programmed to do that job. The job is, of course, very simple. The homunculus, being a genuine human-like critter, will find that job on its own boring, and no doubt will do all kinds of other things as well. Indeed, he will be an independent mind, I may suppose, and might engage in scientific exploration of my brain. But, we may suppose, he’ll also reliably do the job of the box that he replaced. Well, then, on materialist functionalist grounds we have to admit that here we have a mind, namely mine, that contains another mind, namely the homunculus’s. Hence, it is false that containment destroys the larger mind—or that it destroys the smaller mind, for that matter.
Nor will it do to complain that system A** is doubly redundant, and we are only concerned with non-redundant systems. A system is redundant to the extent that you can destroy parts of it and keep it running. Presumably the reason why A** is thought to be doubly redundant is that if you damage any part of A, A** will keep on running given that the A* parts will do the job, and likewise if you damage any part of A*, A** will also keep on running. But this is incorrect. For as soon as you damage any part of A, the A-boxes and the A*-boxes will get out of sync, and hence the A**-boxes will count as being in A**’s broken state (i.e., broken**), and so A** will count as dead. There is no redundancy induced by the doubling.
But these objections are distractions from the real objection which is that A** just doesn’t count as a single functional system. Isn’t it in two causally separated halves? Well, not necessarily. The assumptions made were that A and A* were to be separated in such a way that there is no “functionally relevant” causal connection, a term I did not define, though there is an obvious intuitive definition. Two systems have a functionally relevant causal connection providing that one, in the course of its history, is in such a position that the states its boxes are in and/or the data in its channels can causally affect the occurrence of a state of at least one box of the other system. Lack of a functionally relevant causal connection does not imply a total causal separation. After all, I can have two non-networked computers running in one room with no functionally relevant causal connection. If the computers are properly shielded, and if their inputs and outputs are kept separate (not even mediated by human agents), the functional states (e.g., the states of the CPUs and other components) of one will not be affected by those of the other, even though there will be causal connection: However good the shielding, there will be electromagnetic emissions from one that affect the flow of electricity in the other’s CPU. However, given decent shielding, this will not shift the state of the CPU, since a given 0/1 state can in practice be realized by a range of voltages, and so a small shift within the range will not affect what state the system is in. Admittedly, one could theoretically be a functionalist and claim that minds have zero leeway in this way: any causal affecting of a mind shifts the state. But that is surely implausible. Deng Xiaoping’s waving of his arm in the privacy of his bedroom would not affect any of my mental states, even if it exerted some gravitational influence on my brain and my brain were my mind or a part thereof.
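The leeway point can be put concretely: a discrete functional state is typically realized by a whole range of physical magnitudes, so a tiny causal influence need not be functionally relevant. A toy illustration, with made-up threshold values:

```python
def logical_bit(voltage):
    """A 0/1 functional state realized by a range of voltages (the particular
    thresholds are invented for illustration)."""
    if voltage <= 0.8:
        return 0
    if voltage >= 2.0:
        return 1
    return None          # indeterminate region; well-designed hardware avoids it

v = 3.1                  # the actual voltage on some line in one computer
nudge = 1e-9             # a minuscule electromagnetic or gravitational influence
assert logical_bit(v) == logical_bit(v + nudge) == 1
# The nudge is a genuine causal influence on the hardware, but it is not
# functionally relevant: it does not shift the state the system is in.
```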
Nor will it do to complain that a black box must, to be a single box, be spatiotemporally interconnected, whereas b**, as the aggregate of b and b*, is a scattered individual. For we can always tweak the case. Suppose that we’ve added a thin silk thread joining every pair of corresponding boxes from A and A* in such a way as not to induce any functionally relevant causal interaction. Our intuition that there are exactly two minds is unchanged. Certainly we wouldn’t produce a third mind just by putting functionally causally irrelevant threads between the parts of your brain and those of mine. But now we can consider b** to be the aggregate not just of b and b*, but of b, b* and the silk thread. And again functionalism gives us three minds in the place of two. Besides, this is a poor objection, because we can easily imagine how a bona fide black box might contain subcomponents that work together not by physical interconnection but by radio transmission.
But can’t the objector just say: “Well, sure, there may be causal interaction between box b and b*, but you just said that the causal interaction is not functionally relevant. It is the absence of a functionally relevant connection between parts of the box that makes it not be a single box, or between parts of a system that makes it not be a single system.” However, this forgets that “functional relevance” is relative to a functional system. When we are talking about A**, presumably we should be talking about the functional relevance of things to A**. Now, it may well be true that if a thing is not functionally relevant to a functional system, then it is not a part of the system. But in fact a box b of A is functionally relevant to A**. After all, were b in a different state from the state it in fact is in, then it would be out of sync with b*, and hence b** would thereby be in a broken state, and thus A** would be dead. What greater functional relevance could there be?
In general, a complaint that A** is not a genuine system is likely to be grounded in an intuition that the spatially scattered object b** in the original construction, or the object containing two spatially separated parts joined by a silk thread in the modified construction, is not a genuine box. But such a complaint ignores the basic guiding intuition of functionalism: We are dealing with black boxes, and it matters not how they do their job. To make a complaint stick, one would have to take the cover off a box and insist that the box consists only of those parts that are relevant to the functioning of the whole. But can we even say that b is a part of b** not relevant to the functioning of the whole? Of course not: we have already seen that if b changes state on its own, without b* doing the same thing, then b** enters a broken state. The state of b** thus depends on both the state of b and of b*. Besides, can’t we imagine that a functional system might sometimes contain a box which could be replaced with a different box which works by doing two tasks in parallel?
One might, however, complain about the disconnectedness of A** as a whole, its bipartite nature, even if this bipartite nature cannot be precisely expressed in a satisfactory way. There is, after all, an intuition that in some way A** is two systems. We can fix that. Say that two communications channels are “tied together” at their respective input (respectively, output) ends if at these ends a device is inserted that ensures that both channels always carry the same value, and what the value is depends on the values carried from or to both channels. One way to do this is to tie the two channels electronically together so that if both would otherwise have had the same value, that same value is allowed to flow onward, while if they would have had different values, some third value is made to flow. The exact details do not matter. Now “tie” the corresponding global input channels of A together with those of A*, with the tying always happening on the environment side (i.e., on the input ends of the channels) and do the same thing for the global output channels, again with the tying happening on the environment side (this time this will be on the output ends of the channels).
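One way the tying device might work is sketched below (the sentinel ERR stands in for the unspecified “third value”; as noted, the exact details do not matter):

```python
ERR = "third-value"      # an arbitrary extra value forced onto both channels

def tie(value_a, value_b):
    """Tie two channels together: if both would have carried the same value,
    that value flows onward on both; otherwise some third value does."""
    return (value_a, value_a) if value_a == value_b else (ERR, ERR)

# So long as A and A* in fact receive and produce identical values, the tied
# channels carry exactly what the untied channels would have carried:
assert tie(7, 7) == (7, 7)
assert tie(7, 8) == (ERR, ERR)
```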
This time, let A** be A together with A*, except with the devices that tie the global input/output channels together being now considered to be parts of the channels of A**. Clearly now A** is a single functional system. It is very hard for the functionalist to duck out of this conclusion. But our intuition that we have two, not three, systems should stand. Imagine that my input channels are tied to yours in this way—my eyes are now incapable of seeing anything you don’t see, etc. Surely, still, we have only two persons, you and me, and there is no third person as a result of our input channels being tied together. Of course, we’re only going to be able to function well as intelligent agents if we find ourselves in almost identical environments. But suppose we do. And now do the next step. Tie our output channels together. For instance, implant radio transmitters in the nerves going to our muscles so as to ensure that our muscles move in unison. Surely, once again, we do not create a third conscious mind just by tying our muscles together like this. Nor would we create a third consciousness if it turned out that we always acted alike and had the same inputs.
One might back this reasoning up with the following principle: The number of consciousnesses is not changed by tying together input/output channels in a way that does not in fact affect any of the inputs and outputs going into the minds. If ex hypothesi the two persons whose global channels are tied together had always received the same inputs and produced the same outputs, the “tying-together” would not have affected anybody’s inputs and outputs, and hence should not have changed the number of persons.
Thus, the intuition that system A** is objectionably bipartite can be met by “tying” the respective inputs of A and A* together and doing the same for the outputs.
But in any case, a functional system can be implemented in ways in which it is difficult to identify a given box as a single physically unified part. Consider, for instance, how one might emulate the CPU of one computer on a computer of a different sort. The physical CPU to be emulated will consist of a number of localizable boxes. Depending on the level of resolution at which our functional analysis is made, these boxes may be individual logic gates or they may be larger boxes, such as the arithmetic logic unit, the floating point unit, the cache, the input/output subsystems, etc.
Now, one might choose to emulate just the overall functioning of the CPU. This may not give a functional isomorphism. But one might instead emulate each box of the original CPU in software on the emulating system. Intuitively, the emulator, if correctly programmed, will be functionally isomorphic to the original. However, it is most unlikely that the boxes of the original CPU are going to be emulated by physically unified boxes in the new system. Rather, they are going to be emulated by areas of memory on the emulating system and have their causal interconnections mediated by the emulator software. Now, these areas need not be physically contiguous. There is nothing preventing the emulating system from storing some of the data pertinent to the state of a box b in an area A1 and some other data pertinent to that state in an area A2 that is physically quite distant from A1 on the emulating system, say on a different memory chip. For instance, the state of a box b could be described by a number from zero to seven, and this number could be encoded as three binary digits, each stored on a different memory chip. One piece of data need not even be stored at all times in the same physical location: a virtual memory system may end up paging the data out to a hard disk, and then paging it back in to a different physical area of memory later. If, as is plausible, box-by-box software emulation of a CPU constitutes a paradigmatic case of functional isomorphism, then the requirement that a given physically unified box correspond to something equally physically unified is misplaced.
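A toy illustration of such scattering (my own, with invented stand-ins for the memory chips): a box state numbered zero to seven is stored as three bits in three physically separate places, yet remains a perfectly well-defined functional state.

```python
chip0, chip1, chip2 = [0], [0], [0]        # stand-ins for three distant memory areas

def write_state(n):                         # scatter the state across the "chips"
    chip0[0], chip1[0], chip2[0] = (n >> 0) & 1, (n >> 1) & 1, (n >> 2) & 1

def read_state():                           # the emulator reassembles it on demand
    return chip0[0] | (chip1[0] << 1) | (chip2[0] << 2)

write_state(5)
assert read_state() == 5
# No single physically unified component stores the box's state, yet the
# emulated box is functionally none the worse for that.
```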
If one insists that there is something bogus about the broken state of A**, because A** still seems to “be working” even if in a broken state, and hence A** is not really a third mind, one can respond by saying that appearances can be deceptive. An item need not be smashed to bits to be “broken”. One can imagine a computer system that detects a hardware failure and prints out: “I’m dead, disregard all future output!” That would clearly be a fine broken state. Likewise, one could imagine that every component of the computer has two multicolored lights on it, and that the computer’s specifications are such that it counts as broken as soon as the two lights are of different colors—there are all kinds of ways of indicating or constituting failure. This is not too different from the case at hand.
But for the person who still would like a more visibly broken state, modify the situation as follows. Go back to the original untied A**. Imagine now a person, call him Black for reasons that will become apparent soon, and suppose that he watches both A and A* and as soon as he sees a divergence between the state of any box b of A and that of the corresponding box b* of A*, he blows both A and A* to pieces. However, in fact, he does not actually do anything, because there is perfect convergence. It is clear now that the aggregate of A, A* and Black has a bona fide broken state. Note, too, that the resulting aggregate is tied together, and it is hard to deny that it is functionally a whole. Observe, also, that Black, like his namesake in Frankfurt’s discussion of the Principle of Alternate Possibilities (??ref), is a purely counterfactual intervener. In the actual world, he does not intervene.
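Black’s role can be pictured as a simple monitoring routine, sketched below (my own illustration; the snapshot format is assumed). He checks every corresponding pair of boxes at every step and would destroy everything upon divergence, but since the histories in fact match perfectly, he never acts.

```python
def run_with_black(history_A, history_A_star):
    """history_A, history_A_star: lists of snapshots mapping each box name to
    its state at a given time, with box b of A matched to b* of A* by name."""
    for snap, snap_star in zip(history_A, history_A_star):
        for box, state in snap.items():
            if snap_star[box] != state:      # a divergence is detected...
                return "blown to pieces"     # ...and Black intervenes
    return "Black never intervened"

# With perfectly convergent histories the A-A*-Black aggregate has a genuine
# broken state available to it, yet Black remains purely counterfactual:
hist = [{"b1": "s0", "b2": "s1"}, {"b1": "s2", "b2": "s1"}]
assert run_with_black(hist, [dict(snap) for snap in hist]) == "Black never intervened"
```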
Intuitively, there seem to be exactly three minds in the A-A*-Black system: A, A* and Black’s mind. But on functionalist grounds, we can divide up the system into boxes consisting each of a box b from A, a box b* from A*, and Black as observer of both b and b*. Given the above functionalist considerations, it seems that the system as a whole forms a fourth mind, which appears absurd.
The functionalist can make two responses. First, she might bite the bullet and say that while the original A-A* system was not a mind, the A-A*-Black system is one. Here we get the implausible conclusion that just by watching A and A* while having a plan of blowing them up when they diverge, Black produces a new mind. It is deeply implausible that two minds become a third mind just by being watched by someone who has a particular plan, and a primitive plan at that. If the plan were a complex one that involved lots of computation, as in the case of the man in Searle’s Chinese room (??ref), then one might think that the plan’s execution would constitute mindedness. Moreover, the fact that Black is acting purely passively in the actual world, while the fourth mind is actively thinking the way that A and A* are, makes this an implausible response.
A more promising answer is to deny that we have a bona fide system here, since the parts of the composite system now seem to overlap physically: Black seems to be a part of every box. But replace Black by a vast crowd of people, each of whom watches only one pair of boxes. Then the overlap disappears. Each box of the composite system is now a composite of a box from A, a box from A* and a person from the crowd. But it is still implausible that there is a mind there. Moreover, it is implausible to suppose that whether there is an extra mind there depends on precisely in what way the corresponding pairs of boxes are being observed. Just as merely counterfactual interventions of a neurosurgeon should not have an effect on whether a person acts freely, so too the merely counterfactual interventions of the watchers should not matter here. Besides, suppose that indeed there is an extra mind there when each pair of corresponding boxes is being watched by a different person with a detonator. Suppose there are a lot of boxes, and one of these people with a detonator falls asleep. Do we then suddenly have only two minds thinking thoughts qualitatively identical with those of A and A*, just because someone isn’t watching, someone who wouldn’t have done anything were she watching, since in the actual world there is never any divergence to be seen? If we still have an extra mind when one watcher falls asleep, then just how many boxes need to be watched for that extra mind to be there? It does not appear possible to answer this in a way that is not ad hoc. Observe, too, that it does not make sense to say that the mind functions to a degree proportional to the number of watchers—since the extra mind’s history is isomorphic to that of A, it seems that the functioning of that mind is either qualitatively equal to that of A or it doesn’t function at all.
But perhaps the functionalist will bite the bullet and insist that in all of the composite systems described, the original untied A**, the tied-together one, and the A-A*-Black system, there are not three minds, but there is one mind. We already saw how thinking that the parts of a mind do not have independent identity leads to this conclusion, a conclusion that threatens to swallow up any busy homunculi that I might happen to have running around in my brain. But there is actually something positive to be said in defense of this view in this case. Perhaps, after all, the Principle of Identity of Indiscernibles (PII) holds for minds. If it seems at first sight that we have two minds with a history of thinking qualitatively the same thoughts, perhaps there is just one mind, thinking one sequence of thoughts. In fact, perhaps minds are to be individuated by the types of thoughts they think, or maybe even more interestingly by their functional histories up to isomorphism, rather than in the way that we supposed when we naively thought that duplicating A would produce two minds.
But this solution is plainly implausible. First of all, some people think that the prima facie possibility of immaterial minds thinking qualitatively identical thoughts is a paradigmatic counterexample to Leibniz’s PII. Note, too, that arguments that the PII holds in general, and hence the systems A and A* are identical, will not help our functionalist. For we need not suppose that A* and A are exact duplicates. All the arguments require is that they should be functional isomorphs. They can have some tiny difference that is functionally irrelevant, but relevant from the point of view of the PII.
Besides, consider two further lines of thought against this solution. Recall that in the Introduction there were three reasons why we do not want to say that isomorphic histories imply having the same thoughts. If any one of these holds, then there is a serious problem for the person who thinks that after the duplication there is only one mind. For A and A* might have qualitatively identical thoughts in some sense, but if any of the considerations from the Introduction work, it need not be the case that they think the very same thoughts. Thus, A might be in contact with H2O while A* is in contact with XYZ, and hence the analogues of the functional states that implement A’s thinking that there is water here would implement A*’s thinking a different thought, even though A* would also express that thought in the same words, “There is water here.”
And, besides, what is the referent of “here” in the two cases? A’s “here” is different from A*’s. What is the “here” of the alleged single composite A-A*-A** mind? Perhaps that mind is dislocated—perhaps with Dennett we should say that there is no fact of the matter as to the “here”. But does that mean that A becomes dislocated just by virtue of a duplicate having been produced? It seems that there is no dislocation problem if we pay attention to A alone, and why should the existence of a duplicate, perhaps far away, be a problem?
Or suppose that qualia depend on the physical implementation. Thus, we might suppose that A* is not an exact duplicate but only a functionally isomorphic one, so that A and A* have different qualia. But if A and A* constitute one mind, how can that be?
Finally, we have the divergence problem for the claim that there is only one mind when we have the two functional isomorphs A and A*. For how many minds there now are should not depend on future contingencies. Suppose that hitherto A and A* have had isomorphic histories, and that if A and A* have isomorphic histories for all time then there is just one mind there. If how many minds there now are does not depend on future contingencies, then it follows from the possibility that they will have isomorphic futures that in fact there is one mind there now. However, surely, they can diverge in the future. Suppose that in fact they will. By the independence of future contingencies, it is still true that we have one mind now. Yet, A in its future stage is the same person and the same mind as A in its present stage while the future A* is the same person and the same mind as the present A*. Hence, if the present A is the same person and mind as the present A*, it follows by transitivity that the future A is the same person and mind as the future A*. But this is absurd—if they’ve diverged, surely they’re not the same.
A standard argument for functionalism is part-by-part replacement of a brain by prostheses. It seems plausible that if I take a sufficiently small functional part of your brain and replace it by a prosthesis that would function isomorphically in this context, you continue to think, and will not be thinking any differently (except maybe for issues of qualia?) than had the replacement not been made. We say that two boxes would function isomorphically in the context of a larger machine, provided that there is an isomorphism between a machine containing one of these boxes and the corresponding machine containing the other, where the isomorphism fixes everything other than possibly the internal states of this box. Not only does the functioning of the two boxes have to be isomorphic but the input and output value encodings have to be compatible for this to hold.
In any case, the successive replacement argument for functionalism proceeds by imagining successive replacement of small black boxes in the brain by ones that are functionally isomorphic in the context of the machine as a whole, and claiming that the result is likewise a mind, since no one replacement would change a mind into a non-mind. According to this argument, one can in principle change a brain into a largely electronic device, where all the computing is done by electronic boxes, but where mindedness remains. The reason for the word “largely” is that as described above, the interconnections between the boxes remain wet brainy ones. But by the same reasoning, one can also plausibly, one by one, change the communication channels between boxes from being wet brainy ones to being, say, wires, and the result would be entirely an electronic brain, which is a mind precisely because of its functional isomorphism with the original. Or so the functionalist contends.
But now let us proceed somewhat differently. Take a mind that is already electronic. Consider any given box. It has a bunch of wires coming in and a bunch of wires coming out, and inside it has, let us suppose, a chip. Well, take the chip and add a double that functions in exactly the same way—indeed, perhaps, an internally indiscernible copy. Then, take a wire that was attached to the original chip, and fork it—attach it to the corresponding contacts on both chips. You may need to add a little bit of electronic gear to prevent some nasty interactions between the chips. One way to do that might just be to put diodes between the fork in the wire and the chip to ensure that if this is an input, then current can only flow in, and if it is an output, current can only flow out. I will assume that the chips the original machine was made from are sufficiently robust that they can work at different levels of current, so that the resulting device still works. I will leave the details to the electronic engineers to work out. The net result is supposed to be a black box that contains two chips which work together in unison. The new box is supposed to be isomorphic, in the context of the machine as a whole, to the old box.
By the same reasoning as in the original successive replacement argument, what I have remains a mind after the new box is plugged in. Do this to every box successively. We now have a mind functionally isomorphic to the original mind. We can say that a box consisting of two chips is in state S* corresponding to a state S provided that both of the two chips in the box are in state S. In normal functioning it should never happen that the two chips are in different states from each other, but if it does, we will deem the box to be in the broken state.
Each box then has its electronic gear doubled. Nonetheless, the boxes still have unitary communication channels between them. This is the next step to fix. Replace every wire going from point a to point b by a pair of wires going from point a to point b. Doing this step by step again should not change the mindedness of the whole. Things communicate just as well as they did before—actually, a little better because two wires in parallel have lower resistance than one.
For convenience, of each pair of wires in parallel, paint one red and the other blue. Also, of each pair of indiscernible chips that constitute a black box, paint one red and the other blue. Here is how things look now. We have a number of paired integrated circuits, one red and one blue. These have contacts which are joined to a common junction point, from which there runs a red and a blue wire to another common junction point that, except in the case of a global input or output channel, is connected to another pair of chips. Let us suppose that the way the whole machine is connected to the outside world is that some of the red and blue pairs of wires go to a contact on an input or output device (say, an eye or a vocal cord).
Now comes the final set of steps, which is the most controversial. Take any junction. On one side, there are contacts going to a red and a blue chip. On the other side, there is a red wire and a blue wire. Thus, a junction has four connectors: two being chip-contacts, and two being wires either going to the outside world or to another chip. The junction itself, however, we may suppose to be just a glob of solder (perhaps a microscopic one). Slice the junction through in half, in such a way that the red wire remains connected to the red chip and the blue wire remains connected to the blue chip, but there is no connection between the red wire and the blue wire, or between the red chip’s contact and the blue chip’s contact.
Since the way the device as a whole is set up is such that the blue chips do exactly the same thing as the red chips, this kind of a slicing does not seem to adversely impact the functioning of the whole. We can still identify a functional state of the black box as S* whenever both the blue and the red chip are in S, and say that it is in a broken state when the two chips are in different states. Given that the global outputs and global inputs are tied together, assuming there are no glitches in electronic functioning, this slicing should not affect things—we should still have a mind. But now do this kind of a slicing for every one of the junctions, one by one, except for the junctions with the outside world. If the reasoning above is right, the result is still a mind.
But after all the slicing is done, it seems that what we really have is a pair of minds, each like A, with their global communication channels tied together. On functionalist grounds we should say that the whole is a mind. But the whole is clearly made up of two minds. Hence there are at least three minds there, which is absurd, as the mere tying together of the global communication channels of two copies of A should not produce a new mind.
What will be challenged here is the slicing process. Perhaps before the slicing there was one whole mind, and after the slicing that one mind ceased to exist, and instead two minds came to exist. But there is then a question which it does not appear possible to answer non-arbitrarily: How many junctions does one need to slice before there are two minds there?
Note here that while there might be some plausibility to the idea that there is no determinate answer to the question of how many minds there are because we could imagine that there is a biological continuum from flowers through humans with no determinate answer as to where mindedness starts, this kind of reasoning is inapplicable here. For we can suppose that A is a full-blown mind, thinking sophisticated thoughts about general relativity. And the “extra mind” that the argument invokes is just as much such a mind, and it is one that thinks qualitatively the same thoughts as A does. It is not some shadowy semi-mind: it either thinks the same sophisticated thoughts that A does, or it does not think at all.
My basic argument was that duplication on a standard functionalist framework does not double the number of minds. It triples it. This is truly absurd, and standard functionalism should therefore be abandoned. The same arguments apply to social functionalism: just duplicate the society. By duplicating the society in a functionally isolated location, we should be merely doubling the number of minds, but by appropriately reaggregating the individuals, arguments similar to those used in the previous section show that we have tripled the number of minds. Of course, if we have the identity of indiscernibles in an appropriate social form, then we cannot really duplicate a society, since the number of persons stays the same. But that won’t do, for the same reasons that the claim that duplicating one person doesn’t double the number of persons won’t do.
Now, a functionalist might just give up on the individuation of minds altogether. There are thoughts, and they are to be identified qualitatively, but there are no numerically identifiable thinkers. But surely this is false. No matter what happens, either I will exist tomorrow or I will not exist tomorrow. If one is to abandon numerical identity of thinkers, one might just as well say, with the Averroists, that the cosmos as a whole is a mind that thinks all the thoughts that are being thought, and that the aggregation of thoughts into thinkers is mere convention.
If individuation is not to be given up, is there a kind of functionalism that survives the argument? Perhaps. An Aristotelian functionalism that insists that each black box is supposed to be distinguished by a unified essence does survive, since the aggregate boxes of A** may not have a unified essence. However, this kind of functionalism, if it is also a materialism (i.e., if it allows for minds to be realized out of matter), falls afoul of the gradual replacement argument. I can take a sufficiently finely distinguished black box and replace it by an electronic version which would be an artifact and hence lack a unified essence (or if one thinks that artifacts do have essences, I can replace it by a randomly assembled bunch of parts that happen to be arranged just like the artifact would be). Surely consciousness continues. And I can continue on through all the boxes. If we’re not dealing with materialistic functionalism, the gradual replacement argument is less pressing: we do not know if it makes sense to talk about spiritual “parts” and we do not have intuitions about the meaningfulness of replacing them even if it does.
And of course, if the gradual replacement argument, or some other argument, shows that materialism inexorably leads to non-Aristotelian functionalism, then materialism is thereby refuted when functionalism is refuted as above.
There is, however, one technical cavil in the above. I have throughout been working with a deterministic functionalism. If some of the black boxes in A are stochastic, there will be different transition probabilities for the black boxes of A** than for those of A. At the moment, I do not have a way of fixing this problem. Thus, a functionalist can escape my argument if she holds that all conscious minds realized out of matter must have an indeterministic box somewhere. This is a strong and controversial claim: it is not enough to claim that some minds, or all actual minds, have indeterministic boxes, because if that is all that is claimed, I can just suppose A is one of those possible minds that is not indeterministic.
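A toy calculation shows where the construction breaks down in the stochastic case. Suppose a box b of A moves to state s1 or s2 with probability one half each, and its duplicate b* does so independently. Then the aggregate box b** does not reproduce b’s transition probabilities: half the time the two halves desynchronize and b** goes to broken**, a transition b itself never makes.

```python
from fractions import Fraction
from itertools import product

p = {"s1": Fraction(1, 2), "s2": Fraction(1, 2)}   # transition probabilities of b (and of b*)

# Joint behavior of the aggregate box b** assuming the two halves are independent:
joint = {(x, y): p[x] * p[y] for x, y in product(p, p)}

p_in_sync = sum(q for (x, y), q in joint.items() if x == y)   # b** mirrors b
p_broken  = 1 - p_in_sync                                     # b** goes to broken**

print(p_in_sync, p_broken)    # 1/2 1/2 -- unlike b, which never breaks spontaneously
```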
Moreover, if we are dealing with a materialistic theory, the indeterminism will presumably be of the quantum variety, rather than some libertarian determination by an immaterial mind. But that kind of indeterminism is commonly taken to be just randomness (though I must confess that I am sceptical of this commonplace), and if this is right, it would surely be insufficient to ground anything significant to human agency, such as free will. If quantum indeterminism is in fact just randomness, then it is difficult to see why it would be essential to the existence of a mind—why it should somehow make it easier to produce a mind out of quantum mechanical matter than out of deterministic matter.
Note, too, how strong the insistence on indeterminism would have to be. It would have to be claimed that all conscious minds must be indeterministic. This is stronger than claiming that free will requires indeterminism, since after all there might be a conscious mind that lacks free will. For instance, perhaps some of the animals have conscious minds lacking free will.
Thus, even this route does not seem promising. Apart from the Aristotelian variety that insists that at least some parts of the mind need unified essences, so that replacement of these by things that lack a unified essence would destroy the mind, materialist functionalism has a serious problem with counting minds. And the Aristotelian variety would be a very different beast: for instance, it no longer implies strong AI in any obvious way if artifacts lack essences. For completeness, I should add that I do not, in fact, endorse the Aristotelian variety either, but that is a different question.