Chaos, Sceptical Theism and Preventing Evil
Alexander R. Pruss
October 5, 2007
Let me start with a puzzle. For aught that we know, our human world is chaotic in the sense that a small modification of the present state is apt to result in radical modifications of the future course of history. You ask me what time it is, and I answer correctly. As a result, you hurry and make it to your meeting. The conversation at the meeting parches your throat, so you have a drink from a water fountain afterwards. This increases the flux of water out of the Brazos into the Waco water system, and causes a minor change in the oceanic eddies, and in a thousand years a hurricane kills tens of thousands in Japan. Or maybe as a result of my telling you what time it is, the hurricane does not happen. In either case, the long-term consequences, which are entirely unpredictable, swamp the short-term significance of the act.
Even if we are not consequentialists, we have to recognize that consequences do play an important role in determining what we should do, and such major consequences as hurricanes trump the short-term significance of my telling you what time it is. So it seems that if we believed our world to be chaotic, we should be paralyzed much of the time in our decisions.
There had better be a way out of this problem of chaos, and I shall argue that there is. Furthermore, and more significantly, we shall see that solving this problem shows that Almeida and Oppy’s argument that sceptical theism should paralyze us in the face of evil fails. Let me sketch that argument, after giving some background.
The inductive argument from evil, as defended by Rowe [1979] and others, begins with an actual evil E, say the suffering of an animal or a rape, and argues to the likely non-existence of God roughly as follows (the details of it do not affect anything I say):
(1) There is no apparent prima facie justification for God's allowing E. (Premise)
(2) Probably, there is no prima facie justification for God's allowing E. (From (1))
(3) If God exists and has no prima facie justification for allowing an evil, then that evil does not occur. (Premise, following from conceptual truths about God's goodness)
(4) E occurs. (Premise)
(5) Therefore, probably, God does not exist. (From (2)-(4))
One of the most common theistic responses to the inductive argument from evil is a ‘sceptical theist’ response according to which the move from (1) to (2) is unjustified because we would expect that if God exists, the universe would be ‘morally deep’, with known evils often having justifications in terms of goods beyond our ken, and with a realm of value going far beyond what we know—for a good discussion of this move, see [Russell and Wykstra, 1988]. That we cannot find any even prima facie justification for allowing E does nothing, after all, to provide evidence against the thesis that there is a good beyond our ken that justifies God in allowing E.
Almeida and Oppy [2003] have recently leveled against sceptical theism the objection that if sceptical theism is right, then we have no consequence-based reason to prevent evils E, even when we can do so easily and at no cost to ourselves, because if sceptical theism is right, then we have no reason to think that preventing E results in the better state of affairs. But, barring some special duty to prevent E (say, if we are officers of the state sworn to prevent E-type evils), we should prevent evils only when we have reason to think that preventing E results in the better state of affairs. Hence, sceptical theism implies we have no duty to prevent evils that we can easily prevent at no cost to ourselves, which is absurd. Note the similarity to the problem of chaos here. Indeed, it seems that if we accept Almeida and Oppy's argument, we ought likewise to accept that a belief in chaos would be paralyzing.
I shall argue that Almeida and Oppy's paralysis conclusion cannot be drawn given a moderate sceptical theist view that allows (1) to give an insignificant amount of evidence for (2). My argument shall, furthermore, make no controversial anti-consequentialist assumptions like those which David Lewis [1986: 127] employs against paralysis arguments. We shall, further, see that it is possible to consistently hold that (1) gives an insignificant amount of evidence for (2) even while holding that we have a significant consequence-based reason to prevent E.
My argument begins with a discussion of a game that is analogous to both the sceptical theism and chaos cases. I will end with a discussion of two objections.
You know that George will make ten thousand tosses of a pair of fair coins, a red and a green one. Imagine two games. The R-game goes on for the ten thousand tosses, and whenever a red coin lands heads, you get $100 and whenever a red coin lands tails, you pay $100. On the G-game, the same is true, but with the green coin being the relevant one. You are forced to choose between the R-game and the G-game. You cannot switch between them mid-way, nor can you get out of the game before the 10,000 rounds are up. However, you hold one advantage. You make your choice between the games when the coins for the first toss are already in the air, and with your superb vision and mathematical skills you can predict that the first toss will have the red coin landing heads and the green one tails. But you can make no specific predictions about subsequent tosses.
If you choose the R-game, you are guaranteed to win the first time, and you don’t know about the rest of the tosses. If you choose the G-game, you are guaranteed to lose the first time, and you don’t know about the rest of the tosses. It is clear that the self-interestedly rational choice is to go for the R-game. In fact, the situation you are in is just as in a choice between getting $100 for free, and then playing 9,999 subsequent rounds of the R-game, or paying $100, and then playing 9,999 subsequent rounds of the G-game. There is no reason to choose the 9,999 subsequent rounds of the G-game over the 9,999 subsequent rounds of the R-game, and so it is clear that you should just accept the free $100 and play the R-game.
But now consider the following fact. Let F be the event that over the full 10,000 rounds, the red coin comes up heads more often than the green one. You only benefit from choosing the R-game if F is going to occur. Now, P(F), given the known result of the first round, turns out to be approximately 0.503. One might then erroneously think that the fact that this probability is only slightly greater than 1/2 implies that one only has a slight reason to prefer the R-game. But in fact one has a fairly strong reason to prefer the R-game, since getting $100 for free rather than losing $100 generates a fairly strong reason.
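The 0.503 figure can be checked exactly. Let D be the difference between red heads and green heads over the remaining 9,999 rounds; F occurs iff 1 + D > 0, i.e. iff D >= 0, and by symmetry P(D >= 0) = 1/2 + P(D = 0)/2. A sketch in Python (not from the paper, which used Derive 5):

```python
from math import comb

n = 9999  # rounds remaining after the known first toss

# D = (red heads) - (green heads) over the remaining n rounds.
# For the difference of two Binomial(n, 1/2) variables,
# P(D = 0) = C(2n, n) / 2^(2n) by Vandermonde's identity.
p_tie = comb(2 * n, n) / 2 ** (2 * n)

# F: red leads overall, i.e. 1 + D > 0, i.e. D >= 0.
# By symmetry P(D > 0) = P(D < 0), so P(D >= 0) = 1/2 + P(D = 0)/2.
p_F = 0.5 + p_tie / 2

print(round(p_F, 3))  # approximately 0.503
```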
In making a self-interested rational decision, it is the difference in expected values that matters rather than the probability that one option will have a better payoff than the other. It is very easy to set up situations where most likely one will do better by choosing Game A but the self-interestedly rational choice is Game B. For instance, suppose both games cost $1 to play; Game A offers a 2/3 probability of winning $2, while Game B offers a 1/4 probability of winning a million dollars. Most of the time, one will do better playing Game A than Game B, but it is clear that the self-interestedly rational game to choose is B.
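Both claims about this pair of games can be checked directly (the $1,000,000 reading of "a million" is my assumption). Game B has by far the higher expected value, yet Game A more often does at least as well:

```python
# Net outcomes (after the $1 entry fee) and their probabilities.
game_a = [(1, 2 / 3), (-1, 1 / 3)]          # win $2 with probability 2/3
game_b = [(999_999, 1 / 4), (-1, 3 / 4)]    # win $1,000,000 with probability 1/4

ev_a = sum(x * p for x, p in game_a)        # expected value of Game A
ev_b = sum(x * p for x, p in game_b)        # expected value of Game B

# Probability that an (independent) play of A does at least as well as B.
p_a_at_least_b = sum(pa * pb for a, pa in game_a for b, pb in game_b if a >= b)

print(ev_a, ev_b, p_a_at_least_b)  # B's expected value dominates; p = 0.75
```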
Now one might object that nonetheless only a very weak reason is generated in the R- and G-game case. Not only is the difference in probabilities between the R-game being better for one versus the G-game being better for one insignificant, but the gain in utility is insignificant. For what is $100 more or less when we are playing 10,000 games, the stakes in each of which are $100? Most likely, we might reason, the $100 is only going to be a fairly small percentage of our winnings or losses, and as such it does not really self-interestedly matter much which game we choose. This reasoning can be argued to be fallacious. We are still $200 ahead in expected value for choosing the R-game, and even though this may not be high as a percentage of the total gain or loss, it is still $200.
If the $200 seems insignificant in the context, consider a high stakes variant of the R- and G-games. A dictator has ten thousand innocent prisoners set to be executed tomorrow. Each time you win a round, i.e., each time the relevant coin lands heads, one of these prisoners is released. Nothing happens when you lose a round. Again, you can predict how the first coins will land: the red one will land heads and the green one will land tails. At this point it becomes clear that one has a very strong reason to choose the high stakes R-game, since by doing so one increases the expected value of the total number of lives saved by one. The fact that that one life is only a relatively small percentage of the total saved or lost is irrelevant. It is still a human life. It is no less worthwhile to save the life of an innocent when that life is endangered by a natural disaster where millions die. It is still true that the likelihood of getting better consequences when playing the high stakes R-game is only 0.503, which is insignificantly higher than 1/2. But, nonetheless, one has strong reason to play the high stakes R-game. The fact that, most likely, the consequences of choosing the high stakes R-game over the high stakes G-game, namely the results of the next 9,999 rounds, will swamp the outcome is irrelevant. (See also the discussion of ignoring small chances in [Parfit, 1984: 73-75].)
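The one-life difference in expected value is immediate: only the first round differs between the two games, since heads is equally likely in every later round. A quick check, on the paper's high stakes setup (one prisoner released per heads, nothing on tails):

```python
n_later = 9999  # rounds after the predicted first toss

# Expected prisoners released: first round (known outcome) + later rounds.
ev_r = 1 + n_later * 0.5  # red coin: first toss is certainly heads
ev_g = 0 + n_later * 0.5  # green coin: first toss is certainly tails

print(ev_r - ev_g)  # one extra expected life saved by choosing the R-game
```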
Let us now tighten the analogy with the problem of evil. Suppose Jones claims to be a perfectly good agent who can predict all coin throws in the world’s future. The choice between the high stakes R- and G-games is put to Jones. Let us suppose that we only get to observe the first round of the game, and never find out the final results. What we see is the first red coin toss yielding heads, the first green coin toss yielding tails, but Jones chooses to play the G-game. Let us suppose, also, that it is clear that nothing would be relevant to a perfectly good agent under the circumstances other than the lives of the people in question.
Consider the following question: Did our observations of the outcome of the first toss and of Jones’ choice to play the G-game provide evidence against the claim that Jones is a perfectly good agent who knows all future coin throw results? Under the circumstances, the answer to this question is the same as the answer to the following question: Did our observations provide evidence against the claim that Jones made a correct choice, i.e., a choice that would save at least as many lives as the other option? The answer to this question is affirmative. What we observed was indeed more probable on the hypothesis that Jones made an incorrect choice than on the hypothesis that Jones made a correct choice, since when we bracket our knowledge of Jones’ choice, the probability that choosing the R-game is going to be at least as good as choosing the G-game is about 0.508, given the initial tosses.
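The 0.508 figure follows by the same reasoning as before: with D the red-minus-green heads difference over the remaining 9,999 rounds, the R-game is at least as good as the G-game iff 1 + D >= 0, i.e. iff D >= -1, so the probability is P(D >= 0) + P(D = -1). A Python check (again, the paper itself used Derive 5):

```python
from math import comb

n = 9999  # rounds after the observed first tosses


def p_diff(k):
    # P(D = k) = C(2n, n + k) / 2^(2n) for the difference D of two
    # Binomial(n, 1/2) variables (Vandermonde's identity).
    return comb(2 * n, n + k) / 2 ** (2 * n)


# R at least as good as G iff 1 + D >= 0, i.e. D >= -1.
# P(D >= 0) = 1/2 + P(D = 0)/2 by symmetry; then add P(D = -1).
p = 0.5 + p_diff(0) / 2 + p_diff(-1)

print(round(p, 3))  # approximately 0.508
```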
However, while the answer is affirmative, nonetheless our observations provide only insignificant evidence against the claim that Jones made a correct choice. Most of the time, which choice is the better one will not turn on the outcomes of the first tosses, since these outcomes will be swamped by subsequent ones. Thus, given the initial toss results, the likelihood that the R-game is at least as good to play as the G-game is approximately 0.508, which is only insignificantly more than 1/2.
The high stakes game is parallel to the sceptical theism case. There is much we do not know about the consequences of the evil E, and how it might fit into a cosmic axiology. But likewise there is much that we do not know about the consequences of preventing E, and how the prevention might fit into a cosmic axiology. Just as goods and evils beyond our ken might result from E, so goods and evils beyond our ken might result from our prevention of E. This is parallel to the way we do not know the outcome of the next 9,999 rounds of the high stakes games. And just as the ignorance of the next 9,999 rounds does not take away our strong reason to opt for the game that on the first round will save a life, so too our ignorance of further long-term consequences and of cosmic values does not take away our strong reason to prevent E.
If this is so, then by analogy it may well be that the difference in probabilities between the hypotheses that E has prima facie justification and that E does not have prima facie justification is insignificant. But the difference in expected values between preventing and not preventing E, given total ignorance of further consequences of either choice, is intuitively precisely equal to the disvalue of E: we have no reason to suppose the aspects of the situation beyond our ken to favor the case of non-prevention over the case of prevention or vice versa.
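The point that symmetric ignorance cancels out can be made explicit with a toy calculation (the numbers are mine, purely illustrative): if the unknown remote consequences have the same distribution whichever option we choose, the gap in expected values reduces exactly to E's disvalue.

```python
# Hypothetical disvalue of the evil E, in illustrative units.
disvalue_E = 10.0

# Unknown remote consequences: the same (value, probability) distribution
# under either choice, since we have no reason to think the unknowns
# favour prevention over non-prevention or vice versa.
unknown = [(-100.0, 0.3), (0.0, 0.4), (100.0, 0.3)]

ev_unknown = sum(v * p for v, p in unknown)

ev_prevent = 0.0 + ev_unknown                # E does not occur
ev_not_prevent = -disvalue_E + ev_unknown    # E occurs

# The unknown term cancels, leaving exactly the disvalue of E.
print(ev_prevent - ev_not_prevent)
```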
Likewise, even given chaos, we know the short-term consequence of my telling you what time it is: you know what time it is, and that is a good thing. We do not know the longer-term consequences of this action. But that ignorance tells in favor of neither option, and so we should just go by the short-term result we can see, just as when choosing between the two games.
On some formulations of sceptical theism, the sceptical theist not only does not have any idea whether the occurrence of an evil might not contribute to the good from some more comprehensive point of view, but also does not have any probability assignments there—she does not, say, assign a probability of 1/2 to the claim that occurrence of E would contribute to the good, but instead perhaps assigns the full range of probabilities between 0 and 1, namely the interval [0,1], or just leaves it inscrutable. But in the game theoretic examples probabilities can be assigned to the outcomes of the 9,999 subsequent rounds of the R- and G-games. Hence these examples are arguably disanalogous to the sceptical theism case. Note also that the case of chaos is precisely like the sceptical theism case: we are in no position to evaluate the probabilities.
In response, modify the game theoretic case. Now, the games are as follows. The first round of the R*-game is a toss of a red coin, with a victory if it lands heads. The first round of the G*-game is a toss of a green coin, with a victory if it lands heads. But, let us suppose, we know nothing about the subsequent rounds of both games. We do not even know how many subsequent rounds there will be, nor what the rules will be. But we do know we’ll win the first round if we play the R*-game and we’ll lose the first round if we play the G*-game, and we have no choice but to play the one or the other (e.g., because one is deemed to have chosen the G*-game and its outcomes are forced on one if one does not opt for the R*-game).
It still seems the more rational thing to do to opt for the R*-game. And in a high stakes variant where in the first round at least an innocent human life is at stake, it seems that opting for the R*-game is definitely the right thing to do.
Now, one’s intuitions here may be biased by the fact that if we were in fact offered such games, we might think that the people offering the games are trying to cheat us in some way, and hence we might be suspicious of the game that seems better at the start. But that thought only makes sense given background knowledge of games of chance and the kinds of shady characters who set them before strangers. But I have assumed here that we know nothing like that.
Or suppose that you find yourself living out the plot of a crazy ‘Choose Your Own Adventure’ novel. In such novels, every couple of pages you need to make a decision, and then are given the page number to flip to depending on the decision. We could imagine such a novel—indeed, we may have read such—where there is no rhyme or reason to what happens in the long run given what we choose in the short run, and no probabilities can be assigned. But suppose we know we are living out the plot of such a novel, and we see an innocent person suffering terribly. If we know we can relieve her suffering, surely we should, even though we do not know what crazy consequences this might have, at least if we have no positive reason to suppose the novel is actually perverse in such a way that doing the locally right thing tends to produce worse results overall. We can say to ourselves: Relieving her suffering is good in itself. It may in the end lead to some disaster beyond my ken. But likewise so may a failure to relieve her suffering. Given the complete absence of further information, I should relieve her suffering—the further options are balanced.
The above response may work even on strong sceptical theism according to which (1) provides no evidence for (2). But given moderate sceptical theism an additional response is possible. The moderate sceptical theist will grant that the non-existence of apparent prima facie justifiers for E provides a small amount of evidence for the non-existence of prima facie justifiers for E. Thus, rather than the case being analogous to the choice between games where we know nothing about further rounds, it is more like a choice where we actually do have a tiny amount of evidence that the R*-game is the preferable one. If that is all the evidence we have, should we not act on it? It seems we have some evidence in favor of its being a good idea to prevent E and no evidence in favor of its being a good idea not to prevent E. So we should try to prevent E.
Now, it might be thought that when the amount of evidence is tiny, the reason generated is too weak. If so, then Almeida and Oppy could still show that sceptical theism generates conclusions that differ from common sense, in that it makes the reasons we have for preventing evils much smaller than they would otherwise be. This, however, would be mistaken, as we have already seen. The strength of a consequence-based reason to prevent an evil depends on the difference in expected values rather than on a difference in probabilities. Now in the cases of sceptical theism and chaos, we cannot estimate the overall expected utilities, because they involve cosmic considerations as well as considerations of values beyond our ken. But if so, then we need simply to act on the basis of the difference in expected values insofar as we know them, and this is the difference between the occurrence of E and the non-occurrence of E. This difference, then, should be judged in precisely the same way by the sceptical theist as by someone who is not a sceptical theist.
One might, however, have the following concern about the sceptical theism case that does not apply in the chaos case. If I do not prevent the evil, then either God will prevent it or God will not prevent it. If God will prevent it, then my failure to prevent the evil will have no bad consequences. If God does not prevent the evil, then that seems to mean that it was better in the cosmic scheme of things for the evil not to be prevented. Hence, it seems that in neither case was any harm done by my failing to prevent the evil, and that I might as well not bother to prevent any evil.
This argument, however, is fallacious. In the second horn of the dilemma, the claim “that it was better in the cosmic scheme of things for the evil not to be prevented” is ambiguous between its being better for the evil not to be prevented by God and its being better for the evil not to be prevented by me. The paralyzing conclusion of the argument requires the “me” reading. But the assumption that God did not prevent the evil at most yields the claim that it’s better for the evil not to be prevented by God.
One can make the argument valid by adding the assumption that if it is better in the cosmic scheme of things for God not to prevent an evil, then it is better in the cosmic scheme of things for me not to prevent the evil. But the sceptical theist will deny this assumption. If the axiological structure of the universe is largely unknown to us, then the situations of God preventing the evil and my preventing the evil may carry very different consequences and have very different value in the larger scheme of things.
And indeed, it is possible for God to have reasons for failing to prevent an evil which reasons would not apply to me. For instance, God might choose to refrain from preventing an evil in order that I might have the possibility of making a difference with regard to that evil, and I would not have the possibility of making a difference if God were guaranteed to step in and prevent the evil should I fail to do so. But that reason could not justify my refraining from preventing the evil—it would be incoherent for me to refrain from preventing the evil in order to give myself the chance to prevent the evil. Granted, I could refrain from preventing the evil in order to give someone else the opportunity to prevent the evil, but note that to use that as a justification for failing to prevent an evil, I would have to actually know that it would be better for someone else to be given that opportunity, and in typical cases we lack that knowledge. But God is omniscient.
Furthermore, the sceptical theist will contend that for all we know, there are many other reasons that God could have that justify his refraining from preventing but do not justify our refraining from preventing, given that the two potential preventions might have different significances in the larger scheme of things.
We need to make our practical decisions in the light of the known effects of actions when choosing between two actions that, as far as we can tell, equally may have many unknown effects. This is true even when the unknown effects might swamp the known effects. If we deny this, then we would have to admit that if we were to learn that the chaos hypothesis is true, we would be justified in being practically paralyzed in many of our actions. The global chaos hypothesis may well be true, and yet we are not justified in such paralysis, once we attend to the issues carefully.
Almeida and Oppy tried to argue that the sceptical theist response to the problem of evil leads to a scepticism about what we should do in prevention scenarios in light of the unknown cosmic significance of our actions. That argument does not work. But there could be other arguments that sceptical theism leads to moral scepticism. In fact, I worry that sceptical theism leads to scepticism in general, not just moral scepticism. But that is a different story.
Almeida, Michael and Oppy, Graham 2003. Sceptical Theism and Evidential Arguments from Evil, Australasian Journal of Philosophy 81: 496-516.
Lewis, David 1986. On the Plurality of Worlds. Oxford: Blackwell.
Parfit, Derek 1984. Reasons and Persons. Oxford: Oxford University Press.
Rowe, William 1979. The Problem of Evil and Some Varieties of Atheism, American Philosophical Quarterly 16: 335-341.
Russell, Bruce and Wykstra, Stephen 1988. The ‘Inductive’ Argument from Evil: A Dialogue, Philosophical Topics 16: 133-160.
 The numerical calculations were made with Derive 5 software, using the fact that the distribution of the difference between the total payoffs of the two outcomes over rounds 2 through 10,000 can be modeled as a shifted and dilated binomial distribution.
 This is somewhat disanalogous to the initial scenario. The natural analogy would be to have the dictator capture another innocent person and line her up for execution. The problem is that if the games were like that, then on deontological grounds one might be forbidden to play either game.
 This concern arose from an idea raised in conversation by [removed for anonymization].