Tuesday, May 24, 2016

Rationality and the Belief in a Greatest Prime – with Side Trips to the Prisoner’s Dilemma and Kavka’s Poison

A puzzle current in some philosophical circles involves an eccentric billionaire who offers you a million dollars if you come to believe that there is a greatest prime number. This is, of course, a little challenging for you, knowing as you do Euclid’s proof that there are infinitely many primes.
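For readers who want Euclid's argument in hand: given any finite list of primes, the number one greater than their product leaves remainder 1 when divided by any prime on the list, so its smallest prime factor is a prime not on the list. A minimal sketch in Python (the function name `new_prime` is my own, for illustration):

```python
from math import prod  # Python 3.8+

def new_prime(primes):
    """Euclid's construction: given a finite list of primes, return a
    prime not on the list. Any prime factor of prod(primes) + 1 works,
    since dividing that number by a listed prime leaves remainder 1."""
    n = prod(primes) + 1
    d = 2
    while d * d <= n:          # trial division for the smallest factor
        if n % d == 0:
            return d           # the smallest factor of n is prime
        d += 1
    return n                   # n itself is prime

print(new_prime([2, 3, 5, 7]))  # 2*3*5*7 + 1 = 211, itself prime
print(new_prime([2, 7]))        # 2*7 + 1 = 15, whose factor 3 is new
```

Since no finite list of primes can be complete, there is no greatest prime.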

Five questions that this thought experiment raises:
  1. Knowing Euclid’s proof perfectly well, can you immediately respond to the billionaire by coming to believe that there is a greatest prime?
  2. If the billionaire gives you a week, can you win the million?
  3. Is it rational to cause yourself in a week’s time to win the million by coming to disbelieve in Euclid’s proof and to believe that there is a greatest prime?
  4. Is it possible to cause yourself in a week’s time to win the million by coming to believe both in Euclid’s proof and in the existence of a greatest prime?
  5. If you do so, will you then be rational?
Of these questions, I think the last three are of most philosophical interest, but I will take them all seriatim. Along the way it will be useful to draw upon some similarities to more famous puzzles: the prisoner’s dilemma and Kavka’s poison. I will argue that more is rational than is sometimes thought.

1. Immediately Bringing Yourself to Believe that There Is a Greatest Prime. 
 
For those who can, if asked, display Euclid’s proof at the drop of a hat, this is going to be at best very difficult. Conceivably there are some who have an almost pathological level of control over what they believe, and so could pull it off. If multiple personality of the sort celebrated in movies were to exist, one might try to trigger the appearance of the mathematically dull personality. Inasmuch as I am asking you to imagine yourself facing the billionaire, the question whether there are such exceptional persons is not relevant. This is not a way for you to get rich.

2. Winning the Million if You Have a Week. 
 
I do not want to fight about it, but I conjecture that you probably could succeed in coming to believe in a greatest prime by employing some psychiatrist suitably expert in potent psychotropics who, given free rein and a decent budget, subcontracts a mathematician who moonlights as an actor. The drugs having rendered you highly suggestible and not as acute as usual, the mathematician is able to mis-persuade you that there is a subtle fallacy in Euclid’s proof and that it has just been shown that 2^74,207,281 − 1 is in fact the last of the primes.
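The number named here is a Mersenne prime, the largest prime known in 2016. Primality of Mersenne numbers is certified by the Lucas–Lehmer test; here is a toy version, run only on small exponents (the number above, with its 22-million-plus digits, is far beyond a blog-post sketch):

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime exponent p, M_p = 2**p - 1
    is prime iff s == 0 after p - 2 iterations of s -> s*s - 2 (mod M_p),
    starting from s = 4."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents 3, 5, 7, 13 give Mersenne primes; 11 does not (2047 = 23 * 89).
print([p for p in [3, 5, 7, 11, 13] if lucas_lehmer(p)])  # [3, 5, 7, 13]
```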

I am going to assume that whatever steps the two take will leave you only temporarily impaired and saddled with your mathematical misconceptions. With the million in hand, you will quickly be restored to your usual mathematical competence, with no negative sequelae whatsoever. This I need for the next section.

3. Rationality of Initiating a Process to Bring Yourself to Believe in a Greatest Prime.

Would it be rational to use this psychiatrist, or any other technique, to come to believe something that you know, now, to be provably false?

Not rational. The negative answer follows from the principle that it is never rational to embrace irrationality, and any plan to bring yourself to believe in a provable falsehood hugs irrationality pretty tightly.

Rational. On the other side, it seems perfectly rational to enhance one’s net worth by a million dollars quickly, legally, with relatively little discomfort or difficulty, and with injury to no one.

Neutral ambiguity. A third response to the puzzle is to declare that the rationality question is ambiguous. In one sense of “rational” a belief in a proposition you can prove to be false is irrational, and your setting out to procure such a belief is also irrational. In the sense in which a low-cost course of action leading to a large reward is rational, the same plan and conduct are rational. There is no single answer to the question whether procuring the crucial belief, or devising the plan to do so, is rational. It is rational on one disambiguation of the question and not on the other.

Rational overall. Even accepting, for the sake of argument, that there are these two different senses, we may still ask whether one of them doesn’t here take precedence over the other. The dominant sense, I propose, is the sense that we should look to in guiding our conduct. It seems pretty clear to me that guiding my conduct to the million dollars in so harmless a fashion is the right course.

Perhaps there is something irrational within a plan that requires coming to a belief in something provably false, but that is not to say that the overall plan is not one that would be adopted by one whose conduct is being guided in the best possible way, which is what it seems rational to want from rationality.

Prisoners’ Dilemma. I make a similar argument for defeating the prisoner’s dilemma, a distantly related conundrum. The overwhelming majority view is that it is irrational not to confess. If both players, however, have deeply imbibed the idea that rationality is what guides one to the best overall outcome, then they hold out – and end up doing better than those who confess. Holders of the majority view insist upon calling such players irrational – and are not at all embarrassed by the players’ success. See “Flunking the Prisoner’s Dilemma,” Philosophy Now, April-May, 2009.

Decalogue conception of rationality. So I suggest that there is a respectable, action-guiding sense of “rationality” in which it might be rational for the dilemma prisoner to withhold his confession and for the greatest prime subject to plot to come to a belief now known to be demonstrably false. This clashes with the traditional view of rationality on which it is a matter of a list of principles, many of which were long known as “the laws of thought.” Rationality, on that view, is set in stone, binding from everlasting to everlasting.

Eternal principles have, however, not fared all that well in the history of philosophy or science. Many necessary truths known a priori have fallen on hard times. That the angles of every triangle sum to 180 degrees is perhaps the most celebrated, but in the twentieth century even such basic rules of propositional logic as excluded middle suffered some hostile fire.

4. Belief in Both Euclid’s Proof and the Finitude of the Primes

If we view the commandments view of rationality with some suspicion, it becomes worth taking a look at the question whether you could simultaneously believe Euclid's proof and that there is a greatest prime. Doubtless you could not pull this off right away, but with time perhaps there would be some way that you could come to believe these inconsistent propositions. To work towards the question of the belief in, and possibly the rational belief in, inconsistent propositions, I want to spend a little time with a first cousin of the current puzzle: Kavka’s poison.

Kavka’s poison. Our friend the eccentric billionaire this time will award you a million if as of midnight you intend to take poison the next day at noon – a poison that will make you miserable for the ensuing 24 hours but will have no consequences beyond that. It is explicit, however, that it is only your intention at midnight that is crucial. If you change your mind between midnight and noon you can simply decline to take the poison, and you will still get the money, assuming that you had the right intention at midnight. It is an unstated feature of the construction that you cannot employ our psychiatrist ex machina to make you forget, before midnight, that you are free not to take the poison the next day.

It is the majority view that a rational person cannot win the million. She will know that, having succeeded in having the right intent at midnight, she need not take the poison to get the million. She cannot help but conclude that she will decline the poison. (It dominates taking the poison, whether or not she had the right intention at midnight.) But knowing she won’t take the poison is inconsistent with having the right midnight intent. 

Consider now the case of Oscar who says at midnight that he intends to take the poison, and next day at noon does, in fact, drink it, is miserable for a day, but walks away with the million. He walks away with the million because the billionaire’s infallible testing device determined that Oscar really did intend at midnight to take the poison. Oscar is accused of gross irrationality by the incensed philosophical majority, a chorus joined by all the economists who care about such matters. They point out that Oscar would now be just as rich if at 10 am he had changed his mind and skipped the poison.

Oscar replies that he concluded that the only way he could be sure that he would have the right intention at midnight was to commit himself absolutely to taking the poison, knowing in advance that he would be tempted to change his mind. His resolution was to bear up against the siren song of “rationality.” He knew it could be considered irrational actually to take the poison, but he concluded that it would be rational overall to include that irrational element within his plan. Wasn’t he, hearkening back to the prisoner’s dilemma and last section, right as well as rich?

Coexisting intentions and contrary beliefs. Another minority Kavka poison view brings us closer to our current concern. That position argues that it is compatible with intending to take the poison that one believes one will decline to take it. This may seem wrong at first blush, but let’s consider the second and third blushes. It is surely consistent with your intention to stay away from your computer tomorrow morning that you understand that there is some probability that you will, in fact, succumb.

As your estimate of the probability of not doing the intended act goes up, however, doesn’t the strength of the intention go down? Our eccentric billionaire will surely require a very strong intention. 

Consider, however, the addict, who fiercely commits never to take another of those pills. I know nothing in the relevant science that would show he could not pass the most stringent test of the sincerity of his intention. However, if pressed, he might well admit that the probability he will in fact stay on the wagon is very low. It could even, I think, be zero. It might seem deviant to say “I fully intend never to take another pill, but I know I will.” This, however, is not because it is a logical falsehood, or even because it is physically impossible, but only because the situations in which there would be something gained by making this assertion are rare.

Consider someone who is very good at holding out against nearly irresistible temptations. She has a track record of forming unconquerable intentions. I see no impossibility, again as a matter of empirical psychology, in her forming an intention to take the poison that will pass any strength test while believing that she will probably decline to take it.

One can, of course, avoid this conclusion by making everything empirical irrelevant with the stipulation “No intention counts as strong (or sincere) if the agent simultaneously believes she will probably not keep it.” Stipulations can settle any argument. They have, as Russell observed, all the advantages of theft over honest toil.

Fragmented self. If this sort of move works in the Kavka poison case, can it be carried over to the greatest prime puzzle? Not directly. I exploited for Kavka’s case the fact that intention and belief are two different things. In the greatest prime case the incompatibles are both beliefs. Moreover, the Kavka case asks only whether it is possible to have the right sort of intention together with the knowledge that the intention need not be maintained. In the greatest prime case the interesting question is not about the possibility of coming to the belief in Euclid’s proof together with the belief in the existence of a greatest prime. What is worth our time is the question whether doing so is rational.

Can it ever be rational to believe inconsistent things? Consider all your current beliefs. What is the chance that each and every one of them is correct? Nearly zero, right? So you believe b1, b2, … bn but you also believe the negation of (b1 & b2 & … & bn). This sort of inconsistency can be entirely rational. I think it was from Putnam that I first learned of this, but I suspect it goes back a long ways.
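The arithmetic behind the “nearly zero” is straightforward: even if each individual belief is very probably true, the conjunction of many of them is very probably false. A quick illustration (the numbers are mine, purely for illustration):

```python
# If you hold 1,000 beliefs, each 99% likely to be true (and roughly
# independent), the chance that all of them are correct is tiny --
# so believing each b_i while disbelieving their conjunction is natural.
p_each = 0.99
n = 1000
p_all = p_each ** n
print(f"{p_all:.6f}")  # prints 0.000043
```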

Believing both in the existence of a greatest prime and in the soundness of Euclid’s proof to the contrary is, of course, an inconsistency of a more immediate sort. It is believing p and not-p for a relatively simple p. Now, I am inclined to think that I sometimes do exactly this. When I watch the sun rise, what I am usually thinking, uncritically, is that the sun really is moving up above the horizon. Of course, if asked, I display my knowledge that it is my section of the planet that is rotating to face the sun. Most mornings, however, I do not bring that bit of science to awareness. It is easy to believe a p that is front and center in consciousness and also to believe, in background, not-p.

Even for some propositions about which people express the strongest conviction, there is often behavioral evidence that they also hold a contrary belief. The belief that a loved one is now in a heaven of great and eternal joy often seems belied by the mourner’s conduct. I do not think that this is because the belief in the afterlife is not genuine. It is just that the mourner also has an incompletely repressed belief that death is the end, full stop. Symmetrically, some outspoken atheists, I suspect, have an incompletely repressed belief in the God of their rearing. They truly disbelieve, but also believe.

It is a common expression that people can be “of two minds,” and I think this ought not be dismissed as mere metaphor. Folk psychology has incorporated large doses of the Cartesian, Christian, and Platonic traditions into its theory of the self, a theory that novelists, psychologists, and neuroscientists have challenged as oversimplified. The self, even the conscious part of the self, is not nearly as unified or transparent as we usually think it is. (It is possible that this self-deception has species survival value, and was only given theoretical polish by the philosophers and theologians.)

An individual’s behavior is poorly modeled by a pilot alone in a control room reading the data screens and throwing the levers of action. A better, if still crude, model would have contending neural coalitions, forming, strengthening, weakening, breaking, and reforming.

If we push the coalition picture towards its extreme, the “same” person might both fully intend to drink the poison, and believe that he won’t; might believe that there is a greatest prime and understand Euclid’s proof and believe it sound.

That would be the end of the story for Kavka’s poison. Not so for the interesting side of the greatest prime puzzle – the rationality question. There is normally no problem of rationality for Bill to believe that there is a greatest prime and Jill to know that there isn’t. If “Bill” and “Jill” name two different functional components of a single brain, however, this becomes more complicated. After all, it is rational, if sometimes difficult, for deliberative assemblies to try to come to a shared set of factual beliefs. One neural collective might be committed to the proposition that chocolate mousse is healthful on balance because of its dark chocolate content while a rival collective would have it that it is unhealthful by dint of its calories. However, either the “Order the mousse!” lever is going to get pulled, or it isn’t. There is only one body here, and that fact makes it rational to bring conflicting beliefs into harmony – at least when any kind of action is in the offing. This we should conclude even if we are considerably more latitudinarian about rationality than are the decalogue proponents.

It may be that there are some rare cases, to be spun out of the ever fertile imagination of philosophers, in which an argument can be made for the rationality of believing simultaneously and with full understanding and attention that Euclid's proof is sound but that there is a greatest prime. The science fiction, however, would be so bizarre that I think we can safely set this possibility aside in our philosophical job of giving advice to everyone who will be ushered into the presence of our eccentric billionaire.

So we cannot, I think, solve the prime number puzzle by pursuing a radical internal division model purporting to establish the rationality of reporting to the billionaire that one knows that Euclid’s proof is sound but believes that there is a greatest prime. Even if we have a week to work at getting into this frame of mind, with expert help, we will at most come to hold the inconsistent beliefs. This may be enough to get the million (a positive answer to question 4), but there was an easier, and less controversial, way to do that. You would be rational in adopting a plan to come to believe in a greatest prime, rejecting Euclid's proof (a positive answer to question 2). And, of course, simultaneous belief in the unsoundness of the proof and the existence of a greatest prime is just fine on almost any account of rationality.

2 comments:

  1. This is a very weird argument. Nobody can see inside another person's mind. All anyone knows about anybody is their behavior. I'm perfectly willing to stipulate, swear under oath in a court of law, and sign an affidavit to the effect that I believe there is a largest prime. Case closed. Lawrence, whatever are you thinking?

  2. The fact that people can always try to mislead us about what they are thinking, and often succeed, does not mean that we cannot entertain the possibility that their beliefs are other than what they say they are. People who make out affidavits and swear in court are sometimes convicted of perjury.
    To get the Kavka cases going we have to make the assumption that the billionaire has some very good way of determining what the subjects are thinking. There is some progress in fMRI land on this. So for philosophical purposes we just assume the success of some such device, as this assumption gives us some interesting things to think about.
