
# An Epistemological Nightmare - #5

Source: MIT. **By Raymond M. Smullyan, 1982**

__Scene 5__

(Same office, six months later.)

**Doctor:** Before we go into your problems, you will be happy to hear that your friend the epistemologist is now completely recovered.

**Frank:** Marvelous! How did it happen?

**Doctor:** Almost, as it were, by a stroke of fate--and yet his very mental activities were, so to speak, part of the "fate." What happened was this: For months after you last saw him, he went around worrying "should I trust the machine, shouldn't I trust the machine, should I, shouldn't I, should I, shouldn't I." (He decided to use the word "should" in your empirical sense.) He got nowhere! So he then decided to "formalize" the whole argument. He reviewed his study of symbolic logic, took the axioms of first-order logic, and added as nonlogical axioms certain relevant facts about the machine. Of course the resulting system was inconsistent--he formally proved that he should trust the machine if and only if he shouldn't, and hence that he both should and should not trust the machine.

Now, as you may know, in a system based on classical logic (which is the logic he used), if one can prove so much as a single contradictory proposition, then one can prove any proposition, hence the whole system breaks down. So he decided to use a logic weaker than classical logic--a logic close to what is known as "minimal logic"--in which the proof of one contradiction does not necessarily entail the proof of every proposition. However, this system turned out too weak to decide the question of whether or not he should trust the machine.

Then he had the following bright idea. Why not use classical logic in his system even though the resulting system is inconsistent? Is an inconsistent system necessarily useless? Not at all! Even though given any proposition, there exists a proof that it is true and another proof that it is false, it may be the case that for any such pair of proofs, one of them is simply more psychologically convincing than the other, so simply pick the proof you actually believe!

Theoretically the idea turned out very well--the actual system he obtained really did have the property that given any such pair of proofs, one of them was always psychologically far more convincing than the other. Better yet, given any pair of contradictory propositions, all proofs of one were more convincing than any proof of the other. Indeed, anyone except the epistemologist could have used the system to decide whether the machine could be trusted. But with the epistemologist, what happened was this: He obtained one proof that he should trust the machine and another proof that he should not. Which proof was more convincing to him, which proof did he really "believe"? The only way he could find out was to consult the machine! But he realized that this would be begging the question, since his consulting the machine would be a tacit admission that he did in fact trust the machine. So he still remained in a quandary.
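(An editorial aside, not part of Smullyan's dialogue: the "explosion" the Doctor describes--that in classical logic a single contradiction lets one prove any proposition whatsoever--can be checked mechanically with truth tables. The Python sketch below verifies that (P ∧ ¬P) → Q holds under every truth assignment, whatever Q may be, while an ordinary implication such as P → Q does not.)

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Classical material implication: a -> b."""
    return (not a) or b

def is_tautology(formula, n_vars: int) -> bool:
    """True if the formula holds under every truth assignment."""
    return all(formula(*vals) for vals in product([False, True], repeat=n_vars))

# Ex falso quodlibet ("explosion"): from P and not-P, any Q follows,
# i.e. (P and not P) -> Q is true under every assignment of P and Q.
explosion = lambda p, q: implies(p and not p, q)

# An ordinary implication, by contrast, is not a tautology.
plain = lambda p, q: implies(p, q)

print(is_tautology(explosion, 2))  # True
print(is_tautology(plain, 2))      # False (fails when P is true and Q is false)
```

Minimal logic, which the epistemologist tried next, rejects exactly this step: there, a contradiction no longer entails every proposition.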

**Frank:** So how did he get out of it?

**Doctor:** Well, here is where fate kindly interceded. Due to his absolute absorption in the theory of this problem, which consumed just about his every waking hour, he became for the first time in his life experimentally negligent. As a result, quite unknown to him, a few minor units of his machine blew out! Then, for the first time, the machine started giving contradictory information--not merely subtle paradoxes, but blatant contradictions. In particular, the machine one day claimed that the epistemologist believed a certain proposition, and a few days later claimed he did not believe that proposition. And to add insult to injury, the machine claimed that he had not changed his belief in the last few days. This was enough to make him totally distrust the machine. Now he is fit as a fiddle.

**Frank:** This is certainly the most amazing thing I have ever heard! I guess the machine was really dangerous and unreliable all along.

**Doctor:** Oh, not at all; the machine used to be excellent before the epistemologist's experimental carelessness put it out of whack.

**Frank:** Well, surely when I knew it, it couldn't have been very reliable.

**Doctor:** Not so, Frank, and this brings us to your problem. I know about your entire conversation with the epistemologist--it was all tape-recorded.

**Frank:** Then surely you realize the machine could not have been right when it denied that I believed the book was red.

**Doctor:** Why not?

**Frank:** Good God, do I have to go through all this nightmare again? I can understand that a person can be wrong if he claims that a certain physical object has a certain property, but have you ever known a single case in which a person was mistaken when he claimed to have or not to have a certain sensation?

**Doctor:** Why, certainly! I once knew a Christian Scientist who had a raging toothache; he was frantically groaning and moaning all over the place. When asked whether a dentist might not cure him, he replied that there was nothing to be cured. Then he was asked, "But do you not feel pain?" He replied, "No, I do not feel pain; nobody feels pain, there is no such thing as pain, pain is only an illusion." So here is a case of a man who claimed not to feel pain, yet everyone present knew perfectly well that he did feel pain. I certainly don't believe he was lying; he was simply mistaken.

**Frank:** Well, all right, in a case like that. But how can one be mistaken if one asserts his belief about the color of a book?

**Doctor:** I can assure you that without access to any machine, if I asked someone what color this book is and he answered, "I believe it is red," I would be very doubtful that he really believed it. It seems to me that if he really believed it, he would answer, "It is red," and not "I believe it is red" or "It seems red to me." The very timidity of his response would be indicative of his doubts.

**Frank:** But why on earth should I have doubted that it was red?

**Doctor:** You should know that better than I. Let us see now, have you ever in the past had reason to doubt the accuracy of your sense perception?

**Frank:** Why, yes. A few weeks before visiting the epistemologist, I suffered from an eye disease, which did make me see colors falsely. But I was cured before my visit.

**Doctor:** Oh, so no wonder you doubted it was red! True enough, your eyes perceived the correct color of the book, but your earlier experience lingered in your mind and made it impossible for you to really believe it was red. So the machine was right!

**Frank:** Well, all right, but then why did I doubt that I believed it was true?

**Doctor:** Because you didn't believe it was true, and unconsciously you were smart enough to realize the fact. Besides, when one starts doubting one's own sense perceptions, the doubt spreads like an infection to higher and higher levels of abstraction until finally the whole belief system becomes one doubting mass of insecurity. I bet that if you went to the epistemologist's office now, and if the machine were repaired, and you now claimed that you believe the book is red, the machine would concur.

**Doctor:** No, Frank, the machine is--or, rather, was--a good one. The epistemologist learned much from it, but misused it when he applied it to his own brain. He really should have known better than to create such an unstable situation. The combination of his brain and the machine, each scrutinizing and influencing the behavior of the other, led to serious problems in feedback. Finally the whole system went into a cybernetic wobble. Something was bound to give sooner or later. Fortunately, it was the machine.

**Frank:** I see. One last question, though. How could the machine be trustworthy when it claimed to be untrustworthy?

**Doctor:** The machine never claimed to be untrustworthy; it only claimed that the epistemologist would be better off not trusting it. And the machine was right.
