Are you sure you have hands?

Massimo Pigliucci

Skepticism is a venerable word with a panoply of meanings. When I refer to myself as “a skeptic,” I mean someone inspired by David Hume’s famous dictum: “In our reasonings concerning matter of fact, there are all imaginable degrees of assurance, from the highest certainty to the lowest species of moral evidence. A wise man, therefore, proportions his belief to the evidence” [1]. Or, as Carl Sagan famously phrased it, “Extraordinary claims require extraordinary evidence” [2]. Oh, and if there is one thing I resent, it is being mislabelled as a “cynic,” meaning a naysayer with no sense of humor…

But skepticism (and cynicism, for that matter!) in philosophy is much, much older than that, and has at least a couple of additional meanings [3]. According to so-called (by Sextus Empiricus, second or third century CE, [4]) “academic skeptics” (because they belonged to Plato’s Academy, post-Plato), such as Carneades (214-129 BCE) [5], we cannot have any epistemically interesting knowledge. A different type of skeptic, the Pyrrhonian (named after Pyrrho, 365–ca. 275 BCE), denied even that we can deny the possibility of knowledge, a meta-skepticism, if you will. Few modern philosophers are interested in Pyrrhonism, while academic skepticism has a long and venerable tradition, including perhaps most famously Descartes’ “radical doubt” thought experiment, in which he imagined a Machiavellian demon determined to trick him about what he thought he knew. Descartes then asked whether it would be possible, under those circumstances, to actually know anything at all. His answer, of course, was in the affirmative, and took the form of his famous cogito, ergo sum (I think, therefore I am) [6].

There is, of course, a much more fun way to think about the problem of skepticism in epistemology, and that is by using the 1999 sci-fi movie The Matrix as a philosophical thought experiment [7]. The movie famously begins with our hero, Neo, played by Keanu Reeves, living what he thinks is a perfectly normal life, which soon reveals itself to be anything but. Neo, it turns out, is much closer to the famous “brain-in-the-vat” (BIV) scenario of modern philosophy of mind (to be precise, he is a body-in-the-vat), with all his “experiences” actually being fed to him via artificial stimulation for the purposes of an evil post-technological civilization of machines that have enslaved humanity.

There is a crucial scene in the movie [8] where Neo’s mysterious mentor, Morpheus (played by Laurence Fishburne), poses the question to Neo of whether he wants to keep living in the “reality” he knows, or whether he has the guts to see “how deep the rabbit hole goes.” As we know, Neo chooses the red pill, which represents the second option, and the movie unfolds from there.

Neo, of course, is initially (properly) skeptical (in the Humean sense) of what Morpheus is trying to convey. The latter might as well have asked his question along the lines of: “how do you know you are not a brain in a vat?” How would you answer that sort of question? Which is another way of asking: have we made any progress against (academic) skepticism?

My discussion here tracks the one put forth in Steup’s broader treatment of epistemology in the Stanford Encyclopedia of Philosophy [9]. We begin with a quick look at the minimal version of the BIV argument:

(1)  I don’t know that I’m not a BIV.

(2)  If I don’t know that I’m not a BIV, then I don’t know that I have hands.

Therefore:

(3) I don’t know that I have hands.

This is a formally valid argument, i.e. its structure is logically correct, so any viable response needs to challenge one of its premises — that is, question what in logic is called its soundness. Before proceeding, though, we must note (as Steup does) that premise (2) is tightly linked to (indeed, it is the contrapositive of) the so-called Closure Principle: “If I know that p, and I know that p entails q, then I know that q” — a principle that seems eminently reasonable, at least at first sight. The application to our case looks like this: If I know that I have hands, and I know that having hands entails not being a BIV, then I know that I’m not a BIV. But, says the skeptic, the consequent of this “BIV closure” is false, hence, by modus tollens, its antecedent must be false too: since you obviously do know that having hands entails not being a BIV, it follows that you just don’t know that you have hands!
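For readers who like to see the logical skeleton, the closure step and the skeptic’s use of it can be laid out schematically (the notation here is my own shorthand, not Steup’s: K stands for “I know that,” h for “I have hands,” v for “I am a BIV”):

```latex
\begin{aligned}
\text{Closure:}\quad & \big(Kp \land K(p \rightarrow q)\big) \rightarrow Kq \\
\text{Instance:}\quad & \big(Kh \land K(h \rightarrow \neg v)\big) \rightarrow K\neg v \\
\text{Skeptic:}\quad & \neg K\neg v,\;\; K(h \rightarrow \neg v) \;\therefore\; \neg Kh \quad \text{(modus tollens)}
\end{aligned}
```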

There are, of course, several responses to the skeptic’s deployment of the Closure Principle. Steup examines a whopping five of them: relevant alternatives, the Moorean response, the contextualist response, the ambiguity response, and what one might call the knowledge-that response. Let’s take a quick look.

A first attack against the BIV argument is to claim that being a BIV is not a relevant alternative to having hands; a relevant alternative would be, for instance, having had one’s hand amputated to overcome the effects of disease or accident. This sounds promising, but the skeptic can very well demand a principled account of what does and does not count as a relevant alternative. Such an account could perhaps deploy a type of approach naturally enough called relevance logic [10], but that would get pretty technical, so I’ll leave it for another time.

Second attack: G.E. Moore’s (in)famous “I know that I have hands” response. This is essentially an argument from plausibility: the BIV argument goes through only if its premises (I don’t know whether I’m a BIV, so I don’t know whether I have hands) are more plausible than the denial of its conclusion (that I do know that I have hands). Which is precisely what Moore famously denied — by raising one of his hands and declaring “here is one hand.” But why, asks (reasonably, if irritatingly) the skeptic? To make a long story short, Moore’s counter to the BIV argument essentially reduces to simply asserting knowledge that one is not a BIV. Which, ahem, pretty much begs the question against the skeptic [11].

Third possible anti-skeptic maneuver: the contextualist response. The basic intuition here is that what we mean by “know” (as in “I know that I have hands,” or “I don’t know that I’m not a BIV”) varies with the context, in the sense that the standards of evidence for claiming knowledge depend on the circumstances. This leads contextualists to distinguish between “low” and “high” standards situations. Most discussions of having or not having hands are low-standards situations, where the hypothesis of a BIV does not need to be considered. It is only in high-standards situations that the skeptical hypothesis becomes salient, and in those cases we truly do not know whether we have hands (because we do not know whether we are BIVs). This actually sounds plausible to me, though I would also like to see a principled account of what distinguishes low- and high-standards situations (unless the latter are, rather ad hoc, limited only to the skeptical scenario). Perhaps things are a bit more complicated, and there actually is a continuum of standards, and therefore a continuum of meanings of the word “know”? [12]

Fourth: the ambiguity response. Here the strategy is to ask whether the skeptic, when he uses the word “know,” is referring to fallible or infallible knowledge. (This is actually rather similar to the contextualist response, it seems to me, though the argument takes off from a slightly different perspective, and, I think, is a bit more subtle and satisfying.) Once we make this distinction, it turns out that there are three versions of the BIV argument: the “mixed” one (“know” refers to infallible knowledge in the premises but to fallible knowledge in the conclusion), the “high standards” one (infallible knowledge is implied in both premises and conclusion), and the “low standards” one (fallible knowledge is assumed in both instances). Once this unpacking is done, we quickly reach the conclusion that the mixed version is actually an instance of invalid reasoning, since it is based on an equivocation; the high-standards version is indeed sound, but pretty uninteresting (okay, we don’t have infallible knowledge concerning our hands, so what?); and the low-standards version is interesting but unsound (because we would have to admit to the bizarre situation of not having even fallible knowledge of our hands!).

Finally: the knowledge-that response, which is a type of evidentialist approach. The idea is to point out to the skeptic that the BIV argument is based on a number of highly questionable unstated premises, such as that it is possible to build a BIV, and that someone has actually developed the technology to do so. But we can deny these premises on grounds of implausibility, just as we would deny, say, the claim that someone has traveled through time via a wormhole, on the grounds that we don’t have sufficient reasons to entertain the notions that time travel is possible and that someone has been able to implement it technologically. Yes, the skeptic can deny the analogy, but the burden of proof seems to have shifted: it is now the skeptic who needs to explain why this is indeed a disanalogy. Can someone please get me a red pill?

Now, why on earth did we engage in this, ahem, academic discussion? Because I wanted to give you a flavor of how philosophy makes progress, and why it isn’t particularly fruitful to compare it with progress in the natural sciences (did you see any systematic observation or experiment peeking through the above?). Indeed, I am writing a whole book on this topic, which I will hopefully deliver to Chicago Press by the end of the summer. No, make that I will definitely deliver by the end of summer…

The idea is that philosophy is concerned with exploring conceptual, as distinct from empirical, spaces, which is precisely what we have done above. Indeed, you could go through it again and try to build a concept map [13] to see whether you followed the discussion correctly and to visualize its unfolding. The five responses presented by Steup can be thought of as five peaks in the conceptual space defined by the BIV problem, with other possible responses having been examined and discarded during the long history of the debate (those would be conceptual valleys, to continue the metaphor). Not all peaks are necessarily of the same height (height here roughly measures how good a given response is), and even the precise position and shape of the peaks may vary over time, as philosophers keep refining them in response to counterarguments from the skeptics.

Moreover, the metaphor should make clear that even to ask the question of what is the true answer to the BIV problem is, in a fundamental way, to misunderstand the whole process. If the BIV question were an empirical one — like “how many planets are there in the solar system” — then it would have one definite answer [14], and a bunch of bad ones. But in conceptual space there often are several reasonable ways of looking at a particular problem (“answers”), and it will not be possible to pare them down to just one. (Another way to put this is to say that conceptual space is wider than, and underdetermined by, empirical space.)

So, what should we then make of (academic) skepticism and its critics? I think the value of the skeptical position is that it fosters epistemic humility: we are really not as smart as we often think we are, and in fact we don’t even have unequivocal answers to very basic questions about knowledge. As for the responses to the BIV problem, the five sketched above represent peaks of different heights in the proper conceptual landscape, and to my mind the ambiguity and the knowledge-that peaks are significantly higher than the rest, the Moorean response is the lowest, and the relevant-alternatives and contextualist options are somewhere in the middle. But I’m sure we can have a discussion about that.

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (Chicago Press).

[1] In An Enquiry Concerning Human Understanding, published in 1748.

[2] A dictum, by the way, which can easily and rigorously be formalized in Bayesian terms.
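Here is one illustrative way to see the point, with made-up numbers and a posterior() helper invented for the example (a sketch, not a canonical formalization): Bayes’ theorem shows why the very same evidence that settles an ordinary claim barely dents an extraordinary one.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1.0 - prior))

# An ordinary claim: even prior odds, modestly favorable evidence
# (likelihood ratio of 10). The evidence settles it.
print(posterior(0.5, 0.9, 0.09))   # ~0.91

# An extraordinary claim: tiny prior, the very same evidence.
# The posterior remains minuscule; only extraordinary evidence would move it.
print(posterior(1e-6, 0.9, 0.09))  # ~1e-5
```

The asymmetry is entirely driven by the prior: a likelihood ratio of 10 multiplies the odds tenfold in both cases, but tenfold of nearly nothing is still nearly nothing.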

[3] Here is the obligatory SEP entry, by Peter Klein.

[4] On Sextus Empiricus.

[5] Here is my take on Carneades.

[6] On Descartes’ epistemology.

[7] You can do this and much more by engaging the fun essays in the collection put together by Susan Schneider, Science Fiction and Philosophy: From Time Travel to Superintelligence, Wiley-Blackwell, 2009.

[8] Here it is, for your viewing pleasure.

[9] Epistemology, by Matthias Steup, SEP, 2005.

[10] Relevance logic.

[11] On George Edward Moore.

[12] I know, I know, this is beginning to sound rather Clintonesque. Then again, the former President of the United States did study philosophy at Oxford as a Rhodes Scholar…

[13] On concept mapping.

[14] Well, kinda. The famous “demotion” of Pluto to “dwarf planet” hinges on the rather arbitrary — and in some sense philosophical — question of what counts as a planet and why. Incidentally, and contra popular perception, it was Caltech astronomer Mike Brown who was chiefly responsible for killing Pluto, not my friend Neil deGrasse Tyson. Neil did, however, write a popular book about the story.



Categories: essay


273 replies

  1. Hi Coel,

    If you are suggesting that there could be some meta-standard by which those things are “delusional”, for example that everything we experience is a simulation in some meta-kid’s computer

    No, I am not claiming that the sense of experience we have is an illusion. If that were my claim, you could perhaps be justified in making the move of labelling this illusion reality. I am not at all talking about some meta-standard of reality, and the idea that we could be a simulation in a computer is completely irrelevant to this point.

    I am claiming that it is possible that we are deluded about our ability to make rational deductions. I’m not talking about the external world now, but the feeling we have that 1+1=2 or that the law of non-contradiction holds or any other self-evident bit of reasoning. No matter how obvious a logical inference, the possibility always remains open that we have made a mistake. Since all justifications ultimately depend on reason, we cannot non-circularly justify reason itself.

    What do you mean by “correctly” in that sentence? … What do you mean by “insane” and “delusion” in that sentence? … What do you mean by “illusory” as used in that sentence?

    I just mean that it is possible that you have absolutely no grasp of logic whatsoever. What seems to follow logically may not follow logically. You can’t prove that you can reason correctly because your predictions are borne out, because that itself is a reasonable argument (an argument that requires reason to understand and appreciate). Besides, it could be that whenever a prediction you make is defeated, you just retrospectively change your prediction and deceive yourself into thinking it was always thus. There are any number of ways your reason could be compromised.

    If you have ever argued with creationists on the Internet, you know what this looks like from the outside. The problem is that these creationists think that they are reasonable and that you are crazy. When you are deluded, irrational and illogical, it often feels like you are perfectly reasonable and those who disagree with you are crazy.

    So what seems to be a consistent and justified world view could be utterly incoherent if you lack the ability to tell reason from nonsense.

    Again, though I think this possibility must be acknowledged, it is not one I think needs to be taken very seriously, so I don’t think you need to be so reluctant to acknowledge it yourself.


  2. Note: Concept maps [13] need to be examined more fully.


  3. Coel, DM, and others.

    A few things, and then I really think this has been beaten to death.

    1. I am not a skeptic, in that I do not come to skeptical *conclusions*. The same, I believe, is true of Massimo. My appreciation for skepticism lies in the fact that it *clarifies* the relationship between reasons and belief and thus, helps us to clarify the extent to which our beliefs are rationally justified and the extent to which they are not.

    2. As I’ve already said, I prefer not to invoke the Brain-in-a-Vat and some of the other more colorful skeptical hypotheses out there, because they distract people and lead them down all sorts of irrelevant rabbit-holes. This has happened quite a bit in this conversation.

    3. All the relevant issues, re: skepticism are easily identifiable, via a very simple process of examining one’s reasons for belief. In the case of empirical beliefs, the relevant reasons come from sense experience and inductive reasoning. In the case of a priori beliefs, the relevant reasons come from the decompositional analysis of concepts and deductive reasoning.

    4. Of course, the employment of one’s senses and one’s analytical and deductive faculties itself presupposes a number of beliefs, which wind up serving as hidden premises in the reasons we give: e.g. I believe that my senses are working properly. I believe that I am not dreaming. I believe that my reasoning capacities are sound, etc. I believe that the future will be like the past. Etc.

    5. But the question now arises as to what reasons I can give for *these* beliefs. And it is here that we run into trouble. Clearly, if what is at issue is one’s belief in the reliability of one’s sense-organs, one cannot appeal further to evidence from the senses, in giving one’s reasons for it. Clearly, if what is at issue is one’s belief that one’s reasoning capabilities are sound, one cannot appeal to further reasoning, in giving one’s reasons for it. Clearly, if what is at issue is one’s belief that the future will be like the past, one cannot appeal to inductive reasoning, in giving one’s reasons for it.

    (5) is the reason why virtually every element of this discussion has been entirely irrelevant to the core question: appeals to parsimony; inferences to the best explanation; more and less reasonable hypotheses; etc. They all run right into the problem identified in (5). If one’s problem is one’s belief in the reliability of the instrument, one cannot employ that same instrument in trying to justify it.

    It is my view that the only way out of this problem is to reject the demand for universal justification and accept that in every area of inquiry, whether it be science, mathematics, ethics, or anywhere else, the process of justifying beliefs can only begin, once a number of substantive things are taken for granted; things that themselves *cannot be justified*, because they comprise the frame in which justification takes place.

    One effect that this should have is to instill in us a sense of epistemic humility (labnut has commented on this) and especially in the areas, where such humility is hardest to find, which, in my view, are dogmatic religion and the more scientistic forms of science.

    Another effect that this should have, by extension, is that it should give us greater appreciation for–and interest in–the non-rational dimension of human life and all the forms of expression that we use to try and make sense of it, especially literature and the fine arts (something that philosophy has been particularly terrible at doing).


  4. Hi DM,

    I am claiming that it is possible that we are deluded about our ability to make rational deductions. I’m not talking about the external world now, but the feeling we have that 1+1=2 or that the law of non-contradiction holds or any other self-evident bit of reasoning. […] Since all justifications ultimately depend on reason, we cannot non-circularly justify reason itself.

    This is where we differ a bit. I’m a full-blown scientismist and deny that “all justifications ultimately depend on reason” which is “self-evident”. I assert that we validate reason by the fact that it works when applied to the stream-of-experience. We see that it works when applied to the S-of-E, and thus cannot be “deluded about our ability” to reason, since the only claim is that it works when applied to the S-of-E, which it does.


  5. I actually agree with most of this, Aravis.

    Where I may disagree with you is when you seem to reject parsimony as grounds for believing in the external world once we have agreed to assume for the sake of discussion that we can reason approximately correctly.

    If this point is not at issue then we are in complete agreement.


  6. Coel,
    you have agreed that:

    1) the BIV and non-BIV have identically the same information.
    2) the principle of parsimony (PoP) reaches the same conclusion in both cases. In other words, in both cases the PoP concludes there is no BIV.
    3) the non-BIV reaches a truthful conclusion.
    4) the BIV reaches a false conclusion.

    and so you are compelled to agree that:

    5) from identically the same information, the PoP can reach both a false conclusion and a true conclusion.
    6) if the PoP can reach both a false conclusion and a true conclusion from the same evidence, it is an unreliable guide to the truth, in the case of the BIV hypothesis.

    7) In that case the inevitable and only conclusion possible is that the PoP cannot discriminate between the BIV and non-BIV cases. The PoP is totally useless in this case.

    By admitting (1) to (4), (5), (6) and (7) follow inevitably.
    You have conceded the case.
    QED.

    If you deny this, I invite the experts, Massimo and Aravis, to adjudicate. I will accept the conclusion of the experts.

    If you wish to contest this please reply to the premises, arguments and conclusions in (1) to (7). We should make this an orderly discussion and (1) to (7) define the structure of the discussion.

    It is no good dragging in those delusions, so beloved by certain militant ideologues, pink unicorns and flying teapots. They are not part of the BIV hypothesis.


  7. This is a great summary which, I think, puts the subject to bed.

    This statement is the heart of the matter:
    the process of justifying beliefs can only begin, once a number of substantive things are taken for granted; things that themselves *cannot be justified*, because they comprise the frame in which justification takes place

    Many of our disputes have their origin in the way we adopt different reference frameworks.

    Epistemic humility is a necessary objective which is often just beyond our reach. I think we start to reach it when we
    1) relinquish dogmatism, of any kind (on the one hand this, on the other hand that),
    2) we replace the question
    — what are the defects in the other person’s beliefs?
    with the question
    — what are the reasons the other person adopts those beliefs?
    3) we consciously adopt different thinking styles, as in de Bono’s Six Thinking Hats.

    Humility is not only a recognition of our fallibility, it is also a recognition of other people’s insights.


  8. The belief in the external world is one of the beliefs that comprise the “frame”, in which justification takes place. It must be assumed, in order to justify beliefs like the belief that I am currently talking with you about parsimony, and thus, cannot be justified itself.


  9. I’d just add that epistemic humility should also apply to how we frame our questions in the first place. Our hidden assumptions aren’t just hidden in the justifications for our beliefs but also in the conceptual structures we use in deciding what a valid or meaningful question is.

    Examples of this include Descartes’ use of vision-oriented metaphors when describing thought, or Kant’s assumption of the existence of mental “faculties”.

    In the question posed in the original post, there are a bunch of assumptions about what knowing is, what beliefs are, the idea that our hands are something separate from our senses, etc., that lead us to ask the question we’re asking in a particular way, or to find it unsolvable in a particular way.


  10. Hi Labnut,

    You keep making the same points, which Coel and I agree with. Unfortunately, your conclusion doesn’t follow.

    1) the BIV and non-BIV have identically the same information.

    Agreed!

    2) the principle of parsimony (PoP) reaches the same conclusion in both cases. In other words, in both cases the PoP concludes there is no BIV.

    Agreed!

    3) the non-BIV reaches a truthful conclusion.

    Agreed!

    4) the BIV reaches a false conclusion.

    Agreed!

    5) from identically the same information, the PoP can reach both a false conclusion and a true conclusion.

    Agreed!

    6) if the PoP can reach both a false conclusion and a true conclusion from the same evidence, it is an unreliable guide to the truth, in the case of the BIV hypothesis.

    Agreed! In that it is not wholly reliable. It is still quite reliable, which I will explain in response to point (7)

    7) In that case the inevitable and only conclusion possible is that the PoP cannot discriminate between the BIV and non-BIV cases. The PoP is totally useless in this case.

    Agreed that it cannot discriminate. Because that’s not what it is for. What it is for is assigning prior probabilities. If there is a 1% chance that you are a BIV (which according to the PoP is almost certainly a gross overestimation) then the PoP will allow you to choose the correct option with 99% probability. This makes it quite useful indeed.

    Please please please understand this point. The PoP is probabilistic, not discriminatory. You have not once demonstrated that you have taken this point on board.
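    To make the arithmetic concrete, here is a quick simulation sketch, assuming the purely illustrative 1% prior from the paragraph above (not a measured value):

```python
import random

random.seed(1)
P_BIV = 0.01        # illustrative prior, as assumed above; not a measured value
TRIALS = 100_000

# Each trial is an agent; with probability P_BIV the agent really is a BIV.
# Every agent follows parsimony and concludes "I am not a BIV".
correct = sum(random.random() >= P_BIV for _ in range(TRIALS))
print(correct / TRIALS)   # close to 0.99
```

    So even though parsimony cannot discriminate between the two cases, an agent who always bets against the BIV hypothesis is right about 99% of the time under this assumed prior.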


  11. Aravis,
    it should give us greater appreciation for–and interest in–the non-rational dimension of human life and all the forms of expression that we use to try and make sense of it, especially literature and the fine arts (something that philosophy has been particularly terrible at doing).

    Exactly.

    Thomas and I have been discussing the question of what function philosophy should serve. Thomas suggested the following Kantian framework, which I have reworded somewhat (thanks, Thomas, for your insights).

    1) what is the truth?
    2) what is the right thing to do?
    3) what may we hope?
    4) what does it mean to be human?

    In other words, philosophy is not an investigative tool to uncover knowledge, in the sense that science and other disciplines are. It is in fact the meta-discipline that tests claims to knowledge, tests claims on our behaviour, tests existential and meaning claims, tests claims about our humanity.


  12. I don’t accept that I need to assume an external world in order to justify from parsimony that the external world exists. I think I only need to assume my ability to reason. Is this a point that you want to pursue or do you think it is a side issue that is best avoided?


  13. It’s an important point, because it represents a fundamental misunderstanding of what the skeptical arguments entail. But to be honest, I am thoroughly exhausted with this topic, at this point.


  14. DM,
    first off, thanks for working in that framework. It streamlines the discussion.

    The PoP is probabilistic, not discriminatory.
    That is a fine distinction. Probability becomes a rule for discriminating between competing hypotheses.

    But I don’t think it rescues your argument. In reply to (7) you say:

    If there is a 1% chance that you are a BIV (which according to the PoP is almost certainly a gross overestimation) then the PoP will allow you to choose the correct option with 99% probability.

    7.1) how do you calculate your probability?
    7.2) in any case both hypotheses (BIV and BIS, Brain-In-Skull) will yield exactly the same probability calculation (same starting information).
    7.3) in that case, if both hypotheses give the same probability result, how will you, the ‘brain’, know whether you are embedded in a vat or a skull?

    The BIV and BIS give exactly the same probability calculations, so where is your ‘brain’? In a vat or a skull? There is no way for your brain to know. The vat and the skull look exactly the same to your brain. The PoP is the same, the probability calculations are the same. You are at a complete impasse.


  15. Hi Labnut,

    7.1) how do you calculate your probability?

    In order to calculate probability you would need to quantify parsimony, which I don’t know how to do. The point is that more parsimonious explanations are more probable, and in my view BIS is FAR more parsimonious than BIV for the reasons already outlined.

    7.2) in any case both hypotheses (BIV and BIS, Brain-In-Skull) will yield exactly the same probability calculation (same starting information).

    No, because the probabilities are not based on the evidence but on the parsimony of the explanation for the evidence. BIS is more parsimonious, therefore it is more probable.

    7.3) in that case, if both hypotheses give the same probability result, how will you, the ‘brain’, know whether you are embedded in a vat or a skull?

    They don’t yield the same probability result. You choose to believe an explanation based on its probability. If one explanation is far more probable than all others, belief in that explanation can be considered justified.


  16. Hi Aravis,

    I agree that it is quite vexing that we cannot get to the bottom of this. I just don’t understand your argument at all. You seem to see circularity in relying on the senses because senses are only senses if there is an external world to sense. But we need not assume they are senses from the outset. We need only assume that they are experiences which could be generated by a simulation or a demon or by sensory apparatus. Parsimony favours the latter explanation.

    Here is what I think might be going on.

    There may be some arguments in the literature that our reason is justified based on our sensory experience. We therefore cannot use reason to justify that our sensory experience is true, for that would be circular. We could be living in some kind of simulation designed to breed people with dysfunctional reasoning (e.g. an illogical prejudice against the BIV scenario and a blind spot to that prejudice), and we wouldn’t know it.

    If this is where you’re coming from, then I agree.

    But I’m not grounding reason in anything. I’m assuming that I can reason. This assumption stated, I think that it is justifiable to conclude from a reasoned argument that it is more likely than not that I am not a BIV.


  17. Hi labnut,

    6) if the PoP can reach both a false conclusion and a true conclusion from the same evidence, it is an unreliable guide to the truth, in the case of the BIV hypothesis.

    It is indeed not 100.000000% reliable. That is a point I have stated multiple times. It is still, however, a robust and compelling guide.

    7) In that case the inevitable and only conclusion possible is that the PoP cannot discriminate between the BIV and non-BIV cases. The PoP is totally useless in this case.

    Nope, wrong. “Not 100.000000% reliable” is not the same as “totally useless”. For example, 99.9999% reliability can be pretty useful at times! Your error here is in thinking that “BIV” and “not-BIV” have equivalent prior probabilities. They do not, any more than “Invisible Pink Unicorns” and “not Invisible Pink Unicorns” have the same prior probabilities.

    Or, to use DM’s example, “this ticket will win next week’s lottery” does not have the same prior probability as “this ticket will not win next week’s lottery”. Either of those is *possible*, but if you were forced to bet your house on one of them you’d pick the latter, and doing so would be overwhelmingly the more reliable choice (though not an infallible one).

    It is no good dragging in those delusions, so beloved by certain militant ideologues, pink unicorns and flying teapots. They are not part of the BIV hypothesis.

    My purpose in those posts was to explain the above. I did that in my IPU comment above which you just ignored. And, as I explained, the BIV hypothesis is actually much the same as the IPU hypothesis from the point of view of information-content of the theory and thus of parsimony/probability. Sorry to be a teensy bit critical, labnut, but you have a bad habit of simply ignoring replies to you (and you show little understanding of science and probability theory, though of course that is not a crime and there’s loads of things I don’t know about).

    If you deny this, I invite the experts, Massimo and Aravis, to adjudicate. I will accept the conclusion of the experts.

    Sorry labnut but I don’t accept your choice of expert, since I don’t accept that philosophy trumps science on such issues. (And yes I am aware of Massimo’s substantial standing in science, and I have no idea who Aravis is since I presume it is a pen name).


  18. DM,
    to drive my point home I am going to enlarge on the thought experiment.
    I am the Evil Genius (evil because I am South African but you may dispute the genius part).
    As part of the bush war we perfected the ability to remove brains from skulls, put them in vats, take them out again, and put them back in the skulls. The subjects were never aware of the transition; technology is a wonderful thing. Why did we do this? It was a terrific interrogation technique, as we could manipulate the prisoner’s reality to reliably extract all the information we needed. Are you surprised? You shouldn’t be; after all, we did the world’s first heart transplant. The best part is that we put a compliant brain in Mandela’s skull.

    Our BOSS team (Bureau of State Security) has taken you into custody for questioning about Mandela (how did you find out?). By chance you overheard the nurse discussing the procedure, so you know what is going to happen to you. They mention that every couple of days or so, at variable intervals, they interchange your brain, from BIV to BIS to BIV, so that they can recalibrate the computer. You wake up in the morning and you start to wonder: did they do the procedure last night already? Or will they start tomorrow? Tomorrow arrives and you wake up in the morning. Everything looks the same. Now where is your ‘brain’, in a vat or a skull? The nurse smiles winsomely at you when you ask but says nothing. Day after day follows. It is the same nurse, the same ward and the same food. You watch your usual TV programmes and follow the news as you always do.

    Now you get smart. Every day you do a PoP probability calculation and write it down on the pad next to your bed. The next day you repeat the calculation and write it again on the pad next to your bed. You do this for the next twenty days and always get exactly the same result.

    At this point you realise you will never know whether you are a BIV or a BIS. Your brain circuits become completely overloaded by terminal circularity and you go insane. The nurse was dozing and was unable to react quickly enough. We put your body in a helicopter and dump it in the Atlantic Ocean (the Argentinians learnt this trick from us).

    I am demoted and then put into early retirement for this failure. Lately I have started to wonder if they simply solved the problem of my incompetence by making me a BIV. I have also got a pad by my bedside with PoP probability calculations that I learned from you, but the number never budges. What am I? BIV or BIS? I think I’m going insane.


  19. Hi Coel.
    Perhaps I am misinterpreting what “extraordinary” evidence is. If the provisos as stated — checking that one isn’t being duped (through very mundane means indeed) — are all that is required, then that does lower the bar for what I think of as ‘extraordinary.’ In the example given, the evidence would be very little different than if the claim were “I can make this paper airplane fly.” I don’t think the evidence for that claim would be very extraordinary. Do you?


  20. That reduces Sagan’s dictum to a tautology I think. (I usually do agree completely with tautologies.) Not all tautologies are trivial, although this one — if Sagan meant what you claim he did — certainly is. Do you think checking for wires and so on is as extraordinary as the claim? I do not, because they would apply equally to less extraordinary claims, such as “I invented a flying vehicle.”


  21. Hi Labnut,

    A nice piece of science fiction but rhetorically it doesn’t help much.

    In that story, the prior probability for being a BIV is much higher because we have information that BIVs exist.

    Also, in your story the external world exists and contains within it BIVs, so the question is not whether the external world as we perceive it exists (for we perceive it to contain BIVs) but whether we are one of the BIVs or not. That’s a completely different question.


  22. If it’s so trivial, then why are there people who believe in ghosts because they heard bumps in the night?

    Why are there Christians who believe that Jesus rose from the dead because it says he did in a book written thousands of years ago, and quite a few decades after he died (if he even lived in the first place, which is debatable)?

    Sagan’s dictum is a distillation of Bayesian reasoning, nothing more. It’s not trivial because most people fail to grasp it, or at least fail to practice it in their daily lives.
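    The Bayesian reading of Sagan’s dictum can be sketched numerically (all priors and likelihoods below are illustrative assumptions, not measurements):

    ```python
    # Minimal sketch of "extraordinary claims require extraordinary evidence"
    # as Bayesian updating on a binary hypothesis H given evidence E.
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        """Bayes' theorem: P(H|E) from the prior and the two likelihoods."""
        num = p_e_given_h * prior
        return num / (num + p_e_given_not_h * (1 - prior))

    # Mundane claim: a middling prior, and ordinary evidence settles it.
    print(posterior(0.5, 0.9, 0.1))     # posterior ≈ 0.9

    # Extraordinary claim: a tiny prior. The *same* quality of evidence
    # barely moves it, so much stronger evidence is needed to believe it.
    print(posterior(1e-6, 0.9, 0.1))    # posterior still under 1 in 10,000
    ```

    The point is that identical evidence yields very different posteriors depending on the prior, which is exactly why bumps in the night don’t license belief in ghosts.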


  23. DM,
    Hi Labnut, A nice piece of science fiction

    huh?

    We dumped you in the Atlantic Ocean.
    This confirms my worst fears. I am a BIV but it is even worse than that. They are using the obdurate virtual DM to torment me.

