A Bayesian approach to informal argument fallacies


by Scientia Salon

This paper, which appeared in the journal Synthese in 2006, touches on a sacred cow of internet discourse, especially within atheist and skeptical communities: the idea of informal logical fallacies. I have a paper on the same topic currently in press, together with my co-authors Maarten Boudry and Fabio Paglieri, so we will return to the issue once our paper is out (Maarten is writing a précis for Scientia Salon, and you’ll see that we advocate an even more radical approach to informal fallacies than the present authors). Meanwhile, below is a taste of what Ulrike Hahn and Mike Oaksford wrote. You can find the full paper here (free).

We examine in detail three classic reasoning fallacies, that is, supposedly “incorrect” forms of argument. These are the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument. In each case, the argument type is shown to match structurally arguments which are widely accepted. This suggests that it is not the form of the arguments as such that is problematic but rather something about the content of those examples with which they are typically justified. This leads to a Bayesian reanalysis of these classic argument forms and a reformulation of the conditions under which they do or do not constitute legitimate forms of argumentation.
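
As a rough illustration of the kind of Bayesian reanalysis the paper pursues, consider the argument from ignorance (“we found no evidence that X, therefore not-X”). The sketch below is my own and its numbers are invented; only the structure follows the paper’s idea that the strength of such an argument depends on how likely we were to find evidence had the claim been true.

    # Hypothetical illustration (not code from the paper) of a Bayesian
    # reading of the argument from ignorance.
    def posterior_not_x(prior_not_x, sensitivity, false_alarm_rate=0.0):
        """P(not-X | no evidence found), by Bayes' theorem.

        sensitivity      = P(evidence found | X)      -- how hard we looked
        false_alarm_rate = P(evidence found | not-X)
        """
        p_silent_if_x = 1.0 - sensitivity
        p_silent_if_not_x = 1.0 - false_alarm_rate
        p_silent = (p_silent_if_not_x * prior_not_x
                    + p_silent_if_x * (1.0 - prior_not_x))
        return p_silent_if_not_x * prior_not_x / p_silent

    # A thorough, well-powered search that comes up empty:
    print(posterior_not_x(prior_not_x=0.5, sensitivity=0.95))  # ~0.95
    # A half-hearted search that comes up empty:
    print(posterior_not_x(prior_not_x=0.5, sensitivity=0.05))  # ~0.51

On this reading the same argument form can be strong (a well-powered drug trial that finds no harm) or weak (a casual failure to find evidence against ghosts): the content, not the form, does the work.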

85 thoughts on “A Bayesian approach to informal argument fallacies”

  1. I haven’t read the paper yet, but I have always regarded the ‘fallacy’ approach as suspect, partly because so-called fallacies are not always fallacies, as suggested above, but also because even when something is completely free of fallacies, that does not imply that it is a sound argument.

    What constitutes a sound informal argument is a more difficult problem than it first appears. I remember years and years ago hearing ‘people must learn critical thinking’ and thinking that sounded great. It still does, but I still don’t have a clear idea of what it means to think critically. I think that most people have this problem, whether they know it or not.


  2. Dear Robin.

    I wonder how clear an idea of critical thinking you want to have. My notion of what I mean by the expression is generally clear to me, and I think just about as clear as one can make it. I have run across some discussions of “critical thinking” that tried to present it as if there were some kind of straightforward recipe, all worked out, that one could follow in any situation. That is distressing, since it just muddies the waters. I suppose the authors want to sound definitive or something. All it means is to go over whatever topic is in question and see if there are any logical or factual errors. If it is a presentation or paper of some sort, try to figure out why the author is presenting it. It really helps to know as much about the subject as possible, given any particular time, resource, or interest constraints (your interest in sorting it out).

    You want to be really, really careful about anything that seems like an error. The detailed meanings of every word can be very important. Try not to take anything for granted. Look for anything funny, suspect, or out of place. Find as much fault as you possibly can, and decide how important any supposed errors are. The more you can read up on the subject, the author, and the situation, the better. You have to spot anything screwy and track it down.

    If you want something clearer than that, I’m concerned that you are trying to find shortcuts, to avoid whatever it takes to get to the bottom of it. If you’re not willing to get to the bottom (and you have to figure out where that is), then you’re trying to avoid the work required. You sound like you’re worried that you’re not doing it right. That’s an excellent start.

    One last thing. Don’t come to a conclusion just because you’re tired of thinking about it!


  3. This paper seems fairly sensible. Whenever anyone uses such arguments or labels them fallacies, they are implicitly assuming various probability “priors”. Formal Bayesian analysis is a good way of making implicit assumptions explicit.

    In a similar vein, I’m currently reading Richard Carrier’s application of Bayesian probability to the issue of whether Jesus was a real historical person, which is rather different from the more-usual apologetic approach to that question.


  4. When it comes to discussing the historical Jesus I have found David Flusser’s Jesus, The Sage from Galilee to be about as likely to be accurate as anything. (Earlier versions of the book were simply called “Jesus”.)


  5. Robin,

    I may not fully understand your point. Being free of fallacies *does* imply a sound argument. Being free of fallacies does not make an argument’s conclusion correct, however.

    Can you give an example of an unsound argument that does not employ some logical fallacy?


  6. Truth is fuzzy. It is also non-linear.
    Information doesn’t exist platonically, but has to be conveyed by some amount of energy. What if we were to model this in terms of energy flows? Such as prior beliefs being a form of inertia.
    For one thing, we physically exist, not only as material beings, but as dynamic processes, and, as this paper points out, most argument is about validating our actions, rather than seeking objective truths/reality.
    It might well be argued that information/truth, is the form manifest by the energy.
    Arguments from ignorance might be considered a statistical measure of the conditional effectiveness/validity/strength of one’s current positions. While circular arguments would be a form of positive feedback. Slippery slope arguments would be a linear projection to a conclusion that avoids interim feedback.
    All of these would seem to have and be dynamic physical properties and effects.
    How much are we, as individuals, a function of our sets of beliefs and viewpoints? How important then is it to project those references onto our situation? While this might seem a matter of form, how much is that form an effect of the dynamics leading up to this point, and thus of our state of inertia?
    What if we find ourselves holding contradictory positions, such as a strong desire for short term gain which will come with long term costs? These are all physically dynamic relations which don’t manifest in conveniently linear solutions. How willing are we to adjust our sense of self?
    Argumentation is when the road is not smooth and straight. It’s what we have brains for.


  7. Jeff,

    Pardon if I pipe in regarding your question to Robin.

    “I may not fully understand your point. Being free of fallacies *does* imply a sound argument. Being free of fallacies does not make an argument’s conclusion correct, however.

    “Can you give an example of an unsound argument that does not employ some logical fallacy?”

    Robin may be using ‘sound’ in a slightly technical sense. In logic, an argument is valid if the truth of its premises means its conclusion must be true; this is often for formal reasons. An argument is sound only if it is valid and all its premises are true. In this sense, a non-fallacious, unsound argument is easy to give an example of:

    All women are mortal
    Socrates is a woman
    Thus, Socrates is mortal.

    This argument is valid and non-fallacious, but unsound, as it has at least one false premise.
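
    For anyone who likes to see the distinction mechanically, here is a throwaway sketch (mine, nothing official) that looks for an interpretation making the premises true and the conclusion false; it finds none, even though one premise is false of the actual world, so the argument remains unsound.

        from itertools import product

        # Throwaway check of "All women are mortal; Socrates is a woman;
        # therefore Socrates is mortal" over a two-element domain.
        domain = ["socrates", "xanthippe"]

        def counter_model_exists():
            for woman in product([False, True], repeat=len(domain)):
                for mortal in product([False, True], repeat=len(domain)):
                    W, M = dict(zip(domain, woman)), dict(zip(domain, mortal))
                    premises = (all(M[x] for x in domain if W[x])  # all women are mortal
                                and W["socrates"])                 # Socrates is a woman
                    conclusion = M["socrates"]                     # Socrates is mortal
                    if premises and not conclusion:
                        return True
            return False

        print(counter_model_exists())  # False: no counter-model here; the form is valid,
                                       # while soundness is a further, factual question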


  8. What is it to think correctly?

    Some say it has to do with avoiding “logical fallacies”. That is, of course, silly. Imagine a pilot in a plane. Suppose she avoids all logical fallacies. Where does the plane go? Nowhere. Thinking correctly is more than avoiding logical “fallacies”.

    One needs more than logic, to proceed: one needs e-motion, or motivation (both express the fact that they are whatever gets people to get into action; the semantics recognizes that logic without emotion goes nowhere).

    There is another, related, fallacy in thinking that correct thinking is all about avoiding “logical fallacies”.

    First, it assumes that all thinking is “logical”. It is not.

    Or then, if thinking is logical, one has to generalize what “logic” means.
    This, let me say right away, is what I view as the correct approach, and, you guessed it, it has to do with the usual suspect, Quantum Physics.

    What is it to think critically?

    “Critic” comes from the Greek kritikos, “able to make judgments,” from krinein, “to separate, decide.” So being critical means to embrace the context of a case. So it is first about gathering “evidence”, namely facts.

    Hence, to think critically, one needs enough facts. Namely all relevant facts.
    One needs to have the motivation to gather all facts.

    It is arrogant to think that other people are prone to “logical fallacies”.

    Logic, the logos, is a discourse: it is a succession of symbols, and of operations. All can be labelled with numbers: this is the basic consideration which allows one to derive the Incompleteness Theorems in logic.

    So the logos is a recipe in a cookbook. It is not the cooking itself.
    Cooking is a continuous affair, logic is not.

    How come?
    And how can one determine all relevant facts, before one has established the logic that will articulate them?
    There, again, one meets the concept of emotion.

    One could say: ’Oh, I will go Bayesian. I will run a first logic with a first universe of facts. If what comes out does not fit, I will add more facts.’

    This is, de facto, what people have been doing, often completely in their heads (“thought experiments”).
    Is there more?

    Probably. How does the Quantum work? A Quantum Wave evaluates the entirety of possible outcomes, then computes how probable they are. That’s eerily similar to “thinking”. Well beyond the “logos”, lighting up the way, there is the feeling of what is probable, of what factors, what facts, ought to be taken into consideration.

    Conclusion?
    To think correctly means to grab all the facts that are relevant to the problem considered, and, to do so, to engage all the emotions which are relevant not just to finding the facts, but to animating the logic.

    Moreover, just as with the Quantum, this means to think teleologically, no holds barred.

    How can we guess one is on the right track to correctness in the matter of thought? When the opponent starts to squirm, and whine that the reasoning is unfair, controversial, out of place.


  9. Hi Vector Shift,

    That is all very useful stuff but it leaves out the important part (which most discussions on critical thinking leave out) and that is:

    How does the argument demonstrate the truth of its conclusion?

    You can have an argument that is completely free from any logical or factual errors and be presented by someone with no agenda, but which still does not demonstrate the truth of its conclusion.

    As you say there can be no one recipe for this, in fact we can know this because if there was then it would have to demonstrate the truth of the statement “This recipe can be used to demonstrate the truth of all statements” and it would be circular.

    Often someone will say “show me where I have gone wrong in my argument” but it is the wrong question, he needs to show us where he went right and then someone can evaluate whether or not his method of demonstrating the truth works or not.

    If, for example, I want to determine whether Francis Collins has demonstrated the likelihood of the existence of God or if Richard Dawkins has demonstrated the unlikelihood of God then I need to first work out how their arguments are supposed to demonstrate the truth of their conclusions. But when I see a bunch of numbered statements with a “Therefore” at the end which has the superficial appearance of a deductive argument but no actual deductive structure, then I know there is not even any point in reading it because all discussion of it can be nothing more than hand waving because that is all that the argument was. (When will people learn that numbered statements with a “Therefore” sentence at the end do not a deduction make?)

    Take the recent discussions on Searle’s “Chinese Room” argument. A group of intelligent people on one hand think that Searle has demonstrated his point, and a group of equally intelligent people on the other say that he hasn’t. This has been going on for 35 years now and I expect the dispute to easily outlive me.

    So what has gone wrong here? Was it always a pseudo problem? Possibly.

    And of course, I am right there in the mix with all of my arguments – all that I say applies to me too.

    It seems to me that, for the most part, most of what passes as critical thinking is not trying to get at the truth or demonstrate a truth, but merely to persuade people to the writer’s point of view.


  10. I’ve read the article once, not very thoroughly (that’s for tomorrow), but I’m curious to learn more about your “even more radical approach” because this article doesn’t sound very radical at all. Did someone ever doubt that a belief in a conclusion depends on prior beliefs in the evidence for the conclusion?

    On the other hand … tell Maarten Boudry (he’s from Ghent, isn’t he?) that I’ll buy him a beer when I meet him by accident if your article contains immortal lines like the one I read in the article by Hahn & Oaksford:

    “In a quarrel (…) arguing ad hominem may be appropriate” (p. 705).


  11. Just to illustrate my point, here is Sam Harris’s statement of his “Moral Landscape Challenge”

    Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice).

    I was tempted to simply point out here that the bit after the “Therefore” did not follow from the bit before “Therefore” by any known rule of inference and leave it at that.

    Some people apparently did just this and Russell Blackford, who judged the challenge, said well yes, yes but maybe we can infer the inference that Harris is making.

    Maybe, but that is (dare I say it?) the first step on a slippery slope. By inferring his inference I am putting words in his mouth and could I not be accused of a straw man? A defender of Harris can say “but that is not what he meant” and before long we would be engaged in a futile exercise of talking past each other until we reach the end game of arguing about what the definition of “is” is.

    But one of the most severe and common errors in reasoning is when people say “If you cannot show me where my argument is wrong then the conclusion must be right” or its evil twin “If you cannot show that my premise is false then it must be true.”

    All too often someone states a premise and, instead of trying to show why the premise is true, instead starts to consider possible objections to it. After dealing with the objections one by one the writer comes to the conclusion that the premise must be true.

    The question that has interested me for a long time is, is there a reasonable approach to all of this? One that avoids misleading formalisms but also avoids vagueness and ambiguity. An approach that concentrates on the question of how an argument demonstrates its conclusion.

    If not, then I fear that most of our talking may be to no useful or interesting purpose.


  12. Hi Jeff,

    I may not fully understand your point. Being free of fallacies *does* imply a sound argument. Being free of fallacies does not make an argument’s conclusion correct, however.
    Can you give an example of an unsound argument that does not employ some logical fallacy?

    I am not, as Paul suggests, using “sound” in its formal sense here, I just mean one that demonstrates its conclusions. But it is not enough for an argument, even an informal argument, just to be free from fallacies: an argument has to actually, positively demonstrate its conclusion. The fact that I have to defend that statement is indicative of the problem.

    For the example you ask, consider Sam Harris’ statement of his central argument to “The Moral Landscape” which I quoted in my previous post.

    If I agree with his first two sentences, then is his conclusion true? But how exactly do those two statements demonstrate the truth of his conclusion?

    On the other hand, can you prove that those two sentences don’t demonstrate the truth of the conclusion? Sure, it does not follow by a formal deduction, but deduction is not the be-all and end-all of reasoning.

    You might say he has committed the fallacy of ‘non-sequitur’ but then you will have simply assumed the conclusion that his statement does not follow from the previous two.

    So there is the problem. There is no actual fallacy. But no actual argument.

    ===========================================
    As for Bayesian analysis, well the problem is that, although it can be incredibly useful, it is also a good way to make unsupported assertions and hand waving sound very technical (a bit like modal logic, which my old Discrete Maths lecturer used to say was a precise and technical method for talking rubbish).

    Thus religious apologists (and Jesus mythers) love Bayesian reasoning.

    So when I see a Bayesian analysis I tend to wonder to myself “Is it going to be worth the trouble following it?”


  13. The problem is: some people tend to over-rate every new shiny toy. They find out about logical fallacies and then go around shouting “circular reasoning!”, “appeal to consequences!” or “No True Scotsman!” and smile triumphantly as if that alone had decided a complex issue in their favour. Currently, Bayesianism is the new shiny toy and people do the same, thinking that everything has to be Bayesian or it is worthless.

    I’d say, beware the people who have grasped only one idea; beware the “I have a hammer, thus everything is a nail” mindset. But on the other side, all of these tools have their uses, including the hammer. When somebody in my area makes the argument that we systematists should accept paraphyletic taxa as valid because, look, there are many paraphyletic taxa, then I believe it is appropriate to point out that the argument is ridiculously, obviously circular and thus not an argument at all.


  14. Robin,

    The question that has interested me for a long time is, is there a reasonable approach to all of this? One that avoids misleading formalisms but also avoids vagueness and ambiguity. An approach that concentrates on the question of how an argument demonstrates its conclusion.

    Yes, there is, it is called “math”. Or more specifically, a formal logic system. One starts by specifying the alphabet and the language, then postulates various statements in this language as axioms, some of which define what “deduction” and “proof” mean in the context of that language. Then on top of that one imposes a bunch of axioms of the model you want to work with (like set theory, or category theory, or some other collection of primitives), and then you use all those axioms to prove theorems about primitives.

    This approach has several benefits — it makes a clear distinction between syntax and semantics, requires one to give definitions to all objects under discussion, requires one to explicitly state all assumptions (including what constitutes “deduction”, among other things), and establishes unprecedented rigour in going from premises to a conclusion.

    There is also a drawback to this approach — it is notoriously hard to do, and if you want to have anywhere near the expressiveness that ordinary languages have, you need a gazillion explicit definitions, most of which are considered either trivial or impossibly complicated. So it is not very efficient for everyday blog comments etc.

    So people most often make a compromise — they use only a handful of deductive axioms and the ordinary everyday language (like English or otherwise), relying on the intuitive understanding of grammar and all the words which have not been explicitly defined. Then, after “an argument” (i.e. an informal deduction) is cleared of all “fallacies” (i.e. is found to be deductively valid, but not necessarily sound), people involved in the argument eventually figure out that they either have different opinions on the validity of the assumptions, or that they mean different things by the words they use to express those opinions (like “scientism”, “knowledge”, “deduction”, etc.). It usually boils down to having multiple different intuitions for the same word, which emphasizes that the word requires a more precise definition.

    But a more precise definition of all words involved will ultimately get you back into the arena of a formal logic system, which is — as I already said — notoriously hard to do.

    So you can either work with some extremely hard formalism, or you can give up the rigour of your arguments. Pick your poison. 😉
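
    Just to give a flavour of what the hard-formalism end of that trade-off looks like, here is a toy deduction written out in Lean-style notation (my own illustration, nothing from the featured article): every assumption is named, and the whole “argument” is a single explicit application of modus ponens.

        -- A minimal, fully explicit deduction: nothing is left to intuition.
        example (p q : Prop) (h1 : p → q) (h2 : p) : q := h1 h2
        -- Soundness would further require proofs of h1 and h2 themselves,
        -- which is exactly where full formalization becomes impractical.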


  15. Well, what makes fallacies (especially informal ones) interesting is that they are not valid, but are rhetorically quite convincing. The problem with fallacies is that they are not valid arguments: it is possible for the premises to be true while the conclusion is false.

    An ad ignorantiam claims that “possibly, p” and concludes from that that p.
    The circular argument is not invalid per se, since analysis of circular arguments usually shows them to have the conclusion implicitly present. Thus, circular arguments derive p from p.
    The slippery slope argument claims that if p, then q. However, there is often a long chain of reasoning that needs to be filled out, i.e. “if p, then z, and if z then k….. then q”. A failure in any of the links in the long chain means the argument fails to establish the conclusion on the basis of the premises.

    To say that there are cases in which we find the arguments convincing is to flip the process of finding counter-examples on its head. An argument form is found to be invalid by finding a case where premises p1, p2, and p3 are true, but the conclusion c is false. Thus, the argument form is shown not to necessitate the conclusion. I welcome more information on how premises can *support* a conclusion, and I’ll have a look at the paper soon. I must say, however, that it does not follow from there being cases where we wouldn’t call out an argument for having an invalid form (I tried it once and soon lost all my friends) that the argument form is thereby truth-preserving.
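
    Going back to the slippery slope for a moment, one can put the point in rough numbers (a back-of-the-envelope sketch of my own; the figures are made up): if each link in the chain holds only with some probability, and the links are treated as independent, the support the premises lend the conclusion decays with every link added.

        # Made-up illustration: each "if ... then ..." link holds with
        # probability 0.9, and the links are assumed independent.
        def chain_probability(link_probability, n_links):
            return link_probability ** n_links

        for n in (1, 3, 5, 10):
            print(n, round(chain_probability(0.9, n), 3))
        # 1 0.9
        # 3 0.729
        # 5 0.59
        # 10 0.349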


  16. Dear Robin,

    You ask

    “How does the argument demonstrate the truth of its conclusion? ”

    In the following circumstance.

    “You can have an argument that is completely free from any logical or factual errors and be presented by someone with no agenda,…”

    You seem to have just said that the argument is valid with correct premises. If so, then the argument is sound and the conclusion is correct.

    But then you add…

    “but which still does not demonstrate the truth of its conclusion.”

    Which contradicts the first part.

    It is very hard to continue with responding to your concerns because you are in a state of contradiction.

    Then you posit

    “Often someone will say “show me where I have gone wrong in my argument” but it is the wrong question, he needs to show us where he went right and then someone can evaluate whether or not his method of demonstrating the truth works or not.”

    If his argument was sound the above discussion wouldn’t have taken place. If you found that it wasn’t sound then you could just point to the invalid assumption or logical error.

    From the example arguments you mention, however, it’s virtually impossible for the logical rigor to be present. I rather doubt that all the assumptions used can be said to be well supported by empirical data. I can only imagine also that there are myriad unstated assumptions as well.

    As you continue…

    “If, for example, I want to determine whether Francis Collins has demonstrated the likelihood of the existence of God or if Richard Dawkins has demonstrated the unlikelihood of God then I need to first work out how their arguments are supposed to demonstrate the truth of their conclusions. But when I see a bunch of numbered statements with a “Therefore” at the end which has the superficial appearance of a deductive argument but no actual deductive structure,”

    No actual deductive structure? Well, if they’re just being casual you might have to see if you can piece it together from what is given. This could be very difficult or impossible if the terminology and reasoning aren’t well defined. You might have to fill in the gaps with your own guesses as to what they might have meant. You might come up with several alternatives.

    “then I know there is not even any point in reading it because all discussion of it can be nothing more than hand waving because that is all that the argument was. ”

    Actually you would have to read it very, very carefully, making guesses as to what was left out or misstated, to decide that there was actually nothing there. Possibly you will spot along the way some particular conclusion which clearly doesn’t follow. One issue in these kinds of things is that the meanings of the words can change from one part to another. If you can’t piece together a rigorously sound argument from what’s given, then quite possibly there isn’t one. If you can’t, then at least you should develop a list of unstated assumptions, unsupported logical leaps, bad terminology and such. You should be able to show anywhere that a conclusion doesn’t follow, rigorously, from what came before. At least then you have pointed, specific questions to ask about the argument.

    The probability of the existence of God. Good grief, how does one calculate a probability for that? I don’t even think that the concept of probability applies. Probability involves a random variable and multiple trials. Is there some rigorous theory that is invoked? Only those specific attributes of your God that entail specific predictions can even hope to be said to exist, if the predictions are verified. The whole process is nonsense. If God predicts the gravitational constant, then that is what God is.

    “It seems to me that, for the most part, most of what passes as critical thinking is not trying to get at the truth or demonstrate a truth, but merely to persuade people to the writer’s point of view.”

    An argument itself is an argument, not critical thinking per se, no matter how much critical thinking may have gone into it. Any rigorous argument against the existence of God can only be based on showing that it’s a useless concept, not that something doesn’t actually exist; perhaps this is all he means by nonexistence? You really have to be careful here with your notion of existence. All the important words have to be torn apart for empirical meaning. Every time they are used, the meaning has to be checked to see if it’s the same.

    A lot of work.


  17. While applying Bayesian analysis to truth statements does seem an interesting and insightful academic exercise, and understanding the application of circular versus slippery slope arguments certainly is useful in internet arguments, the most important observation in this piece is that the function of argument is to advance one’s position, rather than arrive at any universal truths. If we were to look around the world today, with the current United States economic, financial and military dominance, and the varied other power centers in their various degrees of subservience to, or open revolt against, this world order, it is evident that truth statements are a fairly tattered fig leaf over pure power politics. It is as difficult, outside the Ivory Tower, to ignore the role energy plays in making the rules as it seems to be to make that point within it.

    What is structure, and can it be distinct from dynamics? Consider the elementary equation 1+1=2.
    “Add” is a verb. It is dynamic. If we were to actually add two things together, we would get one of something more. What the statement 1+1=2 really means is that when we put two sets of 1 together, we have one set of 2. It is not a static formula, but a dynamic process. Much of reality is things coming together and other things coming apart. Possibly if we really wanted to come together to some point of agreement, we would have to set aside our preferred sets, structures and formulas and sense the underlying dynamic which is creating and dissolving them. Then there might actually be some lessons learned that the outside world would find meaningful.


  18. This looks quite intriguing. I love Bayesian methods; they are not “new and shiny” but are definitely being embraced by a wider audience in the last decade. I think very few people in the skeptic community are aware of active research on theory of argumentation, which is sort of a tragedy. The work of Hamblin and the more recent development of pragma dialectics are quite accessible concepts that deserve more attention in the pop-skeptic community.

    Marko and Peter — I disagree that informal theories are “not valid”. Formal theories tend to be pretty narrow and cannot be much developed, expanded or corrected without some process of informal reasoning. Formal methods are extremely useful when they can be applied, but they can also obscure assumptions that may prove erroneous, and in many (most?) cases they may not be productive for advancing practical knowledge.


  19. Dear Robin.

    Me again…

    In response to your….
    Robin Herbert
    March 10, 2015 • 7:12 pm

    I am not, as Paul suggests, using “sound” in its formal sense here, I just mean one that demonstrates its conclusions.

    You ought to stick to “sound”. Only sound arguments demonstrate their conclusions. As I said in my previous post, it’s hard work, but tossing around “demonstrates” to avoid determining whether an argument is sound or not is hopeless.

    Can a “demonstrated conclusion” come from an unsound argument?

    If yes then what kind of demonstration is that? You call that a demonstration?

    If not then it’s the same as sound.


  20. Emotional Thinking Is Superior Thinking

    I do not mean that “logical” thinking ought to be rejected. I am just saying what I am saying, and no more. No, just the opposite: “logical” thinking ought to be embraced. However, there are many kinds of “logical” thinking.

    Any “logical” thinking is literally, a chain made of points. (And there are no points in nature, said a Quantum Angel who passed by; let’s ignore her, for now!)

    Some say that hard logic, and mathematics is how to implement “correct thinking”. Those who say this, do not know modern logic, as practiced in logic departments of the most prestigious universities.

    In truth, overall, logicians spent their careers proposing putative, potential foundations for logic. Ergo, there is no overall agreement, from the specialists of the field themselves, about what constitute acceptable foundations for “logic”.

    It is the same situation in mathematics.

    Actually dozens of prestigious mathematicians (mostly French) launched themselves, in the 1950s into a project to make mathematics rigorous. They called their effort “Bourbaki”.

    Meanwhile some even more prestigious (French) mathematicians, or at least the best of them all, Grothendieck, splendidly ignored their efforts, and, instead, founded mathematics on Category Theory.

    Many mathematicians were aghast, because they had no idea whatsoever what Category Theory could be about. They derided it as “Abstract Nonsense”.
    Instead it was rather “Abstract Sense”.

    Let’s take a better known example: Euclid.

    There are two types of fallacies in Euclid.

    The simplest one is the logical fallacy of deducing, from emotion, what the axioms did not imply. Example: that two circles which looked like they should intersect, did intersect. Emotionally seductive, but not a consequence of Euclid’s axioms.

    Euclid’s worst fallacy was to exclude most of geometry, namely what’s not in a plane. It’s all the more striking as “Non-Euclidean” geometry had been considered just prior. So Euclid closed minds, and that’s as incorrect as incorrect can be.

    To come back to logic as studied by logicians: the logics considered therein are much more general than those used in mathematics. Yet, as no conclusion was reached, this implies that mathematics itself is illogical. That, of course, is a conclusion mathematicians detest. And the proof of their pudding is found in physics, computer science, engineering.

    So what to do, to determine correct arguments? Well, direct towards any argument an abrasive, offensive malevolence, trying to poke holes, just as a mountain lion’s canines try to pass between vertebrae to dislocate a spine.

    That’s one approach. The other, more constructive, but less safe, is to hope for the best, and launch logical chains in the multiverses of unchained axiomatics.

    Given the proper axioms, (most of) an argument can generally be saved. The best arguments often deserve better axiomatics (so it was with Leibniz’s infinitesimals).

    So, de facto, people have long used not just “inverse probability”, but “inverse logic”. In “inverse logic”, axioms are derived from what one FEELS ought to be a correct argument.

    Emotions inversely constructing axiomatics is more metalogical, than axiomatics driving emotions.


  21. It’s an interesting article. And I already have an idea for a more radical approach. If I understand it correctly, the article basically researches how prior beliefs P(C) about a conclusion C are altered when confronted with evidence (or an argument) E.

    But that’s not how debates evolve, at least not on Scientia Salon. More often (but not always!) I get the impression that people have a conviction and, when confronted with a new argument, do not evaluate the validity of their conviction in the light of the new argument, but evaluate the new argument in the light of their conviction.

    It would be very interesting to know if the “Bayesian” approach in the article could be extended to this type of situation.
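
    Here is one hypothetical way such an extension could look (a sketch of my own, not anything from the article): let the hearer be uncertain both about the conclusion C and about whether the arguer is reliable. An argument against C then mostly lowers the arguer’s estimated reliability, rather than the hearer’s confidence in C, once the prior on C is strong enough.

        # Hypothetical sketch: joint update over C (conclusion) and R (arguer
        # reliability) after hearing an argument E against C. Assumed
        # likelihoods: a reliable arguer argues against C with probability
        # 0.1 if C is true and 0.9 if C is false; an unreliable arguer does
        # so with probability 0.5 either way.
        def hear_argument_against_c(p_c, p_r):
            likelihood = {("c", "r"): 0.1, ("c", "nr"): 0.5,
                          ("nc", "r"): 0.9, ("nc", "nr"): 0.5}
            prior = {("c", "r"): p_c * p_r, ("c", "nr"): p_c * (1 - p_r),
                     ("nc", "r"): (1 - p_c) * p_r, ("nc", "nr"): (1 - p_c) * (1 - p_r)}
            joint = {k: prior[k] * likelihood[k] for k in prior}
            total = sum(joint.values())
            p_c_post = (joint[("c", "r")] + joint[("c", "nr")]) / total
            p_r_post = (joint[("c", "r")] + joint[("nc", "r")]) / total
            return p_c_post, p_r_post

        print(hear_argument_against_c(p_c=0.5, p_r=0.7))   # (0.22, 0.70): C takes the hit
        print(hear_argument_against_c(p_c=0.95, p_r=0.7))  # (~0.84, ~0.40): the arguer does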


  22. Robin,

    “What constitutes a sound informal argument is a more difficult problem than it first appears.”

    Indeed. My collaborators and I suggest that there cannot be a unified theory or framework concerning informal fallacies, and that the very term should be abandoned in favor of specific discussions about epistemic warrant, persuasion of arguments, etc.

    Coel,

    “I’m currently reading Richard Carrier’s application of Bayesian probability to the issue of whether Jesus was a real historical person”

    I have a very low opinion of Carrier’s stuff. He thinks he has invented philosophy, but I find many of his writings a jumble of ramblings and well-worn stuff. And that’s aside from his, shall we say, prickly character.

    brodix,

    “What if we were to model this in terms of energy flows?”

    As usual, I don’t see the point. You wish to bring the level of analysis several notches down, where I don’t find it very helpful.

    “It might well be argued that information/truth, is the form manifest by the energy.”

    No, it may not. For one thing because there is a difference between information and truth. Truth is a value judgment on certain bits of information.

    “What if we find ourselves holding contradictory positions, such as a strong desire for short term gain which will come with long term costs? These are all physically dynamic relations”

    We find ourselves in that situation all the time, but I fail to see how thinking in terms of “physically dynamic relations” provides any insight.

    “the most important observation in this piece is that the function of argument is to advance one’s position, rather than arrive at any universal truths”

    I’m not sure that’s the most important message of the paper. I think the crucial insight here is that one cannot simply shout “fallacy!” and be done, either rhetorically or in terms of searching for truths.

    “it is evident that truth statements are a fairly tattered fig leaf over pure power politics”

    Maybe true, but irrelevant to the paper, as far as I can see.

    “It is not a static formula, but a dynamic process”

    It’s a static formula that describes a dynamic process — if you carry it out in the physical world.

    Patrice,

    “One needs more than logic, to proceed: one needs e-motion, or motivation”

    True, but I find it rather irrelevant to the issue at hand, which has to do with epistemology, not psychology.

    “There is another, related, fallacy in thinking that correct thinking is all about avoiding “logical fallacies”.”

    I’m not aware of anyone actually holding to that position.

    “Hence, to think critically, one needs enough facts. Namely all relevant facts.”

    Enough facts is not the same as all the relevant facts, as incorrectly implied by the use of “namely.”

    “It is arrogant to think that other people are prone to “logical fallacies”.”

    It is an observation, and facts are not arrogant.

    “A Quantum Wave evaluates the entirety of possible outcomes, then computes how probable they are.”

    Are you presenting quantum waves as agents? They don’t evaluate and compute, they just behave according to the laws of physics.

    “just as with the Quantum, this means to think teleologically, no holds barred”

    The quantum doesn’t think, as far as I know.

    “Emotional Thinking Is Superior Thinking”

    I have no idea what you mean by that. Superior in what sense? And where’s the bright line between reason and emotion?

    “Any “logical” thinking is literally, a chain made of points”

    No, definitely not “literally.”

    “there is no overall agreement, from the specialists of the field themselves, about what constitute acceptable foundations for “logic”.”

    True, but fairly remote from the level of discussion engaged in the featured article.

    “Example: that two circles which looked like they should intersect, did intersect. Emotionally seductive, but not a consequence of Euclid’s axioms.”

    It may not follow from the axioms, but I am having a hard time being emotionally seduced by intersecting circles.

    “Euclid’s worst fallacy was to exclude most of geometry, namely what’s not in a plane.”

    That’s an historically bizarre claim to make. Like saying that Newton’s worst fallacy was to exclude considerations of general relativity. C’mon.

    “as no conclusion was reached, this implies that mathematics itself is illogical”

    Uhm, no.

    “to hope for the best, and launch logical chains in the multiverses of unchained axiomatics”

    Very poetic, I have no idea what that means, though.

    Patrick,

    “Did someone ever doubt that a belief in a conclusion depends on prior beliefs in the evidence for the conclusion?”

    That isn’t the point of the article. Their point is to deploy a Bayesian framework in order to re-think what it means to incur in an informal fallacy. And no, nobody had done that before.

    “the article basically researches how prior beliefs P(C) about a conclusion C are altered when confronted with evidence (or an argument) E.”

    No, that was already well known. As I said above, the article applies that insight (the Bayesian approach) to the treatment of informal fallacies, showing that — depending on circumstances — they may not be fallacious at all.

    Alexander,

    “The problem is: some people tend to over-rate every new shiny toy. They find out about logical fallacies and then go around shouting “circular reasoning!””

    True enough.

    “Currently, Bayesianism is the new shiny toy and people do the same”

    Not true at all. Bayesianism is far from a new toy, and I think these authors apply it properly to a novel problem.

    Petar,

    “what makes fallacies (especially informal ones) interesting is that they are not valid, but are rhetorically quite convincing”

    The authors point out that the very concept of “validity” doesn’t actually apply in the case of informal fallacies (it does in the case of formal ones, though).

    cjwinstead,

    “very few people in the skeptic community are aware of active research on theory of argumentation, which is sort of a tragedy”

    Well, I wouldn’t call it a tragedy, but it is indeed peculiar that a community allegedly devoted to critical thinking is unaware of this sort of thing. That’s why I picked this article as a “notable” entry for Scientia Salon (the authors, incidentally, were delighted).


  23. The open journal Informal Logic
    http://ojs.uwindsor.ca/ojs/leddy/index.php/informal_logic/index
    is worth browsing. The most recent issue has a detailed analysis of the utility of the argument
    from ignorance in public health (specifically in the case of BSE), and there is another recent paper explaining why the argumentum ad baculum is not necessarily fallacious (major premise: something bad really will happen to you if you don’t agree).


  24. Massimo,

    I think the crucial insight here is that one cannot simply shout “fallacy!” and be done, either rhetorically or in terms of searching for truths.

    This is so very true, so very important, and unfortunately so very ignored by a lot of people (especially those who consider themselves rational, skeptic, scientistic, etc.). 🙂

    When faced with a person who regularly shouts “fallacy!” and thinks that is enough for a counter-argument, my pet-peeve is to invoke the argument-from-fallacy fallacy. The Wikipedia entry is a nice read.

    In short, when an argument contains a fallacy, it doesn’t mean that its conclusion is wrong. A simple example is the following:

    “All cats are animals. Ginger is an animal. Therefore, Ginger is a cat.”

    The argument contains an obvious fallacy. Nevertheless, the fact that there is a fallacy in the argument does *not* imply that Ginger is not a cat (it might be, despite wrong reasoning). So recognizing a fallacy in an argument does not automatically invalidate its conclusion.

    Invoking the argument-from-fallacy fallacy usually makes fallacy-shouters shut up, and it’s a beautiful thing to watch them sweating when pushed to admit that shouting “fallacy!” is almost useless in an informal debate. 😉


  25. Hi Vector,

    You seem to have just said that the argument is valid with correct premises

    No, I haven’t. That is the whole point I am making. That kind of makes the rest of what you say beside the point.

    Hi Marko,

    Then, after “an argument” (i.e. an informal deduction) is cleared of all “fallacies” (i.e. is found to be deductively valid, but not necessarily sound),

    Again, no. To put it crudely, ‘Tutti Frutti’ is free of logical fallacies. ‘Wop bop a loo bop a lop bom bom’, for example, is not a logical fallacy. And yet Tutti Frutti is not a valid argument.

    Validity in a deductive argument is not a matter of absence of fallacies, it is about the presence of deductive structure.

    In case I had suddenly gone insane and have got this wrong, I went to my bookshelf and looked through my old DM textbook and also Kleene’s Mathematical Logic. The concept of a ‘fallacy’ does not even figure. You can’t determine the validity of an argument by searching through it for fallacies; you must use one of the techniques for determining that it has the correct inferential structure.

    So the absence of fallacies is the wrong focus, it should be on the presence of some inferential (not necessarily deductive) structure.

    In any case, most informal arguments one comes across in blogs, comments and books do not have, as you suggest, even an informal deductive structure, one of the things I pointed out in the Sam Harris example.

    In fact a large number of people do not understand deductive logic. So many times I have been lectured to “use logic”, but when I start a sentence in the form of “If A then B” those same people say “You are assuming A”. When I point out that “If A then B” can be true even if A is false, they turn on the sarcasm and ask if this is some new kind of logic from Bizarro world. And these are intelligent people, scientists even.

    Hi Alexander,

    They find out about logical fallacies and then go around shouting “circular reasoning!”, “appeal to consequences!” or “No True Scotsman!” and smile triumphantly as if that alone had decided a complex issue in their favour.

    Yes, someone once told me something I said was an ‘Argument from Ignorance’. I asked him how it was an argument from ignorance and he said “The fact that you don’t know that proves that it is an argument from ignorance”. You couldn’t make this stuff up.

    Hi davidlduffy

    The most recent issue has a detailed analysis of the utility of the argument
    from ignorance in public health (specifically in the case of BSE),

    That is interesting. I have mentioned here before that the reason we know that there is no link between MMR and autism is, in effect, an argument from ignorance, but valid nevertheless.

    But it makes it hard to explain to people.


  26. Dear Robin,

    I didn’t appreciate the notion of “informal argument” initially. I now think that when you started with

    Robin Herbert
    March 10, 2015 • 7:32 am

    I haven’t read the paper yet, but I have always regarded the ‘fallacy’ approach as suspect, partly because so-called fallacies are not always fallacies, as suggested above, but also because even when something is completely free of fallacies, that does not imply that it is a sound argument.

    What constitutes a sound informal argument is a more difficult problem than it first appears. I remember years and years ago hearing ‘people must learn critical thinking’ and thinking that sounded great. It still does, but I still don’t have a clear idea of what it means to think critically. I think that most people have this problem, whether they know it or not.

    Now, from what I’ve read, your apprehension about the situation seems completely justified. Apparently informal arguments aren’t supposed to be “sound”. They just give reasons why the author reaches a conclusion when there isn’t actually enough information for a rigorous conclusion. At best they are non-rhetorical and only present the reasons for the author’s guess in order to get feedback from others; the author would like to see if somebody else has relevant information or something. An honest search for a better guess at the truth.

    If the argument is rhetorical, then the author is out to get converts, to sell something, to get elected, etc. It’s all about persuasion. Many of the “fallacies” then become techniques. The conclusion here may not even be the author’s guess at some truth; it’s just the conclusion that they want you to accept, for whatever reason. Some people become rhetorical when all they have to gain is convincing others of some pet notion of theirs. Argumentation to them is more like sport, trying to win, at least in their own eyes.

    In neither case though does the conclusion actually follow from the argument.


    In response to Robin’s post:

    ________________

    Robin Herbert
    March 11, 2015 • 10:17 am

    Hi Vector,

    You seem to have just said that the argument is valid with correct premises

    No, I haven’t. That is the whole point I am making. That kind of makes the rest of what you say beside the point

    _____________________

    I was referring to your prior post:

    ______________________

    Robin Herbert
    March 10, 2015 • 5:56 pm

    Hi Vector Shift,

    That is all very useful stuff but it leaves out the important part (which most discussions on critical thinking leave out) and that is:

    How does the argument demonstrate the truth of its conclusion?

    You can have an argument that is completely free from any logical or factual errors and be presented by someone with no agenda, but which still does not demonstrate the truth of its conclusion.

    ________________________

    Here you did talk about an argument that is free from any logical or factual errors and that reaches a conclusion.

    I think that if they reach a conclusion that does not logically follow from correct premises, then there was a logical error.

    Is there any way to reach a conclusion that does not logically follow from correct premises without a logical error?

    __________________________

    That’s why I have concerns about using the term “demonstrates”. Does it mean rigorously prove? If not, then what?

    More to the point:

    If you “demonstrate the truth of a conclusion” do you have a sound argument? If not then how have you “demonstrated” the truth of the conclusion?


  28. I believe that these efforts are a step in the right direction — signs of “fallacy” need not prove that a given argument doesn’t make a valid point. Ironically, the standard assumption is itself a fine demonstration of the fallacy of “guilt by association.” In order to personally ground myself in this regard I like to always acknowledge that reality does exist (by definition of course), though we’re just idiot humans that (hopefully) seek to understand how it works. Whether philosopher or scientist or even lizard sunning on a rock, there seems to be only one basic method by which we effectively decide the nature of reality — we take what we think we know to be true, and use this to evaluate models regarding things that we aren’t so sure about. When models regarding the essentials of reality continually remain consistent with observation, they become accepted. It’s the accepted understandings which our scientific community has developed in recent centuries that have made us so powerful.


  29. Massimo,

    “As usual, I don’t see the point. You wish to bring the level of analysis several notches down, where I don’t find it very helpful.”

    I very much plead guilty to that, because it has been the point I’ve been shouting from the rooftops in many of these debates: that there are fundamental conceptual flaws built into those lower levels of many of our essential assumptions. Such as modeling time as a vector from past to future, rather than taking into account the processes by which future becomes past. Or trying to use the premise of the “fabric of spacetime” to explain how space expands, yet forgetting that the speed of light would have to increase as well for that to be applicable.

    As for the point I keep trying to make, that information/form is one side of the dichotomy of energy and form, and thus that the physical expression of information is an essential factor in its formulation: it might seem an obvious and thus presumably uninteresting point, but it does go to why the oil men and financiers run this world, while the academics are huddled in the academies.

    Maybe it is too simplistic a level of argumentation, but if you “fail to see how thinking in terms of “physically dynamic relations” provides any insight,” into which arguments are taken seriously and which are ignored, no matter their logical validity, then I suppose I’m just banging my head on the wall. It wouldn’t be the first time.

    I certainly think my point about time being an effect of action is valid, but it gets ignored because I lack the force of authority. Now some would say, “No, it’s just wrong.” Yet that isn’t the point. If I had the ability to be heard, then, wrong or not, it would be considered. The energy is foundational to the expression.

    The message may not care what medium expresses it, but then the medium doesn’t necessarily care what message it conveys. If you want to get your message across, then taking into account the properties of the medium is an important factor.

    “I think the crucial insight here is that one cannot simply shout “fallacy!” and be done, either rhetorically or in terms of searching for truths.”

    So if the intent is the search for truth, it would seem to me at least, then how it is manifest would be a significant factor in its propagation and ability to negotiate with adverse conditions. If you want to speak truth to power, then you need to understand power, as well as truth.


  30. Massimo,
    The reason I am not focused on your precise point, “that one cannot simply shout “fallacy!” and be done,”
    is that people do not do that just because they find a logical fallacy in your argument, but because they do not like the main premise of your conclusion and wish to dismiss it by whatever means available. Those who agree with your conclusion, on the other hand, will try to offer suggestions to strengthen your argument if they find it to be weak. That is why I am going several levels down, into the nature of information and both how it is effectively expressed and manifest.
    For information to be “true” would seem to be a much larger area of debate than negotiating fallacious logic, since reality is not always cooperatively linear, or objective.


  31. I found the article extremely interesting because (1) I have been able to identify informal logical fallacies when reading or listening to individuals argue or debate an issue, or (2) one of the participants leans heavily on the identification of an informal fallacy in an opponent’s argument as if the matter were thereby entirely settled. Even so, the matter hardly seemed settled to my satisfaction, and I have felt unease in drawing any conclusion.

    At the same time, I can honestly say that I mainly gleaned much of the article, since its technical aspects went over my head. One thing I notice in some of the comments here is a failure to delineate between formal and informal logical fallacies, except in the case of Robin. SciSal makes a point of this when he comments, “The authors point out that the very concept of “validity” doesn’t actually apply in the case of informal fallacies (it does in the case of formal ones, though).” SciSal further remarks in separate instances:

    1) “Their [the authors’] point is to deploy a Bayesian framework in order to re-think what it means to incur in an informal fallacy. And no, nobody had done that before.”

    2) “the article applies that insight (the Bayesian approach) to the treatment of informal fallacies, showing that — depending on circumstances — they may not be fallacious at all.”

    With regard to the second of these, I’m not sure that “they may not be fallacious at all” is the point the authors are trying to make. And so, perhaps SciSal could elaborate further on this point. Instead, it seems to me that the focus here is the question why the presence of an informal fallacy in an argument is “discounted” or “credited” to some degree. And they propose a Bayesian approach that takes into account factors such as priors and the valence of the argument to help one evaluate whether an informal logical fallacy is sufficiently fallacious on its own to defeat its usage in a conclusive manner. Despite this, what I’m curious about is how the relative familiarity and sophistication within an audience regarding informal logical fallacies is accounted for in the Bayesian approach. This may well have been addressed in the article, and I just happened to miss it.


  32. Dear Massimo:
    Impertinence and amusement help thought. Unmotivated thought is not worth having.

    The Greeks discovered Non-Euclidean geometry. It’s hidden in plain sight. It is a wonder that, to this day, one repeats Gauss’ self-serving absurdity on the subject (he claimed he had discovered it all before Bolyai, but did not publish it because he feared the “cries of the Boeotians”… aka the peasants).
    The truth is most simple: Gauss did not think of it.

    The Greek astronomer Ptolemy wrote in Geography (c. ad 150):
    “It has been demonstrated by mathematics that the surface of the land and water is in its entirety a sphere…and that any plane which passes through the centre makes at its surface, that is, at the surface of the Earth and of the sky, great circles.”

    Not just this, but, nearly 400 years earlier, Eratosthenes had determined the size of Earth (missing by just 15%).
    http://en.wikipedia.org/wiki/Eratosthenes

    Better: Eusebius of Caesarea proposed 149 million kilometers for the distance of the Sun! (Exactly the modern value.)

    How? The Greeks used spherical geometry. Gauss, were he around, would whine that the Greeks did not know what they were doing. But the Greeks were no fools. They knew what they were doing.

    Socrates killed enemies in battle, and contemporary mathematicians were not afraid of the Boeotians, unlike Gauss.

    Aristotle (384-322 BC) was keen to demonstrate that logic could be many things. Aristotle was concerned with the dependency of logic on the axioms one uses. Thus Aristotle’s Non-Euclidean work is contained in his works on Ethics.

    A thoroughly modern approach.

    The philosopher Imre Toth observed the blatant existence of Non-Euclidean geometry in the “Corpus Aristotelicum” in 1967.

    Aristotle exposed the existence of geometries different from plane geometry. The approach is found in no less than SIX different parts of Aristotle’s works. Aristotle outright says that, in a general geometry, the sum of the angles of a triangle can be equal to, or more than, or less than, two right angles.

    Actually Aristotle introduced an axiom, Aristotle’s Axiom, a theorem in Euclidean and Hyperbolic geometry (it is false in Elliptic geometry, thus false on a sphere).

    Related to Aristotle’s Axiom is Archimedes’ Axiom (which belongs to modern Model Theory).

    One actually finds non trivial, beautiful NON-Euclidean theorems in Aristotle (my frienemy).

    This was most natural: look at a sphere, look at a saddle, look at a pillow. In Ethika ad Eudemum, Aristotle rolls out an example of a quadrangle whose interior angles sum to the maximum of eight right angles.

    Great circles are the “straight lines” of spherical geometry. This is a consequence of the properties of a sphere, in which the shortest distances on the surface are great circle routes. Such curves are said to be “intrinsically” straight.

    Agent: from the Latin “agentem”, that which sets in motion. Quantum waves are the laws of physics: given a space, they evaluate, they compute. This is the whole idea of the Quantum Computer. So far, they have been uncooperative. Insulting them won’t help.

  33. My informal, non-mathematical understanding of Bayes’ theorem is that it allows one to estimate how often, given a set of defined variables, a conclusion would be true (~ the strength or weakness of an argument).

    IMHO, the problem for epistemology is not in understanding the rules of logic, formal (mathematical) or otherwise. Semantics may be an even bigger problem. E.g. the authors’ examples of two similar statements:
    “Ghosts exist because no one has proved that they do not. (1)
    This drug is safe because we have no evidence that it is not. (2)”

    Their Bayesian analysis supports their intuition that (2) is acceptable and (1) is not. Quelle surprise! I am almost certain their result would be entirely dependent on the population that is being studied.

    A population of existentialists, like me, would argue that ghosts do exist but that there is a lot of disagreement on whether they are real; things like zeitgeists and Holy Ghosts need to be considered. (Unlike Santa Claus, who everyone agrees exists but almost everyone would say is not real.) The authors failed to consider different interpretations of “exist”.

    A population of pharmacologists would suspect that statement (2) is gibberish because there is (almost?) no such thing as a completely safe drug. All drugs are potentially harmful since they, by definition, affect our metabolism. The authors made an assumption about (therapeutic) drugs that is probably incorrect.

    The content and structure of a statement are fundamental to an evaluation of its truth value, a process that occurs on an intuitive level most of the time. Their Bayesian analysis seems to support this. However, we already know that intuitions vary significantly from person to person, so a project for more rigorous analysis might be too ambitious.
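
    As a rough illustration of that population-dependence, here is a minimal sketch (my own toy numbers and function name, nothing from the paper) of how the force of a “no evidence found” argument shifts with the audience’s prior and with how thoroughly anyone has actually looked:

    def negative_argument_strength(prior_T, sensitivity, specificity):
        # Return (prior that T is false, P(not-T | no evidence was found)).
        p_miss = 1.0 - sensitivity        # chance of finding nothing even though T is true
        p_correct_null = specificity      # chance of finding nothing when T is false
        prior_not_T = 1.0 - prior_T
        posterior_not_T = (p_correct_null * prior_not_T) / (
            p_correct_null * prior_not_T + p_miss * prior_T
        )
        return prior_not_T, posterior_not_T

    # (1) "Ghosts exist because no one has proved they do not" (T = ghosts exist):
    # a sceptical audience starts with a tiny prior and the "search" is weak,
    # so finding nothing barely moves anyone beyond where they already stood.
    print(negative_argument_strength(prior_T=0.01, sensitivity=0.10, specificity=0.99))

    # (2) "This drug is safe because we have no evidence that it is not" (T = drug is harmful):
    # trials are assumed highly sensitive to harm, so a clean record shifts belief a lot.
    print(negative_argument_strength(prior_T=0.30, sensitivity=0.95, specificity=0.90))

    Change the assumed prior (i.e. the population being asked) or the assumed sensitivity of the search, and the verdict on (1) versus (2) can flip.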

  34. SciSal and others,

    I wonder what people think about the following; any thoughts would be appreciated. In the article the authors quote Copi and Burgess-Jackson, saying that, to a first approximation, informal argument fallacies are arguments that are “psychologically persuasive but logically incorrect; that do as a matter of fact persuade but, given certain argumentative standards, shouldn’t.”

    I was just wondering if this definition would actually imply that intentionally framing an argument in a persuasive way would count as a logical fallacy. Allow me to explain:

    The framing effect (as I am using it here, not the narrower psychological definition) occurs when the way that somebody presents choices affects whether one choice is made over another. For example, let’s say I am thinking about buying a new car. I see two car dealerships adjacent to one another. Let’s assume that both dealerships are selling the exact same cars (in terms of quality and type). One dealership has a lot of sparkling lights and props around the building and lot that make the cars look much nicer. The other dealership, by contrast, does not. Its cars look as they would on the street.

    The framing effect says that we are more likely to choose to buy a car from the dealership that has presented the cars nicely, because they have “framed” their products more effectively.

    Now, it seems like the shiny-lights dealer is actually making an implicit argument by making his lot look nice. His implicit argument is something like:

    1. If a dealership looks nice, its cars will be of a higher quality
    2. This dealership looks nice
    C. These cars will be of a higher quality.

    Assuming that framing something nicely is actually an implicit argument, it looks like framing something nicely is an informal fallacy: it is an argument that is psychologically persuasive but logically incorrect, that does as a matter of fact persuade but, given certain standards, shouldn’t.

    To apply this to philosophical argumentation, one can imagine a speaker presenting his argument clearly, but intentionally using words that make the argument seem “much nicer.” Implicitly, if we assume that framing something nicely is an implicit argument, he is implicitly arguing that his conclusion is more likely to be true because it “looks nicer.”

    So I have two questions. First, do you think that framing something nicely is an implicit argument, thereby making it an informal fallacy? Second, is there ever a situation where the background conditions would be such that committing the “framing effect fallacy” (so to speak) would be permissible? I agree with the authors that there are some clear cases where background conditions can make appeal to an “informal fallacy” justified (such as dismissing a used-car salesman’s claims because of his reputation/history of lying to make a sale). However, I am having trouble thinking of a case where something looking nice could ever permit me to accept the conclusion in a way analogous to the car salesman case.

    Any thoughts would be appreciated. Thanks.

  35. Massimo,

    > That isn’t the point of the article. Their point is to deploy a Bayesian framework in order to re-think what it means to incur in an informal fallacy. And no, nobody had done that before. (…) the article applies that insight (the Bayesian approach) to the treatment of informal fallacies, showing that — depending on circumstances — they may not be fallacious at all.

    It’s odd how different two readings of the same text can be. I’ll limit myself to the analysis of the argument from ignorance (p. 708 – 710). First of all, it’s remarkable that the authors don’t use the word fallacy at all in that section, except to make the point that “a fallacy for one person may not be a fallacy for someone else because of differences in prior beliefs”. With this statement they refer to the finding that, given certain values of the prior belief and of the sensitivity and the specificity of the test (of the “evidence”), the negative test validity (i.e. the strength of the argument from ignorance) can be higher than the positive test validity.

    On the other hand, it’s not surprising that they seem to avoid the word fallacy, because the whole reasoning in that part of text is not only applicable to fallacies, but to all arguments that fit the framework – and these are not all fallacies. It is not an analysis of fallacies.

    Once you limit yourself to arguments that fit the Bayesian framework, you get a specific formula with three parameters l, h and n, and if you play around a bit with these parameters you can get – I’m exaggerating a bit – more or less every value you want for P(T|e) and P(-T|-e), i.e. for the positive and the negative test validity. If you choose to associate P(-T|-e) with a fallacy, you’re bound to get surprising results.

    This isn’t difficult to understand – even if you don’t know anything about Bayesian analysis and just study the formula. If the sensitivity of the test is very close to 1 (if there is something, we would have seen it) P(-T/-e) is close to 1. The argument from ignorance is perfectly valid. The argument from ignorance is not always a fallacy – but again, I’m surprised this is new. After all, nobody would call the argument from ignorance a fallacy if we did a test (had experience) with an almost perfect sensitivity and found nothing. To me, this shows again that this part of the text is not an analysis of fallacies.

    I’ll stop here. It’s an interesting article, but I fail to see how it is “radical” – or let me put it differently, it shouldn’t be too difficult to be more radical.

    Just one more thing. On p. 707 the authors write “… because it is Bayesian, the probabilities are interpreted as subjective degrees of belief.” That should be the other way around: because the authors interpret certain things as subjective degrees of belief, it is Bayesian.

  36. Just to note that mechanisms of reasoning in domains of uncertainty and incomplete knowledge have been developed throughout the history of AI, expert systems, knowledge-based systems, and database query systems.
    http://en.wikipedia.org/wiki/Bayesian_programming
    http://en.wikipedia.org/wiki/Probabilistic_programming_language
    http://en.wikipedia.org/wiki/Inductive_programming
    http://en.wikipedia.org/wiki/Abductive_logic_programming
    http://en.wikipedia.org/wiki/Inductive_logic_programming
    etc.

  37. dantip, interesting. But I think some of this has to do with priors and their manipulation, whether intuitive or intentional. A good salesman will listen carefully, and you will basically tell him how to frame your story in a way that is acceptable to you. Any other detail is extraneous to his framing the narrative that most closely resembles yours. The rest of it is background noise.

    Here we have an obese person who supports his argument that he’s lost weight by buying a longer belt and pointing out that it is now buckled in different holes. Hence my question about how the Bayesian approach accounts for the relative fitness of the audience in eliminating background noise when evaluating an argument.

    So, when Dick Nixon perspires, the audience is distracted from his arguments vis-à-vis Kennedy’s. Instead, they are perhaps focusing on something mostly extraneous. That is, in the absence of personal knowledge regarding a subject, what cues (persuades) us to side with one supposed authority rather than another? So to my mind, it is not so much that the background noise affects whether an informal logical fallacy is evaluated on its merits as that ancillary concerns arise which supplant and take precedence over the fallacy.

    I don’t know whether this addresses your point, but I’m more inclined to envision much of this as occurring on a subliminal level. Many subsequent explanations are merely an ad hoc attempt to justify something that is poorly understood to begin with.

  38. This is in part about what counts as a circular argument — but my example is from the previous thread about scientism (I’m too late to comment there!).

    Massimo said there that he thought ontological reductionism makes sense, but epistemological reductionism doesn’t. Aravis agreed about the latter, but was sceptical about the former, too.

    My question is: how can a claim that ontological reductionism is (trivially) true be based on anything more than a circular argument? (Or an implicit circular argument?)

    (My supplementary question — more directly relevant to this thread, I hope! — is: can it make sense to talk about an implicit, as opposed to an explicit, circular argument? Or: how can this not make sense — if you need to challenge the whole implied framework for an argument?)

    It may be fundamental to science — at least as it is currently conceived and practised — to think of the fact that each thing is made of smaller components as being (somehow or other) more fundamental to that thing’s essence than the fact that it forms part of various entities larger than itself.

    But is this a fundamental presupposition of other modes of thought — e.g. poetry, or history, or ruminating about your love-life? Should it be? Can it be?

    Of course this wouldn’t be a problem for the idea that ontological reductionism is (trivially) true — if scientific thought could (somehow or other) be shown to be more concerned with fundamental truth than all other modes of thought.

    What might show this, though? The fact that the most fundamental of sciences is concerned with establishing the truth about the most fundamental components of things?

  39. Massimo (and cjwinstead),

    I am fully aware that Bayesianism’s use in academia is now decades old, and its invention far older. When I call it a “shiny new toy” or similar, I am referring to people like Eliezer Yudkowsky, Richard Carrier, and not least the many (especially postdoc-age) colleagues in my field of science who swear by BEAST and MrBayes and are near-religiously opposed to doing parsimony and likelihood analyses with PAUP, RAxML, r8s, TNT, iGTP, Mesquite, MEGA and similar software, not because those give any significantly different answer but simply because those methods are, well, not Bayesian.

    To these people, it is the shiny new toy, because Less Wrong didn’t appear in the 1970s, and “Proving History” and the Bodega Phylogenetics Workshop series didn’t either. These are recent developments.

  40. I’ve been work-busy this week, and I don’t have too much to add.

    Per a blog post, though? Coel, I’ll double and even triple down on Massimo and his take on Carrier. And, since I do have a graduate degree in this area, that’s a professional critique, not just an amateur one.

    Carrier in particular shows that Bayesian stats are just a tool. Per Brodix and the issue of argumentation usually being about winning, and per Mark Twain’s bon mot about lies, damned lies and statistics, Carrier is a practitioner of the latter in spades with Bayesian stats.

    Does any halfway objective person really believe that one can calculate the odds of Jesus’ historicity to a precision like 0.008 percent?

    Coel, if you think you can, I’ve got some 5-star British wine to sell you.

    Also, speaking of fallacies of informal logic, all Jesus denialists engage in the argument from silence often enough for it to be fallacious.

    http://socraticgadfly.blogspot.com/2014/11/the-academic-shortcomings-of-jesus.html

    ==

    Dantip and Thomas, I see such things as being subliminal, or “fast thinking,” too. Indeed, “framing” is generally not the right word, even with something as empirically large as shiny cars at an auto dealer. “Priming” probably is more nearly correct even for something like this.

    That leads to a larger question: Can something that’s implicit, or to go down my previous essay roads here, something that’s subconscious, really be considered fallacious? Or like my suggesting that “something like free will” needs to include “subconscious free will,” or “subconscious something like free will,” do we need to talk about “subconscious something like logic”?

    ==

    I do very much agree with Robin on the whole broader issue of informal, sound arguments.

    ==

    Per this paper itself, and Massimo’s “more radical” approach: Massimo, does that approach also lean to a fair degree on Bayesian probabilities? And do you extend it to a number of other well-known informal logic fallacies besides the three in this paper?

  41. Ulrike Hahn/Mike Oaksford: {As persuasion research has demonstrated amply, however, the overall effect of communication is determined by more than its content. … Argumentation research is concerned entirely with the actual content of arguments and, even more specifically, emphasizes those aspects of message content that should be of concern to a “reasonable critic” …
    What constitutes a good argument depends not only on the topic or the subject matter (Toulmin, 1992), but also on the audience, … both in philosophical accounts of argumentation (see, e.g., Perelman & Olbrects-Tyteca, 1969; van Eemeren & Grootendorst, 2004) and in the social psychological literature on persuasion (for experimental demonstrations of audience effects, …)}

    Amen!
    With these, the fallacies in any argument are not truly the issue. Most dominant arguments in history are constructed in two steps.

    S1, seizing the altar (podium or pulpit)

    S2, herding the believers

    Sometimes, when the person on the podium is truly leading his time, he will do some great things for humanity. Yet sometimes the people (or groups) on the podium are unable to lead; they will still hog the pot while not doing the job, and they will not let others do it either, employing the following tactics.

    T1, throwing club parties: holding conferences and symposiums all the time to hype their ‘not even wrongs’ and to celebrate their continuing success at hogging the pot.

    T2, claiming that theirs is the only GAME in town.

    The success of the T1 + T2 operation will produce three results.

    R1, the majority of the audience will become their followers.

    R2, the small number of skeptics will be intimidated into cowardice.

    R3, a few true dissenters will be pushed out of sight.

    This is how arguments have been made throughout history, and still are. Fortunately, from time to time, some great people do rise to those podiums. Of course, Hahn and Oaksford’s work is still very valuable as an academic discussion, though useful only for low-level arguments. The correct arguments (on leading issues) will generally not be understood by the general public (the audience) and will not persuade anyone to get off his comfortable pot.

  42. Hi Massimo,

    I agree with you that Richard Carrier doesn’t do himself any favours by being a “prickly character”, and I’m also not promoting his writings in philosophy. But on topics of ancient history, such as historicity of Jesus, I think that he and similar authors are worth listening to.

    Hi RobLL,

    … the historical Jesus I have found David Flusser’s Jesus, The Sage from Galilee about as likely accurate as anything.

    I think it’s way too “traditional”, by which I mean arising out a long history of treating the gospels as … err .. gospel truth.

    Hi Socratic,

    Does any halfway objective person really believe that one can calculate the odds of Jesus’ historicity to as precise as 0.008 percent?

    But he simply doesn’t do that! Yes, he gives his “best estimate” probabilities for various pieces of evidence, and then adds everything up, to arrive at that number. But he fully recognizes the uncertainties, so also discusses “most favourable” probabilities.

    His ending conclusion is (p601):

    “And again we don’t have that evidence. Instead, with the evidence we do have, the probability that Jesus existed is somewhere between 1 in 12,500 [the 0.008%] and 1 in 3. In other words, less than 33% and most likely nearer to zero. We should conclude that Jesus probably did not exist.”

    For you to just quote the 0.008% value and comment (from your blog) “There’s simply not enough information … to have anywhere near that degree of precision” and “it’s clear that Carrier is “cooking” his Bayesian books to claim such pseudo-precise “precision.””, is to misrepresent what Carrier has actually said. All he is really claiming is less than a 1 in 3 probability.

    One can, of course, argue about all of Carrier’s estimated values, but the benefit of Bayesian analysis is that Carrier’s values are all explicitly stated. This contrasts with the more usual practice in that field, where vague assertions are used to back up a largely apologetic stance.
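
    To illustrate why stating every estimate explicitly yields a range rather than a single pseudo-precise figure, here is a sketch with purely hypothetical likelihood ratios (my numbers, not Carrier’s): each piece of evidence gets a “best estimate” ratio and a “most favourable to historicity” ratio, and the ratios are combined in odds form.

    from functools import reduce
    from operator import mul

    prior_odds = 1.0  # even prior odds, purely for illustration

    # For each piece of evidence: (best-estimate ratio, most-favourable ratio),
    # each ratio being P(evidence | historical Jesus) / P(evidence | mythical Jesus).
    evidence_ratios = [
        (0.5, 1.0),
        (0.4, 0.8),
        (0.8, 1.0),
        (0.25, 0.5),
    ]

    def posterior_prob(prior_odds, ratios):
        # Multiply the stated ratios into the prior odds, then convert odds to a probability.
        odds = prior_odds * reduce(mul, ratios, 1.0)
        return odds / (1.0 + odds)

    best = posterior_prob(prior_odds, [b for b, _ in evidence_ratios])
    favourable = posterior_prob(prior_odds, [f for _, f in evidence_ratios])
    print(f"best estimate: {best:.3f}, most favourable: {favourable:.3f}")

    Anyone who disputes one of the stated ratios can change it and rerun the calculation, which is exactly the transparency being claimed here; the point is the bound, not the headline number.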

    … all Jesus denialists engage in the argument from silence enough to be fallacious.

    The “argument from silence” is not in itself fallacious, if done sensibly. If one were to argue against the claim that herds of unicorns are grazing the grasslands of Dakota, the counter would be essentially an argument from silence and it would be fair enough.

    Anyhow, the “mythicists” have way more than mere silence. There are glaring aspects of Paul’s letters (the earliest Christian writings) that simply don’t make sense under a historicist perspective. That’s one of the major positive arguments for mythicism.

    Anyhow, my opinion is that over the next couple of decades Jesus mythicism will become increasingly mainstream. The historicist arguments are simply way weaker and the mythicist arguments way stronger than is widely appreciated. The intemperate responses of the more traditional scholars will be fun to watch.

  43. Coel

    “The probability that Jesus existed is somewhere between 1 in 12,500 [the 0.008%] and 1 in 3. In other words, less than 33% and most likely nearer to zero. We should conclude that Jesus probably did not exist”

    Sounds like he is saying that a ~15% probability of existence should, probably, be considered a 0% probability of existence? Did I miss something?

  44. This written in support of Coel, debating those who are obsessed by god.

    Bayes’ style “inverse probability” does not replace what I would call “inverse axiomatics”. If the probability of the existence of the goulougoulou is just .08%, it may as well be zero. And the axioms ought to be changed: believing in Jesus, son of god (not dog, let me point out), is as likely as believing in the Hummingbird God of the Aztecs.

    All right, more people got killed in the name of Jesus than were devoured in the name of the Hummingbird God, so Jesus is a more serious problem, all the more as the Abraham crazies are still around, whereas the Hummingbird crazies are finished.

    That Jesus did not exist is completely obvious to anyone who, as I have, has read all the Roman literature. In Europe, nearly nobody believes in the Jesus-Abraham stuff. Even among the six million Muslims of relatively recent immigration in France, only a fraction, perhaps two million, really believe in the Abrahamist mythology.

    Why is there no more belief in Europe? Well, take France and consider the history of religious strife there: first one million Cathars were exterminated, and the south of France was taken over by the north (under Philippe-Auguste and a crazed Pope). Then the Jews were kicked out. And again, under Saint Louis, and again.

    By the fifteenth century, the Protestants were being hunted in the Alps, and Louis XI had to send the military to remind fanatical Catholics that French Protestants were free to practice their religion.

    In the sixteenth century, Francois I, advanced in many ways, under the influence of the fanatically Catholic Sorbonne burned alive three philosophers for insulting Jesus (or something like that). The century ended with seven religious wars in quick succession and the secret intervention of Spanish Catholic fascism in French affairs. In the following century, Louis XIV threw the Protestants out of France, weakening France and creating the germs of war for centuries to come.

    The revolution of 1789 reinstated Jews and Protestants, and cracked down on the Catholic church. So the French intellectual tradition, say 90% of it, has become very anti-Christian in general, and especially anti-Catholic.

    Thus French philosophers have looked without mercy at what Christianism brought. The verdict? Not much.

    In the USA, it’s different: Christianism, and its Bible was the backbone which justified the holocaust of the Natives. The Bible is indeed full of notions such as “elected people”, “promised land”, “heathens”, and entire population to massacre, just because God said so (and if you don’t obey god, god will torture your son, as god did to the disobedient King David).

    The Bible was also the fundamental cement of American ideology. Thus the American establishment views any attack on the religions of Abraham as an attack on its very foundations.

    If the Bible went, and baseball, and American football, there would be nothing left. What would happen then? Would Americans start to think and debate like the French, and be prone to revolutions?

  45. This debate has morphed, rather interestingly, from being about the varieties of logical thinking and their consequent pitfalls to the power of belief and its hold and organizing effect over groups of people.

    Jeh Tween stated the reality of the intent:

    S1, seizing the altar (podium or pulpit)

    S2, herding the believers

    While I suspect, based on Ockham’s razor, that there was someone who was the basis for Jesus (failed revolutionaries are a dime a dozen, and it would be easier to use a person remembered by those you wish to convince to follow you than to take the chance of inventing one out of whole cloth), the natural order is for each generation to take what is given and bend it to its needs. What originated it might be no more consequential than, but still as necessary as, the grain of sand around which an oyster builds a pearl.
    That is because the authority of an argument generally counts for far more than its logic, the reason being that, as we are all aware, logic is generally quite nebulous and the truth often subjective. So “seizing the altar” and “herding the believers” are about power and politics, not truth and logic.

  46. Any easily identified fallacy is always benign and easily removed, that is, it hurts no one. The three fallacies discussed here {ad ignorantiam, circular, slippery slope} are totally benign. Any hidden fallacy that is often viewed as a great truth is the most malignant, and I will discuss three of them.

    Mf1, the dishonesty fallacy (hogging the pot): there is no way of removing this fallacy by REASON. Two examples:

    T1, the nature constants of THIS universe cannot be derived, so multiverses.
    Reason: here are the ways of deriving these nature constants (https://scientiasalon.wordpress.com/2014/06/05/the-multiverse-as-a-scientific-concept-part-ii/comment-page-1/#comment-3158 ).
    Conclusion: parties keep going on for the pot-holders.

    T2, describing the quark/lepton with string language is impossible, but it is the only game in town.
    Reason: here is the language which describes the quark/lepton precisely (http://putnamphil.blogspot.com/2014/06/a-final-post-for-now-on-whether-quine.html?showComment=1403375810880#c249913231636084948 ).
    Conclusion: hyping for the only Game in town continues.

    Mf2, the pussy-cat fallacy: calling the pussy-cat the “King of the Jungle” (https://scientiasalon.wordpress.com/2015/03/05/science-vs-scientism/comment-page-2/#comment-12661 )

    Mf3, Einstein (useless) fallacy:
    There are two types of theoretical physics.

    Tp1, phenomenologists, who calculate phenomenological equations (such as the SM equations).

    Tp2, those who develop a physics framework by choosing a set of axioms (postulates), as Einstein did with SR (special relativity) and GR (general relativity). This is the true theoretical physics; its theoretical truths need no empirical verification.

    GR has at least three important axiomatic consequences (not predictions): gravitational lensing, gravitational waves and black holes. Of course, those consequences will have physical expressions. Gravitational lensing is now a basis for all CMB calculations. Gravitational waves were indirectly observed in 1974 by Russell Hulse and Joseph Taylor Jr., who tracked the decaying orbit of a binary pulsar and received the Nobel Prize in Physics in 1993. DIRECT observation could be possible with LIGO (the Laser Interferometer Gravitational-Wave Observatory) in a few years. The SHADOW of a black hole could also be observed in a few years.

    Well, all those empirical verifications of GR are truly meaningless for any THEORETICAL truth, beyond comforting the ignorant doubting Tom, and they will not help one bit with GR’s shortcomings, which are theoretically based.

    There are a few SMALL shortcomings:
    S1, the expanding universe was not a consequence; the CC (cosmological constant) was added ad hoc for this.

    S2, dark matter is not a part of GR.

    S3, dark energy is not a part of it.

    Those shortcomings are theoretically based. Now, by ADDING some terms, those problems could be resolved, but then GR is no longer a theoretical framework; it is now a phenomenological equation {fitting the data}, losing its status as theoretical truth.

    None of GR’s great achievements (G-lensing, G-wave and black hole) play any major role in the Structure of this universe. No Cosmology model is based on black hole, and G-wave plays no big role in any model.

    GR has BIG shortcomings, being totally useless in describing the STRUCTURE of this universe. Yet, this hidden fallacy has prevented the advancement of new physics. More, next.

  47. Only indirectly related but I had to chuckle today when I read an article along these lines by someone who works at Duke University’s “Center for Advanced Hindsight.”

  48. “And again we don’t have that evidence. Instead, with the evidence we do have, the probability that Jesus existed is somewhere between 1 in 12,500 [the 0.008%] and 1 in 3. In other words, less than 33% and most likely nearer to zero. We should conclude that Jesus probably did not exist.”

    ———————————

    As someone with a degree in History, as well as philosophy — and my speciality is in late antiquity (I studied under the great Peter Machinist, when he was at Michigan, before he took off for Harvard) — this is precisely how *not* to do history. Reminds me of the way that game-theorists are destroying political science and economics.

    Part of the more general hard-science envy that plagues the social sciences and has rendered so much of them less and less interesting over the years.

    Yuck.
