The strange phenomenon of the cult of facts: three case studies

by Massimo Pigliucci

I am a scientist, so I appreciate the importance of verifiable facts. Moreover, my empirical research was in quantitative genetics, so I have a salutary respect for quantification and statistical analysis.

But I’m also a philosopher, which means I recognize that there simply is no such thing as a fact outside of a given theoretical framework. [1] Darwin recognized this too, by the way. He famously wrote to his friend Henry Fawcett: “How odd it is that anyone should not see that all observation must be for or against some view if it is to be of any service!”

I’m telling you all this because I think we are currently suffering from what Leon Wieseltier has recently called “the cult of facts” [2]. Wieseltier was complaining about famed data cruncher Nate Silver, who has been referring to opinion journalism as, and I quote, “bullshit.”

Case 1: Nate Silver, bullshit and opinion journalism

Silver is justly famous for his Bayesian number crunching and meta-polling which, among other things, has led him to formulate increasingly accurate predictions about Presidential and Congressional elections (and which has now gotten him a gig at ESPN, where he is turning his talent to what he loves most: baseball analysis) [3].

Silver’s complaint about opinion journalism is based on his perception that op-ed columnists at outlets like The New York Times, The Washington Post and so forth are predictable and repetitive, which in turn is because they have “very strong ideological priors [which prevent them from] evaluating the data as it comes in [and] doing a lot of [original] thinking.”

Like Wieseltier, I am aware that the state of public intellectualism and opinion making isn’t exactly without problems. But, with Wieseltier, I find it oddly naive of Silver to talk as if “ideological priors” (otherwise known as beliefs about the world) weren’t inevitable in anyone (including Silver), and — within limits — were not actually a good thing.

Moreover, as a fellow Bayesian, Silver ought to know that his own analogy is ironically flawed: in Bayesian analysis you always begin with priors, and the whole point is to revise those priors as new data come in. That is, embedded in the very fabric of the Bayesian approach [4] is that you start with beliefs, you add data (collected on the basis of your beliefs!), and end up with (likely modified) beliefs. You just can’t take the belief component out of the analysis: it is integral to it, both affected by the data one gathers and determining which bits of information “out there” actually get to count as data.
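To make the mechanics concrete, here is a minimal sketch of a Bayesian update in Python: a toy beta-binomial model of a candidate’s poll numbers. This is my own illustration, not Silver’s actual model, and every number in it is invented.

```python
# A minimal beta-binomial update, illustrating how Bayesian priors work.
# A prior belief goes in, data revise it, and a (modified) belief comes
# out the other end. All numbers here are invented for illustration.

# Prior belief about a candidate's support: centered on 50%, carrying
# the weight of roughly 20 previously observed "pseudo-respondents".
prior_a, prior_b = 10.0, 10.0

# A new poll comes in: 120 of 200 respondents favor the candidate.
successes, failures = 120, 80

# Conjugate update: the posterior is again a beta distribution.
post_a = prior_a + successes
post_b = prior_b + failures

prior_mean = prior_a / (prior_a + prior_b)
post_mean = post_a / (post_a + post_b)

print(f"prior mean support:     {prior_mean:.3f}")  # 0.500
print(f"posterior mean support: {post_mean:.3f}")   # ~0.591
```

Note that the prior never drops out of the calculation; the evidence merely reweighs it. That is precisely the feature of his own method that Silver’s complaint about “ideological priors” overlooks.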

As Wieseltier astutely observes, “Silver wishes to impugn not only the quality of opinion journalism, he wishes to impugn also its legitimacy. The new technology, which produces numbers the way plants produce oxygen, has inspired a new positivism, and he is one of its princes. He dignifies only facts … He does not recognize the calling of, or grasp the need for, public reason.”

And that is the crucial issue. It is fine — indeed, a good idea — to criticize individual opinionators when they get their facts wrong, or when their reasoning is faulty. That is the essence of democratic discourse in an open society. It is downright reactionary, however (regardless of whether Silver himself intends it that way), to suggest that smart and well-read people — we used to call them intellectuals — have become irrelevant because all we need to grasp the truth is tables and graphs.

Wieseltier accuses Silver of attempting to impose an auctoritas ex numero as the final arbiter for our judgments. The number of crucial issues that simply do not lend themselves to this sort of neo-positivism is staggering, with Wieseltier himself citing whether gays should have the right to marry, the scope of the social net, and the question of whether we have a moral duty to intervene in a case of genocide as obvious examples. Note that he is not saying that facts are irrelevant to these questions: social nets and military interventions in other countries are costly affairs, both in terms of financial and human resources (though it’s harder to imagine what sort of fact would be relevant to the issue of gay marriage). The point is — in perfect Humean fashion [5] — that facts will help us arrive at judgments, but will not uniquely determine those judgments. Our values and our critical reasoning are the additional ingredients entirely left out of Silver’s narrow view.

Wieseltier points out another bit of revealing naiveté on the part of Silver: his complaint that commentators like Paul Krugman or George Will are “repetitive,” which Silver again attributes to the rigidity of their (ideological) priors. But as Wieseltier immediately notes, Krugman, Will and others are in the business of public reasoning and persuasion, and the latter requires repetition. Indeed, for someone who is so much into evidence-based assertions, Silver would do well to check the cognitive science literature on how people change their minds: they rarely do it on the spot, as soon as they hear a beautifully reasoned argument (or pore over a cleverly put together infographic). People change their minds — when they do — because of multiple exposures to a given idea, from multiple sources and in various fashions. So it isn’t enough to make sure one gets his facts straight and his reasons well articulated. One also has to write elegantly and convincingly. And one has to do it over and over, if one wishes to accomplish anything at all.

Wieseltier ends his piece by asking whether numeracy is truly the American public’s most pressing problem. Seems to me that a vibrant democratic discourse could use more numeracy among its participants, and Nate Silver has certainly contributed his share in acquainting people with the power of data crunching. But that’s peanuts compared to the hurdle of fostering critical thinking abilities without which no amount of data crunching will help move society forward.

Case 2: the Ngram viewer

A second instructive case comes courtesy of a book review that appeared recently in The New Yorker, penned by Mark O’Connell, who was commenting on Uncharted: Big Data as a Lens on Human Culture, by Erez Aiden and Jean-Baptiste Michel [6].

Aiden and Michel have apparently single-handedly founded a new field, which goes by the unwieldy name of “culturomics,” i.e. the quantitative study of culture. The promise is the usual one: out with the old-fashioned humanistic approach to culture; in with the scientific and quantitative method.

As in the case of Wieseltier’s criticism of Silver, O’Connell too is (rightly) not dismissive of the new approach. After all, it is only reasonable to welcome a better handle on the facts of whatever it is one wishes to study or understand. The issue, rather, is one of emphasis, and of what exactly is being promised vs what can be delivered.

The centerpiece of Aiden and Michel’s book is the tool they invented for Google back in 2010: the Ngram viewer, a piece of software that allows you to graph the recurrence of a given word or phrase in Google’s gigantic library of scanned books.
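The core mechanic of such a viewer is conceptually simple. Here is a toy sketch in Python, emphatically not Google’s actual implementation, that computes the yearly relative frequency of a phrase over a made-up miniature corpus:

```python
from collections import defaultdict

def ngram_frequencies(corpus, phrase):
    """Yearly relative frequency of `phrase` in a toy corpus.

    `corpus` is an iterable of (year, text) pairs; the real Ngram viewer
    runs over Google's scanned-book data, which this merely imitates.
    """
    target = phrase.lower().split()
    n = len(target)
    hits = defaultdict(int)
    windows = defaultdict(int)
    for year, text in corpus:
        words = text.lower().split()
        spans = max(len(words) - n + 1, 0)
        windows[year] += max(spans, 1)  # avoid division by zero
        for i in range(spans):
            if words[i:i + n] == target:
                hits[year] += 1
    return {year: hits[year] / windows[year] for year in sorted(windows)}

# An invented miniature "library", for illustration only.
corpus = [
    (1936, "the exhibition of degenerate art opened in munich"),
    (1937, "critics denounced marc chagall and other modern painters"),
    (1943, "few german books mentioned marc chagall at all"),
]
print(ngram_frequencies(corpus, "marc chagall"))
```

The counting, in other words, is the easy part. So, with all that data and computing power at their disposal, what did Aiden and Michel discover about the inner workings of human culture?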

Well, while investigating the infamous Nazi campaign against “degenerate [i.e., modern] art,” Aiden and Michel looked up Marc Chagall’s trajectory in Ngram and concluded that “between 1936 and 1943, Marc Chagall’s full name appears only once in our German book records.” Which, of course, has been well known (qualitatively) to historians for a while. They also discovered that “Marcel Proust became famous for writing good books” (really?). They further quantitatively documented that Hitler is the most famous person born in the past two centuries, which led them to the apparently novel conclusion that “darkness, too, lurks among the n-grams, and no secret darker than this: Nothing creates fame more efficiently than acts of extreme evil. We live in a world in which the surest route to fame is killing people, and we owe it to one another to think about what that means.” Just one question: for whom, exactly, was this a secret?

And therein lies the problem. It’s not that Ngram isn’t a fun and potentially even somewhat useful tool. Nor is the point that humanities scholars cannot benefit from more scientific literacy and, when appropriate, some statistical training. But none of this amounts to the hyperbolic, TED-like statements of Aiden and Michel reported by O’Connell: “this big data revolution is about how humans create and preserve a historical record of their activities. Its consequences will transform how we look at ourselves. It will enable the creation of new scopes that make it possible for our society to more effectively probe its own nature. Big data is going to change the humanities, transform the social sciences, and renegotiate the relationship between the world of commerce and the ivory tower.” I seriously doubt it, but at any rate I’d like to see the data backing up this overly grandiose statement before accepting the claim at face value.

Case 3: Who’s assessing the assessors?

If you are a faculty member at a state university anywhere in the United States you will recognize the term “assessment,” and you will likely have strong feelings about it. It refers to the latest legislative and administrative fad to engineer the impression that the powers that be actually give a damn about public education — at the same time as legislators keep slashing funds for it with gusto, and administrators keep hiring people like themselves and granting them handsome salaries, the benefits of which to students are far from clear. [7]

The whole idea behind assessment exercises is that we need quantitative (of course) ways to figure out if our students are learning what we think we are teaching them. As one of the administrators at my own university keeps repeating, “I am data driven.” Well, so am I, truly, though I prefer the broader expression “evidence driven.” Nonetheless, I thought that giving students assignments in the form of tests and papers, and then grading said students on those assignments, was precisely how we check the degree to which students are learning at least a certain percentage of what we teach them.

Apparently not. Instead, we need to spend precious faculty time, and of course invest in expensive, custom-made (and usually awfully designed) software, to “assess.” But as Steven Hales has pointed out in an entirely data-free editorial in The Chronicle of Higher Education [8], the whole approach quickly degenerates into epistemic skepticism and eventually into downright epistemic suicide.

Here is Hales’ satirical analysis: “the outcomes-assessment tool faces the same dilemma that grades [do]: Either (1) we know that it accurately measures the degree to which a student has mastered the course material and achieved the objectives of the course, or (2) we do not know. … Obviously we can’t use the outcomes-assessment tool itself to prove its own veracity, since that, again, is circular. … the demand that we prove the reliability of every method of gaining beliefs leads directly to a vicious regress. Ultimately we are left with skepticism: We have no knowledge at all.”

If only state legislators and administrators had bothered to take a course in epistemology, or introductory logic!

There is more: whenever I inquire with administrators about the ultimate purpose of assessment exercises, I eventually get them to admit that what they really want is to show state legislators that the university is improving by the only two measures that seem to have any traction with politicians: graduation rates and time to degree completion.

Now, of course nobody wants students to drop out of college, if it can be at all avoided. And nobody wants students to spend an extra minute beyond what is necessary in college, because — given the outrageous cost of tuition — they’d be sinking further and further into perilous debt before they even get their first job.

But surely those can’t be the only measures that count! To show this, my favorite retort to administrators is to use a reductio argument (again, those pesky intro philosophy courses!): if we really care only about graduation rates and time to completion, then there is a sure way to guarantee that we have one hundred percent graduation and that all students finish their degree in exactly four years. All we need to do is — ironically — to drop any assessment, including grades, and pass every student in every course, regardless of how well they have mastered the material.

“Surely you’re joking, Massimo!” immediately responds the somewhat flabbergasted administrator. But I’m not, or at least only in part. (Good philosophical points can often be made with jokes anyway. [9]) The point is that clearly better graduation rates and shorter time to completion are not what we are after. What we are after is a thoughtful education that allows our young to both reflect on the kind of life they want and develop the skills necessary to become thoughtful citizens of a vibrant democracy. Oh, and yes, to be able to find a job too.

But these latter goals are very difficult to quantify, contra the ease of estimating graduation rates and time to completion. So here is a perfect example of a situation where quantitative data are not only unhelpful, but positively harmful. That’s because the data are being gathered in response to a very poorly thought-out question. It is simply astounding that we even need to have this discussion, and moreover that faculty all over the country are — at least at the moment — surely on the losing side of said discussion (which means, of course, that so are the students).

As I made clear at the beginning of this essay, I am not data-phobic, anti-science, or a Luddite. I am simply trying to resist the latest quantification fad whenever it doesn’t help, or in fact hampers, what we are trying to do. (And I haven’t even mentioned the app-based obsession with quantifying selves! [10]) Bayesian statistics, Ngram, and even (some) assessment exercises are surely tools we want to keep handy in our conceptual and technological toolbox. But the tools by themselves are no panacea, and indeed in some cases are simply irrelevant to the task, or downright harmful to it. David Hume once said that a wise person proportions his beliefs to the evidence. True, but a wise person is also capable of formulating good questions and then choosing the best approaches to answer them, rather than the other way around.

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[1] See section 4 of Jim Bogen’s “Theory and observation in science,” Stanford Encyclopedia of Philosophy.

[2] “The emptiness of data journalism,” by Leon Wieseltier, The New Republic, 19 March 2014.

[3] See Silver’s FiveThirtyEight site.

[4] See The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy, by Sharon Bertsch McGrayne (2011).

[5] Here is how David Hume framed the problem in his A Treatise of Human Nature (1739): “In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it should be observed and explained; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason.”

[6] “Bright Lights, Big Data,” by Mark O’Connell, The New Yorker, 20 March 2014.

[7] It is easy to find both data and opinion pieces backing up these statements. For instance: “State of American Higher Education: More Adjuncts, More Administration, More Tuition, Fewer Full-time Faculty, and Less State Support!,” by Anthony Picciano, CUNY Academic Commons, 17 February 2014; The Fall of the Faculty, by Benjamin Ginsberg, Oxford University Press, 2011; “Administrator Hiring Drove 28% Boom in Higher-Ed Work Force, Report Says,” by Scott Carlson, The Chronicle of Higher Education, 5 February 2014; and many, many more.

[8] “Who’s Assessing the Assessors’ Assessors?,” by Steven Hales, The Chronicle of Higher Education, 11 March 2013.

[9] See Plato and a Platypus Walk into a Bar…: Understanding Philosophy Through Jokes, by Thomas Cathcart and Daniel Klein, Penguin, 2007.

[10] For an insightful commentary see: “Quantified Self: The algorithm of life,” by Josh Cohen, Prospect Magazine, 5 February 2014.

181 thoughts on “The strange phenomenon of the cult of facts: three case studies”

  1. Such an important article. It’s so true. The unfortunate fact is that technology [among other things, as you dutifully pointed out] operates on a bell curve as far as usefulness for society goes…. The more advanced technology gets, the less the masses “have to” think. And, consequently, the easier and more frequently they are able to be “fed” bull$hit…

  2. Coel,
    a huge strawman of supposed claims that Dawkins has not actually made, and then ritually burning the strawman.
    That is not at all surprising, considering his work. It is hard to tell the difference.

  3. No, it’s very easy, unless you’re deliberately trying not to. And Dawkins’s work and intellectual contributions will far outlast those of his critics (The Selfish Gene being a landmark in the field for example).

  4. Coel, I’m afraid I have to disagree here. Dawkins does have a habit of making stronger statements than he apparently means. As for the Selfish Gene, it is high time that people stop referring to it as a work of scholarship. It was a (very good) popularization of ideas developed by others (Hamilton, Williams, etc.), so no, it will not be a landmark in the field, if by field you mean evolutionary biology.

  5. Hi Massimo, works of synthesis, of review, works that teach, are still part of “scholarship” surely? The Selfish Gene has been cited over 3000 times in the primary scientific literature, which is a large number by any standards and qualifies it as a landmark in the field. Few books have been more influential in evolutionary biology over the last 30 years.

  6. Coel,

    I’d like to see where those 3000 citations come from. But regardless, no, Dawkins’s was not a (technical) survey of the scientific literature, it was a book for the general public. Very well written, and very successful. But that’s all it was, regardless of how many times it has been cited.

  7. Coel,

    it would take me a whole week and ten pages to argue against the concept of “selfish gene.”

    On the biology, I would be very cautious. What is selected (ever so imperfectly) is the phenotype. There are thousands of genes, all packed tightly as a motley crowd in what comes up for selection. Add spandrels, and sheer errors. To argue that the “selfish gene” wins out is, to me, a dubious proposition.

    An example: speciation on the Galapagos (1 => 13 species so far) does not seem to follow beak size (though this trait varies with environmental pressure), but errors in the song, according to the Grants. Once this error is enshrined, the finch populations no longer interbreed, though they could.

    In fact, speciation may be the result of a lack of selective pressure – for the latter quickly eliminates deviant genes. It is only when no pressure exists that genetic variation can emerge, seeking a new niche to construct. Remember genetic drift, which Cavalli-Sforza says was very important in small hominid populations.

    Darwin, who came from breeding pigeons, was selecting for one trait, and the analogy stuck when applied to the wild. I’m not sure that the wild selects so narrowly.

    I could live with the biology, and would be keen to learn more. The trouble starts with the social implications of the catchy concept “selfish gene.” It struck a chord with a Zeitgeist that pushed autonomy. Being socially selfish was seen as “natural,” behaving like genes do, and therefore inevitable. The selfish gene, to me, is a replay, in modern garb, of Spencer’s “survival of the fittest” and the ensuing “social Darwinism.”

    It is for this, and only for this reason that I’d like the social discourse cleansed of the concept.

  8. Yes, it was a book accessible to the general public, but that’s not “all it was”: it was also a book that has been highly influential in the field, from undergrads to PhD students to researchers.

  9. Brain, not sure who you are referring to. What ad hominem attack? Whose uninformed brain? Please be careful with the words you use; your latest comment already only barely made it through my civility filter.

  10. I disagree. It is my field, and I have trained a number of undergrads and PhD students, never using The Selfish Gene (and neither have any of my colleagues that I’m aware of).

  11. Hi Aldo,
    You’re entitled to your opinion about the “selfish gene” concept, but the field of evolutionary biology disagrees with you. Yes, things like genetic drift are important, but that in no way negates selfish-gene ideas.

    On the social side, the idea that “being socially selfish was seen as ‘natural,’ behaving like genes do, and therefore inevitable” is only prevalent among those who DO NOT UNDERSTAND what The Selfish Gene is actually about. Anyone who thinks it is advocating “social Darwinism” either hasn’t read or understood the book, or is deliberately misrepresenting it.

  12. Hi Coel,
    I think we should both take Massimo’s warning quite seriously. I want to appeal to you that we both drop the theist/anti-theist contest. You and I have clashing belief systems and we cannot resolve the clash in this blog, and nor should we try.

    My appeal to you is that:
    1) we respect each other’s belief systems in the way we word our comments,
    2) we don’t turn the comments on this blog into a battle ground between belief systems,
    3) we refrain from comments that exacerbate each other’s sensitivities,
    4) we comment in a spirit of goodwill,
    5) we strive for understanding and not point scoring or condemnation.

    This is my undertaking to you and Massimo can hold me to this. I sincerely hope you will join me in this undertaking and I hope DM does as well.

  13. We take false statements and supernatural beliefs as factual!? Why politeness! Silly ideas deserve to be called as such in professional forums.

    Online bullies who name-call and attack people need to be policed, as in any social setting. False statements need to be proven wrong. Individuals should never be mentioned in a comment, only ideas.

    The critique of “style/tone” is just implicit censorship and solipsism.

  14. Once again, what and of whom are we talking about? I think I’m doing a pretty good job at balancing things between keeping the conversation civil and yet allowing even sharp disagreement. Feel free to email me directly if you want to get specific.

  15. Comments discussing/attacking any individual and naming anyone are false. There is no need to mention anyone’s name in a comment; we are discussing ideas, not people. It’s not impolite, it is factually irrelevant. In fact, the no-free-will research says no one has control of their behavior anyway.

  16. BMM,
    “Why politeness!”
    Massimo has just called for civility in comments.

    As for the rest, I have given an undertaking to abide by the norms of civilized behaviour that I listed. That is what I will do. I hope others join me in this.

    How you behave is between you and Massimo. For my part I will never reply to comments that contravene these rules of civilized behaviour. You will simply be ignored. Believe me, calm, civilized, polite, well thought out arguments always win the day in the eyes of the majority.

  17. Politeness is based on moralizing views. Such statements are solipsism. Is subjective moralizing appropriate for professional discussions? Do we want doctors, engineers, airplane pilots to behave according to subjective, moralistic beliefs?

    It is easy to avoid the logical error of personal attacks – never mention anyone’s name.

  18. @ Brain Molecule Marketing — “Politeness is based on moralizing views. Such statements are solipsism.”

    Solipsism is the belief that other minds do not exist. Politeness assumes that other people do exist. So no, that’s factually wrong.

    “Do we want doctors, engineers, airplane pilots to behave according to subjective, moralistic beliefs?”

    Yes, we do. All professions leave room for judgment. All professions have ethical codes of conduct that are not based in pure reason.

  19. So we have moved from discussion of ideas and facts to manners. A silly, but common, defense by the uninformed who are afraid of new ideas. It smothers real learning and exchange.

    Recent research shows that patients who “like” their docs have far worse health outcomes, e.g., die sooner, etc.

  20. @ Brain Molecule Marketing — You pretty consistently fail to respond to direct questions. Do you still maintain that politeness “is solipsism”? What is your argument for making that connection? You have also failed to give an adequate account of your logical positivist beliefs. Many of your statements seem to imply that you discount even philosophical debate on ethics or morals. So is it possible, in your view, to think rationally about morality? Are matters of personal subjective preference, like “Van Gogh was a great painter,” forbidden in your view? Can you prove all these claims of yours? If so, please do so.

  21. What do local cultural beliefs about bad behavior have to do with anything? Add in the medical fact of no free will and these ideas are just obsolete – contradicted by biology, physiology, brain science, and animal ethology every day. There is no conscious choice; each body and brain does what it must to optimize itself. Broken brains do harmful things. The folklore of badness-morality is just a myth and more magical and wishful thinking.

  22. Until now, you haven’t stated a single fact to justify your opinion on logical positivism – which, as a matter of fact, is logically inconsistent and was proved to be so many years ago.
    Methinks you should read more on the relations between logical positivism and language – boy, you’re gonna be surprised! 😉

  23. I mentioned Cizek’s work; search “free will” and everyone can make their own minds up. lol I only read peer-reviewed brain research reports… philo is just theology… words, word play, and natural-language semantics inside of semantics… ugh.

  24. Free will is too controversial an issue to settle a discussion like this one (or any discussion, I think).
    You keep assuming that logical positivism and language are at odds with each other. Well, as far as I understand, they are not. In fact, logical positivists were very afraid of going beyond the results of experimentation, and they were distrustful of scientific explanations because they thought those kinds of talk, if taken too far, would lead to metaphysics (they also thought all metaphysics was bad philosophy).
    To avoid all those “nasty” things they held that, instead of talking about relations between objects (that’s metaphysics for them!), we should talk about relations between words/phrases.

    “logical positivism: In this course, logical positivism and logical empiricism are used interchangeably. These terms refer to an ambitious, language-centered version of empiricism that arose in Vienna and Berlin and became the standard view in philosophy of science through the middle of the 20th century. Under the pressure of criticism (largely from within), the positivist program became somewhat more moderate over the years”.
    Taken from Jeffrey L. Kasser, Philosophy of Science:
    http://www.thegreatcourses.com/tgc/courses/course_detail.aspx?cid=4100

  25. Vision occupies so much of the mammal/primate brain that this kind of basic research is very good. Philosophical and faith-based counter-views are encouraged and welcome. It is always best to discuss specific studies instead of “science.”

    “Abstract
    Visual input often arrives in a noisy and discontinuous stream, owing to head and eye movements, occlusion, lighting changes, and many other factors. Yet the physical world is generally stable; objects and physical characteristics rarely change spontaneously. How then does the human visual system capitalize on continuity in the physical environment over time? We found that visual perception in humans is serially dependent, using both prior and present input to inform perception at the present moment. Using an orientation judgment task, we found that, even when visual input changed randomly over time, perceived orientation was strongly and systematically biased toward recently seen stimuli. Furthermore, the strength of this bias was modulated by attention and tuned to the spatial and temporal proximity of successive stimuli. These results reveal a serial dependence in perception characterized by a spatiotemporally tuned, orientation-selective operator—which we call a continuity field—that may promote visual stability over time.”

  26. Thought-provoking, with a scientist’s philosophical perspective. You got me thinking re data journalism, history, heritage, and digital humanities, so thank you.
