Back to Square One: toward a post-intentional future

by Scott Bakker

“… when you are actually challenged to think of pre-Darwinian answers to the question ‘What is Man?’ ‘Is there a meaning to life?’ ‘What are we for?’, can you, as a matter of fact, think of any that are not now worthless except for their (considerable) historic interest? There is such a thing as being just plain wrong and that is what, before 1859, all answers to those questions were.” (Richard Dawkins, The Selfish Gene, p. 267)

Biocentrism is dead for the same reason geocentrism is dead for the same reason all of our prescientific theories regarding nature are dead: our traditional assumptions simply could not withstand scientific scrutiny. All things being equal, we have no reason to think our nature will conform to our prescientific assumptions any more than any other nature has historically. Humans are prone to draw erroneous conclusions in the absence of information. In many cases, we find our stories more convincing the less information we possess [1]! So it should come as no surprise that the sciences, which turn on the accumulation of information, would consistently overthrow traditional views. All things being equal, we should expect any scientific investigation of our nature will out and out contradict our traditional self-understanding.

Everything, of course, turns on all things being equal — and I mean everything. All of it, the kaleidoscopic sum of our traditional, discursive human self-understanding, rests on the human capacity to know the human absent science. As Jerry Fodor famously writes:

“if commonsense intentional psychology really were to collapse, that would be, beyond comparison, the greatest intellectual catastrophe in the history of our species; if we’re that wrong about the mind, then that’s the wrongest we’ve ever been about anything. The collapse of the supernatural, for example, didn’t compare; theism never came close to being as intimately involved in our thought and practice — especially our practice — as belief/desire explanation is.” [2]

You could say the scientific overthrow of our traditional theoretical understanding of ourselves amounts to a kind of doomsday, the extinction of the humanity we have historically taken ourselves to be. Billions of “selves,” if not people, would die — at least for the purposes of theoretical knowledge!

For years now I’ve been exploring this “worst case scenario,” both in my novels and in my online screeds. After I realized the penury of the standard objections (and as a one-time Heideggerean and Wittgensteinian, I knew them all too well), I understood that such a semantic doomsday scenario was far from the self-refuting impossibility I had habitually assumed [3]. Fodor’s ‘greatest intellectual catastrophe’ was a live possibility — and a terrifying one at that. What had been a preposterous piece of scientistic nonsense suddenly became the most important problem I could imagine. Two general questions have hounded me ever since. The first was, What would a postintentional future look like? What could it look like? The second was, Why the certainty? Why are we so convinced that we are the sole exception, the one domain that can be theoretically cognized absent the prostheses of science?

With reference to the first, I’ll say only that the field is quite lonely, a fact that regularly amazes me, but never surprises [4]. The second, however, has received quite a bit of attention, albeit yoked to concerns quite different from my own.

So given that humanity is just another facet of nature, why should we think science will do anything but demolish our traditional assumptions? Why are all things not equal when it comes to the domain of the human? The obvious answer is simply that we are that domain. As humans, we happen to be both the object and the subject of the domain at issue. We need not worry that cognitive science will overthrow our traditional self-understanding, because, as humans, we clearly possess a privileged epistemic relation to humans. We have an “inside track,” you could say.

The question I would like to explore here is simply, Do we? Do we possess a privileged epistemic relation to the human, or do we simply possess a distinct one? Being a human, after all, does not entail theoretical knowledge of the human. Our ancestors thrived in the absence of any explicit theoretical knowledge of themselves — luckily for us. Moreover, traditional theoretical knowledge of the human doesn’t really exhibit the virtues belonging to scientific theoretical knowledge. It doesn’t command consensus. It has no decisive practical consequences. Even where it seems to function practically, as in Law say, no one can agree how it operates, let alone just what is doing the operating. Think of the astonishing epistemic difference between mathematics and the philosophy of mathematics!

If anything, traditional theoretical knowledge of the human looks an awful lot like prescientific knowledge in other domains. Like something that isn’t knowledge at all.

Here’s a thought experiment. Try to recall “what it was like” before you began to ponder, to reflect, and most importantly, before you were exposed to the theoretical reflections of others. I’m sure we all have some dim memory of those days, back when our metacognitive capacities were exclusively tasked to practical matters. For the purposes of argument, let’s take this as a crude approximation of our base metacognitive capacity, a ballpark of what our ancestors could metacognize of their own nature before the birth of philosophy.

Let’s refer to this age of theoretical metacognitive innocence as “Square One,” the point where we had no explicit, systematic understanding of what we were. In terms of metacognition, you could say we were stranded in the dark, both as a child and as a pre-philosophical species. No Dasein. No qualia. No personality. No normativity. No agency. No intentionality. I’m not saying none of these things existed (at least not yet), only that we had yet to discern them via reflection. Certainly we used intentional terms, talked about desires and beliefs and so on, but this doesn’t entail any conscious, theoretical understanding of what desires and beliefs and so on were. Things were what they were. Scathing wit and sour looks silenced those who dared suggest otherwise.

So imagine this metacognitive dark, this place you once were, and where a good number of you, I am sure, believe your children, relatives, and students — especially your students — still dwell. I understand the reflex is to fill this cavity, clutter it with a lifetime of insight and learning, to think of the above as a list of discoveries (depending on your intentional persuasion, of course), but resist, recall the darkness of the room you once dwelt in, the room of you, back when you were theoretically opaque to yourself.

But of course, it never seemed “dark” back then, did it? Ignorance never does, so long as we remain ignorant of it. If anything, ignorance makes what little you do see appear to be so much more than it is. If you were like me, anyway, you assumed that you saw pretty much everything there was to see, reflection-wise. Since your blinkered self-view was all the view there was, the idea that it comprised a mere peephole had to be preposterous. Why else would the folk regard philosophy as obvious bunk (and philosophers as unlicensed lawyers), if not for the wretched poverty of their perspectives?

The Nobel Laureate Daniel Kahneman calls this effect “what you see is all there is,” or WYSIATI. As he explains:

“You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” [5]

The idea, basically, is that our cognitive systems often process information blind to the adequacy of that information. They run with what they get, present hopeless solutions as the only game in town. This is why our personal Square One, benighted as it seems now, seemed so bright back then, and why “darkness,” perhaps our most common metaphor for ignorance, needs to be qualified. Darkness actually provides information regarding the absence of information, and we had no such luxury as a child or as a species. We lacked access to any information tracking the lack of information: the “darkness” we had to overcome, in other words, was the darkness of neglect. Small wonder our ignorance has felt so enlightened at every turn! Only now, with the wisdom of post-secondary education, countless colloquia, and geriatric hindsight can we see how little of ourselves we could see back then.

But don’t be too quick to shake your head and chuckle at your youthful folly, because the problem of metacognitive neglect obtains as much in your dotage as in your prime. You agree that we suffered metacognitive neglect both as pretheoretical individuals and species, and that this was why we failed to see how little we could see. This means 1) that you acknowledge the extreme nature of our native metacognitive incapacity, the limited and — at least in the short term — intractable character of the information nature has rendered available for reflection; and 2) that this incapacity applies to itself as much as to any other component of cognition. You acknowledge, in other words, the bare possibility that you remain stranded at Square One.

Thanks to WYSIATI, the dark room of self-understanding cannot but seem perpetually bright. Certainly it feels “different this time,” but given the reflexive nature of this presumption, the worry is that you have simply fallen into a more sophisticated version of the same trap. Perhaps you simply occupy a more complicated version of Square One, a cavity “filled with sound and fury,” but ultimately signifying nothing.

Raising the question, Have we shed any real theoretical light on the dark room of the human soul? Or does it just seem that way?

The question of metacognitive neglect has to stand among the most important questions any philosopher can ask, given that theoretical reflection comprises their bread and butter. This is even more the case now that we are beginning to tease apart the neurobiology of metacognition. The more we learn about our basic metacognitive capacities, the more heuristic, error-prone, and fractionate they become [6]. The issue is also central to the question of what the sciences will likely make of the human, posed above. If we haven’t shed any real traditional light on the human room, then it seems fair to say our relation to the domain of the human, though epistemically distinct, is not epistemically privileged, at least not in any way that precludes the possibility of Fodor’s semantic doomsday.

So how are we to know? How might we decide whether we, despite our manifest metacognitive incapacity, have groped our way beyond Square One, that the clouds of incompatible claims comprising our traditional theoretical knowledge of the human actually orbit something real? What discursive features should we look for?

Capable of commanding consensus can’t be one of them. This is the one big respect where traditional theoretical knowledge of the human fairly shouts Square One. Wherever you find intentional phenomena theorized, you find interminable controversy.

Practical efficacy has promise — this is where Fodor, for instance, plants his flag. But we need to be careful not to conflate (as he does) the efficacy of various cognitive modes with the theoretical tales we advance to explain them. No one needs an explicit theory of rule-following to speak of rules. Everyone agrees that rules are needed, but no one can agree what rules are. If the efficacy belonging to the phenomena requiring explanation — the efficacy of intentional terms — attests to the efficacy of the theoretical posits conflated with them, then each and every brand of intentionalism would be a kind of auto-evidencing discourse. The efficacy of Square One intentional talk evidences only the efficacy of Square One intentional talk, not any given theory of that efficacy, most of which seem, quite notoriously, to have no decisive practical problem-solving power whatsoever. Though intentional vocabulary is clearly part of the human floor-plan, it is simply not the case that we’re “born mentalists.” We seem to be born spiritualists, if anything! [7]

Certainly a good number of traditional concepts have been operationalized in a wide variety of scientific contexts — things like “rationality,” “representation,” “goal,” and so on — but they remain opaque, and continually worry the naturalistic credentials of the sciences relying on them. In the case of cognitive science, they have stymied all attempts to define the domain itself — cognition! And what’s more, given that no one is denying the functionality of intentional concepts (just our traditional accounts of them), the possibility of exaptation [8] should come as no surprise. Finding new ways to use old tools is what humans do. In fact, given Square One, we should expect to continually stumble across solutions we cannot decisively explain, much as we did as children.

Everything turns on understanding the heuristic nature of intentional cognition, how it has adapted to solve the behavior of astronomically complex systems (including itself) absent any detailed causal information. The apparent indispensability of its modes turns on the indispensability of heuristics more generally, the need to solve problems given limited access and resources. As heuristic, intentional cognition possesses what ecological rationalists call a “problem ecology,” a range of adaptive problems [9]. The indispensability of human intentional cognition (upon which Fodor also hangs his argument) turns on its ability to solve problems involving systems far too complex to be economically cognized in terms of cause and effect. It’s all we’ve got.

So we have to rely on cause-neglecting heuristics to navigate our world. Always. Everywhere. Surely these cause-neglecting heuristics are among the explananda of cognitive science. Since intentional concepts often figure in our use of these heuristics, they will be among the things cognitive science eventually explains. And then we will finally know what they are and how they function — we will know all the things that deliberative, theoretical metacognition neglects.

The question of whether some kind of explanation over and above this — famously, some explanation of intentional concepts in intentional terms — is required simply becomes a question of problem ecologies. Does intentional cognition itself lie within the problem ecology of intentional cognition? Can the nature of intentional concepts be cashed out in intentional terms?

The answer has to be no — obviously, one would think. Why? Because intentional cognition solves by neglecting what is actually going on! As the sciences show, it can be applied to various local problems in various technical problem ecologies, but only at the cost of a more global causal understanding. It helps us make some intuitive sense of cognition, allows us to push in certain directions along certain lines of research, but it can never tell us what cognition is simply because solving that problem requires the very information intentional cognition has evolved to do without. Intentional cognition, in other words, possesses ecological limits. Lacking any metacognitive awareness of those limits, we have the tendency to apply it to problems it simply cannot solve. Indeed, our chronic misapplication of intentional cognition to problem-ecologies that only causal cognition could genuinely solve is one of the biggest reasons why science has so reliably overthrown our traditional understanding of the world. The apocalyptic possibility raised here is that traditional philosophy turns on the serial misapplication of intentional cognition to itself, much as traditional religion, say, turns on the serial misapplication of intentional cognition to the world.

Of course intentional cognition is efficacious, but only given certain problem ecologies. This explains not only the local and limited nature of its posits in various scientific contexts, but why purely philosophical accounts of intentional cognition possess no decisive utility whatsoever. Despite its superficial appeal, then, practical efficacy exhibits discursive features entirely consistent with Square One (doomsday). So we need to look elsewhere for our redeeming discursive feature.

But where? Well, the most obvious place to look is to science. If our epistemic relation to ourselves is privileged as opposed to merely distinct, then you would think that cognitive science would be revealing as much, either vindicating our theoretical metacognitive acumen or, at the very least, trending in that direction. Unfortunately, precisely the opposite is the case. Memory is not veridical. The feeling of willing is inferential. Attention can be unconscious. The feeling of certainty has no reliable connection to rational warrant. We make informed guesses as to our motives. Innumerable biases afflict both automatic and deliberative cognitive processes. Perception is supervisory, and easily confounded in many surprising ways. And the list of counter-intuitive findings goes on and on. Cognitive science literally bellows Square One, and how could it not, when it’s tasked to discover everything we neglect, all those facts of ourselves that utterly escape metacognition. Stanislas Dehaene goes so far as to state it as a law: “We constantly overestimate our awareness — even when we are aware of glaring gaps in our awareness” [10]. The sum of what we’re learning is the sum of what we’ve always been, only without knowing as much. Slowly, the blinds on the dark room of our theoretical innocence are being drawn, and so far at least, it looks nothing at all like the room described by traditional theoretical accounts.

As we should expect, given the scant and opportunistic nature of the information our forebears had to go on. To be human is to be perpetually perplexed by what is most intimate — the skeptics have been arguing as much since the birth of philosophy! But since they only had the idiom of philosophy to evidence their case, philosophers found it easy to be skeptical of their skepticism. Cognitive science, however, is building a far more perilous case.

So to round up: Traditional theoretical knowledge of the human simply does not command the kind of consensus we might expect from a genuinely privileged epistemic relationship. It seems to possess some practical efficacy, but no more than what we would expect from a distinct (i.e., heuristic) epistemic relationship. And so far, at least, the science continues to baffle and contradict our most profound metacognitive intuitions.

Is there anything else we can turn to, any feature of traditional theoretical knowledge of the human that doesn’t simply rub our noses in Square One? Some kind of gut feeling, perhaps? An experience at an old New England inn?

You tell me. I can remember what it was like listening to claims like those I’m advancing here. I remember the kind of intellectual incredulity they occasioned, the welling need to disabuse my interlocutor of what was so clearly an instance of “bad philosophy.” Alarmism! Scientism! Greedy reductionism! Incoherent blather! What about quus? I would cry. I often chuckle and shake my head now. Ah, Square One… What fools we were way back when. At least we were happy.

_____

Scott Bakker has written eight novels translated into over a dozen languages, including Neuropath, a dystopic meditation on the cultural impact of cognitive science, and the nihilistic epic fantasy series, The Prince of Nothing. He lives in London, Ontario with his wife and his daughter.

[1] A finding that arises out of the heuristics and biases research program spearheaded by Amos Tversky and Daniel Kahneman. Kahneman’s recent Thinking, Fast and Slow provides a brilliant and engaging overview of that program. I return to Kahneman below.

[2] Psychosemantics, p.vii.

[3] Using intentional concepts does not entail commitment to intentionalism, any more than using capital entails a commitment to capitalism. Tu quoque arguments simply beg the question, assuming the truth of the very intentional assumptions under question in order to argue the incoherence of questioning them. If you define your explanation into the phenomena we’re attempting to explain, then alternative explanations will appear to beg your explanation to the extent the phenomena play some functional role in the process of explanation more generally. Despite the obvious circularity of this tactic, it remains the weapon of choice for a great number of intentional philosophers.

[4] Another lonely traveller on this road is Stephen Turner, who also dares ponder the possibility of a post-intentional future, albeit in a very different manner.

[5] Thinking, Fast and Slow, p. 201.

[6] See Stephen M. Fleming and Raymond J. Dolan, “The neural basis of metacognitive ability.”

[7] See Natalie A. Emmons and Deborah Kelemen, “The Development of Children’s Prelife Reasoning.”

[8] Exaptation.

[9] I urge anyone not familiar with the Adaptive Behaviour and Cognition Research Group to investigate their growing body of work on heuristics.

[10] Consciousness and the Brain, p. 79. For an extended consideration of the implications of the Global Neuronal Workspace Theory of Consciousness regarding this issue see, R. Scott Bakker, “The Missing Half of the Global Neuronal Workspace.”

173 thoughts on “Back to Square One: toward a post-intentional future”

  1. Robin: “I imagine that if most people believed that “aboutness” was an illusion, then we would drop the misleading language. Instead of saying that a parent loves a child we would say that the representations of the child in the brain triggered certain brain states or something of the sort.”

    You’re imagining this only to make the position appear absurd, not to actually understand the position you think you’re critiquing. Me, I prefer not wasting my time on straw-men.

    On my view, the idea that parents and children would stop using a heuristic so powerful as ‘aboutness’ is absurd. Only philosophers, realizing that aboutness is a way to communicate salient relationships to the environment absent any detailed causal information regarding that relation, would cease thinking it something fundamental, something that science ‘presupposes’ in this or that respect. As a result, they would cease being mystified by the fact that a heuristic adapted to circumvent the absence of detailed causal information doesn’t seem to admit causal explanation, and would cease presuming they are talking about something that lies outside, and so is irreducible to, the causal order.

    Something only they can be experts about!


  2. Alexander Schmidt-Lebuhn: “But it surely means that one cannot call such a hypothetical event, as Fodor does, “the greatest intellectual catastrophe in the history of our species”.”

    This is something that comes up quite often on my site, Three Pound Brain. I’m actually inclined to agree, up to a point. But as I mentioned in one of the responses above, you have the ‘creep’ of mechanical idioms into everyday parlance, but also, if my heuristic account of FP is correct, there’s a sense in which the functionality of FP depends on neglect, and thus we should expect all kinds of strange dysfunctions. I linked this above, but it’s worth relinking here: http://rsbakker.wordpress.com/2014/05/20/neuroscience-as-socio-cognitive-pollution/


  3. Disagreeable Me: Love your handle! I feel the tug of what you’re saying, but I think you’re not following the consequences of a heuristic-neglect interpretation all the way through. We humans are pretty clearly disposed to attribute ‘intrinsic efficacy’ to complicated phenomena. Money is a great example: we can answer a good number of practical questions by presuming money is intrinsically efficacious: we need not know anything about economics to know what ‘money can do.’ And yet intrinsic efficacy has no place in our theories of money or economy, outside of a psychological annex. The same will happen to FP, I think.

    This is one place where I part company with Dennett and interpretivists more generally. Heuristic neglect, as I’m posing it, actually allows us to get a sense of the problem-ecologies corresponding to our tools and to avoid pointless ‘mereological fallacy’ type debates. Belief talk will remain in our communicative patois, but there will be no theories of belief that take them as real things to be picked out, as opposed to ways to lock into certain restricted ranges of problems.


  4. Labnut: I love your handle too. You guys do have the best handles, you realize. I agree with your assessment, though my position is actually very different from Rosenberg’s. It’s funny how you can sometimes find these agreements on the shape of the problem from across the aisle. I feel the same way when I read Terrence Deacon, for instance. He has the problem nailed… the solution? Not so much.

    But my Diogenetic quest to find evidence of Square Two continues unrequited. The perceptive comments have been piling up, but so far they have all danced around that tricky issue of evidence!


  5. Scott,

    You cite Fodor’s suggestion that if “commonsense intentional psychology really were to collapse, that would be, beyond comparison, the greatest intellectual catastrophe in the history of our species”, and then go on to relate that, after you realised “the penury of the standard objections”, you “understood that such a semantic doomsday scenario was far from the self-refuting impossibility I had habitually assumed”. It became “a live possibility — and a terrifying one at that. What had been a preposterous piece of scientistic nonsense suddenly became the most important problem I could imagine.”

    One problem I have with this is that, not only do you neglect to explain what it was that changed your mind, but you provide very little in the way of argument for why anyone else should take your doomsday prophecy seriously.

    Suppose we accept, for the sake of argument, that the standard objections to EM (i.e. that it is self-refuting) are as penurious as you suggest. It seems to me that, at most, this suggests that there are no simple a priori reasons for supposing that folk psychology might not someday be replaced by a superior, neuroscientifically informed (or rather, perhaps, neurotechnologically engineered) alternative. But right now, in the absence of any such alternatives, how is this anything more than a highly speculative sci-fi scenario rather than “a live possibility — and a terrifying one at that”?

    There’s much more I could say, but given the strict 500-word limit, for now I will simply ask the following questions:

    (1) What was it, exactly, that convinced you that your “doomsday” scenario was a “live possibility”? What was it that “suddenly” transformed what you had previously dismissed as “a preposterous piece of scientistic nonsense” into “the most important problem I could imagine”? Perhaps if you did a better job of explaining that, you will be better able to rationally motivate others to think likewise?

    As far as I can see, the kind of empirical research you’re alluding to (e.g. the heuristics and biases literature, much of which goes back many decades), while it tells us plenty of fascinating things, doesn’t even begin to suggest the kinds of thoroughgoing eliminativism you seem to want to terrify (or else “rile”) us with.

    (2) Are you interested in this because you want to write dystopian fantasy novels about a coming “post-human apocalypse”, or do you mean to be forwarding new arguments which you think philosophers should sit up and take notice of?

    To be quite frank, on the basis of what you have written here, I think you would be better suited to the former, but I’m willing to be persuaded otherwise should you be able to forward some actual arguments rather than continuing with the cheap rhetorical “shock-and-awe” tactics (which, frankly, you are unlikely to terrify or even rile many philosophers with, not least because these things have been discussed at great length from the late 1950s through the 1980s, when the likes of Feigl, Feyerabend, Rorty, Dennett, Stich, the Churchlands and others first formulated them).


  6. rsbakker:

    “I do think that the intentional entities philosophers discuss are best understood as the products of METAcognitive illusions.”

    So you’re having an illusion about an illusion. Uh, right. What I’m still waiting for is a coherent articulation of this view from nowhere, which apparently also seems to bypass Davidson’s scheme-content problem.

    “So given that humanity is just another facet of nature, why should we think science will do anything but demolish our traditional assumptions?”

    “I think science is doomed to be a fractionate mosaic of techniques and tools.”

    So science will likely explain both the natural and the human and yet be “doomed to be a fractionate mosaic of techniques and tools.” I confess I don’t understand these seemingly contradictory positions.

    Looks to me like a lot of busyness in the pursuit of a chimerical theory of everything. I think I’ll stick with Wittgenstein’s On Certainty.


  7. Upon re-reading Scott’s essay, others’ comments, and his first comment, I may have been too charitable in my first comment. I still like some of his take on what neurological sciences may have to say about human nature in the future, but I think he’s too charitable about what they’ll have to say as positive description, versus what they’ll do to “unsay” current ideas, that is, negative description.

    I also think that he’s wrong if he believes that neurological science is going to do even the negative description work without input from philosophy.

    Aravis: Having exchanged emails and ideas, it’s nice to see you on the video with Massimo!

    And, you’re right on point 3. I quote Scott:

    So we have to rely on cause-neglecting heuristics to navigate our world. Always. Everywhere. Surely these cause-neglecting heuristics are among the explananda of cognitive science.

    That’s a hellaciously absolutist quote.

    Got some evidence to back it up? Erm, no!

    Do you think, per your non-intentionality project, that “you” will, in the next 30 years? Or 50? I doubt it.

    Now, you might find that many heuristics are subconscious rather than conscious, tis true. And?

    Subconscious drives, or subselves, if we want to grab from Dennett’s version of intentionality, can have intentionality just like fully conscious selves.

    I said above that I was too charitable in my first comment. As I re-read again, more slowly, the paragraphs right after your quote above, I’m reminded of putting a quasi-scientistic (didn’t say fully, just quasi) take on consciousness, intentionality and selfhood through the blender of French postmodernism.

    And, otherwise, per Dan, and per my first comment, too, all of your self-referentiality about philosophy of mind applies just as much to science of mind. And, this was where I was charitably wrong in my first comment.

    Thomas Jones otherwise sums up my response to Scott’s first comment. It’s not just scientism, it’s arrogance. In light of the quote I pulled from the body, I don’t know what else to call it. Massimo, I hope it doesn’t trip your filtering!

  8. Scott, It’s kind of you to take the time to address our comments. There is no need to apologize to me. It’s rather disconcerting, however, when an author complains about the quality of comments and describes them as “droll” without entertaining the possibility that he might have played a role in the outcome.

    Some of the early comments, including mine and Bill Skaggs’, expressed hesitance in grasping your central point, which I still contend can be laid at the feet of your language and tone. If you want droll and breezy, perhaps consider the concluding two sentences of your article.

    You are asking a general audience to accept and to address a problem/question you have described, along with a questionable metaphor called Square One. It is not unreasonable IMO for readers to assume that, as Aravis states in part, “Perhaps, he is a bit too committed to a narrative . . . .”

    The more I read your comments, the more I feel you’ve simply misfired in terms of your audience. If, in fact, your aim is, as stated in this comment — “I’m saying that the mystery of intentional idioms is not a mystery that philosophical appropriations of intentional idioms can ever hope to explain” — then perhaps it might be best to rewrite the article and submit it to a venue where intentionalist philosophic accounts might take your scenario seriously. Or, as Abe Cochrane states, where trained “philosophers should sit up and take notice . . . .”

  9. All, just a general reminder of watching the tone of your comments. Strong and vigorous criticism is most welcome, anything beyond that is not. This, naturally, goes for authors as well.

  10. “Please spell out the self-defeating nature of this claim in a manner that doesn’t simply beg the question. I’ve yet to encounter a tu quoque argument that doesn’t. Otherwise, I fear you’re arguing against some view other than my own.”

    You’re articulating theoretical content for a theory that does away with content. If you’re claiming that you’re not doing this, then frankly I have no idea what you *are* doing, and more to the point, why anyone ought to be moved by it. If you’ve found a way to (theoretically) disprove the very idea of theoretical content, then you’ve just splattered a bunch of strange marks on an electronic screen, game over.

    (Also, given your remark about question-begging, I suspect there is more that needs to be done to cash out your implicit commitments, for example regarding standards of evidence, as it is not clear why one *should* assume intentionality is false in order to avoid begging the question against this bizarre and unclear view.)

    “1) I actually think the first-person can be explained away but that’s part of my larger account (check out: http://rsbakker.wordpress.com/2013/12/22/cognition-obscura-reprise/ ). But I don’t see how this issue is relevant to the question of whether philosophy has made any theoretical progress on the issue of the first person.”

    No commitment to *progress* is required on this end. If science hasn’t explained away intentionality (and thereby folk psychological talk), then what’s the need for progress if intentional talk is empirically adequate *and* necessary for us to get on with existence? Moreover, you yourself cannot even articulate this point without drawing on intentional language. And if I’ve understood you above and you’re trying to say that you *are not expressing any claims*, then I have no idea (a) what you are up to and (b) why anyone would care given that folk psychology is apparently quite well-adapted for human life.

    “2) Who assumed anything about proposition knowledge as the ‘relevant epistemic goal’ vis a vis the human?”

    The entire discussion in your article is premised on the notion of cognitive and/or theoretical content to claims about the human mind. In fact, you nowhere seem to consider that the reduction of epistemology to Fregean-style propositional claims with content, whether these are linguistic utterances or scientific theories, need not be the salient matter when talking about human life. If this is not your stance, then you need to clarify what you are talking about, and why/how non-propositional states, which need not rely on any notion of *content* as such, still fall victim to it.

  11. 1. Abe Cochrane — you are my hero for the day. I hope the author will take seriously your remarks in the last paragraph, for his own sake, as well as for the sake of the discussion.

    2. Part of the problem, here, is that it is difficult to determine what the author’s position actually is.
    Depending on his mood, the author would appear to vacillate between Eliminative Materialism, a la the Churchlands (but with a doomy, rather than an upbeat, affect), and Instrumentalism a la Dennett. Alas, he has the infuriating habit of denying these sorts of affiliations, even as he is expressing them. For example, in a recent reply, the author says “here is where I part company with Dennett,” but then goes on to say something that is little more than a paraphrase of a classically Dennett-ian position: “Belief talk will remain in our communicative patois, but there will be no theories of belief that takes them as real things to be picked out, as opposed to ways to lock into certain restricted ranges of problems.” Part of the difficulty in communication, on the author’s part, may be a ponderous, wordy writing style. After all, word salad like the following is hardly helpful to the cause of clarity: “Heuristic neglect, as I’m posing it, actually allows us to get a sense of the problem-ecologies corresponding to our tools and to avoid pointless ‘mereological fallacy’ type debates.”

    3. The author seems to think – hence all the talk of “heuristics” – that while intentional explanations are somewhat useful, the ontologies they entail obviously do not “really” exist, and the causal efficacy they describe is illusory. He also thinks, obviously, that they eventually will be replaced by the explanations of lower-level, non-intentional sciences. All of this lands him in a very familiar, well-traveled constellation of theories, all of which have been articulated on S.S. on any number of occasions and for which ample responses have been given. It is difficult to bring myself to articulate them *yet again*, but for the sake of those who are new to S.S. it seems worthwhile to at least bullet a few of the main ones:

    –The level of description at which explanations are best pitched depends upon what one is seeking the explanation *for*. Intentional – and more broadly, Folk Psychological — explanations are typically employed by ordinary people, for the purpose of navigating the social dimensions of their lives, and it really is unclear that any lower-level scientific explanation would ever be equally (and certainly not more) relevant, for *that* purpose. If I want to know why consumer confidence fell in the last quarter, a quantum mechanical explanation is not going to tell me anything useful, despite the fact that both the consumers and everything they consume are ultimately made of atomic and subatomic particles.

    –Questions as to what *really* (stamps foot) exists typically involve a misunderstanding of the nature of ontological commitment and the difference between what Carnap called “internal” and “external” questions. Relative to economies, currencies exist. Relative to socially embedded, socially acting persons, intentional states exist. What *really* exists, however, is a question that is supposed to apply outside of conceptual frameworks, but that is precisely where such questions are *misplaced*. Think Wittgensteinian language games and their various “toolboxes.”

    –Ditto for what counts as causal explanations. These are also framework-relative in the sense described above.

  12. Hi Scott, looks like your essay has provoked quite a reaction. Having feet in both camps (neuroscience and philosophy) I guess I don’t stand to lose either way this thing is resolved 🙂

    That said, perhaps I am missing something in your argument, because I’m not feeling the angst.

    “And so far, at least, the science continues to baffle and contradict our most profound metacognitive intuitions.”

    I’m not sure that this is really the case, is it? On a personal level it is true that we feel a level of knowledge about ourselves and conscious control over our own actions which scientific studies suggest is not accurate.

    But that has been long acknowledged (or so I thought) by traditional philosophy, as well as the softer sciences (psychology). In fact, as long as we are not considering feelings about ourselves, most people are usually quick to point out how flawed other people’s knowledge is about themselves, as well as control over their own actions. Criminals have understood such weaknesses (in others) and learned to exploit them quite well.

    Many religions have that sort of idea worked into them somewhere too.

    While I find some work in cognitive neuroscience fascinating (especially how the mind constructs the world for itself) and funny (when one sees how flawed the representation can be) I never found conclusions coming out of it particularly mind-shattering or demoralizing.

    It certainly doesn’t do away with conceptions of humans based on intentionality. I’m not sure why the philosophical or “traditional” model has to be 100% accurate to be in some way meaningfully accurate. I realize this is where you might pull out the “practical” card. My first reaction would be, well, there really isn’t anything wrong with practical. Desiring something more than that is perhaps unrealistic in the universe we are getting to know. But my second would be that meaning is not merely about practicality; it is a level of intelligibility that is different from scientific knowledge of its inner workings.

    You can create a rather substantive description of feelings, experiences, etc., and how they tend to interconnect for the individual, at the level of the individual. Even if this is a “seeming” which does not match up 100% with reality, so what? Does it not work as an accurate expression, or idealization, of our experience?

    Science can fill in the details regarding mechanisms of how the brain creates the world for itself, including all the hidden mechanisms, but that doesn’t do much in giving you anything to work with in choosing what to do next. The hidden mechanisms (as you point out) will remain hidden to your mind. Eventually (at best) it may reach the point of showing how all the processes of the brain can take a certain environment and make an individual experience the world… as that person could have already told you on their own.

    What then? How would that get us any further past “square one”?

  13. Thanks, Labnut, for ferreting out the clearer statement from Scott, to wit:

    “Now that science is overcoming the neural complexities that have for so long made an intentional citadel out of the soul, it will continue doing what it has always done, which is offer sometimes simple, sometimes sophisticated, mechanical explanations of what it finds, and so effectively ‘disenchanting’ the brain the way it has the world…there are no ‘meaning makers,’ objective or subjective. According to it, you are every bit as mythological as the God you would worship or honour…
    We are just beginning, as a culture, to awaken to the fact that we are machines.”

    And what is a “machine”, pray tell? And what is “mechanical”?

    The old notion of “machine” is still Neolithically valid. And “mechanical” is also valid, in the Middle Age sense.

    The Ancient Greeks did not think lowly of “mechanical”. The Greek mekhanikos meant “full of resources, inventive, ingenious”. And, indeed, now that we have Quantum Physics, micro machines are going to look just that way: “full of resources, inventive, ingenious”.

    The chlorophyll molecule is as small, and as important, a machine as it gets. It depends upon violating Middle Age mechanics. How? Electrons feel out every path, and find the lowest-energy solution.

    Those who fear that we are “machines” have the pretension of explaining the soul with neurons, glia, neural tendrils, myelin, etc.

    However, as the case of chlorophyll shows, one has to contemplate the level of understanding just below, mesophysics, and thus, the Quantum. At that point the Middle Age notions of “machines” and “mechanics”, as found in today’s civilization, blow up.

    Today’s civilization basically uses NO Quantum effect directly (it uses effects explained by the Quantum, which is different; a Josephson Junction comes close; but the Quantum cannot yet be manipulated as such, which is what Quantum Computer research is trying to do).

    More is different. A simple explanation can cause disenchantment. However, Quantum explanations are not like that. They are entangled, non-local. They are also immensely complex: instead of computing with finite sets of numbers, qubits compute with infinities (and phases are what counts).

    There is no doubt that this is the level at which the “soul” (whatever that is) sits. The behavior of just one Quantum process is so baffling that some famous physicists have suggested, long ago, that it was conscious, in some sense.

    In any case, should Quantum processes be, in some sense, intentional, everybody agrees that this is hidden from observation (that’s the whole idea of Quantum cryptography! Quantum cryptography security networks already exist).

    Thus we can already say this: the soul is Quantum, and, as no one knows how the Quantum works, and it’s hidden from observation, so is the soul.
    This, by the way, solves the riddle of the Abrahamic god’s alleged omnipotence: Quantum physics says god has to stop at one’s soul door.

    Quantum Physics guarantees our free will. That is, free from everything but what makes the Quantum tick.

  14. Hi Scott,

    Robin: So you think philosophy has managed to nail down the theoretical truth of the human?

    As I have not said, nor implied, anything remotely of the sort, I am not sure why you should say so. What part of my claim that we are perpetually perplexed suggests to you that I think we have anything at all nailed down? But the point is, neither do you.

    As for the question of whether cognitive science is counter-intuitive I’d humbly suggest that you’re indulging in a little hindsight bias

    Not at all; it would only be hindsight bias if these were things we didn’t realise until after we had heard a scientist say them. These are things we knew all along.

    I hit the folk with little cogski facts all the time (yes, I’m like Sheldon that way) and they are regularly surprised. The fact that it kind of makes sense afterwards is exactly what we should expect, given that they reveal what was going on all along.

    I will have to take your word for it, but if it makes sense to them afterwards then how can you say that it “…continues to baffle and contradict our most profound metacognitive intuitions”?

    If it baffled and contradicted our most profound metacognitive intuitions it wouldn’t make sense even in hindsight.

    All you are saying is that you have encountered people who are initially surprised by some of these facts, but see the sense of them afterwards.

    That hardly bears out your thesis.

    Otherwise, you manage to entirely overlook my argument vis a vis folk psychology.

    As I said before, I am not really sure what your argument is. You seem to be denying the efficacy of the very language that you are using to state the argument; how is that not self-defeating?

    Certainly it may be the case in the future that new knowledge about the brain will change the way we think about ourselves. What that knowledge will be and how it will change our view of ourselves is something that I don’t even think we can guess at.

    It may be something dismal that we learn, or it may be something exciting, or it may be neither. We can’t really say.

  15. Hi Scott,

    You’re imagining only to make apparently absurd, not to actually understand the position you think you’re critiquing. Me, I prefer not wasting my time on straw-men.

    You are wasting your time on a straw man right now if you are claiming that I am only trying to make it apparently absurd rather than trying to understand it.

    It was simply an answer to your question “What would a post-intentional future look like? What could it look like?”. I think that it is a pretty straightforward answer. If you were serious about the question then why shy away from the obvious answer?

    If this is not the kind of thing you have in mind then I am at a loss to what you mean by “post-intentional”. If people simply go on thinking that brain states are about things, then how exactly will this state of affairs be post-intentional? Can you give an example of the kind of thing you have in mind?

    The feeling of love is not a representation of the child. So, looked at as a purely physical process, in what way is it “about” the child? If aboutness is a misleading intuition then how will it serve in any way as a “heuristic”?

    We might reasonably say that the parent’s mental representation was “about” the child, in the sense of representation. But the brain state triggered by that representation does not pick out any properties of the physical child.

    If people continue to regard things such as love, compassion, pity and empathy as being “about” things rather than simply triggered by things then the description “post-intentional” clearly does not apply.

    The example I usually give is to suppose some bacteria in your gut released a chemical which triggered a feeling of hunger, which led to more ingestion of food which benefitted the bacteria. If you asked most parents whether their feelings were “about” their child in exactly the same way as the feeling of hunger is “about” the bacteria then they would say “of course not”.

    If we lose the idea of intentionality then the distinction is lost. And I think that the conclusion I draw at the end is entirely reasonable. People make the enormous investment in having and raising children because they think that there is something special and important about that relationship.

    If people start thinking of it in terms of this as a physical process which triggers other physical processes then I doubt that anyone, knowing in advance the sort of investment that raising a child involves, would consider it worth it. As I said there are easier, cheaper brain states which do not involve those commitments and generally speaking they will likely lead to more well being for the person involved.

  16. Scott,
    As an observer of philosophy, my impression is that the analytic camp is more straightforward, yet consequently mechanistic and linear, while the continental camp loves to beat around the bush, but is more organic and reactive.
    So, in a very simplified analysis, you posit a fairly straightforward reductionism while attempting to keep circling back to corral all the cats. Yet it is those thermodynamic feedback loops which essentially keep you chasing your tail and disprove the logic that some end will ever be reached, given that any bottom-line conclusion only collects the material for further actions. The end of the line is simply when all energy has been radiated away to other enterprises.

  17. Right away, reading Scott’s link to his review of Dehaene’s book, another light bulb came on, and directly related to my criticism of Bill Skaggs’ piece.

    Just as Bill, in my book, made a category mistake, I think you, Scott, have made a “levels mistake.” Arguably, that’s a subclass of category mistakes, anyway.

    (Sorry, folks, that I haven’t gotten up to better speed quicker. I’m a newspaper editor, and of course, Tuesday was election day here in the US.)

    The implication that, per a quote from the book, “ignited assemblies of neurons literally make up your mind” somehow eliminates the “self,” “consciousness,” etc.? Tosh. No more than describing electrons in certain orbital probability clouds around an atom with 79 protons eliminates “gold.”

    You may not call that eliminative materialism. But it is.

    One can question folk psychology, or even show certain aspects of it wrong, without engaging in eliminative materialism, or anything similar, whether under old names or new flags of convenience.

    Claiming otherwise is a false dilemma. And given your own notes that current neuroscientific research isn’t that far along (although you don’t expressly say what I have before, that it’s in the Early Bronze Age), you’re not in a good position to offer such a dilemma when the scientism horn of the two you show us is little more than a null set.

    Otherwise, I’ve done a quick grokking of a couple of other blog posts of yours. A theory like mine (that much of what we think is consciousness is actually subconscious, and that much of what we think is consciously done by free will is either not a fully free decision or not a fully conscious one) is not connected to your theories at all.

    That said, Dennett is far from the only person to discuss ideas of subselves (as I grok a piece by you about him, in part). And, subselves need not have boundaries; in fact, my thought is that the fluidity of subselves militates against boundaries.

    And having read Norretranders, another person who influenced my essay here about free will “versus” psychological determinism, I can say that his idea of a user illusion doesn’t necessarily lead to your conclusion, either.

    Aravis: Having perused Bakker’s site, including a post specifically about Dennett as well as what he links here with Footnote 10, I’d say that’s just about right: half Churchland, half Dennett.

    Abe Cochrane: I’ve visited Scott’s Amazon page, and read Amazon’s précis about a couple of his books. Your Point No. 2:

    Are you interested in this because you want to write dystopian fantasy novels about a coming “post-human apocalypse” …

    Sounds just about right.

    To bring this back to Scott?

    To riff on Shakespeare: “There are more things in philosophy about the mind, Bakker, than are dreamt of in your scientism.”

    Respectfully put. But, philosophy probably doesn’t sell dystopian fantasy novels as much as scientism. Well, it can. Hermann Hesse, anybody? Camus?

  18. Scott,
    “The same could be said of any of the world’s great religions, could it not?”
    No, because I wasn’t discussing any religious aspect of Buddhism (like the metaphysics of re-incarnation, which, as a secularist, I do not accept), but rather a philosophical psychology constructed by Indian and Tibetan thinkers who passed it along in the Buddhist tradition: a philosophical psychology that is rigorously materialistic, wherein what I understand you to mean by “intentionality” is generated by the body with consciousness following after, rather than imposed on the body by the mind – which notion shares aspects with certain Western theories, including Pragmatism (hence the two philosophies I identify with, which I rarely find in conflict – although I won’t say there isn’t any). The philosophies of India and Tibet are ancient and we have learned more since, no doubt; the 19th Centuries of America and Europe are not so old, but we’ve still learned more since, no doubt. But some foundations investigated in these philosophies remain viable as modeling structures.

    What I find interesting in older philosophies are those ideas with real tenacity. They remind us that the course of human reason is not to find dead-ends in dogma, but to indicate the kinds of questions every generation of new thinkers may ask themselves, not only to learn more, but to incorporate their knowledge into understanding.

    (In writing the above, I should remark that I wonder if you take too ‘top-down’ an approach to intentionality yourself; that is, isn’t much of what you write here based on an assumption that the brain is the locus of intentionality? But what if Chandrakirti and William James are right, what if it begins in the body? Doesn’t that generate some interesting issues?)

    On your remark: “The fact that intentional idioms work is what we are all trying to explain.” – I may be missing something here, but actually there are a number of fairly strong explanations of intentional idioms, in grammar, logic, psychology, narratology, semiotics, sociology, anthropology, even rhetoric. What you seem to want to say is, ‘if we understand correctly why it (apparently) works, we’ll see that it is blinkered (thus misguided).’ If so, you need to build a stronger case than you have here.

    Contra your remark, “Science doesn’t give a damn.” – science is not an agent, nor does it have agency; it can neither dispense, nor dispense with, any ‘damns.’ Science is an activity practiced by humans seeking to learn something about the world, presumably for human purposes. Therefore, pragmatic demands can be made of it, or of any philosophy including its accumulated knowledge, or developing out of it. Furthermore, accommodating such demands is a necessary rhetorical strategy for persuading others to take seriously science, its knowledge, its influenced philosophy. Simple assertion of any claim – however true – is rarely convincing on its own; it requires strong reasoning, without which its success depends on persuasive rhetoric; but for maximal effect some combination of the two is best.

  19. Scott,

    You could say the scientific overthrow of our traditional theoretical understanding of ourselves amounts to a kind of doomsday, the extinction of the humanity we have historically taken ourselves to be. Billions of “selves,” if not people, would die — at least for the purposes of theoretical knowledge!

    I understood that such a semantic doomsday scenario was far from the self-refuting impossibility I had habitually assumed [3]. Fodor’s ‘greatest intellectual catastrophe’ was a live possibility — and a terrifying one at that.

    What I miss in your essay is a description of this terrifying scenario. Assuming, for a moment, that science does reveal all that you say it will,
    – how will our conceptions of ourselves change?
    – how will this be so damaging?
    – why will it be terrifying?

    I imagine that you have explored these issues in your novels (which I have not read, mea culpa), but it would be helpful if you gave a précis of the consequences you foresee, and why they will ensue. I am an optimist (for theistic reasons) but I would like to understand your pessimism.

    Thanks to WYSIATI, the dark room of self-understanding cannot but seem perpetually bright

    That is a powerful insight. It is also known as anosognosia (you are too stupid to know how stupid you are). See this entertaining description from the NY Times – http://nyti.ms/1pWMuNt

    From the tone of the comments you have rattled quite a few cages. Well done. Too many people are too comfortable in the accumulated waste that has settled in the bottom of their cages. I prescribe a course in science fiction reading. It just might restore that marvellous faculty called the imagination.

  20. labnut,

    This is perhaps only tangential to the original topic, but your remark about atheism leading to the conclusion that we are “mythical” and “machines”, as expressed by the author you cite, is something that I have always found puzzling when I heard it before.

    It all probably depends on what is meant with those terms, and whether they are seen to be negative. Mythical is particularly odd because to my understanding, it can only mean something along the lines of being either non-existent or having different properties than is commonly believed. The second meaning would be quite acceptable and indeed trivial in this context, but I guess an atheist using gods as an example for comparison must mean the first.

    So, I am non-existent? We all are? That is indeed an interesting claim. Here, I can touch my own body, and humans can touch, see and hear each other. We do, indeed, exist. So the claim that science has shown “us” to be mythical only makes sense when “us” is supposed to be immaterial souls or something like that. But a monist – of which there are even religious ones – only ever believed that “us” is our bodies, and so the claim that science has shown us to be mythical is gibberish to a monist.

    Second, machines. What is a machine? I guess what is meant here is: a system following a determinate system of rules, with maybe a bit of random thrown in. But the point is, under that definition humans are always machines, because libertarian / dualist free will has never made sense. There are precisely two options, rule-shaped behaviour and randomness (with the added irony that stochasticity is just another form of rule once you average over enough instances). That exhausts all possibilities.

    So the believer who is happy in the belief that we are not tick-tock machines because we have a soul just fails to ask the follow-up question: how does the soul work? Because there are only two options, it too is either going tick-tock according to some rules, throwing dice, or a combination of the two. And of course it would have to be mostly the rules, because it would have no meaning to be a good soul if such a soul would just randomly axe-murder its own children given the wrong die cast.

    We are either evolutionary machines or machines created by a deity, but nothing is gained; the theist position is in this context precisely as dismal as the atheist one.

    In my eyes, however, although the word is sometimes used as a metaphor for biological systems, it seems more precise to say that a machine is something with moving or cogitating parts that was built (e.g. by humans) for a purpose. We have purposes we want fulfilled, but nature, in evolving beings like us, has none; so to this atheist we aren’t machines. We would totally be machines under that definition if we had been purpose-built by a deity, though…

  21. Scott,

    So given that humanity is just another facet of nature, why should we think science will do anything but demolish our traditional assumptions?

    Embedded in your essay is the belief that progress in science will inevitably explain everything. That confidence is based on the great success of science to date.

    But is this confidence warranted? As someone pithily put it, ‘the future is not what it used to be’.

    Is it possible that we have reached the cusp of scientific progress and are beginning to touch the limits of what science can reveal? That hinges on another question. Does science have limits? Are there boundaries to what we can know? I think we are beginning to see the answer is yes.

    1. We are locked in our universe and can never see outside its boundaries or beyond its beginning. This is the inescapable conclusion of modern physics. We can speculate about multiverses and cyclic universes. But they are unobservable and unverifiable. Science has seemingly come up against a brick wall. All that is left in this instance is creative mathematical speculation and we have no way of verifying this speculation.

    2. The laws of nature inexplicably regulate the universe with astonishing mathematical precision but we have no idea why that should be so. This has to be the most profound mystery of all and we have no way, even in principle, of investigating this. How does science explain the laws of science? Even the question does not make sense.

    3. It may be that we have cognitive limits that forever limit our understanding. This possibility is undeniable. For example my dogs can never understand my love of Mozart’s horn concertos. Similarly our own cognitive apparatus could also be fundamentally limited. The differing interpretations of quantum mechanics might be evidence of this. In other words, we may be suffering from anosognosia (http://nyti.ms/1pWMuNt) and, like anosognosics, are incapable of perceiving our limitations.

    4. We are locked in Square One, our own internal conscious awareness. This first person, internal state may be beyond the reach of third party science and thus consciousness may be unexplainable. This is an open question and more science is needed but the outlook is not promising. One approach to this problem is to deny its existence and your writings look suspiciously like a denial.

    5. We exercise free will in a strictly deterministic world. This defies all that science has revealed. The de rigueur explanation here is denial or compatibilism. Kant pithily disposed of compatibilism as ‘that wretched subterfuge’ and ‘petty word-jugglery’. I exercise free will. No amount of denialism changes that fact, and cleverly re-labelling it as compatibilism is just intellectual sleight of hand. It may be that the explanation is, like the putative multiverse, beyond the reach of science. More science is needed.

    For all these reasons I question your confidence that science will explain away the self and reduce us to machines. We are beginning to see that science has limits.

  22. This has been a very interesting discussion, and incidentally this essay has gotten more views than anything else at SciSal over the past 2-3 weeks.

    I hope it is clear to my readers that just because I publish an essay it does not at all mean that I endorse the author’s argument or point of view. Indeed, I make a point of publishing a variety of perspectives in this forum.

    This is one such case. While I actually enjoyed Scott’s prose, I found his argument deficient in two fundamental ways (as a number of others have already remarked, so I’m not breaking new ground here, just staking my own position).

    First, there is essentially no positive argument. Even if the author’s “negative” thesis (that we are fundamentally mistaken about intentionality and other aspects of our self-perception of our minds) is correct, I still have at best a glimmer of this supposedly bold post-intentional future. I suspect a number of commenters have it right: that future will look very much like the present, because human beings cannot help but talk and think in terms of intentions (and the same goes for “free will,” consciousness, and a number of other alleged illusions).

    Second, and more crucially, it seems to me (and to others) that Scott’s thesis can be interpreted in two different ways, one that is reasonable but not at all groundbreaking, the other that is more interesting, but likely misguided. The reasonable but uninteresting version amounts to saying that science has shown, and likely (but not certainly, and not universally) will keep showing us, that we have some very partial, and sometimes even seriously warped, views of reality, including the reality of our own mental phenomena. Join the crowd, at least ever since Hume, and likely well before that.

    The daring but, I think, wrong, version is that we are radically wrong about pretty much everything we think and perceive about the world at large, and especially our mental processes. To begin with, again as others have pointed out, there is an issue of proper, or most informative, levels of description. Sure, at the neurological level of description there are no intentions, there is no consciousness. But by the same token, at the level of molecular descriptions of our lungs, say, there is no breathing, and at the quantum level description of our circulatory systems there is no blood flow. What that tells me is not that there *really* is no such thing as breathing or blood circulation, but rather that those levels of description — while certainly contributing to our general ontological picture — are *irrelevant* to a proper understanding of the phenomena at hand (be they intentionality, consciousness, breathing or blood circulation).

    Something like this idea was nicely expressed even by such scientism-friendly authors as James Ladyman and Don Ross, in their oft-cited (by me) Every Thing Must Go. Specifically, they take the “it is *really* only quarks” crowd to task by impugning the infamously recurring example of mundane tables. It is often heard that “tables” are illusions, that physics tells us that what we perceive as solid objects are *really* just a bunch of atomic orbitals interacting by way of the electromagnetic forces, plus the Pauli principle that precludes them from occupying each other’s space.

    But, say Ladyman and Ross, this misses the point entirely. Yes, of course tables are made, at bottom, of atoms (or quarks, or strings). But at our own level of analysis they *really* are solid, and they *really* are made of wood, or marble, or glass. These materials, and the solidity they provide to the object, are not at all illusions; they are simply the most informative levels of description of a particular pattern of reality when it comes to meso-objects perceived by agents such as human beings.

    The same, I maintain, goes for intentionality, reference, consciousness, and all the other mental phenomena that Scott would love to eliminate to bring us into his bold post-intentional future. There isn’t going to be any such thing. There is going to be a perennial dance between the scientific and the manifest images of reality, because that dance is what makes it possible for human beings to understand, and navigate, the world.

  23. I think the trouble here has been that Scott has not said anything that might rationally motivate anyone to believe that his “doomsday” scenario is a “live possibility” (or perhaps even a theoretical one). And indeed, the evidence to which he alludes by way of (presumably) trying to persuade us on this score is psychological research which is firmly committed to explaining people’s behaviour in terms of the framework of intentional psychology. Thus he cites the cognitive biases and heuristics literature which suggests that we typically form our beliefs on the basis of extremely limited information, employing “quick and dirty” cognitive shortcuts to jump to conclusions which fail to take into account much of the relevant information. We ignore statistical base rates in favour of stereotypes and are subject to all manner of cognitive biases which lead to poor probabilistic reasoning, especially when it comes to dealing with highly complex situations involving many variables or large numbers. Moreover, not only are we ignorant about much of the information that we would need to arrive at accurate beliefs and predictions, but there is a pervasive illusion of sufficiency, such that we tend to suppose that the information we have consciously available to us is all that we need, and so on.

    So yes, there has been lots of research into these things, and much of it is fascinating. Books such as Stuart Sutherland’s Irrationality made this available to the general public decades ago, and in recent years it has become a boom industry in trade paperbacks (just think of recent books by e.g. Taleb, Kahneman, Fine, Kurzban, Tavris & Aronson, Rosenzweig, Ariely, Burton, McRaney, Dobelli, Watts and dozens of others). And yes, such research tends to confirm what many philosophers and psychologists have been arguing for many decades, i.e. that our sense that we have special, privileged, introspective access to the wellsprings of our actions and beliefs, that we know ourselves in a completely different way, and to a far better degree, than anyone else possibly could, and so on, is largely illusory.

    So far so good. But again, my question to Scott is:

    How does any of this pose any kind of threat to the everyday practices of explaining people’s actions in terms of intentions, beliefs, desires, habits, tastes, preferences, motivations, goals, hopes, and so on?

    At best (or worst, if you like), it seems to me, this literature suggests that we tend to be “cognitive misers”, basing our beliefs on inadequate evidence, mistaking feelings of certainty for rational warrant, taking ourselves to be more rational than we actually are, and so on. But not only does none of this threaten the conceptual framework of intentional psychology, but it very obviously presupposes and works entirely from within it. After all, by what rational standards do we conclude that we tend to be poor reasoners and cognitive misers if not those embedded in the normative framework which Scott would have us believe is on the verge of imminent collapse?

  24. Hi Massimo,

    First, I would agree with most of your comment, so the following are minor differences rather than fundamental disagreement:

    … but rather that those levels of description — while certainly contributing to our general ontological picture — are *irrelevant* to a proper understanding of the phenomena …

    I’d have used the word “complementary” for the different levels rather than “irrelevant”.

    There is going to be a perennial dance between the scientific and the manifest images of reality…

    I wouldn’t agree that only the lower levels of description are the “scientific” ones. Science readily deals with all levels. As just one example, “species” is a pretty high level and abstract concept (compared to, say, the molecules out of which species are composed), and yet is entirely scientific. Ditto for “intentionality” and other brain states.

    As I see it, scientism is not about eliminating the higher-level concepts (for all the reasons various people have pointed out, that is a non-starter). Rather, scientism is the doctrine that the different levels mesh seamlessly, and that a proper and full account could run up and down the levels effortlessly. That would contrast with a view that compartmentalises the different domains or the different levels.

    And while I’m on:

    Hi labnut,

    We exercise free will in a strictly deterministic world. This defies all that science has revealed.

    If I were to program my computer to print out the claim that it was exercising free will, should we accept the claim?

  25. It’s very interesting what stevenjohnson wrote:

    “if the essential person is the conscious, self-aware reasoner, then the dreaming self is an impostor who invades us nightly, molesting us in our sleep. If we can bear this affront now, then we surely bear the ignominy of not being what we imagine ourselves to be. And, after all, we can see clearly enough that other people are not always the masters of themselves they imagine”.

    Perhaps sleeping should be free of noise and impostors; it may be contradictory to maintain alertness while sleeping, and in this sense the habitual chitchat would be better set aside while we sleep. In this perspective it seems that the gossiping self doesn’t want to face silence because silence is perceived as ominous, or at least uncomfortable. Out of this silence the sleeping self would feel like a nutshell floating on the sea and would progressively lose its status as self/me/ego, turning into a naked non-self, something insignificant and evanescent.

    If the nightly silence is capable of cancelling the self and bringing it into silence, the habitual chitchat would be a defense mechanism, an impostor, as stevenjohnson points out. It would be worthwhile to wonder how and why such a disturbing program has been installed in the self.

  26. I would like to offer a blanket apology for my tone. I really do mean no offense! And I really am the product of a different internet culture. I’ve been accused of being an ass too many times not to think I probably am one. And pragmatically speaking, tits rarely recognize tats, so from here on in I’ll stick to replying to what I see as ‘serious’ criticisms.

    I’m entirely willing to engage criticisms I don’t think are serious on Three Pound Brain (because you never know!), the URL for which can be reached by clicking my handle.

    Criticisms that are not serious include: 1) tu quoque arguments, because no, I don’t have to accept your interpretations of intentional idioms to use intentional idioms because I have my own interpretation of intentional idioms; 2) tone/style arguments, because there’s no argument that can make spinach taste good; 3) blanket dismissals that begin by asserting my position is a warmed over version of anybody smarter than me, because even if I seem to be saying something similar, I assure you, the motivating gestalt is quite new (and frankly, could use a good thrashing); 4) dismissals that begin by attributing something ‘obviously absurd’ to my position.

    What I would like to respond to are, first and foremost, answers to the question posed in the piece: What evidences the claim that intentional philosophy has made it past Square One? The absence of such responses is… suspicious?

    I want nothing more than to clear away all the procedural smoke and genuinely talk about what evidences intentional theories of intentionality.

  27. Name’s Mike, just so no one posits I’m going for anonymous. Broken wrist, so I’ll keep this short, but a note:

    In terms of general responses to the essay as per my readings here, people seem to think that the onus is on Scott to summarize the aforementioned ‘future.’ Seeing as his speculative maximum is already cranked to a contextual 11 in this piece, I’d say it might be fairly irresponsible to his argument in this space to offer exact and detailed scenarios (he does offer a place in Encyclopaedia Ex Nihilo, which is on his blog).

    In specific, because I think Abe has most succinctly put the collective-at-issue:

    How does any of this pose any kind of threat to the everyday practices of explaining people’s actions in terms of intentions, beliefs, desires, habits, tastes, preferences, motivations, goals, hopes, and so on?

    Recognition.

    You need only look to this first wave of Brain-Computer Interfaces or early definitional “nootropics” (as per anti-psychotics or antidepressants changing the habitual cognitions afflicted persons can even have in their “cognitive tool-kits”). In terms of Scott’s use of neglect and the Kahneman metaphor, it seems suggested that we cannot theorize what consciousness might feel like to the human-that-alters-its-self? Or the point at which they and their salient cognitive ecology and sociocultural affairs meaningfully break with the unaltered.

    As per the essay, though, I think the contextual point is that theoretical accounts of our knowledge, up to and including science – itself seeming just the most versatile and self-consistent tool in our box at the moment – are going to run the same gamut of naturalization that the world did, which we exploited to our capacity in ways the natural world certainly never intended, as we naturalize our human place in it. So how can we be so certain about the current value of our per-discipline theoretical accounts, given Scott’s scenario that we have previously been shown to rationalize explananda that are “perfectly and obviously” consistent with the world as we cognize it but completely at odds with the world as it is?

  28. Abe: “Where’s the doom?” is a question I’ve been asked many times. The first problem has to do with the simple idea of consigning the sum of human speculation on the human to superstition. I think this is a stupendous tragedy in and of itself, but probably inevitable.

    The second problem as I see it, can be summed up with this quote:

    “Guilt. It’s this mechanism we use to control people. It’s an illusion. It’s a kind of social control mechanism—and it’s very unhealthy. It does terrible things to our bodies. And there are much better ways to control our behavior than that rather extraordinary use of guilt.” Ted Bundy (as quoted in Robert Hare’s Without Conscience, p 41)

    Once you acknowledge that all the spooky constraints posited by intentionalism come down to the meat, the universality of those constraints depends on the universality of the meat. You’re right to note the SF element, but I’m not sure this constitutes grounds to dismiss the worry. The whole of what makes humans human is about to shift into the purview of technological manipulation. What do you think our system is going to make of us? Something ‘better’? Since there’s no consensus-commanding way of defining ‘better,’ this means everyone will be free to pursue their own ‘better way.’ Given the misery it causes, I’m pretty sure guilt will be an early casualty.

    I apologize, but find it hard to take the rest of your comment seriously.

  29. I noticed over at the author’s site that the readers there were unimpressed with the response the author has received here. The criticisms of our responses fall into a few basic categories:

    1. We’re just criticizing the author’s “style” rather than the substance of the argument. The speculation is that we would prefer boring and dry prose.

    I don’t think the writing style is the problem. It’s colorful and dramatic, and maybe philosophy needs more of that. But one can express ideas clearly using a colorful style, and I don’t think the author has succeeded in doing that. Examining ideas requires rigor and precision because, as the author seems to understand, we’re prone to all sorts of cognitive mistakes and limitations. So I’d ask the author: Do you feel like your ideas have been expressed with rigor and precision? Do you think it’s necessary to be precise and rigorous?

    2. We’re neglecting “metacognitive neglect”. The idea of neglect is apparently central to the author’s arguments, and to what he calls the “Blind Brain Theory”, and we’re not getting it at all.

    Could be. I’d ask the author: can you make your idea of metacognitive neglect more clear to us? I was taking it to be a sort of blindness to certain kinds of causal realities due to our brains not being structured to be able to see or interpret them, but I get the feeling this isn’t right, because the word “neglect” implies (ironically?) an intentional ignoring of something rather than an inability to perceive it.

    3. We’re close-minded academics who can’t see outside our preconceived ideas of the topic.

    Possibly. I’d point out, though, that most of the commenters here seem not to be philosophy professors. Aravis and SciSal are the only ones I know of that comment regularly here who are philosophy professionals.

    4. We criticize ideas that we don’t understand completely instead of asking questions.

    This, I think, is a valid point. I often find myself thinking, “how does this fit in with my position on the issue” rather than simply trying to understand an idea in its own right. I try to read everything here with a spirit of inquiry, but I don’t always succeed. Also, it’s demonstrable that argumentative posts get far more attention here than exploratory ones.

  30. Scott,

    I asked you a very simple question, namely:

    What was it, exactly, that convinced you that your “doomsday” scenario was a “live possibility”? What was it that “suddenly” transformed what you had previously dismissed as “a preposterous piece of scientistic nonsense” into “the most important problem I could imagine”?

    I also asked you how you think that the biases and heuristics research you cite poses any kind of threat to folk psychological explanations of people’s actions in terms of intentions, beliefs, desires, habits, tastes, preferences, motivations, goals, hopes, and so on.

    You can’t take such simple questions seriously, yet you expect us to take seriously your unsupported hyperbole about how we are facing an imminent apocalypse and that we should consign “the sum of human speculation on the human to superstition”?!

    I notice that you’ve been congratulating yourself over on your own blog about how you’ve hit us with something so radically new that we benighted folk, still stuck as we are at “Square One”, are just not able to make sense of it. I think Mark makes a very good suggestion in reply. I’m reproducing his comment to you from your blog here:

    Scott,

    “I’m hitting them with a sensibility they’ve never encountered before, is all – an eliminativism that falls out of a sustained theoretical consideration of heuristic neglect”

    Do you really think you’ve made a compelling case for that over at Massimo’s blog? Well, I’m sorry to tell you, but you really haven’t. Maybe it’s clear to you, but you have obviously done a dreadful job at making it clear to others. How about some simple arguments explaining how you think that eliminativism about intentional psychology “falls out of a sustained theoretical consideration of heuristic neglect”? You may think you have already made that case, but again, you really haven’t. I suggest sticking to that claim and trying to spell it out in the form of a clear argument. How do you get from neglect to elimination? How about spelling that out more clearly, drawing upon whatever evidence you think best supports it, and then see how it floats?

    You complain over here that “what they should be doing is raising concerns and asking questions”, yet there are plenty of people over there asking you some very simple and fair-minded questions, and raising some very obvious concerns (see e.g. Abe Cochrane’s comments), thus giving you another chance to make your position clear. Instead of hanging out here with your few acolytes boasting about how you’re too far ahead of everyone for them to understand you, how about if you address some of those questions which people have bothered to put to you? Because whatever you may fancifully think, I’m afraid that, right now, the impression most people have is not that you’re “onto something new and not so easily dismissed” as that you’re on to something old which *is* easily dismissed.

    If you’re not able to answer my simple questions, how about doing as Mark suggests and having a go at explaining how eliminativism about intentional psychology is supposed to “fall out of a sustained theoretical consideration of heuristic neglect”?

  31. Jarnauga111: “So science will likely explain both the natural and the human and yet be “doomed to be a fractionate mosaic of techniques and tools.” I confess I don’t understand these seemingly contradictory positions.”

    Isn’t science a mosaic of techniques and tools now? Isn’t it nevertheless the single most transformative claim making institution in the history of the human race?

  32. Excellent! Having combed through your geriatric insights and hindsights a few times, I am unable to come up with anything disagreeable to say. Your analysis squares very well with the way things look in my own mature years, so I am biased. Basically, human beings are severely limited in their ability to gather information about their surroundings, even though we are the evolutionary champions in this regard. Our intuition is WYSIATI (“what you see is all there is”), and as a result our image of ‘reality’ is riven with error, deception, fantasy, illusion and delusion. It is not easy to be a human.

    Science will increasingly destroy the comfortable myths of the past. The old certainties and verities are damaged goods. Philosophy has been feeling the heat lately. It is not that philosophers will not keep up; it is that they cannot, because the tidal wave of information builds exponentially. The foundational texts of the major religions need to be rewritten. Humanity will stumble on as all this information percolates, slowly, through the culture. Change is slow and progress does not run in a straight line.

    Fodor does seem to overstate the cataclysm that will ensue once everyone realizes the depth of our collective confusion. Cataclysmic upheavals did not occur when the Romans decided to dump their multitudinous Pantheon and replace it with a tripartite One. Similarly, the discovery that earth was not at the center of the universe did not lead to a mass extinction. Although the unfortunate Giordano Bruno was brutally murdered in Rome for looking too far into the future, the Roman Catholic church has apologized, albeit a few hundred years later. Progress?

    You ask “Is there anything else we can turn to, any feature of traditional theoretical knowledge of the human that doesn’t simply rub our noses in Square One?” Well, with geriatric foresight I will say maybe. Firstly, Square One is not so terrible, considering where we have come from: some atoms of C, H, O and N, i.a., having coalesced in a primordial soup, led to the emergence of life! That was no mean feat. In a few short billion years life has now acquired the ability to contemplate itself and everything around. Square One is actually pretty amazing!

    Second, Square One is all we have, but it is not static, it evolves, and we have some control over it, but not complete. Square One is the only and best address in town; there are many rooms and lovely salons, a few torture chambers. A hive of incessant activity from which we escape in restful sleep, perhaps to dream. All we can do is invest in this house of experience and learning and pass the best of it on before we die.

    We cannot know the future and the past is quite murky too, therefore we are not sure where we are headed. We must work harder at avoiding the mistakes of the past.

  33. Socratic Gadfly: I apologize for coming across as arrogant. I received a pretty chill reception from the outset, and I am writing these replies as economically as I can. I generally think of myself as a fool, if that’s any consolation. But I generally think of humanity as such too, so…

    My heuristic interpretation of intentional idiom is speculative, something I think will be confirmed in the course of time. Check out http://rsbakker.wordpress.com/2014/11/02/meaning-fetishism/ if you’re interested.

  34. Thomas Jones: This is pretty close to how I thought it would go. I’m basically saying that the life’s work of a good number of you has been a colossal waste of time. I would certainly hate me if I were you! I am an outgroup threat, and I think the reaction here illustrates as much brilliantly. Minimize via a handful of ancient communicative tactics. Attack the tone. Attack the institutional provenance. Pose easy-to-dismiss caricatures. Tell the offender to go back to where they belong! I know you guys think you’re different, somehow above the fray (because everyone thinks this), but you’re human like the rest of us. Vocabulary can only change so much. I didn’t expect anyone to bite any bullets for this very reason.

    But I had hoped to receive more questions clarifying points and fewer strawman assumptions than I’m used to. And at least a few answers to the question posed in the piece, for sure. The back-channel chatter I’m getting is certainly fixated on this, as well as the lack of charity.

    Critiquing the Square One analogy is definitely one way to go, especially if one has no way of responding to the problem the analogy poses. Maybe I am too committed to the metaphor. I await arguments to that effect.

  35. Aravis: “Part of the problem, here, is that it is difficult to determine what the author’s position actually is…”

    The author agrees with a good deal of what Dennett has to say, but the author doesn’t see what ‘intentional stances’ add aside from confusion, and thinks his ‘real patterns’ are just a smoke screen for eliminativism. The author also thinks his redefinitional strategy is futile… The author can go on. But if you find the author so confusing, why don’t you ask the author clarifying questions, as opposed to insinuating that he’s confusing because… take your pick! Perhaps I’m not a hack. Perhaps you’ve encountered something genuinely new, and your confusion stems from incapacities as much your own as the author’s. How could you know, short of genuinely trying to understand the author’s position? But this requires asking the author questions directly, and this requires addressing him/her in a direct manner…

    Because the fact is, Aravis, Scott (you can call me that you know!) has done a fair amount of reading over the past decades, and he has never (aside from hints in Dennett) encountered anyone who has tackled these issues in terms of neglect. I would only be too happy to be disabused of this, if it turns out I’m wrong.

    So, for me the primary issue is one of understanding what intentionality is in terms that will allow cognitive science to move on. ‘What really exists’ relative to cognitive science is what I’m talking about. I think cognitive science will embrace eliminativism, that it will turn its back on traditional philosophical accounts of intentional phenomena, which it will explain away as artifacts of metacognitive neglect. If history is any guide, this will provide the new cognitive baseline for our understanding of the ‘human.’

  36. Hi SocraticGadfly,

    The implication that, per a quote from the book, “ignited assemblies of neurons literally make up your mind” somehow eliminates the “self,” “consciousness,” etc.? Tosh. No more than electrons in certain orbital probability clouds are described as orbiting an atom with 79 protons eliminates “gold.”

    Why not think of gold as eliminated?

    Isn’t it possible both to think of gold and also, as a second way of thinking, to think of it as basically not there?

    Certainly in terms of money, it’s easy enough to think of a currency from the past and to think that at the time it was worth something, but now it’s ‘not there’, except as a physical artifact. Surely no one disputes there are obsolete currencies – currencies that provoked the thought they were worth something at the time, but now provoke the second way of thinking, that they are worth nothing (even as they still exist). And on reflection it’s possible to consider both types of thinking, not just one.

    The same can be applied to gold – we can both think it’s there, but also employ a second way of thinking in which there is just a clump of electrons buzzing about. Just a physical artifact.

    Do we have to be stuck in one mode of thinking in regard to the mind – can’t we consider a second way of thinking in which self and consciousness are, at least in terms of the way the first mode of thinking thinks about them, eliminated?

    Certainly gold isn’t sacred, anyway.

  37. Brandholm: “That said, perhaps I am missing something in your argument, because I’m not feeling the angst.”

    Neuroscience guys never do! I sometimes wonder whether it’s like trying to gross out FX guys: desensitization. But to get to your real point:

    “Science can fill in the details regarding mechanisms of how the brain creates the world for itself, including all the hidden mechanisms, but that doesn’t do much in giving you anything to work with in choosing what to do next. The hidden mechanisms (as you point out) will remain hidden to your mind. Eventually (at best) it may reach the point of showing how all the processes of the brain can take a certain environment and make an individual experience the world… as that person could have already told you on their own.”

    In a sense this actually nails a good part of our dilemma. We have all these systems for navigating the world absent any detailed causal information, each possessing limited problem ecologies. We have the capacity to adapt these systems to new problem ecologies. And as you point out, we’ll continue using them, but now we’ll know what their limits are, and we can avoid running afoul of any number of philosophical traps. In other words, we get to keep our intentional guidance systems while leaving behind all the conundrums that arise when we apply these systems to problem ecologies they simply have no hope of solving.

    Sounds like a win-win, doesn’t it?

    The problem is that these systems are adapted to environments of information scarcity. Dennett circles this problem in a fuzzy way, I think, when he discusses ‘creeping exculpation.’ We don’t yet know how neuroscience will frame this picture, but let’s just assume we have a heuristic ‘responsibility’ device, one triggering behaviours, like punishment, allowing us to navigate social complexities on the basis of very little relevant environmental information. So what happens when we begin filling this environment with more and more unprecedented information? Think of the way various media representations seem to trigger fears that make total sense in small Paleolithic communities, but are plainly irrational in our world. Imagine a time where the etiology of every classroom difficulty has been tracked, so that every student has multiple ‘accommodations.’ This is a complicated issue, but part of my fear is that the integrity of our guidance systems actually requires that we remain ignorant of them—that ignoring is simply not enough.


  38. Patrice Ayme: “And what is a “machine”, pray tell? And what is “mechanical”?”
    You mean fundamentally? I have no clue. But what does that have to do with the central role mechanism plays in the life sciences? As it stands, we know enough to cure diseases. Is that not enough?

    As for speculation as to how interpretations of the quantum can redeem traditional conceptions of the soul and free will, I genuinely hope you’re right, but I just don’t see it.


  39. @Scott Bakker,
Your article is all over the map (with cognitive neuroscience/eliminative materialism; folk psychology/Square One; doomsday prophecy for Square One/the post-intentional future; etc.). Your intentional cognition (only in terms of epistemology, a very narrow niche) is a bit interesting, but you failed to carry it to a meaningful conclusion. In fact, your conclusion {Square One… What fools we were way back when.} is wrong. It is a very naïve Scientism, much less coherent than those discussed before at this Webzine.

I have shown a four-lock test for physics. Without the keys for these four locks, physics is far removed from being a meaningful base for any Scientism. Again, there are three human ‘empirical facts’ (intelligence, consciousness and spirituality). Without ‘explanations’ of these three facts in biology, that biology is also far removed from making a meaningful Scientism. For me, those three human facts must be encompassed by (arising from) physics laws. Yet, I am not a reductionist, an over-used term with many ambiguous meanings. I am a diehard physics-ism-ist.

    Richard Dawkins: “… to think of pre-Darwinian answers to the question ‘What is Man?’ …, can you, as a matter of fact, think of any that are not now worthless except for their (considerable) historic interest?”

    Disagree, totally.

There are two types of knowledge.
Type A: We ‘know’ some facts of nature (Newton equation, quantum principle, … meiosis/mitosis, etc.). This is knowledge.

    Type B: We know what “we don’t know”. This is wisdom.

There is a relationship between these two types. The bigger the type A, the smaller the type B. But by all means, the type B is much more valuable. The current Scientism is lacking the type B, not knowing what it does not know. I have shown some solid examples for physics. For this article, I would like to show one example, from evolutionary biology.

Darwin did a great job of pointing out that the current stage of life was reached via ‘evolution’. But the Darwin-mechanism (DM) {natural selection pressure acts on the phenotype of ‘individuals’ of a population gradually and leads to ‘speciation’} is wrong. I have shown this point very briefly at https://scientiasalon.wordpress.com/2014/10/28/the-varieties-of-denialism/comment-page-1/#comment-9225 .

No life can evolve alone as an individual, not even as a species. All life must evolve on a ‘stage’, and the construction of that stage (CoS) preempts all evolution-mechanisms, and that CoS has nothing to do with the DM.
    a. Biologization: converting the inorganic compounds into biological substances, mainly done by bacteria or archaeans.
    b. Global oxygenation: started with oxygen-producing bacteria and was accelerated with oxygenic photosynthesis.
    c. The fungi rescue of the wood crisis: restoring greenhouse gas (stabilizing the global temperature) and providing space and food for land animals.

Because it denies this fact, the DM-based Scientism cannot be correct.


40. Re Coel, and having just watched a fair part of Aravis and Massimo on video … I’ll accept that as a partial description of scientism while saying that it also isn’t true. Beyond that, as I noted in calling Scott on making a ‘levels mistake,’ the levels of description don’t mesh seamlessly. They certainly don’t mesh in their descriptive power. I could describe a quantum probability wave for an elephant, but why?

And, per the issue of emergent properties, it’s not just that the descriptiveness doesn’t mesh across levels. The levels themselves don’t mesh seamlessly.

Alexander: With the rise of machine language, coupled with the continuing failure of strong AI proponents to produce anything like strong AI, I reject machine language about humans more and more, especially when it tries to analogize the brain to a computer.

    I saw this in some of Scott’s writings as well, and have seen it in Dennett. In transhumanism, for moving “wetware” of the mind to new platforms, it’s part and parcel of that, too.

And it’s clearly wrong. If the science of mind has shown us much so far, one thing it’s clearly shown is that the brain/computer analogy is simply wrong. Far beyond the simplistic idea that our brains think in parallel while computers think in serial (which, simplistic as it is, already refutes the brain/computer analogy), it’s wrong.

    No. Rather, per Wolfgang Pauli, it’s not even wrong.

Until we actually learn more (not in Scott’s description, but without preconceptions about what we think neuroscience will tell us) about what the mind sciences actually tell us, we don’t even know enough to make such analogies.

Scott: One can move beyond religious ideas of guilt without throwing out intentionality. I’ve covered this before. I think the idea that rejecting intentionality is the only way to do that comes close to a red herring. Or, per my previous comment here, again, you’re presenting a false dilemma. The flip side of that is that some determinists still use the word “guilt.”

    Your apology is accepted — to the degree it’s extant. Your caveats?

No. 2, first. What if you’re calling “bananas Foster” by the name of “spinach”? It’s legitimate/serious to point that out. No. 3? What if your claims to be original simply aren’t perceived as such? Every marketing or advertising genius in the world has a plethora of uses for “new” and “original.” That doesn’t mean it’s true. As for your first caveat? Well, again, doing intellectual judo, bringing in Wittgenstein, or whatever, we don’t have to accept your interpretations, either! And, per Wittgenstein, if that’s really your stance, then the game is over, isn’t it?

Labnut: Your dog isn’t trained to love Mozart? The next dog I get, I will have him attuned to Stravinsky! Try your dog on some Schönberg first. Since dogs, of course, don’t produce music, you need to introduce him to music outside the major/minor tonal system, and serial music will do that.


Introducing the wisdom of Ted Bundy confirms my intuition that behind Scott’s fancy dancing is this: inevitably Science (materialist reductionism) is going to so completely problematize (if not disprove) the existence of free will that large portions of the populace will go haywire and kill with no sense of responsibility. Morals will go out the window. As an excuse, though not directly a cause, for psychotic behavior, he may well be right, and this is indeed the dark side of scientism. Scientism says “You can’t resist Science, Science is all there Is.” But there is a Manichaeism here. The rosier, institutional Scientismists, Tyson for instance, give us a religiose Science, one that links us all in a hope for a better future — a faith in Truth. But dystopian scientism (the mad scientist, Frankenstein and Faust) is no less alive and now occupies a continental philosophical position known as accelerationism. This super-realist position takes scienZe, knowledge, praxis, the depredations of capitalism, whatever, as inevitable, and thus hastens to remove obstructions. Some names that come to mind are Nick Land, Ray Brassier and Quentin Meillassoux. They all seem to have a fondness for H. P. Lovecraft in common.


  42. Labnut: Thanks for that link! I explicitly frame philosophy as ‘theoretical anosognosia’ in a couple of my pieces: http://rsbakker.wordpress.com/2014/06/25/the-philosopher-the-drunk-and-the-lamppost/

    So I see a number of problems, some I’ve sketched in responses above.

    1) Mourning the manifest image problem. I know people who don’t feel this at all, but the idea of consigning the sum of pre-cognitive revolution human speculation on the nature of the human to the superstitious myth bin breaks my heart. I’ve tried to come up with arguments for why it should break other hearts, but they all smack of rationalization to me.

2) Meat makes right problem. Biology is in the process of becoming technology. Once we concede that the various kinds of ‘floating efficacies’ we attribute to things like rules, representations, wills, and so on are simply heuristic ways of solving problems absent high-dimensional information regarding what is actually going on, the technical plasticity of the biological amounts to exploding the ‘human’ (I have a novel in the works on this subject).

3) Cognitive pollution problem. As heuristic systems tuned to ancestral, information-scarce problem ecologies, our intentional systems are very likely to give us a rough ride as more and more unprecedented information becomes available. What Dennett calls ‘creeping exculpation’ is simply the tip of the iceberg. My fear is that our world is becoming a cognitive analogue to a gallery of visual illusions, where environmental information structures trigger more and more intuitive assumptions in maladaptive ways.

    But these are all living issues on TPB. We could use an optimistic soul or two!


  43. Hi Scott,

“What I would like to respond to are, first and foremost, answers to the question posed in the piece: what evidences the claim that intentional philosophy has made it past Square One? The absence of such responses is… suspicious?”

    No, as a number of us have pointed out we are still pretty much in the dark about what problem you are posing or what question you are asking.

    You keep saying you mean no offence and I in turn mean no offence when I say that your style is vague and prolix and hence your various points are unclear.

You keep coming back with this triumphal ‘no one can answer my question’ without taking on board that the reason is not that you have stumped us; the reason is that no one can work out what the heck the question is.

    Every time we make a stab at what you are talking about, you say that we are wrong. But, again – just what is the problem? What is the question you are asking?

“tu quoque arguments, because no, I don’t have to accept your interpretations of intentional idioms to use intentional idioms because I have my own interpretation of intentional idioms;”

    But don’t you see, that is the entire problem? When you talk of a post-intentional future, are you using your own interpretation of ‘intentional’?

    If so then, hey, no problem. What is it to me that your personal interpretation of ‘intentionality’ is going to be overthrown? We don’t even know what that interpretation is and you won’t tell us, so why are you even asking?

    On the other hand, if you are talking of a problem that has relevance to anyone but yourself then you must be referring to something apart from your own private interpretation of intentionality.

    Thus you are now using both interpretations, your own and what you imagine is ours, mixed in the same argument and we are supposed to guess which interpretation to use each time.

    This is my fifth and final comment I believe. I am sorry I could not grasp what problem you foresaw or understand what question you were asking.

    But as I said earlier, science may change the way we think about ourselves some time in the future, but if any of us could second guess science we would not need scientists, and so nobody can really tell how science will change the way we think about ourselves.

    It may be something we regard as good, bad or neutral.

    Some of us like to think that what science has done in the past in other domains is to explain and clarify. Generally speaking, that has been a good thing.

I have no reason to believe that when scientists finally get a handle on the brain it will be otherwise: they will explain and clarify.

    I am sorry if I can’t manage to get upset at the prospect that, at some time in the future, the mind will be explained and clarified.


  44. Labnut: “But is this confidence [in scientific progress] warranted? As someone pithily put it, ‘the future is not what it used to be’.”

    For me ‘progress’ simply means an accumulating capacity to solve an accumulating number of problems. I don’t think science as it stands is all conquering by any stretch, but I also don’t think humanity stands anywhere near the limits of its imperial expansion. I think it will swallow us whole, reinvent us into something that will reinvent it, at which point all bets are off.

The question you need to ask regarding free will and its undeniability is simply how it would seem to an anosognosiac. If the experience is plausibly the same, then instances of it evidence either the existence of something well-nigh inexplicable, or anosognosia. The latter is far and away the more modest answer, I think.


  45. @astrodreamer I almost laughed at this: “inevitably Science (materialist reductionism) is going to so completely problematize (if not disprove) the existence of free will that large portions of the populace will go haywire and kill with no sense of responsibility.”

    As I take it you’ve reduced Bakker’s stance into something it is not. His form of eliminativism is not physicalism, nor is it materialist and reductionist per se.

The notion that “free will” is being problematized as a category of thought, as a concept, is true. Why? The concept is philosophical and has a political, social, cultural, and psychological lineage. The sciences are about descriptions, and the trope, the turn of phrase, the concept that masks what is in truth a process in the brain is slowly being uncovered within the neurosciences. Free will per se will not go away, but our understanding of what this old concept is will change as we begin to understand the functions and relations that both trigger and enact these processes, to which we, as first-person singular agents, do not have full access (due to operative neglect). Whether one buys into the interpretations being given by neuroscientists of what they are uncovering through the many new technologies of imaging and electron-based systems is another matter. The interpretations of the factual evidence will be weighed as they always have been in the sciences: are they valid or not? Philosophers can coin these as ‘truth claims’ (Analytic), etc.

The notion that the populace will go berserk is over the top. What worries me more is that the new neurosciences might lead to darker actual manipulations by governments of their citizens, and by rogue nations against their enemies, etc. The sciences have always been two-edged in this way. As the neurosciences help in the medical field, we must also realize that entities like DARPA and others will weaponize such knowledge as tools of destruction, defense, or interrogation, et al. Scott in his Neuropath set the metaphor to maximum to show how such enactments could happen in an over-the-top situation. To me it would be even deadlier in the hands of cold, reasonable men in espionage and the military.


  46. Scott,

    You keep complaining that no one has asked you clarificatory questions (“what they should be doing is raising concerns and asking questions”, “why don’t you ask the author clarifying questions?”, “I had hoped to receive more questions clarifying points”), so why is it that you have ignored every single one of mine?

The questions I have put to you are hardly unreasonable ones. Rather, they are simple requests for you to substantiate (that is, provide arguments and cite evidence for) your core claims.

So why all the evasiveness? Would you simply have us take all this on trust? Or perhaps you’re just waiting until I use up my quota of five comments? Well, this was my fourth, so if you hold out a little longer with evasions and obfuscations you might just get away with it, in which case you will be able to carry on confusing your sense of certainty that you’re right with rational warrant for your claims. How ironic would that be, huh?


  47. rsbakker wrote:

“The author can go on. But if you find the author so confusing, why don’t you ask the author clarifying questions, as opposed to insinuating that he’s confusing because… take your pick! Perhaps I’m not a hack. Perhaps you’ve encountered something genuinely new, and your confusion stems from incapacities as much your own as the author’s.”

    ———————-

    I made several very specific critical observations, including pointing to a mistaken understanding of relevant levels of explanation and their relationship to our different purposes in engaging in inquiry, identifying a misconception of ontological commitment and of the status of claims as to what “really” exists, and making a similar point as to the framework-relative quality of causal explanations.

It is too bad that you are so fixated on the idea that you’ve come up with something New! Unprecedented! Baffling! because you might otherwise have replied to these and the many, many other substantive objections you have received from commenters here. The fact that we rarely agree on anything here at S.S. suggests that *perhaps* we may have a point, in this case.

Regardless, there is no need to take our word on it — and clearly, from your evident attitude, you won’t, despite the fact that several of us work in the field, in a professional capacity. (I can’t deny that it is disconcerting to carry on a conversation with you here and then find that you are trash talking us over at your own blog.)

    Anyway, if you want people to take your musings on this subject seriously, work them up into a substantial article and publish it in a peer-reviewed, blind refereed journal. That, ultimately, is the standard of sound work in our profession, as well as in the sciences.

    A cheer-leading squad on your own blog? Not so much.


  48. Scott,

    I have to admit I’m still confused about this point you raise as a non-serious criticism (a propos of footnote 3 in the article):

    1) tu quoque arguments, because no, I don’t have to accept your interpretations of intentional idioms to use intentional idioms because I have my own interpretation of intentional idioms

So far as I can reconstruct your position, you’re saying that (a) intentionality isn’t what we traditionally have taken it to be, that (b) to the extent there is such a thing, it’s all heuristic rather than “meaningful”, and that (c) you can justify claims (a) and (b) on the grounds of an argument from the progress of science in the general disenchantment of nature. And (d) if human beings just are natural phenomena, then (c) seems the reasonable conclusion to draw.

    The reason I (for one) have had a hard time going along with this is because I don’t see how you can get from (c) to (a) or (b) without already drawing on the resources of intentional language to do so. This isn’t a simple tu quoque response or question-begging — it’s not just that you have your own conception of intentionality (etc.) but rather a concern that you are not entitled to that conception. (Which is why I raised the concern about tacit epistemological commitments in my last response.)

Unless I’ve just entirely missed it in this piece, you never quite address why (1) you’re justified in your alternative conception of intentionality and (2) why anyone else ought to care. But not only that, when the argument you’re setting out (3) denies that there are reasons (viz. a-c), then (1) and (2) cannot be satisfied to begin with!

    This was the concern I raised that you were, perhaps unwittingly, relying on not only intentional language but an epistemic reduction to propositional attitudes as the only relevant source of evidence — but then again the conclusion of your argument is that there is no evidence, just as there is no justification or reasons. That is, this entire project of (a-d) necessarily presupposes the folk psychological view of intentionality in order to even formulate the view that denies there is such a thing.

    To sum up: I don’t think anyone denies that you have your own conception of intentionality. The reason you’re encountering what you’ve labeled question-begging or tu quoque arguments is because of how you arrive at that conception of intentional phenomena, which is already presuming intentionality in its formulation and expression. What is being denied is that you have any entitlement to that conception without argument — which you haven’t provided in any way that actually challenges this particular objection, and which anyway you cannot provide by the lights of your own view.


@chaosmogony Reading Scott’s essay, I find he never states that “intentionality” will be done away with or doesn’t exist, as he says:

    “The indispensability of human intentional cognition (upon which Fodor also hangs his argument) turns on its ability to solve problems involving systems far too complex to be economically cognized in terms of cause and effect. It’s all we’ve got.”

The point he is making concerns what the neurosciences will eventually explain. As he says: “Since intentional concepts often figure in our use of these heuristics, they will be among the things cognitive science eventually explains. And then we will finally know what they are and how they function — we will know all the things that deliberative, theoretical metacognition neglects.”

Heuristics will not replace intentional concepts, but as the neurosciences begin, over the coming years, to understand more and more of what lies behind these older philosophical conceptions of intentionality, we will begin to form new understandings of these processes, which may or may not show whether the philosophical terminology needs to be updated to fit the exacting truths or descriptions of the sciences. I personally have never seen the need to stipulate the priority of science over philosophy, nor of philosophy over the sciences: somehow we need to forge a diachronic and synchronic relation among the various concepts we use.

As many have related, the notion of the ‘manifest image’ will probably be with us till our species goes extinct, as will the more refined views of the ‘scientific image’ as a specialist vocabulary and an intrinsic/extrinsic tool with heuristic appeal. Whatever it may be, at the moment we need openness, clarity, and a realization that we do not have all the evidence in: we neglect certain epistemic and ontological aspects when we describe, or even infer by indirect or direct appeals.


  50. SciSal: “First, there is essentially no positive argument.”

    I’m not even sure I managed to keep to the 3000 words making my critical case! I’d only be too happy to give you the ‘positive’ side of the coin in a future piece, SciSal. Either way, I do want to thank you for allowing me to state my case in your forum. You have something special going on here. Very much so.

    “there is an issue of proper, or most informative, levels of description.”

    It’s human nature to assimilate the novel to the familiar, and as such, it’s entirely natural to think what I’m arguing amounts to confusing different levels of description. But it does not. What it does, rather, is redescribe the ‘level of intentional description’ that so many take as ‘irreducible’ in terms of heuristic problem-ecologies. And in doing so, it not only allows us to understand intentional phenomena in thoroughly naturalistic ways, it offers a parsimonious explanation for why they have generated such perplexity over the ages. Given our complexity, it seems hard to imagine how our metacognitive relation to ourselves could be anything but fractionate and heuristic (which is to say, anything but the metacognitive capacity cognitive neuroscience is unearthing as we speak). Add the inevitability of neglect to this meager self-relation, the inability of metacognition to ‘meta-metacognize,’ and suddenly the riot of contradictory claims regarding consciousness and self looks inevitable. Meanwhile, it also explains the limited efficacy of our intentional posits (they’re adapted to specialized problem ecologies). It likewise explains the dramatic difference between our practical and theoretical applications of those posits, why ‘belief,’ say, can function so well in everyday discourse, yet remain the source of perpetual theoretical controversy. And lastly, it explains the perplexing antipathy between intentional phenomenality and causality (because the former is adapted to solve absent that information).

    In short, it offers a way to block the abductive arguments that have been the bane of eliminativism in cognitive science. As I’ve been saying throughout, this is not at all the eliminativism people here have been accustomed to arguing.

    As for the reality of posits in physics, I don’t know one way or another. I gave up reading Ladyman and Ross! The argument here is that I have a parsimonious way of understanding intentional posits that simplifies our ontologies and resolves a good number of now ancient conundrums. It even allows us to clarify the relationship between psychological models and neural mechanisms. And it makes testable predictions, to boot.

    And yes, it implies that the intentional posits crowding the philosophical and special science arena are best thought of as metacognitive illusions.

