Back to Square One: toward a post-intentional future

by Scott Bakker

“… when you are actually challenged to think of pre-Darwinian answers to the question ‘What is Man?’ ‘Is there a meaning to life?’ ‘What are we for?’, can you, as a matter of fact, think of any that are not now worthless except for their (considerable) historic interest? There is such a thing as being just plain wrong and that is what before 1859, all answers to those questions were.” (Richard Dawkins, The Selfish Gene, p. 267)

Biocentrism is dead for the same reason geocentrism is dead for the same reason all of our prescientific theories regarding nature are dead: our traditional assumptions simply could not withstand scientific scrutiny. All things being equal, we have no reason to think our own nature will conform to our prescientific assumptions any better than the rest of nature has. Humans are prone to draw erroneous conclusions in the absence of information. In many cases, we find our stories more convincing the less information we possess! [1] So it should come as no surprise that the sciences, which turn on the accumulation of information, would consistently overthrow traditional views. All things being equal, we should expect that any scientific investigation of our nature will out and out contradict our traditional self-understanding.

Everything, of course, turns on all things being equal — and I mean everything. All of it, the kaleidoscopic sum of our traditional, discursive human self-understanding, rests on the human capacity to know the human absent science. As Jerry Fodor famously writes:

“if commonsense intentional psychology really were to collapse, that would be, beyond comparison, the greatest intellectual catastrophe in the history of our species; if we’re that wrong about the mind, then that’s the wrongest we’ve ever been about anything. The collapse of the supernatural, for example, didn’t compare; theism never came close to being as intimately involved in our thought and practice — especially our practice — as belief/desire explanation is.” [2]

You could say the scientific overthrow of our traditional theoretical understanding of ourselves amounts to a kind of doomsday, the extinction of the humanity we have historically taken ourselves to be. Billions of “selves,” if not people, would die — at least for the purposes of theoretical knowledge!

For years now I’ve been exploring this “worst case scenario,” both in my novels and in my online screeds. After I realized the penury of the standard objections (and as a one-time Heideggerean and Wittgensteinian, I knew them all too well), I understood that such a semantic doomsday scenario was far from the self-refuting impossibility I had habitually assumed [3]. Fodor’s ‘greatest intellectual catastrophe’ was a live possibility — and a terrifying one at that. What had been a preposterous piece of scientistic nonsense suddenly became the most important problem I could imagine. Two general questions have hounded me ever since. The first was, What would a post-intentional future look like? What could it look like? The second was, Why the certainty? Why are we so convinced that we are the sole exception, the one domain that can be theoretically cognized absent the prostheses of science?

With reference to the first, I’ll say only that the field is quite lonely, a fact that regularly amazes me, but never surprises [4]. The second, however, has received quite a bit of attention, albeit yoked to concerns quite different from my own.

So given that humanity is just another facet of nature, why should we think science will do anything but demolish our traditional assumptions? Why are all things not equal when it comes to the domain of the human? The obvious answer is simply that we are that domain. As humans, we happen to be both the object and the subject of the domain at issue. We need not worry that cognitive science will overthrow our traditional self-understanding, because, as humans, we clearly possess a privileged epistemic relation to humans. We have an “inside track,” you could say.

The question I would like to explore here is simply, Do we? Do we possess a privileged epistemic relation to the human, or do we simply possess a distinct one? Being a human, after all, does not entail theoretical knowledge of the human. Our ancestors thrived in the absence of any explicit theoretical knowledge of themselves — luckily for us. Moreover, traditional theoretical knowledge of the human doesn’t really exhibit the virtues belonging to scientific theoretical knowledge. It doesn’t command consensus. It has no decisive practical consequences. Even where it seems to function practically, as in law, say, no one can agree how it operates, let alone just what is doing the operating. Think of the astonishing epistemic difference between mathematics and the philosophy of mathematics!

If anything, traditional theoretical knowledge of the human looks an awful lot like prescientific knowledge in other domains. Like something that isn’t knowledge at all.

Here’s a thought experiment. Try to recall “what it was like” before you began to ponder, to reflect, and most importantly, before you were exposed to the theoretical reflections of others. I’m sure we all have some dim memory of those days, back when our metacognitive capacities were exclusively tasked to practical matters. For the purposes of argument, let’s take this as a crude approximation of our base metacognitive capacity, a ballpark of what our ancestors could metacognize of their own nature before the birth of philosophy.

Let’s refer to this age of theoretical metacognitive innocence as “Square One,” the point where we had no explicit, systematic understanding of what we were. In terms of metacognition, you could say we were stranded in the dark, both as a child and as a pre-philosophical species. No Dasein. No qualia. No personality. No normativity. No agency. No intentionality. I’m not saying none of these things existed (at least not yet), only that we had yet to discern them via reflection. Certainly we used intentional terms, talked about desires and beliefs and so on, but this doesn’t entail any conscious, theoretical understanding of what desires and beliefs and so on were. Things were what they were. Scathing wit and sour looks silenced those who dared suggest otherwise.

So imagine this metacognitive dark, this place you once were, and where a good number of you, I am sure, believe your children, relatives, and students — especially your students — still dwell. I understand the reflex is to fill this cavity, clutter it with a lifetime of insight and learning, to think of the above as a list of discoveries (depending on your intentional persuasion, of course), but resist, recall the darkness of the room you once dwelt in, the room of you, back when you were theoretically opaque to yourself.

But of course, it never seemed “dark” back then, did it? Ignorance never does, so long as we remain ignorant of it. If anything, ignorance makes what little you do see appear to be so much more than it is. If you were like me, anyway, you assumed that you saw pretty much everything there was to see, reflection-wise. Since your blinkered self-view was all the view there was, the idea that it comprised a mere peephole had to be preposterous. Why else would the folk regard philosophy as obvious bunk (and philosophers as unlicensed lawyers), if not for the wretched poverty of their perspectives?

The Nobel Laureate Daniel Kahneman calls this effect “what you see is all there is,” or WYSIATI. As he explains:

“You cannot help dealing with the limited information you have as if it were all there is to know. You build the best possible story from the information available to you, and if it is a good story, you believe it. Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” [5]

The idea, basically, is that our cognitive systems often process information blind to the adequacy of that information. They run with what they get, present hopeless solutions as the only game in town. This is why our personal Square One, benighted as it seems now, seemed so bright back then, and why “darkness,” perhaps our most common metaphor for ignorance, needs to be qualified. Darkness actually provides information regarding the absence of information, and we had no such luxury as children or as a species. We lacked access to any information tracking the lack of information: the “darkness” we had to overcome, in other words, was the darkness of neglect. Small wonder our ignorance has felt so enlightened at every turn! Only now, with the wisdom of post-secondary education, countless colloquia, and geriatric hindsight can we see how little of ourselves we could see back then.

But don’t be too quick to shake your head and chuckle at your youthful folly, because the problem of metacognitive neglect obtains as much in your dotage as in your prime. You agree that we suffered metacognitive neglect both as pretheoretical individuals and as a pretheoretical species, and that this was why we failed to see how little we could see. This means 1) that you acknowledge the extreme nature of our native metacognitive incapacity, the limited and — at least in the short term — intractable character of the information nature has rendered available for reflection; and 2) that this incapacity applies to itself as much as to any other component of cognition. You acknowledge, in other words, the bare possibility that you remain stranded at Square One.

Thanks to WYSIATI, the dark room of self-understanding cannot but seem perpetually bright. Certainly it feels “different this time,” but given the reflexive nature of this presumption, the worry is that you have simply fallen into a more sophisticated version of the same trap. Perhaps you simply occupy a more complicated version of Square One, a cavity “filled with sound and fury,” but ultimately signifying nothing.

Raising the question, Have we shed any real theoretical light on the dark room of the human soul? Or does it just seem that way?

The question of metacognitive neglect has to stand among the most important questions any philosopher can ask, given that theoretical reflection comprises their bread and butter. This is even more the case now that we are beginning to tease apart the neurobiology of metacognition. The more we learn about our basic metacognitive capacities, the more heuristic, error-prone, and fractionate they become [6]. The issue is also central to the question of what the sciences will likely make of the human, posed above. If traditional reflection hasn’t shed any real light on the human room, then it seems fair to say our relation to the domain of the human, though epistemically distinct, is not epistemically privileged, at least not in any way that precludes the possibility of Fodor’s semantic doomsday.

So how are we to know? How might we decide whether we, despite our manifest metacognitive incapacity, have groped our way beyond Square One, that the clouds of incompatible claims comprising our traditional theoretical knowledge of the human actually orbit something real? What discursive features should we look for?

The capacity to command consensus can’t be one of them. This is the one big respect in which traditional theoretical knowledge of the human fairly shouts Square One. Wherever you find intentional phenomena theorized, you find interminable controversy.

Practical efficacy has promise — this is where Fodor, for instance, plants his flag. But we need to be careful not to conflate (as he does) the efficacy of various cognitive modes with the theoretical tales we advance to explain them. No one needs an explicit theory of rule-following to speak of rules. Everyone agrees that rules are needed, but no one can agree what rules are. If the efficacy belonging to the phenomena requiring explanation — the efficacy of intentional terms — attests to the efficacy of the theoretical posits conflated with them, then each and every brand of intentionalism would be a kind of auto-evidencing discourse. The efficacy of Square One intentional talk evidences only the efficacy of Square One intentional talk, not any given theory of that efficacy — theories which, quite notoriously, seem to have no decisive practical problem-solving power whatsoever. Though intentional vocabulary is clearly part of the human floor-plan, it is simply not the case that we’re “born mentalists.” We seem to be born spiritualists, if anything! [7]

Certainly a good number of traditional concepts have been operationalized in a wide variety of scientific contexts — things like “rationality,” “representation,” “goal,” and so on — but they remain opaque, and continually worry the naturalistic credentials of the sciences relying on them. In the case of cognitive science, they have stymied all attempts to define the domain itself — cognition! And what’s more, given that no one is denying the functionality of intentional concepts (just our traditional accounts of them), the possibility of exaptation [8] should come as no surprise. Finding new ways to use old tools is what humans do. In fact, given Square One, we should expect to continually stumble across solutions we cannot decisively explain, much as we did as children.

Everything turns on understanding the heuristic nature of intentional cognition, how it has adapted to solve the behavior of astronomically complex systems (including itself) absent any detailed causal information. The apparent indispensability of its modes turns on the indispensability of heuristics more generally, the need to solve problems given limited access and resources. As heuristic, intentional cognition possesses what ecological rationalists call a “problem ecology,” a range of adaptive problems [9]. The indispensability of human intentional cognition (upon which Fodor also hangs his argument) turns on its ability to solve problems involving systems far too complex to be economically cognized in terms of cause and effect. It’s all we’ve got.

So we have to rely on cause-neglecting heuristics to navigate our world. Always. Everywhere. Surely these cause-neglecting heuristics are among the explananda of cognitive science. Since intentional concepts often figure in our use of these heuristics, they will be among the things cognitive science eventually explains. And then we will finally know what they are and how they function — we will know all the things that deliberative, theoretical metacognition neglects.

The question of whether some kind of explanation over and above this — famously, some explanation of intentional concepts in intentional terms — is required simply becomes a question of problem ecologies. Does intentional cognition itself lie within the problem ecology of intentional cognition? Can the nature of intentional concepts be cashed out in intentional terms?

The answer has to be no — obviously, one would think. Why? Because intentional cognition solves by neglecting what is actually going on! As the sciences show, it can be applied to various local problems in various technical problem ecologies, but only at the cost of a more global causal understanding. It helps us make some intuitive sense of cognition, allows us to push in certain directions along certain lines of research, but it can never tell us what cognition is simply because solving that problem requires the very information intentional cognition has evolved to do without. Intentional cognition, in other words, possesses ecological limits. Lacking any metacognitive awareness of those limits, we have the tendency to apply it to problems it simply cannot solve. Indeed, our chronic misapplication of intentional cognition to problem-ecologies that only causal cognition could genuinely solve is one of the biggest reasons why science has so reliably overthrown our traditional understanding of the world. The apocalyptic possibility raised here is that traditional philosophy turns on the serial misapplication of intentional cognition to itself, much as traditional religion, say, turns on the serial misapplication of intentional cognition to the world.

Of course intentional cognition is efficacious, but only given certain problem ecologies. This explains not only the local and limited nature of its posits in various scientific contexts, but why purely philosophical accounts of intentional cognition possess no decisive utility whatsoever. Despite its superficial appeal, then, practical efficacy exhibits discursive features entirely consistent with Square One (doomsday). So we need to look elsewhere for our redeeming discursive feature.

But where? Well, the most obvious place to look is to science. If our epistemic relation to ourselves is privileged as opposed to merely distinct, then you would think that cognitive science would be revealing as much, either vindicating our theoretical metacognitive acumen or, at the very least, trending in that direction. Unfortunately, precisely the opposite is the case. Memory is not veridical. The feeling of willing is inferential. Attention can be unconscious. The feeling of certainty has no reliable connection to rational warrant. We make informed guesses as to our motives. Innumerable biases afflict both automatic and deliberative cognitive processes. Perception is supervisory, and easily confounded in many surprising ways. And the list of counter-intuitive findings goes on and on. Cognitive science literally bellows Square One, and how could it not, when it’s tasked to discover everything we neglect, all those facts of ourselves that utterly escape metacognition. Stanislas Dehaene goes so far as to state it as a law: “We constantly overestimate our awareness — even when we are aware of glaring gaps in our awareness” [10]. The sum of what we’re learning is the sum of what we’ve always been, only without knowing as much. Slowly, the blinds on the dark room of our theoretical innocence are being drawn, and so far at least, it looks nothing at all like the room described by traditional theoretical accounts.

As we should expect, given the scant and opportunistic nature of the information our forebears had to go on. To be human is to be perpetually perplexed by what is most intimate — the skeptics have been arguing as much since the birth of philosophy! But since they only had the idiom of philosophy to evidence their case, philosophers found it easy to be skeptical of their skepticism. Cognitive science, however, is building a far more perilous case.

So to round up: Traditional theoretical knowledge of the human simply does not command the kind of consensus we might expect from a genuinely privileged epistemic relationship. It seems to possess some practical efficacy, but no more than what we would expect from a distinct (i.e., heuristic) epistemic relationship. And so far, at least, the science continues to baffle and contradict our most profound metacognitive intuitions.

Is there anything else we can turn to, any feature of traditional theoretical knowledge of the human that doesn’t simply rub our noses in Square One? Some kind of gut feeling, perhaps? An experience at an old New England inn?

You tell me. I can remember what it was like listening to claims like those I’m advancing here. I remember the kind of intellectual incredulity they occasioned, the welling need to disabuse my interlocutor of what was so clearly an instance of “bad philosophy.” Alarmism! Scientism! Greedy reductionism! Incoherent blather! What about quus? I would cry. I often chuckle and shake my head now. Ah, Square One… What fools we were way back when. At least we were happy.

_____

Scott Bakker has written eight novels translated into over a dozen languages, including Neuropath, a dystopic meditation on the cultural impact of cognitive science, and the nihilistic epic fantasy series, The Prince of Nothing. He lives in London, Ontario, with his wife and his daughter.

[1] A finding that arises out of the heuristics and biases research program spearheaded by Amos Tversky and Daniel Kahneman. Kahneman’s recent Thinking, Fast and Slow provides a brilliant and engaging overview of that program. I return to Kahneman below.

[2] Psychosemantics, p. vii.

[3] Using intentional concepts does not entail commitment to intentionalism, any more than using capital entails a commitment to capitalism. Tu quoque arguments simply beg the question, assuming the truth of the very intentional assumptions in question to argue the incoherence of questioning them. If you define your explanation into the phenomena we’re attempting to explain, then alternative explanations will appear to beg your explanation to the extent the phenomena play some functional role in the process of explanation more generally. Despite the obvious circularity of this tactic, it remains the weapon of choice for a great number of intentional philosophers.

[4] Another lonely traveller on this road is Stephen Turner, who also dares ponder the possibility of a post-intentional future, albeit in a very different manner.

[5] Thinking, Fast and Slow, p. 201.

[6] See Stephen M. Fleming and Raymond J. Dolan, “The neural basis of metacognitive ability.”

[7] See Natalie A. Emmons and Deborah Kelemen, “The Development of Children’s Prelife Reasoning.”

[8] Exaptation.

[9] I urge anyone not familiar with the Adaptive Behaviour and Cognition Research Group to investigate their growing body of work on heuristics.

[10] Consciousness and the Brain, p. 79. For an extended consideration of the implications of the Global Neuronal Workspace Theory of Consciousness regarding this issue see, R. Scott Bakker, “The Missing Half of the Global Neuronal Workspace.”

173 thoughts on “Back to Square One: toward a post-intentional future”

  1. Philip Thrift: “Perhaps “abstractions” is better than “illusions”. Here I mean abstraction* in the constructive, computer science sense.”

    Regarding ‘abstractions’ this is one of the reasons why I’m interested in Chris Eliasmith’s semantic pointers approach. The ‘curse of dimensionality,’ the problem of picking relevant patterns out of huge data sets, crops up for the brain itself as much as it does for neuroscientists attempting to reverse engineer the brain. I actually think that heuristics provides a more general way of thinking through these issues, but I don’t really have anything more than hunches on this issue. It seems to me that the kind of compression and truncation we find in nature is bound to be very opportunistic, particularly when it comes to systems, like human metacognition, that don’t have that much of an evolutionary track record.

    The ‘illusions’ I refer to are simply mistakes, if my larger case holds, the result of taking heuristics (instrumental abstractions?) as being self-sufficient entities/capacities.

  2. John Smith: “I should add, and I assume it has been said in other replies, that the subjective of which you are so suspicious is the same subjective that uses objectivity to get beyond intuitive failings, using logic and science”

    Well, to give a sense of how extreme I think a ‘post-intentional’ future will be, I actually think we’ll come to see ‘subject and object’ as another metacognitive heuristic we use to make sense of ourselves in environments absent any detailed information regarding our continuity with nature. Natural science conceives us as systems embedded within systems, not as subjects set over and against a world of objects. The former is the high-dimensional view, the one that allows us to mine ever more information in our attempt to understand ourselves as a component of nature. The latter is the low-dimensional view, which philosophers have been banging their heads against forever, and which the special sciences have been able to adapt in various specialized ways, but no more – precisely what we should expect of a heuristic.

    What would a post-subject/object (post-intentional) world look like? It’s hard to imagine, very counter-intuitive, but I think it’s undeniable that subject/object is heuristic, given the amount of information (to wit, almost all information regarding our natural continuity) it neglects. And we bump into its ecological limits rather frequently, if you ask me. On a Bayesian understanding of the brain, for instance, our ability to solve the inverse problem turns on the functional independence of the system to be cognized. As soon as that independence is compromised, as with observer effects, the brain’s ability to causally cognize other systems goes up in smoke. This strikes me as a very good place to start mapping the heuristic ecological limits of subject/object.

  3. thomaswischer: ” It may not be possible to build a timeless theoretical structure to explain the self conscious individuals in the world.”

    My friend Eric Schwitzgebel has debated me to a stalemate on this general topic. I agree this could be the case, with the caveat that we swap ‘timeless’ for ‘robust.’

  4. Brodix: “I agree with Massimo Pigliucci, the tone of some comments addressed to Scott Bakker is unusually harsh. I guess that, basically, he wants to know what evidences intentional theories of intentionality, so he doesn’t deserve such reactions.”

    What I would like to suggest is that the harshness is an expression of an inability to answer the question! But that’s too easy, I know. I’m still waiting on an answer though.

    As a kind of half-ass sociological aside (to take off my amateur philosopher’s cap and to put on my amateur anthropologist’s!) I really worry about the academic allergy to the uglier sides of online discourse. The ‘online disinhibition effect’ is a real phenomenon, and it leads to some nasty stuff (this ain’t nuthin), but after years of doing stuff like this I’ve come to realize that it has its *place,* if that’s the right word. The pattern we’ve seen here, where nastiness begets conciliation begets serious discussion, is one I’ve seen many, many times. My fear is that the first phase has the effect of chasing academics out of the very venues where they are needed most. I actually think it would be a tremendously positive thing, for instance, if sustained interaction with the followers of some politically extreme website became a degree requirement for polysci MAs.

  5. John Smith: “Just to be brief and clear, the answer to Scott is that intentionality is the basis for accumulation of objective knowledge, and that objective knowledge does not automatically “appear”, it takes work and time, and it is not appropriate to decry that basis. The objective explanation for subjective intentionality is being explored by neuroscience continually, even if only by correlation, so be patient.”

    I agree that humans (via more and more technical prostheses) are the ones doing the work of science, but I don’t think ‘intentional theories of the subject’ describe the humans doing that work. What I’m asking for in the piece is evidence that ‘intentionality is the basis for the accumulation of objective knowledge.’

    Think for a moment on how procrustean this division of knowledge into ‘objective’ and ‘subjective’ is, how insensitive it is to the issue of problem-ecologies, for instance. In a sense, you could look at what I’m offering as a more nuanced way to understand how knowing bears on the known bears on knowing. The subject/object dichotomy, if I’m right, will be seen as a relic of the days when cognition remained an unknown unknown.

    People are often incredulous when I say things like this, but again, I entreat them to consider how much information subject/object neglects. We needed some way of troubleshooting problems regarding cognition while lacking any access to the neurobiology that renders cognition possible. Subject/object is our way of coping with this situation. The fact that it isn’t equipped to answer questions such as ‘What is the nature of knowledge?’ is amply evidenced by the perpetual controversy we call epistemology.

  6. stevenjohnson: Who’s the OP? Me?

    Here’s a way (one of my earlier, more naïve formulations I now think) of looking at the issue – the ‘positioning problem.’ Conscious reflection can only access information that has crossed the conscious ignition threshold and is available for report (because it seems, if Block is right, say, that we are conscious of more than we can report (which confounds attempts to isolate NCCs)). What conscious reflection cannot do is ‘position’ the information it accesses – in fact, it can’t even intuit the fact that it belongs to a larger, nonconscious, functional context.

    Given this, it seems hard to imagine how traditional conscious reflection could possibly move humanity past Square One *in any respect.* What does this make of centuries of ethical reflection?

  7. Aravis: “The author is still convinced that he has hit us “with a sensibility they’ve never encountered before.” (A quick trip over to his blog confirms that.) He is also convinced that none of us has answered or even addressed his questions, re: a post-intentional future, not even Massimo! And we’ve been mean.”

    Well this can be settled easily enough.

    1) Name me one philosopher/philosophical position that uses neglect as opposed to representation to mediate the relation between the neural and the intentional.

    2) A good number of people on this thread have expressed surprise at the ad hominem rancour, including Massimo. Are they all mistaken?

    3) And yet again, I have no idea how you’re answering the question posed in the piece above. So please bear with the incapacity of an interloping hack amateur, and enlighten him with a direct answer: What evidences the fact that intentional philosophy has made it to Square Two?

    Again, I’m not claiming that the intentional posits of philosophy reduce to neurobiology – I’m claiming they don’t exist! I think intentional cognition and our everyday intentional idioms do admit of causal explanation. And I actually think running afoul of Carnap (whom I don’t know well at all) and Wittgenstein (who was once my philosophical hero) are probably quite good things. Either way, citing them to refute me seems to presuppose that both of them have made it to Square Two…

    Which brings us right back to the ‘author’s’ question.

  8. labnut: Thanks for that Christian Smith citation. The review gives me goose-pimples! It’s a perfect example of the ‘functional nihilism’ I see us drifting into as part of a larger process of ‘social akrasis’: one where the administrative apparatus is thoroughly nihilistic, and where individuals live lives of false, thoroughly commodified meaning.

    Disney World, in effect.

  9. Aravis Tarkheena: “… there is every reason to think that the intentional framework — with its ontology — will continue to serve us well in perpetuity, even while science advances our understanding of the human actor at a different level of description.”

    Amen!

    labnut: “… but I disagree {Science will show we are merely machines, incapable of true intentional behaviour} because …”

    Every lively life will always be reduced to a cadaver one day. No, reduction will definitely not tell the ‘whole’ story. E.O. Wilson wrote a book “Science, not philosophy, will explain the meaning of existence”, but his story is not scientific reduction but a synthesis. Of course, I totally disagree with him, but that’s beside the point here.

    brandholm: “As I argue earlier in the thread we are not about to eclipse intentionality with neuroscience, at least not much more so than we already have in the past.”

    Agree, totally. The brain has three parts.
    One, structure: the functional parts, the number of neurons, brain’s endocast (topology), etc.

    Two, the connections: even with the identical structure, the different neuron ‘connections’ during the neuron stampede will make these two completely different brains.

    Three, the imprint: for two brains with identical structure and identical connections, they will still be completely different brains if they have a different ‘imprint’, the burn-in. The ‘meaning’ of anything (external or internal) is interpreted by the ‘burnt-in’ pages. See http://www.prequark.org/inte001.htm . This ‘burn-in’ process becomes a great divide for the material-deduction of the brain staying at the yonder shore of intelligence and consciousness.

    I do appreciate that Scott brought up the issue of ‘intentional-cause-neglecting epistemology’ although I do not agree with his direction on the issue. In physics at least, the epistemology is thus far totally based on the ‘intentional-‘cause’-neglect’, that is, it totally ignores the issue of the ‘initial condition’. The big bang and the inflation are at best some processes ‘after’ that initial condition.

    This ‘intentional-cause-neglect’ can be a very smart strategy at a time when we had no clue what that initial condition could be. Now, the four-locks (which lock ‘this’ universe into its current shape) are known; that is, that ‘initial condition’ must encompass the keys for these four-locks. Now we have a clue and a direction, and the ‘post-intentional-cause-neglect’ epistemology is to address the issue of the ‘initial condition’ of this universe.

    Michael Murden: “The question Scott seems to be asking is whether philosophy is well suited to answer questions such as ‘what is the nature of the relationship between the human mind and brain?’ ‘How is it possible for human beings to create original mathematics?’ and ‘What is the nature of human language?’”

    Excellent points. All these questions will be answered when the ‘initial condition’ of this universe is understood.

  10. Brandholm: I’m glad you liked it, and I am very interested in pursuing this conversation. I’d be more than happy to consider anything you’ve written for a guest blogging spot on TPB, since this is one of the blog’s cornerstone concerns. My position hasn’t gelled on this topic the way it has on brain science, so I get where you’re coming from, and I concede that the ‘pollution’ metaphor is problematic (though still instructive). I wonder, though, if you’re taking the *accelerating* pace of change into account.

    Either way, you (or anyone else) can reach me at my author email: richard.scott.bakker@gmail.com

    I sometimes get inundated, so if you could add ‘SS’ to the subject heading, it would be a great help.

  11. OP=original post

    “What does this make of centuries of ethical reflection?”

    Insofar as these reflections assume individual beliefs and desires are causal, they are valuable only for their literary qualities. The vision of Handsome Lake or the revelations to Hong Xiuquan or the speeches of the gods to the heroes of The Iliad are no more, but no less, explanatory. Insofar as ethical reflections are rationalizations of moral practice, they have the same value as any history, supplying at the least material for error analysis. I mean “rationalization” in the sense of efforts to cohere principles and customs, to make consistent, to focus on the means of effective performance, to specify goals…not in the sense of “apologetics.” By and large though it appears to me that all views developed from the perspective that the Mind is dispositive must inevitably function effectively only as apologetics for the status quo, and are thus actively harmful in the long run. In one of his bolder, more honest moments, Hume said we should burn the Schoolmen. That’s not enough, burn it all.

    But I’m not sure why we should worry about the value assigned to the history of philosophy. I think you forgot to write “introspection” in place of “conscious reflection.” Introspection has failed to find knowledge about the physical world. The discovery that it also fails to discover knowledge about social existence may be dispiriting but shouldn’t cause existential panic. We can make progress in “positioning” our information, by interaction with others. We don’t intuit our larger, nonconscious context, we are challenged by others with their observations. Collectively we observe, generalize, hypothesize, test. Skeptics who deny the existence of knowledge, and antirealists who believe there is only opinion too, are left adrift I suppose. The religious never feel lost of course, but they never agree about where they are either. But for the rest of us, the scientific disenchantment of the ethical world could be a boon. Which is better? Believing that brutality in prisons is an ethical failing of the beliefs and desires of the guards and prisoners, and failing to change anything? Or determining that the situation causes behaviors and changing the situation?

  12. SciSal: “And I do think that your piece shows a recurring problem in these kinds of discussion, the confusion between ontology (yes, it’s all made of quarks, or strings, or whatever) and epistemology (what’s the best way to describe and understand phenomena?).”

    The ‘best way,’ I think everyone would agree, is the way that solves whatever problem we happen to be confronting as economically as possible. This is why I basically gave you the abductive list I did in my previous answer, to at least advertise, if not demonstrate, some of the problem-solving advantages that fall out of my positive approach. What I want to claim is that a heuristic neglect approach can solve a great number of problems on the cheap – that is, absent the groaning edifice of traditional intentional philosophy. A simpler and far more troubling ontology remains.

    But you’re right, this was no more than an advertisement. I’ll try to pull something together that focusses less on critical provocation, and more on clear, constructive presentation.

    In terms of ‘testable predictions’ there’s a variety of ‘neglect effects’ that leap out of my approach that I think, anyway, could provide the basis for an empirical psychological form of agnotology. I have a short (and rather dated, I now think) list here: http://rsbakker.wordpress.com/2013/07/29/cognitive-deficits-predicted-by-the-blind-brain-theory/

    This idea hit me while pulling together my introduction for my dissertation (on fundamental ontology) in 1999. I had decided to dive into neuroscience, confident that I could find some kind of empirical support for my ‘brilliant’ ontological (in the phenomenological sense) thesis. It ended up destroying my dissertation, and my mental health – until my hobby, fiction writing, suddenly landed me book deals around the world. Metacognition, I had realized, could not be anything like what philosophical reflection needed it to be. Its deliverances were the product of strategic scarcity, elements in specialized problem-solving regimes that simply did not include divining the nature of the intentional phenomena that provided the spine of my whole dissertation project. That was the end of my philosophy career, and the founding insight for what I then called the ‘Blind Brain hypothesis,’ a hypothesis that I think is clearly in the process of being confirmed. Far from problematizing that original kernel, the march of brain science has been filling it in, suggesting new ways to extend the insight.

    It’s no coincidence that I ended my piece with the question I did: it’s the primary question that falls out of that now old insight, what evidences intentional philosophy’s claim to have made it to Square Two? What do you think? Have I missed answers given in the posts above?

  13. Scott,
    The subject/object paradigm is a largely western model, while eastern philosophy tends to be much more contextual. It’s my suspicion this is a relic of the fact we were hunter gatherers up until the dawn of civilization and thus in competition with our environment, often as well as each other, while rice farming had evolved in the east, as a foundation to more structured civil development and so there was inherent cooperation with the environment, as well as in the larger community.
    Which, as with a lot of ideas with me, goes back to the idea of time, in that we think of the future as being spatially in front of us and the past behind us, since we are physically moving through and thus counter to that environment. While in various eastern and supposedly in some Native American societies, the past is considered in front, since both the past and what is in view are known, while the future is behind. This is actually much more physically and contextually correct, since we do see what has already happened and then this information proceeds past us, as we the viewer are part of our situation and not separate from it.

  14. labnut: Thus one person commented “Well, this unpleasant episode is almost over“. Fortunately, Massimo deleted this.

    Retroactive edits? Isn’t that a bit like rewriting history? I understand perfectly the need to avoid flame wars online, but why the need to pussyfoot around like this? And why the need to threaten repeat offenders with banning? With authors allowed unlimited posts and posters allowed five, why not some pressure on the other end for authors to respond to questions directly, without evasion and without driving readers nuts by beating around the bush or just flat out not answering (which several very competent readers have complained about)? Why allow an author to simply overwhelm a thread answering only questions that are convenient for them to answer? Isn’t there an implicit obligation here given what kind of venue this is to forthrightly address substantive questions put to you as an author?

  15. jarnauga,

    I’ll answer your questions as best as I can:

    “why the need to pussyfoot around like this? And why the need to threaten repeat offenders with banning?”

    Well, the short answer could be, of course, because this is my forum, my rules. A better answer is that a great part of the idea of Scientia Salon is to foster civil and constructive discussion, as opposed to the mudslinging, ad hominem attacks, and talking past each other that one commonly finds in other venues.

    “Why allow an author to simply overwhelm a thread answering only questions that are convenient for them to answer?”

    Because s/he is the author. Authors do not even have an obligation to address a single comment. They can write their piece, and let the discussion happen among the readers. That’s certainly what happens in larger venues, like the NYT blogs.

    I am grateful that most of my authors do take the time to respond to readers’ comments. In this particular instance, I did suggest to Scott to use the method that I normally use: instead of posting a large number of individual responses, pick selectively interesting quotes from commenters and post occasional long replies. Either way, he has a special privilege qua author. If you’d like to submit a full piece in response, I’ll be happy to consider it.

    cheers,
    Massimo

  16. Scott, I see you rest on there being no clear definition or application of subjective/objective to human awareness, and on intentionality being too weak to account for the subjective experience.

    1. By definition, an intact subjective creation within a brain, as the experience of awareness, is subjective. It is ongoing and from it we make assumptions about what is “out there” including “oneself” and “the world” and their interface. You do not appear to have accounted for the fact that an experience known only to you, intact and as an aspect of your anatomy, is by definition subjective. It’s basic, not hard to understand.

    2. On the other hand, that subjective experience assumes that the representation created in the experience by neural finalization of “stimulation from oneself interfacing a world” has objective reality beyond the subjective neural event. It’s just a subjective ongoing event of finalization in a brain, and it assumes that what is represented in the experience (oneself interfacing a world by moves to collect stimulation from a world) actually exists as objects. Thus, by definition, objects with an objective existence are assumed because it is always a subjective experience that assumes them – subjective at all times.

    3. Intentionality is content. What could be broader than that to deal with the representations of “self in world” appearing ongoing from neural finalization? Nothing; that’s why intentionality is valued and considered by many to be of use. In fact it is “what is represented” in the experience, and nothing more in this context. What is represented is “objects”, and they are represented in the experience of a “subject”.

    I made the point that no one currently knows how to objectify the subjective experience (reproduce it in a machine), but that’s no reason for pessimism, and no need to mess around with “subjective/objective” or “intentionality” with no sound basis. The readers who know Leibniz, Kant, and Schopenhauer will enjoy what I have written, and others could learn by reading their work. These are basic matters that regrettably too few people know or understand. But read it over again and see how it fits – to me, you are way off track, and these corrections are necessary. To me, what you write has no grounding in proper definitions, which is “square one” in terms of argument. There is a free resource that explains these issues: http://1drv.ms/1tnKM6f

  17. With all due respect, Massimo, aren’t those rules a bit Orwellian if we’re talking about retroactive edits and deleted posts after publication without any acknowledgement that this has been done other than mentions of them by others or a quote from someone’s deleted post on Three Pound Brain? Look, it’s your site, of course. It just seems a bit 1984-ish.

    But don’t authors have an obligation to readers, too, especially when they answer and simultaneously trash them elsewhere (not to mention the fact that authors need readers to…read their work)? And what of the dismissive comments made by the author himself in responses? Is that part of an author’s special privilege? Schlafly uses the word “silly” [“…it is silly to think that science is going to reverse all of our beliefs.”] which you warn him about [“And please don’t use words like “silly,” your comment barely made it through my filter…”] and yet the author gets to describe readers’ interpretations as “uncharitable” and their substantive criticism as follows: “All I’m asking for are claims that actually engage the piece, not droll dismissals of the kind that someone possessing a position so fantabulously powerful as your own should be ashamed of making.”

    Why the double standard? Because he includes a bravado escape provision? Viz: “If I sound contemptuous, I’m not. [except on my own site, of course] I’m the product of a far rougher internet culture, and I know that my online idiolect tends to chafe certain sensibilities. I apologize. I’m not trying to insult anyone, only provoke genuine responses to my argument.”

    Just trying to get some clarity here and not run afoul of The Filter.

  18. Just a quick follow up that might help you Scott, and where I actually agree with you in spirit, so to speak, is that theories of intentionality (as content, including “how” it arises, and “what” it is) are woefully inadequate in terms of what I have written directly above. Where will you read an uncluttered definition? It is subjectivity as an ongoing neural finalization that can only be a “representation” because it is a finalization event after stimulation (at eye, hand, and so on, and in thoughts in my view as well – the Libet point earlier). It is not an experience occurring at the eye itself. It is represented in the brain as being experienced there. We clearly have a subjective intact and personal event, and it clearly represents eye and world interfaces (seeing something) as objects, but this is all within the subjective experience and so we can only ever assume in the experience that the objects actually exist – with 99.99% certainty most of the time. I suspect your confusion with definitions stems from the lack of a proper understanding of intentionality in general. Anyway, I hope this helps you and others.

  19. labnut,

    I wish you would have addressed my last comment because I disagree that things are the way you think they are. In short:

    You, and the author of this piece, may overestimate the impact these considerations have on the way people live their everyday lives. Most people get their moral compass from the people around them and from trial and error, and they would not even understand the question of whether there is intentionality or not. Likewise, civilisations probably do not primarily fall because people stop believing in intentionality or become too scientistic or something like that; one suspects that resource limits, epidemics, the elite giving itself so many privileges that the state is starved of the funds necessary to protect itself against collapse, and pure chance play stronger roles, to say the least.

    Second, under the definition of machine you appear to use, humans are always machines because dualist free will would not operate any differently than materialist decision-making: it would either follow some mechanistic rules or be stochastic. An alternative has never been developed beyond evading the question of how it would work.

    Third, under my definition of machine – one that seems rather commonsensical to me – humans are only machines if they are purposely created by another intelligence; otherwise they aren’t.

    In other words, either the situation is bleak regardless of what science will do, or the situation is bleaker under theism anyway. But really most of us will shrug and get on with our lives.

  20. @rsbakker

    “what evidences intentional philosophy’s claim to have made it to Square Two? What do you think? Have I missed answers given in the posts above?”

    My personal view is that philosophy of mind is not doing well at explaining minds. I can’t say I really understand your position very well, but if you think that a lot of the problem lies in: A) the expectation that pure contemplation will provide us with anything like accurate information about how our minds actually operate; and B) a blindness to the fact that the theoretical structures we use to construct philosophical theories import folk-psychological assumptions that don’t jibe with reality — then we’re in agreement on that.

    If you believe that the way our minds process information – what they’re *geared* to do – makes it very difficult to contemplate the features of our own minds, then we’re in agreement about that too.

    Even if philosophy of mind reaching square two (and I don’t see a clear statement of the criteria for that above) disproves your thesis, it doesn’t mean that philosophy of mind being stuck at square one makes it true. Unless that’s your entire thesis. It doesn’t seem like it is.

    I don’t even know of a philosopher who has claimed to have reached a decent understanding of the mind, so I don’t know if there’s an answer to your question. I do think we’re making some progress here and there getting our heads around the problem, at least. Both Dennett and Deacon come to mind – and I don’t think it’s any coincidence that they both see scientific understanding as essential. And I think it’s really valuable to spend time coming up with novel concepts and frameworks for thinking about the mind, and why we’re running into such trouble trying to explain it.

  21. I learned a lot from the comments (including Scott’s answers, Michael Murden, John Smith/labnut)
    If I understand well, Scott’s position is half-way between mine and a theory of the mind rife with remnants of past philosophical bumbling from centuries long gone. (Said ancient conceptions having apparently a lot of dedicated enthusiasts.)

    A philosophical theory of the mind can only incorporate the science we now know for sure. Philosophy cannot contradict science. Naturally, science, as it progresses, devours yesterday’s philosophy.
    So what do we know for sure? Within skulls, neurology and more (glialology?): ideas and facts in “neurobiology”.

    Now biology’s fundamentals are solidly anchored (one can guess; guessing is one thing the philosophical method does best), with what is out there, and inside everything, Quantum Physics.
    Interestingly, most Quantum Physics is… undiscovered. Some will be baffled to read this: was not a Theory Of Everything (TOE) established by celebrity physicists? Actually TOE does not have a toe in mind theory (or a toe in anything practical, come to think of it).

    What about the “Standard Model”? As Scott mentioned it, I need to address that irrelevant detail. The Standard Model has nothing to do with the Quantum Physics pertaining to the brain. Strictly nothing. It’s a theory in High Energy Physics, and the brain is not about High Energy Physics.

    The Quantum Physics one needs for the mind is, unsurprisingly, in my opinion, the one needed for the Quantum Computer: and that physics is obviously a work in progress.

    All right. So a theory of the computing mind will have to be anchored in the Quantum, but is there a simpler level of the Theory of Mind informed by brain biology that one can already guess?

    Obviously (…) theories we make, and all we learn has to do with neuronal circuitry, if not individual neurons themselves, more or less impacted by glial activity.
    I tried to explain before that, for example “denialism” is a direct consequence of this elementary situation. So is the inertia of imprinting.

    However, everything indicates that individual neurons are immensely complex. The latest research shows that even an axon can be immensely complex and reactive along its length (according to how much myelin it has).

    Scott:”… metacognition is nothing like the reflective capacity that philosophers have presumed all these millennia.” I, of course, agree.

    If ideas are simply elements of brain architecture (say neural networks), the mind is like an enormous landscape full of complicated structures.

    And who is the architect? Consciousness, roaming around, and helping to establish new ones with pieces of the old ones. So metacognition itself has to correspond to brain metastructures (some of these are already known, namely the gateway neurons regulating blood flow, hence activity, in various parts of the brain).

    If mental complexity, ultimately, fundamentally, is anchored in Quantum Physics (as it has to be, I firmly believe, basically because there are not only indices, but no choice), one recovers most of what makes people special… And not at all like clockworks.

  22. jarnauga,

    “aren’t those rules a bit Orwellian if we’re talking about retroactive edits and deleted posts after publication without any acknowledgement that this has been done”

    Two things: first, this was the first time in a long while that I’ve done any (very minor) edits to any comments. In both cases these were long-time commenters with whom I have a personal relationship, and who I thought wouldn’t mind. Moreover, the alternative would have been my usual path: block the comment, wait until I got home (I can’t do it via mobile), email the authors with a request to do the edits themselves, copy/paste the original post so they wouldn’t have to, and then wait for the modified comment to show up and approve it. As you can see, it’s a lot of work.

    Second, you do know that typically media outlets do edit letters to editors, right? I was simply treating those comments in the same manner.

    “But don’t authors have an obligation to readers, too, especially when they answer and simultaneously trash them elsewhere”

    They do, and I’ll be more forceful in reminding them of it. But I just don’t have the time to police what my authors do on other sites, sorry.

    “Schlafly uses the word “silly” [“…it is silly to think that science is going to reverse all of our beliefs.”] which you warn him about [“And please don’t use words like “silly,” your comment barely made it through my filter…”] and yet the author gets to describe readers’ interpretations as “uncharitable””

    Uncharitable is not an insult in my dictionary, while silly is. Sorry, shades of gray.
