Freedom regained

Freedom Regained
by Julian Baggini

[This is an edited extract from Freedom Regained: The Possibility of Free Will, University of Chicago Press. Not to be reproduced without permission of the publisher.]

We’ve heard a lot in recent years about how scientists — neuroscientists in particular — have “discovered” that actions in the body and thoughts in the mind can be traced back to events in the brain. In many ways it is puzzling why so many are worried by this. Given what we believe about the brain’s role in consciousness, wouldn’t it be more surprising if nothing was going on in your brain before you made a decision? As the scientist Colin Blakemore asks, “What else could it be that’s making our muscles move if it’s not our brains?” And what else could be making thoughts possible other than neurons firing? No one should pretend that we understand exactly how it is that physical brains give rise to conscious thoughts and perceptions, but nor should anyone doubt that in some sense they do.

However, because we don’t yet understand the relationship between mind and brain, we don’t yet know how to talk about it. The words and phrases we reach for when talking about the mind are often inadequate and misleading. For instance, I and others sometimes talk about brains “causing” or “giving rise to” thoughts and perceptions. That suggests that brains are doing all the real work and that thoughts and perceptions are in some sense mere effects of neural causes, a view known as epiphenomenalism.

Do the scientific facts about brains require us to think of thoughts and actions in this way? I don’t think they do. The undeniable fact is that brains provide the material means by which conscious life is sustained. Without brains there can be no human consciousness. But it does not follow from this that we can explain all human behavior in neurological terms alone and that conscious thoughts contribute nothing to our actions. That is a much stronger claim, which goes against the evidence of experience.

Take a simple example. I shout to you “duck!” and you duck. Home in on the brain and it may well be possible to trace a line of cause and effect which describes only sound waves entering your ear, their translation into brain signals, and the further neural firings that lead to your muscles moving in such a way that you duck. We will not find embedded in any of this the meaning of “duck.” However, it seems deeply implausible to suggest that we can make sense of what happened here unless we accept that the meaning of “duck” had a vital role to play in the causal chain. If I had shouted “suck!”, “cheese!” or “jump!”, you would have reacted differently. We cannot then understand your behavior unless we ascribe some critical importance to the meaning of “duck.”

That can only mean one of two things: either the meaning of “duck” had no role at all to play in your action, or a purely physical description of what went on would not provide a complete account of why you did what you did, adequate to explain what happened. Given how implausible the first option is, we should be very careful before rejecting the second.

The thesis we are questioning here can be summed up as the claim that thoughts have no causal efficacy: they do not affect what we do. “Thoughts” should be understood very broadly here to include not just beliefs, but also desires, intentions and simply the way in which we understand what we see and hear, like injunctions to duck.

When people deny the causal efficacy of thoughts, they often do so on the basis of experiments that at most show that thoughts do not affect actions in certain very specific cases. To jump to the general conclusion that thoughts never affect actions looks like a remarkable example of rash generalization. The Libet experiments, for example, appear to show that conscious choice doesn’t determine when we choose to move a finger in a laboratory situation. Even if this is true, it seems too much to leap from this to the claim that, for instance, the belief that immigrants are taking over the country is not a reason why a person voted for a nationalist political party. An experiment which shows that “thoughts have no causal efficacy here” cannot show that “thoughts have no causal efficacy anywhere.” That would be like claiming that because a person’s religious belief does not affect their choice of soap powder, it doesn’t affect their choice of spouse or place of worship.

The analogy is perhaps less exaggerated than you might think because, as a matter of fact, many experiments that appear to debunk the role of conscious thought in action focus on very specific kinds of action that are not particularly reason- or thought-based. The philosopher Shaun Nichols offered me as an example a famous study by John Bargh, in which subjects who were made to read passages that included words associated with old age subsequently walked more slowly than subjects who weren’t. None of these people were aware that they were altering their behavior, or had any suspicion that what they had read was changing how they moved.

“Some people are shocked that people’s behavior can be affected so much by things they are unconscious of,” Nichols told me. But thoughts do not generally play a significant role in how we walk, unless we are deliberately acting or adjusting for some particular reason. So typically, if you ask someone why they walked more slowly than usual to an elevator, they don’t know. “But what if you asked them, why did you walk to the elevator?” asks Nichols. “It’s not going to be like, ‘Jeez, I don’t know, maybe it was to get out of the building?’ They know why they walked to the elevator. If somebody is at an airline gate, and you say, ‘Why are you here?’ they don’t say, ‘Gee I have no idea why I’m here.’” No experiment has ever shown that people’s beliefs have nothing to do with actions of this kind. So “if you’re going to draw big inferences about the nature of human decision making from studies about the foibles, it’s really important to keep in mind all of the things we do astonishingly well, so well that you could never publish an experiment because the editor would just say, ‘Well of course people know that they’re at the gate because they need to get a plane!’”

Work by the psychologists Kathleen Vohs and Jonathan Schooler also seems to show that thoughts do affect actions. Specifically, the belief that you have free will makes you act more morally, and the belief that you lack it makes you act worse. In two experiments, they found that subjects who had read a passage which “portrayed behavior as the consequence of environmental and genetic factors” cheated more on a subsequent task than those who had read a neutral passage. They also found that “increased cheating behavior was mediated by decreased belief in free will.” Others have found similar results. This seems as clear an example as any of a belief affecting action.

Perhaps the neatest and most powerful rejoinder to the idea that thoughts change nothing comes from the neuroscientist Dick Swaab, who dismisses free will out of hand as a “pleasant illusion.” Nonetheless, in his book We Are Our Brains he reports that “patients suffering from chronic pain can be coached to control activity in the front of the brain, thereby reducing their pain.” But hang on: if “we are our brains” how can we control them? His own example is evidence that it is far too simplistic to talk as though our brains are doing all the work and conscious thought is redundant.

For all the clever research showing how we are manipulated by unconscious processes, much of what we do is patently rooted in thoughts, reasons and beliefs. No credible scientific view of mind can force the conclusion on us that thoughts have no role to play in guiding our actions. How they do so, however, is not so easy to explain. One possibility is that consciousness is somehow a property of physical stuff, whether it is a part of a brain or a table. If this were true, it would be odd if only some elements, like carbon, had this property. Therefore almost everyone who believes consciousness is a property of matter is a panpsychist, believing that mind or consciousness is a feature of all physical matter. Mind is everywhere.

This sounds crazy. Surely stones don’t think? Well no, and most panpsychists don’t claim they do either. Only remarkably complicated physical structures like brains can think in any recognizable sense because thinking, as we know it, requires more complexity than the structure of the stone allows. Nonetheless, panpsychism does entail that there is some kind of trace of mind, some minimal subjective awareness, even in a pebble.

Many philosophers have ruled this possibility out because it seems that subjective awareness is just not the sort of thing that could ever be the property of brute matter. But maybe this is simply a lack of imagination. Matter may not be so brute after all and believing that it is may be as ignorant and prejudiced as believing that “brutes” like pigs and dogs cannot feel pain. The contemporary panpsychist Galen Strawson, for example, says that nothing in physics rules out the possibility that something physical can have experiences. “To claim to know with certainty that spatio-temporal extension entails non-experientiality is to claim to know more about space-time than is warranted by anything in science.” He accuses many materialists of being “false naturalists” in the grip of “the conviction that experience can’t possibly be physical, that matter can’t possibly be conscious.” Ironically, this is an assumption shared with Descartes’ dualism, which asserts that the world is made up of two different substances, matter and mind. Indeed, it is doubly ironic because as Strawson points out, “Descartes was at bottom aware that one can’t rule out the possibility that matter may be conscious. Many of the false naturalists, by contrast, have no such doubts.”

Strawson may be right about this. Weirdness is not, after all, a sure sign of falsity. As biologist J.B.S. Haldane famously said, “the Universe is not only queerer than we suppose, but queerer than we can suppose.” That may be so, but many, including myself, find it very hard to even understand what the panpsychist claim really adds up to. The choice, as Colin McGinn put it, seems to be between a view that is ludicrous and one that is empty. If it is the claim that stones think, it is ludicrous. If it is the claim that any atom could be part of something that does think, then it is empty, because this is what anyone who thinks brains are required for consciousness believes. So although panpsychism cannot be ruled out, it remains an explanation of last resort for the consciousness of physical beings.

A more promising alternative appeals to the idea of different levels of explanation. To take a mundane example, when I hit the “#” key on my keyboard, a “#” symbol appears on my screen. There must be some explanation of this at the very lowest, sub-atomic level, involving nothing more than chain reactions between electrons, neutrons and protons. That might appear to be the most fundamental explanation of all. But of course it isn’t the only one and for practical purposes it isn’t the best. Far preferable is the one that refers to the code written into the computer software. When I press the key, a digital signal is sent which passes through a program, resulting finally in a digital signal which “tells” the monitor which pixels to blacken out. To say that the “real” explanation is the sub-atomic one and that the existence of code does nothing to explain what happens would not just be wrong but perverse.
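The code-level explanation can be made concrete with a toy sketch. Everything here is invented for illustration — the scancode and layout tables are hypothetical, and real keyboard handling is far more involved — but it shows what it means for the “useful” explanation to live at the level of the code rather than of the particles:

```python
# Toy sketch of two "levels of explanation" for the same key press.
# SCANCODES and SHIFT_LAYOUT are made-up illustrative tables, not a real
# keyboard driver or operating-system API.

SCANCODES = {0x20: "3", 0x04: "a"}      # hypothetical scancode -> key name
SHIFT_LAYOUT = {"3": "#", "a": "A"}     # hypothetical shifted-symbol table

def symbol_for(scancode: int, shift: bool) -> str:
    """Higher-level explanation: the layout tables decide what appears on screen."""
    key = SCANCODES[scancode]
    return SHIFT_LAYOUT[key] if shift else key

# Low-level description:  "byte 0x20 arrived while the shift flag was set".
# High-level description: "the layout maps Shift+3 to '#'".
# Both are true of the same event; only the second explains why "#" and not "3".
print(symbol_for(0x20, shift=True))   # prints: #
```

The point of the sketch is that the table lookup is the analogue of “the code written into the computer software”: a complete sub-atomic trace of the key press would never mention it, yet without it the outcome is inexplicable.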

When it comes to our minds and behavior, there are explanations at the level of conscious thought, the biochemical brain, and of fundamental physics. If we took seriously the reductionist idea that the only true explanation of why things happen is to be found at the most basic, lowest level, then not even brain science would be “really” explaining behavior. Physics rather than psychology or neuroscience provides the ultimate reductionist account of why things happen.

We do not have to decide which of atoms, brains or thoughts provides the “real” explanation for what we do. We simply need to accept that there are different accounts we can give at each level, and which is most appropriate depends on what we are trying to understand or explain. This notion of appropriateness can be understood in a purely pragmatic sense. You could believe that in principle a physicist with powers of Laplacian omniscience could describe everything that a person has done on the basis of physical information alone. But because in practice this is never going to be possible, you might accept we still have use for the explanations of psychologists and neuroscientists.

However, there is increasing evidence that scientific explanations don’t work as neatly as this after all. The old reductionist paradigm was that the way to understand how anything works is to break it down and down until you get to the most fundamental processes. In other words, the complex whole can be entirely explained through the workings of the simpler parts. This has commonly been understood to imply the theoretical possibility of reconstructing from the bottom up as well as deconstructing from the top down: if you know what the atoms are doing you’ll know what the larger objects made up of them will do. But the Nobel-prize winning physicist Philip W. Anderson suggests this is a widespread mistake. “The reductionist hypothesis,” he says, “does not by any means imply a ‘constructionist’ one: the ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less to those of society.”

Science increasingly seems to be confirming the old adage that the whole is greater than the sum of its parts. For instance, you can look at how the brain works and in theory describe everything that goes on in terms of fundamental particles. But you cannot look only at the laws governing the behavior of particles and from that work out what will happen when they are arranged into complex organs like brains. The laws of physics do not predict consciousness, yet that is what the physical universe gives rise to.

To put it another way, systems behave in ways which cannot be predicted simply by knowing the behavior of the elements of the system. Systems acquire characteristics which their simple parts do not have. A swarm of bees can be deadly even though no bee in it is lethal; an orchestra can play a piece of discordant music even though each instrument by itself is playing harmoniously; five functional human beings can form a dysfunctional group. As another Nobel-prize-winning physicist, Robert Laughlin, put it, “what we are seeing is a transformation of worldview in which the objective of understanding nature by breaking it down into ever smaller parts is supplanted by the objective of understanding how nature organizes itself.”
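A classic, fully mechanical illustration of parts-versus-system behavior is Conway’s Game of Life. The sketch below is not from the book, just a standard example: every cell obeys one trivial local rule, yet the five-cell “glider” pattern travels diagonally across the grid — a property you could not read off from the rule governing any single cell:

```python
from collections import Counter

def step(cells):
    """One generation of Conway's Game of Life; cells is a set of live (x, y)."""
    # Count, for every grid position, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next generation if it has exactly 3 live neighbours,
    # or has 2 live neighbours and is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# The five-cell "glider". No cell moves; cells only switch on and off by the
# local rule. Yet after four generations the same shape reappears, shifted
# one cell diagonally -- a system-level property no individual cell has.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
assert g == {(x + 1, y + 1) for (x, y) in glider}  # same shape, shifted (1, 1)
```

“Travelling” here is exactly the kind of characteristic the parts lack: it belongs to the pattern, not to any cell, even though nothing beyond the cell-level rule is ever computed.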

This new understanding is known as complexity theory. “A complex system is composed of many different systems that interact and produce emergent properties that are greater than the sum of their parts and cannot be reduced to the properties of their constituent parts,” as the scientists Nicolis and Rouvas-Nicolis put it. Or, to take psychologist Michael Gazzaniga’s account of “emergent properties,” micro-level complex systems “self-organize … into new structures, with new properties that previously did not exist, to form a new level of organization at the macro level.” In “strong” versions of this theory, “the new property is irreducible, is more than the sum of its parts, and because of the amplification of random events, the laws cannot be predicted by an underlying fundamental theory or from an understanding of the laws of another level of organization.”

To give a clear example, if this is correct, quantum physics is more fundamental than Newtonian physics, but Newton’s laws can’t be torn up and replaced by quantum ones. “Classical properties, such as shape, viscosity, and temperature, are just as real as the quantum ones, such as spin and nonseparability,” says Gazzaniga.

Gazzaniga is interested in how complexity provides a way of understanding how minds operate in ways that can’t be either predicted or understood by only studying brain processes. Mind and consciousness are “emergent properties” which arise out of nothing more than brain processes, because the complex organization of these processes creates new properties which are not found at the fundamental physical level.

This explains how it can be that beliefs, desires and intentions can actually change things, without us having to think that they are mysterious, non-physical things. “Mental states that emerge from our neural actions do constrain the very brain activity that gave rise to them,” explains Gazzaniga. “Mental states such as beliefs, thoughts, and desires all arise from brain activity and in turn can and do influence our decisions to act one way or another.”

This is one of the most important scientific facts we need to bear in mind when thinking about free will. Too often it can seem that, if brains are the engines of thought, then thoughts themselves cannot change anything. Complexity theory shows us how this can be false, without the need to postulate any strange, weird, supernatural or non-physical will or soul. It shows that the idea that thoughts, beliefs and desires can cause things to happen is not outmoded metaphysics, but bang up-to-date science.

I’m fairly sure that the right way to understand the apparently self-evident fact that thoughts do change what we do will involve thinking about levels of explanation rather than adopting panpsychism. I’m less sure whether that will be because we cannot in practice do without explanations at the psychological level or because we cannot in principle explain everything at the most fundamental physical level. My bet would be on the latter, or a third possibility I can’t even imagine. However this debate resolves itself, we already understand enough to see that accepting that in some sense our thoughts and actions are only possible because of our brains should not trouble us. Neuroscience is filling in details of the naturalist picture, but there is plenty of room for human agency in it.

_____

Julian Baggini is Founding Editor of The Philosophers’ Magazine. His books include Welcome to Everytown: A Journey into the English Mind, What’s It All About?: Philosophy and the Meaning of Life, the bestselling The Pig that Wants to be Eaten, Do They Think You’re Stupid?, The Ego Trick and The Virtue of the Table: How to Eat and Think, all published by Granta Books.


94 thoughts on “Freedom regained”

  1. This is my 5th post in the thread, so I won’t be able to respond to anything else. But I did want to clarify how I view determinism and free will.

    Determinism is the belief (thus the -ism) in the reliability of cause and effect. Each thing that happens is caused to happen by something. And, given the same relevant conditions and forces in play, the thing will reliably happen again. Those conditions and forces are the causes of the event.

    All of the sciences depend upon the universe being “deterministic”. Upon this they base their hope to discover these causes and give us some control over the events that affect us. For example, finding the causes of disease may enable us to prevent or cure them.

    Our knowledge is seldom perfect. And our tools are often lacking to observe or deal with some things, like the spin of an electron. I don’t think we have a tool yet to poke one and make it spin the other way. But the spin of a specific electron is unlikely to affect us.

    On the other hand, at a more macro level, we have learned how to bang whole batches of electrons into one end of a copper wire and get another batch to come out the other end. And we’ve constructed a ton of useful electrical appliances by controlling batches of electrons.

    The crystalline arrangement of copper atoms aids the predictability of this behavior. In other materials, we cannot count upon the electron producing such a reliable effect without bumping into something else and going we-don’t-know-where.

    Which brings us to uncertainty and unpredictability. We use concepts like “randomness” and “probability” and tools like statistical analysis to deal with events where the causes may be ‘theoretically’ knowable but for all practical purposes are unknown.

    A true-believing determinist, like myself, believes that, even when the outcomes cannot be reliably ‘determined’ (in the sense of ‘known’ or ‘predictable’) that they are still causally ‘determined’ (in the sense of ‘reliably caused’ and ‘inevitable results’ and ‘theoretically’ knowable).

    A coin flipped in a vacuum (and perhaps shielded somehow from gravitational fluctuations) by a sufficiently accurate device may always rotate a precise number of times to always come up heads. And the causes of unpredictable quantum effects are theoretically knowable, even if we don’t know them yet.

    Within this deterministic universe, everything is inevitable. One of these inevitable things was the emergence of life. Another was the evolution of biological organisms with an effective array of internal and external sensory systems, and a sufficiently advanced neurological system to allow them to experience, remember, imagine, experiment, evaluate, and choose alternative behaviors to satisfy biological needs by manipulating themselves and their environment.

    One of these species is us. And we introduce into this universe a source of causation based on more than just instinct, but upon complex and imperfect mental performances motivated by purposes, both real and imagined, and behavior which is sometimes predictable and sometimes full of surprises. We call some of these mental processes ‘thinking’ and ‘choosing’. And they are as real as what we call ‘standing’ and ‘walking’.

    And when we decide for ourselves what we will do, rather than someone else forcing us to act against our will, we call it ‘free will’ — an inevitable product of a deterministic universe.


  2. Disagreeable Me comments on the OP:

    “I agree with your conclusion that there is plenty of room for human agency. But what should be made explicit is that an account of agency as arising from the mechanical operation of a very complex arrangement of simple structures such as neurons is also compatible with the agency of machines such as computers. A lot of people who believe in human free will are unhappy to accept that and feel compelled to appeal either to mysticism on one hand or quantum weirdness on the other to give us something extra special above and beyond simple cause and effect. I hope we agree that this is unnecessary.”

    All of the comments so far have been insightful and represent, often, a perspective from a recognizable community. Many, from all the different camps, seem to be a little careless with the facts, however. This is a problem for everyone and is a great obstacle in our understanding.

    The above is an example of a common (universal?) bias: underestimating the complexity of everything. To say that neurons are ‘simple structures’ cannot be true. They could only seem simple to someone who knows little about them.

    A neuron may be nothing less than a small (collection of) quantum computer(s). The neuronal cell body is stuffed with microtubules apparently containing quantum critical proteins that could function like quantum computers. This is still highly preliminary but does suggest the vast amount of information that would be required to understand brain function and the processes of life. We are just beginning to scratch the surface. Kauffman/Hameroff 2015 (http://phys.org/news/2025-04-quantum-criticality-life-proteins.html) It is still impossible for us to fully understand what utterly amazing creations we are. We are not of our own making and so we have to slowly uncover our own miraculous design and structure.

    The same is true for bacteria. How could they be simple when we have almost no idea of how they actually work?

    The idea that human consciousness is produced by a network of ~100 billion neuronal quantum supercomputers is simple enough; understanding what that means might also be impossible.


  3. This is in response to Johannes Lubbe.

    I think that you are an automatonist! (Prejudiced against automatons.)

    “The surprising result is that there is no line that divides automatons from conscious entities…”

    There is no evidence that automatons aren’t or can’t be conscious.


  4. Hi dantip,

    Can you explain what you mean by this? First, “embedded” and “encoded” sound very similar, could you explain how they are different for you?

    I didn’t mean to distinguish between them. My use of “embedded” was simply an echo of the sentence I was disagreeing with, and my use of “encoded” was intended as a more specific clarification. Thus I was saying that the “meaning” of the word “duck” was encoded in the neural network (and I’d also find it pretty peculiar that anyone would disagree. What other options are there?).

    Hi Aravis,

    Reductionism, conceived in any clear, concrete sense, is a dead letter. […] This is only a problem for people who are still enthralled by unity of the sciences fantasies. For those of us who are happy to have scientific and theoretical pluralism, …

    By rejecting the joined-up-thinking approach to such issues you deprive yourself of one of the most powerful tools for understanding the world. Worse, the way that you frame the whole problem actively creates artificial divides and so more or less prevents a better understanding of things like “meaning”.

    Hi Marko,

    Strong emergence can, does, and must appear in nature.

    As was discussed in comments to your essays, you have not shown that strong ontological emergence is required. Your arguments were purely about epistemology, and you have not shown that weak emergence is insufficient for ontology.

    Hi labnut,

    I plainly do possess the ability to perform novel actions and create novel thoughts that were not predetermined by my neuronal patterns. In other words, I plainly and obviously possess free will. […] There are three reactions to this impasse. 1. Deny there is a problem. That is free will denialism.

    You seem to have caught Socratic disease! 🙂 Arguing by labeling opposing positions “denialism” isn’t all that convincing.


  5. Marko,
    Despite appearances, nature is *not* deterministic. Not even on ordinary macroscopic scales.

    As always, I value your contributions. But I need clarification of the above statement.
    What does it mean if some result R is not reliably the deterministic effect of cause C according to some law L in some environment E?
    As far as I can see, determinism can be stated as follows:

    R ~ L(C(E)), where R is the result, L is the relevant law(s), C is the cause and E is the environment. (I am simplifying as much as possible for clarity.)

    Determinism states that the same R will always hold for a given law (L), cause (C) and environment (E).
    If determinism does not hold in certain cases for a given L, C and E, this can only mean that the operation of the law (L) has (1) changed, has (2) been overridden, or is (3) not applicable, given that the cause (C) and the environment (E) have not changed. The following possibilities exist:

    0. Determinism is always true.
    In other words, R ~ L(C(E)). For a given L, C and E, R will always obtain.

    1. The law (L) has changed.
    This has never been known to happen. So far, the stability of the laws of nature seems absolute. They are invariant in time and space (pace Smolin).

    2. The law has been overridden or decoupled.
    Something has supervened to change the operation of the law, although the cause (C) and the environment (E) have remained the same. It is hard to imagine how this could happen.

    3. The law is not applicable.
    If no law applies, the result must be random, although much apparent randomness is simply the uncomputable working of determinism.

    Turning now to free will: free will defies determinism, so (0) cannot be true.
    Randomness is a poor explanation for free will since the products of our thoughts are anything but random. So we must discard (3). We have never known laws of nature to change, so we must discard (1).

    That leaves (2): the law (L) has been overridden or decoupled. But what does that mean? It can only mean that an alternate law (L1) is operating in its place, since randomness cannot explain structured thought.

    But this is still determinism, although of a different kind. In other words
    R ~ L1(C(E))
    where L1 represents the laws that govern thought.

    So, for free will to exist, there must be special laws of thought (L1), which supervene on or decouple the operation of certain other laws, allowing thought to operate independently of neuronal patterns.

    We have no idea how this can happen. David Chalmers supports this idea when he claims that we cannot understand consciousness until we uncover the special laws of nature that govern consciousness.

    Free will denialism makes the fatal error of assuming that we know enough to rule out free will. That is an arrogance fueled by ideological prejudice. A great deal more science needs to be done. Free will is a reality so we must bend our minds to uncovering the laws of nature that make this reality possible.


  6. Coel wrote:

    “The way that you frame the whole problem actively creates artificial divides and so more or less prevents a better understanding of things like “meaning”.”

    ——————————-

    Given that in the 47 times we’ve had this conversation on Scientia you’ve never provided one millimeter of greater understanding of “things like meaning” via your “joined-up-thinking” approach, I’ll stick with Wittgenstein, Austin, Ryle, and co. Thanks.

    ——————————-

    As everyone already knows — because we’ve had this discussion over and over again — I don’t think that free will, consciousness, or intentionality are scientific problems and thus, they are not going to be better understood via scientific theorizing. Contra Coel, however, this does not mean that we cannot learn a lot about these things. Within the artistic, humanistic, and social scientific frameworks a great deal of understanding can come about these subjects. The fact that it is the sort of understanding that scientismists aren’t interested in does not mean it is not valuable. Indeed, it is their conception of understanding that is impoverished — because tunnel-visioned — not the humanist’s.

    Joyce and Faulkner will teach you more about consciousness than any neuroscientist. And Orwell will teach you a lot more about freedom.


  7. Hi Johannes Lube, you make interesting points regarding the spectrum of behaviors and cognitive capacities found throughout all living organisms. This was raised at a seminar with Dan Dennett, and he seemed to agree that consciousness could be conceived as a continuum. However, it was not clear if you are suggesting that the mechanisms of bacterial sensing and response fully capture what is happening at the level of human (or other sufficiently complex animal) consciousness. I would disagree with that since the level of organization found in the brain (or neural networks) provides for more functions than found in less complex, particularly single celled, organisms.

    Hi Marvin Edwards, I’m glad you’ve been posting since I don’t have a lot of time. Very nice responses.


  8. Hi Coel, by studying mental activity from brain cells all the way down to quantum levels you would never identify the word “duck” or its meaning (to certain English speaking organisms). All you could find are causal relationships between external stimuli (sound frequencies) and chains of physical responses.

    To determine that the frequencies were interpreted as a unit of language, that it would be experienced as “duck”, and that it means a person should bow down quickly would require information from a completely different level than the underlying mechanics of cognition and response.

    This is where reductionists have lost perspective. They assume the knowledge we manifestly have at the bio-social level in order to claim everything can be understood by processes seen at lower levels. But meaning is not accessible from pure mechanism.

    It is by connecting observations between levels that we start understanding how lower level mechanisms enable higher order phenomena. Some mechanistic predictions might be possible without information about/from the higher level, but it would be very limited and without capturing meaning (since the higher order entities are left undefined).

    In this sense I believe I agree with Aravis that language at one level is basically incompatible with understanding at another. The idea I take as a compatibilist is that both have their uses during investigations at each level, and by careful comparison some connections may be made between the two.

    Perhaps Aravis would disagree, but I believe it would be possible to point at specific neural activity and say: “here is the feeling of pain”, “here is the decision to raise one’s arm”, “here are the networks enabling conscious activity”. It will require careful observations and mapping between reports at mechanistic and personal levels. And I believe this does have value (scientific and other).

    One can of course use this information to disrupt or impose thoughts and actions on others through direct physical action on the brain. That would not indicate superiority of mechanistic explanations, or mean the person had no free will, it just means free will can be undercut using more direct means than pointing a gun at them.

It is important to understand that using a purely reductionist approach would hinder our ability to understand coercion at a higher level: what a gun is, much less how or why it (or the very thought of one) could get a person to act opposite from their normal interests… and what others might feel about it. That’s when science leaves the picture and other forms of understanding/exploring the human condition become paramount.


  9. Julian,
    Your Post mentions Freewill only briefly at a couple of places. Mostly it seems to be substantiating the premise that the mind is where freewill arises and that this also is at a level which is above “the biochemical brain”. Do you really mean that consciousness is NOT an intrinsic part of (different and separable from) biochemical brain activity? Because in that case where do you suggest this mind exists?

Is it not probable that our whole conscious thoughts and decisions are only part(s) of our total brain activity? Possibly the tip of an iceberg, above and supported by unconscious activities which go on continuously, to a greater or lesser extent, from early life until death. This tip of conscious activity can then cause action in other part(s) of our brain and/or cause bodily response(s), in the same way as, bodily, my hand can scratch my nose. In which case freewill’d thinking once again becomes a biochemical-brain-level activity, which is a physical process, and you are back at Square 1 with Freedom Lost.

I am an Incompatibilist but, unlike many/most people, both lay and expert, I do not think it negates moral responsibility; in fact, IMO, Determinism has little, if any, moral relevance.

    Firstly, is the hard determinist ever affected by the knowledge that, when reasoning out a decision, he cannot do otherwise than what is inevitable?
I’m not! I probably behave, trying to forecast the future with a conscious intent, exactly as if I were a libertarian. This is because I do not know, cannot know, what that inevitable Determined decision IS, only that I must make it.

    Secondly, I think there are no Causes. In our short existence we have only proven evidence of Effects…
    Determinism is …effect => effect => effect => effect => effect => effect… [The philosophical puzzle of a First Event is as unreal as Nothing or Infinity, conceptually, linguistically possible but for which we have no sensory evidence.]

    Humans have evolved into a highly communal species, the larger the groups the greater interdependence of their members and the more important any behaviour affecting any/all others becomes. I suggest this is synonymous with behaving with moral responsibility. Whether human behaviour is free or determined hardly matters, so long as its overall effect is survivally advantageous intra-special behaviour.


  10. Coel,

    As was discussed in comments to your essays, you have not shown that strong ontological emergence is required. Your arguments were purely about epistemology, and you have not shown that weak emergence is insufficient for ontology.

    Yes, that’s true. But for the purpose of discussing free will, it is epistemology that matters. Namely, for the sake of the argument, I could introduce some ontology which contains (as primitive concepts) several gods, human souls, material world, heavy dualism, etc. Given such an ontology, I could arguably claim that free will exists, and that it is weakly emergent from the primitives.

    But such an argument is vacuous, because you will then probably complain that we have no epistemological basis to support such an ontology. And while I would agree with you in that complaint, the complaint itself is a game-changer — we are now discussing epistemology, not ontology. And epistemology suggests that there is strong emergence in nature, bringing us back to my claim that — as long as we have no detailed brain model to rigorously prove weak emergence of free will, strong emergence remains a viable option, impossible to exclude on scientific grounds.

    As I noted a few threads back, ontology is and will always remain underdetermined by science. It is therefore useless to use any particular ontology as an argument for/against free will. All we really have at our disposal is scientific knowledge, i.e. epistemology.

    Labnut,

    Given your list of choices, I’d say that the correct one is

    3. the law is not applicable.
    If no law applies the result must be random.

    This is because laws of physics cannot predict measurement outcomes, but only probability distributions for outcomes. Outcomes themselves are random, as is commonly phrased. Though note that the word “random” is overloaded with meanings, a more precise term would be “uncomputable”.

    Randomness is a poor explanation for free will since the products of our thoughts are anything but random. So we must discard (3).

I don’t think so. Our brains may well have (and actually most probably do have) a random component (an amplification of “quantum randomness”). Actually, I’d be surprised if they didn’t, given the complexity of the brain structure. This randomness provides the seed for ideas that appear in our consciousness, and it is a crucial component of things like intelligence, lateral thinking, creativity, etc. On the other hand, the deterministic component of our brain processes is tuned (through evolution) to filter away most of the “useless noise” and keep only a fraction of random events, giving rise to “Eureka!” moments and “Why didn’t I think of that?” moments. The whole thing is described in a bit more detail in the two-stage model of free will.
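To make the two-stage idea concrete, here is a minimal toy sketch (purely illustrative; the function name, the candidate pool, and the scoring rule are all invented for this example): a random stage proposes candidate “ideas”, and a deterministic stage filters them against a fixed preference.

```python
import random

def two_stage_choice(n_candidates=1000, seed=None):
    """Toy two-stage model: random generation, deterministic selection."""
    rng = random.Random(seed)
    # Stage 1: an indeterministic source proposes raw candidate "ideas"
    candidates = [rng.random() for _ in range(n_candidates)]
    # Stage 2: a deterministic criterion filters the noise, keeping the
    # candidate closest to a fixed preference (0.7 here, chosen arbitrarily)
    return min(candidates, key=lambda idea: abs(idea - 0.7))

print(two_stage_choice(seed=42))  # reproducibly close to 0.7
```

The point of the sketch is that the same random inputs with a different deterministic filter would yield a different “choice”, which is the sense in which the deterministic stage, not the noise, does the selecting.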

That said, your option (2), overriding of laws, is also a possibility. We don’t know the ultimate theory describing the evolution of a human brain, and I can certainly imagine several new matter fields (call them “dark matter”, but *not* in the usual astrophysical sense) which couple noticeably to ordinary matter only if a certain complexity of matter structure has been reached (like a biological brain). Such theories are notoriously hard to rule out, and they support the ideas of David Chalmers. I’m even somewhat sympathetic to this option as well, and have been toying with constructing a few models.


  11. Hi dbholmes,

    …. by studying mental activity from brain cells all the way down to quantum levels you would never identify the word “duck” or its meaning …

    Yes you would! Information (such as words) is encoded as patterns in the neural network. “Meaning” is linkages between bits of information (how patterns of neural activity affect other patterns of neural activity).
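For what it’s worth, here is a deliberately crude toy caricature of what “linkage between patterns” could mean (every pattern and the linkage table are invented for illustration; a real network would learn these rather than have them written down):

```python
# Each "pattern" is modeled as the set of active unit indices; "meaning"
# is modeled, very crudely, as the linkage from one pattern to the next.
DUCK_SOUND = frozenset({1, 4, 7})  # pattern evoked by hearing "duck"
CROUCH     = frozenset({2, 5})     # pattern for the crouch response
MOTOR_CMD  = frozenset({9})        # pattern driving the muscles

links = {DUCK_SOUND: CROUCH, CROUCH: MOTOR_CMD}

def activate(pattern):
    """Follow linkages until no further pattern is triggered."""
    chain = [pattern]
    while pattern in links:
        pattern = links[pattern]
        chain.append(pattern)
    return chain

print(activate(DUCK_SOUND))  # sound pattern -> crouch -> motor command
```

On this picture the “meaning” of the sound is exhausted by where the activation reliably leads; whether that exhausts meaning in the full sense is exactly what is in dispute in this thread.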

    This is where reductionists have lost perspective. They assume the knowledge we manifestly have at the bio-social level in order to claim everything can be understood by processes seen at lower levels.

All “knowledge that you manifestly have” is encoded as patterns in the neural network. Your understanding of humans, of the English language, all of it is in there.

Now, of course, in order to see a *pattern*, you need to take an overview of the neural ensemble, and not just look at one neuron at a time, but are you seriously suggesting that thinking and understanding are anything other than neural activity? If so, what?

    But meaning is not accessible from pure mechanism.

    We nowadays even have mobile phones that can listen to speech, understand its meaning, and then act on the information! This is not mysterious!

    If the reply is that that is not “real” meaning and not “real” understanding, then what is missing? The sprinkling of magic fairy dust? The “wetness”? What? We humans are simply doing a more complex version of what that mobile phone is doing.

    Important to understand, is that using a purely reductionist approach would hinder our ability to understand coercion at a higher level …

    This is a common misunderstanding that a “reductionist approach” means ignoring the high-level description. It doesn’t. It means tying the high-level and the low-level together. You don’t lose any part of the understanding, you only gain.

    Hi Marko,

    ontology is and will always remain underdetermined by science.

    You can indeed always add on to your ontology orbiting teapots, Invisible Pink Unicorns, sprinklings of magic fairy dust, or whatever. But the onus is very much on the person arguing for them.

    Hi Aravis,

Within the artistic, humanistic, and social scientific frameworks a great deal of understanding can come about these subjects. The fact that it is the sort that scientismists aren’t interested in does not mean it is not valuable.

    You misunderstand “scientismists”. They don’t think that high-level, humanities-level understanding is not interesting and not valuable, rather, they seek to enhance the interest and value by properly understanding such things, linking them together and seeing them as part of a glorious ensemble.

    it is their conception of understanding that is impoverished — because tunnel-visioned — not the humanist’s.

Well, I find your view on this pretty impoverished and close to anti-intellectual, with your refusal to ask how these things arise from the underlying biology and physics. If you personally are not interested in those questions, then OK, but declaring that there are no answers is “tunnel-visioned”.


  12. (Apologies to David Ottlinger for not heeding his advice)

    Coel,

    “Yes you would! Information (such as words) is encoded as patterns in the neural network. “Meaning” is linkages between bits of information (how patterns of neural activity affect other patterns of neural activity).”

“All “knowledge that you manifestly have” is encoded as patterns in the neural network. Your understanding of humans, of the English language, all of it is in there.”

    “Now, of course, in order to see a *pattern*, you need to take an overview of the neural ensemble, and not just look at one neuron at a time”
    ____________________________________________________

First of all, “meaning” can’t simply be identical to linkages between bits of information relayed between neural networks. This is far too broad a criterion. There are plenty of neural networks that relay information to one another that we intuitively don’t think have anything to do with meaning (unless you are using an incredibly broad understanding of “meaning,” which would make your claim trivial). If a neural network has some kind of “meaning,” or reference, it must *at least* be directly causally connected to the world in a reliable way, or something like that. Not all neural networks have this direct causal connection. Please see the philosophical, psychological, and neuroscientific research on this.

Your second quote is essentially just a reassertion, without support, of your claim that “information is encoded as patterns in the neural network,” so I will leave it there.

As for your third quote, you seem to be saying (in conjunction with your second quote) that all knowledge we have is encoded in patterns of neural networks, where we must take an overview of neural ensembles in order to extract meaning. (All of this, in my opinion, was rather gestural/handwavy and vague.)

This is just not true (unless once again you are making the trivial, analytically true statement that in order for something to be considered a neural pattern, as you define it, there have to be multiple neurons interacting). Single-cell recordings suggest that individual neurons play important roles in and of themselves. For example, we have an individual neuron which is responsive to Jennifer Aniston’s face, another individual neuron responsive to the World Trade Center, etc. In other words, it looks like an individual neuron can refer to objects (at least on an informational indicator approach, by which, just as smoke represents/refers to the fire which caused it in virtue of carrying information about the fire, so too neurons represent/refer to external objects in virtue of being caused by them and thereby carrying information about them). We don’t always have to look at complex neural networks to find “knowledge” or “meaning.” (Once again, all very ambiguously stated on your part, which will most likely result in a response from you which purports to alter or finagle your definitions to make what you were saying earlier seem coherent.)
    _____________________________________________________

    “We nowadays even have mobile phones that can listen to speech, understand its meaning, and then act on the information! This is not mysterious!
    If the reply is that that is not “real” meaning and not “real” understanding, then what is missing? The sprinkling of magic fairy dust? The “wetness”? What? We humans are simply doing a more complex version of what that mobile phone is doing.”
    _____________________________________________________

    This is a very problematic thing to say. You are assuming that “understanding” for these phones is the same as “understanding” for people, and that it would be ridiculous to think that there is something more to understanding than the sense in which the phone understands things. I strongly disagree with both claims.

One way to make this clear is to consider one way to easily fool computers (indeed, this is a technique frequently employed in modern-day Turing tests by judges). Ask the computer, “is a rat furniture?” The computer will have no coherent response to give. Instead, the computer will have to begin answering with a series of default “back-up” answers (answers given when the computer doesn’t have an answer to the question). The reason for this is that the computer cannot categorize objects in the same way that we can. It cannot recognize that rats aren’t the kinds of things that are furniture. In other words, it recognizes that the syntax of the question is completely permissible, but it does not get the semantics. Go ahead and try this with something like Siri on your iPhone; you will see that Siri falls back to a default set of answers which can be triggered by asking her all sorts of questions which she doesn’t know how to answer.

So, yes, brains, like phones, are made up of physical stuff and might have some of the same input-output relations, but brains clearly have some fundamentally different functional properties from phones: brains can categorize and phones cannot. So yes, brains are more complex than phones, but they are more complex in the sense that they have fundamentally different kinds of operations being performed which we don’t yet understand; they are not more complex in the sense that they perform the same operations as phones (which we do understand), just on a larger scale.


  13. It has been suggested that determinism is the only game in town. But there really is no evidence for determinism.

    Each time it is brought up I ask about this scenario where I have my finger poised above the screen of my phone. It seems to me that I can press “OK” or “Cancel” but the determinist would say that this is an illusion and that at least one of these actions is impossible.

    Again I ask, how long has it been impossible (a ballpark figure would do) and how do you know?

    Someone has suggested that one could have moral responsibility under determinism, but that is really just assigning a new meaning to the phrase “moral responsibility”. Under the normal usage it would be absurd to say that we have moral responsibility for an event which we never had the slightest possibility of preventing.

    Of course everybody is free to use the phrase “moral responsibility” as they see fit as long as they don’t suggest that everybody must share their own meaning.

    But is there really no difference to the way we would think about morality if determinism was true (as someone has suggested)?

    If I knew determinism was true then I would not know what I will do in the future, but I would know that whatever I will do is something I could never have prevented.

    Consider that I have my finger poised over “OK” and “Cancel” as above and one action will make me immensely rich but cause significant harm to some people I will never meet. The other will result in financial disadvantage to me but prevent that harm. Here is one way that moral decision making would differ:

Under libertarian free will it would be true to say: “If I press OK then in the future I will hear of people suffering and know that I could have prevented it.”

Under determinism that would not be true; rather: “If I press OK then in the future I will hear of people suffering and know that there was nothing I could ever have done to prevent it.”

    This may or may not be influential in your decision, but it is not true that the reasoning must be the same in either case. It is up to individuals to decide if they feel guilty for things they could not possibly have prevented.

    Also, “determinism” is not the same as “reliable cause and effect” as someone has suggested, there can be perfectly reliable cause and effect in a system which is not deterministic.

    And again, compatibilism – I still don’t know what it means.

    Aravis once detailed the Wittgensteinian approach to free will and Coel said that this was compatibilism. And when I detailed my understanding of the term libertarian free will, Marko said this was compatibilism.

I have heard some argue that under compatibilism the matter of determinism/indeterminism is not relevant, but then later say that there could be no agency at all without determinism.


Hi Dantip – I’m sure Coel can answer for himself, but your response is a mixture of patch-ups for features he couldn’t address in 500 words, and a focus on the wrong property. There are now numerous examples of computer systems that can outcompete humans in categorizing the external world de novo, i.e. purely (unsupervised) reinforcement learning, or in a mixed way (you might like to wade through Schmidhuber’s review http://arxiv.org/pdf/1404.7828.pdf).
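As a minimal illustration of what unsupervised categorization means (toy data and function name invented for this example; none of the systems in Schmidhuber’s review are this simple), a plain k-means loop discovers two categories from points that carry no labels at all:

```python
import random

def kmeans_1d(points, k, iters=20, seed=0):
    """Minimal 1-D k-means: categorize points with no labels given."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest current center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]  # two unlabeled "kinds" of thing
print(kmeans_1d(data, 2))  # centers settle near 0.15 and 5.03
```

Nothing in the code is told what the two kinds are; the split falls out of the data, which is the (limited) sense in which such systems categorize de novo.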

Your example of a grandmother neuron is particularly inapt, given that these are sitting at the top level of a deep hierarchical network – again we can point to such specialized nodes in artificial neural networks that are loci for meaning, i.e. physical phenomena in the “neuron” are reliably correlated with things out there, with this association being formed via a standardized but general-purpose approach. Their function of “long-term memory” requires the rest of the system.

    The same author has a couple of other interesting papers:
    PowerPlay: Training an Increasingly General Problem Solver by Continually Searching for the Simplest Still Unsolvable Problem
implementing his ideas about formal theories of curiosity, creativity, fun and intrinsic motivation, and a slightly different Gödel machine that “rewrite[s] itself in arbitrary fashion once it has found a proof that the rewrite is useful according to a user-defined utility function”.

    One can dismiss this saying that Good Old Fashioned AI was a failure, but the rapid recent improvements in performance on these types of cognitive task using these approaches are pretty spectacular.


  15. Hi DavidDuffy

    “I’m sure Coel can answer for himself, but your response is a mixture of patch-ups for features he couldn’t address in 500 words, and a focus on the wrong property.”
    ______________________________________________________

    Yes, it is possible I was focusing on the wrong property, though I am still unclear from both your post and his what the right property to focus on is. This is part of why I said his post seemed very hand-wavy.

“Your example of a grandmother neuron is particularly inapt, given that these are sitting at the top level of a deep hierarchical network – again we can point to such specialized nodes in artificial neural networks that are loci for meaning, i.e. physical phenomena in the “neuron” are reliably correlated with things out there, with this association being formed via a standardized but general-purpose approach. Their function of “long-term memory” requires the rest of the system.”
    __________________________________________________

    Unfortunately I am really not sure how this addresses my previous claims about single neurons. Sure, their function in long term memory may require the rest of the system, but I really don’t see why long term memory matters for the point I was making about reference.

    Assume you didn’t have long term memory at all. Even in this case, neurons responding to various features in the world could be referring to those features simply in virtue of carrying information about those external objects by being reliably caused to become excited by those external features (see my comment above for a motivational analogy with fire and smoke). So my point was that if you adopt an informational indicator approach, a single neuron can be seen as bearing meaning (as referring to an external feature or object).

    Now you might think that since these neurons are embedded in neural networks my point is undermined, indeed you said that these single-neurons which fire in response to a particular feature sit “at the top level of a deep hierarchical network.” I take it that you were trying to say that we still have to look at neural networks for meaning even in the case of single neuron excitations because neural networks are necessary conditions for single neurons to be able to respond in the right sort of way to external stimuli.

    This seems false too. Consider the analogy with smoke and fire given before. There are plenty of necessary conditions that must be in place in order for smoke to be let off and become an indicator of fire, but we can still say that the smoke by itself is the indicator of the fire since it is the thing that actually indicates fire, not any of those other necessary conditions.

    Perhaps you think I am focusing on the wrong property again. I am fine to admit it if I am since I wasn’t sure from the start what the right property was to be focusing on. I was trying to recreate what was being claimed but, alas, I may have failed.

“There are now numerous examples of computer systems that can outcompete humans in categorizing the external world de novo, i.e. purely (unsupervised) reinforcement learning, or in a mixed way (you might like to wade through Schmidhuber’s review http://arxiv.org/pdf/1404.7828.pdf).”
    ____________________________________________________________

This claim only connects with mine if we clarify what we mean by “outcompete.” As we have seen in the past, Watson could win at Jeopardy!, i.e. outcompete a lot of people, but nobody thought Watson was intelligent or used the same operations we do in order to complete various tasks. The notion of “outcompete” here would have to mean that computers implement categorization operations in at least a similar way to us, and implement those operations better than we do. One way to think about this is to ask if our theories of categorization in psychology seem to overlap with the machine learning methods that you mentioned. I simply do not know the answer to this, as I am not a computer scientist or engineer.

However, I can say this: people still can’t get computers to categorize quite right, even with the latest methods. The case which I think shows this is translation machines: the most sophisticated methods have trouble with metaphors and various other types of semantic moves that we use all the time in ordinary day-to-day interactions. As a result you will frequently see translators (created by Google) spit out nonsensical translations because they simply don’t know how to deal with semantics in the same way we do. Perhaps I am also wrong about this? More than happy to hear a response, as I could be outdated on the most cutting-edge computational powers.


  16. About 4 billion years ago, it is believed, life somehow constituted itself from an abiotic reality. Like much else, we do not really know how. There is evidence that organic molecules may have been plentiful before life began. About that time the temperature of our galaxy had apparently also cooled to about 70 degrees C, so DNA could thus exist without denaturing. Life and consciousness had sprung from a lifeless reality. These earliest life forms may have resembled modern bacteria, we do not know. Small packages of nucleic acid had somehow started the process of replication and competition for survival. This early biological consciousness could not have been of the subjective kind. Rather, molecules of nucleic acid organized reality around themselves and were propelled by the forces of nature to interact with the environment and proliferate, presumably.

Which brings us to the question of panpsychism. There clearly is/was a huge amount of information in the prebiotic cosmos. This domain of reality, as it is, is the realm of modern-day physicists, where they revel and excel in their searches: quanta of energy, strings of information, spins, entanglements and decoherence. Particles, ions and molecules interact in a clock-like fashion. “Information” is exchanged. Ergo the universe appears to be “intelligent”, just not in an anthropomorphic way.

We struggle mightily with these concepts because our brains cannot help but conjure up images with which they are familiar, i.e. derived from ordinary experience and conditioned by prior understanding.

    Nevertheless, the burgeoning deluge of new information increasingly suggests that there is an over-arching, coherent narrative that includes reality, life, consciousness and ultimately culture. This might be pregnant with meaning for the future.


dantip: You are having an argument that we have had scores of times, before you ever came on Scientia. Coel’s position is no different now than it was then, which means we are at the point of diminishing returns. The only purpose in responding, then, is to make sure that others aren’t seriously misled with regard to where things stand, with respect to the prospects of any physicalist account of consciousness or intentionality.

    Coel wrote: “Thus I was saying that the “meaning” of the word “duck” was encoded in the neural network (and I’d also find it pretty peculiar that anyone would disagree. What other options are there?).”

    This is exactly what I’m getting at. It has been patiently explained to Coel, again and again that this sort of gloss doesn’t mean anything. The whole point is that we have no idea what it means to say that a physical object or process has semantic properties, which means that we also have no idea what it means to say that “the meaning of ‘duck’ is encoded in the neural network.”

For the gazillionth time, any attempt to give a purely mechanical account of intentionality — that is, to explain intentionality in entirely non-intentional terms — will require us to come up with a theory in which syntax and syntactic distinctions do all the work. Understand what this means. It means that we would, for example, have to come up with a purely syntactic account of synonymy, antonymy, co-referentiality, and the like. The trouble, of course, is that there are indefinitely many syntactically identical statements that are not synonymous, not to mention indefinitely many synonymous statements that are not syntactically identical. The same sort of problem will arise, of course, with respect to antonymy, co-referentiality, opacity, and the rest. There is no way to represent semantical concepts and distinctions in purely formal terms — which is what a purely syntactical approach to mental content tries to do. (Davidson tried to give a syntactical account just of the opacity of belief sentences in “On Saying That,” which is universally regarded as a complete failure.)

Please understand, dantip, that I am not trying to convince Coel. He’s already told us that the syntax/semantics distinction is, if not completely bogus, porous and easily breached, so to make these sorts of arguments is simply to bang one’s head upon a wall. You are not discussing these subjects with a person who cares whether or not his views on linguistics and the philosophy of mind are well-informed. The best thing that you and everyone else can do is read and talk to people who *do* have expertise in these areas.

    I should just add one more thing. The problems are actually much worse than I’ve indicated. This whole conversation is taking place within one narrow corner of the analytic philosophical tradition’s efforts to deal with consciousness and intentionality. A very scientistic corner, I should add. There is real reason to think, however, that the very notion of mental representation (aka mental content) in play here is problematic in itself. Most of Wittgenstein’s discussions regarding the possibility of a private language and the problem of consciousness and of other minds are dedicated to demolishing representational accounts of mind and thought. For an excellent primer on this, I highly recommend Ian Ground’s presentation to the Royal Institute of Philosophy, entitled “Why Wittgenstein matters.” (The material on representationalism occurs primarily in the second half of the talk.)


  18. I’m starting to really feel at home here given that: 1) I just had a great chat with Daniel Tippens, 2) I now see plenty of other true determinists around (and not just the populist Dan Dennett sort), and 3) standard discussions do lately seem more “meaningful” than “academical,” (which I presume is the result of recent editorial changes — though I must admit that the “skull measurement” and then “Sam Harris” bitch sessions were great fun!).

    Hi Marko,

    I’ve been itching to discuss determinism and free will with a qualified physicist for about 20 years, given that I’ve held a contrary position since my college days. I’m not entirely sure that we’re on opposing sides of this, however, since perhaps you take an epistemological position (concerning what might be understood), whereas I take an ontological one (concerning ultimate reality itself)? If you were to take a perfectly ontological stance, however, then does the following seem reasonable?:

    Effects do not happen non-causally, but rather have causes which specifically mandate their occurrences. Therefore if something were to happen which was not a function of reality as a whole itself, then this could be defined as “magic.” In practice I doubt that this actually happens, given that no foundation would then exist for such events to occur.

    Regarding quantum mechanics, if there are one or more dimensions of existence which we aren’t able to measure (beyond the standard four which we may also have trouble with), should we still consider observed uncertainty to be ultimate, or rather just a product of normal human ignorance? Rather than resort to non-causal magic, should we not just presume that the human perspective happens to be far too small to comprehend ultimate reality?

    Then finally there is the question of human free will. As other determinists here have mentioned, I believe that we do have some in a practical sense, though not an ultimate one. Humans obviously have very small perspectives, and therefore can be said to “freely choose” in these limited ways, though the larger a given perspective happens to be, the fewer sources of choice that should become apparent. A “perfect perspective,” however, should present no variability whatsoever.

    So how does this sound?

    Like

  19. Dantip,
    No worries, I will not be the one to suffer. Also the reason I am not biting is that Coel definitively proved his unwillingness to own a single position in previous discussions here: https://scientiasalon.wordpress.com/2014/12/01/on-the-disunity-of-the-sciences/comment-page-2/#comments
    and (elsewhere) here: https://thebiganswers.wordpress.com/2014/12/04/reduction-in-two-easy-steps/

    You might be tilting at windmills; all the same, I wish you luck!

    Like

  20. In the spirit of being productive…

    Two excellent discussions with Peter Hacker on the questions of mind, body, consciousness, and free will. Hacker is a Wittgensteinian — indeed, one of the foremost experts on Wittgenstein — and does a very good job of describing the basic mistakes made on these topics by reductionists, and by analytic/scientistic philosophy more generally.

    Liked by 2 people

  21. In general, I have the usual objection to any book which has a theme like “Freedom Regained”, in that I simply don’t buy that there is a case that we lost it in the first place.

    The famous determinism/indeterminism dilemma is not, on examination, a dilemma at all.

    The other arguments I have heard against it don’t really make sense to me.

    To simply say that libertarian free will would require “magic”, without defining what is meant by magic or why LFW would require such a thing, is not an argument at all.

    As I said before, references to dualism and supernaturalism are irrelevant until you can demonstrate that LFW would require them.

    Daniel Dennett, in his review of Sam Harris’ “Free Will” said “The incoherence of the illusion has been demonstrated time and again in rather technical work by philosophers”, but does not cite even one in his footnotes.

    I have looked for such a demonstration but have not found it. If anyone can point me to such a work then I would be grateful.

    So all the discussion on things like meaning and complexity, while interesting in its own right, is beside the point when considering whether or not we have libertarian free will.

    For example it is going well beyond the evidence to say that any current machine, besides a biological brain, can understand anything, notwithstanding all the wonders of fast machines, clever algorithms and big data.

    As I have pointed out before, the Jeopardy producers should have sprung a surprise rule change on Watson, that they would now be asking questions and the contestants should give answers. The human contestants would have been able to handle that without any problem, but Watson, for all its impressive computing power, would have been as useful as a house brick.

    On the other hand we have no real reason for thinking that this will be impossible in the future.

    That might be an interesting discussion for the future, but in this context it is just a diversion.

    For my part I am still agnostic on the topic, but to rule out something that seems to be the case (for example that I could select either the OK or the Cancel button), without any evidence against it, does not seem to be a rational course of action.

    I will always consider there to be a benefit of the doubt that any harm I seem to choose is something that I could have prevented.

    And let me finally say something about the (non existent) link between this issue and the issue of whether or not punishment should be retributive. The argument for non-retributive punishment works just as well whether or not we have libertarian free will. Arbitrarily linking it to an unsupported metaphysical opinion only hampers our ability to progress the case.

    Liked by 2 people

  22. “It has been suggested that determinism is the only game in town. But there really is no evidence for determinism.”

    Actually, determinism was overthrown by quantum mechanics. For most macroscopic situations Newtonian mechanics is an extremely good approximation, and it is deterministic.

    The role of quantum mechanics in brain function is a matter of argument, which is to say we don’t know. The indeterminacy of QM, however, is just randomness, and even if it affects human behavior it wouldn’t account for what is commonly meant by “free will”. For that there would have to be some other factor changing the statistics from the purely random predictions of QM to something else. Sorting out whether there is something changing the statistics of QM inside living human brains is a monstrously difficult problem and won’t be done for a long, long time.

    At a simpler level, there are so many problems with free will that it’s hard to make sense of the concept. How would you know if your will was being controlled? Something forcing your body to carry out actions that you didn’t want to perform is controlling your actions, not your will. If your will were being controlled by hidden variables of some sort, you would feel just like you do now. You would have desires and act accordingly.

    Liked by 1 person

  23. Hi Dantip.

    “[whether] our theories of categorization in psychology seem to overlap with the machine learning methods”

    Well, I found this paper, “Deep Learning of Orthographic Representations in Baboons”, interesting. The learning curve and performance of baboons learning to discriminate between written novel English words and non-words can be mimicked by a neural network that “simulates the primate’s visual ventral stream”. The authors also try to mimic brain lesioning by removing network connections in their model and seeing how performance deteriorates. One might or might not believe this is capturing one subsystem underlying human literacy, which involves cross-talk between multiple modalities and brain regions, but the simple-minded merging of deep learning networks, one for images, one for language, is what is doing so well at image classification at the moment, e.g.

    http://www.stat.ucla.edu/~junhua.mao/m-RNN.html

    Re semantics, standard tasks for deep learning in http://arxiv.org/pdf/1503.00185.pdf were “sentiment classification” [of sentences, favourable v. unfavourable re topic], “sentence-target matching”, “semantic relation classification” [learning long-distance relationships between two words that may be far apart sequentially], and “discourse parsing”.

    So, although we don’t have anything like human-type thought, we definitely have bits and pieces that it seems plausible might be “joined up” appropriately. I have lost the origin of the aphorism that goes something like “motion without a mover, biochemistry without an animal, design without a designer, thoughts without a thinker”. As a simple-minded scientific materialist, I have an expectation that our understanding will continue to increase by moving up and down between lower and higher levels of explanation/understanding, rather than hit a mysterious limit.

    Like

  24. Massimo remarked …

    “Perhaps the neatest and most powerful rejoinder to the idea that thoughts change nothing comes from the neuroscientist Dick Swaab, who dismisses free will out of hand as a “pleasant illusion.” Nonetheless, in his book We Are Our Brains he reports that “patients suffering from chronic pain can be coached to control activity in the front of the brain, thereby reducing their pain.” But hang on: if “we are our brains” how can we control them? His own example is evidence that it is far too simplistic to talk as though our brains are doing all the work and conscious thought is redundant.”

    In this case it can just be that one part of our brain is influencing or controlling another part. “We are our brains.” is just an oversimplification of the situation. If we say that “John is his brain.” then we can ask whether John is his whole brain or only part of it. If we put John’s brain in a vat, is that the same as putting John in a vat? We can’t even properly speak of John’s brain if John is his brain; that is like saying “John’s John.”

    If John isn’t his whole brain then is he only a part of it? Which part? Is he always in the same part or does he move around depending on what he’s thinking about? Is there a John when he’s in dreamless sleep? Our concept of “person” is just too vague here. The objection hinges on this difficulty but it does nothing to indicate that conscious thought isn’t redundant.

    “We are our brains.” and teaching someone to reduce their pain are entirely consistent and completely independent of whether or not conscious thought is redundant.

    Liked by 1 person

  25. Or, to take psychologist Michael Gazzaniga’s account of “emergent properties,” micro-level complex systems “self-organize … into new structures, with new properties that previously did not exist, to form a new level of organization at the macro level.” In “strong” versions of this theory, “the new property is irreducible, is more than the sum of its parts, and because of the amplification of random events, the laws cannot be predicted by an underlying fundamental theory or from an understanding of the laws of another level of organization.”

    To give a clear example, if this is correct, quantum physics is more fundamental than Newtonian physics, but Newton’s laws can’t be torn up and replaced by quantum ones. “Classical properties, such as shape, viscosity, and temperature, are just as real as the quantum ones, such as spin and nonseparability,” says Gazzaniga.

    This is a terrible example, or rather an attempt at one. Classical properties are derivable from quantum mechanics as approximations. It was by studying classical physics at a more detailed level that QM was invented. If one studies shape, viscosity, and temperature in sufficient detail, a QM description will be reached. The question here isn’t which laws are more “real” but which ones more accurately describe experiments/observations. If the statement is reworded…

    “Classical properties, such as shape, viscosity, and temperature, are just as accurate in predicting observations as the quantum ones, such as spin and nonseparability,”

    Then it is definitely incorrect.

    Also, so far as we know, there are no irreducible properties demonstrated by empirical results. Thinking that they exist in complicated systems where calculations are too difficult is pure speculation. Invoking irreducible properties here is like a god-of-the-gaps argument: we don’t know for sure, so irreducible properties are responsible.

    Personally I think that some of this is just “physics envy” aggravated by some pretty outrageous remarks by some physicists against philosophy in general.

    Liked by 1 person

  26. I watched the first of the Peter Hacker talks that Aravis was kind enough to link. As I understood it, half of it was an exposition of the “linguistic turn”, analyzing the English uses of the word “mind”. It was interesting to contrast this to Lillard (1998), “Ethnopsychologies: cultural variations in theories of mind”. Hacker’s evocation was very similar to what Lillard calls the “European American Social Science Model” constructed from interviews with children and adults (figs 1 and 2 of that paper), except that Lillard quotes Johnson [1990] as finding that “10-year-olds showed evidence of believing what Johnson intuited was the adult understanding: that a brain transplant has the effect of transplanting the self”, the opposite of what Hacker thought should follow. The rest of the paper turns to not very extensive evidence as to whether the concept of mind differs across cultures, e.g. Japan, where “kokoro”, “hara”, “ki”, “seishin” and “mi” are five concepts that cut across body and mind. The discussions about causation, intention, and emotion (220 English words, 5 in some other languages) are also interesting.

    So, I thought Hacker’s repeated assertion that “mind is not brain” was just that, an unsupported assertion, in view of the consonance of most of his views with the “folk” model, or EASSM, but with differences at the key issue (bearing in mind that “folk” concepts are mixed up anyway).

    Like

  27. Hi dantip,

    You are assuming that “understanding” for these phones is the same as “understanding” for people.

    Yes. Or rather I’m putting the onus on those claiming there is a difference. The standard philosophical position introduces entirely artificial divides in order to claim that there is something missing and that the phone isn’t doing “real” understanding (for some reason that they never spell out explicitly).

    If I ask an iPhone “Siri, what is the time?” and Siri responds correctly, then Siri has understood the meaning of my request. That’s all there is to it! There is no need for anything else, no need for fairy dust, no need for “only humans can do it” exceptionalism. The concepts “meaning” and “understanding” are just about linkages between concepts.

    If I say “Siri, kas ir laiks?” then Siri will not “understand” my “meaning” because Siri doesn’t know the linkages between Latvian words and other concepts, in exactly the same way that non-Latvian-speaking humans would not understand.

    The next generation will grow up speaking to their phones and all this will be obvious to them. Give an iPhone to a Latvian 8-year-old and they’ll quickly realise that Siri understands the meaning of English but not (currently) Latvian.
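
    As a toy sketch (purely illustrative, and in no way a claim about how Siri is actually implemented), the “meaning as linkages between concepts” picture above amounts to this: an assistant “understands” an utterance only insofar as the words link to other concepts and actions. Every name and phrase key below is invented.

```python
from datetime import datetime

# Toy "linkage" table: utterances link to other concepts/actions.
# A real assistant's network of linkages is vastly larger and learned,
# not hand-written; this is the minimal version, for illustration only.
LINKS = {
    "what is the time": lambda: datetime.now().strftime("%H:%M"),
    "what day is it": lambda: datetime.now().strftime("%A"),
}

def toy_siri(utterance: str) -> str:
    key = utterance.lower().strip("?! ")
    action = LINKS.get(key)
    if action is None:
        # No linkage from these symbols to anything else: "kas ir laiks?"
        # (Latvian) lands here, just as it would for a non-Latvian speaker.
        return "I don't understand"
    return action()
```

    On this picture, toy_siri("What is the time?") succeeds because the words are linked to the clock, while toy_siri("kas ir laiks?") fails for exactly the reason a non-Latvian speaker fails: the linkages are simply absent.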

    As a comparison, we now accept that life doesn’t need elan vital and that the naturalistic account of life as self-replicating patterns is entirely adequate. But life-forms lacking elan vital are still real life-forms! Aravis is hankering after some equivalent of elan vital to give “real” meaning, but the whole quest is misconceived.

    … brains can categorize and phones cannot.

    First, quite obviously, current phones are hugely more primitive and limited than human brains. So, yes, you can point to their limitations. So what? It’s just a difference in degree.

    And yes, computers can categorize. If you have, say, a database of light-curves of 100 million stars, and want to find and categorize the variable stars, the way to do it is to train a neural network by giving it examples of particular variable-star types, and then let it categorize the database for you. This is routine stuff.

    “Strong” AI is just bundles of weak-AI modules that are inter-linked. There’s no added dualistic woo; the “extra” that the philosophers are looking for doesn’t exist and isn’t needed.
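
    The variable-star workflow described above (train on labelled examples, then let the network categorize the rest) can be sketched in miniature. This is a hedged toy, not astronomy: the two features (period, amplitude), the cluster parameters, and the tiny softmax classifier are invented stand-ins for a real light-curve pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stars(n, period, amplitude, label):
    # A cluster of stars scattered around a characteristic period/amplitude.
    feats = rng.normal([period, amplitude], [0.2, 0.05], size=(n, 2))
    return feats, np.full(n, label)

# Two invented variable-star classes, well separated in feature space.
Xa, ya = make_stars(100, period=0.5, amplitude=0.3, label=0)
Xb, yb = make_stars(100, period=5.0, amplitude=0.9, label=1)
X, y = np.vstack([Xa, Xb]), np.concatenate([ya, yb])

# Train a one-layer softmax classifier by gradient descent on the
# labelled examples (a stand-in for the neural network in the text).
W, b = np.zeros((2, 2)), np.zeros(2)
onehot = np.eye(2)[y]
for _ in range(500):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)
    W -= 0.5 * X.T @ grad
    b -= 0.5 * grad.sum(axis=0)

# "Categorize the database": classify unseen stars drawn near class 1.
Xnew, ynew = make_stars(50, period=4.8, amplitude=0.85, label=1)
pred = (Xnew @ W + b).argmax(axis=1)
accuracy = (pred == ynew).mean()
```

    Nothing here depends on the classifier being "deep"; the point is only that supervised categorization of this routine kind requires no ingredient beyond examples, features, and error-driven weight updates.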

    Not all neural networks have this direct causal connection.

    Sure, and networks without such connections have less “understanding” of “meaning” because they “know about” fewer linkages between concepts.

    Single-cell recordings suggest that individual neurons play important roles in and of themselves.

    It can indeed be the case that particular neurons often or always fire given particular inputs such as “Jennifer Aniston’s face”, but that doesn’t mean that the whole concept is encoded in a single neuron. You can’t encode information in a single neural-network junction (any more than in a single digital bit), only in patterns of them.

    Hi Robin, much as I’d like to explain and defend compatibilism and compatibilist free-will, comment limits preclude.

    Like

  28. @Coel:

    Aaaaagggggghhhhhh!!!!! Why do we have to have this interminable exchange?! In the interest of my own sanity, I’m just going to copy and paste part of Aravis’ innumerable responses to this sort of thing. From February 28 (10:47 am):

    _______________________________

    “But how does one recognise genuine understanding if merely behaving exactly like a competent human (Turing Test) isn’t enough? Is it genuine only when a human displays the exact same behaviour? But that is then just begging the question.”

    Actually, it’s the Turing Test and those who appeal to it to justify Strong AI that beg the question. Massimo patiently went through this in two dialogues with AI enthusiasts (one of them, alas, is the awful Eliezer Yudkowsky):

    http://bloggingheads.tv/videos/2561
    http://bloggingheads.tv/videos/2483

    2. Coel wrote: “Its “sense” or “content” is the set of linkages it has to other symbols, to other pieces of information.”

    The most widely held view in the philosophy of language is that the content of a term or expression consists of its reference. Some philosophers, like Frege and more recently, Katz, think that in addition to reference, terms and expressions have an additional semantic value — sense — which, roughly, consists of a description of the referent. In either case, meaning describes a word-world relation, not a word-word relation.

    The view you have described is known in the literature as conceptual role semantics and was championed chiefly by Ned Block. It is a view with difficulties too many to count (you can see the relevant Encyclopedia entries), but its main problem is that it cannot make sense either of truth or intentionality, so as far as I am concerned, it is a loser.

    Regarding your earlier remarks on syntax and semantics, it is nothing but a word jumble. These terms are very clearly understood and belong to a real science, known as Linguistics. The relevant definitions are easy to find and bear no resemblance to what you have said here or in previous discussions.

    3. Robin Herbert wrote: “I must confess I don’t understand either Coel’s nor Aravis’ definition of “understand” and yet somehow I understand what I mean when I say I don’t understand them. Aravis seems to have defined it in terms of another word we use for “understand” (grasp).”

    Now *this* is a really good question/challenge. By “grasp” is typically meant something like “mentally represent,” and while many philosophers are happy to leave it at that, I — given my Wittgensteinian leanings on so many things — am not. I invoked it here primarily to provide a common, standard account of understanding, so as to focus on the Chinese Room question — and to counter Coel’s assertion that no one has such an account and thus, cannot declare its absence. Of course, Wittgenstein’s view on mental representation helps Coel and the Strong AI crowd even less, so….

    Liked by 1 person

  29. Dantip: You are assuming that “understanding” for these phones is the same as “understanding” for people.
    Coel: Yes. Or rather I’m putting the onus on those claiming there is a difference.

    Yikes! Can this impasse be resolved?

    Firstly, the word understanding is properly applied to human beings. If there is doubt, the human understanding can be probed. Animals also have questionable ‘understandings’, but that requires a very special effort on our part to properly define and identify. If one wants to ascribe understanding to a dead contraption hooked to a man-made source of energy, the onus should be on you! Saying a smartphone understands is saying that it is conscious.

    I have argued that bacteria display biological consciousness. Like us they organically adapt to and interact with their environment. Still, their consciousness has ALMOST nothing to do with human experience, even though their biological processes superficially resemble ours.

    The onus is on Coel to explain to us how a non-biological machine running man-made algorithms understands, i.e. is conscious. A skeptical audience is waiting!

    Liked by 2 people

  30. Coel wrote:

    Aravis is hankering after some equivalent of elan vital to give “real” meaning

    ———————————————————————————————–

    I am after no such thing. I have said no such thing. I have implied no such thing. I have intimated no such thing.

    ————————————————————————————————-

    davidlduffy wrote:

    I thought Hacker’s repeated assertion that “mind is not brain”, was just that, an unsupported assertion

    ———————————————————————————————————

    Actually, Hacker doesn’t just “assert” anything. He goes into quite a bit of detail as to why it is incoherent to say the mind is the brain. He discusses this in the second video as well.

    Jarnauga:

    I’m with you. This is beyond frustrating. At this point, it’s mostly aggravation and very little to nothing by way of enlightenment. I need a lengthy time-out.

    Liked by 1 person

  31. All his examples need no other explanation than activity in the brain. There is no ‘mind’ versus brain; it’s all brain. There is no free will, and it need not be simplistically labeled determinism. All these pleas for consciousness and free will as something different from the brain are insulting to this amazingly complex and beautiful organ.

    Liked by 2 people

  32. Just a few final thoughts on this thread.

    First, Coel, at least “Socratic disease” does not have repetitive hand-waving as a pronounced symptom.

    More seriously, Coel and DavidDuffy at a minimum, and perhaps any others on this thread of similar mindset, are either missing, misunderstanding, or ignoring two issues related to this.

    One is, in the sciences in general, the idea of emergent properties. The other is, related to consciousness, science of mind, etc., embodied cognition.

    Of course, Massimo has discussed both more than once in the past, between here and Rationally Speaking. Not sure how long David’s been commenting here, but I know Coel knows this. So, I vote for “ignoring.” We could call that another form of denialism, I suppose? ☺

    Thanks for giving me a laugh that Aravis is after “élan vital,” too. (Advance note: Neither am I. Massimo, though? I heard he’s selling “Élan Vital” brand “hand crafted” premium grappa.)

    I am otherwise with Aravis, Massimo and Jarnauga. That’s why I said, in my first comment, that if anything, Baggini needs to give his book another dunking in complexity theory.

    Unfortunately, in the world of business hypercapitalism, for things like customer service, corporations are likely to believe more and more, especially in the US, that Coel’s take on such things is right, and trot out ever more customer service “Siris” on the phone and even robots for in-person store service.

    Onus is back to you. Computers are wired differently, operate differently and more. And, as noted, still easy to stump in many ways.

    VectorShift: There is, of course, a difference between “emergent properties” that get “mishandled” when treated with the wrong level of approach, or an attempt at greedy reductionism, and “irreducible complexity.” I didn’t hear Baggini, or myself, or any of the non-Coel/Duffy people mentioned above use the phrase “irreducible complexity.” Red herring.

    So is your idea about “John.” None of us said that “John” is less than his brain. Rather, embodied cognition says that he’s more than it.

    That said, as I noted above, this issue is more complex than Baggini makes it. Indeed, I have talked here before about “something like free will,” subconscious free will or something like it, and more. https://scientiasalon.wordpress.com/2014/10/21/free-will-and-psychological-determinism/

    That said, there’s nothing to say that automatons are conscious, either, per another comment of yours, and I reject p-zombies or similar.

    Liked by 1 person

  33. Hi Coel, being a neuroscientist, skeptic, materialist, and atheist I believe that the physical brain is the source of all thought and behavior 🙂

    Being a compatibilist who believes in emergent properties, I also think it is important to apply terms and language at their appropriate level. You argued (contra the author) that meaning could be found in neural networks; I am arguing that it can’t, because meaning is an emergent property found at a different level.

    We nowadays even have mobile phones that can listen to speech, understand its meaning, and then act on the information! … If the reply is that that is not “real” meaning and not “real” understanding, then what is missing?

    The best you have here is that with proper programming phones can identify and react appropriately to higher order phenomena in a way that is important to humans. The “importance” and “meaning” of their actions are not found in their chips and programs but in the expectations of their human designers.

    Or are you claiming that a phone, when hearing “duck!”, would actually understand that someone is concerned that another person should lower their body quickly to avoid being hit by something? Let’s say a phone has the ability to track trajectories and will present a warning if it sees a baseball is about to hit you in the head. When it yells “duck” do you think your phone really means it?

    This is a common misunderstanding that a “reductionist approach” means ignoring the high-level description. It doesn’t. It means tying the high-level and the low-level together.

    Tying events between two levels connects but does not reduce higher level phenomena to the lower. It does not allow one to conflate the networks that enable people to experience meaning with actual meaning.

    Let’s say a man and woman fell in love when they were young, fought obstacles to stay together, and over time saved each other’s lives. Then the woman died, and the man is upset. I argue that the actual meaning of that woman’s life and death to that man is properly found in considering those events and not in his neural networks.

    Let’s say that during surgery a neurosurgeon finds all the right synapses and cells needed to affect the man’s experience of what she meant to him. Zip zip zip, her life and death provoke no emotion in the man, maybe he can even laugh at it.

    Now another surgeon looks at the man’s neural networks. Would you say that her life and death really have no meaning (because the 2nd surgeon finds his neural networks result in laughter rather than tears) or that their real meaning is properly assessed (regardless of what he now feels) on the past events?

    The capacity to experience meaning comes from the mechanics, meaning itself comes from (is defined by) the interaction of entities at a higher level.

    Liked by 2 people

  34. Hi Aravis, I think by “meaning” being embedded or encoded in the brain Coel was trying to say that there are networks which physically hold (or manifest) our experiences of meaning related to different things. I think he has made an error by conflating the materials/processes which allow for our capacity to understand and impose meaning, with understanding and meaning itself.

    But I want to make sure I understand your position (and will be watching the videos).

    Do you believe that scientists would be unable to identify networks within a brain which enable a person to identify or remember a favorite musician (for example)? Or that they could not cause, alter, or prevent thoughts and behaviors by manipulation of the brain?

    If you believe this is possible, does it (scientific investigation) not give us some valuable information/insight into concepts of understanding (epistemology), identity, will, and morality? I’m definitely not saying it gives us everything, or most, but doesn’t it give us something of value?

    Like

  35. Like the author I don’t support the following

    “thoughts have no causal efficacy: they do not affect what we do. “Thoughts” should be understood very broadly here to include not just beliefs, but also desires, intentions and simply the way in which we understand what we see and hear, like injunctions to duck”

    but I find the rest of the article problematic.

    It’s not clear what definitions of mind or consciousness are being used, if any, and they’re often used as synonyms. Mind is also used as a synonym for ‘thoughts and perceptions’ or as their backdrop. Moreover, the meaning or definition of ‘meaning’ is not touched on; likewise for ‘brain’.

    “The undeniable fact is that brains provide the material means by which conscious life is sustained.” …

    Do they? The implication is that we have clearly defined what we mean by brain and that organisms without one are not conscious, but that has not been established.

    And then this threw me off

    “We will not find embedded in any of this the meaning of ‘duck’.”

    Even partially? But then, what is ‘the meaning’ of duck?

    “That can only mean one of two things: either the meaning of “duck” had no role at all to play in your action, or a purely physical description of what went on would not provide a complete account of why you did what you did, adequate to explain what happened.”

    First, false dichotomy. Second, considering he says “would not provide a complete account” why not simply change his previous statement to

    ‘nowhere near a complete understanding of “duck” can be found by looking at a brain’s bio-chemistry and neural processes.’

    I quit reading the article at this point because the ground was getting too fluid to support me and I didn’t expect to learn anything more.

    dbholmes,

    Taking Johannes Lubbe’s place,

    “However, it was not clear if you are suggesting that the mechanisms of bacterial sensing and response fully capture what is happening at the level of human (or other sufficiently complex animal) consciousness”

    Supposing we look at the concept of electricity instead of consciousness: whether electricity is running a digital clock or a supercomputer, it isn’t really relevant to ask if electricity captures the differences in what plays out in each case.

    “the level of organization found in the brain (or neural networks) provides for more functions than found in less complex, particularly single celled, organisms”

    Yes, from our perspective, but I don’t see why complexity of function should be related to likelihood of consciousness. Moreover, I think that the difference in complexity between a unicellular organism and a dog can be seen as basically non-existent if we live in an extremely complex world; and, assuming we do, the apparent differences in complexity we observe can be seen as an artifact of the measures we use to evaluate comparative complexity.

    Liked by 2 people

  36. Robin Herbert
    Quote 1 (May17 8.48pm):
    >Under libertarian free will it would be true to say: “If I press OK then in the future I will hear of people suffering and know that I could have prevented it.” Under determinism that would not be true; rather: “If I press OK then in the future I will hear of people suffering and know that there was nothing I could ever have done to prevent it.”<

    In either case it is just as possible that you may, or may not, consequently feel regret or guilt; this again depends on the type of human you are.

    Being an atheist-determinist does not force one into becoming an inhumane automaton who is unable to empathise with suffering in others. Determinism does not in any way remove the requirement to make a choice when you are faced with more than one available action. Assuming one has some degree of sound moral principles, a determinist who is an integrated member of a community or group can still feel responsible if it was undeniably their decision that caused harm.

    You said: “Of course everybody is free to use the phrase “moral responsibility” as they see fit as long as they don’t suggest that everybody must share their own meaning.”
    I presume this alludes to my previous comment (May16 5.31pm): “I suggest this is synonymous with behaving with moral responsibility.”
    I think it is a slight exaggeration to say I was saying that everybody *must* share this idea. I’m also a bit puzzled how “everybody must share their own meaning” is possible if having their own meaning means having different ones. Does not meaning derive from everybody sharing a common (or very similar) concept? That concept, as I understand modern scanning techniques can demonstrate, probably corresponds to comparable electro-chemical processes in all the brains that think it, and it can be passed from human brain to human brain exceptionally well by using language.

    However, I will expand that remark of mine thus:
    I suggest it might be helpful to make moral responsibility synonymous with the impulse to act so as to produce intra-species behaviour whose overall effect is advantageous to survival.

    Simply to refuse (as some do) to use an expression on the grounds of its close popular connection with established religions is to lose a useful blanket term: there is no hesitation in using other behavioural words, like “love”, that are also still widely claimed as the prerogative of God(s).

    I nearly agree with you (R.H. May18 12.10am) when you say that arguments for non-retributive punishment work just as well with or without free will, except that I would replace “non-retributive punishment” with “the enforcement of rational rules, from public laws to mutual etiquette”.
    Though determinism actually supplies the evolved needs, tastes, desires, emotions and reasoning power you require to function, you still do precisely what you want to do, and whatever that is indicates your moral worth.


  37. Hi Socratic – re (strong and weak) emergentism and embodied cognition: sure! But regarding the latter, if the concept of a brain in a vat is coherent, then surely a “brain” sitting in one spot having inputs piped in is equally so. Have another look at the descriptions the neural network wrote (sitting in its cave, looking at Flickr) and ask yourself whether you would have thought it remotely possible 20 years ago.

    Re the former, I’m a great believer in ontological levels – I even think sociology is a science studying real objects – and, having some sympathy for a functionalist way of thinking, I am comfortable with the idea that the same “layer” could emerge from a different lower level of explanation (as wave behaviour can, say). But it will arise in a comprehensible and non-arbitrary fashion.

    Mario Bunge (sorry I only seem to quote him ;)) says:

    I am a materialist but not a physicalist because, as a physicist, I learned that physics can explain neither life nor mind nor society. Physics cannot even explain phenomena (appearances), because these occur in brains…; nor can it fully explain machines, as these embody ideas, such as those of value, goal and safety…

    As it happens, I only partially agree with the above. The target Baggini article mentions “complexity theory”, and the other thread is the physics of information in explaining life, e.g. Maximum Entropy Production.


  38. Aravis,

    Thank you for the videos from Hacker.

    davidlduffy,

    Hacker is making a rich non-technical argument for the necessity of seeing the individual as a whole entity; once that view is taken, the distinction between mind and brain becomes somewhat trivial and irrelevant. (I read your Lillard link; while there is something to learn there – and some interesting flaws – it doesn’t have much to do with what Hacker is talking about.)

    Coel,

    You are re-defining the term ‘understanding’ outside of the common – um – understanding of it. I see little difference between your use of it in relation to your phone and my neighbor’s saying that her cat ‘understands’ her, except that she knows that she is using the word tropologically for certain behaviors her cat exhibits (although not every cat owner would – but then they would not be using the word in the way that you do, anyway).

    Daphne,

    The issue won’t go away by hand-waving.

    Marclevesque, SocraticGadfly,

    Both of you raise interesting problems with the excerpt; I’ll suspend judgment ‘till I read more from Baggini.

    dbholmes,

    Your question was addressed to Aravis, but let me muse on it a bit. The notion that mental states and behavior could be changed through external physical influence predates our current understanding of the brain by many centuries – since at least the discovery of ethanol fermented from various plants. I’m not being facetious; ingestion of alcohol clearly impacts the brain and causes drastic changes in behavior and personality. So, clearly further research in this area will tell us something; the problem is, will it give us a complete picture of the individual person, even after all the details are patched together?

    The answer right now seems to be, probably not; this may even be the wrong question to be asking. I knew someone in college who responded to a failed romance by taking to drink; he would sit in his room and play the same album over and over again (The Cars, I think). After he finally got through rehab, he said he couldn’t hear any song off the album without feeling physically ill. Was it a learned response? The effect of the anti-depressants and mild sedatives that helped get him through rehab? The socialization with fundamentally different kinds of people than he had been hanging around before? Who knows. (I was one of those he’d hung around with before, so he lost touch with me.)

    So the question you’re asking can be answered; but will that get us the complete picture here? Why did my friend’s romance fail? Why did he ‘manipulate his brain’ in the way he did? What clicked in the rehab process that led to whatever changes made him different? What kind of life did he lead after?

    Who was this guy? That’s the kind of question that can never be asked productively in the neurosciences, unfortunately. Yet it is the kind of question that interests (some of) us most.


  39. Regarding Coel’s assertion that his iPhone may effectively be “conscious,” is this not the exact sort of question that philosophy professors challenge their students with all the time? We’ve reacted to him quite predictably, though without achieving victory, given that the philosophy community in general doesn’t back us up. Or am I mistaken? Does this community now agree that phones and such aren’t conscious? Where might such a consensus be shown on Wikipedia? What else is understood about consciousness?

    Ned Block would like his model of consciousness to resolve things, and I do hope he’s successful, given its conformity with my own such model. If we were to develop a functional and accepted model in this regard, I presume that we’d do more than just “straighten out Coel.” Perhaps psychologists could use it in their work. Perhaps cognitive scientists could then understand cognition a little better. Note that if we were to finally gain an accepted model of the conscious mind, a great chunk of “philosophy” should then be transformed into “science.” But isn’t Coel also the guy who bears the flag of science on his chest? Once the philosophy community reaches consensus on this issue, they may then effectively be considered “scientists” in this regard. I may be wrong, but I do believe that this was all Coel wanted to accomplish in the end anyway.


  40. Aravis has previously espoused “ordinary language philosophy” as solving “misunderstandings philosophers develop by … forgetting what words actually mean in everyday use”.

    The current generation of 8-yr-olds are quite content with the idea that an iPhone understands the meaning of “Siri, what is the time?”.

    We need a good and proper reason not to accept this, and no-one has given any explicit conception of “meaning” that explains what is missing. People are merely proceeding on the dualistic intuition that “only humans can do it”.

    Johannes Lubbe: Saying a smartphone understands is saying that it is conscious.

    No it isn’t. It’s just saying that it understands (can relate the information to other pieces of information). There is no need to read into the word any more than that. By bundling the concept up with consciousness you’re just trying to make the problem insoluble. Breaking problems down into pieces is the way to solve them.

    jarnauga111: … meaning describes a word-world relation, not a word-word relation.

    And my example of an iPhone understanding the meaning of the request “what is the time?” involves the iPhone relating the words to its clock and thence to the real-world property “time”.

    Hi dbholmes,

    You argued … that meaning could be found in neural networks, I am arguing that it can’t because meaning is an emergent property found at a different level.

    Hold on, meaning is an emergent property found at a high-level description of the neural network. It is still the property of the neural network.

    The “importance” and “meaning” of their actions are not found in their chips and programs but in the expectations of their human designers.

    Of course we humans are not doing “real” importance and “real” meaning either; our brains are merely physical computational devices programmed by evolution.

    Let’s say a phone has the ability to track trajectories and will present a warning if it sees a baseball is about to hit you in the head.

    A most excellent idea for an app!

    When it yells “duck” do you think your phone really means it?

    Given that app, yes! Seriously, our brains are just bundles of “apps” programmed by evolution to do a job. There isn’t any fairy dust producing “real” meaning. We are just more-capable phones.
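    To make the hypothetical concrete, here is a toy sketch (entirely my own illustration; the function names, physics and thresholds are invented for this example, not any real app) of what such a “duck” warner reduces to: relating sensed data (a ball’s position and velocity) to a predicted near-miss with your head, which on this view is all the “meaning” the phone’s warning needs.

    ```python
    # Hypothetical "duck" warner: step a tracked ball forward under gravity
    # and warn if its predicted path comes close to the user's head.

    def will_hit(ball_pos, ball_vel, head_pos, radius=0.3, horizon=2.0, dt=0.05):
        """Return True if the ball passes within `radius` metres of the head
        at any point during the next `horizon` seconds (simple Euler steps)."""
        g = -9.81  # gravitational acceleration, m/s^2
        x, y = ball_pos
        vx, vy = ball_vel
        t = 0.0
        while t < horizon:
            x += vx * dt
            vy += g * dt
            y += vy * dt
            if (x - head_pos[0]) ** 2 + (y - head_pos[1]) ** 2 <= radius ** 2:
                return True
            t += dt
        return False

    def warn(ball_pos, ball_vel, head_pos):
        """The phone 'means' duck only in this relational sense."""
        return "Duck!" if will_hit(ball_pos, ball_vel, head_pos) else None
    ```

    A ball thrown toward the head triggers the warning; one moving away does not. Nothing beyond the trajectory-to-collision relation is consulted, which is exactly the point at issue in the exchange above.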

    I argue that the actual meaning of that woman’s life and death to that man is properly found in considering those events and not in his neural networks.

    And what is “considering those events” other than the neural network and its stored ideas and memories?

    Would you say that her life and death really have no meaning …

    If by that question you mean “meaning to him”, then no, they don’t (in your scenario, the surgeon destroyed that).

    If by “real meaning” you mean “meaning” in the abstract, unrelated to any beholder, then that’s a nonsensical notion. What is the “real meaning” of the Atlantic Ocean?

