Is quantum mechanics relevant to the philosophy of mind (and the other way around)?

by Quentin Ruyant

There have been speculations about a possible link between quantum mechanics and the mind almost since the early elaboration of quantum theory (including by well-known physicists such as Wigner, Bohr and Pauli). Yet, despite a few proposals (e.g., from Stapp, Penrose and Eccles [1]), what we could dub "quantum mind hypotheses" are often readily dismissed as irrelevant and are seldom discussed in contemporary philosophy of mind. My aim in this article is to defend the relevance of this type of approach.

For the purpose of this discussion it is useful to distinguish two different theses regarding the putative links between quantum mechanics and the mind:

  1. The mind is relevant in interpreting quantum mechanics
  2. Quantum mechanics is relevant in the philosophy of mind

Of course, the two theses are not necessarily construed as independent by proponents of quantum-mind hypotheses. One could argue that the mind is relevant in interpreting quantum mechanics precisely for the same reasons that quantum mechanics is relevant in the philosophy of mind. This is actually what I will argue here (or at least that it is a promising hypothesis that should be pursued). However, the two theses face different kinds of objections and need to be distinguished.

Is consciousness a biological problem?

Quite logically, I will first tackle the second one: the idea that quantum mechanics could help us explain consciousness. Such a claim is sometimes dismissed on the grounds that the problem of understanding consciousness is a biological problem, not a physical one. Let me clarify a bit: by "biological/physical problem" I mean a problem which is better informed by biology/physics, not necessarily a purely scientific (as opposed to philosophical) problem. Quantum mechanics, it is said, is only relevant at very small scales of reality, while conscious organisms are biological organisms, typically found at a macroscopic level, where quantum effects manifest themselves as mere noise. Besides, it is said, randomness is not a proper substitute for free will, so quantum mechanics wouldn't help anyway. Therefore quantum mechanics is irrelevant to the philosophy of mind.

First, let us observe that typical quantum effects are not necessarily foreign to biology, as illustrated by the burgeoning field of quantum biology. Nor are they in principle confined to the microscopic level — this is the heart of the measurement problem, as illustrated by the famous Schrödinger's cat thought experiment. Quantum effects such as entanglement also help explain macroscopically observable properties, such as heat capacities or magnetic susceptibilities [2]. It is generally assumed that decoherence precludes the observability of quantum effects on macroscopic objects, but as Zurek et al. note, decoherence is more a heuristic tool to be applied on a case-by-case basis than a generic consequence of the theory [3]. Finally, quantum entanglement is hard to measure in complex systems. The idea that no quantum effect exists at all on our scale is thus neither empirically nor theoretically grounded. At most we can say that no quantum effect is detectable in common physical objects whose behavior can be accurately described using Newtonian mechanics alone, such as tables and chairs, but of course these are not the sort of conscious objects we are interested in (unless, of course, you think that biological phenomena can be explained with Newtonian mechanics alone).

However, my main contention concerns the idea that the problem of consciousness is a biological problem. Let us follow Chalmers in distinguishing the "easy problems" of consciousness from the "hard problem." The easy problems concern everything that is scientifically tractable from a third-person perspective — how we discriminate and integrate information, and so on: all the cognitive aspects of consciousness. These (not so easy) problems are undeniably biological or psychological. The "hard problem" concerns the phenomenal aspect of consciousness, the subjective, first-person "what it's like" to be conscious. And this question, Chalmers argues, is not scientifically tractable: it is a metaphysical problem.

Metaphysics addresses the most fundamental aspects of reality and arguably the phenomenal aspect of consciousness is one of them. Now, if there is a branch of science which more closely resembles metaphysics in its specific interest for the fundamental aspects of reality, it is physics — not biology. Physics and metaphysics overlap in many respects (just consider the wild speculations about a mathematical universe advanced by physicists such as Tegmark [4]) and there is probably a continuum between the two. On the contrary, a contribution of biology to fundamental metaphysical issues seems to me rather implausible. I could be wrong (and Chalmers could be wrong in thinking that phenomenal aspects of consciousness are metaphysical), but I contend that the hard problem of consciousness, if it exists, is not a biological problem, but a physical one: it is just too fundamental a problem to be addressed from a biological perspective. Note that I don’t mean to deny that there are relations between phenomenal and psychological aspects, in the sense that certain cognitive states are correlated with specific phenomenal aspects, but explaining such correlations is distinct from explaining why there are phenomenal aspects to begin with.

Of course, no metaphysician denies that physics is of interest in the philosophy of mind. Kim’s causal exclusion argument involves the principle of “physical closure.” The argument precisely addresses the problem of the relations between the physical and the mental [5]. What some metaphysicians apparently deny is that quantum physics or any actual physics is of particular interest for such issues: for these authors metaphysics can still produce interesting insights about the physical “in general,” that is, whatever actual physics says. They seem to assume that the physical “in general” poses no important problem of interpretation apart from the well entrenched problems of classical metaphysics.

It seems to me that there is no such thing as "the physical in general, whatever actual physics says": our conception of the physical changes with our physics. There is no point in reasoning about the physical without taking into account what our best current physics says about it. And our best current physics is quantum mechanics (quantum field theory to be precise). For this reason I think, following Ladyman, Ross and Spurrett [6], that metaphysicians should be informed by our best physics rather than work on a dated conception of the physical, or, as they say provocatively, on "A-level chemistry." (Ladyman, Ross and Spurrett note that some of Kim's central arguments rely on conceptions of the physical that are no longer accepted by physicists. The same goes, I would say, for thought experiments involving clones and mind duplication: the no-cloning theorem in quantum mechanics precludes the possibility of such perfect physical duplication [7]).

I am not saying that all metaphysicians should be trained in contemporary physics to produce valuable work (Kim’s Mind in a Physical World is very valuable and important, in my opinion), but contemporary physics is definitely a place we should look at to address fundamental issues in the philosophy of mind. My overall impression is that this is hardly the case today, although such inputs are considered in Chalmers’ The Conscious Mind [8].

Does embracing a quantum mechanical view of the physical really change the perspective for the metaphysics of mind? At the very least metaphysical interpretations of the physical inspired by contemporary physics could open new avenues to be explored, and, perhaps, help make progress on important conundrums in the field, such as the problem of mental causation. It seems to me that there are no good reasons not to follow this path.

Is the mind foreign to the measurement problem?

Which leads us directly to the second point, i.e., the first thesis sketched above: that the mind is relevant in interpreting quantum mechanics. The idea was initially proposed by some physicists as a solution to the measurement problem — the problem of reconciling the theoretical structure of quantum mechanics, which describes non-local "superpositions of states," with actual phenomena, where no superposition is ever observed. The theoretical structure does all the predictive work, so to speak (apart from the Born rule, which maps the structure to outcome probabilities [9]), and ultimately, the fact that no superposition exists for measured quantities is only ascertained by our conscious observation. Hence the idea that it is the mind which makes the wave-function "collapse." Of course there are other, less anthropocentric theories, such as Bohm's, Ghirardi-Rimini-Weber [10] or the infamous many-worlds interpretation [11].

The main type of objection against interpretations involving an observer, I would say, is that they seem too reminiscent of either 19th century Idealism or early 20th century neo-Kantian and phenomenalist views (which did strongly influence said physicists). These doctrines have declined in favor of a renewal of scientific realism in the course of the 20th century.

From a realist perspective, such interpretations seem to attribute a privileged ontological status to the human brain, which is hardly acceptable. Was there really no definite reality before life appeared on Earth? Does the moon vanish when no one is looking? All this seems barely good enough for mystics and new age gurus (there might be more sensible anti-realist interpretations, but let's not quibble…). However, having previously rejected the idea that phenomenal aspects of consciousness are to be addressed by biology, we can easily defuse all of this: a privileged ontological status of human observers only makes sense for those who pretend that biology can inform deep metaphysical questions.

Let me be more specific and draw on an example. I suggested that phenomenal aspects of consciousness could eventually be explained under a proper interpretation of physics. A possible such explanation could take the form of panpsychism: the idea that, somehow, all matter is conscious. In fact, by distinguishing phenomenal aspects from cognitive aspects of consciousness and relegating the former to physics and the latter to biology or psychology, we would have something like panphenomenalism: the idea that all matter is “phenomenal.” Anyway, in the context of either panpsychism or panphenomenalism, granting a particular role to phenomenality in physics, say, in the collapse of the wave function, does not amount to granting a privileged ontological status to the brain.

Perhaps panpsychism is implausible, but panphenomenalism fares a bit better in my opinion. Obviously, tables and chairs are not conscious. According to panphenomenalism, what they lack is not phenomenality (which would be a feature of their fundamental constitution) but cognitive abilities. Phenomenality without memory, persistence, information integration and a capacity for world and self representation is simply not awareness, or not full awareness — it is at best being transiently aware of nothing identifiable, without the very possibility of knowing that one is or was aware, nothing close to consciousness. I would readily grant this feature to electrons if it could convincingly explain some relevant metaphysical issue.

Another frequent objection against panpsychism is the so-called combination problem: if phenomenal aspects are present in the microscopic constituents of reality, how is it that we have a unified phenomenal experience? I don’t have an answer to this question, but it is not specific to panpsychism (it is a version of the binding problem also found in computational theories of mind, for example). My guess is that it has something to do with a link between quantum entanglement and cognition, perhaps in line with Tononi’s integrated information theory [12], but this is pure speculation. In any case, quantum holism, if accepted, seems to provide a good basis to answer this [13], whatever quantum-mind theory we endorse.

At any rate, although I find it attractive, my goal is not to convince you that panphenomenalism is the one true theory of mind, but to illustrate the fact that one can make sense of an involvement of the mind in the interpretation of quantum mechanics without falling back into Idealism. And, of course, there are other alternatives too, such as Eccles’ dualism for example, or Stapp’s kind-of dual aspect theory, or perhaps some versions of neutral monism.

Another common objection to considering a role for the observer in the measurement problem is that it involves non-locality, which is at odds with Lorentz invariance in special relativity. This is actually a potential problem for most collapse interpretations of quantum mechanics (though apparently GRW theory does not face it). However, invoking phenomenal aspects in a solution to the measurement problem does not necessarily involve an objective wave-function collapse: it could involve, say, a relational or a modal interpretation of quantum mechanics [14]. Which interpretation of quantum mechanics best accounts for phenomenal aspects, depending on which theory of mind we endorse, is precisely the kind of question which should be addressed in the philosophy of mind.

In sum, my goal is not to defend one or the other interpretation of quantum mechanics, nor to defend one or the other theory of mind, but rather to stress the relevance and potential fruitfulness of discussions relating these two domains of inquiry. The hard problem of consciousness and the measurement problem in quantum mechanics share a strong conceptual affinity: both concern the relations between physical structure and phenomenal aspects of reality, broadly construed. Either the world viewed from the mind, or the mind viewed from the world, if you like. This conceptual affinity should not be neglected on the grounds of unfounded suspicions of Idealism or anti-realism or any other similar concern. The example of panphenomenalism above shows that a common treatment of both problems might be explored without presenting insurmountable obstacles, something worth pondering.

Yet, in spite of the conceptual affinity between these two central problems of philosophy, talk of quantum mechanics in the philosophy of mind is often brushed aside. At the same time, talk of consciousness and rational agents in, say, discussions on the many-worlds (or many-minds) interpretation of quantum mechanics is ubiquitous, and difficult to avoid. Both camps act as if important issues in the other camp were already settled. This is a strange situation. Aren't we perhaps missing something by being too compartmentalized? One of the main roles of philosophy — and metaphysics in particular — is after all to provide a unified picture of the world. Is it inconceivable that some considerations in the philosophy of mind (or other areas of philosophy) might inform our interpretations of physics as much as the converse?

Is quantum mechanics useful at all?

To conclude, let me address a final worry that I have so far left aside: that quantum mechanics is of no help in explaining the mind at all. I don’t know about the debate concerning the relationship between free-will and randomness — except that randomness in quantum mechanics is closely tied to the measurement problem, and that what we mean by “randomness” is also up to interpretation. (Shouldn’t we say “unpredictability” instead? Or shall I suggest “physical privacy”?)

Besides, I do not claim that quantum mechanics can explain consciousness. My argument is more modest: the question of phenomenal aspects of consciousness should be addressed in relation to quantum mechanics, because only our best physics can inform such metaphysical questions, and because quantum effects are not necessarily confined to the microscopic realm. Moreover, it should be addressed in relation to the measurement problem, because they share conceptual affinities, and because the "threat" of Idealism is unfounded. All I claim is that a suitable metaphysical interpretation of quantum mechanics could eventually solve the metaphysical problem of consciousness.

Having said that, some features of quantum mechanics, such as non-locality/holism or the no-cloning and free-will theorems [15], could eventually help address some questions in the philosophy of mind, such as the binding problem or the problem of causal exclusion.

In light of this, quantum mechanics certainly deserves more consideration in the philosophy of mind. In my view, claiming that quantum effects reduce to “microscopic noise” simply disregards the epistemic depth of the measurement problem, just as claiming that the problem of consciousness is essentially biological disregards its ontological depth. These two “dogmas” of philosophy of mind are mutually reinforcing and we should reject them altogether if we want to make sense of consciousness as well as of quantum mechanics.

_____

Quentin Ruyant is a PhD student in philosophy of science in Rennes, France and former engineer. He maintains a blog dedicated to the popularization of philosophy of science (in French)

[1] Quantum approaches to consciousness, Stanford Encyclopedia of Philosophy.

[2] Macroscopic entanglement witnesses.

[3] Deconstructing decoherence.

[4] Our mathematical universe;  and Why physicists are saying consciousness is a state of matter, like a solid, a liquid or a gas.

[5] See “The completeness of the physical,” in Mental causation, Stanford Encyclopedia of Philosophy.

[6] Every Thing Must Go: Metaphysics Naturalized.

[7] No-cloning theorem.

[8] The Conscious Mind: In Search of a Fundamental Theory.

[9] The Born rule.

[10] On collapse theories and the Ghirardi-Rimini-Weber model.

[11] See this recent essay by Sean Carroll about why the many-worlds interpretation of QM is not that crazy after all.

[12] Integrated information theory.

[13] See: Holism and nonseparability in physics, Stanford Encyclopedia of Philosophy.

[14] Modal interpretations of quantum mechanics and Relational quantum mechanics, Stanford Encyclopedia of Philosophy.

[15] Free will theorem.



Categories: essay


200 replies

  1. DM, Robin,

    Easy problems = intelligence.
    Hard problem = consciousness.

    Well no. Both the hard and easy problems – according to Chalmers – pertain to consciousness, not intelligence. You can't just change the topic arbitrarily midway. Ergo, your conclusion doesn't follow, and I stick to mine: the hard problem (as distinct from the easy ones) doesn't exist.

    think of the OpenWorm project. If that succeeds then it will be a simulation of C. elegans.

    Yes, which as you well know, I think is entirely distinct from an actual C. elegans.

    It is not the same as the p-zombie argument but a similar point is being made.

    Not at all, since consciousness is the result of physical attributes of the human brain, not just of information. Indeed, that sort of reasoning is precisely why I call DM a dualist: he thinks one can decouple consciousness from specific physical substrates, I don’t. (Of course he then turns around, claims without evidence that everything is virtual anyway, so he turns out to be a monist after all – just not a physicalist.)

    whatever “metaphysical possibility” might mean

    Right, I’m beginning to think that that category is empty. I can wrap my mind around physical and logical possibilities, but I don’t know how to determine whether something is metaphysically possible or not. And neither do metaphysicians, I bet.

    the possibility of the human simulation that models all the externally observable behaviours of a human seems to be a consequence of Naturalism.

    Sure. They just wouldn’t be human beings, but simulations of human beings.

    problem does not rely on any assumption about whether or not such a simulation would be conscious

    As far as I’m concerned to say that a simulated human may be conscious is like saying that simulated water may be wet. Nonsense.

    strictly speaking it is logically necessary for H2O to have the physical properties it does because anything that did not have those properties would not be H2O but something else.

    Not at all. H2O has to have the properties it does only given the particular physical laws of this universe, which don’t seem to be logically necessary.

    A p-zombie, on the other hand, would have exactly the same mathematical description and would behave exactly as does a person

    I’m not sure what that means. If a p-zombie is identical in every respect to a human being, then in this universe it has to have phenomenal consciousness. In another universe, who the hell knows.

    I think that if you say that it is logically possible for something to have our precise physical make up and behave exactly as we do (including talking about consciousness and subjective experiences such as pain) then you are saying there is a problem.

    But I am not. If two things are physically identical they must behave exactly the same, internally and externally. So p-zombies are physically impossible.

    The fact that this logical possibility is not instantiated is not to the point, but if it is logically possible then you are saying that physics does not require the hypothesis of consciousness to describe the universe

    Consciousness is not a hypothesis, it is a fact to explain.


  2. Hi Massimo,

    Thanks for that.

    It's clear to me that the reason we are going in circles is that we interpret Chalmers differently.

    Your interpretation of Chalmers would indeed make nonsense of what he is saying. But this is not my interpretation.

    What Chalmers means, in my view, is that the easy problems pertain to intelligence, namely how information is gathered, integrated, processed, etc., so as to produce intelligent behaviours.

    In Chalmers’ own words, he describes the easy problems as the following:

    the ability to discriminate, categorize, and react to environmental stimuli;
    the integration of information by a cognitive system;
    the reportability of mental states;
    the ability of a system to access its own internal states;
    the focus of attention;
    the deliberate control of behavior;
    the difference between wakefulness and sleep.

    These are all properties that could be programmed into an unconscious computer system in your view. They pertain to the objective, behaviouristic aspects of consciousness, but have nothing to do with phenomenal consciousness/qualia/experience.

    So, it appears to me that your dim view of the hard problem of consciousness arises out of a misconceived interpretation of what Chalmers is saying. If you think he is vastly overestimated, even by people like me who disagree with him, perhaps you ought to reconsider whether you’re missing something in his arguments.

    I suggest you read Chalmers again with this interpretation in mind.

    http://consc.net/papers/facing.html


  3. DM, my contention is that there is nothing beyond that list, because items on that list do imply consciousness already (e.g., “the focus of attention,” which Jesse Prinz actually suggests is the key to consciousness; or take the difference btw wakefulness and sleep!). And I’m certainly not the only one to, allegedly, misunderstand and underestimate Chalmers. Dennett, most famously, is on record as saying that there is no such thing as the hard problem.


    Just a comment about a computer-simulated worm (C. elegans) vs. a real worm. There are actually three things: a real worm, an assembled worm, and a computer-simulated worm. A computer-simulated worm (on my MacBook Pro) is not going to get wet or process chemicals or have any experience that equals the real worm. But an assembled (robotic) worm (based on the model in my computer) composed of the right material could get wet and process chemicals like a real worm. If a conscious device could be made someday, I think it would be an assembly, not a simulation. (I don't know if quantum processing would be going on in such a device. Maybe so.)


  5. Hi Massimo,

    Dennett is not misunderstanding Chalmers. Dennett is a computationalist. Dennett thinks that solving the easy problems (which could be solved in a computer program) would entail solving the hard problem (making that computer program conscious). Like Dennett, I see no hard problem.

    But you are a biological naturalist, not a computationalist, which means you see a difference between the simulation of consciousness (easy problems) and actual consciousness (hard problem).

    We run into problems when discussing concepts such as ‘attention’ which are assumed by some (you) to imply consciousness and are assumed to apply only metaphorically to computers. But a simulated person could pay pseudo-attention in the sense that its information processing apparatus could put different weights on different stimuli. It could ignore certain stimuli completely while exclusively devoting all available resources to processing others. As a computationalist, I see no difference between this concept of metaphorical pseudo-attention and the usual meaning of attention (so, I call both ‘attention’). Chalmers may not be a computationalist, but I think he is open-minded enough to use this consciousness-agnostic interpretation of ‘attention’.

    The same goes for wakefulness and sleep. A simulated person would have sleep cycles just like a real person. It is these objectively-accessible, behaviouristic aspects of intelligence that Chalmers considers the easy problems.


  6. Ok maybe my last comment was not very clear. I more or less understand what you say.
    The problem is that you are not making a precise point, but you are apparently arguing for a whole framework of thought loaded with presuppositions that would deserve examination. You cannot convince me in one comment that your system is valuable–perhaps that would require a few full books where it is carefully developed in all its details and confronted with a century of research in epistemology to show that your framework is non-trivial, non-deficient and really novel (and I suppose it would not be the same system anymore if you really did that).
    Maybe I am wrong and you are making a precise point, but I fail to see it.

    Just to illustrate take your first sentence:

    > “‘mind’ is an empirical reality, zillion times more empirical than any physics-data ”

    What do you mean by “more empirical”? Do you endorse an observational/theoretical dichotomy as logical empiricists did? If so, what do you make of all the objections to this dichotomy (the “myth of the given”, the fact that observation is loaded with theory, or that mixed predicates apply to both observational and non-observational terms for example)? Can you give us a precise, formal definition of what a “degree of empiricality” would be?
    Also you compare the "mind" with physics "data". By "mind", do you mean some kind of theoretical entity postulated by folk psychology or something else? And what is "physics data": the raw output of instruments? The mathematical models built from this output? Isn't it a category mistake to compare the mind to "data", or do you mean some kind of "mental data" instead of "mind" (sense-data maybe)? If this is so, you'll have to confront the literature: sense-data are not very fashionable these days. Or maybe you mean "physics theoretical entities" (quarks, electrons) instead of "physics-data"?
    If this is so, it doesn’t seem to me that the mind, as a theoretical entity, is “more empirical” than physical entities.

    And this is just the first sentence. I don’t want you to answer this, the point is that I could go on like this for every sentence of your last comment, which gives me the impression that you are trying to argue for a whole system of presuppositions rather than on a particular point. I cannot evaluate your system in one comment.


  7. DM, this is going to be my last comment, unfortunately, got pressing work to do. This has nothing to do with Dennett being a computationalist (which, by the way, I’m not sure he is; certainly not in the same way in which you seem to be). Dennett makes clear that he thinks all the problems of consciousness are scientific problems that can be handled by naturalistic science. Which is what Chalmers denies. I agree with Dennett even though our specific naturalistic paths diverge.


  8. Hi Massimo,

    I don’t think I’ve ever disagreed with anything Dennett has said on consciousness.

    Chalmers does not really deny that science can explain consciousness, rather he has suspicions in that direction. He’s just articulating a problem as he sees it.

    He does not so much claim that Dennett is wrong as that Dennett’s view is radical and implausible.

    Thanks for the conversation.


  9. Hi Robin,
    Actually I am more convinced by the substance of the argument than the details.
    My interpretation (which might not be faithful to Chalmers) is this: under our current conception of the physical, it is conceivable that a world exists which is physically indistinguishable but lacks phenomenal aspects.
    Chalmers infers that materialism is false. I would rather infer that our conception of the physical needs to be updated somehow.


    I suppose Massimo's view is that we might be able to conceive a zombie today because we lack the relevant knowledge, but once the "easy problems" are solved, our conceptual network will probably have changed in such a way that it will be clear that zombies are physically impossible, and that once you have the cognitive aspects, you have it all?
    In a sense, Chalmers argument would be grounded in a lack of imagination on the future progress of science (drawing too easily metaphysical conclusions from what he can conceive).

    I tend to agree with this view (I am not a fan of “metaphysical possibilities”), but to me the crucial point is: can we know a-priori that our conception of the physical will have changed then? To me the answer is yes, and that’s precisely in this sense that there is indeed a “hard problem”, because changing our conception of the physical is a metaphysical move.

    As far as I can see, there are two contradictory intuitions in the debate:
    – physics cannot account for phenomenal aspects: it’s just configuration of stuff…
    – zombies are impossible in our physical universe: everyone with the right physical base is conscious.

    I think both intuitions are valid, but it only implies that our current conception of the physical needs to be updated before we solve the problem of consciousness.


  11. Hi Quentin,

    … it is conceivable that a world exists which is physically indistinguishable but lacks phenomenal aspects. […] I would rather infer that our conception of the physical needs to be updated …

    How does one leap from it being *conceivable* that our conception of physics is incomplete, to the stance that our conception of physics *is* incomplete and thus *does* need updating?


  12. Hi Quentin,

    I suppose Massimo's view is that we might be able to conceive a zombie today because we lack the relevant knowledge, but once the "easy problems" are solved, our conceptual network will probably have changed in such a way that it will be clear that zombies are physically impossible, and that once you have the cognitive aspects, you have it all?

    Except that Massimo allows that it may be possible to build a computer that has all the cognitive abilities of a human and processes information in a way precisely analogous to a human, but does not have any phenomenal consciousness. If not a p-zombie, this is some other kind of zombie (a computational zombie?), and as far as I can see many of the important points made about p-zombies in distinguishing the easy from the hard problems can also be made about c-zombies.

    Like

  13. Massimo doesn’t actually allow any such thing. Massimo thinks consciousness is a cognitive ability, and further thinks computers can’t have it. So…

    Like

  14. Yes. Idealism of this kind is no more workable than materialism. One has to get beyond the idealism/materialism dichotomy. Then you get Transcendental or Absolute Idealism.

    Like

  15. Massimo,

    It’s very difficult and increasingly frustrating to have these discussions when pretty much every term is interpreted so as to presuppose consciousness. If you take cognition to include consciousness, then of course once we have solved all the cognitive problems there is nothing else. The sentence is empty unless you take a consciousness-agnostic interpretation, which is what I intended Quentin to mean (perhaps he can clarify).

    The first sentence on Wikipedia regarding cognition is “In science, cognition is mental processing that includes the attention of working memory, comprehending and producing language, calculating, reasoning, problem solving, and decision making.”

    Every one of those has an analogue in a c-zombie and so a consciousness-agnostic interpretation.

    Like

  16. The move is from the conceivability of a physically indistinguishable entity which lacks phenomenality to the conclusion that our physics is incomplete.

    Like

  17. Quentin – Mostly I completely agree with what you say. But not always. You posted this a while back. No reply button was available in situ so I’ve quoted it and added interjections.

    “I suppose Massimo’s view is that we might be able to conceive a zombie today because we lack the relevant knowledge, but once the “easy problems” are solved, our conceptual network will probably have changed in such a way that it will be clear that zombies are physically impossible, and that once you have the cognitive aspects, you have it all?”

    I would say it is already perfectly obvious that zombies are impossible and inconceivable. I can conceive of an entity without consciousness, but not one that has discussions about consciousness on internet forums.

    ” In a sense, Chalmers’ argument would be grounded in a lack of imagination about the future progress of science (drawing metaphysical conclusions too easily from what he can conceive).”

    Chalmers gets a rough ride here, and I feel it is mostly due to a misunderstanding of his ideas. He argues that we have something that zombies don’t. It does not seem a contentious idea, given that we have expressly defined zombies in this way.

    “I tend to agree with this view (I am not a fan of “metaphysical possibilities”), but to me the crucial point is: can we know a priori that our conception of the physical will have changed then? To me the answer is yes, and that’s precisely in this sense that there is indeed a “hard problem”, because changing our conception of the physical is a metaphysical move.”

    Couldn’t agree more. This conception is not going to change while metaphysics is ignored.

    “As far as I can see, there are two contradictory intuitions in the debate:
    – physics cannot account for phenomenal aspects: it’s just configuration of stuff…
    – zombies are impossible in our physical universe: everyone with the right physical base is conscious.

    I think both intuitions are valid, but it only implies that our current conception of the physical needs to be updated before we solve the problem of consciousness.”

    Why are these contradictory ideas? To me they would both be true, and they would both say the same thing.

    Which only goes to show how confusing these topics can be.

    Like

  18. Hi Quentin,

    I’m still baffled. How do we get from it being *conceivable* that our physics is incomplete to the conclusion that our physics *is* incomplete?

    We can surely always conceive of things that don’t actually exist. I can conceive of Cartesian dualism, but that alone is not an argument for dualism.

    Like

  19. Hi Coel,

    If in the actual world P holds, but it is conceivable that ¬P would hold with the same physics, then our physics cannot explain why P rather than ¬P holds.

    This is not unlike matter/anti-matter asymmetry. It indicates a gap in our knowledge.

    Like

  20. DM, oh yeah, I share the frustration! If you don’t think consciousness is a type of cognition, what do you think it is? “Cognition” means anything to do with thought processes. Aren’t emotions and perceptions, including perceptions of internal states, thought processes? According to Damasio self-consciousness is a type of internal cognition of one’s own mental states, so it falls right into the group of so-called “easy” problems.

    The difference between Dennett and me is a red herring here: both he and I agree that there is no hard problem, in the sense that neurobiology will figure out (if it is possible) how consciousness arises in physical systems. The only difference is that he thinks it can arise in a much broader class of physical systems than I am willing to grant without further evidence. Chalmers, instead, claims that there is reason to think that science is intrinsically insufficient to understand consciousness, because the alleged hard problem isn’t going to be solved even if and when we have all the pieces of the physical puzzle. And why not? Because he can conceive otherwise…

    Like

  21. If the same physics holds it is NOT conceivable that things would be different. It is a delusion that such thing is conceivable, the very type of delusion Chalmers has made a career of promulgating.

    Like

  22. Hi Massimo,

    If you don’t think consciousness is a type of cognition, what do you think it is?

    We’re taking different interpretations of cognition here.

    I take cognition to be the kind of information processing performed by brains, the kind of thing that would be reproduced by computers running a brain simulation. In the terminology we agreed before, I am interpreting cognition as intelligence. From context, I believed (and still do) that this is the meaning Quentin intended, because otherwise his sentence would be a trivial tautology. For me, as a computationalist, consciousness is indeed an aspect of cognition.

    However, on biological naturalism, my interpretation of cognition as information processing would make consciousness separate, because a computer can have cognition but no consciousness.

    “Cognition” means anything to do with thought processes. Aren’t emotions and perceptions, including perceptions of internal states, thought processes?

    Sure, but for each of those there is an objective and a subjective aspect. I’m going to refer as a c-zombie (computational zombie) to a simulated person who has all the outward appearance of a real person, as well as processing information in a way analogous to a real person but who, on biological naturalism, has no phenomenal experience.

    A c-zombie would have what objectively appears to be emotions, perceptions, perceptions of internal states and so on. The only thing a person has that a c-zombie does not have is phenomenality. A c-zombie could not function without pseudo-emotions and pseudo-perceptions and so on — whatever we call them, something has to fill the cognitive roles of true emotions and perceptions in order for the zombie to function as a human. I interpret cognition to consist of these functional roles.

    According to Damasio self-consciousness is a type of internal cognition of one’s own mental states, so it falls right into the group of so-called “easy” problems.

    And a c-zombie would have something fulfilling the cognitive role of self-consciousness. It would have an internal representation of self, it would be able to report on its own state, and it would take its own state into account when reasoning and decision-making. This is the easy problem. The hard problem is adding the special sauce that transforms pseudo-self-consciousness to actual self-consciousness.

    The difference between Dennett and me is a red herring here

    I think the difference is crucial. Dennett’s class of conscious physical systems is not only broader, it is exhaustive. For Dennett, any system at all that can be made to process information in a way analogous to a brain is conscious. For you, only a small subset is. You are therefore drawing a distinction Dennett does not, so you have a problem of how to define or support that distinction. That problem is essentially the hard problem of consciousness. Dennett can deny the hard problem exists because he doesn’t think the distinction exists. It seems to me that you cannot do so consistently.

    Like

  23. Hi DM,

    If in the actual world P holds, but it is conceivable that ¬P would hold with the same physics, then …

    Well hold on, I can conceive of letting go of a brick and it floating in mid-air (in the manner of Vogon spaceships). I can even conceive of the brick floating “… with the same physics”.

    Of course that just means that my conception is inconsistent. I can “conceive” of all sorts of things that can’t actually work and violate physics, such as me flapping my arms and flying.

    So the fact that someone can “conceive” of a p-zombie that lacks qualia *is* *not* a demonstration that that is consistent with known physics and known biology!

    All it is is a demonstration that humans can scheme up and conceive of all sorts of things regardless of whether those things are internally consistent or consistent with real-world physics.

    Like

  24. Hi Massimo,

    If the same physics holds it is NOT conceivable that things would be different. It is a delusion that such thing is conceivable, the very type of delusion Chalmers has made a career of promulgating.

    Yes and no. It is conceivable that we could be missing knowledge of some laws of physics that are important for consciousness but not for anything else. It could therefore be the case that there is a possible world which shares the same known laws of physics but which differs in the unknown laws of physics which are required for phenomenal consciousness.

    So the physics in our text books could be compatible with a possible world populated by p-zombies.

    If this is the case, then our knowledge of physics would appear to be incomplete. Indeed, this does appear to be the case if we cannot derive phenomenal consciousness from our physics but we persist in believing that phenomenal consciousness is a physical phenomenon.

    Like

  25. Exactly right, Coel, thank you.

    Like

  26. It isn’t just conceivable that we are missing knowledge; we know that to be the case. But that trivial bit helps Chalmers not at all.

    Like

  27. Hi Massimo,

    Right, but you need to pay attention to the kind of knowledge Chalmers imagines we’re missing.

    Chalmers holds that it is conceivable that there are physical laws that have no effects whatsoever apart from making certain configurations of matter experience phenomenal consciousness. This is a far out idea I personally don’t believe, but if you are a biological naturalist it would seem to be within the space of possibilities. It could be that these laws are what allow biological matter to be conscious but not electronics.

    A person in a universe without these laws would be physically identical to us, having atoms and molecules arranged in precisely the same way, but without these laws, those atoms and molecules could no more support consciousness than could a silicon computer. These people would in fact be p-zombies.

    Like

  28. Quentin Ruyant: ” I don’t want you to answer this, … you are trying to argue for a whole system of presuppositions rather than on a particular point. I cannot evaluate your system in one comment.”

    Although you do not want me to answer your critique, since you cannot evaluate my system, that would not be fair, as you did evaluate my comment and gave it a very negative score (as loaded with presuppositions). Thus, please allow me to say a few words here.

    First, what do I mean ‘empirical’? For me, there are only two types of empirical.
    1. Theory-loaded empirical, such as LHC data (the background calculation is totally theory-dependent). In fact, all Popperian-type data is theory-loaded, subject to fallibility.
    2. Non-theory-loaded empirical, that is, sense-empirical, concept-empirical, logic-empirical, etc.

    Quentin Ruyant: “… Also you compare the “mind” with physics “data”. … sense-data are not very fashionable these days. Or maybe you mean “physics theoretical entities” (quark, electrons) instead of “physics-data”?”

    Indeed, you have totally misevaluated my comment. No, I have never compared the ‘mind’ with physics ‘data’. I am talking about a ‘connecting-thread’: the highest-tier manifestation (the mind) is connected to a ‘base’ (the laws of physics, not data) via a ‘connecting-thread’. As you have missed this vital point (mistaking laws for data, and mistaking connection for comparison), your critique has no bearing on my points.

    Quentin Ruyant: “… By “mind”, do you mean some kind of theoretical entity postulated by folk psychology or something else?”

    Wow! Why don’t you ‘read’?
    { What is ‘mind’? Instead of giving a clear definition, I will just discuss three of its attributes (while it could have many more).
    a. It produces ‘intelligence’.
    b. It produces ‘consciousness’.
    c. It houses a ‘well’ of morality (free will: goodness/evil).}

    I thought that I did make this point super clear.

    Quentin Ruyant: “…but you are apparently arguing for a whole framework of thought loaded with presupositions that would deserve examination. … perhaps that would require a few full books where it is carefully developed in all its details and confronted with a century of research in epistemology to show that your framework is non-trivial, non-deficient and really novel …”

    If you follow the links that I provided in my first comment, you will find a few full books on these issues. Why do you treat your not knowing those books as a fault of others? Let me try one more time.

    One, Intelligence: see the links in the first comment.

    Two, Consciousness: https://scientiasalon.wordpress.com/2014/04/25/plato-and-the-proper-explanation-of-our-actions/comment-page-1/#comment-1245 and https://scientiasalon.wordpress.com/2014/04/25/plato-and-the-proper-explanation-of-our-actions/comment-page-1/#comment-1265 .

    Three, presuppositions: https://scientiasalon.wordpress.com/2014/07/10/string-theory-and-the-no-alternatives-argument/comment-page-1/#comment-4599 and https://scientiasalon.wordpress.com/2014/07/10/string-theory-and-the-no-alternatives-argument/comment-page-1/#comment-4805 . Note: Accusing others of presupposition without doing any research is not a scientific way to proceed.

    Like

  29. Hi DM,

    If this is the case, then our knowledge of physics would appear to be incomplete.

    All this is saying is that *if* our knowledge of physics (relevant to zombies) is incomplete, then our knowledge of physics (relevant to zombies) is incomplete.

    Yes, granted, but how do we leap from there to the stance that our physics is indeed incomplete?

    Like

  30. Hi Coel

    But some of us dispute that a non-conscious organism could be equally functional. In order to function equivalently it would need all the capabilities of consciousness and thus would *be* conscious.

    I’d be curious what evidence you can provide to back up that claim; the simple fact that some organisms have consciousness does not mean it necessarily serves a function. Moreover, very simple devices we have that can detect various properties such as color presumably don’t have consciousness and function just fine. If we keep adding up (solving the easy problems), where do we get the need to suddenly include consciousness to get some additional function?

    Well no, currently we cannot make machines anything like as functionally capable as a conscious mammal.

    I never said we have those machines currently, only that we could build them in principle. There is a level of complexity we have not achieved, but I don’t see any reason why machines could not mimic more complex functions similar to organisms.

    Further, how do you know that such machines are *not* conscious (in a very rudimentary form of consciousness commensurate with their very rudimentary capabilities)?

    If you mean conscious in the sense that they have qualia, it’s an assumption I’m making but one I don’t think is reasonable to deny. It’s similar to the way I would say a bacterium or a rock is not conscious in the sense of having qualia (or, in the rock’s case, has no consciousness at all).

    If we developed such machines to the functional decision-making capability of, say, a chimpanzee, wouldn’t we have to program in sufficient awareness of its sensory input and of its own state that it would then be conscious?

    Well, that is the very thing we are discussing. Unless we can somehow show it has qualia, I would say no, or at least “I don’t know”, because it would be hard to figure out if the machine is functionally acting like a chimp or functioning like a chimp + qualia. The only way out of that is to get an understanding of qualia, which I’m arguing is even in principle impossible to do, hence the hard problem. I’m not arguing that it’s not a physical thing; I think it’s very much physical and biological, but simply outside of our epistemological range.

    Like

  32. Hi Coel,

    All this is saying is that *if* our knowledge of physics (relevant to zombies) is incomplete, then our knowledge of physics (relevant to zombies) is incomplete.

    I wouldn’t put it quite that way. It’s saying that it is possible that there is no way to derive consciousness from our present laws of physics, and if this is the case our knowledge of physics is incomplete.

    And if the new laws of physics we need are only relevant to consciousness, then it seems we could never confirm their existence by independent means, so our knowledge of physics will always be incomplete.

    Like

  33. Hi imzasirf,

    If we keep adding up (solving the easy problems), where do we get the need to suddenly include consciousness to get some additional function?

    The view is not that we need to add consciousness to get some additional function. It is that having the wherewithal to perform certain functions entails having consciousness, just as it entails having complexity.

    Like complexity, consciousness is not an extra spice you have the option to add or not add. It is a feature of certain systems. There are certain functions that cannot be performed without complexity and there are certain functions that cannot be performed without consciousness.

    So you never have to explicitly add consciousness; it is an emergent property of systems that implement certain combinations of the ‘easy’ problems.

    Like

  34. Hi imzasirf,

    Moreover, very simple devices we have that can detect various properties such as color presumably don’t have consciousness and function just fine. If we keep adding up (solving the easy problems), where do we get the need to suddenly include consciousness to get some additional function?

    This question suggests that, IMO, you are thinking about consciousness the wrong way. If it is a binary on-off, such that one either has consciousness to the degree a human has it, or no consciousness at all, then I agree that it cannot be a physical thing.

    But, biological properties such as consciousness, intelligence, awareness, etc are always continua.

    Your simple machine **does** have consciousness, and *needs* consciousness to function. However, it only has a millionth the consciousness that we do, because it is a simple machine with limited abilities and function.

    A slightly more complex machine has two-millionths of our consciousness (it needs slightly more for its slightly more complex function); a slightly more complex machine still has three-millionths, et cetera, all the way to us.

    If you like, think of these machines as the succession of our last million ancestors, and the gradual development of capabilities over that time.

    Or think of the million steps between a fertilized egg and a 3-yr-old, and the development in both functionality and consciousness over that time.

    Thinking of “consciousness” as something *additional* to function, or as an optional add-on, is going wrong. Think of it as an integral part of the function.

    We are aware of our visual field because being aware of it is useful (we can detect a hiding lion).

    If we are unconscious (asleep or under medical sedation) then we can’t spot the prowling lion — we lose the function!

    Imagining a p-zombie is akin to imagining an animal that is aware of its visual field and yet not aware of its visual field. It is nonsensical.

    Like

  35. Would it be possible to understand/analyze light without bouncing it off or filtering it through some material? Waves, photons, spectrum, intensity, etc., are all aspects of this interaction, and the opposite holds true: without light, we wouldn’t be able to analyze the structure of these various materials.
    It seems the same would hold true for the relationship of consciousness to intelligence. What would even a computer be if there were no consciousness to question whether it is also potentially conscious?
    Is a six-month-old baby really less conscious than a fifty-year-old man? Having been both, frankly I’d argue the baby could be more conscious, but lacking the intellectual residue of fifty years: the filter through which that consciousness shines, and which absorbs its energies.

    Like

  36. Maybe it’s redundant with what DM said, but I would state it as follows:
    – If our physics is complete, then we can in principle deduce any property an object has from its physical state.
    – If this is so, then it is not conceivable (it is logically incoherent) that any object with the same physical state lacks any of the properties the object has (same state, same properties).
    – However, it is conceivable (it is not logically incoherent) that a zombie with the same physical state as yours exists but lacks phenomenal consciousness.
    – Therefore our physics is not complete.
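
    Schematically, the argument is a double modus tollens. This is my own shorthand, not Chalmers’ wording: write C for “our physics is complete”, S for “same physical state entails same properties”, and Z for “a zombie duplicate is conceivable (not logically incoherent)”:

    ```latex
    % C: our physics is complete
    % S: objects in the same physical state have the same properties
    % Z: a zombie duplicate (same physical state, no phenomenal
    %    consciousness) is conceivable, i.e. not logically incoherent
    \begin{align*}
    \text{(P1)}\quad & C \rightarrow S \\
    \text{(P2)}\quad & S \rightarrow \neg Z \\
    \text{(P3)}\quad & Z \\
    \therefore\quad  & \neg C && \text{(modus tollens applied twice)}
    \end{align*}
    ```

    (P3) and (P2) give ¬S, and ¬S with (P1) gives ¬C. The dispute below is essentially over (P3): whether zombies really are conceivable in the relevant sense.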

    Like

  37. > I would say it is already perfectly obvious that zombies are impossible and inconceivable.

    Yet Chalmers’ argument is based on their conceivability…

    > Chalmers’ gets a rough ride here

    I quite agree 🙂
    Could have been more charitable.

    > Why are these contradictory ideas?

    At least they seem contradictory: the first seems to say that phenomenal aspects are not physical, the second seems to say that they are…

    Like

  38. I’m going to re-read Chalmers. Maybe I have this wrong. The thing is, I find it makes no difference to the result of his argument whether or not zombies are conceivable. It is whether they’re possible that seems to matter.

    You know, try as I might, I cannot see how your two statements contradict each other.

    – physics cannot account for phenomenal aspects: it’s just configuration of stuff…
    – zombies are impossible in our physical universe: Everyone with the right physical base is conscious.

    I would rather just say they are impossible, like married bachelors.

    The way I see it, the second statement here says, in other words, that consciousness cannot be accounted for by physics. It also seems to say that at least some physical phenomena are conscious, and that human beings are among them. The first statement adds nothing to it, or could be derived from it.

    I’m struggling to see how the second statement could be interpreted as stating that phenomenal aspects are physical. Does it not say that the physical universe reduces to phenomenal aspects?

    Anyway, this is why I think both of the statements are true rather than contradictory.

    I see an objection. If we give a zombie the ‘right physical base’, then it will be conscious. But all this means is that when we assign it the right physical base, a zombie is conscious. Thus it is not a zombie. Thus zombies are impossible. Whichever way we cut it, human beings have something that physics knows nothing about.

    If you don’t have time for this please don’t worry. There’s a lot going on here.

    Like

  39. Hi Quentin,
    Your summary is clear and thus illustrates where we disagree very well. I agree with your first statement, but:

    if this is so, then it is not conceivable (it is logically incoherent) that …

    I wouldn’t regard being “conceivable” as synonymous with “logically coherent”. For example I can conceive of kissing a frog and it turning into a prince, but I doubt that I could make that coherent with physics and biology.

    However it is conceivable (it is not logically incoherent) that a zombie …

    No-one has demonstrated that a p-zombie is logically coherent. All they’ve done is conceive of it in the manner of the frog-kissing. Thus I don’t go along with this argument.

    Like

  40. If you get a chance to read philosopher Scott Bakker’s blog rsbakker.wordpress.com, he espouses his own Blind Brain Theory, which in a nutshell says the brain evolved to solve distal environmental problems, but when “turned on itself” it turns a blind eye, or simply cannot solve itself due to its own complexity. Essentially, brains are real-time biological heuristic systems designed by nature to operate at the same speeds as our skeletal system, so they have essentially solved a scaling problem, just as all cellular biology is a scaling effect. My take on Scott is that the conundrums arise because we don’t understand how our own brains are engaged when we do science and solve the scalings of nature.

    Like

  41. The idea that zombies are logically coherent stems from an intuition that phenomenal aspects are “first-person perspective” aspects, and that no scientific description requires such aspects. Contrary to your frog example, it’s not a question of such-and-such a process occurring or not.

    Like

  42. > “The way I see it, the second statement here says, in other words, that consciousness cannot be accounted for by physics.”

    I don’t see how the second statement says such a thing. It’s like saying that everything with the right molecular constitution will be liquid and transparent: it doesn’t mean that it cannot be accounted for by physics.

    > “Does it not say that the physical universe reduces to phenomenal aspects?”

    I don’t see how it says such a thing either…

    Like

  43. Hi Quentin,
    Wouldn’t any scientific description of, say, social interaction and politics in a chimpanzee troop, need to include the concept of the animals having a first-person perspective?

    Like

  44. Okay Quentin. Never mind. I know that I can never understand how the zombie issue is made so complicated, so obviously I’m missing something. I won’t waste your time on it.

    Like

  45. Hi Coel,
    I don’t think so, couldn’t we view the chimpanzees as merely processing data?

    Like

  46. Hi Quentin,

    I don’t think so, couldn’t we view the chimpanzees as merely processing data?

    Let’s consider (real) scenarios such as: A chimp knows where some food is hidden, but makes no attempt to retrieve it while being watched by other chimps; later he retrieves the food, when he is alone and so does not need to share the food.

    I don’t see how to interpret this except in terms of the awareness and decision-making of an animal that does have a first-person perspective. It must think of itself as “me” and know about its place in the troop.

    Any scientific account would surely proceed along those lines. Now, one could say that this is a “high level” commentary of a scenario that could instead be described in terms of low-level movement of molecules and ions across synapses, etc. And I would agree, in principle it could. But that high-level account is just as much “scientific”.

    As an analogy, a “high level” concept such as “hurricane” can be described in terms of low-level movement of air molecules, but the high-level concept “hurricane” is just as scientific, and arises naturally through the aggregation of low-level behaviour.

    Ditto, the first-person perspective arises naturally through the aggregation of lower-level firings of neurons. But, I would say that a proper scientific account of weather needs the high-level concepts such as “hurricane” and “jet stream” (as aggregations of low-level behaviour), and in the same way any scientific account of chimpanzees is going to include the first-person perspective.

    Like

  47. I see your point. To be fair, these issues heavily rely on intuitions, and I’m not sure I can convince you that there is indeed a hard problem.

    The intuition, on your example, would go like this: you have some kind of first-person perspective, in the sense that the chimpanzee acts for its own benefit, but it’s not a real “what it’s like” first-person perspective that is involved. Everything is perfectly described from the outside, and phenomenal aspects play no role in the explanation: you could imagine some sort of algorithm which would implement the same kind of behaviour without being conscious.

    A similar intuition is that whatever scientific theory you endorse, you can always entertain a kind of scepticism with regard to other people and animals being conscious like you. You only have direct evidence for your own consciousness, and scientific theories won’t help: they don’t tell you “what it’s like”; they merely structure the observable phenomena.

    If you are not convinced by these, I’m not sure I can do much better. And conversely, I’m not sure you can convince me (or anyone who believes there is a hard problem) that there won’t always be something important missing in purely scientific descriptions taken at face value.

    Like

  48. Hi Quentin,

    And conversely I’m not sure you can convince me (or anyone who believes there is a hard problem) that there won’t always be something important missing in purely scientific descriptions taken at face value.

    While I agree there is kind of a hard problem, the problem is solved by explaining why it is a pseudo-problem.

    In my view, the third-person scientific facts are key, and once they are known the amount of philosophical reasoning to dissolve the hard problem is minimal.

    I think it should be possible to explain the functional side of beliefs from a scientific perspective. Beliefs are from one perspective little more than representations of propositions, and these can be found in computer systems.

    If we make an AI that behaves like us, it will believe itself to have phenomenal experience, and we will understand why it holds such beliefs.

    But once we have explained from an objective perspective how it is possible for an entity to believe it has phenomenal experience, then what is left to explain? The only reason we have to believe that phenomenal experience is a thing (or that there is a hard problem) is that we believe we have phenomenal experience. If a system we understand completely believes itself to be experiencing, and we understand this belief, there is no principled reason to suppose we are any different.

    That is not to say that consciousness is an illusion, but rather that consciousness is a state of being where an entity believes itself to be experiencing.


  49. Hi Quentin,

    To be fair these issues heavily rely on intuitions …

    Agreed, and to me that is why the argument for a “hard” problem is weak. Human intuition is dualistic and vitalistic, and before Darwin the idea of species as fixed, separate creations was intuitive. These are all heuristics which serve sufficiently well, but are wrong.

    Why would we expect Chalmers’s intuition (or anyone else’s) about physics and complex biological systems, far more advanced than we can make ourselves, to be at all reliable?

    the chimpanzee acts for its own benefit, but it’s not a real “what it’s like” first person perspective that is involved.

    Isn’t that just “human exceptionalism”? Consider the line of ancestors back from you or me to the last-common-ancestor with chimps and then forward again to a modern chimp. Talk us through how you think “first person perspective” changes round that loop. (To me it doesn’t much, maybe slightly in degree.)

    … you could imagine some sort of algorithm which would implement the same kind of behaviour without being conscious.

    Well, I cannot imagine that! To me it entails “being aware of the lion in the grass in my visual field that might eat me” and simultaneously “not aware of the lion …”, and thus is contradictory.

    You only have direct evidence for your own consciousness, and scientific theories won’t help …

    But scientific theories do help! They tell us that we, as one member of our species, are nearly identical to all the other members of our species. It is entirely contrary to science to posit such a big phenomenological difference between one member of the species and all the other ones.

    And conversely I’m not sure you can convince me (or anyone who believes there is a hard problem) …

    That, to me, is the wrong burden of proof, especially if all you’re going on is intuition. One can always say: “There is some unknown physics, prove that there isn’t”, but it’s not helpful. The only sensible approach is to add to known physics only when we know we need to.


  50. @Coel

    No it’s not human exceptionalism. The same applies to chimpanzees or to human behaviour.

    > “Well, I cannot imagine that!”

    You seem to understand “awareness” without really appealing to qualitative aspects, only to some kind of information.

    > “It is entirely contrary to science to posit such a big phenomenological difference between one member of the species and all the other ones.”

    Ok, but you have to appeal to your own phenomenology as an unanalysable primitive, then extend it to others by supposition.

    @DM

    > “Beliefs are from one perspective little more than representations of propositions, and these can be found in computer systems.”

    I’m not sure it makes sense to say that a computer has beliefs. To me, an algorithm is only a way for human agents, endowed with beliefs, to defer their actions to a machine in a systematic way. But this conversation would take us too far afield…

