The intuitional problem of consciousness

by Mark O’Brien

Could a computer ever be conscious? I think so, at least in principle.

Scientia Salon has seen a number of very interesting discussions on this theme which unfortunately have failed to shift anybody’s position [1]. That much is to be expected. The problem is that the two sides seem to be talking two different languages, each appearing obtuse, evasive or disingenuous to the other (although it has to be said, the conversation was very civil). I think the problem is that the two camps have radically different intuitions, so that what seems obvious to one side is anything but to the other. It’s important to keep this in mind and to understand that just because the other side doesn’t follow the unassailable logic of your argument doesn’t mean that they’re in denial, ideologically prejudiced or plain dumb.

The more formal critiques of computational consciousness include Searle’s Chinese Room [2] and the Lucas-Penrose argument [3]. While these are certainly very interesting ideas to discuss, it seems to me that such discussion all too often ends in frustration as the debate is undermined by fundamental differences in intuition.

And so my goal in this article is not to discuss any of the more prominent arguments but to explore our conflicting intuitions. I’m not hoping to persuade anybody that computers can be conscious but rather to explain as well as I can my reasons for intuiting that they can, as well as my interpretation of the intuitions that lead others to skepticism. I am hoping to show that mine is at least a coherent position, and in particular that it is not as obviously wrong as it may appear to some.

I also want to make clear that I make no claims about the feasibility or attainability of general artificial intelligence. I am not one of those who think the Singularity is around the corner — my concern is only to explore one view of what consciousness is.

Let me start with some empirical assumptions which I think are probably true, but admit could be false.

Empirical assumption 1: I assume naturalism. If your objection to computationalism comes from a belief that you have a supernatural soul anchored to your brain, this discussion is simply not for you.

Empirical assumption 2: The laws of physics are computable, that is, any physical process can be simulated to any desired degree of precision by a computer [4].
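
To make assumption 2 a little more concrete, here is a toy sketch of my own (nothing more than an illustration; the system, the integrator and the numbers are chosen purely for simplicity) of what simulating a physical process “to any desired degree of precision” amounts to: shrink the timestep of a numerical integration and the simulated trajectory approaches the exact physical answer.

```python
# Toy illustration (not part of the argument itself): simulating a physical
# process to a chosen precision by shrinking the timestep of a numerical
# integrator. The system here (free fall under constant gravity) is trivial.

G = 9.81  # m/s^2, acceleration due to gravity

def simulate_fall(duration, dt):
    """Euler-integrate a dropped object's position for `duration` seconds."""
    position, velocity, t = 0.0, 0.0, 0.0
    while t < duration:
        position += velocity * dt
        velocity += G * dt
        t += dt
    return position

exact = 0.5 * G * 2.0 ** 2  # analytic answer for a 2-second fall

for dt in (0.1, 0.01, 0.001):
    approx = simulate_fall(2.0, dt)
    print(f"dt={dt:<6} simulated={approx:.4f} m  error={abs(approx - exact):.4f} m")
```

Shrinking dt drives the error toward zero, which is all that assumption 2 asks for; whether anything mind-like survives this kind of virtualisation is precisely what the rest of this article is about.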

It should soon become clear that these assumptions entail that it ought to be possible in principle to make a computer with the outward appearance of intelligence. If we take a strictly behaviouristic interpretation of intelligence (as I will from now on), it therefore seems to be relatively uncontroversial that computers can be intelligent, though this is certainly distinct from the stronger claim that they can be conscious.

The reason I am a computationalist has to do with a very straightforward thought experiment. The central idea involves conducting a simulation of a person, and so before I get too deep into it I need to address an objection that has often been raised (in this corner of the Internet at least), and this is the observation that a simulation of X is not X.

Let me concede right off the bat that a simulation of X is often not X. This is particularly clear if X is a substance. A simulation of photosynthesis may produce simulated sugar, but simulated sugar cannot sweeten your coffee! However, I suggest that sugar is not a particularly good analogy for consciousness because consciousness is clearly not a substance. We can’t after all have a test tube full of consciousness.

It would seem instead that consciousness must be a property of some kind. It is certainly true that physical properties are not usually exhibited by simulations. A simulation of a waterfall is not wet, a simulation of a fire is not hot, and a virtual black hole is not going to spaghettify [5] you any time soon. However, I think that there are some properties which are not physical in this way, and these may be preserved in virtualisation. Orderliness, complexity, elegance and even intelligent, intentional behaviour can be just as evident in simulations as they are in physical things. I propose that such properties be called abstract properties.

At this point, it would seem that consciousness could be either a physical property (like temperature) or an abstract property (like complexity). Indeed this seems to be one of the major points of departure between computationalists and anti-computationalists. I may not be able to persuade you that consciousness is an abstract property, but it would seem to me that the possibility is worthy of some consideration. If there are exceptions to the maxim that a simulation of X is not X, then consciousness could be one of them.

It seems obvious that we need to distinguish between physical properties and abstract properties, so let’s try to elaborate on that distinction. What jumps out at me straight away is that physical properties seem to involve physical forces and materials, while abstract properties seem to have more to do with pattern or form. To me that suggests that consciousness is the latter, but let’s see if we can delve deeper.

If we allow that we can distinguish between the physical and the virtual [6], it seems that physical properties are those which can directly affect physical objects, which thereby act as detectors. Such detectors include our own senses as well as devices such as thermometers and Geiger counters. In contrast, abstract properties seem to be those which cannot directly interact with physical objects. If they interact with matter at all, it is only when they are perceived by a mind or detected by a computer program. Since there is no such thing as a consciousness detector, and very little reason to think that any such device can ever be built, consciousness does not feel like a physical property to me. Indeed, if consciousness is detected directly by anything at all, it is only by the mind that is conscious. For me, this very strongly suggests that consciousness is an abstract property.

Whether abstract or physical, consciousness is arguably unlike all the other properties so far discussed. Indeed it may be in a category all of its own, because it is uniquely subjective. As far as we know, the only observer that has direct evidence of the consciousness of any entity is that very same entity. Whether or not consciousness is like complexity, orderliness or intelligence, there does seem to be enough reason to at least consider the suggestion that a computational process might in fact be conscious just as we are, if only because consciousness is not obviously analogous to physical properties such as temperature or mass.

I hope that I have at least earned the right to ask you to entertain for a while the idea of a simulated person. If not, I implore you to bear with me anyway!

If naturalism is true, and if the laws of physics are computable, then it should be possible to simulate any physical system to any desired precision. Let’s suppose that the physical system we want to simulate is the room in which you are currently sitting [7]. The simulation is set up with the state of every particle in the room including those in your body [8].

Assuming that you accept for the sake of argument this rather fantastic, unfeasible premise, this virtual room includes within it a structure which corresponds to your body, and this body contains a virtual brain. The simulation may not be an absolutely perfect recreation (indeed quantum mechanics would seem to preclude that possibility), but, virtual/physical distinctions aside, it should be much more like you than your twin would be. Nothing ‘Virtual You’ does will be out of character for you. Anything you can do physically it can do virtually and vice versa. Everything you know it claims to know, everything you like it claims to like and so on.

But I have only established that Virtual You behaves like physical you. So far, I don’t think this is particularly controversial. The question is whether it has an inner mental life. This, again, is where intuitions divide. Computationalists think that it is abundantly clear that it must be conscious, while the opposite claim is just as evident to anti-computationalists. This may be an impasse, but I can at least outline some reasons for preferring computationalism.

Rather than asking whether it can be conscious, let’s first ask what seems to me to be a somewhat simpler question: can it believe? On this, opinions divide largely along the same party lines. Perhaps I cannot convince you that it believes, but it seems clear to me that it must at the very least have some kind of pseudo-belief, a virtual, functional kind of belief we can attribute to it whether or not it actually believes, such that Virtual You pseudo-believes the things that you actually believe. Let’s adopt the convention to refer to this kind of pseudo-belief as ‘belief*’ in order to be clear that I am not merely stipulating a definition of belief to suit my ends. I will continue to use this asterisk convention [9] to distinguish functional, objectively identifiable concepts from the intuitive subjective kind which apply only to conscious minds.

A believer* speaks and behaves as if it believes a certain proposition to be true. Within the brain of the believer* can be found an apparent representation of that proposition which, though virtual, is otherwise just like the one we presume must be in your brain. It is debatable whether this is an actual representation – anti-computationalists might suggest that it is not – but it is at least a ‘representation*’ in that it corresponds to objects in the world (it might even be said to ‘refer*’ to them) and is modified appropriately as Virtual You acquires new information about those objects. From a functional perspective, beliefs* do everything that beliefs can do and even have analogous (virtual) biological representations. What, then, is the difference between beliefs* and beliefs?
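
To make the functional reading a little more concrete, here is a deliberately crude sketch of my own (purely illustrative; the class, its thresholds and its update rule are invented for this example and are not claimed to resemble brains or serious AI): a belief* as an objectively identifiable structure that tracks the world, is revised by new information, and shapes what its owner asserts and does.

```python
# Purely illustrative sketch of a belief*: a representation that tracks the
# world, updates on new information, and shapes behaviour. The class and
# method names are invented for this example.

class BelieverStar:
    def __init__(self):
        self.beliefs = {}  # proposition -> degree of confidence (0.0 to 1.0)

    def observe(self, proposition, supports):
        """Nudge confidence in a proposition up or down on new evidence."""
        current = self.beliefs.get(proposition, 0.5)
        delta = 0.2 if supports else -0.2
        self.beliefs[proposition] = min(1.0, max(0.0, current + delta))

    def asserts(self, proposition):
        """Speak and behave as if the proposition is true when confident."""
        return self.beliefs.get(proposition, 0.5) > 0.7

agent = BelieverStar()
for _ in range(3):
    agent.observe("it is raining", supports=True)
print(agent.asserts("it is raining"))   # True: it believes* that it is raining
agent.observe("it is raining", supports=False)
print(agent.beliefs["it is raining"])   # confidence revised downward
```

Everything in such a sketch is specified functionally and can be checked from the outside; the open question is whether a vastly more sophisticated structure of this general kind amounts to a belief or merely to a belief*.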

The difference is not clear to me unless we stipulate that a belief can only take place in a physical biological brain. I imagine the anti-computationalist would offer the argument that beliefs* are formal, syntactic things whereas beliefs have semantics, followed by the assertion that semantics cannot be derived from syntax. This latter claim is an oft-repeated refrain from the opponents of computationalism, but it seems to me to be an open question. If beliefs are no more than beliefs*, then perhaps a formal account of semantics is possible after all.

It might help if I can explain a little why I think beliefs and beliefs* are the same thing.

If beliefs* don’t have true semantics, they at least behave very much as if they do, since it would seem that we can identify within the simulation the analogues of references and representations. So, on the one side we have beliefs, references, representations, semantics, understanding and so on, and on the other we have beliefs*, references*, representations*, semantics*, understanding* and so on. Everything that can be said about biological minds can be said about virtual minds* as long as we suffix our terms with ‘*’ as required. We therefore end up with two distinct models of mind which are conceptually indistinguishable apart from the labels we apply and the insistence that one is real while the other is fake. The computationalist intuition is that these two indistinguishable systems are in fact the same, while the anti-computationalist intuition is that they are somehow different.

If computationalism can answer all the other objections raised against it (which I think it can), then it seems much more parsimonious to conclude that the computationalist intuition is correct. Parsimony is only a rule of thumb, and can lead us astray, but all else being equal it leads us to truth more often than not. This is why I think computers can believe.

Unfortunately, belief is not enough — far from it. We also need qualia, i.e., the ineffable, indescribable whatness of sensory experience: the redness of red, the taste and feel of a hot morsel in the mouth, the agony of pain. The intuitions regarding qualia are perhaps the greatest factor in leading so many to regard computationalism as absurd. It is very difficult to imagine that any machine could have real experiences the way we do. Any appearance of consciousness in such a thing is therefore assumed to be an illusion.

However, if Virtual You believes at all, it is clear that Virtual You believes it is experiencing qualia. If you ask it whether it feels pain, it will answer in the affirmative. It will claim to recall experiences from your childhood, and it will not notice any difference between the kinds of sensations it claims to feel today and those it remembers from long ago. If you are an anti-computationalist, then it will be very difficult if not impossible to persuade Virtual You that it is a simulation, so convinced will it be of its false qualia (or qualia*). With this being the case, I propose that you have no way of being certain that you are not such a simulation yourself: if you were a simulation, you would still believe you were experiencing the very same qualia you perceive right now, and in a world where such simulations are possible you have no other evidence and no justification for holding yourself to be real.

Neuroscience and psychology may yet have a lot of light to shed on qualia. I am certainly not claiming to understand exactly how qualia work, but conceptually or metaphysically it doesn’t seem to me that there is a real problem here. If you can understand how a simulation could believe itself to be experiencing qualia, then I propose that there is no mystery to explain. If there is no principled way to justify the belief that your brain is more than a sophisticated biological computer, then the possibility remains open that it is just that.

When confronted with such arguments, it seems to me that the anti-computationalist either makes a straightforward appeal to intuition or makes a circular argument of some kind which boils down to the assertion that qualia really are a mysterious phenomenon which cannot be explained away so easily or that semantics are more than semantics*. When computationalists fail to agree, we are often accused of being disingenuous or obtuse.

But computationalists are not being disingenuous or obtuse. We just don’t see the distinction the other camp sees between semantics and semantics* or qualia and qualia*. It is becoming increasingly clear to me that the source of the dispute is not ignorance or stupidity or muddled thinking in either camp, but radically different fundamental intuitions. At least one set of intuitions is wrong, and it’s hard to say which. Either computationalists are missing some mental faculty of perception, or anti-computationalists are experiencing some kind of illusion.

It would seem the way forward is to put as little weight on intuitions as possible. The computationalist account has the advantage of parsimony, dissolving the problem of connecting semantics to syntax and explaining what properties of brains enable consciousness (i.e., logical structure). None of this means that computationalism is correct, but it suggests that it should be taken very seriously unless a fatal flaw can be identified which does not ultimately rest on anti-computationalist intuitions.

The famous arguments against computationalism alluded to earlier may present such fatal flaws. Their proponents certainly think so, while I obviously disagree, but until we understand each other’s intuitions a little better there is perhaps little point in having such discussions at all.

_____

Mark O’Brien is a software developer and amateur philosopher who despite never having achieved anything in the field has an unjustified confidence in his own opinions and sees it as his sacred duty to share them with the world. The world has yet to notice. You might very well think that his pseudonymous alter ego is a regular on Scientia Salon, but he couldn’t possibly comment. He is Irish and lives in Aberdeen, Scotland.

[1] The debate about consciousness was most vigorous on the following articles, but also creeps into other discussions quite frequently: The Turing test doesn’t matter, by Massimo Pigliucci, 12 June 2014; What to do about consciousness, by Mike Trites, 23 April 2014; My philosophy, so far — part II, by Massimo Pigliucci, 22 May 2014.

[2] This oft-discussed thought experiment is part of a family of such that seek to disprove computationalism by positing computational systems which our intuition suggests cannot understand. Other examples include Ned Block’s homunculi-headed robot and China Brain. See the Stanford Encyclopedia of Philosophy on the Chinese Room; here is an animated 60-second short explaining the basic idea.

[3] The Lucas-Penrose argument concludes that human intelligence cannot be reduced to mechanism because mechanisms are constrained by Gödel’s incompleteness theorems to be unable to prove all true statements. The argument fails (in my view) because it assumes without evidence that human beings are not also so limited.

[4] It must be said that the assumption that the laws of physics are computable is doubted by certain anti-computationalists, especially those who endorse the Lucas-Penrose argument.

[5] See Wikipedia if you are unfamiliar with this wonderfully evocative technical term.

[6] As trivial as it may seem to be to distinguish between virtual and physical, it may not be so straightforward if computationalism is true and we happen to live in a simulation!

[7] Such a detailed simulation is, of course, entirely unfeasible. I use it only to establish a point of principle about the nature of consciousness, so I encourage you not to concern yourself too much with practical barriers to implementation, unless of course some physical law makes such a computation physically impossible.

[8] We will also presumably need a crude simulation of the exterior of the room. We don’t want to run out of oxygen or radiate heat away to a vacuum, and the room will need to be supported so that objects within the room are bound to the floor by gravity.

[9] This convention is adapted from one established in the excellent paper: Field, Hartry (1978). Mental representation. Erkenntnis 13 (July):9-61.

317 thoughts on “The intuitional problem of consciousness”

  1. Well I don’t think I have a decisive argument, but my overwhelmingly compelling position 🙂 comes from scientific evidence regarding the nature of the system we all agree has consciousness.

    I’m not seeing how a computational approach uses an inference to the best explanation.

    As I mentioned (in the post directly above yours), current hardware itself would seem to preclude a computational approach. A mere binary system of computation on fixed circuits (not capable of moving or flipping connections) does not seem capable of handling information in the way required for consciousness and holding a belief* that equals belief.

    Whether a conscious entity can emerge from an “artificial” system such that it is free from its strictly physical underpinnings (the software itself becoming aware) would seem even more unlikely. Possible but unlikely.

    If I see a flaw it is that computationalists have to some degree abstracted the mechanism of thought as just trees of decisions, which allows one to feel that a belief is separate from a very real physical state.

    Again, I think it could be captured artificially, just not within current limits. Keep at it though, it could be interesting.


  2. But the point is that if you have to add the “bio” then you are not a computationalist (not in the sense of DM). By the way, I’ve said repeatedly that some aspects of brain activity are definitely computational, so I guess I am indeed a bio-computationalist.


  3. Hi all,

    Thanks for a great discussion. Before comments close on this topic, I’ll try to acknowledge any important points I haven’t got to yet (mostly from gwarner99). If you ever want to discuss this further, you can contact me via my blog.

    Marc Levesque:

    Then doesn’t that mean that all computations (room, body, brain, consciousness, etc) are logical structures (parsimony), so we have not gained any explanatory power of consciousness?

    Well, it could be that the whole universe is a computation (this is basically the mathematical universe hypothesis). A less controversial reply would be that we know and understand that physical properties (e.g. temperature) are not reproduced when we run a simulation, at least from the perspective of an observer in the physical world. In this respect alone, these properties are not computational. We have no reason to believe that consciousness is one of these.

    Philip Thrift:

    Thanks for your comments. Sorry I haven’t responded but I don’t have much to say on your links and comments. Obviously, we disagree on whether consciousness is a physical process.

    Patrice Ayme:

    Again, sorry for calling your views “woo”, but they still seem to me to be rather weakly motivated.

    The fact that the brain has many parts is not particularly illuminating. I could say the same about a computer. Is a computer the RAM? The CPU? The hard disk? Etc.

    The fact that consciousness feels centralised is not particularly compelling either. How else could it feel? And why do you think feelings tell us something profound about what is going on at a physical level? There are all kinds of proven discrepancies between reality and how we perceive it.

    All this provides fodder for a very loose analogy with how quantum mechanics works. I don’t think such analogies are a basis for believing anything.

    Alexander Schmidt-Lebuhn, Thomas Jones, Labnut:

    I actually like the car analogy.

    I also think that asking why human information processing is accompanied by consciousness is indeed somewhat like asking why driving to Sydney is accompanied by travel. I do not, however, think that this is obviously the case. I think that it is far less obviously absurd to assume the possibility of human intelligence without consciousness than of driving without travel. Philosophical zombies are not immediately absurd, although I do think that really coming to grips with what consciousness is would reveal them to be impossible.

    Just because a lot of people have debated this problem for a long time does not mean the problem is fundamentally nonsensical. Alexander’s theological examples are good, but obviously they will not appeal to Labnut’s Catholic sensibilities, in which case we could use examples from other religions, where theological debates have raged for hundreds of years about entities which do not exist.

    gwarner99:

    I have greatly appreciated your insightful commentary and I think it’s really great to have someone explaining things so clearly from Searle’s point of view. Of course I still think you’re wrong (and you me, no doubt), but you have certainly helped me to better understand the other point of view. I realise you have left comments on my blog which I have not yet answered. I will do so in the coming week and hope you return there at some point in future when you have the time.

    we are talking about the nature of evidence for the existence of consciousness

    I think the problem is that we have no reason to believe there can ever be such evidence. The only evidence we have is to infer consciousness where we see something that looks like us and behaves like us. That’s not good enough because it could never be used to decide if computers are conscious.

    It would only become something like behaviourism if you claimed that this was the only valid form of evidence.

    I don’t think behaviourism is about epistemology so much as ontology. A behaviourist says there is no difference without differences in behaviour. I say there are differences but they may be inaccessible to us — there may be no evidence at all. I also think that changes in brain state rather than macroscopic behaviour could count as evidence.

    “information” is one of the squirmiest of weaselly words with multiple different meanings that can entrap us. I believe that, in many cases, there are simply biological processes or interactions which work by physical cause and effect

    I agree that there are biological processes which work by physical cause and effect to produce intentional states. But I think these have to function as symbols at some level. The kind of process you describe would also operate in connectionism, even when implemented as software.

    In which case it isn’t a symbol, it is a physical structure embedded within a biological system, a structure which takes its part in the biologically causal processes that produce consciousness, and so needs no interim step of being “interpreted” as a “symbol”.

    Right, well, the same can be said of a representation that is actually part of a computational process, with its physical implementation taking part in “electronically causal” processes.

    But there is no literal information in between the retina and the conscious visual experience

    I don’t understand what this means. The optic nerve sends signals. This is clearly a transfer of information, no matter how you want to interpret it.

    I suspect that it is possible to push some forms of connectionism so far that they cease to be computational in any important sense.

    Only because Searle and his ilk have an extremely narrow idea of computation. Connectionism can be and is usually implemented as a piece of software running on a normal computer. We almost never see physical artificial neurons because there is no need for them and they are expensive. In the article, the simulation of every particle in the brain is much closer to actual biology than any connectionist approach yet attempted. There is a continuum between this thought experiment, connectionism and GOFAI. If you reject the argument in the article, then generic connectionism is certainly out although you could still try to make a case for connectionism with physical devices.

    I’d be interested to know whether there are any particular connectionist projects you think are promising; I’d like to read up on them.

    I’m not up to date on the field. But I imagine a lot of the face-recognition software in use relies on connectionist approaches. Artificial neural networks are good for that sort of thing, especially pattern recognition where you know what you want (and therefore how to train a system) but have no idea how to program it explicitly.

    and if we could identify functional elements which seem to be closely analogous to those of a computer

    To me, it’s like you’re saying that we’ll know if animals can travel over land when we find evidence of wheels. There are many ways to achieve computation. They all turn out to be equivalent because of the Church-Turing thesis. Nobody is arguing that biological systems will have a von Neumann architecture or look anything like an Intel CPU. But any information processing they can do, an Intel CPU can do.

    simulated-connection system is still purely computational in the classic sense, and ultimately equivalent to the classical or GOFAI paradigm.

    It’s still different in an important sense. In GOFAI, representations are explicit. In connectionism, we only represent the neurons and the connections etc. Representations of concepts somehow supervene emergently on this in the same way we imagine they do for brains — dynamically and messily. If there were a fatal objection to all symbolic processing, then (software) connectionism would be out, but since there isn’t, it’s very much in, and it is entirely immune to objections that assume naive GOFAI implementations, such as the idea that computationalists don’t take the brain seriously. Furthermore, even software connectionism is consistent with statements such as “It’s just a series of vectors. And I think that’s closer to how the brain works.”
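
    To illustrate the contrast roughly (a toy example of my own, nothing more): in a GOFAI-style system a concept is an explicit, inspectable entry, whereas in even the tiniest connectionist system the “concept” exists only implicitly in a vector of learned weights.

    ```python
    # Toy contrast (my own, illustrative only): an explicit GOFAI-style fact
    # versus a connectionist weight vector in which the "concept" is implicit.
    import random

    # GOFAI style: the representation of the concept is an explicit symbol.
    knowledge_base = {("bird", "can_fly"): True}

    # Connectionist style: train a single artificial neuron to compute OR.
    # After training, nothing in `weights` is "the concept OR"; the behaviour
    # supervenes on the numbers as a whole.
    random.seed(0)
    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = random.uniform(-1, 1)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    for _ in range(50):  # simple perceptron learning rule
        for (x1, x2), target in data:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output
            weights[0] += 0.1 * error * x1
            weights[1] += 0.1 * error * x2
            bias += 0.1 * error

    print(knowledge_base[("bird", "can_fly")])  # explicit lookup
    print([1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
           for (x1, x2), _ in data])            # learned behaviour: [0, 1, 1, 1]
    print(weights, bias)                        # just numbers; no explicit "OR"
    ```

    Either way it is ordinary software on an ordinary computer, which is the sense in which software connectionism remains computational.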

    brandholm:

    Thanks for sharing your thoughts. I agree with much of them, and where I don’t agree it is for the reasons expressed in the article.

    I agree that your ideas about how consciousness works have merit, but I note that the virtual person in a simulated environment will have the capabilities you require, i.e. to build a model of itself and its intentions, etc, so I do not see this as any kind of argument against computationalism.

    mogguy:

    I don’t think sentience really does away with the problem. There is surely a sentience continuum too, if we go back in time through evolutionary history to the first precursors of organic molecules. At first there was no consciousness. Now there is. I doubt very much that this was a matter of flipping a switch, but something that developed gradually.

    How do we empirically detect it in Life so that we might compare it as something detectable on a computer?

    As I’ve said many times now, I don’t think we can. I think it’s a purely conceptual problem.

    richardwien:

    I don’t really think I’m a scientismist, although I am more sympathetic to that view than some. I was a computationalist before I had training in computer science. Indeed, it may have had something to do with my doing computer science.

    Correct me if I’m wrong, but it seems to me that it’s only the biological naturalists who think they have a decisive argument in favour of their position.

    The article may have been reasonably temperate in tone, but I feel pretty convinced that computationalism is correct. The alternative view seems to me to be muddled. But of course that’s just my perception. If I appear to be agnostic on the question, I’m not really.

    victorpanzica:

    Mary had all of the syntax facts and should have not learned anything new, but seems obvious that she did when she left the room and saw red for the first time.

    Mary’s room relies on an equivocation between two kinds of knowledge. There’s a difference between knowing that and knowing how. You could read all the books in the world on the mechanics and physics of juggling, but you can’t learn to juggle without practice. You need to train your brain to have the reactions necessary to pull it off. You’re not learning new facts, you are giving your brain a new ability. Seeing red for the first time is like this. You are giving your brain the ability to put itself in the state of experiencing the qualia red. It is not a new fact, and just like juggling doesn’t pose a problem for computationalists neither does Mary’s room.

    Everything you say about the unification of neurons can be said about virtual neurons, so your argument is compatible with computationalism.


  4. Fantastic discussion, DM (DM*)! Motor function and learning aside, Jackson’s argument taken at face value is that when Mary walks outside for the first time and sees a bright RED apple (not RED*), the pure experience of RED is knowledge that RED* can’t give us; that at least is the first intuition many have when they read Jackson’s argument. Of course if Mary is an AI then she would not have a new experience, per your comment above.

    My own final thoughts: Before the 20th century and the age of psychology, neuroscience and cognitive science, the focus of philosophy was the nature of reality. To me reality and time are inextricably linked. A computer is a computational state machine which creates synthetic time with a synthetic clock, as opposed to a biological consciousness which perceives (and cognitively generates, for sensorimotor activity) reality time, or TIME vs TIME* (synthetic time). I think this is the ontological basis for your argument. This convinces me that both POVs are valid.

    Great Stuff!


  5. Thanks!

    Although, to be clear, I do think an AI would have a new experience when it first perceives red. The state of the AI where it understands all the facts about red is different from the state of the AI when it is currently perceiving or imagining red. One cannot (unless one has a very powerful imagination at least) trick oneself into imagining a qualia one has never experienced, and the same would be true for an AI built to emulate us.


  6. Thanks, DM! And thanks for the great opportunity to think deeper about these issues.

    That philosophical views are weak is not surprising. When a view is real strong, that’s called science.

    A difference between a computer and a brain is that only the CPU computes in a computer, whereas a brain is full of more or less continuously defined CPUs all over, and much more, as long distance axons integrate them all.

    Cut a brain in two, and you have got two brains, one on the left, one on the right. With apparently two consciousnesses, trying their best to integrate. Cutting a CPU in half does not work. A CPU is basically a bunch of canals and gates, like any irrigation system. Brains may all have something more, however dim, the consciousness which graces us.

    Quantum Entanglement (aka “non-locality”) represents a completely new “spooky interaction at a distance” (Albert Einstein dixit), and thus an imaginable possibility to support consciousness (as I can’t imagine how conventional electro-magnetism could do it). (The range of QE has been proven to be in excess of 100 kilometers… in the air: Zeilinger et al., Canary Islands.)

    This is not about “believing anything”. It’s about trying to guess something in a humble spirit. That’s what the philosophical method does, guessing ahead, and why it is necessary to suggest possible avenues of scientific exploration.


  7. That sounds right unless it is a very clever AI that has direct knowledge of itself so it can twiddle its own visual cortex* and make itself see red as if its retina* was detecting red.

    Otherwise as you say it has only indirect access to its visual cortex which may be how we imagine or perform mathematics? Visualize multi-planar space? It does have several layers. Our visual system also projects into our frontal lobes. Could be why people believe they are projecting light out of their eyes when they look at things.


  8. Well it was a very nice article and debate, so thanks for my viewing pleasure!

    “I note that the virtual person in a simulated environment will have the capabilities you require, i.e. to build a model of itself and its intentions, etc, so I do not see this as any kind of argument against computationalism.”

    As long as the simulation truly can have these capabilities, rather than simply mimicking the results (which might be hard to discern) then I agree their belief* = belief. I hope I didn’t come off as saying it is theoretically impossible, and so computationalism is wholly bankrupt. It’s just for me that this is all easier said than done… particularly with the computational methods/hardware we currently have.

    Didn’t Data require a “positronic” brain or something like that? 🙂


  9. DM,

    I really appreciate the thoroughness of your responses in the comment section.

    “A less controversial reply would be that we know and understand that physical properties (e.g. temperature) are not reproduced when we run a simulation” …

    I think that depends on the kind of simulation you are running. Though I agree no wetness will be generated in a CPU or elsewhere in current computers while it simulates a waterfall.

    … “In this respect alone, these properties are not computational. We have no reason to believe that consciousness is one of these”

    I’d add we have no reason to believe that consciousness is one of these, or not one of these.

    “I agree that a broad definition of computation is relatively useless, but I also agree that pretty much anything can be described as a computation. What we call a computation depends on context and how useful it is to regard it so. I don’t think the question is whether cognition is computation — everything has a computational aspect”

    I think we agree that if we use words like computation, or cognition, without defining precisely what we mean, or if we assume catch-all definitions, we will be able to conclude pretty much whatever we want. If I say 2 times 4 equals 8, I’m sure we are in agreement; but if I really should be saying “about 2 times about 4 equals …”, then the answer 8 no longer follows: the actual answer could be about 7, 8, or 9.

    “The question is whether reproducing only the computational aspect of cognition is enough to reproduce consciousness, i.e. whether consciousness is a property of a computational process or of a physical object or both”

    Using my loose analogy and paraphrasing your questions from my point of view, in the first half you are asking whether reproducing only the undefined computational aspect of ‘what the brain does’ cognition is enough to reproduce consciousness, and in the second half, you are asking whether consciousness is due to unknown computational processes or an unknown arrangement of physical objects, or both.

    So on those questions I have to suspend judgement.

    Nonetheless, a very interesting article and debate from my perspective.


  10. “I can explain how a car engine works in similarly general terms without being able to build one bolt by bolt myself, and reasonable people would still consider the process well explained and understood.”

    Reasonable people would still consider the process well explained and understood because OTHERS can and have understood and explained it elsewhere. That is not the case here. If you spoke in general terms about how a car works without demonstrating the details or being able to replicate its operations, and absent the knowledge that others HAD done so and that their account accords with your own more general account, then I’d have absolutely no reason to believe your ‘general terms’ are accurate and that the process is well explained and understood. Precisely the opposite: I’d have good reason to be profoundly sceptical. Especially if you seemed to ignore or downplay (because you couldn’t explain) salient features of the car, like the fact that it makes a noise while it’s running.

    “There is also something it is like to be a rock, except in the trivial sense that if I were a rock I would not be able to think about it because I’d be a rock.”

    Wait, what? There is something that it is like to be a rock except it’s nothing like what it is to be a person who is conscious and can think about what it’s like to be a person, because a rock is not a person. That difference is precisely the point. Even if we bought into your obvious (and unfounded) assumption that the difference between a rock and a human being is merely “thinking”, to classify this difference alone as trivial simply seems like you’re trying very hard to will difficult questions into triviality because you have no answer for them and feel uncomfortable confronting the uncertainty. There is a profound difference between the inanimate and the animate, more than just that one thinks and one doesn’t, but this alone is a fascinating mystery, not a trivial detail to be passed over.

    Similarly, you’re muddling the question and slipping in unanalysed loaded terms again when you describe subjectivity as “different perceptions and memory states.” The real difficulty with subjectivity isn’t that it’s unique to each individual, but that it’s there at all! You’re using the term perception, I suggest, because it comes with a whole raft of associated notions about qualitative subjective experience, but you’re using it simply to mean that one system accepts input from another. On this definition the marbles at the end of the footpath perceive my Tom Bowler when I send it hurtling into them. I suggest that truly is a trivial definition, but you’re using it in anything but a trivial sense.

    “What he ultimately seems to say is merely that we cannot imagine what it is like to be a bat.”

    You need to read the paper again, more carefully this time.

    “If a body has an eye to see the surroundings and a brain to process that information, how could it possibly NOT experience what it sees?”

    Why can’t the inputs be processed without a qualitative state? Why is there this accompanying qualitative state, apparently unique to ‘conscious’ sentient animals and exhibited most complexly in human beings, a state that entails apparent qualities that SEEM so fundamentally at odds with our understanding of the physical universe, i.e. not spatially extended? The input of an organism’s surroundings could be processed without subjective experience, without this kind of immediate representational workspace occurring to what at least appears to be a finite singular entity that ‘experiences’. We know this because we design machines that do it all the time, and they don’t exhibit the kinds of behaviours and responses that we see in sentient creatures, associated with an experiential component to their existence. Your assumption is that they will, if we ramp up the computations. But you haven’t shown why this should be the case, you’ve simply asserted it by using a bunch of loaded terms and declaring any question to the contrary ‘trivial’ and ‘uninteresting.’ Your answer to the question of why there should be such a thing as subjective experience appears to be “Because there is! Of course there is. If there is it couldn’t possibly be any other way. What’s so mysterious? Next question please.” To me that’s simply avoiding the problem, not dissolving it.


  11. Why can’t the inputs be processed without a qualitative state? Why is there this accompanying qualitative state, apparently unique to ‘conscious’ sentient animals and exhibited most complexly in human beings

    I’m not trying to say that this doesn’t present a problem. But there’s an intuitive way in which it “makes sense” that there would be a qualitative state that points to a possible physical approach to the problem.

    Not many people have a problem with the notion that our sensory input is “summarized” and integrated in various pre-conscious brain processes. And not many people have a problem with the notion that certain, semi-localized brain processes function as: 1) an “executive” apparatus that uses this summarized input to then direct the body toward certain responses; and 2) as an “attention-directing” apparatus that is perhaps closely related and also orchestrates what input is to be accessed and processed. And, again, not many people have a problem with the notion that the executive and attention-directing processes are not fully instinctual/automatic – that expectations and interpolations are fed into the executive and attention-directing processes in a fashion that allows weighing and selecting between them.

    Given this arrangement of processes, it makes intuitive sense that this summarized blend of integrated sensory information and future-directed processing would “present” in some way to those processes making selections, directing attention, etc. In a sense, the electro-chemical processes involved in, say, a “fear” reaction are *for* presenting in this way. In other words, if we imagine “fear” as a set of electro-chemical processes that are merely “for” causing certain brain/body responses to automatically occur, something is missing.

    You can see from the above that it’s very difficult to get across the sense of this in normal, folk-psychological language. There are cognitive and linguistic barriers to developing an intuition of the shift from a physical process perspective to a phenomenological perspective.


  12. Alexander,
    “Wait, labnut is Catholic? Now I see why using the Trinity as an example of a non-question may actually have been counterproductive…”
    You chose a really bad analogy. In fact it does not even qualify to be called an analogy. That’s all there is to it.

    An analogy is based on a shared abstraction. One statement is recognisably true, there is a relevant shared abstraction and so one can conclude the second statement is reasonably true. But it is important to note that the shared abstraction must be substantially relevant to the problem.

    There is no useful shared abstraction between the doctrine of the Holy Trinity and the problem of consciousness. In fact the problems have nothing in common except their putative difficulty.

    Let me make it easier for you. In effect what you are saying is:

    1) Problem A has been studied for a long time and was considered a difficult problem. On mature reflection we conclude it was a false problem.

    2) Problem B has been studied for a long time and is thus considered a difficult problem.

    3) From (1) we conclude that problem B is a false problem.

    By this remarkable stroke of reasoning you have defined many long-standing mathematical problems out of existence. Congratulations, mathematicians can now be put to better use (like confirming the MUH).

    Statements (1) and (2) must have a substantially relevant shared abstraction for your conclusion to work. If that cannot be provided your analogy is simply not an analogy and you have drawn a false conclusion.

    You made such a bizarre comparison that I suspect it says you have strongly held biases which you wish to import into the conversation. However your biases are just not interesting to me so you will have to have the conversation with yourself. Your strange comparison certainly says nothing about consciousness and the conversation is about consciousness. If you really wish to contribute to the conversation about consciousness you need to take the conversation seriously and enter into it in a spirit of epistemic humility.



  13. DM,
    “Alexander’s theological examples are good, but obviously they will not appeal to Labnut’s Catholic sensibilities”

    I have news for you. His analogy is an example of thoroughly bad logic. Your claim that his examples are good, flies in the face of all logic. I am rather surprised you accept his superficial reasoning at face value. Is that because it suits your own well known sensibilities? You should know better.

    Please see my reply to Alexander here – http://bit.ly/1w6Bb70


  14. DM,
    Asher said:

    “I hope you’ll forgive me for beating this drum yet again….

    This is an example of “thing” thinking vs. “process” thinking. It’s weird to think of a non-living molecule begetting an organism”

    I agree with Asher and he is right to beat the drum again in the hopes that you will hear the drumbeat.
    At its most fundamental, ‘things’ became a ‘process’, through an as yet unknown mechanism, however we label that mechanism. That process has become self-sustaining, self-replicating and grown exponentially in volume and variety, profoundly changing this planet.

    You said:

    “We’re talking about the difference between a non-living system and a living system”

    It is the difference between ‘things’ and a self-sustaining, self-replicating, multiplying, diversifying ‘process’ which is so profound. This is why it is right that Asher should emphasise the difference between ‘thing’ and ‘process’ thinking. But it is not just any process, it is a very special kind of process. Your use of the word ‘system’ does not capture this difference, and so you are implicitly minimising a difference which is more than quantitative: it is qualitative.

    The existence of a fuzzy boundary says one of two things. We need to get the boundary commission to finish their work and/or nature is a messy reality that defies neat distinctions.

