Strong Artificial Intelligence

by Massimo Pigliucci

Here is a Scientia Salon video chat between Dan Kaufman and Massimo Pigliucci, this time focusing on the issues surrounding the so-called “strong” program in Artificial Intelligence. Much of the territory should be familiar to regular readers, but hopefully it is covered with enough twists to generate new discussion.

We introduce the basic strong AI thesis about the possibility of producing machines that think in a way similar to that of human beings; we debate the nature and usefulness (or lack thereof?) of the Turing test and ask ourselves whether our brains may be swapped for their silicon equivalents, and whether we would survive the procedure. I explain why I think that “mind uploading” is a sci-fi chimera rather than a real scientific possibility, and then we dig into the (in)famous “Chinese Room” thought experiment, proposed decades ago by John Searle and still highly controversial. Dan concludes by explaining why, in his view, AI will not solve problems in philosophy of mind.

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

Daniel A. Kaufman is a professor of philosophy at Missouri State University and a graduate of the City University of New York. His interests include epistemology, metaphysics, aesthetics, and social-political philosophy. His new blog is Apophenia.

104 thoughts on “Strong Artificial Intelligence”

  1. Great dialogue!

    While I’m probably more optimistic about the possibility of both strong AI and mind-uploading (and significantly less impressed with the Chinese Room argument) than Dan and Massimo, I still thoroughly enjoyed the conversation.

    Regarding the Ship of Theseus scenario where brain cells are successively replaced by silicon equivalents, I think it’s interesting for a reason not touched on in your chat, and that is the question of which physical aspects of our brains are salient for personality and personal identity. Is it at the level of the connectome? Is it at the level of the connectome plus the neurochemistry at the synapses? Do we have to go down to the level of the biochemistry of individual brain cells? Even deeper?

    While interesting in and of itself (or so I think), this is also relevant for mind uploading, which becomes more theoretically difficult the deeper we have to go in order to capture (and simulate) whatever physical processes that underpin personal identity.

    Nevertheless, it’s nice that we get to eavesdrop on your conversations. Looking forward to more.

  2. Hi,

    This issue being close to my heart, it has brought me out of comment semi-retirement (said retirement prompted by fatherhood and a subsequent realignment of priorities!).

    As a strong AI proponent, I would not agree with the following characterisations of strong AI.

    1) Massimo: The thesis that human-level intelligent machines are imminent or feasible — they may be neither, and indeed strong AI specifically is a claim about the possibility in principle of algorithmic consciousness, not intelligence. Limiting the claim to intelligence only is the weak AI hypothesis. I understand you get the distinction but I think it should be clear that there is no real ambiguity about which claim strong AI is making — this is how Searle defined the term.

    2) Dan: The thesis that AI research is going to tell us much about human intelligence. I don’t know that it is. Strong AI proponents tend to believe that computation is fundamental and can be used to model pretty much anything. The strong AI claim is therefore not that AI research will finally explain intelligence but that it ought to be possible in principle to build a conscious algorithmic intelligence. We ought not assume that just because we can reproduce something we can necessarily understand it.

    On the Turing Test:

    Massimo is quite right that the TT is strictly just a test of intelligence and not consciousness. However strong AI proponents tend to believe for independent reasons that human-level general intelligence is probably not achievable without the system having subjective experience. So the TT is indeed a test of consciousness, but only if you buy those philosophical arguments (which you don’t).

    I also think Massimo’s intuitions are very out of whack when he imagines that it ought to be pretty feasible for a talented computer programmer to create a system which could consistently fool humans for hours. This goal has never been achieved. If it is ever achieved (which ought to be possible in principle, but not just by scaling up ELIZA, which is far less impressive to me than to Massimo), then strong AI proponents would be inclined to regard it as conscious.

    On replacement of neurons:

    I don’t get the argument from Massimo that strong AI proponents are forgetting that neurons are not just passive conduits. The heart is active too. Similarly, I think there is a double standard from Dan when he says the function of the heart is “mechanical” but that the function of the brain is “neurochemical”. I could just as easily describe the function of the heart as “myocardial” and the function of the brain as “informational” and make the reverse argument. All we’re really articulating here is our respective biases with respect to whether the substrate or the function is more important.

    More when I have time (if ever!).

  3. Bjorn, thank you for the comment! (Dan Kaufman here)

    Due to the number of topics we covered, I was not able to pick Massimo’s brain on the question of substrate, as much as I would have liked, and so I welcome the opportunity here for him to expand more on the subject.

    Over at MeaningofLifeTV, where this originally aired, a lot of people pushed hard on this point. What, specifically, is so special about the carbon-based substrate with regard to thinking, feeling, experiencing, etc.? One person even asked, point blank, why Massimo wondered — at one point in the discussion — whether replacing parts of the brain with silicon “equivalents” might even poison and kill a person.

    Massimo! Can you weigh in a bit more on the specific role you see possibly being played by the very specific chemical nature of the brain’s substrate?

  4. Interesting that the future of silicon brain implants was brought up.

    1) Well, how about a Bluetooth-type implant that connects to the optic nerve, via a procedure like cataract surgery that can be performed in an out-patient clinic or even in the T-Mobile office someday. You wouldn’t have to take the phone out of your pocket, because the screen image would appear in your brain. Instead of a brain substitute, this is technically a new biological function.

    2) Better yet, there are devices which can monitor human sub-brain activity (alpha waves), so why not keep such a device in your pocket and have it communicate with another Bluetooth implant in your brain, so you can read the alpha waves of people in the room? http://9-11themotherofallblackoperations.blogspot.com/2008/12/united-states-patent-3951134-malech.html

    The discussions between you two are enlightening, and I have an affinity for you two; I share Dan’s body type and Massimo’s hairline. But interestingly, it is like the three of us went to Best Buy and stood next to the wall of HDTVs. If I asked you guys to explain how they worked, I don’t think you two would get past the philosophical discussion, which I would probably find interesting.

    For brain physiology it is similar if we can’t distinguish primary structures which exhibit primary stimulus-response (SR), i.e. sticking a pin in you, vs. secondary SR functions, like threateningly showing you a sharp object. The secondary functions would also include 1 & 2 above.

    As far as the CR goes, I remember from the 1970s and ’80s working for companies and generating all types of engineering spec and proposal documents. With no PCs, all of the document generation was done by secretaries to whom we fed the rough handwritten drafts. They often explained to us that they didn’t have the slightest idea what any of these documents meant, even though the engineering jargon was technical but essentially our engineers’ folk psychology.

  5. A very fascinating discussion. Can machines think? Chomsky likes to answer this question with the question ‘Can airplanes fly?’ Most English speakers will immediately answer ‘sure’. But then ask, ‘Can submarines swim?’ and you’ll get a blank stare or a hesitant ‘no’. The language isn’t equivalent, but the situation is the same.

    When it comes to consciousness, it’s a bit suspect to see so many arguments about something that no one can define, much less explain.

    As to feeling pain, what do pain receptors have to do with it, at all? Pain is a mental state. Isn’t it easy to make someone feel most anything if the brain is stimulated in some way, from feelings of ecstatic sublimity to even eliciting quite detailed memories of childhood events?

    I would ask, does it matter if they’re conscious and can think when they’re turning all of us meat-sacks into paperclips?

  6. After working in the AI field for more than 35 years, I have come to one (fairly) solid conclusion: Any discussion of strong AI that doesn’t include what’s going on in the “new synthesis” of AI+SB (artificial intelligence and synthetic biology) is amiss.

  7. Thanks Dan!
    ——————
    Computationalists are not quite the radical substrate deniers Massimo paints. We don’t usually mean the substrate is *completely* irrelevant. If you’re using a classical electronic computer, for instance, you’re going to need a lot of memory to instantiate a human mind. And good luck building a mind using only hydrogen atoms.

    The substrate does matter to computationalists, but only insofar as it must be possible to build a sufficiently powerful computer from it.

    I think too much is being made of the distinction between functionalism and computationalism. The crucial claim is functionalism. Computationalism is little more than a gloss on this insight. Since computers can instantiate pretty much any functional network, then it ought to be possible to make a computer instantiate a conscious mind if functionalism is true. We can go a little farther and say that the brain is a computer if a computer is just a device for processing information in the same way that a heart is a pump, but this is no more profound a comparison than the latter. The profound claim is functionalism.

    That’s not to say that there is no room for anti-computationalist functionalism, but that niche is mostly occupied by people such as Roger Penrose, who believe that human consciousness is crucially dependent on unspecified uncomputable functions which are somehow implemented using aspects of quantum mechanics that are not yet understood. Also, Hilary Putnam describes himself as anti-computationalist despite accepting “liberal functionalism”. As near as I can make out, he is rejecting Good Old-Fashioned AI (GOFAI) and dislikes the computationalist gloss.

    Yes, there is a bad analogy to be made between human decision-making and high-level algorithms. This bad analogy was at the heart of GOFAI and this is a dead research program (at least as far as Strong AI goes). Much more respectable these days is connectionism, but rather than an alternative to computationalism this should be seen as a variant of it, since the causal network realised by human neurons ought also to be realisable by a computer. What this means is that a conscious algorithm probably looks much less like a series of GOFAI statements such as “if (isHungry()) {find(FOOD);eat(FOOD)}” and much more like a simulation of a biological brain. It’s not that computer science is going to tell us much about how the human mind works, but rather that neuroscience has a lot to tell us about how to build an intelligent, conscious computer!
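
    To make the contrast concrete, here is a minimal sketch (Python; the names and numbers are invented for illustration, not anyone’s actual research code) of the difference between a hand-written GOFAI rule and a connectionist unit whose behaviour lives in its weights:

    ```python
    import random

    # GOFAI: the programmer writes the rule explicitly.
    def gofai_agent(is_hungry):
        return "find(FOOD); eat(FOOD)" if is_hungry else "idle"

    # Connectionist: behaviour emerges from weighted connections; nothing
    # in the code says "hunger" -- it is implicit in the weights.
    def connectionist_agent(signals, weights, threshold=1.0):
        activation = sum(s * w for s, w in zip(signals, weights))
        return "seek food" if activation > threshold else "idle"

    signals = [0.9, 0.3, 0.8]                          # e.g. blood sugar, stomach stretch, smell
    weights = [random.uniform(0, 1) for _ in signals]  # in a real network, learned
    print(gofai_agent(True))
    print(connectionist_agent(signals, weights))
    ```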

    I think Dan is dead on when he says that functionalism implies that the mind is an abstraction! Most computationalists would not agree, but I think this criticism is exactly right; however, I don’t find it fatal to the thesis. Aristotelian dualism is also a good characterisation.

    I hope that my account of the closeness (near identity) of functionalism and computationalism to some extent answers the criticism of universal computation (a rock being a computer). The claim is not that computationalism explains the mind or that the mind is conscious because it is a computer, but that functionalism explains what kind of thing the mind is and also implies that computers ought to be able to realise consciousness and that it is reasonable to describe the mind as a computer (qua information processor).

  8. Björn,

    “the Ship of Theseus scenario where brain cells are successively replaced by silicon equivalents, I think it’s interesting for a reason not touched on in your chat, and that is the question of which physical aspects of our brains are salient for personality and personal identity”

    Indeed. Of course, I don’t think that’s a philosophical question, but rather a scientific one to be settled empirically. Still, the way it may be settled would then influence philosophical accounts of consciousness.

    DM,

    welcome back, and congrats!

    “indeed strong AI specifically is a claim about the possibility in principle of algorithmic consciousness, not intelligence”

    I’m not so sure that everyone involved in that field would agree. At least early on, there was a lot of talk about doing AI in order to learn about human-type intelligence. Anyway, if you are right, it really should be called AC, not AI.

    “Limiting the claim to intelligence only is the weak AI hypothesis”

    I thought that the weak thesis was about developing *some* kind of intelligence, not necessarily human.

    “strong AI proponents tend to believe for independent reasons that human-level general intelligence is probably not achievable without the system having subjective experience. So the TT is indeed a test of consciousness, but only if you buy those philosophical arguments (which you don’t)”

    Exactly. I think we’ve covered this territory before… 😉

    “Massimo’s intuitions are very out of whack when he imagines that it ought to be pretty feasible for a talented computer programmer to create a system which could consistently fool humans for hours”

    Oh I don’t know, I remember long sessions with ELIZA, and they were very entertaining. But my point is that *even if* it were possible, that still wouldn’t prove consciousness, because I make a distinction between external behavior and consciousness (and no, this isn’t a zombie-type argument: all it requires is for an algorithm to be sufficiently clever and flexible to mimic human discourse).

    “I don’t get the argument from Massimo that strong AI proponents are forgetting that neurons are not just passive conduits. The heart is active too.”

    Sure it is, but I still agree with Dan that it has more mechanical aspects than neurochemical / informational ones. Perhaps we are, as you say, simply articulating each other’s biases. But the fact remains that artificial hearts are a reality, while we have no clue on how to build artificial brains.

    “And good luck building a mind using only hydrogen atoms”

    But for the computationalist that ought to be possible in principle, no? If so, my argument stands, quite regardless of whether it is practical or not.

    “Since computers can instantiate pretty much any functional network”

    By “instantiating” you mean “simulating,” with which I have no qualms. But this is also territory already covered.

    “We can go a little farther and say that the brain is a computer if a computer is just a device for processing information in the same way that a heart is a pump”

    But the brain isn’t just that. And the heart isn’t just a pump either (as you say, it is biologically “active”).

    “connectionism, but rather than an alternative to computationalism this should be seen as a variant of it, since the causal network realised by human neurons ought also to be realisable by a computer”

    If one accepts the usual assumptions that I am agnostic (and even, admittedly, somewhat skeptical) about…

  9. Dan,

    “Massimo! Can you weigh in a bit more on the specific role you see possibly being played by the very specific chemical nature of the brain’s substrate?”

    Yes. The classical example there is from Searle, when he brings up the point that we can simulate photosynthesis in detail, and yet one thing we don’t get out of it is, well, sugar. The analogy is only partially convincing because one can argue (as DM has done a number of times in the past) that the brain doesn’t produce anything physical, just information. (I would push back on this and say that the brain produces *only* physical stuff, in the form of electrical and chemical signals and the morphological changes they in turn produce, but let’s not go there.)

    One modification of the photosynthesis analogy is the one I think I give in the video: while different substrates can and do yield a functional photosynthetic system, most of them don’t. One still needs quantum transducing molecules, as well as, of course, a photopigment.

    If one is not convinced by this, then the next analogy is the origin of life, which I also mention in the video. Sure, it is *possible* (though very far from being established, and in my mind not at all probable) that one can get life out of non-carbon materials. But since a lot of what makes an organism alive depends on the specific chemical properties of carbon, non-carbon life is unlikely, certainly if one ventures too far from carbon on the periodic table. Since consciousness is, as far as we know, a property of certain biological systems, and only of those systems, why think that it too is not substrate-sensitive?

    victor,

    “interestingly, it is like the three of us went to Best Buy and stood next to the wall of HDTVs. If I asked you guys to explain how they worked, I don’t think you two would get past the philosophical discussion, which I would probably find interesting”

    Uhm, no, I’d be arguing about which TV is the least ugly and the most likely to allow me to see the next World Cup in all its glory. Not much philosophy there…

    mechtheist,

    “‘Can airplanes fly?’ Most English speakers will immediately answer ‘sure’. But then ask, ‘Can submarines swim?’ and you’ll get a blank stare or a hesitant ‘no’. The language isn’t equivalent, but the situation is the same.”

    Ah, nice example! I’ll use it in the future, if you don’t mind.

    “what do pain receptors have to do with it, at all? Pain is a mental state”

    Yes, but it’s a mental state that is made possible by a certain neurophysiological apparatus, proof of which is that we can make the pain go away by interfering with the functionality of said apparatus.

    Philip,

    “Any discussion of strong AI that doesn’t include what’s going on in the “new synthesis” of AI+SB (artificial intelligence and synthetic biology) is amiss”

    I would be interested in hearing more about this. However, synthetic biology is a very young, and highly controversial, field in its own right, so I’m not sure that it would help the current discussion. Besides, if we are talking about replicating consciousness by using biologically-based systems, I have no doubt it can be done. We know of a very low-tech, and very fun, way of accomplishing such a feat: having sex.

  10. Hi Massimo,

    Haven’t had a chance to watch the video yet. Just wanted to say thanks for the link and ref provided in the previous comments section as I didn’t have a chance then. Much appreciated.

    Hi DM,

    Congratulations!

  11. Massimo: True, for me there would be plenty of philosophy. For Dan and yourself there would be plenty of engineering. I just think you guys are screaming out structure to me throughout the discussion, which was awesome.

    BTW, they say the brain structures we are most interested in for environmental interaction evolved first for movement, with language (and philosophy) coming later. Can a soccer player with a total silicon brain ever pass an athletic Turing Test?

  12. Unfortunately I don’t have time to view the video right now, but looking over the discussion so far I am bothered by the same problem as always: the lack of a clear and non-question-begging definition of what people even mean by consciousness, and the unwillingness to suggest an empirical test for consciousness. The WP and IEP entries on consciousness immediately launch into a discussion of how unclear it is what it even is. That is fairly convenient, because it allows those who doubt that machines can be conscious to reply “not good enough” no matter what; it is like trying to nail fog to the wall. But that just won’t do. One could just as well claim that Caucasians are the bestest of all humans because they have ujgarlac and other humans don’t, without ever clarifying what that is. Surely in that case people would be justified in rejecting my claim of Caucasian specialness.

    As for an empirical test, it is just too easy to say that the Turing Test isn’t it and then simply to assume that the burden of evidence is on the side of those who think that machines could be conscious. However, if one just doesn’t accept that consciousness is anything super-fancy beyond a thinking machine being ‘on’ and perceiving input, and if one doesn’t see why there should be a magical extra limited to humans or what that magical extra should even be, then one would locate the burden of evidence on the other side. I agree with the importance of the substrate, but the fact that it would be clearly impossible to build a thinking device from hydrogen does not already demonstrate that biological cells are the only possible substrate (there is possibly a name for that fallacy, but I understand naming it isn’t the done thing).

  13. Hi DM

    The substrate does matter to computationalists, but only insofar as it must be possible to build a sufficiently powerful computer from it.

    That depends on what “powerful” means. If computationalism is true then, in principle, I could be, for all I know, a mechanically simple instantiation of a Turing machine with a really, really long tape.

    As I look at the word “the”, I appear to be seeing all three letters at once. Even to appear to be seeing all three letters at once implies some connection. I still don’t see what that connection would be if the process of seeming to see these three letters at once were billions of individual mechanical operations, one after the other. By the time the last relevant crank of the machine happened, the cranks of the handle relating to the first part of the word would have been millions of years in the past.

    And the simple mechanical operation that happens each time only has meaning to someone who can understand the mechanism and the code.

    So, if I were a simple mechanical device with a really, really long tape, how could I even seem to be seeing the entire word “the” at the same time?
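
    For concreteness, here is a minimal sketch (Python; the rule table is a toy invented for this comment) of the kind of machine I mean. Note that at every step exactly one cell is read and one is written, strictly one after another:

    ```python
    # A minimal Turing machine: every step is a tiny local operation.
    def run_tm(rules, tape, state="start", pos=0, max_steps=100):
        tape = dict(enumerate(tape))            # sparse tape, "_" is blank
        for _ in range(max_steps):              # guard against infinite loops
            symbol = tape.get(pos, "_")
            if (state, symbol) not in rules:
                break                           # halt: no applicable rule
            write, move, state = rules[(state, symbol)]
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Toy rule table: scan right over "the", marking each letter as "seen".
    rules = {
        ("start", "t"): ("T", "R", "start"),
        ("start", "h"): ("H", "R", "start"),
        ("start", "e"): ("E", "R", "start"),
    }
    print(run_tm(rules, "the"))  # THE -- three separate steps, never "all at once"
    ```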

    I also agree that Dan’s point about functionalism implying that the mind is abstract is good. I heard Searle say something similar and he added that he wished that he had thought of that before the whole “Chinese Room” thing.

    With the ELIZA-style programs, I am quite surprised that they are not much better. They are not even much better than the attempt I made back in the early ’80s on an IBM PC with Turbo Pascal, and that was not very good at all. But it still fooled people just a little; some people said, “You are cheating, there is someone on the other end providing these answers”.

    Maybe the Turing Test is less a test of machine intelligence than a test of how easy we are to fool.
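
    For anyone who never played with an ELIZA-style program, the trick is shallow pattern matching; a minimal sketch (Python; these rules are invented for illustration, not ELIZA’s actual script):

    ```python
    import re

    # A few ELIZA-style rules: surface patterns, no understanding at all.
    RULES = [
        (r"I am (.*)",      "Why do you say you are {0}?"),
        (r"I feel (.*)",    "How long have you felt {0}?"),
        (r".*\bmother\b.*", "Tell me more about your family."),
        (r"(.*)",           "Please go on."),              # catch-all
    ]

    def eliza(utterance):
        for pattern, template in RULES:
            m = re.match(pattern, utterance, re.IGNORECASE)
            if m:
                return template.format(*m.groups())

    print(eliza("I am worried about the Turing Test"))
    # -> Why do you say you are worried about the Turing Test?
    ```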

  14. Hi Alex SL,

    Unfortunately I don’t have time to view the video right now, but looking over the discussion so far I am bothered by the same problem as always: the lack of a clear and non-question-begging definition of what people even mean by consciousness, and the unwillingness to suggest an empirical test for consciousness.

    It is not an unwillingness to suggest such a test; it is the apparent impossibility of there being such a test. I can’t even be sure if I am talking about the same thing as you, because I have no idea whether or not you are conscious, whether you feel pain or nausea or anything.

    Now I am pretty sure I know what I mean when I say I feel pain but I simply have no way of knowing whether or not you ever feel any pain.

    I know what I mean by not conscious because I go for an operation, the needle gets put in my arm and someone starts counting, and the next thing I know I am lying in another bed much later and a nice nurse is saying “How are you feeling, Mr Herbert?”. When I heard the counting I was conscious, and when I heard the nurse asking how I was I was conscious, but in between I was not conscious. But I cannot communicate that to anyone else unless I have a way of knowing that there was that difference in someone else. Maybe you, and some or maybe even all people except me, are the way I was in between those events all the time.

    There is nothing in science that suggests that this is impossible.

    Giulio Tononi, a neuroscientist, has a theory of consciousness and says that an implication of it is that a square array of XOR gates is also conscious. I don’t see how this could be tested, and I don’t even know what it could mean.

    And yet I still say that I know what I mean when I say I am in pain.

    I can remember lying in hospital with kidney stones trying to think what it would mean to say that I don’t know whether or not I am in pain. It seems that I did know.

  15. The issue is all with consciousness. If whatever entity is under consideration isn’t subjectively conscious then it only acts like it understands. Until there is a “consciousness detector” there is no way to find out. There doesn’t seem to be any way to know if other people are conscious or not so not much more can be said. Does intelligence require consciousness? This is just a matter of definition not of fact. Casual use is ambiguous on this so appeal to intuition is completely inappropriate, extremely misleading. Some people are willing to use “understanding” based on observable behavior only, others want to include subjective consciousness. With other people we generally ignore the difference and assume “other minds” to avoid all the problems that arise if we sound like we think other people aren’t conscious.

  16. Hi Massimo (and Dan). I am gratified that you, Massimo, find the photosynthesis analogy only partially persuasive. I am writing to encourage you to take the next step, which is to find the analogy totally unconvincing. As I think you would agree, the product of photosynthesis is sugar (the thing we care about) plus other stuff (waste products we don’t care about). I suggest the product of the brain that we care about is information, and all the other stuff that gets produced (waste energy and waste products) we don’t care about. Please note that some of the physical effects, e.g., morphological changes or increases in specific neurotransmitters, we care about only because of the information they contain by virtue of the physical change.

    I must admit that I don’t understand how the “modification of the photosynthesis analogy” changes anything. Your example seems equivalent to saying “there are several configurations of matter which can calculate 4 + 26 (e.g. my phone, Babbage’s Analytical Engine, my 12-year-old), but most don’t.”

    Finally, I understand your “origin of life” analogy as saying it is intuitively possible to imagine non-carbon-based life, but physicists have specific reasons why it is very unlikely. But no one has offered analogous reasons why any information process performed by the brain cannot be reproduced in another system.

  17. A very nice video, I enjoyed it! 🙂 I especially liked Massimo’s account of the Chinese Room argument as implying that there is “something missing” when one makes the step from syntax to semantics. That is the whole point of Searle’s argument, nailed down.

    Regarding mind dependence on substrate, teleportation, mind uploading etc., there is a nice thought experiment that can hopefully clear up certain points regarding consciousness. Let us suppose that, through some sophisticated technological wizardry, we have a machine that can make a copy of a whole person, with atom-level precision. Suppose that this can be done almost instantaneously, by scanning the original person and then creating a living clone in the other part of the room (star-trek-style, only the original does not disappear). Immediately after creation, the clone contains the exact same brain state as the original.

    The question is: does the original person start having some “transcendental” experience of having the same consciousness in two different bodies simultaneously? I think not — the consciousness of the original stays with the original, while the clone obtains their own consciousness, along with their new body. The original and the clone can talk to each other, shake hands, etc. But the consciousness of the original does not get *transferred* to the clone. One could say that it is still tied to its original substrate.

    Now consider a similar scenario — teleportation, where the original is being destroyed in the process. The most blunt way to imagine it is the following: the operator performs the copying of the original as in the previous case, and then destroys the original by shooting them in the head with a gun. One can think of more sophisticated methods of destroying the original, but there should be no conceptual difference — this is because in information processing the “move” operation is equivalent to the “copy and then delete” operation.
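
    In software terms this equivalence is mundane; a toy illustration (Python, invented for this comment):

    ```python
    # "Move" as "copy, then delete": afterwards the bytes at the new
    # location are identical, and nothing in the copy records how the
    # original was disposed of.
    original = bytearray(b"brain state")
    clone = bytes(original)                   # copy
    original[:] = b"\x00" * len(original)     # delete: overwrite the original
    assert clone == b"brain state"            # the copy is unaffected
    ```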

    The second question is: after being cloned and then killed by the operator, does the consciousness of the original get transferred to the clone? Most likely not — the clone already has their own brain and their own consciousness, and “deleting” the original (by killing them) is not really a step that facilitates any kind of “transfer” of consciousness.

    Now, for all intents and purposes of the *rest* of the world, the clone will keep behaving precisely as if the original got teleported along with their consciousness, simply because it is impossible to distinguish the copy from the original, looking from the outside. But the person being teleported *does* feel the difference, because their consciousness dies with them in the process of deletion.

    This thought experiment is thus a strong indication that consciousness is substrate-dependent — information can be copied, but the substrate cannot.

    I’ll address other topics in subsequent posts, if I manage to find the time. 🙂

  18. Regarding the choice of the substrate for an intelligent agent, I think it is not really carbon-dependent, and can be made of other materials, as long as it is functionally equivalent.

    One can imagine a computer, simulating the workings of a neural network, taking into account all quirks of such a simulation — not just neural impulses, but reaction to chemical agents, details of synapse workings, etc. Add also some “random noise” to its functioning, just as much as a human brain has. Program the computer to include some basic instinctive behavior (the same that human babies have from birth), pack it up into a skull-sized box, attach cameras, microphones, legs, arms, skin with touch-sensitive sensors, etc. In other words, make a fully functional robot.
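
    As a minimal sketch of what “simulation plus random noise” could mean (Python; the leaky integrate-and-fire neuron and all parameters here are illustrative assumptions, not a recipe for the robot):

    ```python
    import random

    # One leaky integrate-and-fire neuron with injected Gaussian noise.
    def simulate(inputs, threshold=1.0, leak=0.9, noise=0.05):
        v, spikes = 0.0, []
        for t, i in enumerate(inputs):
            v = leak * v + i + random.gauss(0.0, noise)  # integrate + noise
            if v >= threshold:                           # fire and reset
                spikes.append(t)
                v = 0.0
        return spikes

    print(simulate([0.3] * 20))  # spike times jitter from run to run
    ```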

    Then turn it on and let it interact with the environment, and start teaching its neural network in the same way you would teach a baby — to walk, to talk, to be careful not to touch a hot stove, etc. Teach it math, language, sciences, geography, etc., like you would a regular child — send it to school, help it with its homework, etc. Full-blown upbringing, for several years.

    My claim is that — if the robot’s “brain” (i.e. the simulated neural network) is big enough, and structurally similar enough to the real human brain, the robot will display both consciousness and intelligence in the same way humans do. There should be no difference whether the underlying substrate is carbon-based or silicon-circuit-based. So in this sense the substrate can vary, provided that the embodied functionality is equivalent enough to that of a human. (Note that this does not enter into the question of qualia, i.e. whether the robot would be a real robot with experiences, or be a zombie-robot, merely behaving like a robot with real experiences…).

    On the flip side, this kind of a conscious and intelligent robot will be equally prone to all the misfeatures of a real human brain (and IMO any neural-network-type entity). It would make mistakes in math class, have trouble figuring out fractions, could get drunk, could get into a fight with other children etc., just like a human could.

    The issue is that there is no indication that the robot could be *more* intelligent than an Average Joe.

    And that is the crux of the matter for AI — its *purpose*. Nobody wants to make an AI that would be as fallible as a human, or behave like a dumb idiot, or such. What we really want is a *superior* AI, one that would not be prone to making mistakes in reasoning, logic, math, etc., and one that would be more efficient in problem solving. But there is no guarantee that the simulation of a neural network would achieve anything more in that regard than the real human neural network can already do. Until we manage to understand the details of what properties of a given neural network provide it with intelligence, we have no way of building an AI which can have those properties enhanced beyond the level of a human.

  19. There is something in physics called the “no-cloning theorem” which is very relevant to the idea of making an exact copy of a person: you can’t. Given any system, there is something called quantum information which can’t even be measured, so you can’t even get at the information that you need for an exact copy. The information definitely exists and is needed to make a copy. If you were making a system from scratch you could prepare the system in such a way as to have that information, but if you are given a system to copy and don’t have that information, it can’t be measured.
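
    For the mathematically inclined, the standard linearity argument behind the theorem can be sketched in a few lines (this is the textbook proof, stated informally):

    ```latex
    % Suppose a single unitary U could clone every state: U(|psi>|0>) = |psi>|psi>.
    \begin{align*}
    U(\lvert 0\rangle\lvert 0\rangle) &= \lvert 0\rangle\lvert 0\rangle,
    \qquad U(\lvert 1\rangle\lvert 0\rangle) = \lvert 1\rangle\lvert 1\rangle.\\
    \text{Linearity gives:}\quad
    U(\lvert +\rangle\lvert 0\rangle)
      &= \tfrac{1}{\sqrt 2}\bigl(\lvert 0\rangle\lvert 0\rangle
         + \lvert 1\rangle\lvert 1\rangle\bigr),
      \quad\text{where } \lvert +\rangle
        = \tfrac{1}{\sqrt 2}(\lvert 0\rangle + \lvert 1\rangle),\\
    \text{but cloning demands:}\quad
    U(\lvert +\rangle\lvert 0\rangle)
      &= \lvert +\rangle\lvert +\rangle
       = \tfrac12\bigl(\lvert 0\rangle\lvert 0\rangle
         + \lvert 0\rangle\lvert 1\rangle
         + \lvert 1\rangle\lvert 0\rangle
         + \lvert 1\rangle\lvert 1\rangle\bigr).
    \end{align*}
    % The two right-hand sides differ, so no such U exists.
    ```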

    BUT, and this part is really cool, you could teleport the information via a process called entanglement. There is a major consideration, though: in teleporting the information, the original must be destroyed! (The process of “teleportation” destroys the quantum information in the original as it transmits it to the receiver.) This is exactly the kind of thing that one would require if someone’s consciousness were to be actually transferred from one brain to another. As you pointed out above, merely making a traditional copy of someone wouldn’t transfer the actual consciousness; it would just produce a copy that thought they were the original person. Considering quantum mechanics, however, such a copy is impossible. The usual notion of making a copy assumes that one can measure all the information from the original in some manner. This is called the classical information. This is what can be measured, but it is insufficient to make an exact copy.

    The substrate dependence you refer to might just be the quantum information.

  20. Regarding whether the human mind works like a computer — I’d say it doesn’t.

    The main point is computability. Regardless of what one may think about the Church-Turing thesis, it certainly does apply to the real-world computing devices that we have today. They are all embodiments of a Turing machine, i.e. they all evaluate recursive functions. In other words, they cannot make guesses.

    A human could (in principle) guess the exact value of the Kolmogorov complexity of a given string (though there could be no way to prove that the guessed value is indeed correct). In contrast, a computer could only compute various upper bounds.
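
    A standard way to see this one-sidedness is compression, which yields only an upper bound (a Python sketch; the inputs are arbitrary examples):

    ```python
    import os
    import zlib

    # K(s) itself is uncomputable, but any compressor gives an UPPER bound:
    # the length of the compressed string (plus a fixed-size decompressor).
    def kolmogorov_upper_bound(s: bytes) -> int:
        return len(zlib.compress(s, 9))

    print(kolmogorov_upper_bound(b"the" * 1000))     # highly compressible: small bound
    print(kolmogorov_upper_bound(os.urandom(3000)))  # incompressible: bound near 3000
    ```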

    When discussing whether the human mind works like a computer or not, we are essentially asking whether the working of a human brain can be represented by a recursive function (i.e. an algorithm, a deterministic sequence of instructions). This is intimately connected to the issue of the Chinese Room, in which the operator is assumed to follow an algorithm step by step. In my opinion, the distinction between the syntax and semantics, the symbol manipulation and symbol understanding, is equivalent to (non)computability — semantics is the part of the mind that is not recursive. An algorithm does not (and cannot) define the meaning of a string of symbols, and yet the human mind does that all the time.

    On the other hand, this uncomputable semantic behavior can then only be described in terms of making guesses, since it is non-algorithmic. And making guesses runs the risk of making a *wrong* guess. So in the end it seems that the human mind displays three different types of behaviour that a computer does not: (1) it can make guesses, (2) it can make mistakes, and (3) it can ascribe semantic meaning to things. My conjecture is that these three properties are actually three facets of the same property — the uncomputable behaviour of the mind.

    Of course, it goes without saying that the “mind” here can be a neural network simulated on a computer, like in my robot example in the previous post. This is not a problem, since the simulation of a neural network is not (and must not be) algorithmic — the neural network continuously interacts with the environment, there is some level of “random noise” in its functioning, etc. — all of which are non-algorithmic, uncomputable parts that enter the simulation. So there is no contradiction between saying that a human mind does not work like a computer, and saying that a computer-based robot could have a (human-like) mind. The simulation of the brain in the computer memory is not recursive, i.e. it does not follow an algorithm, because it is constantly bombarded with new data and various sources of noise, which cannot be generated algorithmically. An algorithm is a set of rules that, given certain initial condition, runs uninterrupted and without interference from the environment. This is not the case for either a human brain or a computer-simulated robot brain.

  21. Robin Herbert,

    I find it strange that you are playing the solipsism card here. The idea that other humans are just like I am appears to be the fairly obvious null hypothesis, and the idea that they are different – that I am the only conscious human, for example – so bizarre (and narcissistic) a priori that it would need a lot of evidence to lend it even just a smidgen of plausibility.

    Actually, however, this is surprisingly similar to the point I really wanted to make in that it is all about the burden of evidence. Again, if you want to consider it plausible that you are the only conscious human, the burden of evidence would appear to be on you, simply because there is no reason whatsoever to make that assumption. If somebody claims that they can replicate a specifically human mind on a different substrate (be it a computer or some Ship of Theseus scenario), the burden of evidence is on them, because it is a fairly strong claim and because it seems reasonable to assume that one has to be a human with human neurons and suchlike to have a mind that works like a human’s (plus all the technical, practical difficulties).

    On the other hand, the claim that it should be possible, given some kind of computer architecture that may potentially differ quite a lot from the computers we are currently having in front of us, to build some thinking machine that reproduces the functionality of “consciousness” (whatever it is) in its own way is not a very grand one. For starters, we know that such thinking machines have already been built before, and indeed continue to build more of themselves every day: humans. The only counter-argument is ultimately that there must be something about “consciousness” (whatever it is) that cannot be reproduced by any architecture that is not a human body. If phrased in this way, it should become immediately obvious that the burden of evidence is, in this case, on the side of the skeptic. If they want to make sense they will have to clarify (1) what it is that they consider irreproducible, (2) why they consider it irreproducible and (3) how they would test for its presence or absence.

    It is not the job of the naturalist to provide these items, not least because in all comparable cases it turned out that there was no magic to it; life, for example, turned out to be just more complicated chemical reactions instead of a special elan vital that an object either has or doesn’t have.

  22. Yes, wonderful discussion guys! Like you I consider some of these questions to be amazingly important, though others are mostly a waste of time. Furthermore, I do believe that I’ve developed some pretty good answers for the important ones, and so will present a synopsis in the hope that some of the interested will indeed inquire:

    “Minding” is surely a useful term from which to reference what mind does, but consider my own definition for the noun. I define mind as “That which processes information,” leaving everything else as “mechanical.” Observe that a computer has “mind,” though we don’t generally consider a rock to process information, rendering it “mechanical.”

    Note that “mind” isn’t inherently special here — specialness instead seems to emerge through “conscious mind,” and specifically its “qualia” input. Effective consciousness seems to require “sense” and “memory” inputs, like a normal computer, though it also seems to need motivation through the good/bad of qualia. In fact one concise definition for consciousness might effectively be, “A form of computer for which existence is not perfectly inconsequential.”

    To go just a bit further still, consider “thought” as the processing element of the conscious mind. Thus computers will not “think,” by this definition, without the motivation of qualia. Furthermore, I identify two separate varieties of thought. The first is “the interpretation of inputs,” such as pain, an image, a memory, and so on. The second is “the construction of scenarios,” where inputs are effectively pondered in order to figure out what to do, given that feeling good and not feeling bad is what matters to the conscious entity. And why indeed did consciousness evolve? Perhaps to promote autonomy — evolution might not otherwise be able to program its creations well enough in sufficiently diverse environments.

    Alex SL, above you wisely asked for an effective definition, and I do hope that I’ve delivered. As a youngster I noticed that if the field of philosophy has developed amazing questions, but without accepted answers for them, then perhaps there are various structural impediments in the field that I should avoid. Well, I am back now, and yes, I do see major problems here (which tend to get me into trouble when I complain too much!). I most certainly do need to be patient, though I should be able to manage when given great discussions such as this one to ponder!

  23. One thing that amuses me in these discussions is that whenever Massimo says something like “surely nobody would be so crazy as to suggest…”, I’m already guessing that I’m going to support the crazy suggestion to follow.

    In this case it’s the idea of a conscious mechanical computer. I imagine that most computationalists would grant that a powerful mechanical (Babbage-style perhaps) computer could indeed be conscious if it could ever be built. This is unlikely to be feasible because the more gears and cogs and so on you add the more difficult it is for motion to be transmitted faithfully throughout the device. Even with perfectly machined parts there seems to be a low upper limit for achievable complexity. If these problems could be overcome then there’s no reason a mechanical computer could not be conscious with the right program (or so I say).

    I disagree with Dan when he says that the Chinese Room does not completely rely on intuition. It depends on the intuition that it is absurd to imagine that the room (or the system) could be conscious or understand. Even if the guy memorises the system and leaves the room, I would say a distinct virtual mind is supervening on that of the guy. You probably find that idea absurd, but I don’t. That’s the problem!

    Hi Robin,

    > how could I even seem to be seeing the entire word “the” at the same time?

    It doesn’t seem to me that I’m a ball of neurons either.

    Because all ‘seeming’ is is how your mental state is represented to you. If your mental state represents that you are seeing three letters at once, that is how it will seem to you. It doesn’t really matter how your mental state is implemented. The “you” you identify with is operating at so many levels of supervention above the raw mechanics of your thinking device that there is little connection between how things seem and how this thinking comes about.

    Hi Massimo,

    From SEP: ‘According to Strong AI, these computers really play chess intelligently, make clever moves, or understand language. By contrast, “weak AI” is the much more modest claim that computers are merely useful in psychology, linguistics, and other areas, in part because they can simulate mental abilities.’

    > But for the computationalist that ought to be possible in principle, no?

    Not necessarily. The computationalist only says that if you can figure out how to make a system compute the right algorithm, then you can make that system conscious. The computationalist does not say that it is possible to make such a system out of any substrate.

    > I would push back on this and say that the brain produces *only* physical stuff, in the form of electrical and chemical signals

    If you replaced the brain with a computer it would also be producing electrical and chemical signals as output. It would need to have the same *interface* with the rest of the body even though internally it could operate completely differently.

    Anyway, just want to wind up by saying it was a good discussion, guys. Massimo and particularly Dan did a great job of trying to be fair to the other side.

  24. I did read in the paper that ‘Watson’ will now have access to ‘wisdom’, by which they mean they have fed it a lot of TED talks.

    That is good. Now when someone asks Watson about Plato, they will be directed to a claim that Platonic Idealism dominated the thinking of all the major philosophers of the 17th, 18th and 19th centuries, thereby delaying the discovery of the theory of evolution.

    If they ask Watson about Aristotle they will be directed to a site which claims that Aristotle thought that real flesh and blood rabbits were an approximation of an Ideal Platonic Rabbit.

    In other words IBM engineers are beavering away attempting to pioneer artificial stupidity, as though we don’t have enough of the natural stuff on tap.

  25. I enjoyed the discussion and found it very sane and useful.

    All these ideas about substrates etc. will remain a matter of conjecture, however, in the absence of any empirical study of consciousness. Everyone talks about consciousness as if it belongs to someone else and is inaccessible, which is surely just a little odd and may even be rather unscientific.

  26. Alex,

    “the lack of a clear and non-question-begging definition of what people even mean by consciousness”

    I’m constantly puzzled by this. I think of consciousness as the ability to have first-person phenomenal experience (you know, pain, hunger, desire, seeing red, etc.). There is nothing question-begging about it, and its meaning ought to be clear to anyone who has had such experiences.

    “the unwillingness to suggest an empirical test for consciousness”

    I can’t think of one, and it may be impossible.

    “too easy to say that the Turing Test isn’t it and then simply to assume that the burden of evidence is on the side of those who think that machines could be conscious”

    It may be easy, but it strikes me as epistemologically correct: some people are making extraordinary claims (about artificial intelligence and/or consciousness), so it seems only fair to think that the burden of proof is on such people to deliver.

    Robin,

    “Maybe the Turing Test is less a test of machine intelligence than a test of how easy we are to fool”

    Beautiful.

    vector,

    “Does intelligence require consciousness? This is just a matter of definition not of fact”

    I disagree, it is very much a matter of fact. What we know is that there is a rough correlation, in the biological world, between the two. It is hard to imagine a creature who is not intelligent and yet has consciousness (e.g., plants aren’t and don’t), or vice versa. But they are also clearly not the same thing. The interesting question (other than how exactly the brain generates conscious experience) is whether one could decouple the two artificially, building a highly intelligent machine that is not conscious.

    Marko,

    “does the original person start having some “transcendental” experience of having the same consciousness in two different bodies simultaneously? I think not”

    Agreed.

    “the consciousness of the original does not get *transferred* to the clone. One could say that it is still tied to its original substrate”

    Yup. Even Chalmers has admitted that he wouldn’t subject himself to an uploading machine if it meant having to go through a process destructive of the original. The man doesn’t put his life where his mouth is…

    “I think it is not really carbon-dependent, and can be made of other materials, as long as it is functionally equivalent.”

    I agree with the first part, not the second. I have never said that it has to be carbon. But functional equivalency is too weak a requirement: could consciousness arise out of pieces of cardboard, properly connected? I don’t think so, because cardboard doesn’t have the kind of physical-chemical properties that actual neurons have. And these have those properties because of the materials they are made of.

    “What we really want is a *superior* AI, one that would not be prone to making mistakes in reasoning, logic, math, etc”

    Cue the soundtrack of Battlestar Galactica… 😉

    “the distinction between the syntax and semantics, the symbol manipulation and symbol understanding, is equivalent to (non)computability — semantics is the part of the mind that is not recursive. An algorithm does not (and cannot) define the meaning of a string of symbols, and yet the human mind does that all the time.”

    I like it! I’ll think some more about this point.

    Eric,

    “I define mind as “That which processes information,” leaving everything else as “mechanical.””

    I guess we think of “minding” differently. I think one of the problems here is precisely talk of “mind” as if it were a disembodied object, distinct from the brain, somehow, and yet part of it. I rather think of “minding” as a verb: it is what the brain (in interaction with other parts of the body and the external environment) does. Just like lungs do breathing, the heart does blood circulating, and so forth.

  27. Robin Herbert said: ‘If they ask Watson about Aristotle they will be directed to a site which claims that Aristotle thought that real flesh and blood rabbits were an approximation of an Ideal Platonic Rabbit.

    In other words IBM engineers are beavering away attempting to pioneer artificial stupidity, as though we don’t have enough of the natural stuff on tap.’

    One of the top ten Theory of Mind comments of all time!

    Perhaps the ultimate IBM objective is to simulate the mind of the American voter?

    1) What I find psychologically interesting about these arguments is: AI, computationalism, Turing Tests, etc. are all perceived ways of seeing the brain, or of tricking other brains into thinking they are talking to another biological brain. Everything is an argument about the perception of OUR objective reality and about what we CALL brain processes: intelligence, consciousness, sentience, qualia-based…

    2) By OUR objective reality it is clear that we don’t just interact and bond or non-bond with other particles, orbit stars, turn towards sunlight, swim around in puddles…

    Massimo said: “If one is not convinced by this, then the next analogy is the origin of life, which I also mention in the video. Sure, it is *possible* (though very far from being established, and in my mind not at all probable) that one can get life out of non-carbon materials. But since a lot of what makes an organism alive depends on the specific chemical properties of carbon, non-carbon life is unlikely, certainly if one ventures too far from carbon on the periodic table. Since consciousness is, as far as we know, a property of certain biological systems, and only of those systems, why think that it too is not substrate-sensitive?”

    What we CALL carbon is actually a unique structure of physical particles that ‘instantiates the forces of nature’ such that the structure can perform very unique interactions with other physical particle structures. Likewise we can say that the Carbonness of carbon was something which the ancients perceived about carbon when they realized that it could be easily ground down into smaller and smaller carbon particles; this may have put them on the right track, but they failed to know that the fundamental particle level was no different from that of a rock. It was the forces of nature that computed those particles into the Carbonness of carbon; the unique property was still structure, which occurs much deeper than the perceived computational level. Likewise the Waterness of water, which at different levels of heat caused it to exist in clouds, flow through river beds, pool in craters or form as ice at the poles to stabilize the heat level and sea level of the oceans.

    The great biological question is how do the neurons ‘instantiate the forces of nature’ to achieve an environment which is greater than an adjacent particle or water puddle.

    The problem is that AI people want to computationally simulate a brain in a program when there is little recognition that the brain is actually 30 or so sub-organs and structures to begin with, plus there is no theory of how neurons interact with other neurons at the fundamental-forces level to create the Neuronness of neurons. From the engineer’s POV what this seems to be, first, is a good old-fashioned systems integration problem, GOFSIP, which is why I made the analogy to standing with you guys in the Best Buy store looking at the wall of HDTVs. There seems to be no proper starting point or locus for these philosophical discussions when brain SYSTEM structure / ontology is so deficient.

  28. Perhaps the focus should be more on basic self-consciousness when looking at machines’ capability to think as we humans do.
    Human thinking has a key characteristic, which is the possibility to think about oneself as an existing entity: an entity existing in the environment and represented as other humans are represented. Such consciousness of oneself as an existing entity is a foundational component of self-consciousness. Having a representation of oneself as an existing entity in an action scenario is a considerable advantage in the simulation of action. The possibility to represent oneself as we represent others also goes with theory of mind and with the development of communication, up to human language, in evolution.
    Human thinking cannot exist without such performances of self-consciousness.
    But the nature of that self-consciousness is a mystery for today’s science and philosophy, and it looks pretty obvious that we cannot build a machine carrying out a performance that we do not understand. Regarding the possibility to successfully replace brain cells with silicon equivalents, I’m afraid that such an option will make sense only once the build-up of life is mastered in our labs. And we are far from that. “We might be missing something fundamental and currently unimagined in our models of biology,” as R. Brooks said.
    Consequently, I would support the position that AGI is not a foreseeable achievement today. Better understandings of the natures of life and of self-consciousness are needed. (Such limits on AI have already been addressed using a model of meaning generation, the “something missing” when going from syntax to semantics: http://philpapers.org/rec/MENTTC-2)

    Liked by 1 person

  29. Excellent discussion!

    Massimo: a science question from a non-scientist: is Vector Shift’s point about the problem of copying information at the quantum level basically what you were getting at when you said, in the photosynthesis analogy, that quantum-transducing molecules were needed?

    DM: Congrats!

    “Even if the guy memorises the system and leaves the room, I would say a distinct virtual mind is supervening on that of the guy. You probably find that idea absurd, but I don’t. That’s the problem!”

    Can you explain this a bit more? I haven’t the foggiest notion what you mean here.

    Liked by 1 person

  30. DM wrote:

    “I disagree with Dan when he says that the Chinese Room does not completely rely on intuition. It depends on the intuition that it is absurd to imagine that the room (or the system) could be conscious or understand. Even if the guy memorises the system and leaves the room, I would say a distinct virtual mind is supervening on that of the guy. You probably find that idea absurd, but I don’t. That’s the problem!”

    ——————————————-

    The point is that you don’t have to rely on *Searle*’s guy. You can do it yourself. Presuming you don’t speak Hebrew, I could give you a set of rules for what to do with a number of symbols that are presented to you, in strings of varying lengths, based on the shapes and order of those strings of symbols. Is it “intuition” to point out that not only don’t you understand a single word of Hebrew, you don’t even know that the language you are manipulating *is* Hebrew?
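
    To be concrete, the rules in question operate on the shapes of strings alone. Here is a toy sketch of the sort of thing I mean, in Python (the rules and strings are my own illustrative inventions, not a serious conversation program):

```python
# A purely syntactic rule-follower: match an input string of symbols by
# shape alone and emit whatever string the rule book dictates. Executing
# the rules requires no knowledge of what the symbols mean, or even that
# they are Hebrew. (These two rules are illustrative inventions.)

RULES = {
    "שלום": "שלום לך",
    "מה שלומך?": "טוב, תודה.",
}

def follow_rules(symbols: str) -> str:
    """Return the output the rule book dictates for this exact shape."""
    return RULES.get(symbols, "?")  # no matching rule: emit a default mark

print(follow_rules("שלום"))  # a fluent-looking reply; the operator understands nothing
```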

    So, you stamp your foot and say, “But *my* intuition is that when I engage in this following of syntactically defined rules I *do* understand Hebrew and *do* know that I’m speaking Hebrew in this case. Prove that I don’t!”

    My answer to you would be to point out the difference between your experience of the manipulation of Hebrew symbols and your listening, speaking, reading, and writing in a language you *do* understand, like English. Once you do this, it is immediately obvious, not only that you *do not* understand Hebrew, but furthermore that *what you’re doing* in the Hebrew case is nothing like what you do, when you understand and speak English.

    Look, if what we do is what computers do when they translate, then we already have machines that understand languages: Google translator. But then there’s no issue at all, and one wonders what all the fuss is about, on *both* sides.

    Liked by 3 people

    Research into AI is valuable for improving machinery for packaging as consumer commodities; it will undoubtedly continue to increase unemployment in various sectors; it keeps researchers busy, especially those in universities spared the pains of having to teach more than a couple of courses a semester, opening opportunities for underpaid adjuncts. It’ll continue to entertain the masses with distraction from more pressing political and economic issues. Shiny new toys always have their fascination.

    The more I read about strong AI, the more I am convinced that:

    It’s a form of science fiction: a mythopoetic construction of illusory utopian futures which reinforce faith in the social status quo. (Poverty, racism, war – all disappear in the Singularity, right?)

    It’s certainly a dualism, I think Cartesian, rather than Aristotelian. But either way, it carries an underlying faith in non-physical intellective properties transferable between entities without regard to the materiality of the entities. It’s what I would call a ‘mechanistic spiritualism,’ and like most forms of spiritualism, it doesn’t require reason or empirical verifiability – it’s a faith.

    As a faith with a highly developed language of justification, it’s produced reams of wild speculation that have virtually no use; reminding one of the more vapidly esoteric speculations of the Medieval scholars (angels on the head of a pin, consciousness in a silicon chip).

    As with most spiritualisms, it evidences a psychology of profound disappointment with the human experience and a profound disgust for the human body. Just think: we eat corpses to survive. Our guts are filled with bacteria, we have to squat to excrete their waste along with ours. Our bodies are given over to all sorts of ills and disabilities, worsening as we age. We have genitalia that frequently irritate us in a way we can’t quiet, but when we are fortunate to do so, sticky smelly fluids result. Then – we die.

    Who wouldn’t want their consciousness (dualistically: soul) uploaded into a beautiful machine with no such problems?

    But as with most spiritualisms, this implicit asceticism requires developing a blindness to the rich diversity of the human experience. Certainly strong AI proponents love, feel awkward, grieve, and struggle with private rages and anxieties like everyone else – they’re human; but in living through these they rely on their faith-promised future, in which some form of life – which they will have helped create – will appear needing none of those contingent and ephemeral (yet enormously powerful) experiences. That is why none of these will be programmed for in Strong AI: they probably can’t be, but even if they could, no one in the field wants them, and no one can even give a good account of them.

    The money’s there for the research; if the machine can be built, go ahead.

    I want that computer not only to explain what an apple is and why it is delicious; I want a computer that eats an apple and then exclaims ‘delicious!’

    And let it provide apples to all the hungry people of this world. Then it might be useful.

    Liked by 2 people

  32. Hi Massimo. You told me:

    I guess we think of “minding” differently. I think one of the problems here is precisely talk of “mind” as if it were a disembodied object, distinct from the brain, somehow, and yet part of it. I rather think of “minding” as a verb: it is what the brain (in interaction with other parts of the body and the external environment) does. Just like lungs do breathing, the heart does blood circulating, and so forth.

    Actually this “minding” is something that we agree on — I see no disembodiment either. The technical difference is that I’ve also provided an associated noun, and one that’s far more inclusive than the way most people think of “mind.” This definition would surely include an ant, which seems to have a non-conscious mind, though not a tree, which seems purely mechanical. If you haven’t settled upon a noun on which to base your “minding” concept, then perhaps it would be helpful to do so. Nevertheless, when others consider my work, they will be in error if they don’t also use my provided definitions. Here “the mental” processes information, and “the mechanical” does not.

    You’ve mentioned just above to Alex SL that you consider consciousness as “the ability of having first person phenomenal experience (you know, pain, hunger, desire, seeing red, etc.).” I think this is great since I consider it this way as well, though we obviously do acknowledge that the field has much to figure out. I believe that I could contribute here quite substantially, and as we go along we’ll see whether or not you find my ideas useful. Please do ask me any “stumpers” that you think my models may not adequately address — I do love challenges such as these!

    Liked by 1 person

    In order to argue for artificial consciousness (as opposed to artificial computation, this being a trivial problem), we would have to start, it seems to me, by assuming, tautologically, that human beings are conscious machines. The problem would then be that if we are no more than this, then the whole debate will be a trivial one about which substrates may and may not support consciousness, and we might as well be flying kites.

    The real problem, and it seems to me to be the only problem, the problem that causes all the other problems, arises when we try to prove that we are conscious machines. This is often assumed, and some whole areas of research take it for granted, but if we are no more than this, then the whole of religion would be nonsense and metaphysics would be impossible.

    I feel that these debates often take place at too high a level. The underlying problems, which are invariably metaphysical and starkly simple, become buried under a lot of complications and difficult arguments that can never be resolved at the level at which they are taking place.

    Surely we cannot just assume that we are conscious machines and that religion is nonsense and move on. If we are proponents of artificial consciousness, then we must start with this crucial implication and deal with it.

    Yet the Buddha’s teachings are irrefutable and unfalsifiable, such that this assumption looks philosophically naive. It can be demonstrated that it would be impossible to prove that we are merely conscious machines, even by building one. This does not mean we are not one, but it does mean that there will never be any justification for the assumption that we are one. In the same way, we might assume that the sun orbits the earth. We just have to ignore the counter-evidence.

    The substrate for consciousness is not our only problem; as yet we have no substrate for matter either. Maybe it would be better to start with this problem than to attempt to gainsay over three millennia of first-person reports stating that there is a lot more to our consciousness than we usually notice.

    The sages generally say that as human beings we are conscious machines for nearly all intents and purposes (or perhaps semi-conscious ones), but not in fact, and that it would be possible to know this and thus to replace mechanical behaviour with spontaneity.

    Is the mystical description of consciousness actually less plausible than the idea that we are machines? The first allows a solution for metaphysics and explains religion; the second leaves us in the same old impossible muddle. It hardly seems like a contest.

    Liked by 1 person

  34. Hi Aravis,

    My answer to you would be to point out the difference between your experience of the manipulation of Hebrew symbols and your listening, speaking, reading, and writing in a language you *do* understand, like English.

    I’m not so sure there is as significant a difference here as you make it seem, Dan.

    As someone who has (somewhat) learned Hebrew, I assure you that at first I was parsing and manipulating strings of characters. I think I still do that (even in English!), but the only real difference is the ease with which I do it. But if you are right and I am indeed doing something different in English now than I did in Hebrew when I started out, there should be points (or stages, I’m not particular with regard to the conceptual organization in this case) in learning the language where I started doing things differently — yet that doesn’t seem particularly reasonable to me. Instead, the only difference I perceive is the ease with which I do whatever I do when I’m operating in either language. To use Daniel Kahneman’s analogy, my Hebrew is still largely a System 2 task whereas my English has become a System 1 task.

    Further, I don’t doubt that with more practise I could move my Hebrew to a fluent (i.e. System 1) task. But again, if that were to happen I don’t think anything fundamental in the mechanics of my thinking will have changed. Instead, I would suggest that I cease to be aware of what I am doing with the language. And in the sense that I can consequently focus on different tasks (e.g. interacting with your post rather than on constructing grammar for this one), I agree that I can be experiencing and thinking something different — but the difference isn’t really one of kind; instead, it’s one of attention.

    Liked by 2 people

  35. Fascinating discussion as usual but I’m going to throw a couple of hats in the ring which relate back to the video and my previous comment on the brain as a Secondary Stimulus-Response Brain System built on the Primary Stimulus-Response Brain System; and also based on some of the work of Michael Graziano: https://psych.princeton.edu/psychology/research/graziano/index.php

    Specifically his AEON Article “Build-A-Brain” lays out the case for his Attention Schema Theory of consciousness: http://aeon.co/magazine/psychology/is-consciousness-an-engineering-problem/

    The only point of his theory with which I disagree is that he is actually theorizing the Secondary SR System with no reference to a Primary SR Brain System. If in fact the Secondary SR Brain is rooted back into the Primary SR Brain, then the microcolumnar structures of the Secondary SR Brain act as the “rods and cones”: they form a mental picture of the other activity and sensory cortical areas of the brain, which is fed back via the thalamic system to the Primary SR Brain. This is not just a feeding back to the thalamic structures, but in fact a forward feed of the neocortex into the higher lobal functions, forming aggregates not just of perceived physical objects via the Attention Schema but also of non-physical objects of thought, or concepts. In fact the entire neocortical plate is manipulated not just by external visual stimuli but by accompanying sound, smell, touch, emotions, volitions; or the language of neural thought.
    http://www.pnas.org/content/97/10/5019.full

    In ‘other words’, Massimo’s ‘minding’ really relates to the ‘horizontal’ activities across the microcolumnar structures of the neocortex. Going back to Dan’s comment in the video that we cannot ever discount or eliminate our folk psychology: that relates to the ‘vertical’ learning which occurs between the neocortex and the Primary SR Brain.

    So there you have it: an engineer’s structural theory inspired by the minds of Profs. Graziano, Pigliucci and Kaufman… and RS Bakker, Dennett and others.

    Liked by 1 person

    Vector Shift and jarnauga111,

    The issue with the no-cloning theorem is the following. The traditional assumption is that consciousness is a global artefact of the neurochemical activity of the human brain. In other words, it is a large-scale effect, and arguably it does not depend on whether the brain of the clone has one of the water molecules moved 5 nanometers to the left, with everything else completely equal. But this difference is enough to work around the no-cloning theorem, since the wavefunctions of the original and the clone are not the same anymore, courtesy of that water molecule.

    In other words, the traditional assumption is that consciousness does not depend on the level of precision discussed in the no-cloning theorem. Of course, you may question that assumption and instead postulate that the existence of consciousness crucially depends on very fine quantum effects in the brain, Penrose-style. In that case the no-cloning theorem does apply, and it says that you cannot make a copy of the brain at that level of precision. But in that case you would also be hard-pressed to explain how human consciousness can persist in the brain for more than a fraction of a second, given the enormous magnitude of decoherence due to the brain’s environment. That’s the main reason why most physicists (myself included) are somewhat skeptical about that proposal by Penrose.

    Regarding quantum teleportation, there are a lot of misunderstandings and misinterpretations about what it is all about. The first main point is that the *matter* (i.e. the material of the brain) doesn’t get teleported. So you would need the original brain here, and an identical copy of the brain there, and what you teleport from one to the other is the *information about the state* of the original (the wavefunction). In other words, you perform a certain joint measurement on the original brain (together with one half of an entangled pair), send the outcome over a classical channel, and apply a corresponding correction operation to the clone brain; QM then guarantees that the state of the clone is the same as the state of the original was before the measurement. However, the state of the original is not “destroyed” by the measurement process; it is merely *changed* by it (in an unpredictable way). The original brain does not *lose* its consciousness, but rather has it slightly modified.

    The second main point is that not even the full *state* (i.e. the full wavefunction) of the system can get teleported that way. Only *certain aspects* of the wavefunction can be teleported, not the whole thing. The most obvious counterexample is that the full wavefunction also contains information about where the system is in space (the “orbital” part), and the target system must necessarily have that part different, since it is sitting on the other side of the room.

    The point of the story is that the no-cloning theorem should not be relevant to the copying of consciousness, while quantum teleportation cannot really teleport the whole thing perfectly anyway.
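
    For completeness, here is the standard linearity argument behind the no-cloning theorem (textbook material; the notation is mine):

```latex
% Suppose a single unitary U cloned two distinct, non-orthogonal states:
U\,|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle, \qquad
U\,|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle .
% Applying U to the superposition |\chi\rangle = a|\psi\rangle + b|\phi\rangle,
% linearity gives
U\,|\chi\rangle|0\rangle = a\,|\psi\rangle|\psi\rangle + b\,|\phi\rangle|\phi\rangle ,
% whereas genuine cloning would instead require
|\chi\rangle|\chi\rangle = a^{2}|\psi\rangle|\psi\rangle + ab\,|\psi\rangle|\phi\rangle
                         + ab\,|\phi\rangle|\psi\rangle + b^{2}|\phi\rangle|\phi\rangle .
% The two expressions disagree unless a = 0 or b = 0, so no such U exists.
```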

    Liked by 2 people

  37. Hi Massimo, Daniel, Everyone,

    I have to respectfully submit that I was somewhat put off by the tone of comments on BH. It is one thing to be impatient and exasperated with a fringe argument. It is another when you are dealing with the *majority* view of relevant scholars. And yes, one can always say that they are all in it for the grant money (the sweet, sweet opulence of humanities grant money), but I find that dissatisfying and even… unkind.

    First off, Massimo calls the Turing test a “behaviorist” test and speculates that it caught the spirit of B.F. Skinner, who was writing at about the same time. This strikes me as quite unhelpful. It is indeed a *performative* test, and DK uses the usual term, so in a sufficiently broad sense it is based on “behavior”, but not at all in the narrow Skinnerian sense of behavior as dispositions to act observably. The Turing test is based on language use. Language use was actually a major point of failure for psychological behaviorism, as Searle and Chomsky energetically pointed out. It just seems impossible to give a dispositional analysis of an ordinary conversation.

    The charge that computationalism is dualist strikes me as puzzling. I take it that if a statue is made out of wet clay and I squeeze the clay into a ball, the clay still exists and the statue does not. Am I a dualist about statues? Yes, on computationalism minds are immaterial things. But so are a lot of things. From programs to currencies, novels to contracts, the world is filled with objects that are, in an unproblematic and unremarkable sense, immaterial. As Sellars used to say, there is more in the world than cabbages and kings. On computationalism the mind is still totally causally dependent on the physical, and any mental change requires some kind of physical change. This makes it physicalist and monist in every important sense. The charge strikes me as all the odder coming from Massimo, who is usually stridently anti-reductionist. He is wont to assure us that even if psychological phenomena do not reduce to biological phenomena, or biological to chemical, this does not make the former in any sense occult or magical; they just occur at a different level of description. How then is this not so for minds?

    On the all-important issue of medium independence, I think you guys dealt somewhat lightly with a big issue. Mental states are widely thought to be medium independent because they are *intentional* (representational), and every example we have of intentional phenomena is medium independent. These words are arbitrary markings; other ones could just as easily have been chosen. The children’s science museum in Boston, which I visited as a kid, had a digital computer made out of sticks and strings. It was big enough to fill a small room and only played tic-tac-toe, but it was a digital computer. As far as I know this doesn’t change much as you change the nature of the computation. Churchlandian connectionist networks can be run on different kinds of computers. If you are not a representationalist, you are not a representationalist, but I think you might have dwelt on that argument *for* medium independence before rolling so hard against it.
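
    To make the sticks-and-strings point concrete, here is a toy sketch in Python of a rule-based tic-tac-toe mover (my own illustration, not the museum’s actual machine). The procedure is a finite set of formal rules over board states, so any substrate that realizes the rules, silicon or string, computes the same function:

```python
# Medium independence in miniature: the same tic-tac-toe rule, whether
# realized in transistors, sticks and strings, or people passing notes,
# computes the same function, because it is purely formal.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def next_move(board, me, opponent):
    """Pick a cell for `me` on a 9-cell board (cells: me / opponent / None)."""
    # Rule 1: complete my own winning line; Rule 2: block the opponent's.
    for player in (me, opponent):
        for a, b, c in WIN_LINES:
            line = [board[a], board[b], board[c]]
            if line.count(player) == 2 and line.count(None) == 1:
                return (a, b, c)[line.index(None)]
    # Rule 3: otherwise prefer centre, then corners, then edges.
    for cell in (4, 0, 2, 6, 8, 1, 3, 5, 7):
        if board[cell] is None:
            return cell
    return None  # board is full

# Example: X must block O's threat on the top row.
print(next_move(["O", "O", None, "X", None, None, None, None, None], "X", "O"))  # -> 2
```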

    Ditto DM on the conflation of computationalism and Strong AI (actually very impressed with DM’s comments in general on this one).

    Liked by 1 person

  38. Massimo,

    Consciousness: Well, my digital camera sees red; fruit flies get hungry (as does my camera when it says “battery low”); and while somebody here recently argued that fish can’t feel pain, I have a hard time calling it something else when they are wriggling from a perceived wound and desperately trying to get away from whatever wounds them. And so on. So even if that meaning is clear to everybody (and to me consciousness is being awake), it still leaves unclear what that mysterious extra is that supposedly cannot be replicated in a sufficiently complex thinking machine. If a test for it is impossible, then one might start thinking along the lines of “what can be postulated without evidence can be rejected without evidence”. The most parsimonious conclusion would be that there is no there there.

    Burden of evidence: See my second comment. If the claim is merely that it should be possible to somehow produce the functionality of consciousness in some kind of thinking machine one day, then I do not see how the burden is on the side of those making that claim because it reduces to the claim that there is nothing supernatural about human minds. It is the people who are basically saying that there is something really special about us, even if they can only make very vague gestures towards what it is (“aboutness” etc.) who are making an extraordinary claim, a claim contradicting all we have learned over the last few hundred years.

    (On the other hand, I consider claims of brain simulation = brain, of mind uploading, or of the coming singularity to be nonsensical, because of the chlorophyll analogy, because copy =/= immortality, and because the singularity depends on magical thinking, respectively.)

    Liked by 1 person

  39. David Ottlinger wrote:

    I have to respectfully submit that I was somewhat put off by the tone of comments on BH. It is one thing to be impatient and exasperated with a fringe argument. It is another when you are dealing with the *majority* view of relevant scholars. And yes, one can always say that they are all in it for the grant money (the sweet, sweet opulence of humanities grant money), but I find that dissatisfying and even… unkind.
    _____________________________________________________________________________

    To whom/what are you referring here? I’m not sure I’m getting it.

    Liked by 1 person

  40. Hi Peter J,

    I see that we each have our problems with modern philosophy: yours from a supernaturalistic position, mine from a naturalistic one. Still, I’d hope for you to see that naturalistic philosophy need not inherently be anti-religion. Even if it does happen to be useful for us to consider ourselves “conscious machines,” this doesn’t mandate that:

    “…the whole of religion would be nonsense and metaphysics would be impossible. . .”

    Observe that a higher power might still have created this specific machine. Notice that while anything in the naturalistic world could theoretically be disproven, it’s quite impossible to disprove the supernatural. Why? Because the very concept references a freedom from causality: everything that we idiots think we discover might simply occur through a deity’s plan.

    It seems to me that there are two separate ways that a given theist can approach philosophy (and yes, with a grey middle as well). The less virtuous would be to cheer on the standard conventions of these predominantly atheistic theorists (outwardly discounting that they never seem to achieve any accepted understandings of their great questions). Conversely, one could view philosophy as a troubled field from which to potentially help us determine what it is that God created. I suspect you to be more of the latter sort, and thus to seek to understand what philosophy will need in order to develop accepted understandings regarding our nature.

    And cheers to everyone else out there who loves philosophy enough to both acknowledge its various problems, as well as work to overcome them!

    Liked by 1 person

    Regarding the essential differences between biological and computer-generated intelligence: intelligence either has or serves a strategic purpose. In choice-making systems, whose purposes are thus differentiated from those of choice-reactive systems, these intelligent strategies are used for a purpose as well as being in service of the purposeful entities from which living systems have evolved. But of course, if, like Massimo, you believe that purposes have emerged only within the more intelligent of our living systems, and not within the systems through which the universe itself has evolved, then in the end you would be no different, intelligence-wise, from a smart computer. Which makes even less sense when one considers that humans purposely constructed computers, which would not appear to have the ability to either acquire or evolve any purpose for constructing the human version of a brain in return.

    Liked by 1 person

  42. Hi Aravis and Massimo,

    [To DM:] So, you stamp your foot and say, “But *my* intuition is that when I engage in this following of syntactically defined rules I *do* understand Hebrew and *do* know that I’m speaking Hebrew in this case. Prove that I don’t!”

    I don’t think that DM would give that reply, rather he’d reply that the *system* is understanding. The “man” in the Chinese Room is irrelevant, it’s a deliberate distraction to divert attention from the important thing.

    Here’s my version of the CR:

    The room contains an iPhone with Siri software capable of Chinese. The iPhone has a duff battery, so a man is there cranking the handle on an electricity generator (this emphasizes how irrelevant the man is, replace him with a good battery if you like).

    The “room” (aka the iPhone/Siri) is conducting a conversation in Chinese with a Chinese speaker. On what basis is Searle or anyone else asserting that it does not understand? It seems to me it can only be human-exceptionalist intuition: “understanding” can only be human, or only biological, or only done by conscious agents.

    Massimo says that my answer, that the room/iPhone does “understand” is an utterly weird one that we’re driven to by ideology. To me it’s the reverse, the idea that the iPhone “understands” seems straightforward and prosaic, and the rejection of that is nothing but unsupported intuition.

    Massimo said something like:

    “We know that computers do syntax, and not semantics”

    How do we know that? Isn’t that assertion exactly the topic under dispute? If you think of “meaning” as something mysterious that only humans can do, wrapped up with consciousness, then it follows that the iPhone is not doing “semantics”.

    But, to me, “meaning” is simply linkages between information and “understanding” is simply correctly manipulating such linkages. Thus the iPhone/Siri is indeed doing semantics and meaning and understanding. I don’t see any actual argument for denying that.
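
    A deliberately crude toy model of what I mean, in Python (my own illustration, certainly not Siri’s internals): a term’s “meaning” is its links to other items of information, and “understanding” is following those links correctly.

```python
# "Meaning as linkages" in miniature: a term's meaning is its links to
# other items of information; "understanding" is traversing those links
# correctly. (Toy data; the relations are illustrative inventions.)

LINKS = {
    "apple": {"is_a": "fruit", "colour": "red", "taste": "sweet"},
    "fruit": {"is_a": "food"},
    "red":   {"is_a": "colour"},
}

def lookup(term, relation):
    """Follow one linkage from a term; None if no such link exists."""
    return LINKS.get(term, {}).get(relation)

def is_a_chain(term):
    """Chase 'is_a' links: apple -> fruit -> food."""
    chain = [term]
    while (parent := lookup(chain[-1], "is_a")) is not None:
        chain.append(parent)
    return chain

print(lookup("apple", "taste"))  # -> sweet
print(is_a_chain("apple"))       # -> ['apple', 'fruit', 'food']
```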

    Declaring, by fiat, that the computer is not doing semantics, and then getting all puzzled as to how to get back to semantics, is purely a self-created problem.

    Sure, the computer is “only” shuffling bits of electric charge around, but then the human brain is “only” shuffling bits of electric charge (ions) around neural networks.

    I predict that the next generation will grow up talking to computers, and it’ll be entirely natural to them to think of a computer either “understanding” or “not understanding”, according to how good the software is. There really isn’t anything more to it than that.

    Liked by 2 people

  43. Massimo,
    Thank you for the response.

    “Ah, nice example! I’ll use it in the future, if you don’t mind.”

    Thanks again. It’s a very nice example, but obviously it couldn’t be me; that’s Chomsky. I’ve heard him discuss this a few times; it’s in “Powers and Prospects” (1996), found at http://www.chomsky.info/books/prospects01.htm

    I hope you don’t mind if I quote a bit of the relevant paragraph, it’s Chomsky, so it should be interesting:

    “There is no answer to the question whether airplanes really fly. . . . The same is true of computer programs, as Turing took pains to make clear in the 1950 paper that is regularly invoked in these discussions. Here he pointed out that the question whether machines think “may be too meaningless to deserve discussion,” being a question of decision, not fact, though he speculated that in 50 years, usage may have “altered so much that one will be able to speak of machines thinking without expecting to be contradicted” — as in the case of airplanes flying (in English, at least), but not submarines swimming. Such alteration of usage amounts to the replacement of one lexical item by another one with somewhat different properties. There is no empirical question as to whether this is the right or wrong decision.”

    Back to you: “Yes, but it’s a mental state that is made possible by a certain neurophysiological apparatus, proof of which is that we can make the pain go away by interfering with the functionality of said apparatus.”

    That’s true of all mental states, of course; they’re some manifestation of the neurophysiological apparatus. Are the pain receptors you mean the peripheral nerves, or are you implying specific areas in the brain? Either way, though, I think it’s that certain signals from certain nerves cause the brain to make us hurt. That sensation is extremely influenced by all kinds of other mental states, like anticipation, and it can arise with zero stimulation: phantom limb pain, whose true weirdness V.S. Ramachandran has done some work revealing. What I’m trying to get across is that the sensation is in many ways independent of the inputs.

    As to gradual replacement of the brain by circuitry of some kind: I think it will start with replacing damaged parts; we’re already doing that to some extent with ears and even eyes. Then, slowly, augmentation, and then replacement of more and more functions rather than mere augmentation. At some point, what’s left of the physical will be insignificant. I can’t really fathom how any non-dualist could think we won’t have this capability at some point. A question not asked: if it’s possible to faithfully simulate a human mind (a working, fully functioning mind you could interact with), is that a human being?

    The Chinese Room problem: what are working neurons but really complex shuffling of chemicals/molecules?

    Liked by 1 person

  44. Alex – You got me going with this comment.

    “If the claim is merely that it should be possible to somehow produce the functionality of consciousness in some kind of thinking machine one day, then I do not see how the burden is on the side of those making that claim because it reduces to the claim that there is nothing supernatural about human minds. It is the people who are basically saying that there is something really special about us, even if they can only make very vague gestures towards what it is (“aboutness” etc.) who are making an extraordinary claim, a claim contradicting all we have learned over the last few hundred years.”

    This seems to be a common stance. In short: there’s nothing special about consciousness; this is an opinion, and we do not need to justify it, since what we have learnt in the last two centuries supposedly changes something (which remains unspecified), even though the argument has not moved on for two millennia (as we see here) and even though almost nobody else thinks it is settled. I don’t know how one can arrive at such a view, or even begin to justify it. Would we just ignore the video discussion that got this started?

    What have we learnt over the last few hundred years that changes anything? I see nothing at all, not even one research finding that would help justify this pessimistic view. It seems fantastically unscientific to take a guess and then defend it in this dogmatic way and it is certain to lead nowhere useful.

    Surely you cannot really believe that it is not up to you to prove anything but up to other people to prove you wrong. Really?

    I wonder if you would be able to unpack the phrase ‘functionality of consciousness’. What functionality is that? Is it the supernatural idea that the immaterial can act causally on the material? Do you mean the functionality of brains?

    I wonder why it is so difficult to see that this intentionally blinkered approach to consciousness, which is very common, goes nowhere, when that is so completely obvious from a review of the field. It seems to be some sort of selective blindness caused by a fear of the unknown. It means having to face up to the problem with one hand tied behind our back, or even not facing it at all.

    I’d say the burden of proof falls on anyone who has an opinion.

    Liked by 1 person

  45. Hi Dan,

    > So, you stamp your foot and say, “But *my* intuition is that when I engage in this following of syntactically defined rules I *do* understand Hebrew and *do* know that I’m speaking Hebrew in this case. Prove that I don’t!”

    That’s not what I’m saying. Coel has made that argument, but I disagree with it. If I’m the guy in the Hebrew room, I will not understand Hebrew even though I may be the engine for a system which does. I’m saying that *my* intuition is that engaging in the right syntactic operations (implementing the right algorithm) instantiates another mind which supervenes on my own. I don’t know what that mind is thinking and it doesn’t know what I am thinking (and not even necessarily that I exist at all).

    The guy in the room is like the physical hardware of a computer. I don’t think the computer qua hardware understands anything. I think it’s the software that would be doing the understanding. As you said, the actual mind is a kind of abstract pattern which supervenes on the physical mechanism, and if I’m the guy in the room I’m just the physical mechanism.

    > Look, if what we do is what computers do when they translate, then we already have machines that understand languages. Google translator.

    I don’t think Google translator understands what it is saying because it has no model (or only a very simplistic model) of the world and how its terms relate to that. If it understands anything it is only stuff pertinent to its domain, like what a “word” or perhaps a “verb” is. Even so, its understanding of these concepts is much more limited than yours because its understanding does not relate to anything outside of its competence, whereas to a human these concepts have all kinds of significance from many different domains. (Of course I recognise that our intuitions differ on whether computers can be said to understand their domains of competence.)

    But, to return to your challenge: if you actually did give me “a set of rules for what to do with a number of symbols”, then it is unlikely that any real understanding would come about, unless this set of rules was extraordinarily sophisticated, being essentially an algorithm capable of passing the Turing Test in Hebrew (pace ELIZA, Eugene Goostman et al, I’m going to say that no such algorithm has ever been created).

    I think the CR argument sometimes misleads people by its implication that the TT might be passed by just looking up a simple table of questions and responses. That might just work for a minute or so but it is nowhere near good enough to work in the long term. To really pass a rigorous TT the system would need to be able to adapt and learn and demonstrate the kinds of mental abilities that are not so easy to encode in a predetermined lookup table. So thinking of it as a relatively simple (though perhaps very long) set of rules is misleading. Incredible complexity is needed.
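
    To see why, here is a toy sketch of a table-based responder, in Python (my own illustrative entries, obviously nowhere near a real TT contender). It answers only the inputs its authors anticipated, keeps no state, and cannot learn; a table that anticipated every possible conversation would have to grow combinatorially with the length of the exchange:

```python
# A toy lookup-table responder of the kind the CR imagery suggests.
# It handles only the exact inputs its authors anticipated, keeps no
# conversational state, and cannot learn; every extra turn of context
# multiplies the entries needed. (Entries are illustrative inventions.)

TABLE = {
    "hello": "Hi there!",
    "how are you?": "Fine, thanks. You?",
    "what is an apple?": "A fruit.",
}

def reply(utterance: str) -> str:
    """Look up the exact (normalized) input; mumble if it is off-script."""
    return TABLE.get(utterance.lower().strip(), "...")

print(reply("Hello"))                          # -> Hi there!
print(reply("What did I ask two turns ago?"))  # -> ... (no memory, no learning)
```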

    Hi David Ottlinger,

    Thanks for the kind words!

    Liked by 1 person
