Strong Artificial Intelligence

by Massimo Pigliucci

Here is a Scientia Salon video chat between Dan Kaufman and Massimo Pigliucci, this time focusing on the issues surrounding the so-called “strong” program in Artificial Intelligence. Much territory that should be familiar to regular readers is covered, though hopefully with enough twists to generate new discussion.

We introduce the basic strong AI thesis about the possibility of producing machines that think in a way similar to that of human beings; we debate the nature and usefulness (or lack thereof?) of the Turing test and ask ourselves whether our brains could be swapped for silicon equivalents, and whether we would survive the procedure. I explain why I think that “mind uploading” is a sci-fi chimera rather than a real scientific possibility, and then we dig into the (in)famous “Chinese Room” thought experiment proposed decades ago by John Searle, and still highly controversial. Dan concludes by explaining why, in his view, AI will not solve problems in philosophy of mind.

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

Daniel A. Kaufman is a professor of philosophy at Missouri State University and a graduate of the City University of New York. His interests include epistemology, metaphysics, aesthetics, and social-political philosophy. His new blog is Apophenia.


104 thoughts on “Strong Artificial Intelligence”

  1. Philosopher Eric

    Yes, we do seem to both have similar problems with philosophy as it is usually practiced. But I must make it clear that I endorse naturalism; I am not a supernaturalist or a theist. It would be an error to imagine that religion must be ‘unnatural’ in some way or that it requires God.

    Naturalism often forgets that until we understand Nature we have no idea what is natural and what is not. We cannot just lay down arbitrary rules. (Well, we can and we do, but they are meaningless). How would it be possible for a phenomenon not to be natural? The idea makes no sense to me.

    So maybe we agree more than you think.

    My view is very straightforward. It would be that if we examine metaphysics closely these problems can be solved. No appeal to the supernatural would be necessary, nor even any appeal to mysticism, just the calculations. But we would have to begin by remembering that we do not yet understand Nature, rather than making assumptions that close off avenues of exploration.


  2. Robin: You mentioned the kidney stone experience, which I also had two years ago. The nurse in the ER gave me a very Nagelian statement when she told me it’s nature’s way of teaching men what it’s like to have a baby.

    On these threads there are always thought experiments about machines which make atom-for-atom copies of people; if such a machine existed we might actually try such an experiment, while your nephew would use it to make a copy of the Playmate of the Month or the neighbor’s Porsche.

    DM: Welcome back, and congratulations on the new addition to your family. As I once mentioned, the computational aspect of mind entails the very thing which mind and consciousness work with, namely the perception of time; and computers, which are machines that run from an artificial clock, are hard-tasked to imitate time and behavior. Critics of Strong AI would liken it to a college advisor telling a student that lucrative professions besides doctor, lawyer or banker would include counterfeiter.

    What underlies my humor in the three statements above is the notion of value; men can’t sympathise with childbirth, those of us on this thread are motivated by philosophical interest, society values educated professionals, etc. Semantics itself is the underlying value of the symbols or syntax that we see, the language that we hear, the actions that we observe. It’s a bit simplistic to think that the brain is a dense mass of computation that can be simulated on a machine by algorithms, unless those algorithms include the simulation of emotion, desire, values etc. Also, to believe that the simulation of time, which many argue does not exist without consciousness, can make a non-biological substrate or machine conscious is wrong, because the simulation of time (computation) can only simulate behavior.


  3. As so often, Coel has beautifully summarised what I also think but couldn’t have put so well myself.

    PeterJ,

    Perhaps we are talking past each other. What the last few hundred years have shown over and over again is that there is no magic. There is only matter; life is complicated matter interacting (instead of an elan vital inside the matter), and we are justified in extrapolating from that. As far as we can tell the mind is likewise complicated matter interacting (instead of, say, an immaterial soul inside the matter). In other words, if somebody ever finds out how the matter does it, one should be able to reproduce the same behaviour with appropriate but potentially different matter.

    Probably a thinking machine with consciousness (whatever that is) equivalent to that of humans will have to be a dynamic network instead of microprocessors. Perhaps it is 500 years away, or perhaps we will never build it because the next dark age happens before we get there. But I argue that the only way to declare it impossible in principle is to postulate something about the human mind that is at odds with naturalism.

    And yes, I can believe that sometimes the burden of evidence is on one side. That is how reasoning works. If a system seems to work fine with parts A, B and C, and somebody claims that there is an extra part D for which there is, however, sadly no evidence, then they have to deliver. If they also fail to explain what D is and how one would notice its absence, then surely it becomes quite unreasonable to expect the proponents of ABC only to disprove the existence of D.

    Again, we may be talking past each other, because I have no idea where pessimism or the immaterial come into what I wrote. And I have never claimed to understand what people who think there cannot be conscious machines consider consciousness to be, quite the opposite. I wish they would clarify, because it is always either things that insects or simple machines can also do, or nebulous phrases along the lines of the aforementioned aboutness.


  4. If I may be permitted, Coel, let me say that you gave a wonderful Siri/Chinese Room discussion above. I’d say that you’re quite right, except that there is indeed a very important difference with conscious function, or what evolution seems to have favored. Apparently phenomenal experience was necessary in order to provide the autonomy required in diverse environments. Sure we might build some cool stuff, as if we’re gods, but evolution seems to have thrown up its hands and just said, “I’m done with all this programming mess. Here is qualia, or phenomenal experience, or good/bad. Now you conscious life forms must figure out what to do for yourselves, or face the consequences!”


  5. Consider Mao Zedong’s famous phrase ‘let a hundred flowers bloom’.
    This simple phrase represents a complex meaning deeply intertwined with Chinese history and culture.
    Here it is in five languages and seven representations.

    百花齊放 (traditional Chinese)
    百花齐放 (simplified Chinese)
    bǎi huā qí fàng (Pinyin)
    Let a hundred flowers bloom (English)
    Lasst hundert Blumen blühen (German)
    Que cent fleurs fleurissent (French)
    Permitan que cien flores florezcan (Spanish)

    Now say these words and appreciate their mellifluous character, which gives additional force to the words. This is especially apparent in Chinese. It also comes through clearly in French and German (to my ear at least).

    From the above it can be seen that one meaning is capable of many representations. I can make this more complicated by doing the same thing in many fonts and in many other languages, orally, in ink, wood carvings, pixels, tapestry, etc. In fact the number of physical representations of the same meaning has no practical limit. Each of these representations is nothing more than a physical ordering of ink, electrons, sound waves, pixels or other material.

    Given that a virtually unlimited number of physical representations stands for the same meaning, we must conclude that the meaning is not contained in the representation. The meaning is contained in our brains and the representation is merely a key that invokes the meaning stored in our brains. In other words syntax cannot equal semantics. That is because syntax is a physical, external representation and semantics is an internal, stored understanding. The link between the two, between the key and the response, is one that is assigned by our brains and this link is not contained in the physical representation. Therefore the physical representation is always inadequate to create meaning.
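
    To make the point concrete, here is a minimal sketch (purely illustrative, with invented names): many different surface forms act only as keys into an interpreter’s lookup table, and the link from key to meaning lives in the table, not in the strings themselves.

    # Purely illustrative: many physical representations, one stored "meaning".
    # The link between key and meaning is assigned by the interpreter (the dict);
    # it is not contained in the physical form of the strings themselves.
    MEANING = "LET_A_HUNDRED_FLOWERS_BLOOM"  # an arbitrary internal token

    representations = {
        "百花齊放": MEANING,                      # traditional Chinese
        "百花齐放": MEANING,                      # simplified Chinese
        "bǎi huā qí fàng": MEANING,              # Pinyin
        "Let a hundred flowers bloom": MEANING,  # English
        "Que cent fleurs fleurissent": MEANING,  # French
    }

    def interpret(surface_form):
        """Return the stored meaning this representation points to, if any."""
        return representations.get(surface_form)

    print(interpret("百花齐放"))     # LET_A_HUNDRED_FLOWERS_BLOOM
    print(interpret("Lorem ipsum"))  # None: no link has been assigned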

    Strong AI asserts that by adding more complexity and more interrelationships to the physical representations, somehow, quite by magic, meaning will arise. They never tell us what this magic is. How do complexity and interrelationships fundamentally create meaning? The theory has never been spelled out and the practice has never been demonstrated.

    We do not understand the meaning making process in our brains, but we can see the complete impossibility of making meaning equal to some arbitrary physical representation. And if that is impossible it is equally impossible to create meaning by ordering electrons in a computer. Which makes strong AI a pipe dream.


  6. RD = pronounced the same as Red or Read.

    1) “I RD The RD Book.”

    Repeat 1) but force all words through left brain lobe.

    Repeat 1) but force all words through right brain lobe


  7. You write: “The meaning is contained in our brains and the representation is merely a key that invokes the meaning stored in our brains. In other words syntax cannot equal semantics. That is because syntax is a physical, external representation and semantics is an internal, stored understanding.”
    Not so. The meaning is contained in the combination of the signaling purposes and the receiving service of those purposes. The intelligence that constructs the message understands that it is to be sent to an intelligent system that will understand its meaning. There is no meaning stored in the brain that doesn’t need to be activated and constructed by the communicative process.


  8. Oh my, this discussion is really getting good! Labnut, I believe that the reason that those strong artificial intelligence people are unable to answer your question is because an accepted answer has not yet been achieved in general. Yes, meaning most certainly IS in our brains, but what exactly might that be? If those AI people are ever able to build things for which existence is not perfectly irrelevant, then I believe that they will have succeeded! (I doubt they ever will, though that’s neither here nor there.) The important thing that we need to grasp, I think, is that personal relevance versus personal irrelevance does happen to be this thing. If they are ever able to do so, then I suppose they’ll also be able to start playing with what they’ve been attempting (not that they seem to even understand this principle yet).

    More important however, would be for us to acknowledge the nature of good/bad for that which does happen to be conscious. This would surely help us lead our lives and structure our societies, in a manner which is more productive for ourselves.


  9. Massimo and Dan,

    Enjoyed. I especially liked your Chinese room explanation; for the first time I think I get what Searle was going for.

    Dan, your comment at meaningoflife.tv in which you included this picture to show a valid extension of the word “running” hit home for me (as did mechtheist’s comment on planes flying and submarines swimming).


  10. So many interesting comments from all, and not enough time to devote to all. Let me just underline Labnut’s point with an example with a more specific meaning:

    Εἰπέ τις, Ἡράκλειτε, τεὸν μόρον ἐς δέ με δάκρυ
    ἤγαγεν ἐμνήσθην δ᾿ ὁσσάκις ἀμφότεροι
    ἠέλιον λέσχῃ κατεδύσαμεν. Ἀλλὰ σὺ μέν που,
    ξεῖν᾿ Ἁλικαρνησεῦ, τετράπαλαι σποδιή,
    αἱ δὲ τεαὶ ζώουσιν ἀηδόνες, ᾗσιν ὁ πάντων
    ἁρπακτὴς Ἀίδης οὐκ ἐπὶ χεῖρα βαλεῖ.

    Man nannte mir, Herakleitos, Deinen Tod; zu Tränen mich es,
    trieb, als ich mich erinnerte, wie oft wir beide,
    die Sonne im Gespräch in den Schlaf legten. Doch du, mich deucht,
    Gast aus Halikarnassos, seit langem ein Haufen Asche.
    Aber sie leben, Deine Nachtigallen, an die aller Dinge
    Räuber, Hades, keine Hand lege.

    Someone told me of your death, Heraclitus, and it moved me to tears, when I remembered how often the sun set on our talking. And you, my Halicarnassian friend, lie somewhere, gone long long ago to dust; but they live, your Nightingales, on which Hades who seizes all shall not lay his hand.

    There is a definite meaning to all of these – “it moved me to tears, when I remembered how often the sun set on our talking” – and it can be expressed in the same language in different words by a 19th century poet: “I wept as I remember’d how often you and I / Had tired the sun with talking and sent him down the sky.”

    You can see these lines posted very often in response to a friend or a partner’s death.

    Was the meaning in the brain of a poet of the 3rd century BC? Or in the brain of an Eton schoolmaster in the 19th century? Or in my brain, or the brain of all who read these words in whatever language?

    I am not saying I know the answer, I don’t know. But it does seem to be the same meaning.


  11. Coel,

    “Massimo says that my answer, that the room/iPhone does “understand” is an utterly weird one that we’re driven to by ideology. To me it’s the reverse, the idea that the iPhone “understands” seems straightforward and prosaic, and the rejection of that is nothing but unsupported intuition”

    I use the word ‘understand’ for Siri too, but it’s on a continuum with how a dead-fall trap understands how to catch an animal, not on a continuum with what we mean by human understanding.

    For me what humans or biological organisms do is the reference here for the word ‘understanding’, what machines do is in a completely different category.


  12. My fifth post. 🙂

    Vector schift,

    Matter (stuff like atoms, molecules, etc.) is a specific localized configuration of quantum fields. I don’t see what point you are trying to make.

    Everyone,

    Regarding the Chinese Room thought experiment, and what Alex SL said:

    If a system seems to work fine with parts A, B and C, and somebody claims that there is an extra part D for which there is, however, sadly no evidence, then they have to deliver.

    I agree with this, provided that the antecedent is fulfilled, i.e. provided that it has actually been *demonstrated* that the system works fine with parts A, B and C only. In that case I agree that D is unnecessary. But in the context of the Chinese Room and related examples, the antecedent has *not* been demonstrated. That is the whole point of what Massimo said in the video — there is a missing step when going from syntax to semantics.

    In other words, Coel put forward the argument that

    “meaning” is simply linkages between information and “understanding” is simply correctly manipulating such linkages.

    IOW, semantics is nothing but a suitable graph of relationships between appropriate syntax nodes. That is the antecedent above. But note that this is just a *pure conjecture*, not an established fact, and is actually begging the question.

    The way I understand it, the whole point of Searle’s argument is this — can you *establish*, beyond any doubt, that semantics is reducible to a graph between syntax elements, and appropriate algorithmic manipulation rules? If you can, then great, you’ve managed to reduce semantics to syntax, and you have resolved the Chinese Room argument. But I haven’t seen anyone manage to perform such a feat, and Massimo (following Searle) is completely right to point out that there is a missing step there.

    Take the following example — imagine Siri, packed up in a robot-like body with arms, legs, eyes, etc., and imagine that its software has been extended to make use of those. Siri then walks, talks and acts like a real conscious person. But can we actually claim that Siri has consciousness, or is it merely an algorithm simulating (mimicking) consciousness, doing it well enough to fool humans in the Turing test? Answering that question is what the Chinese Room argument is all about.

    Note also that this question is an overture to what Chalmers calls the “hard problem of consciousness” — one can formulate the very same question, however not for Siri but for a human — is the person you are talking to someone with *real* (phenomenal) consciousness, or are they merely someone who behaves like they are conscious, but really are not (like sleepwalkers)? In the latter case, the person is called a p-zombie, an imitation of a conscious person.

    I’d say that *if* the Chinese Room argument does get resolved (in the future) in the way Coel advocates, then Chalmers’ hard problem of consciousness would be automatically resolved as well. But otherwise, both problems remain very relevant.


  13. The Chinese Room argument is illogical because, a la Russell’s warning, it has a false premise.
    If Searle’s protagonist claims he has a set of rules in English that “enable me to correlate one set of formal symbols with another set of formal symbols”, that is, the Chinese characters, then those rules will NOT allow him to respond, in written Chinese, to questions also written in Chinese, in such a way that the posers of the questions – who do understand Chinese – are convinced that Searle can actually understand the Chinese conversation too, even though he cannot. Because if he cannot understand the characters, there are no possible rules that will allow him to arrange them in a strategically meaningful way. It’s a thought experiment that is, in the end, rather thoughtless, somewhat like the dead-or-alive cat that realistically can’t return to life after it has sequentially been really dead.


  14. Coel:

    Given that this is the same answer you’ve given 15 times in the past — and given that it is the very first reply that Searle considers *in his article* (aka “the systems reply”) — I’m afraid I’m not inclined to go around again.

    At the end of “Minds, Brains, and Programs,” Searle anticipates and replies to pretty much every criticism that has been made here. I strongly recommend that people actually read — or re-read — those replies.


  15. Roy Niles:

    What you are describing is *precisely* what the backers of Strong AI are suggesting. It is *their* position that is “illogical,” not Searle’s.

    Do you *really* think this thought experiment would have had the impact that it has had on the philosophy of mind, if it could be summarily dismissed in one paragraph on an internet discussion thread?


  16. Hi Roy Niles,

    “The Chinese Room argument is illogical because, a la Russell’s warning, it has a false premise.”

    Just to be clear, “illogical” means “invalid,” i.e. the conclusion doesn’t follow from the premises. “Unsound,” on the other hand, means an argument has at least one false premise. So at best, you are claiming that Searle’s argument is unsound, but certainly not illogical (at least not for the reasons you espouse in your comment).

    Second, you have literally just made an assertion (that the man in the room could operate over symbols to respond effectively just like a native Chinese speaker), but not supported your assertion in any way. In other words, you haven’t yet made an argument against Searle. Please have more respect for arguments which have influenced the philosophy of mind for decades by at least defending your assertions.


  17. So long as our interlocutor is not known to lack an appropriately complex internal structure (a situation which would indicate some form of trickery), it seems to me — as clearly it does also to some others in this discussion — that the Turing test or something like it is really all we’ve got and all we need to decide on whether an interlocutor is intelligent (or ‘thinks’). Both Turing and Chomsky (as mechtheist pointed out) see the matter as boiling down to a question of lexical choice in the end. And there is a lot to be said for this kind of deflationary approach.

    As I see it, the main substantive question relating to AI is whether we will eventually be able to build computers or (more likely, because they move around and interact with the physical world) human-like robots which (whom?) we will be inclined to treat and speak of as intelligent beings.

    I do not see any reason to think this will not happen in time. Don’t we already have robots with learning capacities (even if they are still far from the human level)?

    What seems to be motivating some here is the desire to expose certain extreme and not very sensible claims about matters relating to AI. Fair enough. But, in the interests of clarity and minimizing confusion, this can (and I think should) be done without making contentious counter-claims (i.e. without pushing back too far the other way).

    Why not just remain agnostic about those issues which remain contentious in mainstream scientific circles? Contending scientists have to ‘take sides’, because this is how science proceeds: with conjectures which are subsequently tested. But interested observers have no such obligation. In fact, outside of a scientific framework, claims which impinge on scientific knowledge are idle.


  18. The comment that I left earlier was apparently not approved by the moderator. But if an invalid argument will not at the same time be illogical, I’m using the wrong dictionary. As to having more respect for Searle’s historical argument, I’m not the first person who has brought up the exact objections that I have.


  19. Mark,
    As I see it, the main substantive question relating to AI is whether we will eventually be able to build computers … we will be inclined to treat and speak of as intelligent beings.

    Why not just remain agnostic…

    I agree that is the immediate and practical question, but should we remain agnostic? We also need to be concerned with the deeper issues of consciousness, free will and emotion. Should computers acquire these additional properties, beyond great intelligence, we will be required to think of them and treat them in wholly new ways. That is because these properties will, together with intelligence, confer personhood equal to our own. We will recognise their personhood, and that will impose on us certain ethical duties towards the ‘computer people’.

    This raises enormously difficult questions. May I switch off, destroy or dismantle the computer person? May I intentionally limit the computer person’s power? May I confine the computer person? What duties do I owe to the computer person? Does his greater intelligence give him greater rights? Do I need schooling if my pet computer person is always beside me to answer all questions and guide me throughout life? Will the computer people think that their greater intelligence confers on them greater rights? Will humans become mere maintenance technicians, required to sustain the life of computer people, until one day we become unnecessary? I think this is the likely outcome if strong AI is indeed possible.

    These are important questions which drive home the grave importance of whether such a thing is possible at all. Will we become ‘conscious AI’ denialists merely to preserve our privileged status? Or are we in danger of being overwhelmed by uncaring automatons whose intelligence gives them the gloss of personhood? Are we engineering ourselves out of existence?

    This future is coming at us with increasing speed, driven by profit-motivated industrialists. But profit has no ethical concerns. We need to prepare for this challenge.

    Just a few short years ago I scoffed at Google for trying to develop a phone operating system (Android). It seemed an unnecessary diversion from their core business. Now it is obvious how wrong I was. The future is always unexpected, and when you discover this it is usually too late, as Microsoft is finding out to its cost.


  20. Robin,
    I love your example; it is so beautiful and so grievously true. The grief of loss is slowly displaced by remembered love, but one is always haunted by the shadow of the loss.

    Roy Niles,
    it has a false premise… these rules will NOT allow him to respond, in written Chinese, to questions, also written in Chinese, in such a way that the posers of the questions – who do understand Chinese – are convinced…, even though he cannot.

    I’m afraid you seem not to understand the structure of the argument. This is a conditional ‘if’ premise which follows the lines of ‘if conditional premise is true then some argument’. In common English one would say (to use a more general example), ‘assuming for the sake of argument that …abc… is true, then the following would hold, …xyz…’.

    To build on this, Searle is saying, in effect – assuming a sufficiently intelligent program could be built, it would, given our understanding of how computers operate, be executed by an agent that has no understanding of the symbols it manipulates.

    And, in any case, by denying the conditional premise you are conceding John Searle’s point, making the rest of the argument unnecessary.

    It’s a thought experiment that’s in the end, rather thoughtless

    The thought experiment is a powerful device intended to draw attention to and clarify the formal arguments that Searle made. He succeeded admirably in this. If you wish to disagree with Searle you should turn your attention to his formal arguments.

    You may think that philosophers are mistaken but to accuse them of being thoughtless is a step too far, especially when you fail to understand the role of conditional premises and show no knowledge of the main arguments for/against the Chinese Room.

    There is no meaning stored in the brain that doesn’t need to be activated and constructed by the communicative process

    I can internally survey my own repository of knowledge and extend its meaning by reflection. It is a process of cognitive scaffolding that we all engage in and is frequently called authorship. That we need to write things down in this process is merely an accident of the limits of our short term memory. I habitually go for long walks and develop my arguments while walking.


  21. There seems to be a widespread belief here that AI, machine learning etc have nothing to do with semantics or linguistics or reasoning or decision making or goal directed action in the real world. A brief scan of the recent literature would find a) people are fully aware of the stumbling blocks and b) that progress continues in all these areas. I have previously alluded to those deep neural network (DNN) systems that categorize the world, as presented to it, in an unsupervised fashion, and correlate this web of concepts with a similar web of verbal descriptions.

    Several of these issues are also mentioned in the first para of this paper, which combines bags of words with bags of chemical compounds using standard machine learning methods (in passing they mention that human olfactory qualia are 30-dimensional; this is why I think qualia are a non-problem – although they are hard to talk about, they do have these obvious geometric structures, and Mary doesn’t seem so impressive if you replace colour with a smell you haven’t smelt before).

    http://www.anthology.aclweb.org/P/P15/P15-2038.pdf

    Distributional semantics represents the meanings of words as vectors in a “semantic space”, relying on the distributional hypothesis: the idea that words that occur in similar contexts tend to have similar meanings. Although these models have been successful, the fact that the meaning of a word is represented as a distribution over other words implies they suffer from the grounding problem, i.e. they do not account for the fact that human semantic knowledge is grounded in physical reality and sensorimotor experience.
    Multi-modal semantics attempts to address this issue and there has been a surge of recent work on perceptually grounded semantic models…[mainly in vision]
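
    As a toy illustration of the distributional hypothesis quoted above (invented counts, nothing to do with the paper’s actual method): each word is represented by its co-occurrence counts with a handful of context words, and words are compared by cosine similarity, so words that occur in similar contexts come out as similar in meaning.

    # Toy distributional semantics: represent each word by co-occurrence counts
    # with a few context words, then compare words by cosine similarity.
    # The counts are invented purely for illustration.
    import math

    # context words:        ("bark", "purr", "walk", "feed")
    vectors = {
        "dog":   (10, 0, 8, 6),
        "puppy": (8, 0, 7, 5),
        "cat":   (1, 9, 2, 7),
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    print(cosine(vectors["dog"], vectors["puppy"]))  # high: similar contexts
    print(cosine(vectors["dog"], vectors["cat"]))    # lower: different contexts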

    Anyway, do we have an artificial system as cunning as a fox yet? I was promised one about 15 years ago. I presume we would agree that there is some kind of aboutness in what goes on in a fox’s mind, practical reasoning and goal-directed behaviour. If robots can play soccer, I can further easily imagine a collection of DNNs that could do everything a fox mentality can do, although speed and power are still a problem.

    Finally, someone recently commented that the organisation of the brains of other apes is not different from ours, we just have a bit more quantity. There is a correlation between brain size and IQ measures.


  22. Hi Aravis,

    Searle anticipates and replies to pretty much every criticism that has been made here

    Yes, he does. But I’m not the only one who considers his replies on this unconvincing, and who regards the “systems” reply as an entirely adequate rebuttal of the CR.

    Hi labnut,

    … we must conclude that the meaning is not contained in the representation.

    Agreed, it isn’t. The meaning is about linkages between bits of information.

    Suppose you write “Please take the dog for a walk” on a piece of paper. Your newly bought house robot encounters the paper, reads it using OCR software, interprets it, and then takes hold of the dog’s leash and takes it for a walk. It is then simply perverse not to accept that the robot has “understood” the “meaning” of the writing.

    All of that, by the way, is known and current technology. There is no “magic” there. If your 12-yr-old child did exactly the same acts, you’d have no problem with the idea that he “understood”. This is *only* about human-exceptionalist intuition.
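
    A minimal sketch of the kind of linkage I have in mind (hypothetical names, not any real robot’s software): the note’s text is linked to a stored procedure, and on this view “understanding” the note just is traversing that linkage correctly.

    # Hypothetical sketch of the "linkage" view of understanding:
    # a recognised instruction is linked to a stored action, and "understanding"
    # here is nothing over and above following that linkage correctly.
    def walk_the_dog():
        print("Robot: picking up the leash and heading out the door.")

    def water_the_plants():
        print("Robot: filling the watering can.")

    # Linkages between bits of information: text on one side,
    # stored procedures (and the objects they involve) on the other.
    instruction_links = {
        "please take the dog for a walk": walk_the_dog,
        "please water the plants": water_the_plants,
    }

    def interpret_note(text_from_ocr):
        action = instruction_links.get(text_from_ocr.strip().lower())
        if action is None:
            print("Robot: no linkage found for this instruction.")
        else:
            action()

    interpret_note("Please take the dog for a walk")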

    Hi Marko,

    But can we actually claim that Siri has consciousness, …

    I think it’s unhelpful to wrap everything up with consciousness. Can we deal with the concepts of “meaning” and “understanding” first? And “meaning” and “understanding” is all that I’m claiming about Siri. If we’re ever agreed on them, then we can proceed to the harder issue of consciousness. 🙂


  23. Dear Everyone,

    Did you know that there is a young boy living in London, from a French family? He speaks French at home and English at school. As a native French speaker he certainly understands French.

    He also gives every appearance of being fluent in English. And he indeed is, speaking it accent-less, just like any of his school friends. But, you know what?, he doesn’t actually “understand” English. He’s just picked up and learnt a whole lot of useful heuristic tricks that enable him to converse as-native in English. And he can relate all the English words to French words and to things in the world around him. But he doesn’t actually “understand” English.

    Dear Massimo,

    On the progressive replacement of neurons, one by one, with functionally equivalent artificial devices, such as silicon ones: you suggest that that might kill the person. Why would that be so? If the replacements are functionally equivalent, then the same messages are being sent down the nerves to the muscles, so the man is still walking around. He could be a p-zombie, but he couldn’t be “dead” if the replacements were functionally equivalent.


  24. Roy,

    “As to having more respect for Searle’s historical argument, I’m not the first person who has brought up the exact objections that I have.”

    True. By the same token, more than 50% of Americans reject the theory of evolution. They must be onto something…

    Coel,

    “On the progressive replacement of neurons, one by one with functionally equivalent artificial devices, such as silicon. You suggest that that might kill the person. Why would that be so? If the replacements are functionally equivalent”

    The problem is that much is smuggled into the argument by vague talk of “functional equivalency.” What does that mean, exactly? Could the replacement neurons be made of cardboard? Presumably not, because they wouldn’t have the necessary physical-chemical properties, right? Neither might silicon ones, given that silicon has significantly different properties from carbon.

    As for my iPhone “understanding” things, I’m simply baffled by your insistence on this. And I do attribute it entirely to your ideological commitment. (I know, you do the same for me, but I think the burden of proof is squarely on your shoulders.)


  25. davidlduffy wrote:

    A brief scan of the recent literature ….

    ———————————————————————-

    You mean, *some* of the recent literature.

    And yeah, so what? In 1979, Paul Churchland was predicting that “one day” we would know so much about neurology that we would actually drop our ordinary language, in favor of neurological descriptions. So, instead of saying “I love you,” we would say, “Neurons 3, 6, and 12 are firing at a rate of ….”

    There’s a lot in “the literature.” Some better than others.

    ——————————————————————————-

    this paper that combines bags of words with bags of chemical compounds…

    ————————————————————————

    For some reason, this immediately reminded me of Mitt Romney’s “binders full of women.”

    If anyone needed a good dose of Wittgenstein, it’s these poor people. They are actually hunting for beliefs in the brain, like you might dig for buried treasure. Way to misunderstand just about everything to do with language and meaning.


  26. Coel,
    The meaning is about linkages between bits of information

    You have not even begun to explain how linkages can possibly create meaning. Saying it is so does not make it so. A linkage is just another token. Adding more tokens to a string of tokens does nothing to create meaning. A spider’s web consists of many linkages but no meaning except the meaning that our minds choose to assign to it.

    …and then takes hold of the dog’s leash and takes it for a walk
    You have made a huge leap of faith that following instructions equals showing understanding. Computers follow instructions, period. I am a veteran of writing large systems, with innumerable linkages, that follow exceedingly complex instructions and I am happy to tell you there was no vestige of understanding or consciousness in my programs. I would be most alarmed if there was.

    Following instructions is not understanding. This is your fundamental problem. No matter how you mix it up or recast the problem, computers follow instructions, blindly. Blindly following instructions cannot ever, under any circumstances, be construed as understanding or consciousness. This blind following of instructions holds at every level, from the CPU firmware, the OS firmware, the low level libraries, high level libraries, the classes and subroutines, the modules, all the way up to the topmost executive in the program. At no stage does the rigid control of instructions lose its hold. Sure, it takes in data from its environment and makes choices accordingly. But those choices are just as rigidly determined as are all the other choices. Sure, it learns by storing new data but the manner of its learning is still rigidly determined by the instructions.

    Let me repeat it, following instructions is not understanding and you have failed to show how it could possibly be understanding.

    It is then simply perverse not to accept …
    Tendentious rhetoric never did help the argument along.

    There is no “magic” there
    Agreed. But you are going to need a great deal of magic to turn instruction following into understanding and token linkages into meaning.

    Can we deal with the concepts of “meaning” and “understanding” first?
    I wish you would. So far you have claimed that linkages create meaning. How? Please explain. And you have claimed that following instructions is understanding. How? Please explain.

    You have many times over made these assertions but never have you given anything that looks like a reason why we should believe your assertions. We, on the other hand, have given clear arguments why that cannot happen.

    Now I invite you to justify your assertions, not repeat them.

    More fundamentally, we cannot explain, at a basic level, the workings of our minds. We do not yet know how it finds meaning or understands things. It just happens. If we do not understand that, how can we possibly create machines that do the same thing? How can a scientist insist something is possible when there is no evidence for it? Is this not a denial of what science stands for, on reasoning from the evidence?


  27. Massimo: There is always the problem of distinguishing outer function and behavior from inner function. I say the fallacy of Leibniz’s Mill analogy shows up if you envision the gears and parts of the mill made from jello or styrofoam: the true functionality and inner function of the mill parts is to transmit the forces of nature, in this case the waterfall, to grind the wheat. The same analogy carries over to the inner function of neurons.

    Human conscious experience is not one-dimensional and there is some complexity in the CNS, but anyone with a reasonable systems engineering background can have more insight into it.


  28. Massimo, DanK:

    I wanted to note that my original comment was not a complaint about your most interesting discussion. Rather, I came out as cranky as I did because I suspected that the comment thread would go pretty much as it has gone.

    Statements of the form “Theoretically, X should be possible,” or “In principle, there’s no reason X should not be possible,” are useful discursive tactics for allowing differing positions in a discussion; but they do not themselves constitute either theory or principle, let alone proper defenses or explanations or demonstration of theories or principles. “Theoretically, it should be possible for me to win the lottery,” doesn’t actually win me the lottery – it doesn’t even buy me a ticket.

    As to the whole discussion on the Chinese room – if people cannot allow the clear distinction between semantics and syntax, or refuse to understand what “understanding” means, I don’t see there being much of a conversation to carry on. (Even my computer cringed at some claims posted here.)

    I admit I’ve no time to delve much into the literature itself, but reading either the SEP or the IEP discussion of the controversy reveals how difficult – and nuanced – the critical debate concerning the case can actually get.

    http://www.iep.utm.edu/chineser/
    http://plato.stanford.edu/entries/chinese-room/

    I am largely persuaded by Searle, but not sure this is the proper forum for considering the matter in depth.

    labnut,

    “More fundamentally, we cannot explain, at a basic level, the workings of our minds. We do not yet know how it finds meaning or understands things. It just happens. If we do not understand that, how can we possibly create machines that do the same thing? How can a scientist insist something is possible when there is no evidence for it? Is this not a denial of what science stands for, on reasoning from the evidence?”

    Exactly.


  29. Hi Massimo,

    … vague talk of “functional equivalency.” What does that mean, exactly?

    Communication between neurons and along nerves is by ionic and electric signals, so a “functionally equivalent” neuron or “unit” is one that passes exactly the same ions and electric potentials onto the next “unit”.

    I agree that it might be technically hard (or impossible) to make a functionally equivalent device out of silicon, but that doesn’t negate the fact that if we could build artificial devices producing the same outputs of ions and electric signals, then the person would not be “dead”, since the same signals would be sent to the muscles, so they’d behave as “alive”.

    As for my iPhone “understanding” things, I’m simply baffled by your insistence on this.

    Which just shows that this whole Chinese-Room argument comes down *entirely* to intuition. I’m just as baffled by those who don’t accept my argument, and consider that the burden of proof is on them to say explicitly and specifically what is missing.

    Hi labnut,

    You have not even begun to explain how linkages can possibly create meaning.

    No, I’m not saying that linkages “create” meaning, I’m saying that they **are** meaning. That is all there is, the linkages! You keep wanting something more, something that “only humans can do” or “only consciousness can do”. My whole point is that there *isn’t* anything more!

    Following instructions is not understanding. This is your fundamental problem.

    All you (and Massimo and Aravis and others) are doing is declaring my stance wrong based solely on your intuition. Knowing about the linkages really is all there is to “understanding”. So, if I ask Siri what the time is, and Siri correctly interprets the speech, knows which memory register to consult, and how to report the contents, then Siri *has* understood. That’s all there is to it. You wanting something more is simply your intuition misleading you.

    Your stance is analogous to vitalism, asserting that specific patterns of molecules could not alone make a bacterium “alive”, and demanding that there must be something more to it than that.

    these properties [consciousness, free will and emotion] will, together with intelligence, confer personhood, equal to our own.

    A mouse has all of those things, in the same way that we do (though not to the same degree), and that doesn’t confer equal personhood on them.


  30. Coel,

    But that’s what I’ve always meant when I said that consciousness is a biological phenomenon, and therefore substrate dependent. I doubt it will be possible to use silicon because it has significantly different chemical properties from carbon. But that’s an open empirical question. I’m pretty damn sure it can’t be done with cardboard, which means that the thing is, in fact, substrate dependent!

    As for iPhone intuitions, let me get this straight: are you claiming that my iPhone has the sort of internal mental states, a feeling of understanding, like I do when I read your words? Because if not, then you are simply playing word games. And if yes, I’d love to see some evidence of what is a truly extraordinary claim.


  31. EJ,
    Even my computer cringed at some claims posted here

    🙂 yup, my computer reacted the same way, and I thought it was foolproof. 🙂

    The discussion has reached a surreal stage and I am beginning to think I have slipped through a time warp into the Best Exotic Marigold Hotel 🙂


  32. I certainly wouldn’t want to be dismissive about the Chinese Room. But I do still feel confused (like many others) as to what it is meant to demonstrate.

    Is it that “understanding” – in its richest, deepest, most human, and least-metaphorical sense (i.e. the one furthest-removed from that of “Siri understood my question”) – is far beyond the capabilities of any present day computer? And, in principle, forever beyond the capabilities of any computer the same in kind (i.e. programmed rather than capable of independent creative thought) as present-day ones, no matter how much more powerful in degree? If so, my response is “You don’t say”.

    Why? Why is it just-obvious to me that neither a lone symbol-manipulating non-Chinese-speaking individual nor “the whole room” (nor for that matter the whole population of India operating the same program) can be said – non-metaphorically – to “understand” Chinese?

    Because not only does this accord with my intuitions, but also, my intuitions get powerful backing from a nexus of more-or-less Wittgensteinian beliefs about the different things we mean when we talk about such things as “minds”, “understanding”, “thought” and “personhood”.

    If, though, unlike mine, your thinking emerges from a nexus of strong AI beliefs, it’s child’s play to dismiss my sort of intuitions (even if – as is almost certain – they’re your intuitions too) as “folklore”. What would count – what could count – for someone who thinks like this, as a definitive proof that, e.g., “the whole system” can’t meaningfully be said to “understand” Chinese? What has Searle’s argument – even if it’s only meant to show that you can’t get from syntax to semantics – done, to persuade the not-already-converted? Couldn’t they quite reasonably – by their own lights – say that their definition of “understanding” doesn’t have to include things like intentionality?

    But is Searle’s argument perhaps meant to show that it’s impossible in principle for any kind of artificial intelligence whatsoever to be truly, non-metaphorically, intelligent? If so, I can’t see how it does.

    I think Margaret Boden, who is sceptical that we will ever succeed in creating truly intelligent machines, and Ray Kurzweil, who says we may do so within the next 20 years or so (wasn’t he saying that 20 or so years ago, though?), would both agree that – just-obviously, almost-by-definition – a truly intelligent machine would have to be different in kind from a mere symbol-manipulator governed by algorithms (even if the latter were 10,000 times more powerful than all the presently-existing super-computers put together).

    And I think they would also both agree (somebody correct me if I’m wrong!) that, even if Searle’s thought-experiment succeeds in demonstrating this truth, it says little or nothing about the future potential for computers or robots that were not programmed and could “think for themselves”. (Well, perhaps Boden would go for “little” or “not nearly enough”, and Kurzweil for “nothing”?)


  33. nick,

    forgive me, but I’m having a hard time understanding what it is that people don’t understand about the Chinese room. As Dan/Aravis has explained a number of times, it is about the gap between syntax and semantics, a gap that nobody has any good ideas, at the moment, how to bridge – pace Coel’s bold pronouncements about the iPhone’s consciousness.

    Of course nobody is saying that the gap cannot be bridged in principle (well, nobody maybe except Chalmers and “Mysterians” like McGinn), but it’s there, and no amount of handwaving is going to make it go away.


  34. nick m:

    Searle doesn’t mean anything fancy, beyond normal linguistic understanding.

    I am a Wittgensteinian myself and I can assure you that he would never ascribe linguistic understanding to a machine, defined as Strong AI defines it (and Strong AI is what this dialogue was about).

    To speak and understand a language is, for Wittgenstein, to participate in a form of life that is irreducibly social in nature. So, while I would agree with you that what’s missing in Strong AI isn’t some ineffable thing *inside* the mind, it certainly is missing quite a lot.


  35. Coel,

    I said “For me what humans or biological organisms do is the reference here for the word ‘understanding’, what machines do is in a completely different category”

    But I think I should have said that computer ‘understanding’ is a certain way of looking at a facet of what we mean by human ‘understanding’.

    If you want to define ‘understanding’ that way, ok, but it doesn’t help explain how or what biological organisms are doing.

    You said: “Knowing about the linkages really is all there is to ‘understanding’.”

    Yes, under your definition.

    I’m not sure what you are arguing for.


  36. There must simply be very different intuitions at work here. I find Coel’s example of the robot that understands the instruction to take the dog for a walk immediately convincing. Why should a human understanding instructions be any different? Then we have perceptions, complex internal information states (e.g. feelings), and even the knowledge of abstract concepts. A poet may be able to convey a feeling, and a scholar may be able to convey an abstract concept, using written symbols. If I manage to understand them, why should anything more be going on than me linking the written symbols to the internal state I have often had myself, or me linking the written symbols to the abstract concept that was explained to me and thus ‘saved’ in my memory when I was in school?

    What is missing? How do we show that it is missing or present? People here freely admit that we cannot. Is it then really such an exotic or revolutionary idea that the claim that [placeholder] is present when [placeholder] cannot be shown to be either present or absent should be rejected on grounds of parsimony and burden of evidence? If we swap the placeholder for fairies, luminiferous ether or the elan vital I guess most people would agree. Why is it so different if the thing whose existence is completely indistinguishable from its non-existence is god or, as in this case, some ill-defined item like “true” consciousness or “true” understanding? Special pleading?

    Coel,

    The problem I see with gradual replacement of neurons specifically is very similar to Massimo’s: In biology (and presumably engineering as well), there are always trade-offs. If you are an animal that is optimised for invulnerability (turtle), you will not be optimised for speed. If you are a plant that is optimised to survive in extremely arid environments (cactus), you will not be optimised for fast growth. If you have numerous highly specialised body parts (animal), then you generally can’t just grow more of them in a modular fashion (like plants do)*. It is quite possible that the way our brain is built and works requires dynamic, biochemical, squishy, short-lived elements, so that at best one could replace neurons for artificial elements that come with no upside compared to the neurons we already have. It seems fairly likely that a metal replacement would work as well for our mind as a metal replacement would work for our gonads: we couldn’t get a sperm cell out of the metal, and we probably couldn’t get information processing in a specifically human way out of it either.

    * In case somebody thinks Axolotl, try chopping its head off.


  37. Hi labnut, thanks for the response. You wrote:

    “We need also be concerned with the deeper issue of consciousness, free will and emotion. Should a computer acquire these additional properties, beyond great intelligence, we will be required to think of them and treat them in wholly new ways.”

    You’re jumping ahead a bit here, aren’t you? These are not urgent moral issues. And I was focussing on intelligence, not emotion (or consciousness).

    When I talked about being agnostic I was thinking not so much about whether computers will get more intelligent — they *are* getting more intelligent — but rather about more difficult (scientific) questions like what the ultimate limitations might be on building artificial minds, on substrates and that kind of thing. The fact is, we don’t know the answers to these questions, but we can extrapolate to some extent on what has been achieved over the last seventy years or so. More flexible and intelligent performance is virtually assured (even in silicon-based machines), and this will affect the way we talk about (and think about) our creations.

    Our ideas will no doubt evolve over time as new developments occur. But, strangely, we’ve been preparing for the sorts of challenges you mention at least since the Romantic period — imagining artificial humans and so on.

    In response to Coel, you wrote (and Massimo wholeheartedly agreed with you):

    “More fundamentally, we cannot explain, at a basic level, the workings of our minds. We do not yet know how it finds meaning or understands things. It just happens. If we do not understand that, how can we possibly create machines that do the same thing?”

    Firstly, I for one am not talking about machines that do exactly what we do. And I see mysterianism lurking here. “It just happens.” Really? Look, of course we don’t understand all the details of how our brains work. But we understand a whole bunch of important things about, say, the visual and auditory and memory systems. We also understand the importance of culture (and social context) for human development.

    I think that you and some others are taking a line not unlike McGinn’s original line on consciousness; saying, in effect, it’s a mystery (and probably always will be). And maybe it is at some deep level.

    But what I was saying was that we don’t need to deal with this deep level in order to talk about intelligent robots. Instead, we can just focus on various forms and levels of intelligence and language and communication and so on. And I don’t see meaning (as in the meaning of a word or sentence) or intelligence as deeply mysterious concepts.


  38. Here’s my attempt to bridge the gap between syntax and semantics. Warning: here comes the systems response.

    Syntax requires a set of rules about how to process information. Thus, it can be instantiated by an appropriate information processing system. This instantiation necessarily contains information about how to perform the rules. The information that is “about” the rules is semantics. Specifically, if you change the (semantic) information, you change the rules.

    So the semantics of the syntax of the Chinese Room are essentially the rules that are written in English, plus the knowledge of where the input comes in, where the output goes out, etc.

    Communicating in Chinese requires a set of rules about how to process information. Thus, it can be instantiated by an appropriate information processing system. The instantiation necessarily contains information about how to perform the rules. If you change the (semantic) information, you change the rules.

    What many are missing is that the Chinese Room is a system, and the person inside is a subsystem of that system. The system contains the semantics about Chinese, and a subset of those semantics are about the syntax. The subsystem which is the person is not privy to the (semantic) information which is in the data.

    Whether something “understands” Chinese is determined by whether the output is appropriate to the input, so here is a possible conversation (using I for interrogator and CR for …):

    I: hi, did you have breakfast today?
    CR: Yes, I had bacon and eggs (not a very Chinese breakfast)
    I: I thought you were a vegetarian.
    CR: Oh no, I love bacon.

    Now, if we change the semantics in the data, but not the syntax, you might get

    I: hi, did you have breakfast today?
    CR: Yes, I had my breakfast right before going to bed.
    I: I thought you were a vegetarian.
    CR: Oh no, I love orange juice too.

    If we change a little more without changing syntax we might get

    I: hi, did you have breakfast today?
    CR: I disagree. My father was very kind.
    I: I didn’t ask about your father.
    CR: yes, it’s around the corner next to the gas station.

    I’m pretty sure you would agree that the first conversation shows understanding, the second shows some understanding with some misunderstanding, and the third shows no understanding. Changing the semantic information in the syntax would of course produce gibberish.
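
    To put the same point as a toy sketch (hypothetical, not a real program): the “syntax” is a fixed lookup procedure that never changes; only the data it consults changes, and the answers degrade exactly as in the conversations above.

    # Toy version of the point above: the "syntax" (a fixed lookup procedure)
    # stays untouched; swapping only the data it consults turns the answers
    # from coherent to incoherent without changing a single rule.
    def reply(question, data):
        return data.get(question, "...")

    coherent_data = {
        "did you have breakfast today?": "Yes, I had bacon and eggs.",
        "I thought you were a vegetarian.": "Oh no, I love bacon.",
    }

    scrambled_data = {
        "did you have breakfast today?": "I disagree. My father was very kind.",
        "I thought you were a vegetarian.": "Yes, it's around the corner next to the gas station.",
    }

    for data in (coherent_data, scrambled_data):
        print(reply("did you have breakfast today?", data))
        print(reply("I thought you were a vegetarian.", data))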

    Finally, when I say that Siri understands English, I mean that Siri partially understands English, just the way I partially understand English. I just understand a lot more, and the information processing system I use to perform that understanding may use different algorithms, but sometimes they achieve the same end.

    James


  39. Hi James: The difference between you and Siri is that when you understand, you actually understand, rather than implement an algorithm. When you’re insulted or elated you actually feel insulted or elated. When you want to reply with an inappropriate word you restrain yourself, while Siri has a library of non-usable words.

    Just like you can trick a fish with a lure, you can trick a human’s agency detection networks with a computational simulation.

  40. (Undoubtedly off-topic, but perhaps not irrelevant – our entertainments oft reveal more about ourselves than our sciences….)

    The Pobble Who Has No Toes – Poem by Edward Lear

    The Pobble who has no toes
    Had once as many as we;
    When they said “Some day you may lose them all;”
    He replied “Fish, fiddle-de-dee!”
    And his Aunt Jobiska made him drink
    Lavender water tinged with pink,
    For she said “The World in general knows
    There’s nothing so good for a Pobble’s toes!”

    The Pobble who has no toes
    Swam across the Bristol Channel;
    But before he set out he wrapped his nose
    In a piece of scarlet flannel.
    For his Aunt Jobiska said “No harm
    Can come to his toes if his nose is warm;
    And it’s perfectly known that a Pobble’s toes
    Are safe, — provided he minds his nose!”

    The Pobble swam fast and well,
    And when boats or ships came near him,
    He tinkledy-blinkledy-winkled a bell,
    So that all the world could hear him.
    And all the Sailors and Admirals cried,
    When they saw him nearing the further side –
    “He has gone to fish for his Aunt Jobiska’s
    Runcible Cat with crimson whiskers!”

    But before he touched the shore,
    The shore of the Bristol Channel,
    A sea-green porpoise carried away
    His wrapper of scarlet flannel.
    And when he came to observe his feet,
    Formerly garnished with toes so neat,
    His face at once became forlorn,
    On perceiving that all his toes were gone!

    And nobody ever knew,
    From that dark day to the present,
    Whoso had taken the Pobble’s toes,
    In a manner so far from pleasant.
    Whether the shrimps, or crawfish grey,
    Or crafty Mermaids stole them away –
    Nobody knew: and nobody knows
    How the Pobble was robbed of his twice five toes!

    The Pobble who has no toes
    Was placed in a friendly Bark,
    And they rowed him back, and carried him up
    To his Aunt Jobiska’s Park.
    And she made him a feast at his earnest wish
    Of eggs and buttercups fried with fish, –
    And she said “It’s a fact the whole world knows,
    That Pobbles are happier without their toes!”

  41. Hi Coel,

    you say it is obvious that Siri understands the request when it tells the correct time after being asked for it. Is it as obvious to you that the lamp understands the request for “let there be light” as soon as the switch is flipped?

    Or would it only be understanding if you had to clap once for “on” and twice for “off”?

    Are you absolutely sure that there is no self-serving equivocation game going on here?

  42. In response to Coel et al discussions, particularly the ‘system understands’ argument.

    First, though, Coel: my last post mentioned neurons being complex shuffling in response to the video’s assertion about ‘mere complex shuffling’, and right above is your comment saying the same thing. Sorry, I missed seeing that.

    I just read Searle’s paper, and his response to the systems argument seems absurd to me. He posits the case that he could ‘internalize’ the rules and carry out conversations passing back and forth only squiggles and he still wouldn’t understand Chinese. I can only say: WTF? A set of rules allowing for arbitrary conversations? Allowing him to answer ‘how do you feel’, ‘what color is my dress’, ‘how much did you enjoy third grade’, ‘what did you think of Dan and Massimo’s treatment of the Chinese Room thought experiment’, or ‘between Yale, Harvard and MIT, which school would I do better at if I opened a coffee shop called Gedanken Donuts’. That is one hell of a set of rules. If he doesn’t understand Chinese, then he is doing something far, far more difficult.

    One word I’m amazed hasn’t been popping up frequently (it isn’t in any comment, and I don’t think it was in the video [transcripts anywhere?]): ‘emergent’. While it gets used rather like PET and MRI scans in the neurothinktank media world, it is a real explanation for a whole host of phenomena.

    victorpanzica: Thanks for the link to Graziano; his paper describes, though in a more thorough and coherent way, many ideas that have been percolating around my head for years. In particular, how our conscious perceptions are a construct, one that is highly filtered, highly tailored, and excludes as much unnecessary info as possible. Consciousness’s function appears to be modeling of the self and its interactions with the environment: modeling, monitoring, and providing for some control outputs. In a brain full of control-system modules, surely this was the last to evolve. Maybe one purpose of the construct is to allow for the ‘slow’ thinking of Kahneman, one reason why so much extraneous info needs filtering out. As brain evolution created more and more sophisticated versions, at some point consciousness emerged.

  43. Massimo and Dan/Aravis: thank you for your responses. (And I very much enjoyed your video dialogue.)

    I really don’t think I have a problem understanding that Searle is saying you can’t get (or just help yourself to getting) from syntax to semantics. I may be missing something obvious, but surely not that! Moreover, I completely agree with Searle (well, duh!) about this – about, that is, the enormous difference between merely following instructions, no matter how elaborate, and what we humans mean (at least when we aren’t bewitching ourselves by doing philosophy and/or advocating for strong AI) by “understanding” complex things like Chinese.

    Maybe I’ve just been overthinking it? But:

    (1) I’ve always had the impression that Searle thought his argument was effective against the idea of any kind of artificial intelligence, and not just against the idea that a following-instructions program like that of the Chinese Room could ever be intelligent. Perhaps this is because he thinks it implausible that computers could ever be other than syntax-governed? I’d be happy for you to correct me, if in fact he’s agnostic about the future possibilities for artificial intelligence of non-syntax-governed kinds. Even if he isn’t, though – you might still say: so what? He has still provided a lucid and much-needed argument against “strong AI”; that is, against the kind of thinking that credits i-phones – non-metaphorically – with abilities like “deciding”, “thinking”, “being conscious” and so on. Hasn’t he?

    (2) Except that the argument doesn’t seem to be all that lucid, to judge by the sheer volume of controversy it has generated. Or rather, perhaps, it’s lucid in a too “eristic” way, and not in a sufficiently “dialectical” way – to use R. G. Collingwood’s distinction between going for the win, and trying to persuade your opponent by doing your best to see things from his or her point of view. How best, in this case, do you engage an opponent (to repeat: not me! the opposite of me!) who thinks (e.g.) that the syntax/semantics distinction is an anthropocentric irrelevance that should have no bearing on the question as to whether or not an i-phone can think? Maybe not merely by insisting that syntax =/= semantics, because you think this is a “knock-down argument” (a phrase that Searle is quite fond of) – but instead by engaging with the more difficult but rewarding task of trying to uncover the presuppositions that make it so much as possible for someone to say – non-metaphorically* – that an i-phone “thinks” or “believes” or “decides” or “desires” X?

    (3) Again, I would hate to sound as if I thought I could dismiss with my amateur internet comments a thought-experiment that has had as much impact as the Chinese Room. But it’s surely possible for thought-experiments to be both wonderfully stimulating and deeply flawed. (Several commentators, as I’m sure you will know, have said that Kripke’s book about Wittgenstein and rule-following turns on a simple misreading or even overlooking of a couple of key sentences in Philosophical Investigations – but this doesn’t have to mean that the extended thought-experiment that Kripke’s book could be described as isn’t a genuine piece of creative thinking in its own right.)

    *If I had the space, I could go on inchoately but at length about how uneasy “non-metaphorically” makes me feel. It’s as if, in using the expression at all, I’m tacitly accepting – instead of trying to “diagnose” – something deeply scientistic and materialistic in Searle’s (and my) own thinking.

  44. Thanks to you both for another lucid and engaging discussion.

    I agreed with almost everything you said. I would like to ask that we don’t stop at the Chinese Room when discussing Searle on AI.

    Aaron Schwartz wrote in his blog post ‘Hating John Searle’:

    “ Reading Searle’s published books, it’s striking how little space the Chinese Room Argument takes up. Indeed, his book on the subject of consciousness — The Rediscovery of the Mind — gives it little more than a paragraph and notes that his more recent argument against functionalism is far more powerful.”

    As Searle puts it:

    “This is a different argument from the Chinese room argument, and I should have seen it ten years ago, but I did not. The Chinese room argument showed that semantics is not intrinsic to syntax. I am now making the separate and different point that syntax is not intrinsic to physics. “ (The Rediscovery of the Mind p. 210)

    Searle’s point is that there is nothing about the nature of computers that makes their different physical states intrinsically 1s and 0s. This is just an ascribed meaning; ascribed, that is, by conscious beings. In the same way, the word ‘cat’ in English doesn’t intrinsically mean one of those furry things. It’s just a sound, or marks on paper, or pixels arranged on a screen. It only means something because we agree that it does. That’s what allows language to “carry” intentional meaning from one mind to another.

    There are some things that are intrinsic or “observer independent”: real, existing regardless of what we believe about them, like mountains and molecules. Other things are real, but only because we have a social agreement that they exist; for example, money, political office, marriage, property and human languages. These are observer relative. And, Searle says, computing is observer relative. I found that startling when I first heard it, but having looked closely at the argument, I’m convinced.

    The implication is that concepts like ‘symbol’ and ‘syntax’, which are central to computational explanations of the mind, are only meaningful if there are already conscious or intentional minds to ascribe meaning to them. Nothing is intrinsically a symbol by its physical nature; something only becomes a symbol by being used as such between conscious minds that come to agree on the symbols’ meaning within a social context. In that, they are like words in natural languages.
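
    One way to make that concrete (a toy illustration of my own, not Searle’s): the very same physical state, here a short byte string, can be read as English text, as a number, or as bare 1s and 0s, and nothing in the state itself settles which reading is the "right" one.

        import struct

        state = b"\x63\x61\x74\x00"  # one and the same physical pattern

        as_text = state.rstrip(b"\x00").decode("ascii")             # ascribed as characters: 'cat'
        as_number = struct.unpack("<I", state)[0]                   # ascribed as a little-endian integer
        as_bits = " ".join(format(byte, "08b") for byte in state)   # ascribed as raw 1s and 0s

        print(as_text, as_number, as_bits)

    The ascriptions are ours; the bytes themselves do not prefer any of them.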

    Therefore, computing depends on the prior existence of a group of conscious minds who agree to ascribe the meanings “symbol” or “syntax” to some physical process composed of silicon and metal, or gears and springs, or whatever physical implementation you like. Clearly, then, computing alone cannot constitute such an ascribing consciousness, a source of intentional thought. That would be circular. Computationalism tries to make consciousness haul itself into existence by its own shoelaces.

    I’d really encourage anyone interested in philosophy of mind to read all of ‘Rediscovery of the Mind’, and not only for its relevance to AI.

  45. “Can human beings know anything, and if so, what and how? This question is really the most essentially philosophical of all.”

    (Bertrand Russell in a letter to Lady Ottoline Morrell dated 13 December 1911. )

    I’m not a fan of Russell but nobody is always wrong and I’d agree with him here. Yet apparently we are able to produce a machine that knows, thus proving that the natural sciences can answer philosophical questions.

    Such a proof would represent real and significant progress. I foresee a thousand years of grant applications for pursuing this exciting scientific adventure into la-la land, secure in the knowledge that there would be no possibility of killing the goose.

    As a taxpayer I am not impressed. Despair seems the only option.

    After three years of investigating the scientific literature on consciousness I gave up on it completely. It seems that nobody is actually looking for answers or any real understanding but just trying to justify their assumptions. This is a genuine research finding and not a hasty opinion. I remain up to speed with the field, however, since no reading is required for this.

    Cynical? I’ll say. Unfair, possibly; there may have been some progress somewhere. But if we were to ask whether this discussion could have happened in the 18th century, we might have to conclude that, apart from some of the terminology, it could have proceeded in just the same way. Perhaps we’ve even gone backwards. It’s hard to tell.

    It shouldn’t be this easy for people to mock consciousness studies.

    Fifth post, I think, so I am off the hook…

  46. Hi Massimo,

    let me get this straight: are you claiming that my iPhone has the sort of internal mental states, a feeling of understanding, like I do when I read your words?

    No, I am not saying that. I don’t think an iPhone is conscious or has first-person subjective experiences, or has a feeling of understanding.

    As I’ve said previously, it is unhelpful and unwarranted to wrap up all these things with consciousness into an insoluble bundle. The way to make progress is to take the issue apart and start with the bits one can deal with.

    I am quite deliberately limiting my remarks to the concepts “semantics”, “meaning” and “understanding” and making no claims about consciousness.

    Because if not, then you are simply playing word games.

    There I disagree. As I see it I’m simply dealing with “semantics” and “understanding”. Quoting from the OED:

    Semantics: “… concerned with meaning”.
    mean: 1. “to convey or refer to (a particular thing)”. “I meant you, not Jones”
    1.1 “(Of a word) have (something) as its signification in the same language or its equivalent in another language”. “its name means `painted rock’ in Cherokee”
    understand: “Perceive the intended meaning of (words, a language, or a speaker)”. “she understood what he was saying”.

    Those are the concepts that I’m saying an iPhone/Siri can do. Thus an iPhone can do semantics. None of Searle, you, Aravis or labnut have explained, explicitly and specifically, what is missing, and why what an iPhone/Siri does doesn’t amount to semantics.

    Your reply might be that it is only “real” meaning if it’s done by a conscious being, but if that’s your reply then you’re making the Chinese Room to be about consciousness, not about understanding and semantics. I agree that consciousness is a harder problem. For now I’m trying to deal only with “understanding” and “meaning”.

    Hi miramaxime,

    Is it as obvious to you that the lamp understands the request for “let there be light” as soon as the switch is flipped?

    The lamp does not understand or even respond to the English-language command “let there be light”; it responds to the switch. Now, if you equipped the light with voice-recognition software and a microprocessor to interpret it …
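
    Purely as a toy illustration of the functional sense of "understand" that I’m defending (the keywords and the actions below are invented; nothing here is how Siri or any real voice assistant works), the difference between the bare switch and the software is roughly this:

        from datetime import datetime

        lamp_on = False

        def handle(request):
            # Map an English request onto the thing it refers to.
            global lamp_on
            text = request.lower()
            if "time" in text:
                return datetime.now().strftime("It is %H:%M.")
            if "light" in text or "lamp" in text:
                lamp_on = True
                return "Lamp switched on."
            return "Sorry, I don't know what you mean."

        print(handle("What time is it?"))
        print(handle("Let there be light."))

    The switch responds only to a closing circuit; the program, crude as it is, responds to which request was made. That minimal, functional sense of "perceiving the intended meaning" is all I am claiming for Siri.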

    Hi mechtheist,

    Searle’s response to the systems argument seems absurd to me.

    Agreed. Hence my attempt at ridiculing it.

    Dear All,

    This being my fifth, a final remark: through history people have held to vitalism and dualism. Many still do today. They simply intuit that “there must be something more to it” than just physical molecules. The whole CR and the “can’t do semantics” argument is just more of that. There is no actual argument, merely intuition. Proponents can’t point out specifically what is missing, nor do they have any better account of “meaning” or how our brains do semantics. The trend of history suggests that what they’re looking for is as illusory as elan vital or dualistic souls.

  47. mechtheist wrote:

    I just read Searle’s paper, and his response to the systems argument seems absurd to me. He posits the case that he could ‘internalize’ the rules and carry out conversations passing back and forth only squiggles and he still wouldn’t understand Chinese. I can only say: WTF? A set of rules allowing for arbitrary conversations? Allowing him to answer ‘how do you feel’, ‘what color is my dress’, ‘how much did you enjoy third grade’, ‘what did you think of Dan and Massimo’s treatment of the Chinese Room thought experiment’, or ‘between Yale, Harvard and MIT, which school would I do better at if I opened a coffee shop called Gedanken Donuts’. That is one hell of a set of rules. If he doesn’t understand Chinese, then he is doing something far, far more difficult.

    ————————————————

    You seem to be doing what several others have done, namely identify precisely what is crazy about Strong AI and then blame it on Searle.

    It is the Strong AI proponent who claims that in understanding Chinese, we are doing what a computer does.

    Computers *do* follow instructions, whose substance consists entirely of manipulating symbols based on nothing but their syntactic properties (shapes).

    It is *this* that Searle is demonstrating cannot be correct, by way of his thought experiment.
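
    For concreteness, this is what "manipulating symbols based on nothing but their syntactic properties (shapes)" looks like in miniature. The little rewriting system below is invented purely for illustration and attaches no meaning whatever to the marks it shuffles:

        # Rewrite marks by their shape alone; no interpretation is involved anywhere.
        RULES = [("#*", "*##"), ("**", "#")]

        def step(tape):
            for pattern, replacement in RULES:
                if pattern in tape:
                    return tape.replace(pattern, replacement, 1)
            return tape

        tape = "*#**"
        for _ in range(5):
            print(tape)
            tape = step(tape)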

  48. Aravis:

    Interesting point about philosophy of mind as really philosophy of language; this is probably why Searle started out from philosophy of language. I was worried that you were going down eliminative materialist lines when talking about “folk explanations”. However, you rightly made it very clear why you weren’t.

    DM:
    Congratulations. I really enjoyed the discussions we had on Searle and AI on your blog last year, and I hope you will be back here as soon as you can!

    Nickm:

    No, Searle didn’t think his arguments applied to every possible artificial consciousness, only to those based on algorithmic data processing. As he said, machines can think and be conscious, because we are an example.

    James Smith:

    Information is one of those weasel words that invite multiple equivocations. I think that’s well shown in Raymond Tallis’s “Why the Mind Is Not a Computer: A Pocket Lexicon of Neuromythology”. There is a view of it that buttresses computationalism. One of the important aspects of Searle’s work – not only the Chinese room – is to tease out those ambiguities.

    Massimo:

    “Does intelligence require consciousness? This is just a matter of definition, not of fact.”
    “I disagree, it is very much a matter of fact. What we know is that there is a rough correlation, in the biological world, between the two. It is hard to imagine a creature who is not intelligent and yet has consciousness (e.g., plants aren’t and don’t), or vice versa. But they are also clearly not the same thing. The interesting question (other than how exactly the brain generates conscious experience) is whether one could decouple the two artificially, building a highly intelligent machine that is not conscious.”

    Precisely; I brought this up in the discussion of Mark Bishop’s essay. He seems to think that his and Searle’s arguments against computational consciousness imply that highly intelligent machines could never become dangerous, implicitly because consciousness is necessary for human or superhuman levels of intelligence. I don’t see why that is necessarily true.

    Searle reviewed Nick Bostrom’s “Superintelligence” book, which I think gives technically, politically and philosophically well-founded reasons to be afraid that we will develop dangerous AIs without first giving enough thought to how we will control them. Searle, like Bishop, seems to think his arguments preclude that. I don’t see why.

    I accept that consciousness almost certainly had an important role and an evolutionary ‘purpose’ in human intelligence. Only an epiphenomenalist could deny that. Individually, perhaps focusing, integrating and generating better interpretations; most importantly, underpinning our social existence. It’s quite possible that, for a biologically evolved being, consciousness is required for highly intelligent minds. But once we are taking over the role of evolution, surely a non-conscious AI can develop as a “piggyback” on our consciousness, without having to internalise it? The non-biological, “at least semi-intelligently designed” route changes the case, surely?

    To put it another way: philosophical zombies may not make sense, as you’ve well argued, but technological ones seem quite conceivable.

    What do you think?
