The Turing test doesn’t matter

by Massimo Pigliucci

You probably heard the news: a supercomputer has become sentient and has passed the Turing test (i.e., has managed to fool a human being into thinking he was talking to another human being [1,2])! Surely the Singularity is around the corner and humanity is either doomed or will soon become god-like.

Except, of course, that little of the above is true, and it matters even less. First, let’s get the facts straight: what actually happened [3] was that a chatterbot (i.e., a computer script), not a computer, passed the Turing test at a competition organized at the Royal Society in London. Second, there is no reason whatsoever to think that the chatterbot in question, named “Eugene Goostman” and designed by Vladimir Veselov, is sentient, or even particularly intelligent. It’s little more than a (clever) parlor trick. Third, this was actually the second time a chatterbot has passed the Turing test; the first was Cleverbot, back in 2011 [4]. Fourth, Eugene only squeaked by, technically convincing “at least 30% of the judges” (a pretty low bar) for a mere five minutes. Fifth, Veselov cheated somewhat by giving Eugene the “personality” of a 13-year-old Ukrainian boy, which insulated the chatterbot from potential problems caused by its poor English or its inept handling of some questions. As you can see, the whole thing was definitely hyped in the press.

Competitions to pass the Turing test have become fashionable entertainment for the AI crowd, and Brian Christian — who participated in one such competition as a human decoy — wrote a fascinating book about it [5], which provides interesting insights into why and how people do these things. But the very idea of the Turing test is becoming more and more obviously irrelevant, ironically in part precisely because of the “successes” of computer scripts like Cleverbot and Eugene.

Turing proposed his famous test back in 1950, calling it “the imitation game.” The idea stemmed from his famous work on what is now known as the Church-Turing thesis [6], the idea that “computers” (very broadly defined) can carry out any task that can be encoded by an algorithm. Turing was interested in the question of whether machines can think, and he was likely influenced by the then cutting-edge research approach in psychology, behaviorism [7], which rejected internal mental states as either fictional or scientifically inaccessible and for a while led psychologists to study human behavior from a strictly externalist standpoint. Since the question of machine thought seemed even more daunting than the issue of how to study human thought, Turing’s choice made perfect sense at the time. This, of course, was well before many of the modern developments in computer science, philosophy of mind, neurobiology and cognitive science.

It didn’t take long to realize that it was not that difficult to write short computer scripts that were remarkably successful at fooling human beings into thinking they were dealing with humans rather than computers, at least in specific domains of application. Perhaps the most famous is Eliza, invented by Joseph Weizenbaum in the mid-’60s, which simulates a Rogerian psychotherapist [8]. Of course, Eliza is far more primitive than Cleverbot or Eugene, and its domain specificity means that it technically wouldn’t pass the Turing test. Still, try playing with it for a while (or, better yet, get a friend who doesn’t know about it to play with it) and you can’t avoid being spooked.
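
To get a feel for how thin the trick is, here is a minimal sketch of the keyword-and-reflection technique an Eliza-style script relies on. It is illustrative only: the patterns, the REFLECTIONS table and the respond function below are invented for this example, not taken from Weizenbaum’s actual program.

```python
import random
import re

# Swap first- and second-person words so the user's own phrase can be echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# A handful of illustrative keyword patterns; the real Eliza had many more, plus ranking rules.
PATTERNS = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Tell me more about feeling {0}.", "How long have you felt {0}?"]),
    (r"my (.*)", ["Why do you say your {0}?", "Does your {0} concern you?"]),
    (r".*", ["Please go on.", "I see. What does that suggest to you?"]),  # catch-all
]

def reflect(phrase: str) -> str:
    """Turn 'my boss hates me' into 'your boss hates you'."""
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.lower().split())

def respond(user_input: str) -> str:
    """Return a canned reply for the first keyword pattern that matches the input."""
    text = user_input.lower().strip(" .!?")
    for pattern, replies in PATTERNS:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # unreachable thanks to the catch-all, kept for safety

if __name__ == "__main__":
    print(respond("I feel nobody listens to me"))
    # e.g. "Tell me more about feeling nobody listens to you."
```

There is no model of the conversation anywhere in that loop, which is the point: the script simply mirrors the user’s own words back as open-ended questions.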

That’s in large part because human beings have a strong instinctual tendency to project agency whenever they see patterns, something that probably also explains why belief in supernatural entities is so widespread in our species. But precisely because we know of this agency-projection bias, we should be all the more careful before accepting any purely behavioristic “test” for the detection of such agency, especially in novel situations where we do not have a proper basis for comparison. After all, the Turing test is trying to solve the problem of other minds [9] (as in: how do we know that other people think like we do?) in the specific case of computers. The difference is that a reasonable argument for concluding that people who look like me and behave like me (and who are internally constituted in the same way as I am, when we are able to check) indeed also think like me is precisely that they look, behave and are internally constituted in the same fashion as I am. In the case of computers, the first and third criteria fail, so we are left with the behavioristic approach of the Turing test, with all the pitfalls of behaviorism, augmented by its application to non-biological devices.

But there are deeper reasons why we should abandon the Turing test and find some other way to determine whether an AI is... well, that’s the problem: is what, exactly? There are several attributes that get thrown into the mix whenever this topic comes up, attributes that are not necessarily functionally linked to each other, and that are certainly not synonyms, even though they too often get used casually as if they were.

Here are a number of things we should test for in order to answer Turing’s original question: can machines think? Each entry is accompanied by a standard dictionary definition, just to take a first stab at clarifying the issue:

Intelligence: The ability to acquire and apply knowledge and skills.

Computing power: The power to calculate.

Self-awareness: The conscious knowledge of one’s own character, feelings, motives, and desires.

Sentience: The ability to perceive or feel things.

Memory: The faculty of storing and retrieving information.

It should be obvious that human beings are characterized by all of the above: we have memory, are sentient, self-aware, can compute, and are intelligent (well, some of us, at any rate). But it’s also obvious that these are distinct, if in some ways related, attributes of the human mind. Some are a subset of others: there is no way for someone to be self-aware and yet not sentient, while plenty of animals are presumably sentient but likely not self-aware (it’s hard to tell, really). It should further be clear that some of these attributes have little to do with some of the others: one can imagine more and more powerful computing machines that are nonetheless neither intelligent nor self-aware (my iPhone, for instance). One can also agree that memory is necessary for intelligence and self-awareness, but at the same time realize that human memory is nothing like computer memory: our brains don’t work like hard drives, where information is stored and reliably retrieved. In fact, memories are best thought of as continuous re-interpretations of past events, whose verisimilitude varies according to a number of factors, not the least of which is emotional affect.

So, when we talk about “AI,” do we mean intelligence (as the “I” deceptively seems to stand for), computation, self-awareness, or all of the above? Without first agreeing, at the least, on what it is we are trying to do, we cannot possibly even conceive of a test to see whether we’ve gotten there.

Now, which of the above — if any — does the Turing test in particular actually test for? I would argue: none. Eugene passed the test, yet it certainly lacks both sentience and, a fortiori, self-awareness. Right there, it seems to me, its much-trumpeted achievement has precious little of interest to say to anyone concerned with consciousness, philosophy of mind, and the like.

If I understand correctly what a chatterbot is, Eugene doesn’t even have memory per se (though it often does rely on a database of keywords), not in the sense in which a computer has memory, and certainly not in the way a human does. Does it have computing power? Well, yes, sort of, depending on the computing power of its host machine, but not in any interesting sense that should get anyone outside of the AI community excited.

Finally, is it intelligent? Again, no. Vladimir Veselov, the human who designed Eugene, is intelligent (and sentient, self-aware, capable of computation and endowed with memory), while Eugene itself is just a (very) clever trick, nothing more.

And that’s why we need to retire the Turing test once and for all. It doesn’t tell us anything we actually want to know about machine thinking. This isn’t Turing’s fault, of course. At the time, it seemed like a good idea. But so were epicycles in the time of Ptolemy, or luminiferous aether before the Michelson–Morley experiment.

What are we going to replace it with? I’m not sure. Aside from the necessary clarification of what it is that we are aiming for (intelligence? self-awareness? computational power? all of the above?), we are left with an extreme version of the above-mentioned problem of other minds. And that problem is already very difficult in the prima facie easier case of non-human animals. For instance, it’s reasonable to infer that closely related primates have some degree of self-awareness (let’s focus on that aspect, for the sake of discussion), but how much? Unlike most human beings, they can’t communicate to us about their perceptions of their own character, feelings, motives, and desires. What about other animals with complex brains that are more distant from us phylogenetically, and hence more structurally different, like octopuses? Again, possibly, to a degree. But I’d wager that ants, for instance, have no self-awareness, and neither do the majority of other invertebrate species, and possibly even a good number of vertebrates (fish? reptiles?).

When we talk about entirely artificial entities, such as computers (or computer programs), much of the commonsense information on the basis of which we reasonably infer other minds — biological kinship, known functional complexity of specific areas of the brain, etc. — obviously doesn’t apply. This is a serious problem, and it requires an approach a lot more sophisticated than the Turing test. Indeed, it is dumbfounding how anyone can still think that the Turing test is even remotely informative on the matter. We first need to clear up quite a bit of conceptual confusion, and then some really smart (in the all-of-the-above sense) human being needs to come up with a new proposal. Anyone wish to give it a shot?

P.S.: the Colbert Report just put out a video that includes my latest and most cutting edge thoughts on black lesbian robotic invasions. Thought you might be interested…

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[1] For an in-depth discussion see: The Turing Test, by Graham Oppy and David Dowe, Stanford Encyclopedia of Philosophy.

[2] Incidentally, and for the sake of giving credit where credit is due, perhaps this should be called the Descartes test. In the Discourse on the Method, Descartes wrote: “If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others. For we can certainly conceive of a machine so constructed that it utters words, and even utters words that correspond to bodily actions causing a change in its organs. … But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs. For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act.” Of course, Descartes was a skeptic about machine intelligence, but the basic idea is the same.

[3] A Chatbot Has ‘Passed’ The Turing Test For The First Time, by Robert T. Gonzalez and George Dvorsky, io9, 8 June 2014.

[4] Why The Turing Test Is Bullshit, by George Dvorsky, io9, 9 June 2014.

[5] The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive, by Brian Christian, Doubleday, 2011.

[6] The Church-Turing Thesis, by B. Jack Copeland, Stanford Encyclopedia of Philosophy.

[7] Behaviorism, by George Graham, Stanford Encyclopedia of Philosophy.

[8] If you’d like to play with Eliza, here is a Java version of the script.

[9] Other Minds, by Alec Hyslop, Stanford Encyclopedia of Philosophy.

371 thoughts on “The Turing test doesn’t matter”

  1. Reblogged this on Fascinating Future and commented:
    An interesting article by Massimo Pigliucci on the “Turing test”, or more accurately on how a computer was able to fool (a small percentage of) people into believing it was a real person. The important thing to remember: a robot takeover in the near future is very, very unlikely.


  2. Hi Massimo,

    Glad you’ve returned to the topic of AI again.

    was that a chatterbot (i.e., a computer script), not a computer

    This distinction is meaningless to me. No computer can do anything without a program, and the script is just a program run by a computer. You may instead want to draw a distinction between superficial “paint by numbers” programs such as this script and more dynamic, generative programs such as neural network simulations which would give us more reason to take them seriously.

    It’s little more than a (clever) parlor trick.

    Agreed.

    Third, this was actually the second time that a chatterbot passed the Turing test, the other one was Cleverbot, back in 2011 [4]

    I would say the Turing Test, correctly formulated, has never been passed by a computer program. Cleverbot is just as much a case of poor test design as Eugene Goostman.

    But the very idea of the Turing test is becoming more and more obviously irrelevant, ironically in part precisely because of the “successes” of computer scripts like Cleverbot and Eugene.

    What is becoming clear is that the test protocol needs to be more stringent than it was in these instances.

    Now, which of the above — if any — does the Turing test in particular actually test for?

    OK, let’s take them each in turn.

    Intelligence: The ability to acquire and apply knowledge and skills.

    This can be tested via the Turing Test. Indeed, it should be part of the Turing Test. The fact that it wasn’t tested for in Eugene Goostman just shows that this was not a proper Turing Test.

    Computing power: The power to calculate.

    OK. This is trivial for computers, so not really relevant. It need not be part of the Turing Test because computers are clearly better than humans in this regard.

    Self-awareness: The conscious knowledge of one’s own character, feelings, motives, and desires.

    That word ‘conscious’ is problematic, as there is no way to test for it directly. But the behaviouristic side of self-awareness should be part of the Turing Test.

    Sentience: The ability to perceive or feel things.

    This shares the same problems. There’s no way to know if another entity is feeling anything. We can only judge from its behaviour, and again this should be part of the Turing Test.

    Memory: The faculty of storing and retrieving information.

    As with calculation, computers surpass humans, but I do think it should be part of the Turing Test in the sense that the computer program should be expected to maintain a thread of conversation and to recall and refer back to earlier parts of the conversation as appropriate.

    Indeed, it is dumbfounding how anyone can still think that the Turing test is even remotely informative on the matter.

    Because there is no chance whatsoever that either you or I would be fooled into thinking Eugene Goostman or Cleverbot were human, at least if we were forewarned that we might be talking to a machine. There is absolutely no sense of another person there once you know it’s a script. There is no persistent sense of consciousness, only a very brittle illusion.

    If somebody did make a computer system which was in every sense behaviorally indistinguishable from a human, it would be very hard to shake the illusion that it was a person, even when you know it’s just a machine. You have never met me face to face. Suppose you did and it turned out I was a robot. Would you really be so quick as to dismiss all our conversation as a clever parlour trick, or would you wonder whether a computer could be conscious after all?

    some really smart (in the all-of-the-above sense) human being coming up with a new proposal. Anyone wish to give it a shot?

    Sure. Keep the Turing Test but make it more rigorous. Have lots of judges, and give them all some training on what to look out for and plenty of time to make their decisions. Demand of the test subject that it demonstrate learning, inference, pattern recognition, maintaining a continuous conversation over time, a sense of humour etc. Demand that the test subject appear not only to be human, but to be a thoughtful, mature, intelligent human speaking in their native language.

    I loved that Colbert Report video, by the way.


  3. I think you kind of intuitively had the answer when you talked about the criteria that fail when interacting with a computer.

    Something was missing in the Royal Society’s test. It’s the same thing that was missing from Turing’s thoughts about universal computation and from all the words (Intelligence, sentience, etc.) you defined above to sketch out the area of relevance to the discussion.

    What’s missing is the body, the agency of the body, and the inseparable way the body makes us part of the world.

    I’m not saying that to be intelligent something must have a body like a human’s, or even human affect. I’m saying that something becomes intelligent just as much by virtue of its body as of its brain. And the role the body plays in taking an organism from non-intelligent (newborn human) to intelligent is so integrated that we’re not even able to think about it properly if we’re just considering the computational/signaling aspects of thought.

    I think I said before in the comments here that artificial intelligence will never be developed until our creations have both bodies and childhoods. That means that a real Turing test won’t happen until nothing is hidden behind a curtain or computer screen.


  4. Hi Massimo,

    Many media outlets exaggerated Veselov’s “success” and therefore confused, and burned, many people. I remember being excited when I first heard about it, and I thought the programmers and judges had created a refined, decisive definition of the Turing test (I even wondered, “How did they interpret the test to consider their results as successful?”). Clearly, we have a lot more work to do, and I think that replacing the Turing test with a much more rigorous test is a good start. Great post!

    By the way, I really enjoyed your walk in the future on the Colbert Report. If you can time travel, then just promise me you’ll use that ability to protect us from a black lesbian robotic invasion – you know, kind of like Terminator. 🙂


    I think you’re underselling the significance of the Turing Test a bit – or, if not the Turing Test exactly, the general idea of getting a computer to converse in a way that makes it seem like it understands what you (and it) are saying.

    All the current chatbots I’ve seen (such as Cleverbot) fail miserably as soon as you try to test how well they understand the contents of the conversation. For example, whenever I ask a chatbot to repeat what it said a couple of lines earlier, it gets confused. If I tell it some information (e.g., my mother’s name is “Sally”) and then ask it to recall my mother’s name five seconds later, it will get confused. If I ask it a question like, “Which word is longer: ‘cat’ or ‘banana’?”, it won’t be able to answer. They can’t really make logical deductions over the course of a conversation either, since they don’t have much memory (as you mentioned).
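
    Probes like these can even be scripted. Here is a minimal sketch, assuming a hypothetical ask(text) helper that sends one message to the bot and returns its reply as a string; each probe has a checkable right answer, unlike open-ended small talk:

    ```python
    # Hypothetical probe script; `ask` stands in for whatever interface the chatbot exposes.

    def run_probes(ask):
        failures = []

        # 1. Plant a fact, then ask for it back a moment later (conversational memory).
        ask("My mother's name is Sally.")
        if "sally" not in ask("What is my mother's name?").lower():
            failures.append("recalling a stated fact")

        # 2. Reason about the words themselves rather than retrieving canned associations.
        if "banana" not in ask("Which word is longer: 'cat' or 'banana'?").lower():
            failures.append("comparing word lengths")

        # 3. Keep the thread: repeat the previous reply on request.
        previous = ask("Tell me your favorite color.")
        repeated = ask("Please repeat exactly what you just said.")
        if previous.strip() and previous.strip().lower() not in repeated.lower():
            failures.append("repeating its own last reply")

        return failures  # an empty list means the bot survived these (deliberately easy) probes

    if __name__ == "__main__":
        # Demo against a trivially dumb "bot" that always says the same thing.
        print(run_probes(lambda text: "That is interesting, tell me more."))
        # -> fails the first two probes (and passes the third only by always repeating itself)
    ```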

    Suppose a computer could answer these kinds of questions in a way that made it seem like it understood the content of what it was saying. Do I think that means the computer actually does “understand” what it’s saying, or that it’s “intelligent”, or “sentient”? I don’t know – that’s a tough question for all the reasons you brought up. But it would still be an amazing feat in AI research and natural language processing, so I don’t think it’s something that should be abandoned or retired.


  6. Hi Massimo,
    First some minor stuff: “there were three judges, one was fooled”. The original press release says 30 judges, so 10 would have been fooled. (http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx ) Also, rather than “organized *by* the Royal Society in London” it was more organised “at” the Royal Society.

    More substantively, all of the characteristics that you list (intelligence, self-awareness, memory, sentience) are continua. Thus any “test” for their presence can only have an arbitrary threshold, and the question “can machines think?” is ill-defined. I would have no problem considering that current computers “think” in the sense of being somewhere on a continuum on which we sit vastly higher up. From there everything is a matter of degree.


    Some (in unconventional/natural computing) hold that a simulation of a biological brain running on a conventional silicon-based computer, no matter how big or parallel, will not be able to experience feelings as our brains actually do. Our brains, for one thing, interact with different kinds of chemicals, and a computer that does not process these chemical molecules cannot experience things like how food tastes. A simulation running on a conventional silicon-based computer will never experience anything like that. But there is still the possibility of an unconventional computer (an assembly of non-silicon materials, the output of a matter compiler) that does process molecules as a biological brain does, and that therefore does have feelings.

    So the (conventional) Turing test really doesn’t matter. A simulation is not an assembly.


  8. Hi Massimo,

    I’d say that’s a simulation of life, not life

    I’m sure you would, but I was asking Asher specifically about his belief that a body is required for intelligence and whether that body needs to be physical. The question of whether a virtual organism is alive or not is not really relevant to that particular question.


  9. Hi Philip,

    a computer without processing these chemical molecules cannot experience things like how food tastes

    I don’t understand this point of view. By the time a brain is experiencing taste sensations, the chemical molecules themselves are out of the picture. The sensory data has long since been converted into electrical impulses. If we removed the chemicals completely from the equation but managed to stimulate those same nerves in the same way, then surely the brain would experience the same sensation. Indeed, the sensations of tastes and smells can be induced in the brain by stimulation of certain parts of the brain.


  10. I would say that a causally complete simulation is effectively reality. In other words, at a certain point, Massimo’s objection is a distinction without a difference (I actually wrote a bit about this, using Boids as a toy example. If you go to sevenless.org, you’ll see it).

    But I also wouldn’t say that a simulation is necessarily “non-physical”. I think we’re too attached to “stuff” as a criterion for physicality. Physical reality is much more about process than substance.

    I think a lot of people doubt the body’s importance. If you start to look into how deeply embedded body agency is in our most basic thoughts and concepts, it starts to look more likely. We wouldn’t have a concept of cause at all without years and years of repetitively causing things with our bodies. Plus the brain really just started out as a solution for how to move the body around. If we think we’ve shed that legacy, we’re sorely mistaken.


    I wrote a program 30 years ago that could fool people for about 5 minutes. Even when they were told it was a computer program I was accused of cheating and of having someone secretly provide the answers.

    It was not very sophisticated, nor particularly clever – some basic sentence parsing and the ability to turn the grammar round so that if you said to it “I hate you”, it would come back later and say “You said earlier that you hate me, is this relevant to your last remark?”
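
    For what it is worth, that grammar-reversal trick can be sketched in a few lines. This is a hypothetical reconstruction in Python (the original 1980s program is not shown here, so the details are guessed):

    ```python
    # Crude first/second-person swap plus a canned "callback" template.
    SWAPS = {"i": "you", "me": "you", "my": "your", "your": "my", "am": "are"}

    def turn_round(remark: str) -> str:
        """Swap pronouns so a stored user remark can be quoted back at them."""
        words = remark.rstrip(".!?").split()
        out = []
        for position, word in enumerate(words):
            lower = word.lower()
            if lower == "you":
                out.append("I" if position == 0 else "me")  # rough subject/object guess
            else:
                out.append(SWAPS.get(lower, word))
        return " ".join(out)

    def callback(earlier_remark: str) -> str:
        """Bring an earlier remark back later, as if the program had been mulling it over."""
        return f"You said earlier that {turn_round(earlier_remark)}, is this relevant to your last remark?"

    print(callback("I hate you"))
    # -> "You said earlier that you hate me, is this relevant to your last remark?"
    ```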

    So I am somewhat surprised that 30 years later similar programs can only fool 30% of the judges for about 5 minutes and only by resorting to the absurd pretext of being a 13 year old who does not understand English well.

    Maybe I will dig out the code and update it so that I can have a “supercomputer” pass the test. If I can find something to read 5.25″ floppies!


  12. Ask yourself this: why did Turing specify the blind? Was he thinking that it would be way harder to create a convincing body than it would be to create a convincing intelligence? Or was it because he didn’t consider the body to be an important factor?

    Either way, it’s a weird conclusion to come to when the whole point of behaviorism is being able to observe behavior.


  13. Hi DM,
    Considering we only know “life” to be based on “bodies”, I think the burden of proof would be on someone claiming that the virtual organism is the same thing. This is not to say they couldn’t be the same but I don’t think we have quite figured out what exactly makes life (organic matter) truly distinct from non-life (inorganic matter) so we wouldn’t even know what to look for to make such a conclusion.


  14. Hi Asher,

    I think we’re too attached to “stuff” as a criterion for physicality. Physical reality is much more about process than substance.

    Sure. I agree. However most people would still feel a distinction between a perfect simulation of reality and actual physical reality. There certainly is a distinction from *my* point of view. Stuff can hurt me but a simulation of stuff cannot.

    If you start to look into how deeply embedded body agency is in our most basic thoughts and concepts, it starts to look more likely.

    You’re certainly right if we’re talking about human intelligence specifically. I’m not sure that intelligence or consciousness per se requires a body. Besides, all the ways that bodies shape the brain can be essentially hardcoded in. One extreme way to do this might be by scanning a human brain and then simulating it without bothering to simulate the body (apart from the blood supply and an appropriate ambient temperature). I’m sure this would not be a pleasant experience for the simulated brain, but I don’t think it would necessarily cease to be conscious immediately just because it is bodiless, any more than a locked-in syndrome patient does.

    Ask yourself this: why did Turing specify the blind? Was he thinking that it would be way harder to create a convincing body than it would be to create a convincing intelligence? Or was it because he didn’t consider the body to be an important factor?

    Both! In order to make a convincing lifelike robot, you have two herculean tasks to overcome. You have to make a convincing artificial simulacrum of a biological body, and you have to make something which behaves intelligently. That’s a tall order! But also, it is probably true that he doesn’t consider the body to be a vital factor for intelligence, as I don’t.

    You could be right that having a (virtual/physical) body and an environment to interact with and learn from might be a practical necessity in developing a new computer system which is really intelligent, particularly one which is supposed to mimic a human. It cannot be a metaphysical necessity because we could in principle directly code in all the effects of that body and environment to the virtual brain if we knew how to do so, just as we could instantly create a fully functional, intelligent adult human from scratch if we had the ability to assemble all the atoms just so as to replicate Massimo Pigliucci as he is right now. Such a clone would not have learned anything for itself, but it would have had all the benefits of learning baked in from the start.


  15. Great video on Colbert report!

    As for the Turing test, I agree that it doesn’t really get at the heart of the issue, and it’s confusing why anyone thought it would. However, I’m also not sure there are any alternatives, either, until or unless we get a better understanding of processes like consciousness and self-awareness. If we don’t, I think I’d be much more inclined to believe that an organism (perhaps alien life) that acts intelligently like us is conscious, much more so than a computer.


  16. Maybe the experience of tasting food could work out like that, but my point is technologically how the whole device (artificial brain) could experience feelings in general. And as you say, it would be a ‘stimulation’ (not ‘simulation’).


  17. But the point is that if you want to try and fool people into thinking your computer program is a person, then the last thing you want to be doing is trying to make it “intelligent” or to understand anything.

    You need to concentrate on trying to produce the illusion of these things and in that sense I think the Turing Test is irrelevant, especially highly rigged ones like the one described in the article.

    Nevertheless I think that a properly designed Turing Test would be very effective in sorting out what can and cannot think.

    In particular I think that if there was a computer that could read examinations designed for humans and get regular pass marks then we could only conclude that this program had satisfied the criteria set by the examiners for understanding the material and had therefore understood it.

    If something can understand then I think that it has “thought”. A lecturer at my old university used to say that Mathematica knew more about mathematics than most of the lecturers in the university.

    But clever as Mathematica is, I doubt it could pass even a simple maths test that was designed for humans.


  18. Hi imzasirf,

    Again, I wasn’t asking that question. I was only asking Asher if he thinks a virtual body would be sufficient to allow intelligence to develop (and he does).

    Whether virtual life is the same as physical life depends only on how you define ‘life’, and as such I don’t think it is a very interesting question in itself except as one which must be answered before attempting to communicate any ideas which depend on this definition.


  19. I think the core idea of the Turing test remains a valid one, but that the original 30% in 5 minutes threshold that Turing described in 1950 (really alluded to in an aside prediction) is too naive. Let’s remember what the Turing test aims to measure: when we are willing to regard a machine, computer, program, or whatever as a fellow thinking being. The Turing test is just as much a test of us as it is of the technology.

    Our threshold for accepting a machine as a fellow being is probably much higher today than in Turing’s time, after six decades of progressive computing technology. But the idea of us accepting an entity as a thinking machine at some level of observed sophistication remains valid.

    The question is, what is a valid threshold? Will conversation over a text interface ever be enough? Personally, I think for a test to be meaningful, it would have to give the humans as much time and interaction as they need to make a confident decision, and the machine would need to be as successful statistically as an average human being.

    Now, that doesn’t mean that all of the attributes Massimo lists won’t have to be meticulously programmed for: they’re not going to appear magically. The magic sauce of this remains in the programming, something Turing himself recognized in his paper. But even after that has happened, and most people are convinced, there will be some insisting that it is a parlor trick.


  20. I don’t know anything about Turing tests etc., but I do know what I would ask.

    “Could you give me a summary, in your own words, of the idea I tried to explain to you? Should be easy; you had no problem talking with me about it.”


  21. But of course if we simulate a physical system then the only test that our simulation is successful is if it behaves as the physical system does.

    So a test of a computational simulation of our biological processes would be whether the simulation behaved as a human does, just as the test of a simulation of C. elegans would be whether it behaves just as the physical C. elegans does.

    So when we have the understanding and computing power to simulate the biology of a human being, the test that the simulation is correct will necessarily be very similar to the Turing Test.

    We would have no way of knowing if the simulation could feel things, but – at least on paper – the computational simulation should behave exactly as if it did feel.

    And that, as I have said in the past, raises some interesting questions.


  22. “Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs. For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act.”

    How very interesting to claim that people can act in “all the contingencies…” I’m not sure that this is a legitimate standard to set. The notion that machines must necessarily consist of “particular” organs for any given action seems to be equivalent to saying there must be a particular program for any given purpose. And the “universal instrument” of reason is the abstract ability to learn. The conclusion is that the machine is necessarily limited to the actions pre-programmed into it, which is not understanding but merely a “disposition of their organs.” I must admit that it seems to me inadvisable to conceive of the human mind apart from the condition of the brain, to start with. Nor does it seem useful to declare it simply a learning organ.

    Isn’t it likely that the many well-known cognitive errors in abstract reasoning, and the multitudinous failures we all suffer in learning, show that the brain is not simply such an organ? Perhaps the brain is more about motivations than analysis? Or to put it another way, the real Turing Test is when the computer calls you and talks to you about what it (seems to) want to talk about?


  23. Hi Robin,

    then the last thing you want to be doing is trying to make it “intelligent” or to understand anything.

    You need to concentrate on trying to produce the illusion of these things and in that sense I think the Turing Test is irrelevant, especially highly rigged ones like the one described in the article.

    Depending on how rigorous the test is, it may be impossible to produce those illusions. The easiest way to pass a robust Turing Test (and indeed perhaps the only way) would be to actually be intelligent (and perhaps conscious).


  24. I have a problem with a “simulation is successful is if it behaves as the physical system does”. Particularly with the word “behaves”.

    In the end a computer simulation is a bunch of electrons going through silicon logic gates and making pixels on a LED screen change or output sounds from a speaker. The simulation is a physical thing in its own right, but it’s a cartoon of something else. One could have a simulation or model of a heart in a computer, but one would need some advanced matter compiler to output that model (a simulation) into a working artificial heart (an assembly) to put into a human. I think to make a brain (with feelings) would be the same.


    Interesting, I completely skipped over that footnote, but it seems the challenge Descartes was raising is not the one that I think is the central issue in this case. In his time it made sense to assume that “machines” cannot generalize or become complex enough to imitate human beings, but living in the world we live in, with very complex machines and technology, I don’t see why we would not eventually be able to create a machine that will be, in its behavior, exactly like a human being. This is a project that behaviorism can help accomplish. The harder part is to figure out how to get the insides right, the feelings and qualia aspect.


  26. DM

    >>>Whether virtual life is the same as physical life depends only on how you define ‘life’, and as such I don’t think it is a very interesting question in itself except as one which must be answered before attempting to communicate any ideas which depend on this definition.

    I think defining life is not a secondary issue but in fact the main issue here. How else can we talk about life in a virtual world when we have not even understood what exactly makes life different from inorganic matter in the normal environment? In that sense, I think you have to not just provide a definition of life that you and I agree on but explain in terms of physical properties what differentiates life and non-life. Once we do that, we can ask questions about virtual life.


  27. DM,
    I’m not sure that intelligence or consciousness per se requires a body.

    Alva Noë (Action in Perception) deals with this subject. From the book blurb:
    “Perception is not something that happens to us, or in us,” writes Alva Noë. “It is something we do.” In Action in Perception, Noë argues that perception and perceptual consciousness depend on capacities for action and thought — that perception is a kind of thoughtful activity. Touch, not vision, should be our model for perception. Perception is not a process in the brain, but a kind of skillful activity of the body as a whole. We enact our perceptual experience. To perceive, according to this enactive approach to perception, is not merely to have sensations; it is to have sensations that we understand. In Action in Perception, Noë investigates the forms this understanding can take. He begins by arguing, on both phenomenological and empirical grounds, that the content of perception is not like the content of a picture; the world is not given to consciousness all at once but is gained gradually by active inquiry and exploration. Noë then argues that perceptual experience acquires content thanks to our possession and exercise of practical bodily knowledge, and examines, among other topics, the problems posed by spatial content and the experience of color. He considers the perspectival aspect of the representational content of experience and assesses the place of thought and understanding in experience. Finally, he explores the implications of the enactive approach for our understanding of the neuroscience of perception.

    You should try a thought experiment. Imagine for a moment that a drug has left you conscious but blocked all nerve signals to your brain. You cannot see, hear or feel. No perceptual signals of any kind come through to your brain. Will you be conscious? We don’t know because we can’t create these conditions. If Alva Noe is right, thought will cease and that will be unconsciousness.


  28. “Black lesbian robotic invasions”, a great video.

    “Here are a number of things we should test for in order to answer Turing’s original question: can machines think?”

    Exactly, ‘thinking’ is the word. Alan Turing defined intelligence as human “behaviors”. Yet, almost all (not including thinking and ‘physical’) human ‘behaviors’ can be formalized or largely (90%) formalized, such as,
    Reasoning — logic or illogic (note: this kind of reasoning is not ‘thinking’)
    Knowledge representation — a huge knowledge data base,
    Planning — with set goals and fixed rules,
    Learning — with a self-adding knowledge data base,
    Perception — with mechanic sensors and a large knowledge data base,
    etc..

    That is, most human ‘behaviors’ can be mimicked. The book “Linguistics Manifesto” (ISBN 978-3-8383-9722-1, published in 2010) divides intelligence into two categories.
    One, zombie intelligence: any task that can be formalized can be performed by zombie intelligence (such as a computer program, etc.).
    Two, beyond zombie intelligence (BZI): Zombie Principle — there is intelligence which is not reachable by zombie.

    The BZI consists of three vital parts:
    a. a spontaneous intention for doing a task on any encountered situation,
    b. some tasks identified by that spontaneous intention.
    c. some methods to accomplish those tasks.

    The tasks and the methods can be performed by zombies. But a large database of intention-choices can never be the spontaneous intention of a human agent. That is, enough ‘trials’ (exhausting the choice database) will definitely separate a zombie from a human. Thus, in that book, it proposed a ‘Linguistics Test’.

    Linguistics Test — if a machine can read an (arbitrary) essay and write a commentary about it, that machine is intelligent if no human can distinguish its work from other humans’ writings on the same essay.

    An abridged chapter on this issue of that book is available at http://www.prebabel.info/aintel.htm .


  29. I’m glad to see others defending Turing in the comments. Turing is not concerned with whether computers have a mind in any deep sense; he’s only concerned with whatever behaviors are required to sustain the ‘human illusion’, given the biases and prejudices of human judges.

    There is a lot of ongoing research in psychology and computer science that has been inspired by this basic insight. Dennett’s intentional stance is basically the theoretical refinement of Turing’s proposal, and it’s had a huge impact on our psychological understanding of a Theory of Mind. My favorite recent experiments modeled on the Turing Test paradigm are the perceptual crossing experiments: http://goo.gl/UnKb7f

    In light of this ongoing research, it’s almost offensive to compare Turing to Ptolemy, as if the former were part of some laughably obsolete paradigm. In fact, Turing’s test is part of the core conceptual resources of the modern understanding of the mind. Our views have matured in the last 64 years, sure, but that makes Turing more like Darwin from the perspective of the modern synthesis, and much less like Ptolemy after the Copernican revolution.

    Massimo, especially given that you are a public figure known for demarcating pseudoscience from real science, I find the comparison of Turing to Ptolemy to be intellectually irresponsible. Turing arguably deserves to have his head carved in the granite of Mount Science alongside Newton, Darwin, and Einstein. There’s no question as to his central role in the development in computer science, and his treatment of artificial intelligence continues to have an impact on our understanding of the mind.

    I’ll be defending Turing’s test in a public HOA tomorrow at 10pm EST. Everyone is welcome to participate! http://goo.gl/AoYJWB


  30. I wonder if there’s another linguistic test possible. I would like to baptise it the “Yogi Berra test” in honour of that great American philosopher.

    If a machine can understand sentences like “most lies they tell about me aren’t true” or “nobody goes to that restaurant anymore, it’s much too busy” or “If you don’t go to other people’s funerals, they won’t come to yours” that would be interesting.

    Or, if you see two women and you wonder if they’re lesbians,
    – You think they’re a couple?
    – The one on the right, yes; the other one, no, I don’t think so.


  31. Hi Massimo,

    You said a test for machine thinking should include:
    Intelligence: The ability to acquire and apply knowledge and skills.
    Computing power: The power to calculate.
    Self-awareness: The conscious knowledge of one’s own character, feelings, motives, and desires.
    Sentience: The ability to perceive or feel things.
    Memory: The faculty of storing and retrieving information.

    My philosophy (ahem) is that any amount of information processing constitutes thinking (and, ahem, consciousness). In this regard, I think Eugene (or an easily modified variation of Eugene) could pass each of your tests, to a degree. The question is whether Eugene can perform as well as a human in each ability. Assuming for the sake of argument that we can agglomerate one or more existing computers and programs and call that “Eugene”, I think it’s safe to say Eugene would surpass humans in knowledge acquisition (Watson), computing power (Wolfram Alpha), memory (duh), sentience (take the collection of measurement devices hooked up to the computers of any decent university), and self-awareness (having a more or less perfect awareness of its internal states, something easy to program).

    I think what is missing from your list (and what most people here require/suggest) is abstract thought, i.e., the ability to map a generic abstract object to a construct in memory, assign abstract properties to the abstract object, perform abstract manipulations of the object based on those properties, and provide predictions (verbally or via other behavior) based on those manipulations.

    Personally, I think passing the Turing test is a worthwhile project as an incentive toward improving our information processing abilities. It’s just time to raise the bar.

    James


  32. I’m sure this would not be a pleasant experience for the simulated brain, but I don’t think it would necessarily cease to be conscious immediately just because it is bodiless, any more than a locked-in syndrome patient does

    So you’re saying that if you remove absolutely all sensory input to that brain – all energy flow into the brain – it will remain conscious? What is causing the neurons to fire? That actually requires energy flow, no? I’m not sure why you say “immediately” here. I’m not talking about logical necessity. I’m talking about physical necessity. So sure, the brain might continue to fire for a very brief time.

    It cannot be a metaphysical necessity because we could in principle directly code in all the effects of that body and environment to the virtual brain

    Yeah, I don’t think I said it was a metaphysical necessity. I was saying that the body and body agency play a critical role in intelligence coming to be. So yeah, I’m a physicalist. If you “copy” the brain to a sufficient level of accuracy, you’d get all experiences, memory, etc. But that brain would still need a body (physically) to remain intelligent.


  33. Hi imzasirf,

    How else can we talk about life in a virtual world

    Who’s talking about *life* qua life in a virtual world? I’m talking only of a virtual entity existing in a virtual environment and asking whether it could be intelligent or conscious. I am not asking whether it would be alive, because that question doesn’t interest me very much.


  34. When we talk of a Turing test, modified or not, we have already admitted failure.
    That is because we are sidestepping a problem that is simply stated but very profound. The problem, quite simply, is that there is no known way of getting semantics from syntax. What is the point of a Turing test when we cannot solve that problem? If we don’t solve the problem and the machine does pass the Turing test all we have done is create a clever illusion. If we do solve that problem we have no need for the Turing test.

    Talking about the Turing test is just an evasion, a form of hand waving in the hope that a clever illusion can substitute for solving the real problem.
    The real problems that must be solved are:
    1) cognition, get semantics from syntax,
    2) emotion, get emotion from syntax,
    3) awareness, get awareness from syntax,
    4) intent, get intent from syntax.
    Once we solve these problems the Turing test becomes an interesting historical relic.
    Until we solve these problems the Turing Test is a time wasting diversion from the real problem.


  35. I am not sure what would even be the point of a simulation if it did not model the behaviour of the system it was simulating.

    We would not need an advanced matter compiler to get a simulation of C. elegans which produced wriggling, food-seeking, danger-avoiding behaviour that matched the observed behaviour of a real C. elegans. I don’t see why the principle would be different for a simulation of a human, just the scale and the practicalities.

    Unless Naturalism is false then it is possible, in principle, to have a computer simulation which models the externally observable behaviour of our biological systems.

    Whether or not that simulation actually feels anything comes down to philosophical interpretation.


  36. I doubt that actual intelligence is in any way “easy”. It certainly has not been easy for AI researchers.

    On the other hand it would probably be, by comparison, quite simple to produce a compelling illusion of it. My chatbot in the 80’s worked with a memory limitation of 64k and a slow 360K disk. If I applied the same principle with any normal computer and had a few terabytes of data stored in a modern database and had a large team of programmers to produce some robust syntax parsing and scenario modelling then I am pretty sure that it would blow the Turing Test away and probably pass muster on Twitter, Facebook and a number of forums I could mention.

    Sure, it would come across as a bit dumb and irritatingly evasive, but it would probably be taken by most as a real person, while being no more intelligent than the version running on the vintage IBM PC.

    CyberRobin even managed to raise a few laughs, which is more than I am able to do.


  37. There certainly is a distinction from *my* point of view. Stuff can hurt me but a simulation of stuff cannot.

    Yeah, I think this illustrates the problem people have in thinking about simulations. A simulation of stuff can hurt you if you are in the simulation.

    When someone says that within a perfect simulation, a simulated conscious being would be actually conscious, they mean that the simulated being within the simulation would experience consciousness. This is nothing but a corollary of physicalism (although to some it seems an ironic one). If the simulation is physically causally complete, a physicalist would say that there’s nothing else to be going on, so it would *have* to be conscious from the perspective of the inside of the simulation.

    All of which means that Massimo is not a physicalist ;). So I guess he and Descartes are like pals.


    Just think of how many cat owners say “He understands everything I say”; the computer program need only successfully exploit that effect in order to pass the Turing Test.


    And, granted, a cat is intelligent, but it is not the intelligence of the cat which leads people to think that the cat is understanding, just their own projection.

    Think also of how people react to cars as though they were intelligent beings.


  40. Hi Labnut,

    You cannot see, hear or feel. No perceptual signals of any kind come through to your brain. Will you be conscious?

    Yes, I think so. Although you probably could not continue for long in that state without going mad, experiencing hallucinations etc.


  41. Hi Asher,

    So you’re saying that if you remove absolutely all sensory input to that brain – all energy flow into the brain – it will remain conscious?

    Yes, I think so. For quite some time, although it may not be sustainable indefinitely.

    What is causing the neurons to fire?

    Other neurons. I think that neural activity is self-sustaining. If you killed all neural activity I think you would kill the brain or the mind. So I’m assuming that the brain remains active as sensory input is cut off.

    That actually requires energy flow, no?

    Energy is supplied by nutrients in the blood. I’m assuming, if our simulation is a full physical simulation of a biological brain, that the blood supply to the brain is being simulated.

    I’m not sure why you say “immediately” here.

    Because I don’t know how long human consciousness is sustainable without sensory input. It may not be indefinite. But I would think it would last hours or days if not longer. Perhaps the brain would drift between sleep and hallucinatory phases.


  42. Hi Robin,

    I doubt that actual intelligence is in any way “easy”. It certainly has not been easy for AI researchers.

    I never said it would be easy! I said that one way might be easier than another way, the same way there are easier and harder ways of getting to Mars.

    I am pretty sure that it would blow the Turing Test away

    Perhaps it would do as well as Eugene Goostman (although that seems unlikely). Would it pass a properly run Turing Test? No way, in my opinion.


  43. We’re not talking about people having the illusion of an agent. People already have this illusion with quite ordinary computer programs. We’re talking about a program people cannot distinguish from an actual person, even when they are trying their best to do so. A cat cannot do this and nor can any existing computer program.

