The Turing test doesn’t matter

by Massimo Pigliucci

You probably heard the news: a supercomputer has become sentient and has passed the Turing test (i.e., has managed to fool a human being into thinking he was talking to another human being [1,2])! Surely the Singularity is around the corner and humanity is either doomed or will soon become god-like.

Except, of course, that little of the above is true, and it matters even less. First, let’s get the facts straight: what actually happened [3] was that a chatterbot (i.e., a computer script), not a supercomputer, passed the Turing test at a competition organized at the Royal Society in London. Second, there is no reason whatsoever to think that the chatterbot in question, named “Eugene Goostman” and designed by Vladimir Veselov, is sentient, or even particularly intelligent. It’s little more than a (clever) parlor trick. Third, this was actually the second time that a chatterbot passed the Turing test; the other was Cleverbot, back in 2011 [4]. Fourth, Eugene only squeaked by, technically convincing “at least 30% of the judges” (a pretty low bar) for a mere five minutes. Fifth, Veselov cheated somewhat, by giving Eugene the “personality” of a 13-year-old Ukrainian boy, thereby somewhat insulating the chatterbot from potential problems caused by its poor English or its inept handling of some questions. As you can see, the whole thing was definitely hyped in the press.

Competitions to pass the Turing test have become fashionable entertainment for the AI crowd, and Brian Christian — who participated in one such competition as a human decoy — wrote a fascinating book about it [5], which provides interesting insights into why and how people do these things. But the very idea of the Turing test is becoming more and more obviously irrelevant, ironically in part precisely because of the “successes” of computer scripts like Cleverbot and Eugene.

Turing proposed his famous test back in 1950, calling it “the imitation game.” The idea stemmed from his famous work on what is now known as the Church-Turing thesis [6], the idea that “computers” (very broadly defined) can carry out any task that can be encoded by an algorithm. Turing was interested in the question of whether machines can think, and he was likely influenced by the then cutting-edge research approach in psychology, behaviorism [7], whose rejection of internal mental states as either fictional or scientifically inaccessible led psychologists for a while to study human behavior from a strictly externalist standpoint. Since the question of machine thought seemed even more daunting than the issue of how to study human thought, Turing’s choice made perfect sense at the time. This, of course, was well before many of the modern developments in computer science, philosophy of mind, neurobiology and cognitive science.

It didn’t take long to realize that it was not that difficult to write short computer scripts that were remarkably successful at fooling human beings into thinking they were dealing with humans rather than computers, at least in specific domains of application. Perhaps the most famous one is Eliza, which simulates a Rogerian psychotherapist [8], and which was written by Joseph Weizenbaum in the mid-’60s. Of course, Eliza is far more primitive than Cleverbot or Eugene, and its domain specificity means that it technically wouldn’t pass the Turing test. Still, try playing with it for a while (or, better yet, get a friend who doesn’t know about it to play with it) and you can’t avoid being spooked.
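Eliza’s trick is easy to state: match the user’s sentence against a short list of patterns, swap first- and second-person words in the captured fragment, and echo it back inside a canned question. Here is a minimal sketch in Python; the rules and pronoun reflections are illustrative inventions for this sketch, not Weizenbaum’s originals:

```python
import random
import re

# Illustrative rules and pronoun reflections -- toy inventions for this
# sketch, not Weizenbaum's original script.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "your": "my"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment):
    # Swap first- and second-person words so the echoed fragment reads naturally.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance, rng=random.Random(0)):
    text = utterance.lower().strip().rstrip(".!")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return rng.choice(templates).format(*groups)

print(respond("I feel ignored by my brother"))
```

Even a toy like this produces eerily conversational replies to the sentences it matches, which is the whole of the parlor trick: pattern substitution, with nothing anyone would call understanding behind it.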

That’s in large part because human beings have a strong instinctual tendency to project agency whenever they see patterns, something that probably also explains why a belief in supernatural entities is so widespread in our species. But precisely because we know of this agency-projection bias, we should be even more careful before accepting any purely behavioristic “test” for the detection of such agency, especially in novel situations where we do not have a proper basis for comparison. After all, the Turing test is trying to solve the problem of other minds [9] (as in, how do we know that other people think like we do?) in the specific case of computers. The difference is that a reasonable argument for concluding that people that look like me and behave like me (and who are internally constituted in the same way as I am, when we are able to check) indeed also think like me is precisely that they look, behave and are internally constituted in the same fashion as I am. In the case of computers, the first and third criteria fail, so we are left with the behavioristic approach of the Turing test, with all the pitfalls of behaviorism, augmented by its application to non-biological devices.

But there are deeper reasons why we should abandon the Turing test and find some other way to determine whether an AI is, well, that’s the problem, is what, exactly? There are several attributes that get thrown into the mix whenever this topic comes up, attributes that are not necessarily functionally linked to each other, and that are certainly not synonyms, even though too often they get casually used in just that manner.

Here are a number of things we should test for in order to answer Turing’s original question: can machines think? Each entry is accompanied by a standard dictionary definition, just to take a first stab at clarifying the issue:

Intelligence: The ability to acquire and apply knowledge and skills.

Computing power: The power to calculate.

Self-awareness: The conscious knowledge of one’s own character, feelings, motives, and desires.

Sentience: The ability to perceive or feel things.

Memory: The faculty of storing and retrieving information.

It should be obvious that human beings are characterized by all of the above: we have memory, are sentient, self-aware, can compute, and are intelligent (well, some of us, at any rate). But it’s also obvious that these are distinct, if in some ways related, attributes of the human mind. Some are a subset of others: there is no way for someone to be self-aware and yet not sentient; yet plenty of animals are presumably the latter but likely not the former (it’s hard to tell, really). It should further be clear that some of these attributes have little to do with some of the others: one can imagine more and more powerful computing machines which nonetheless are neither intelligent nor self-aware (my iPhone, for instance). One can also agree that memory is necessary for intelligence and self-awareness, but at the same time realize that human memory is nothing like computer memory: our brains don’t work like hard drives, where information is stored and reliably retrieved. In fact, memories are really best thought of as continuous re-interpretations of past events, whose verisimilitude varies according to a number of factors, not the least of which is emotional affect.

So, when we talk about “AI,” do we mean intelligence (as the “I” deceptively seems to stand for), computation, self-awareness, all of the above? Without first agreeing at least on what it is we are trying to do, we cannot possibly even conceive of a test to see whether we’ve gotten there.

Now, which of the above — if any — does the Turing test in particular actually test for? I would argue, none. Eugene passed the test, but it certainly lacks both sentience and, a fortiori, self-awareness. It seems to me, therefore, that its much trumpeted achievement has precious little of interest to say to anyone who is concerned with consciousness, philosophy of mind, and the like.

If I understand correctly what a chatterbot is, Eugene doesn’t even have memory per se (though it often does rely on a database of keywords), not in the sense in which a computer has memory, and certainly not in the way a human does. Does it have computing power? Well, yes, sort of, depending on the computing power of its host machine, but not in any interesting sense that should get anyone outside of the AI community excited.

Finally, is it intelligent? Again, no. Vladimir Veselov, the human who designed Eugene, is intelligent (and sentient, self-aware, capable of computation and endowed with memory), while Eugene itself is just a (very) clever trick, nothing more.

And that’s why we need to retire the Turing test once and for all. It doesn’t tell us anything we actually want to know about machine thinking. This isn’t Turing’s fault, of course. At the time, it seemed like a good idea. But so were epicycles in the time of Ptolemy, or luminiferous aether before the Michelson–Morley experiment.

What are we going to replace it with? I’m not sure. Aside from the necessary clarification of what it is that we are aiming for (intelligence? Self-awareness? Computational power? All of the above?), we are left with an extreme version of the above-mentioned problem of other minds. And that problem is already very difficult when it comes to the prima facie easier case of non-human animals. For instance, it’s reasonable to infer that closely related primates have some degree of self-awareness (let’s focus on that aspect, for the sake of discussion), but how much? Unlike most human beings, they can’t communicate to us about their perceptions of their own character, feelings, motives, and desires. What about other animals with complex brains that are more distant from us phylogenetically, and hence more structurally different, like octopuses? Again, possibly, to a degree. But I’d wager that ants, for instance, have no self-awareness, and neither does the majority of other invertebrate species, and possibly even a good number of vertebrates (fish? Reptiles?).

When we talk about entirely artificial entities, such as computers (or computer programs), much of the commonsense information on the basis of which we can reasonably infer other minds — biological kinship, known functional complexity of specific areas of the brain, etc. — obviously doesn’t apply. This is a serious problem, and it requires an approach a lot more sophisticated than the Turing test. Indeed, it is dumbfounding how anyone can still think that the Turing test is even remotely informative on the matter. We first need to clear up quite a bit of conceptual confusion, and then some really smart (in the all-of-the-above sense) human being needs to come up with a new proposal. Anyone wish to give it a shot?

P.S.: the Colbert Report just put out a video that includes my latest and most cutting edge thoughts on black lesbian robotic invasions. Thought you might be interested…

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[1] For an in-depth discussion see: The Turing Test, by Graham Oppy and David Dowe, Stanford Encyclopedia of Philosophy.

[2] Incidentally, and for the sake of giving credit where credit is due, perhaps this should be called the Descartes test. In the Discourse on the Method, Descartes wrote: “If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others. For we can certainly conceive of a machine so constructed that it utters words, and even utters words that correspond to bodily actions causing a change in its organs. … But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs. For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act.” Of course, Descartes was a skeptic about machine intelligence, but the basic idea is the same.

[3] A Chatbot Has ‘Passed’ The Turing Test For The First Time, by Robert T. Gonzalez and George Dvorsky, io9, 8 June 2014.

[4] Why The Turing Test Is Bullshit, by George Dvorsky, io9, 9 June 2014.

[5] The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive, by Brian Christian, Doubleday, 2011.

[6] The Church-Turing Thesis, by B. Jack Copeland, Stanford Encyclopedia of Philosophy.

[7] Behaviorism, by George Graham, Stanford Encyclopedia of Philosophy.

[8] If you’d like to play with Eliza, here is a Java version of the script.

[9] Other Minds, by Alec Hyslop, Stanford Encyclopedia of Philosophy.


371 thoughts on “The Turing test doesn’t matter”

  1. Hi Labnut,

    I am using the word in the intuitive sense that most people use it and that is good enough for the purposes of my argument.

    This is the problem. The intuitions you have are not the intuitions that Coel and I have. To Coel and me, it is perfectly intuitive to imagine that a computer can mean. It is not intuitive to you. If we could pin down and articulate in a non-circular way what it is that you find lacking in Coel’s and my accounts of “shmeaning”, then we might be able to make some progress.

    The fact that you cannot articulate what you are looking for does not mean that you are wrong. But it is what makes discussing this as difficult as it is. If you could express yourself clearly, it would really help us out.

    But the fact that we don’t understand how meaning-making happens in our brains is absolutely fatal to your cause. How can you possibly construct something when you don’t understand what it is that you are trying to construct?

    We think meaning and “shmeaning” are the same thing: nothing more than functional ability, representation and correlation/correspondence/causal connection, and so we think that we do understand it. You think we’re leaving something out, but we don’t know what that is. We are not being disingenuous. We simply have very different intuitions from you.

    When I was very young, I remember having the intuition that things must fall down if they are not supported. The idea of gravity as a force that could be absent was very weird to me. It seemed crazy that anything could float without being supported. Furthermore, gravity had to point “down”. The idea that it could point towards the centre of a ball did not sit well with me. I couldn’t understand intuitively why those in the Southern hemisphere (like you!) wouldn’t simply fall off.

    When I was much older, I found quantum mechanics to be very unintuitive. I just could not get my head around the double slit experiment. Reality simply couldn’t work like that. It made too little intuitive sense. But, as with gravity, I recalibrated my intuitions and I no longer find it quite as weird (although it’s still pretty weird!)

    The reason I feel that I can dismiss your intuition as false is because I think it likely that they are just the same inexpressible intuitions as those I once held regarding consciousness. I read a few books on the subject (Godel, Escher, Bach was quite thought-provoking) and thought a great deal, and now I no longer hold those intuitions. I think they are false, misleading intuitions just like those I held regarding gravity and quantum mechanics, and I think they create problems where there are none.

    I could be missing something, but since you cannot express what that is, your arguments are not persuasive to me.


  2. The truth is we simply don’t know. What I am claiming is that syntactical machines, such as our present computers, manipulate syntactical objects that possess zero meaning. No matter how you process and link objects that have zero meaning, you will still end up with a result that contains zero meaning. The logic is inescapable.

    Well, since the logic is inescapable, it is incumbent on the computationalist to explain how he escapes it.

    I would say that the “atom” of meaning is a connection between two symbols which might themselves be meaningless. As long as you have one such connection, you have the very simplest kind of meaning, which is that A is related to B in some way. From such atoms, we can build richer semantics, particularly as we can vary the strength of the connections, change the nature of certain connections (e.g. unidirectional or bidirectional) and include links to the senses in the semantic web.
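    (A toy rendering of the above, entirely my own construction rather than anything specified in the comment: the symbols are opaque strings with no content of their own, and whatever “meaning” there is lives in typed, weighted, optionally bidirectional links, including links to stand-ins for the senses. All names here are invented for the sketch.)

```python
# A toy "semantic web": opaque symbols, all structure in the links.
class SemanticWeb:
    def __init__(self):
        self.links = {}  # (source, relation, target) -> strength

    def connect(self, a, relation, b, strength=1.0, bidirectional=False):
        self.links[(a, relation, b)] = strength
        if bidirectional:
            self.links[(b, relation, a)] = strength

    def related(self, a):
        # Everything reachable from `a` in one hop, with the link strength.
        return {(rel, b): s for (x, rel, b), s in self.links.items() if x == a}

web = SemanticWeb()
web.connect("fido", "is-a", "dog")
web.connect("dog", "is-a", "animal")
web.connect("fido", "triggered-by", "sensory-input-alpha", strength=0.8)

print(web.related("fido"))
# → {('is-a', 'dog'): 1.0, ('triggered-by', 'sensory-input-alpha'): 0.8}
```

    Nothing in this sketch settles whether such a web amounts to meaning, of course; it only shows that the “atom of meaning” picture is easy to make mechanical.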


  3. DM,
    thank you for your kind words. I thought you might appreciate the Chinese slant.

    The basic problem is that it conflates the idea of external representations such as language with the internal representation of a mind
    Not at all. I am saying the external representation is a trigger that evokes a cascade of internal meaning (internal to the mind). The internal meanings are attached to the external representation, which can then be used to augment the internal store of meaning.

    So let’s not take syntax and semantics too literally
    Why ever not? It illuminates the heart of the problem so we should take it seriously.


  4. DM,
    I would say that the “atom” of meaning is a connection between two symbols which might themselves be meaningless.
    That really is the heart of your argument and I am glad you have spelled it out so plainly.
    What you seem to be saying is this

    [object with zero meaning]

    [object with zero meaning]
    = some meaning.

    I completely fail to see how a network of relationships between objects with zero meaning creates meaning (in a computer).

    DM, Coel, I am bowing out of this discussion for good so I wish you two goodnight and thanks for your contributions. I am suffering from extreme comment fatigue and Massimo probably needs a rest. Instead I will return to reading two delightful novels by Guo Xiaolu, which vividly bring to mind my own experiences of China. DM, I think you will enjoy them. Once again, thanks for all your thoughts.


  5. Hi Labnut,

    What you seem to be saying is this

    [object with zero meaning]

    [object with zero meaning]
    = some meaning.

    Yes, although I would draw the connection and I would attribute the meaning to the connection.

    In any case, your argument seems to me to be like saying that because electrons are colourless and protons are colourless and neutrons are colourless, it must therefore be impossible to create colourful objects by combining electrons and protons and neutrons. The logic is just as “inescapable”, but in this case it’s clearly wrong.


  6. Disagreeable Me: “We can also have more complex phrases such as
    #%#_^_%^
    This would seem to be a syntax without semantics.”

    With the currently accepted syntax-theory (CST), DM’s statement is correct. But there are two issues with this CST.
    One, it is not all-encompassing; that is, at least one type of language does not sit well with this CST description.
    Two, it is not self-consistent, and this is what I am trying to show here.

    There is no question that syntax is different from semantics. But can they be totally ‘independent’ of each other?
    1. Are they ‘necessarily’ independent of each other?
    2. Can they, in ‘principle’, be independent of each other?

    Let’s start with “Fido is a dog. Fifi is a cat.” These are the semantic manifestations of the syntax (X is a Y).

    When we ‘define’ (X is a Y) as devoid of semantics, then of course the syntax (X is a Y) must be without semantics, by definition.

    Yet does (X is a Y) carry any additional something (especially as viewed by an intelligent agent), beyond being just inked marking? I do see something additional: it shows a relation (between X and Y). What is this additional something? Nonsense, or some meaning? Under the CST, this additional something is definitely not semantics, by definition. Of course, that is fine. But this additional something is still here as a ‘reality’. Again, we can ignore it or try to deal with it.

    Well, let’s look at DM’s syntax (#%#_^_%^). Let me tell a story first. I have shown that the entire Chinese character system is composed of 220 roots. But this fact was not known for 2,000 years. Thus half of the roots I identified were not known before my work, and they are not implemented in the ‘language-pack’. There is no way to ‘type’ them out, so I must make them as ‘jpg’ files. Then I give the ‘semantics’ to those syntaxes (jpg files).

    Now, all the gizmos of DM’s syntax (#; %; #; _; ^) are ‘implemented’ in the English language-pack. In that implementation, they must have the following:
    a. The symbol (sign)
    b. The meaning of that symbol
    c. The shared ‘usage’ of the symbol

    Examples:
    “Enter your last 4-digits of your social, followed with #”
    “To go back to the main-manual, press *.”
    “What is the % increase of your wage this year?”
    “The landscape of M-string theory is 10^500 …”
    “Put a _ between the words”

    So, are these gizmos (symbols) devoid of meaning? Of course, we can still artificially ‘define’ them as having no ‘semantics’. And I will take this definition without a big fuss when I am not working on ‘Linguistics’. If I do work on ‘Linguistics’, I must deal with b) and c) somehow.

    Yet does (#%#_^_%^) have meaning? It has at least one meaning: nonsense. Is ‘nonsense’ not some kind of semantics? Of course, it could be a top ‘secret code’ of some sort. Well, we can define ‘semantics’ as follows:
    “If the meaning of a syntax is not readable by a reader, it is voided of ‘semantics’ for that reader.”

    A good definition. But then ‘advanced calculus’ is definitely devoid of semantics for 99% of the population.

    Is a big fallen rock in the middle of the highway devoid of semantics? First, it is not even an ‘entity’ in linguistics. Should we ignore that big rock (not even a linguistic marking) when we drive our cars at 60 miles an hour on a collision course with it? I will choose to ‘read’ the exact meaning that ‘radiates’ out from that thing. Indeed, every ‘marking’ (inked, protruded, or otherwise) always radiates something out to its ‘environment’ (not to itself).

    The current ‘Syntax-theory’ is perfectly fine, by definition. But it is useless in the real world.


  7. Hi DM,

    But Labnut’s example is about reading in text, so you also seem to be saying that a machine could learn to understand everything by reading in definitions of definitions ad infinitum all written in unfamiliar symbols.

    Not “everything”, just something.


  8. Hi labnut,

    But it doesn’t disqualify my argument because the common, everyday intuitive definition of meaning is perfectly adequate for my argument. I am using the word in the intuitive sense that most people use it and that is good enough for the purposes of my argument.

    What you mean is that your argument depends entirely on using a human “intuitive” understanding of “meaning”. That human intuition is anthropocentric and largely dualist. Thus low-level computing can do the shuffling around of symbols, but it takes a “soul”, or something more than the materialist account of electrons whizzing around to do the “understanding” of “meaning”. When you say that you use an “intuitive definition of meaning”, you are not even giving a definition, all you’re doing is giving an “intuitive intuition” about meaning.

    So, sorry, I don’t buy any argument based on such feeble foundations. All you’re doing is dismissing the physicalist stance and declaring it untenable based on mere intuition. I’ve given a defendable and useful definition of “meaning” under which your account is simply wrong. If you don’t like it, produce a better definition of meaning, and argue for it.

    But the fact that we don’t understand how meaning-making happens in our brains is absolutely fatal to your cause.

    Not at all. You and Avaris and co are the ones asserting that you know for sure that there is an absence of “meaning” in low-level computing and shuffling around of symbols. How do you know that if you don’t understand what meaning is and where it comes from? Proceeding merely on an intuitive intuition about it and an artificial syntax/semantic divide is not sufficient.

    You are arguing that something is possible when you cannot even begin to show how it is possible.

    I just did begin to show how it is possible, including a definition of “meaning” and a clear route to producing it.

    Some more modesty in your claims is called for.

    Not really my style, sorry! 🙂

    And if you deny that all you have to do is show me how a machine constructs meaning.

    I did just explain to you how a machine constructs meaning, based on my definition of “meaning”. If you don’t like my definition then produce a better one.


  9. Hi labnut,

    I completely fail to see how a network of relationships between objects with zero meaning creates meaning (in a computer).

    Let’s start with “Thing A {has relation Q} to Thing B”. And also, “Thing A {has relation Q} to Thing C”. We already start to know something about these Things; we may, for example, suspect a relation between B and C. Further, let’s suppose that Thing A in our neural network tends to be triggered by Sensory Input Alpha. And Thing B tends to be triggered by Sensory Input Beta, et cetera. We start to have relations between these things, and can begin to build a model of the world.

    Now this sort of thing, but with trillions of such relations, is what “meaning” and “understanding” actually are. That is all there is to it. It is not the case that at some point some mystical dualistic woo gets sprinkled over everything, and then magically the brain understands. The web of such relations is all there is.
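    (The example above can be run literally. The snippet below is my own rendering of it, with the names “A”, “B”, “C” and “Q” taken from the comment; the inference rule is a guess at what “suspect a relation” means: two things that bear the same relation to the same third thing get flagged as possibly related.)

```python
from collections import defaultdict

# Relation triples, as in the comment: A relates to B and to C via Q,
# and B is triggered by a sensory input.
relations = [
    ("A", "Q", "B"),
    ("A", "Q", "C"),
    ("B", "triggered-by", "beta"),
]

def suspected_pairs(triples):
    # Group targets by (source, relation), then flag every pair of targets
    # that share a source and relation as possibly related.
    targets_by_link = defaultdict(set)
    for source, relation, target in triples:
        targets_by_link[(source, relation)].add(target)
    pairs = set()
    for targets in targets_by_link.values():
        for x in targets:
            for y in targets:
                if x < y:  # emit each unordered pair once
                    pairs.add((x, y))
    return pairs

print(suspected_pairs(relations))  # → {('B', 'C')}
```

    Scaled up to trillions of such links, this is the kind of structure the comment is pointing at; whether it deserves the name “meaning” is, of course, exactly what is in dispute.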


  10. What you mean is that your argument depends entirely on using a human “intuitive” understanding of “meaning”. That human intuition is anthropocentric and largely dualist. Thus low-level computing can do the shuffling around of symbols, but it takes a “soul”, or something more than the materialist account of electrons whizzing around to do the “understanding” of “meaning”.

    This seems so obviously correct to me.

    For those disagreeing — we know about “meaning” because we “inhabit” our brains. If you are not a dualist, then ask yourself what you would say about the machine that is the brain if you were observing it. You would say the same (or very similar) things about it as you would say if you observed a computer CPU. You wouldn’t see the meaning present in the first case any more than the second.

    Here’s an interesting thing to think about when you consider phenomenological ideas like “meaning”:

    Remember Nagel’s “What is it like to be a bat?” Why does he say “what is it *like*”? Try coming up with an alternative but synonymous title. Do we have any other way of expressing phenomenological ideas besides saying they are “like” something?


  11. Hi Coel,

    I assert that that simulation is indeed conscious, just like the human, and is experiencing the same experiences.

    And so we are back to my original question.

    We crank the first handle and you say that a conscious experience, just like the one you are experiencing right now, commences.

    What we see is some moving parts move into a different configuration and then stop.

    Now we crank again and those moving parts move into another configuration and then stop.

    Is there something in nature that is linking those two mechanical events, besides the fact that they have both contributed to the current configuration of the parts and the markings on the tape?

    Step 1 was over and done with – finished – before step 2 started. If two events are both contributing to this conscious experience I am having then how exactly are they building upon each other?

    I hold up my hand and see five fingers in the same field of vision. And yet at any given time there is only the vanishingly tiniest part of the processing of even one finger happening and all the rest of that data is sitting static as marks on a tape.

    What is linking all these separate mechanical events that I can see five fingers in the same field of vision?

    And what exactly is producing this conscious experience – is it the moving of the mechanical parts? Is it the marks on the tape? Or a combination of both? How?


  12. My previous comment claims that any ‘existent’ not only has an intrinsic existential meaning (IEM) but also radiates something additional to that IEM. But the ‘meaningless’ can still be defined by an observing intelligent agent. The following four articles provide my arguments on this point.

    “If a thing is physically there but is “never” interact with anything (including itself), it carries no meaning. At here, the “meaning” of a thing has nothing to do with consciousness. As long as it participates in an interaction, it has meaning. … (see http://prebabel.blogspot.com/2012/04/origin-of-spatial-dimensions-and.html ).”

    “Now, for all languages (including mathematics), they share two identical continents (meta-space and meaning-space). That is, “all” languages are permanently linked among one another by these two continents. And, every linguistic-marking will have meaning. … (see http://prebabel.blogspot.com/2013/06/g-string-final-nail-seals-higgs-coffin.html ). This is the base for the ‘Martian language Thesis’.

    “Thus, the following data have no meaning to this SUSY parousia. a. For SUSY (with s-particles, such as Neutralino) — no SUSY below 1 Tev was discovered at LHC, and it received a deadly blow by the LHCb data. b. For the … If SUSY does not interact with ‘this’ universe, it has no meaning to this universe. (see http://prebabel.blogspot.com/2013/11/the-hope-of-susy-parousia.html ).

    “Existential principle (EP) — for attribute X which exists at the bottom tier of a hierarchy system (with multi-level tiers), the “meaning” of the attribute X will be preserved (shown up) in the top tier (such as the macro-world) of the … (see http://prebabel.blogspot.com/2012/04/origin-of-time-breaking-of-perfect.html ).”


  13. The “Turing Test” certainly ought to be called the Descartes Test, in light of the quote you gave. To know who “invented” what is not just a question of justice, and not just a question of the history of systems of thought. It’s also a question of logic: knowing that an idea appeared early on is a hint that it ought to be obvious, for example.

    This process of associating the correct labels ought to be extended to all fields of inquiry. For example, Johannes Buridanus formulated clearly the law of inertia, circa 1320. That’s more than three centuries before the Anglo-Saxon gentleman generally celebrated as its author was born.

    This is testimony that the Church was incredibly efficient, in the late Fifteenth Century, in its repression of advanced thinking. Buridan was put on the “Index”… except in Cracow, where Copernicus studied; when he was dying, Copernicus re-published Buridan’s heliocentric proposition.

    This ought to be a warning to the pseudo-scientist attitude about the Multiverse and Strings: too much craziness could lead to an anti-science backlash, on the ground of common sense.


  14. Hi Robin,
    Not sure if further replies are still allowed on this thread, but:

    First, let’s consider my scenario of a human in a machine that can freeze all molecules and ions to a dead stop, plus an experimenter sitting there randomly pressing the freeze/unfreeze button.

    What happens to the human consciousness? My answer is that it is not changed by the freezes. The human subjective experience edits them out, or rather, since the gaps are not experienced, they don’t even have to be edited out. I give the same answer about your crank intervals.

    To your question: “what is linking all these mechanical events” I answer that it is the information processing that they add up to that is important. I could ask the same question about electrons moving a micron in the brain, and give the same answer.

    To your question “Or a combination of both?” I’d say yes, it is the information processing that is important and all the mechanical effects add up to that, just as electron and ion motions do.
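    The point can be sketched in a few lines of Python (my illustration, not the commenter’s): arbitrary pauses injected between the steps of a deterministic computation leave its result unchanged, because the pauses are no part of the information processing itself.

    ```python
    import random
    import time

    def compute(steps, pause=False):
        """Iteratively fold the numbers 0..steps-1 into a state,
        optionally freezing for a moment between steps."""
        state = 0
        for i in range(steps):
            if pause and random.random() < 0.5:
                time.sleep(0.001)  # an externally imposed "freeze"
            state = state * 2 + i  # the information processing itself
        return state

    # The frozen and unfrozen runs reach the identical final state:
    # the pauses leave no trace in the result.
    assert compute(10) == compute(10, pause=True)
    ```

    On this view, the experimenter’s freeze button stands in the same relation to consciousness as `time.sleep` does to the returned value: invisible from the inside.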


  15. Self-awareness is the acid test. I believe Homo sapiens did not become self-aware before the Old Stone Age. Once that happens, self-judgement appears and the conscience is born; man becomes


  16. Reblogged this on Patrice Ayme's Thoughts and commented:
    Descartes-Turing Test Is Stupid
    The “Turing Test” is a big deal in Artificial Intelligence and logic, for reasons that are assuredly not flattering, as the “Test” is obviously flawed: it confuses conversation with imagination, while identifying both with intelligence.
    That’s more or less demonstrated in the following essay (which I wanted to write long ago, and have alluded to, here and there).
    The “Turing” Test certainly ought to be called the Descartes Test, in light of the quote given in the attached essay. To know that the Turing test was actually invented by Descartes is of no small consequence.
    Knowing who “invented” what is not just a question of justice, nor just a question of the history of systems of thought. It is also a question of logic: knowing that an idea appeared early on is a hint that it ought to be obvious, for example.
    This process of attaching the correct labels ought to be extended to all fields of inquiry. For example, Johanus Buridanus clearly formulated the law of inertia circa 1320, more than three centuries before the Anglo-Saxon gentleman generally celebrated as its author was born.
    This is testimony to how efficient the Church was, in the late Fifteenth Century, in its repression of advanced thinking. Buridan was put on the “Index”… except in Cracow, where Copernicus studied, and, when he was dying, the latter re-published Buridan’s heliocentric proposition.
    This ought to be a warning about the pseudo-scientific attitude surrounding the Multiverse and Strings: too much craziness could lead to an anti-science backlash, on the grounds of common sense.
    The Turing Test pretends that intelligence is all about conversation. It’s not. It’s about imagination.
    Patrice Ayme


  17. “If there is a significant difference from the perspective of a simulated brain between having no spinal cord or having a spinal cord severed at the brainstem, or between having an endocrine system which doesn’t work and having no endocrine system, then I don’t see it”

    But it’s not only the endocrine system and ‘spinal system’ body parts that you are proposing not to simulate, and depending on whom you ask, the parts of the brain that would be simulated vary a lot…

    But comments will be turned off shortly, so I’ll just add that I think the brain would care significantly about the missing parts, if it could care on its own.

    “Perhaps we have no principle by which we can suppose it would work, but I don’t see that we have any principle by which to suppose it wouldn’t either.”

    True, we don’t have any principle by which to suppose it wouldn’t work either, but that doesn’t make the idea any more realizable.


  18. Reblogged this on A Year of Coding and commented:
    An excellent exploration of the Turing test, AI, and the ability of the former to assess the latter. I especially agree with Mr. Pigliucci’s argument that we really don’t even know what we are looking for with this test. We’re apparently asking whether some ‘being’ is close enough to us in its way of thinking and expressing itself that we are fooled into assuming that it is human.
    The question is pushed into one of ‘what is it to be human?’ or ‘can we even believe that other humans are human?’
    Does cloning / synthesizing the attributes of a human make a human? This might be a problem of too much ‘teaching to the test.’
    And, again, the author reminds us that we actually want to believe in agency. It works with the way we think…
    He continues, “I’d wager that ants, for instance, have no self-awareness,” yet we want to see intelligence in what they do.

    A great question and a great article.


Comments are closed.