The Turing test doesn’t matter

by Massimo Pigliucci

You probably heard the news: a supercomputer has become sentient and has passed the Turing test (i.e., has managed to fool a human being into thinking he was talking to another human being [1,2])! Surely the Singularity is around the corner and humanity is either doomed or will soon become god-like.

Except, of course, that little of the above is true, and it matters even less. First, let’s get the facts straight: what actually happened [3] was that a chatterbot (i.e., a computer script), not a computer, passed the Turing test at a competition organized at the Royal Society in London. Second, there is no reason whatsoever to think that the chatterbot in question, named “Eugene Goostman” and designed by Vladimir Veselov, is sentient, or even particularly intelligent. It is little more than a (clever) parlor trick. Third, this was actually the second time a chatterbot has passed the Turing test; the first was Cleverbot, back in 2011 [4]. Fourth, Eugene only squeaked by, technically convincing “at least 30% of the judges” (a pretty low bar) for a mere five minutes. Fifth, Veselov cheated somewhat, by giving Eugene the “personality” of a 13-year-old Ukrainian boy, which insulated the chatterbot from potential problems caused by its poor English or its inept handling of some questions. As you can see, the whole thing was definitely hyped in the press.

Competitions to pass the Turing test have become fashionable entertainment for the AI crowd, and Brian Christian — who participated in one such competition as a human decoy — wrote a fascinating book about it [5], which provides interesting insights into why and how people do these things. But the very idea of the Turing test is becoming more and more obviously irrelevant, ironically in part precisely because of the “successes” of computer scripts like Cleverbot and Eugene.

Turing proposed his famous test back in 1950, calling it “the imitation game.” The idea stemmed from his famous work on what is now known as the Church-Turing thesis [6], the idea that “computers” (very broadly defined) can carry out any task that can be encoded by an algorithm. Turing was interested in the question of whether machines can think, and he was likely influenced by the then cutting-edge research approach in psychology, behaviorism [7], whose rejection of internal mental states as either fictional or scientifically inaccessible led psychologists for a while to study human behavior from a strictly externalist standpoint. Since the question of machine thought seemed even more daunting than the issue of how to study human thought, Turing’s choice made perfect sense at the time. This, of course, was well before many of the modern developments in computer science, philosophy of mind, neurobiology and cognitive science.

It didn’t take long to realize that it was not that difficult to write short computer scripts that were remarkably successful at fooling human beings into thinking they were dealing with humans rather than computers, at least in specific domains of application. Perhaps the most famous was Eliza, which simulates a Rogerian psychotherapist [8], and which was invented by Joseph Weizenbaum in the mid ’60s. Of course, Eliza is far more primitive than Cleverbot or Eugene, and its domain specificity means that it technically wouldn’t pass the Turing test. Still, try playing with it for a while (or, better yet, get a friend who doesn’t know about it to play with it) and you can’t avoid being spooked.
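To get a sense of how little machinery such a script needs, here is a minimal Eliza-style sketch in Python (my own toy illustration, not Weizenbaum’s actual program; the patterns and canned replies are invented):

```python
import random
import re

# A handful of invented Rogerian-style rules: a regex pattern plus canned
# replies that echo the user's own words back as a question. Weizenbaum's
# Eliza used a much richer script of keywords and reassembly rules, but the
# principle is the same: match, rearrange, reply.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r".* mother .*", ["Tell me more about your family."]),
]
DEFAULTS = ["Please go on.", "How does that make you feel?", "I see."]

def respond(sentence: str) -> str:
    """Return a canned, pattern-matched reply; no understanding is involved."""
    text = sentence.lower().strip(" .!?")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULTS)

print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"
```

That a script this small can nonetheless unsettle people is the striking part.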

That’s in large part because human beings have a strong instinctual tendency to project agency whenever they see patterns, something that probably also explains why a belief in supernatural entities is so widespread in our species. But precisely because we know of this agency-projection bias, we should be even more careful before accepting any purely behavioristic “test” for the detection of such agency, especially in novel situations where we do not have a proper basis for comparison. After all, the Turing test is trying to solve the problem of other minds [9] (as in, how do we know that other people think like we do?) in the specific case of computers. The difference is that a reasonable argument for concluding that people who look like me and behave like me (and who are internally constituted in the same way as I am, when we are able to check) indeed also think like me is precisely that they look, behave and are internally constituted in the same fashion as I am. In the case of computers, the first and third criteria fail, so we are left with the behavioristic approach of the Turing test, with all the pitfalls of behaviorism, augmented by its application to non-biological devices.

But there are deeper reasons why we should abandon the Turing test and find some other way to determine whether an AI is, well, that’s the problem, is what, exactly? There are several attributes that get thrown into the mix whenever this topic comes up, attributes that are not necessarily functionally linked to each other, and that are certainly not synonyms, even though too often they get casually used in just that manner.

Here are a number of things we should test for in order to answer Turing’s original question: can machines think? Each entry is accompanied by a standard dictionary definition, just to take a first stab at clarifying the issue:

Intelligence: The ability to acquire and apply knowledge and skills.

Computing power: The power to calculate.

Self-awareness: The conscious knowledge of one’s own character, feelings, motives, and desires.

Sentience: The ability to perceive or feel things.

Memory: The faculty of storing and retrieving information.

It should be obvious that human beings are characterized by all of the above: we have memory, are sentient, self-aware, can compute, and are intelligent (well, some of us, at any rate). But it’s also obvious that these are distinct, if in some ways related, attributes of the human mind. Some are a subset of others: there is no way for someone to be self-aware and yet not sentient; yet plenty of animals are presumably the latter but likely not the former (it’s hard to tell, really). It should further be clear that some of these attributes have little to do with some of the others: one can imagine more and more powerful computing machines which are nonetheless neither intelligent nor self-aware (my iPhone, for instance). One can also agree that memory is necessary for intelligence and self-awareness, but at the same time realize that human memory is nothing like computer memory: our brains don’t work like hard drives, where information is stored and reliably retrieved. In fact, memories are really best thought of as continuous re-interpretations of past events, whose verisimilitude varies according to a number of factors, not the least of which is emotional affect.

So, when we talk about “AI,” do we mean intelligence (as the “I” deceptively seems to stand for), computation, self-awareness, all of the above? Without first agreeing at least on what it is we are trying to do, we cannot possibly even conceive of a test to see whether we’ve gotten there.

Now, which of the above — if any — does the Turing test in particular actually test for? I would argue, none. Eugene passed the test, but it certainly lacks both sentience and, a fortiori, self-awareness. Right there, it seems to me, its much-trumpeted achievement has precious little of interest to say to anyone who is concerned with consciousness, philosophy of mind, and the like.

If I understand correctly what a chatterbot is, Eugene doesn’t even have memory per se (though it often does rely on a database of keywords), not in the sense in which a computer has memory, and certainly not in the way a human does. Does it have computing power? Well, yes, sort of, depending on the computing power of its host machine, but not in any interesting sense that should get anyone outside of the AI community excited.

Finally, is it intelligent? Again, no. Vladimir Veselov, the human who designed Eugene, is intelligent (and sentient, self-aware, capable of computation and endowed with memory), while Eugene itself is just a (very) clever trick, nothing more.

And that’s why we need to retire the Turing test once and for all. It doesn’t tell us anything we actually want to know about machine thinking. This isn’t Turing’s fault, of course. At the time, it seemed like a good idea. But so were epicycles in the time of Ptolemy, or luminiferous aether before the Michelson–Morley experiment.

What are we going to replace it with? I’m not sure. Aside from the necessary clarification of what it is that we are aiming for (intelligence? self-awareness? computational power? all of the above?), we are left with an extreme version of the above-mentioned problem of other minds. And that problem is already very difficult when it comes to the prima facie easier case of non-human animals. For instance, it’s reasonable to infer that closely related primates have some degree of self-awareness (let’s focus on that aspect, for the sake of discussion), but how much? Unlike most human beings, they can’t communicate to us about their perceptions of their own character, feelings, motives, and desires. What about other animals with complex brains that are more distant from us phylogenetically, and hence more structurally different, like octopuses? Again, possibly, to a degree. But I’d wager that ants, for instance, have no self-awareness, and neither does the majority of other invertebrate species, and possibly even a good number of vertebrates (fish? reptiles?).

When we talk about entirely artificial entities, such as computers (or computer programs), much of the commonsense information on the basis of which we can reasonably infer other minds — biological kinship, known functional complexity of specific areas of the brain, etc. — obviously doesn’t apply. This is a serious problem, and it requires an approach a lot more sophisticated than the Turing test. Indeed, it is dumbfounding how anyone can still think that the Turing test is even remotely informative on the matter. We are in need first of all of clarifying quite a bit of conceptual confusion, and then of some really smart (in the all-of-the-above sense) human being coming up with a new proposal. Anyone wish to give it a shot?

P.S.: the Colbert Report just put out a video that includes my latest and most cutting edge thoughts on black lesbian robotic invasions. Thought you might be interested…

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[1] For an in-depth discussion see: The Turing Test, by Graham Oppy and David Dowe, Stanford Encyclopedia of Philosophy.

[2] Incidentally, and for the sake of giving credit where credit is due, perhaps this should be called the Descartes test. In the Discourse on the Method, Descartes wrote: “If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others. For we can certainly conceive of a machine so constructed that it utters words, and even utters words that correspond to bodily actions causing a change in its organs. … But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to whatever is said in its presence, as the dullest of men can do. Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs. For whereas reason is a universal instrument, which can be used in all kinds of situations, these organs need some particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act.” Of course, Descartes was a skeptic about machine intelligence, but the basic idea is the same.

[3] A Chatbot Has ‘Passed’ The Turing Test For The First Time, by Robert T. Gonzalez and George Dvorsky, io9, 8 June 2014.

[4] Why The Turing Test Is Bullshit, by George Dvorsky, io9, 9 June 2014.

[5] The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive, by Brian Christian, Doubleday, 2011.

[6] The Church-Turing Thesis, by B. Jack Copeland, Stanford Encyclopedia of Philosophy.

[7] Behaviorism, by George Graham, Stanford Encyclopedia of Philosophy.

[8] If you’d like to play with Eliza, here is a Java version of the script.

[9] Other Minds, by Alec Hyslop, Stanford Encyclopedia of Philosophy.

371 thoughts on “The Turing test doesn’t matter”

  1. Hi Aravis,

    It’s not begging the question if I’m explaining the viewpoint. It’s begging the question if I’m concluding from this that the viewpoint is correct.

    we have no idea, at this point, what it means to say that a computer “recognizes” that squiggle refers to Aravis or that squoggle refers to the property of being American.

    Well, I at least feel like I have a good idea of what it means. Perhaps I’m wrong.

    It depends on whether squoggle is an internal or external symbol. If it is an internal symbol, then squoggle is simply that computer’s concept of “American”. It doesn’t need to recognise that it means American because it simply is “American” to that computer. If it is an external symbol, then the computer does indeed need to recognise it by some process of translation to the internal representation. But computers do that kind of thing all the time, so that’s no great mystery.

    The problem is that you have a strong intuition that the concepts in your head are more than symbols. They also have meaning, and this is true because (in my view) what it is for you to perceive meaning is for you to be able to translate it into a mental representation, which you don't have to do for your own mental representation. I think this explains your intuition that there is some difference between semantics and syntax.

    If we need to cash out what it means for those symbols to refer to external objects, then causal relationship and correspondence is enough. If it’s an abstract object that only exists in your head, then there is no external reference to explain. If it’s an abstract object that also exists in the heads of other people, then we can say that there is a causal connection and correspondence in our communication with other people about that abstract object.

    If there’s a problem with semantics and syntax that is not explained by this approach, then this poor computationalist doesn’t see it.


  2. By saying the brain is an unconventional computer, I mean the brain is a (naturally made) biological computer instead of a humanly made electronic computer. The two types are made of different materials and process different materials. From a physicalist viewpoint, that seems uncontroversial to me. (I don't see what the subject of syntax or semantics has to do with distinguishing the two as physical entities.)


  3. Philip, well, I for one am not at all convinced that thinking of the brain as a "computer" is all that helpful. There certainly are computational aspects to what the brain does, but I'm afraid computationalists are simply in thrall to the latest technology and enamored of an analogy.


  4. Aravis Tarkheena: “That is the problem we are trying to solve — how a computer, which does nothing but perform operations on the syntactic properties of symbols, can “grasp” a symbol-reference relation.”

    Is this a question for me?

    The only problem I see is a major mix-up. The computer thus far has only an 'artificially installed consciousness'. That is, it is not 'its' responsibility to 'grasp' anything. If the computer gets confused, it is entirely innocent. No, the computer is not responsible for 'grasping' anything (at least, at this point). If it spells out some nonsense, it will still be as happy as it can ever be.

    But for us (humans), why trouble ourselves? Mathematicians have done an excellent job.
    {1, 2, …, n,…} are numbers
    {a, b, …, z} are variables
    {1 + 2 = 3} is an arithmetic formula
    {a + b = c} is an algebra formula
    {a # b = c, # can be any function} is an abstract-algebra formula

    Can we use {a # b = c} as a 'variable'? A definite 'Yes'.

    So, Fifi is a cat.
    Gigi is a dog.

    While {Fifi, is, a, cat, Gigi, dog} are all tokens, can {Fifi is a cat.} and {Gigi is a dog.} also be tokens? Again, a definite 'Yes'. But they are two 'different' tokens. Well, is {X is a Y} also a token? Of course it is. Is {X is a Y} a token without any 'semantic-value'? Definitely not. {X is a Y} has the semantic-value of being an 'abstract token', and we can 'define' its semantic-value to be 0 or any arbitrary number. Being zero (0), it is absolutely not without a value.


  5. If the brain isn’t being studied as a type of unconventional/natural computer (the BRAIN initiatives in the US and EU, for example), then what is an alternative approach?


  6. Hi Massimo,

    First, exactly what developmental processes and selective pressures are you invoking?

    The somewhat trite answer would be: “most of them”, and of course any actual account of this would involve a vast amount of largely unknown detail (as it would if one asked about the developmental processes and selective pressures leading to a kidney).

    Second, what is the adaptive advantage of understanding Fermat’s Last Theorem?

    I don't know, though there does not have to be an adaptive advantage to understanding advanced maths; it could be a by-product of selection for other things.

    Third, you think imaginary concepts, such as Harry Potter, are “mistakes”?

    Not necessarily. The brain will be doing a lot of storytelling and slightly-different-reality simulations and running through what-if scenarios. All of those would be pretty useful for decision making (which is what the higher processes of the brain evolved to do).

    your explanation is just hand waving, and incomplete hand waving at that.

    Yes, it is incomplete (very much so); do you know of any account of the human brain that is not? I'd argue for "outline" rather than "hand waving", but I still don't see why there is any roadblock to an explanation along these lines.


  7. Hi labnut,

    Aravis and Massimo have given good, logical explanations that are transparently obvious to most of us. It is hard for onlookers like myself to see why you insist on rejecting their arguments.

    If it is so transparent to you then explain simply and clearly why the contents of a computer memory cannot contain meaning, but can only contain syntax.


  8. "The problem is that you have a strong intuition that the concepts in your head are more than symbols. They also have meaning, and this is true because (in my view) what it is for you to perceive meaning is for you to be able to translate it into a mental representation, which you don't have to do for your own mental representation. I think this explains your intuition that there is some difference between semantics and syntax."

    —-

    This is quite a jumble of mixed-up ideas.

    The intuition is not “mine.” That at a minimum, “having the concept ‘American'” requires knowing what “American” means (either in the minimal sense of what it refers to or in the thicker sense of both connotation and denotation) is part of the basic framework of the problem of mental content, as *everyone* working on the problem understands it.

    If one does not, at a minimum, know that “American” refers to the property of being American, then in what sense—and not a stipulated, ad hoc, made-up sense—does one “have” the concept “American”?


  9. "The different referents make the sentences mean two different things." If I were speaking to another person, who said to me that Fido is a dog and Fifi is a cat, one of the meanings I would get is "names of my pets," which is the same kind of thing. There is an obvious and indisputable difference in the literal meaning of the two sentences. I may make an error in deriving further meaning from the pair's syntactical identity. Maybe the interlocutor is teaching remedial taxonomy instead. But it seems to me that syntax in the context of multiple statements begins to carry, however imperfectly, its own meaning. Out of context? The thing is, taking things out of context is a great way to render meanings incomprehensible. I hate to use a buzz word, but isn't it possible that the semantic contents of the sentences in natural language (usually words with referents?) interact with the directions implied by the syntax, multiplying the semantic content, so that the genuine meaning of natural language is emergent? (Can't be reduced to syntactical rules of a sentence operating on the semantics of the referent words, sentence by sentence, which simply add up?)

    “Put another way, computers recognize syntactic properties. The trouble is, they do not recognize semantic properties.

    Thus, A.I. enthusiasts, like DM, have to find a way to correctly “individuate” states like the first belief from the second on purely syntactic grounds. Trouble is, syntactically they are identical.”

    I think it might be possible to program a robot to let the cat in the door and keep the dog out, or even vice versa on alternate weeks. On the other hand, I don't think I could distinguish our cat Verizon from our dog Duke on purely syntactic grounds. I would have to look. In general my beliefs in such matters come from direct experience rather than analysis. Thinking back, I don't recall learning how to distinguish cats from dogs, which makes me wonder how important self-awareness or qualia are in such matters. My guess is that a learning process of some length took place. Recognizing Fido as a dog or Fifi as a cat seems to be more like a habit than a belief state.

    I'm willing of course to accept your informed opinion that philosophers of mind and students of linguistics haven't been able to provide a convincing sketch of how such syntactic grounds can "individuate" belief states. Could that be a strong hint that this is the wrong approach, and we should focus on interaction with the environment? I'm afraid I'm not sure that the insistence on syntax isn't a way of begging the question. Worse, I'm not sure that the question isn't misleading, that the real issue is how to derive syntax from semantics. Chomsky's answer is incomplete without a sketch of possible evolutionary pathways, I think. I know Dennett claimed this problem was solved, in a footnote in Darwin's Dangerous Idea, but he neglected to actually provide the details as I recall.


  10. I will go through the relevant issues one more time, and then it is enough.

    1. The impetus for all of this AI research is to help us understand how the *human* mind works. Part of what is involved in understanding this is to understand what it is for a physical system to have internal states that possess semantic content.

    2. Because this is what we are after, we have to make use of our common ideas of meaningfulness and understanding. To redefine these is to explain something else, rather than what we are interested in.

    3. One of the stumbling blocks that the AI program runs into is that computers operate purely formally — that is, they perform computations on the purely formal–i.e. syntactic–properties of symbols. The computer, therefore, may competently simulate linguistic behavior, but because its operations are purely at the level of syntax, it in no way “knows” what it is saying or writing or what have you.

    4. Part of the reason that we draw this conclusion is because we know, with respect to language, that the syntactic and semantic properties of signs and strings are entirely different. We know that the syntactic properties have to do solely with signs' and strings' shapes, while semantic properties have to do with their meaning/reference and truth/falsity. I could recognize a string as a subject-predicate sentence in English and yet not know what the sentence means, if I don't know the reference of the subject and the predicate.

    5. These are the issues at the heart of Searle’s Chinese Room thought-experiment and not a single thing that has been said in this thread comes even close to answering them.


  11. We have to make a careful distinction between whether we are talking about semantic meaning or speaker meaning. Undoubtedly, in actual speech contexts, all sorts of issues of Pragmatics come into play, such as perlocutionary and illocutionary force, conversational implicature, etc. This, of course, is the domain of an entirely separate branch of linguistics.


  12. Hi Aravis,

    The intuition is not “mine.”

    Meaning what? That it’s not yours alone? OK. It’s not really my intuition any more.

    That at a minimum, “having the concept ‘American’” requires knowing what “American” means (either in the minimal sense of what it refers to or in the thicker sense of both connotation and denotation)

    You're assuming that there is more to knowing what "American" means than representing its connotations and denotations with links to other concepts and being able to navigate and manipulate this web of links successfully. It's not clear to me that this is so.

    If one does not, at a minimum, know that “American” refers to the property of being American, then in what sense—and not a stipulated, ad hoc, made-up sense—does one “have” the concept “American”?

    OK. So let’s stipulate that “shnow” means all the ways that computers can virtually know things (e.g. a simulation of a human brain if we want a concrete example) while “shunderstand” means all the ways that a computer can understand things and so on. These “shintentional” concepts capture all the functional aspects of these ideas, and the internal representation to boot.

    My problem is that I don’t see what difference there is between “knowledge” and “shnowledge” or between “understanding” and “shunderstanding”. You’re insisting that there is a difference, and it’s as clear as day to you that there is, but I just don’t see it. What reasons do you have to go on other than your intuitions? Is there an argument backing you up that isn’t founded on such intuitions?


  13. Robin, I have to highlight the wording of your thought experiment. You mention "this conscious state that I am currently experiencing". The form "experiencing" implies not just one state, but a series of states. So, if you are referring to your conscious state at time t, it is not hard to imagine that there will be a single crank which brings the computer to the equivalent state. If you are referring to your conscious experience between times t1 and t2, I submit that you are referring to a set of discrete states which you pass through between t1 and t2. For any given state which you identify between t1 and t2, there will be a single crank which brings the computer to the equivalent state. The "conscious experience" is the process of passing through all those states.

    James


  14. Hi Coel,

    First, it is indeed similar to the Chinese Room. Yes, that was about understanding and this about consciousness, but the main point of both is to construct a scenario to confuse human intuition by steering it away from what is important.

    No, as I said – different subject matter, different structure – different argument altogether and not even remotely relevant to what I am saying.

    And it is nothing to do with misdirection or confusion either. If you have become misdirected or confused then it is none of my doing :).

    No, sorry, I don’t grant you that. There is consciousness right there at that step. And to see that we need to talk about timescales.

    First, consciousness is a process, it needs to be a dynamic thing happening over time. If you put a human into a state of suspended animation with all molecules frozen then that person would not be experiencing thoughts and awareness during that suspended-animation state.

    Our human brains operate on a “clock speed” of about 10 millisecs (i.e. intervals between neural firings).

    But here is the problem – the speed of running or the interval between steps makes no difference whatsoever to an algorithm. None.

    You might have 10 millisec between steps 1 and 2 and 6 years between steps 2 and 3 and it would make absolutely no difference to the computation.

    So if consciousness can be a computation, then a computation that feels like a few seconds have passed when there is a 10 ms gap between steps ought to feel like a few seconds have passed when there is 20 minutes or an hour between steps.

    So, if, after that first handle crank, we ask the question, is that process at that point experiencing consciousness-with-a-10-millisec-clock-speed then the answer is clearly “no”.
    However, if we ask, is that process then experiencing consciousness with a clock speed of the handle-crank interval, whatever that is, then the answer seems to me to be “yes”.

    Again, the length of the gap between steps can never play any part in a computation, nor can the length of time it takes to complete a step. So you have just given a very good reason why consciousness is not a computation – if the speed of the clock makes a difference to how the experienced state feels then it is clearly not an algorithm.

    Now speed up your handle cranks such that they occur every 10 millisecs. In what way is it now “clear” that there is no consciousness throughout?

    It is clear because we know that the length of the gap between steps makes no difference to a computation; it is not factored into a computation. It is clear because we know that there is no difference between the computation that is made with a 10 millisec gap between steps and a computation with a 3 second gap between steps.

    And it was clear to you that the conscious state you are experiencing right now could not have been produced by the slow crank process and we know that the gap between steps plays no part in a computation. So we can lose the scare quotes and see that it is very clear indeed that our conscious experience could not be that process – no matter what speed you run it at.
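    Just to make the purely technical premise concrete, here is a toy sketch in Python (my own illustration only; it shows that inter-step delay is not an input to an algorithm, and says nothing about consciousness itself):

    ```python
    import time

    def stepwise_sum(values, pause=0.0):
        """Add up a list one step at a time, optionally pausing between steps.
        The pause is not part of the computation and cannot affect its result."""
        total = 0
        for v in values:
            total += v
            time.sleep(pause)  # 10 ms, an hour, a thousand years: the algorithm cannot tell
        return total

    data = [1, 2, 3, 4, 5]
    assert stepwise_sum(data, pause=0.0) == stepwise_sum(data, pause=0.01) == 15
    ```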


  15. DM wrote: "You're assuming that there is more to knowing what "American" means than representing its connotations and denotations with links to other concepts and being able to navigate and manipulate this web of links successfully. It's not clear to me that this is so."

    —–

    I’m not sure exactly what this means. I am assuming nothing other than what is commonly understood when we say things like “John understands the word ‘American’.” What that involves and how we might explain it in physical terms comprises the subject matter of the problem of intentionality, in the Philosophy of Mind.

    That common understanding has it that at a minimum, for John to understand the word 'American', he must know what it refers to, and it is the task of any theory of intentionality to explain that. Indeed, explaining that represents a minimal condition of adequacy that any purported theory of intentionality must meet.

    You seem to simply want to invent your own subject matter and then claim that computationalism explains it. Unfortunately, that's not how the business of scientific explanation works. One explains the relevant phenomena as they are. One doesn't invent one's own phenomena.

    And with that, I think we can safely say that this parrot is demised; deceased; a stiff. In any event, I’ve had enough of the topic for now.


  16. Labnut, you commented on feedback control loops but not on the crucial issue of plasticity and learning – which is a form of ongoing adaptive selection for an ensemble of options.

    As regards Massimo's comments – the brain is crucially shaped by emotions, as Damasio has shown. It is not a purely cognitive instrument, as envisaged by Cognitive Science.


  17. In order to cash in on the computer-intelligence idea, then, one has to think of some way that one could realize states like the beliefs above, in a computer.

    The way that computers work is by performing operations on the formal properties of symbols. Put another way, computers recognize syntactic properties. The trouble is, they do not recognize semantic properties.

    This is basically a category error. It would be like saying that neurons recognize firing patterns, so they can’t recognize semantic properties. Of course they can’t — they’re neurons.

    You’re thinking of computer instructions as operating at the same level as symbolic cognitive processing, which is a wholly different category at a wholly different level of organization.

    I think this is one of the dangers of over-abstraction. We can say, “look, these are symbols and those are symbols, so they’re basically the same thing”. They’re not. If a computer is ever able to recognize semantics, it won’t be at the level of machine instructions.


  18. Hi Aravis,

    for John to understand the word ‘American’, he must know what it refers to, and it is the task of any theory of intentionality to explain that.

    I have been explaining just that.

    Here is my explanation again. For John to understand the word "American", he must be able to connect it with the node in his semantic web (a formal structure in his mind) which plays the role of his internal concept of "American". Again, this kind of operation is routine in programming.

    He doesn't need to know that this node in his mind refers to "American" because it is his concept of "American". When John thinks about "American", this node is really what he is thinking about. It relates to concepts in other people's minds and objects in the world via correspondence and causal connection.
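    For what it's worth, here is a toy sketch in Python of the kind of routine operation I mean (all the names and links are invented for illustration; this is not anyone's model of a mind):

    ```python
    # Toy semantic web: each internal concept node is nothing but its links to
    # other nodes; the external word "American" is simply mapped onto the node
    # that plays the American-concept role.
    concepts = {
        "AMERICAN": {"is_a": "NATIONALITY", "of_country": "USA"},
        "NATIONALITY": {"is_a": "PROPERTY"},
        "USA": {"is_a": "COUNTRY", "continent": "NORTH_AMERICA"},
    }

    lexicon = {"american": "AMERICAN", "usa": "USA"}  # external word -> internal node

    def ground(word: str) -> dict:
        """Map an external symbol onto the internal node it activates and return
        that node's links -- the only 'content' the node has."""
        node = lexicon[word.lower()]
        return concepts[node]

    print(ground("American"))  # {'is_a': 'NATIONALITY', 'of_country': 'USA'}
    ```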

    You seem to simply want to invent your own subject matter and then claim that computationalism explains it.

    That’s not what I’m doing. You always seem to lose interest before I can make much headway in explaining my view.


  19. Hi Robin,

    So if consciousness can be a computation, then a computation that feels like a few seconds have passed when there is a 10 ms gap between steps ought to feel like a few seconds have passed when there is 20 minutes or an hour between steps.

    I agree with you that the subjective sensation of time passing would be the same in the case of a 1-hr clock speed as in the case of a 10-millisec clock speed. I do not, though, agree that you can directly link the subjective sensation of time to external-world time intervals.

    The knowledge of the actual time interval (as opposed to the subjective experience of time passing) derives from sensory input from the external world. So, let's suppose you run your handle-cranking simulation at a clock speed of 1 hour. And we rig up sensory input into this simulation that is also slowed down, so that it "looks" at a clock ticking very slowly, slowed down by the same amount as the clock speed.

    I assert that that simulation is indeed conscious, just like the human, and is experiencing the same experiences.

    … if the speed of the clock makes a difference to how the experienced state feels then it is clearly not an algorithm.

    The sensory input to an algorithm could well make a difference to the experienced state.

    If we want to consider a zero-sensory-input version then I again assert that the simulation is conscious and experiencing the same thing, but with a slower clock speed. Thus what the human experiences as “a few moments passing”, the simulation would experience as “a few moments passing”, though for the simulation the whole experience would be slowed down by a vast factor as measured by an external clock (as measured by a subjective internal clock it would take the same time).


  20. Hi DM,

    The problem is that a computer program is discrete. It can be paused and can resume without any consequences.

    So let’s first discuss what happens to human consciousness, supposing that some sort of machine could freeze all motion of all molecules/ions in our brain, and then unfreeze them again. My intuition says that our subjective “consciousness” would be the same, and would simply edit out the time gaps. I’m open to different ideas however.

    However, since a discrete process is never really in motion, it would never actually have the chance to enjoy consciousness the way an analogue brain can.

    My stance is that the series of discrete steps separated by time intervals adds up to a dynamic process with motion. For all we know, time itself could be a succession of discrete steps (at the quantum gravity Planck scale). But let’s first consider the case of a human in the above freeze-machine, with an experimenter randomly pressing a start/stop button, and see if we can agree on the outcome of that.


  21. Hi Coel,

    I agree completely with your analysis, and in particular that our subjective experience would be unaffected by being frozen, apart from seeing the external world jump ahead in time when we are unfrozen. However, I can understand the intuitions that would cause anti-computationalists to persist in this argument.

    The intuition is if we can freeze a brain, then while it is frozen there is no consciousness. If a computational process can be construed as a series of frozen instants, there is therefore no instant where it is conscious. If it is unconscious at all instants, then it can never be conscious.

    I guess anti-computationalists believe that brains are different, being more analogue and continuous. I agree that time itself could be discrete, but this is not settled. If we cannot therefore provide an account of how consciousness works with discrete time, the option to believe that time is continuous remains open to the anti-computationalist.

    So how can we explain consciousness in the context of discrete processing? As mentioned previously, my solution would be to regard consciousness as a property of the timeless mathematical object that corresponds to this process, and not a property of any physical implementation of that process. This solution seems crazy to you, so what’s your solution?


  22. Hi Aravis,

    First, thanks for your exposition, which certainly helps me to understand your stance.

    Because this is what we are after, we have to make use of our common ideas of meaningfulness and understanding.

    A big issue here is that we have not agreed on what “meaningfulness” and “understanding” actually are. We have not agreed a set of goal posts that a computer needs to aim for to be said to be doing “meaning”, and we have not agreed an operational test of whether “meaning” is present.

    We do know that “meaning” and “understanding” are things that our brains can do, and if we appeal to common folk intuition we’re likely to arrive at a semi-mystical dualist stance that only our brains can do them.

    But, as I see it, all of these things (understanding, awareness, meaning, consciousness) are continua, and thus they can be present at levels vastly below that in a human brain.

    I suggest that “meaning” and “understanding” are about linkages between items of information, and about the degree to which these linkages are available as inputs to an information-processing and/or decision-making process.

    Thus, the more that consequences and implications of some information are available to feed into other parts of the neural network, the more that neural network "understands" the "meaning" of that information. So, in essence, "meaning" and "understanding" are all about useful cross-linkages between different parts of a neural network (or equivalent computing device). Obviously one can thus have low levels of "understanding".

    One of the stumbling blocks that the AI program runs into is that computers operate purely formally — that is, they perform computations on the purely formal–i.e. syntactic–properties of symbols.

    This is where I disagree, and to me an assertion such as this is begging the whole question, or rather defining things in a way that rules out “meaning” by fiat.

    Certainly, when I write computer code the lines of code are full of meaning, and what the computer running them is doing is full of meaning to me. Of course the crucial bit of that is the "… to me". I am aware of all sorts of implications and linkages between different bits of information, and of how they relate to all sorts of things that the computer does not know about.

    But the computer-state does contain *some* linkages between bits of information, since the whole point of programming it is to process information in meaningful ways, and thus the computer is told about those linkages. Thus, by my definition, the computer is doing "meaning" and does have some "understanding", but does so to a degree vastly lower on the continuum than I do, since the computer is a much cruder device than my brain is.

    … because [the computer’s] operations are purely at the level of syntax, it in no way “knows” what it is saying or writing or what have you.

    I disagree with that claim, and with the whole way you have set up the problem, which declares it to be syntax-only and meaning-free at the outset. My stance is that the computer does "know" some things about what it is doing; it just "knows" vastly less than I do. From there, of course, it is only a matter of degree — the extent of the linkages — that separates us.

    we know, with respect to language, that the syntactic and semantic properties of signs and strings are entirely different.

    Just because that distinction might be a useful analysis tool does not show that pieces of information whizzing around a neural network can be semantics-free. Since, as I have defined it, “meaning” is about linkages between information, and since in a neural network the whole point is that electrons whizzing in one part are linked to other parts, that would not be the case.

    These are the issues at the heart of Searle’s Chinese Room thought-experiment …

    Plenty of philosophers give the “systems” reply to the Chinese Room, and the dispute really is that people have not first agreed on what “understanding” actually is.


  23. Coel,
    OK. Your comment gave no justification for the claims: “Computers do not store any meaning at all. They only store syntax” and “today’s computers will never understand because they do not contain meaning, they only contain syntax”. What is your justification for those claims?

    As justification I present
    The Chinese Computer Experiment (with apologies to John Searle)
    Unknown to you, the Chinese have leapfrogged Western technology by designing a computer that no longer reads in binary digits '0110101', etc. It now handles Chinese characters natively. This has resulted in a more than 100-fold speed increase, and that will lead to their cultural domination of the West. I am not permitted to tell you how this is done; otherwise I will spend the next 20 years in a forced re-education-through-labour camp where I will spend 16 hours a day writing self-criticisms.

    Now consider this lovely poem (感遇四首之二 by 張九齡):

    蘭葉春葳蕤
    桂華秋皎潔
    欣欣此生意
    自爾為佳節
    誰知林棲者
    聞風坐相悅
    草木有本心
    何求美人折

    My pre-release prototype of the new Chinese computer reads in the above poem and tries to understand it.
    It reads the first character (蘭).
    To the computer, this is just a syntactical symbol, nothing else. It means the same to the computer as it does to you and me (non-Chinese speakers), which is precisely nothing. It is just a collection of squiggles.

    So it does what you and I would do: consult a dictionary and Wikipedia. But there is a problem: the dictionary and Wikipedia are in Chinese as well (the computer only handles Chinese symbols natively).
    What it finds is this – 幽人歸獨臥

    Uh uh. Now it looks up the first character (幽) in that sequence in the same way. What it finds is this:
    滯慮洗孤清
    Uh uh, try again. It looks up the first character (滯) in this sequence and what it finds is this:
    持此謝高鳥
    Uh uh, try yet again. It looks up the first character (持) in this sequence and what it finds is this:
    因之傳遠情

    This sequence will go on indefinitely and the computer will never discover the meaning. That is because the syntactical elements do not contain meaning, and referring to further syntactical elements merely repeats the problem. It does not matter if you do this 10 million times in one picosecond; the computer will still only end up at a syntactical element that does not contain meaning.
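    If you want the regress in toy form, here is a sketch in Python (my own illustration; the 'dictionary' contains only the four lookups above, and in this telling the real chain never bottoms out):

    ```python
    # Toy rendering of the regress: every "definition" is itself just more
    # uninterpreted symbols, so every lookup ends at yet more syntax.
    dictionary = {
        "蘭": "幽人歸獨臥",
        "幽": "滯慮洗孤清",
        "滯": "持此謝高鳥",
        "持": "因之傳遠情",
    }

    def look_up(symbol, steps=10):
        """Follow definitions from symbol to symbol; every stop is more syntax."""
        for _ in range(steps):
            definition = dictionary.get(symbol)
            if definition is None:
                print(f"{symbol} -> (just another bare symbol, and so on indefinitely)")
                return
            print(f"{symbol} -> {definition}")
            symbol = definition[0]  # look up the first character of the definition

    look_up("蘭")
    ```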

    On the other hand, when tienzengong read that poem, this is in effect what he saw:

    Orchid and Orange by Zhang Jiuling.

    Tender orchid-leaves in spring
    And cinnamon-blossoms bright in autumn
    Are as self-contained as life is,
    Which conforms them to the seasons.
    Yet why will you think that a forest-hermit,
    Allured by sweet winds and contented with beauty,
    Would no more ask to be transplanted
    Than would any other natural flower?

    What a lovely poem!
    That understanding, conception of loveliness, rhetorical question and delighted emotion are all contained in tienzengong's mind. That meaning is triggered by the sight of the syntactical elements that represent the poem. The meaning is in his mind. The syntactical elements are in the computer. Trying to understand one syntactical element by referring to another syntactical element is an exercise in futility, no matter how many times you repeat it and no matter what combinations you use in the repetition. You still end up at a syntactical element that has, of itself, no meaning. The syntactical element is a sign that triggers meaning in the mind.

    You and I saw the same thing that tienzengong saw but it was only meaningful to him because his mind contained the associated meanings and ours did not.

    Chinese readers will notice that my so-called explanations were merely quotes from the poem 張九齡 感遇四首之三. I am a lazy person.


  24. gfrellis,
    but not on the crucial issue of plasticity and learning – which is a form of ongoing adaptive selection for an ensemble of options

    Yes, agreed.


  25. DM,
    Computationalists believe that emotions are also fundamentally cognitive

    Yes, they would believe that, because it wishes out of the way what would otherwise be a most impenetrable problem.

    But you have a still bigger problem showing that emotions are ‘fundamentally cognitive’.


  26. Hi DM,

    This solution seems crazy to you, so what’s your solution?

    My solution is to regard a series of discrete states separated by time intervals as functionally sufficiently similar to a continuous timeline as to make no difference to consciousness.

    If a computational process can be construed as a series of frozen instants, there is therefore no instant where it is conscious.

    There can never be an "instant" when anything is "conscious", since consciousness is a process. It's a bit like asking whether there is an "instant" when a mountain is eroding. You could take a snapshot of it, notice that every soil and rock particle is stationary at that instant, and then maintain that the mountain is therefore not eroding at that instant. But if you compare the mountain over time you see that there is erosion, and if you compare the sequence of discrete states in our consciousness simulation then you see differences over time, and that is the process of consciousness.


  27. Hi labnut,

    This sequence will go on indefinitely and the computer will never discover the meaning. That is because the syntactical elements do not contain meaning and referring to further syntactical elements merely repeats the problem.

    You have not told me what you mean by the term “meaning”. As I understand the term — which I have outlined in a reply this morning to Aravis — your statement is not true. I think this is the basic problem here, we have not yet agreed on what we mean by “meaning”.


  28. The crucial thing to remember is that when your brain sees a model (syntax, for example) it reaches back into its vast store of meaning to associate the two, by retrieving a cloud of meaning which it assigns to the model.

    So meaning is really just association? That doesn’t sound quite so hard to accomplish.

    Computers do not store any meaning at all. They only store syntax. They only store symbols.

    But you would say the same of the physical brain if you were only analyzing it from the outside. You would essentially say that the brain is encoding things via connections and connection strengths.

    We can’t store meaning syntactically

    If you can’t store it syntactically/symbolically then how can you store it? Meaningtactically?


  29. If you stopped a mechanical form of a Turing machine for a thousand years, the rust would conclusively prove speed does affect the implementation of the algorithm. In the case of the human brain, the electric potentials and neurotransmitter gradients would "rust" even more quickly: after about ten minutes, the person would be irretrievably dead. I don't think it is meaningful to talk about possible brain algorithms being performed at a significantly different time scale. I think this is the sort of thing that Massimo Pigliucci means by biological naturalism, although I'm not so sure that it has the consequences he thinks.

    The speed issue in this case seems to me to be equivalent to complexity, which the brain achieves by massively parallel processing. If a ballplayer tries to imagine where the ball will strike the ground, he doesn't solve an equation of motion. But whatever his brain does do to predict the path of a baseball can be carried out by a computer program. The ballplayer, however, is doing rather more than just analyzing the trajectory of a projectile, of course. To model everything that is happening, including physical sensations, requires coordinating all those various strands of neural activity, which individually can be formulated as algorithms suitable for a Turing machine. But when they are all put together, the complexity of the information, the meaning, increases much more rapidly. No pixel is informative by itself, but the whole picture is meaningful. We get a kind of heap paradox: when does the complexity become so great that the whole becomes more important than the parts, which become indistinguishable? But I can't agree that finding a heap paradox means the project is impossible.
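    To be concrete about the trajectory point, here is the sort of explicit calculation a program might do instead (a minimal, drag-free kinematics sketch of my own; nothing like what the ballplayer's brain actually does):

    ```python
    import math

    def landing_distance(speed, angle_deg, height=1.0, g=9.81):
        """Horizontal distance (m) at which an ideal, drag-free projectile launched
        at `speed` m/s and `angle_deg` degrees from `height` m strikes the ground."""
        vx = speed * math.cos(math.radians(angle_deg))
        vy = speed * math.sin(math.radians(angle_deg))
        # Solve height + vy*t - 0.5*g*t^2 = 0 for the positive root.
        t = (vy + math.sqrt(vy ** 2 + 2 * g * height)) / g
        return vx * t

    print(round(landing_distance(speed=35.0, angle_deg=30.0), 1))  # roughly 110 m
    ```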

    It seems to me that the objection is that someone has to be watching the screen, so to speak. But the picture analogy is just an analogy. It correctly illustrates the notion that the many processes going on in the brain are put together. But the analogy breaks down if you think that there really is a movie screen. The ballplayer is playing the game. So far as self-awareness is concerned, I think it should be clear that the good ballplayer is more aware of the game than of themselves. This suggests to me that the issue of self-awareness is confused by other concerns. Also, more importantly, it seems to me that the issue is why the ballplayer wants to play. Emotions provide the motivation, but the visceral sensations that accompany so many emotions suggest to me a pretty straightforward biological origin. The processes creating them could be formulated as an algorithm. But absent an actual body to help provide the sensory inputs these algorithms would work on, the inputs would also have to be simulated. I think in computer engineering the rule is that software can substitute for hardware, no? In the human, it seems to me, the rule is the reverse: hardware substitutes for software.


  30. In regard to the idealistic world expounded by Plato, which is inhabited by ideas and numbers, I would like to comment on the following. I don't know if there are disembodied algorithms floating in the platonic space, some say yes and some say no, but there is a point that deserves a look. Provided that reality is perceived by a mixed system, sensorial as well as logical, that ends in a neuronal model built up by the human player, how do we know whether there are disembodied algorithms pondering themselves or, conversely, there are not? Why are there different models about? At least we see that the numbers and their logical laws are consistent with human logic.

    Do the numbers have an ontological status that makes them conscious? I have no idea. Are there disembodied algorithms contemplating themselves? I don't know. So I consider that my toehold for better understanding this subject is to call upon a skeptical point of view, which makes me think that the numbers coexist with words inside the brain; in that case the numbers have an ontological status through ourselves, if we consider ourselves ontological beings. In this case, if there is a universal grammar that generates words, then there would also be a universal grammar embedded in the brain that generates numbers and algorithms. Numbers and words are symbolic units that carry sense and intention and allow us to build up models that help us to survive on the material plane and understand our reality at the theoretical level.

    But if it isn't so, if there is an ontological realm aware of itself, full of numbers and algorithms that exist and contemplate themselves far from the human brain, then it seems that common sense barely allows such a realm. Once common sense takes over, those who think or see or believe that this realm does exist either (i) have an exotic insight similar to mystic visions or (ii) express a metaphysical belief in a meta-universal grammar that is inside but also outside the human brain.

    That said, the current definition of common sense could be different in the future, so the platonic idealistic hypothesis might be perceived by more persons, and they would not be considered exotic mystics but normal beings. To my mind, some aspects of this issue are still unresolved. In its time, the newfangled aspect of Einstein's physics was considered radical, yet Einstein himself considered quantum mechanics radical. This is, at least for some, a complex and still open question, as common sense might change in the future. I don't know how or when the right insight on this subject will be reached; perhaps it will remain evanescent, as it has until now.


  31. Hi Labnut,

    Your post excellently explains the intuitive objections to the view that semantics can arise from syntax. It's a very useful and clear addition to the debate, in my view. However, as a thought experiment, I think computationalists would regard the intuitions it evokes as misleading.

    The basic problem is that it conflates the idea of external representations such as language with the internal representation of a mind. I think I’ve had enough of the confusion with linguistic analogies for now, because language is not crucial to understanding (animals have understanding but no language, apart from perhaps body language in social animals).

    So let’s not take syntax and semantics too literally. The question you ought to be asking is how a formal structure within a mind (or computational process) can have meaning, and this is quite unlike the process of taking in some text as input and trying to extract meaning from it, although I can certainly discuss how I see that working in both humans and computers if that really interests you.

    And as I have said, the way that internal representations have meaning is how they relate to other internal representations. These do not have to have their semantics inferred or translated because they are already in the format the computer/mind understands and can manipulate directly. Where they refer to objects outside the mind, they do so by means of correspondence and causal connection.


  32. And you have the problem of showing that they cannot be, or that they can be the firing of neurons.

    I have reasons for believing that all mental phenomena are fundamentally mathematical/computational, and it doesn’t really matter which phenomena these are. I intend to write this full argument up as a submission. But anyway, even if I can’t show that, is there any reason to think that emotions cannot be computational? Your intuitions tell you they can’t. My intuitions tell me they can. Seems to me agnosticism might be called for.


  33. There are two types of computationalism:

    Nonphysical computationalism: Computation exists apart from physical existence, independent of a particular physical instantiation.
    Physical computationalism: Computation only exists in physical compositions, and the particular physical composition determines its abilities and behavior.

    DM, it seems like you are talking about the first type. The second type works out better, I think.


  34. Hi Coel,

    Again, I’m very much on board with your thinking, but I can see why it may be unsatisfactory to some.

    You could take a snapshot of it, notice that every soil and rock particle is stationary at that instant, and then maintain that therefore the mountain is not eroding at that instant.

    Erosion is perhaps not the best example, as it is to some extent discrete. We have no trouble saying that a cliff is not eroding right now as long as no matter is falling off it. Motion, such as that of a falling rock, might be a better example.

    If time is analogue and continuous, then at any moment a falling rock is moving, in the sense that it has a momentum. From the point of view of calculus, we can make sense of this as a change in position as we approach a limit of arbitrarily small delta-t. With discrete computation, by contrast, there are appreciable stretches of time in which no computational event occurs at all, so it is much more natural to see it as a sequence of moments without consciousness than it is to see a falling rock as a sequence of still lifes.
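    To spell out the calculus point (using nothing more than the standard definition of instantaneous velocity): the rock’s velocity at time t is the limit of its average velocity over ever smaller intervals,

    v(t) = \lim_{\Delta t \to 0} \frac{x(t + \Delta t) - x(t)}{\Delta t}

    so “moving at an instant” is well defined whenever time is continuous, whereas a discrete computation has a smallest tick below which nothing changes, and no analogous limit is available.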

    I want to explore a bit more some of the problems with reconciling discrete computation with continuous-seeming consciousness.

    So, if I have a conscious computer, and I hibernate it mid-thought and power it down, should I say that it is conscious or unconscious right now, or is this a malformed question? If it is malformed, then is it also malformed to ask whether a running computer is conscious right now?

    While frozen, that process is arguably still in existence; it is just temporarily stopped (as it is all the time really, especially mid-cycle). Now you can let it sit there for a thousand years. Is it still conscious? What about when the computer is destroyed?

    But what if we print out the contents of the computer’s memory before it is destroyed, and then scan in the printouts to recreate the process on another computer? Is the same consciousness transferred, or another identical one created? Could it really be that whether consciousness exists depends on whether some pages are printed?
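    As a toy illustration of the thought experiment (purely illustrative; the step function and the counter-plus-log “process” are invented here, with the state serialized as JSON standing in for the printed pages):

    ```python
    import json

    def step(state):
        """One discrete 'tick' of a toy computational process."""
        state["count"] += 1
        state["log"].append(f"thought {state['count']}")
        return state

    # Run the process for a few ticks, then 'hibernate' it mid-computation.
    state = {"count": 0, "log": []}
    for _ in range(3):
        state = step(state)

    printout = json.dumps(state)   # the 'printed pages': a full description of the paused state
    del state                      # the original machine is 'destroyed'

    # Later, 'scan the pages back in' on another machine and resume.
    restored = json.loads(printout)
    restored = step(restored)
    print(restored["log"])         # the process carries on exactly where it left off
    ```

    The printout is a complete description of the paused state, and the resumed copy is indistinguishable from the original at the level of the physical tokens, which is exactly what makes the identity question hard to settle.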

    I think all these problems go away if we are willing to adopt Platonism, but seeing consciousness as a real thing is very hard to reconcile with physicalism. I think the only two computationalist approaches that really succeed here are eliminating the concept of consciousness altogether (as in eliminative materialism) and my Platonic approach.


  35. Hi labnut,

    Thinking about this more, your example actually illustrates my definition of “meaning” very well. Up thread I defined meaning as:

    ““Meaning” and “understanding” are about linkages between items of information, and about the degree to which these linkages are available as inputs to an information-processing and/or decision-making process. Thus, the more that consequences and implications of some information are available to feed into other parts of the neural network, the more that neural network “understands” the “meaning” of that information.”

    Now, you say:

    To the computer, this is just a syntactical symbol, nothing else. … So, it does what you and I would do, consult a dictionary and Wikipedia. …

    So the computer looks up relations and linkages between symbols. In doing that it is finding “meaning” and “understanding”. Admittedly it is highly limited “meaning” and “understanding” because the linkages between symbols are so far very few.

    This sequence will go on indefinitely and the computer will never discover the meaning.

    Not true. As the sequence continues the computer makes more and more linkages, and thus develops the “meaning” and “understanding”.

    That is because the syntactical elements do not contain meaning …

    It is the linkages between syntactical elements that are the “meaning”.

    … and referring to further syntactical elements merely repeats the problem.

    Building up more and more linkages expands the “meaning” and “understanding” (as I have defined them).

    On the other hand, when tienzengong read that poem, this is in effect what he saw:

    What you mean is that tienzengong knows about vastly more linkages and relevancies of those symbols to all sorts of things. Therefore there is — under my definition — vastly more “understanding” and “meaning” when he reads the poem.

    You and I saw the same thing that tienzengong saw but it was only meaningful to him because his mind contained the associated meanings and ours did not.

    Or, better phrased, his mind contained a vastly bigger set of linkages relevant to those symbols. But, under my definition, the vastly poorer set of linkages that the computer had initially is still “meaning”, just vastly less “meaning” owing to the vastly fewer linkages. But, of course, from few to many is just a matter of degree. What tienzengong’s brain did during childhood was simply develop a web of linkages between those symbols and also with sense data. There is no difference in principle here.

    As I said, it seems to me that the basic problem is not starting with an actual definition of “meaning”. If anyone doesn’t like my definition, what is yours?
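    To make the “web of linkages” picture concrete, here is a minimal sketch (purely illustrative; the SymbolNetwork class and its toy reachability measure are invented here, not a model anyone in the thread has specified) in which a symbol’s “meaning” is nothing over and above its links to other symbols:

    ```python
    from collections import defaultdict

    class SymbolNetwork:
        """Toy model of 'meaning as linkages': a symbol's 'meaning' is
        just the web of other symbols it is linked to."""

        def __init__(self):
            self.links = defaultdict(set)

        def link(self, a, b):
            # Record an association between two symbols (in both directions).
            self.links[a].add(b)
            self.links[b].add(a)

        def degree_of_meaning(self, symbol, depth=2):
            # Count the symbols reachable within `depth` hops; on this toy
            # measure, more reachable linkages means more 'meaning'.
            seen = {symbol}
            frontier = {symbol}
            for _ in range(depth):
                frontier = {n for s in frontier for n in self.links[s]} - seen
                seen |= frontier
            return len(seen) - 1

    net = SymbolNetwork()
    net.link("dog", "animal")
    net.link("dog", "bark")
    net.link("animal", "living thing")
    print(net.degree_of_meaning("dog"))  # 3; grows as more linkages are added
    ```

    On this toy measure, a computer that has consulted only a dictionary entry or two has a little “meaning”, while a reader with a lifetime of linkages to other symbols and to sense data has vastly more, which is the difference of degree being described.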


  36. DM wrote: “You always seem to lose interest before I can make much headway in explaining my view.”

    —-

    DM, this discussion thread is now 342 entries long, and many of them are yours. If you haven’t made headway in explaining your views by now, it ain’t gonna’ happen. At least, not in this discussion thread. Perhaps a different approach is necessary?


  37. Hi Philip,

    I think physical computationalism is OK, but as explained in conversation with Coel I think it runs into problems when it tries to treat consciousness as a real thing while also trying to tie it to physical instantiations. I’m not sure you can have both. But eliminative materialism which denies the existence of subjective mental entities is reasonably tenable in my view. I prefer to say that consciousness exists, so I adopt a Platonist approach.


  38. Hi Coel, I agree with your account, but I’m not convinced that an iterative reading of textual symbols like this could lead to a full understanding no matter how long it went on. I think it might need a richer kind of association between symbols than mere proximity or correlation. There seems to be limited ability to represent the roles or kinds of connections between concepts in text, whereas this is not so much of a problem in a mathematical structure or computer program.

    But I guess I’m open to the possibility that some limited understanding might be achievable by analysing and absorbing the entirety of the Chinese Wikipedia and/or Internet.
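    To make “mere proximity or correlation” concrete, here is a purely illustrative sketch (the cooccurrence function and two-sentence corpus are invented here, not something anyone in the thread has built) of the kind of association extractable from text alone, namely counts of which symbols turn up together:

    ```python
    from collections import Counter
    from itertools import combinations

    def cooccurrence(sentences):
        """Count how often pairs of words appear in the same sentence,
        i.e. association by proximity/correlation only."""
        counts = Counter()
        for sentence in sentences:
            words = sorted(set(sentence.split()))
            for a, b in combinations(words, 2):
                counts[(a, b)] += 1
        return counts

    corpus = ["the cat chased the mouse", "the mouse ate the cheese"]
    print(cooccurrence(corpus).most_common(3))
    ```

    A table like this records that “cat” and “mouse” occur together, but nothing about who chased whom; the roles and kinds of connection between concepts are exactly what such statistics leave out.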


  39. Hi DM,

    I think it might need a richer kind of association between symbols than mere proximity or correlation.

    I agree. The term “linkages” was supposed to cover a multitude of ways that symbols could be associated in complex neural networks.


  40. Hi Coel,

    The term “linkages” was supposed to cover a multitude of ways that symbols could be associated in complex neural networks.

    Don’t get me wrong, that was actually perfectly clear to me in the context of a neural network. But Labnut’s example is about reading in text, so you also seem to be saying that a machine could learn to understand everything by reading in definitions of definitions ad infinitum all written in unfamiliar symbols. I’m just saying that I don’t think these rich types of associations can be so easily reconstructed in this way.


  41. Coel,
    quite frankly, I regard your call for a definition of meaning as a bit of a dodge. You do that knowing that a formal definition is very difficult, and thinking that this therefore disqualifies my argument. But it doesn’t, because the common, everyday intuitive sense of meaning is perfectly adequate for my argument. I am using the word in the intuitive sense that most people use, and that is good enough for my purposes.

    A large part of the problem is that the process of meaning-making in our brains is not understood. It happens, and we recognise it happening with great ease; it is completely intuitive. But the fact that we don’t understand how meaning-making happens in our brains is absolutely fatal to your cause. How can you possibly construct something when you don’t understand what it is that you are trying to construct? You are arguing that something is possible when you cannot even begin to show how it is possible. Some more modesty in your claims is called for.

    This is why people like you have to resort to the camouflage of the Turing Test. It is a smokescreen intended to hide the fact that you don’t know what you are doing.

    And if you deny that, all you have to do is show me how a machine constructs meaning. It will be a world first, worthy of a Nobel Prize.


    Note that I am not being a mysterian and claiming that there is some deep, mysterious thing that can’t be replicated. The truth is we simply don’t know. What I am claiming is that syntactical machines, such as our present computers, manipulate syntactical objects that possess zero meaning. No matter how you process and link objects that have zero meaning, you will still end up with a result that contains zero meaning. The logic is inescapable.


Comments are closed.