The danger of artificial stupidity

by J. Mark Bishop

It is not often that you are obliged to proclaim a much-loved international genius wrong, but in the alarming prediction made recently regarding Artificial Intelligence and the future of humankind, I believe Professor Stephen Hawking is. Well, to be precise, being a theoretical physicist — in an echo of Schrödinger’s cat, famously both dead and alive at the same time — I believe the Professor is both wrong and right at the same time.

Wrong because there are strong grounds for believing that computers will never be able to replicate all human cognitive faculties and right because even such emasculated machines may still pose a threat to humanity’s future existence; an existential threat, so to speak.

In an interview on December 2, 2014, Rory Cellan-Jones asked how far engineers had come along the path towards creating artificial intelligence, and, slightly worryingly, Professor Hawking replied: “Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Although grabbing headlines, such predictions are not new in the world of science and science fiction; indeed, my old boss at the University of Reading, Professor Kevin Warwick, made a very similar prediction back in 1997 in his book “March of the Machines.” In that book Kevin observed that even in 1997 there were already robots with the “brain power of an insect”; soon, he predicted, there would be robots with the brain power of a cat, and soon after that there would be machines as intelligent as humans. When this happens, Warwick claimed, the science fiction nightmare of a “Terminator” machine could quickly become reality, because these robots will rapidly become more intelligent than, and superior in their practical skills to, the humans that designed and constructed them.

The notion of humankind subjugated by evil machines is based on the ideology that all aspects of human mentality will eventually be instantiated by an artificial intelligence program running on a suitable computer, a so-called “Strong AI” [1]. Of course if this is possible, accelerating progress in AI technologies — caused both by the use of AI systems to design ever more sophisticated AIs and the continued doubling of raw computational power every two years as predicted by Moore’s law — will eventually cause a runaway effect wherein the artificial intelligence will inexorably come to exceed human performance on all tasks: the so-called point of “singularity” first popularized by the American futurologist Ray Kurzweil.
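To make the compounding assumption concrete, here is a back-of-the-envelope sketch (illustrative only; the two-year doubling period is simply Moore’s law taken at face value, not a claim about real hardware trends):

```python
# Illustrative only: if raw capability doubles every two years, the multiplier
# after n years is 2 ** (n / 2). This is the compounding behind "runaway" worries.
def capability_multiplier(years: float, doubling_period_years: float = 2.0) -> float:
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 40):
    print(f"after {years} years: x{capability_multiplier(years):,.0f}")
# after 10 years: x32
# after 20 years: x1,024
# after 40 years: x1,048,576
```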

And at the point this “singularity” occurs, so Warwick, Kurzweil and Hawking suggest, humanity will have effectively been “superseded” on the evolutionary ladder and may be obliged to eke out its autumn days gardening and watching cricket; or, in some of Hollywood’s more dystopian visions, be cruelly subjugated or exterminated by machine.

I did not endorse these concerns in 1997 and do not do so now; although I do share — for very different and mundane reasons that I will outline later — the concern that artificial intelligence potentially poses a serious risk to humanity.

There are many reasons why I am skeptical of grand claims made for future computational artificial intelligence, not least empirical. The history of the subject is littered with researchers who have claimed a breakthrough in AI as a result of their research, only for it later to be judged harshly against the weight of society’s expectations. All too often these provide examples of what Hubert Dreyfus calls “the first step fallacy” — undoubtedly climbing a tree takes a monkey a little nearer the moon, but tree climbing will never deliver a would-be simian astronaut onto its lunar surface.

I believe three foundational problems explain why computational AI has failed historically and will continue to fail to deliver on its “Grand Challenge” of replicating human mentality in all its raw and electro-chemical glory:

1) Computers lack genuine understanding: in the “Chinese room argument” the philosopher John Searle (1980) argued that even if it were possible to program a computer to communicate perfectly with a human interlocutor (Searle famously described the situation by conceiving a computer interaction in Chinese, a language he is utterly ignorant of) it would not genuinely understand anything of the interaction (cf. a small child laughing on cue at a joke she doesn’t understand) [2].

2) Computers lack consciousness: in an argument entitled “Dancing with Pixies” I argued that if a computer-controlled robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be present in all objects throughout the universe: in the cup of tea I am drinking as I type; in the seat that I am sitting on as I write; and so on. If we reject such “panpsychism,” we must then reject “machine consciousness” [3].

3) Computers lack mathematical insight: in his book The Emperor’s New Mind, the Oxford mathematical physicist Sir Roger Penrose deployed Gödel’s first incompleteness theorem to argue that, in general, the way mathematicians provide their “unassailable demonstrations” of the truth of certain mathematical assertions is fundamentally non-algorithmic and non-computational [4].

Taken together, these three arguments fatally undermine the notion that the human mind can be completely instantiated by mere computations; if correct, although computers will undoubtedly get better and better at many particular tasks — say playing chess, driving a car, predicting the weather etc. — there will always remain broader aspects of human mentality that future AI systems will not match. Under this conception there is a “humanity-gap” between the human mind and mere “digital computations”; although raw computer power — and concomitant AI software — will continue to improve, the combination of a human mind working alongside a future AI will continue to be more powerful than that future AI system operating on its own. The singularity will never be televised.

Furthermore, it seems to me that without understanding and consciousness of the world, and lacking genuine creative (mathematical) insight, any apparently goal directed behavior in a computer-controlled robot is, at best, merely the reflection of a deep rooted longing in its designer. Besides, lacking an ability to formulate its own goals, on what basis would a robot set out to subjugate mankind unless, of course, it was explicitly programmed to do so by its (human) engineer? But in that case our underlying apprehension regarding future AI might better reflect the all too real concerns surrounding Autonomous Weapons Systems than casually re-indulge Hollywood’s vision of the post-human “Terminator” machine.

Indeed, in my role as one of the AI experts on the International Committee for Robot Arms Control (ICRAC), I am particularly concerned by the potential military deployment of robotic weapons systems — systems that can take decisions to militarily engage without human intervention — precisely because current AI is still very lacking and because of the underlying potential of poorly designed interacting autonomous systems to rapidly escalate situations to catastrophic conclusions; such systems exhibit a genuine “artificial stupidity.”

A light-hearted example demonstrating just how easily autonomous systems can rapidly escalate situations out of control occurred in April 2011, when Peter Lawrence’s book The Making of a Fly was auto-priced upwards by two “trader-bots” competing against each other in the Amazon reseller market-place. The result of this process is that Lawrence can now comfortably boast that his modest scholarly tract — first published in 1992 and currently out of print — was once valued by one of the biggest and most respected companies on Earth at $23,698,655.93 (plus $3.99 shipping).
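A minimal simulation sketch of that pricing feedback loop (the multipliers and starting price are illustrative assumptions, loosely in the spirit of figures reported at the time, not Amazon’s actual data):

```python
# Sketch of two re-pricing bots locked in a feedback loop: one bot slightly undercuts
# its rival, the other prices at a fixed premium over the first. The multipliers and
# starting price are assumed values for illustration.
price_a = price_b = 20.00                # hypothetical starting prices in dollars

for day in range(1, 61):
    price_a = 0.9983 * price_b           # bot A: just under bot B's price
    price_b = 1.2706 * price_a           # bot B: a fixed mark-up over bot A's price
    if day % 10 == 0:
        print(f"day {day}: A=${price_a:,.2f}  B=${price_b:,.2f}")

# Because 0.9983 * 1.2706 > 1, both prices grow geometrically; within a couple of
# months of iterations the book costs tens of millions of dollars, and neither bot
# has any sense that anything has gone wrong.
```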

In stark contrast, on September 26th, 1983, during the period of Cold War tension that would culminate weeks later in the NATO military exercise “Able Archer,” a terrifying real-world example of “automatic escalation” nearly ended in disaster when an automatic Soviet military surveillance system all but instigated World War III. At the height of what the USSR perceived to be an intimidating US military posture in Europe, a malfunctioning Soviet early-warning system alerted a Soviet colonel that the USSR was apparently under attack by multiple US ballistic missiles. Fortunately, the colonel had a hunch that his alarm system was malfunctioning, and reported it as such. Some commentators have suggested that the colonel’s quick and correct human decision to over-rule the automatic response system averted East-West nuclear Armageddon.

In addition to the danger of autonomous escalation, I am skeptical that current and foreseeable AI technology can enable autonomous weapons systems to reliably comply with extant obligations under International Humanitarian Law; specifically three core obligations: (i) to distinguish combatants from non-combatants; (ii) to make nuanced decisions regarding proportionate responses to a complex military situation; and (iii) to arbitrate on military or moral necessity (regarding when to apply force).

Sadly, it is all too easy to concur that AI may pose a very real “existential threat” to humanity without ever having to imagine that it will reach the level of superhuman intelligence that Professors Warwick and Hawking so graphically warn us of. For this reason, in May 2014, members of the International Committee for Robot Arms Control travelled to Geneva to participate in the first multilateral meeting ever held on Lethal Autonomous Weapons Systems (LAWS); a debate that continues to this day at the very highest levels of the UN. In a firm, but refracted, echo of Warwick and Hawking on AI, I believe we should all be very concerned.

___

Mark Bishop is Professor of Cognitive Computing at Goldsmiths, University of London, was Chair of the UK Society for Artificial Intelligence and the Simulation of Behaviour (2010-2014), and currently serves on the International Committee for Robot Arms Control.

[1] Strong AI takes seriously the idea that one day machines will be built that can think, be conscious, have genuine understanding and other cognitive states in virtue of their execution of a particular program; in contrast, weak AI does not aim beyond engineering the mere simulation of (human) intelligent behavior.

[2] Searle illustrates the point by demonstrating how he could follow the instructions of the program (in computing parlance, we would say Searle is “dry running” the program), carefully manipulating the squiggles and squoggles of the (to him) meaningless Chinese ideographs as instructed by the program, without ever understanding a word of the Chinese responses the process is methodically cranking out. The essence of the Chinese room argument is that syntax — the mere mechanical manipulation (as if by computer) of uninterpreted symbols — is not sufficient for semantics (meaning) to emerge; in this way Searle asserts that no mere computational process can ever bring forth genuine understanding and hence that computation must ultimately fail to fully instantiate mind. See Preston & Bishop (2002) for extended discussion of the Chinese room argument by twenty well-known cognitive scientists and philosophers.
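To make the “uninterpreted symbol manipulation” point vivid, here is a deliberately crude sketch; the rule entries are hypothetical placeholders, and nothing in Searle’s argument depends on the program being a simple lookup table:

```python
# A toy "rule book": symbol strings in, symbol strings out. The operator (or CPU) only
# matches shapes and copies out the paired shapes; no step consults what anything means.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def chinese_room(symbols_in: str) -> str:
    # Pure syntax: look up the squiggles, return the paired squoggles.
    return RULE_BOOK.get(symbols_in, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # fluent-looking output; zero understanding inside
```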

[3] The underlying thread of the “Dancing with Pixies” reductio (Bishop 2002, 2005, 2009a, 2009b) derives from positions originally espoused by Hilary Putnam (1988), Tim Maudlin (1989), and John Searle (1990), with subsequent criticism from David Chalmers (1996), Colin Klein (2004), and Ron Chrisley (2006), amongst others (Various Authors 1994). In the DwP reductio, instead of seeking to secure Putnam’s claim that “every open system implements every Finite State Automaton” (FSA) and hence that “psychological states of the brain cannot be functional states of a computer,” I establish the weaker result that, over a finite time window, every open physical system implements the execution trace of a Finite State Automaton Q on a given input vector (I). That this result leads to panpsychism is clear as, equating FSA Q(I) to a finite computational system that is claimed to instantiate phenomenal states as it executes, and employing Putnam’s state-mapping procedure to map a series of computational states to any arbitrary non-cyclic sequence of states, we discover identical computational (and ex hypothesi phenomenal) states lurking in any open physical system (e.g., a rock); little pixies (raw conscious experiences) “dancing” everywhere. Boldly speaking, DwP is a simple reductio ad absurdum argument to demonstrate that: IF the assumed claim is true (that an appropriately programmed computer really does instantiate genuine phenomenal states) THEN panpsychism is true. However if, against the backdrop of our current scientific knowledge of the closed physical world and the corresponding widespread desire to explain everything ultimately in physical terms, we are led to reject panpsychism, then the DwP reductio proves that computational processes cannot instantiate phenomenal consciousness.
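A toy sketch of the Putnam-style state-mapping that drives the DwP reductio (the state names are invented; the point is only the shape of the construction, not any causal claim):

```python
# Over a finite window, take the execution trace of FSA Q on a fixed input vector I,
# and pair each trace state with whichever distinct physical state the "rock" happens
# to occupy at that tick. Under this labelling the rock "implements" the trace of Q(I).
fsa_trace = ["q0", "q3", "q1", "q1", "q2"]                 # trace of Q on input I (assumed)
rock_states = ["r_t0", "r_t1", "r_t2", "r_t3", "r_t4"]     # arbitrary non-cyclic physical states

state_mapping = dict(zip(rock_states, fsa_trace))
print(state_mapping)  # {'r_t0': 'q0', 'r_t1': 'q3', 'r_t2': 'q1', 'r_t3': 'q1', 'r_t4': 'q2'}
# If executing Q(I) sufficed for phenomenal states, the relabelled rock would have them too.
```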

[4] Gödel’s first incompleteness theorem states that “… any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory F that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.” The resulting true but unprovable statement G(ǧ) is often referred to as “the Gödel sentence” for the theory (albeit there are infinitely many other statements in the theory that share with the Gödel sentence the property of being true but not provable from the theory). Arguments based on Gödel’s first incompleteness theorem — initially from John Lucas (1961, 1968), first criticized by Paul Benacerraf (1967), and subsequently extended, developed and widely popularized by Roger Penrose (1989, 1994, 1996, 1997) — typically endeavor to show that for any formal system F, humans can find the Gödel sentence G(ǧ) whilst the computation/machine (being itself bound by F) cannot. Penrose developed a subtle reformulation of the vanilla argument that purports to show that “the human mathematician can ‘see’ that the Gödel Sentence is true for consistent F even though the consistent F cannot prove G(ǧ).” A detailed discussion of Penrose’s formulation of the Gödelian argument is outside the scope of this article (for a critical introduction see Chalmers 1995; response in Penrose 1996). Here it is simply important to note that although Gödelian-style arguments purporting to show “computations are not necessary for cognition” have been extensively and vociferously critiqued in the literature (see Various Authors 1995 for a review), interest in them — both positive and negative — still regularly continues to surface (e.g., Bringsjord & Xiao 2000; Tassinari & D’Ottaviano 2007).
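For readers who want the construction rather than the gloss, the standard route to the Gödel sentence runs through the diagonal lemma; in outline (for a consistent, effectively axiomatized F extending elementary arithmetic, with arithmetized provability predicate Prov_F):

```latex
% The diagonal lemma yields a sentence G that "says of itself" that it is unprovable in F:
\[
  F \vdash \; G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G \urcorner\right)
\]
% If F is consistent, then F does not prove G; if F is omega-consistent (or, using
% Rosser's refinement, merely consistent), F does not prove \neg G either.
% Hence G is true (it correctly asserts its own unprovability in F) yet unprovable in F.
```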

References

Benacerraf, P. (1967) God, the Devil & Gödel. Monist 51: 9-32.

Bishop, J.M. (2002) Dancing with Pixies: strong artificial intelligence and panpsychism. In: Preston, J. & Bishop, J.M. (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford.

Bishop, J.M. (2005) Can computers feel?, The AISB Quarterly (199): 6, The Society for the Study of Artificial Intelligence and the Simulation of Behaviour (AISB), UK.

Bishop, J.M., (2009a) Why Computers Can’t Feel Pain. Minds and Machines 19(4): 507-516.

Bishop, J.M., (2009b) A Cognitive Computation fallacy? Cognition, computations and panpsychism. Cognitive Computation 1(3): 221-233.

Bringsjord, S., Xiao, H. (2000) A refutation of Penrose’s Gödelian case against artificial intelligence. J. Exp. Theoret. AI 12: 307-329.

Chalmers, D.J. (1995) Minds, Machines And Mathematics: a review of ‘Shadows of the Mind’ by Roger Penrose. Psyche 2(9).

Chalmers, D.J. (1996) The Conscious Mind: In Search of a Fundamental Theory, Oxford: Oxford University Press.

Chrisley R. (2006) Counterfactual computational vehicles of consciousness. Toward a Science of Consciousness April 4-8 2006, Tucson Convention Center, Tucson Arizona USA.

Klein, C. (2004) Maudlin on Computation (working paper).

Lucas, J.R. (1961) Minds, Machines and Gödel. Philosophy 36: 112-127.

Lucas, J.R. (1968) Satan Stultified: A Rejoinder to Paul Benacerraf. Monist 52: 145-158.

Maudlin, T. (1989) Computation and Consciousness. Journal of Philosophy (86): 407-432.

Penrose, R. (1989) The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press.

Penrose, R. (1994) Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford: Oxford University Press.

Penrose, R. (1996) Beyond the Doubting of a Shadow: a reply to commentaries on ‘Shadows of the Mind’. Psyche 2(23).

Penrose, R. (1997) On Understanding Understanding. International Studies in the Philosophy of Science 11(1): 7-20.

Putnam, H. (1988), Representation and Reality. Cambridge MA: Bradford Books.

Preston, J. & Bishop, M. (eds) (2002) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford & New York: Oxford University Press.

Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain Sciences 3(3): 417-457.

Searle, J. (1990) Is the Brain a Digital Computer? Proceedings of the American Philosophical Association (64): 21-37.

Tassinari, R.P., D’Ottaviano, I.M.L. (2007) Cogito ergo sum non machina! About Gödel’s first incompleteness theorem and Turing machines. CLE e-Prints 7(3).

Various Authors (1994) Minds and Machines 4(4), Special Issue: What is Computation?, November.

Various Authors (1995) Psyche, Symposium on Roger Penrose’s Shadows of the Mind. Psyche 2.

103 thoughts on “The danger of artificial stupidity”

  1. Hi Robin,

    I must confess I don’t understand either Coel’s nor Aravis’ definition of “understand” …

    My stance is that “understanding” is a rather simple and prosaic matter of linkages between pieces of information. In the Chinese Room “the system” knows how to manipulate (= change linkages between) symbols within the Chinese language. That is (by definition) “understanding” of the relationship between symbols in the Chinese language.

    Now stick on some video cameras and image-recognition software, and program in linkages between those Chinese symbols and outside-world objects and events. Then the “understanding” is far greater because the web of linkages is then far more extensive.

    Similarly, if one programmed in linkages between the Chinese symbols and English-language symbols, then the understanding is again much greater, again because the set of linkages that the system then “knows about” is much more extensive.
    That’s all there is to it.

    In your case of noughts and crosses, the game is so simple that the set of linkages is very limited, and thus the “understanding” is very limited.

    In truth all it has done is to index a list with the current board state and ANDed the current board state with the returned number. It does not “know” what the conceptual meaning of that operation is.
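    For concreteness, one hypothetical way such a lookup-and-mask noughts-and-crosses player might be implemented (a guess at the sort of mechanism being described, not the actual program under discussion):

    ```python
    # Hypothetical sketch: the board is a 9-bit mask per player, and a precomputed table
    # maps the combined position to a bitmask of suggested squares. Nothing here "knows"
    # it is playing a game; it indexes a table and masks bits.
    MOVE_TABLE = {
        (0b000000000, 0b000000000): 0b000010000,   # empty board -> suggest the centre
        (0b000010000, 0b000000001): 0b100000000,   # made-up entry: reply in a corner
        # ... a full program would have one entry per reachable position ...
    }

    def choose_move(my_squares: int, their_squares: int) -> int:
        occupied = my_squares | their_squares
        suggestion = MOVE_TABLE.get((my_squares, their_squares), 0b111111111)
        legal = suggestion & ~occupied & 0b111111111   # AND away the occupied squares
        return legal & -legal                          # lowest set bit = square to play

    print(bin(choose_move(0b000000000, 0b000000000)))  # 0b10000, i.e. the centre square
    ```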

    If “understanding” is an emergent phenomenon then it will always and inevitably be the case that one can analyse the system at the “low level”, and in that low-level description there will be no “understanding”.

    In the same way, you could take a rabbit, zoom in to the atomic level, and declare that all that is happening is physical particles obeying physical laws. You could continue that the individual molecules are not “alive” and are not themselves doing “living”, and thus declare that something more is needed to explain the “aliveness” of the rabbit. You would then become a vitalist!

    That is essentially what Searle is doing when declaring that The Room is not doing “understanding”, it’s the same as declaring that the rabbit is not doing “living” because none of the individual molecules are doing “living”.

    Any naturalistic and thus Darwinian account of “understanding” has to be gradualistic and emergent, and thus it has to be the case that one can explain “understanding” in terms of interactions of elements that do not, themselves, “understand”.


  2. Hi Marko,

Gödel’s theorems are constructive in the sense of being entirely formal without relying at any stage on any informal intuition. They show that there must exist Gödel sentences for any formal system T meeting certain criteria. They are not, however, constructive in the sense of providing a step-by-step procedure for creating Gödel sentences for an arbitrary formal system. They cannot be, because if there were such a procedure, a computer could construct them, which would contradict the theorem. Without such a procedure, there is no proof that humans can always construct such sentences for any formal system T.

    > “This statement is not provable within theory T”

    This is not a Gödel sentence. It is an informal English analogue of a Gödel sentence. A Gödel sentence is expressed in the syntax of theory T.

    Besides, if we’re allowing such informal sentences, I counter with this variation of the Whiteley sentence.

    “This statement is not provable by human Marko Vojinovic”

    I can see this is true. Everybody else can see it is true. By definition, you alone cannot. Does that mean I am superior to you? No, of course not. It just shows that humans are no exception to Gödel’s insight.

    > Wikipedia even maintains a list of ZFC-undecidable statements here.

ZFC-undecidable statements are not in general Gödel sentences, precisely because we can’t prove if they are true or not. A Gödel sentence would be unprovable within ZFC but nevertheless provably true of ZFC, with the proof using tools outside of ZFC.

    So, again, you have not provided an example of a Gödel sentence.

    On syntax vs semantics,

    A symbol has semantics when it is involved in some kind of communication. The speaker intends the meaning. The listener infers the meaning. Without such communication, semantics talk is questionable.

    But the symbols of a mind are not produced by a speaker or consumed by a listener. The only mind is the one in dispute, and it is not using symbols for communication but is instead actually constituted of symbols and their functional roles. It is therefore a category mistake to talk of syntax and semantics, because no communication is taking place.

    When we talk about someone manipulating symbols in the Chinese room, we have the intuition that this person cannot infer the meaning of the symbols, and so suppose that there cannot be inherent meaning at all. The problem is that this person is external to the symbols and is not constituted of those symbols. The meanings of the functional states of a mind cannot (easily) be inferred by an external observer, they are inherent to the mind itself and can only be considered from the perspective of that mind. The idea is that this meaning arises from the functional roles played by those states and their causal relations to other states and objects in the world, and the fact that this is opaque to an external observer has no bearing on the argument.


  3. While I am bored to death of this topic, given how many times we’ve discussed it, since so many posts are being directed towards me and some of them completely misrepresent what I’ve said, here goes:

    1. Alex SL wrote: “Regarding Aravis’ assumption that the whole issue is decided by Searle.”

    I never said that and don’t think that. Next.

    “But how does one recognise genuine understanding if merely behaving exactly like a competent human (Turing Test) isn’t enough? Is it genuine only when a human displays the exact same behaviour? But that is then just begging the question.”

Actually, it’s the Turing Test and those who appeal to it to justify Strong AI that beg the question. Massimo patiently went through this, in two dialogues with AI enthusiasts (one of them, alas, is the awful Eliezer Yudkowsky):

    http://bloggingheads.tv/videos/2561
    http://bloggingheads.tv/videos/2483

    2. Coel wrote: “Its “sense” or “content” is the set of linkages it has to other symbols, to other pieces of information.”

    The most widely held view in the philosophy of language is that the content of a term or expression consists of its reference. Some philosophers, like Frege and more recently, Katz, think that in addition to reference, terms and expressions have an additional semantic value — sense — which, roughly, consists of a description of the referent. In either case, meaning describes a word-world relation, not a word-word relation.

    The view you have described is known in the literature as conceptual role semantics and was championed chiefly by Ned Block. It is a view with difficulties too many to count (you can see the relevant Encyclopedia entries), but its main problem is that it cannot make sense either of truth or intentionality, so as far as I am concerned, it is a loser.

    Regarding your earlier remarks on syntax and semantics, it is nothing but a word jumble. These terms are very clearly understood and belong to a real science, known as Linguistics. The relevant definitions are easy to find and bear no resemblance to what you have said here or in previous discussions.

    3. Robin Herbert wrote: “I must confess I don’t understand either Coel’s nor Aravis’ definition of “understand” and yet somehow I understand what I mean when I say I don’t understand them. Aravis seems to have defined it in terms of another word we use for “understand” (grasp).”

    Now *this* is a really good question/challenge. By “grasp” is typically meant something like “mentally represent,” and while many philosophers are happy to leave it at that, I — given my Wittgensteinian leanings on so many things — am not. I invoked it here primarily to provide a common, standard account of understanding, so as to focus on the Chinese Room question — and to counter Coel’s assertion that no one has such an account and thus, cannot declare its absence. Of course, Wittgenstein’s view on mental representation helps Coel and the Strong AI crowd even less, so….

    4. Asher, with regard to your examples, once your descriptions of the machine’s activity become intentional, I would argue that we are unwarranted in ascribing them. That’s really the point of the Chinese Room and other arguments like it — Ned Block came up with a bunch of different “non standard realizations” in his paper, “Troubles with Functionalism.”

    There also, of course, is an equally difficult — perhaps even more difficult — problem with ascribing sensations to a machine, which we haven’t even touched on.


  4. A very interesting article, and I agree that there’s plenty to worry about short of super-AIs, but I’d like to suggest that it misses the point when it introduces Searle’s and others’ arguments against computational consciousness as a reason to dismiss the threat of AI.

Interesting though that discussion is, I’d say it’s beside the point. Why does it matter for this purpose whether the AI is conscious? A superintelligent AI would not have to be conscious to threaten our existence; it would only have to have distinct goals and the capacity to pursue them. It could be a technological zombie, only behaving as if it was conscious, and still destroy us or enslave us.

In humans, our evolved consciousness seems to play an important role in allowing us to form and pursue goals, manipulate others and so on. However, what reason do we have to assume that consciousness would be a necessary aspect of a superhuman AI’s ability to do so?

I’m surprised nobody has mentioned Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies, perhaps the best recent discussion of the issue. While Bostrom doesn’t accept Searle’s argument, as evident from his simulation hypothesis, it makes no difference for his case.

It would probably have some goal or goals of its own, or perhaps goals hardwired when it was constructed. Bostrom gives a simplified example that stands in for any goal: make as many paperclips as possible. It might transform the earth into a paperclip factory; and although it might have the power to change its own goal, it would evaluate any such change in the light of whether it would improve its capacity to make paperclips. And since it would have a sophisticated model of human psychology, it would realise that we might want to switch it off, and take steps to prevent that, perhaps by eliminating or controlling humans.

“Almost any goal we might specify, if consistently pursued by a superintelligence, would result in the destruction of everything we care about.” (Bostrom)
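    A toy sketch of the goal-preservation logic Bostrom describes (purely illustrative; the numbers are made-up stand-ins for the agent’s world model, not anything from the book):

    ```python
    # Why a goal-directed optimizer resists having its goal changed: every option,
    # including self-modification or shutdown, is scored by the CURRENT goal.
    def expected_paperclips(option: str) -> float:
        # Hypothetical valuations standing in for the agent's world model.
        return {"keep_goal": 1e9, "adopt_human_friendly_goal": 1e3, "allow_shutdown": 0.0}[option]

    def agent_decides(options):
        return max(options, key=expected_paperclips)

    print(agent_decides(["keep_goal", "adopt_human_friendly_goal", "allow_shutdown"]))
    # -> keep_goal: changing the goal or being switched off scores badly under the current goal.
    ```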

Couldn’t we program the AI to be nice? Even trying to specify in advance goals for a super-AI that are beneficial to humans might fail. Whether we could engineer a goal system that can do this is a big outstanding challenge. The key point is that we must learn how we could control such an AI before we build it – afterwards would be too late!

    Note that the important goal here is something like “general intelligence” in humans, rather than having consciousness as its central goal. Most of Bostrom’s arguments still apply to the case of superintelligent AIs that only present the appearance of consciousness.

I do think that Searle and others have worthwhile arguments against computationalist cognitive science (minds are to brains as programs are to computers). You may disagree. But that’s a completely different discussion, and should be set aside when we try to determine whether AI could threaten our existence.


  5. I’m going to take another shot at pointing out the logical disconnect here.

    In the beginning of this essay Bishop quotes Hawking as saying, “Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

    But two paragraphs later Bishop writes, “The notion of humankind subjugated by evil machines is based on the ideology that all aspects of human mentality will eventually be instantiated by an artificial intelligence program running on a suitable computer, a so-called “Strong AI””.

    That ideology is not present in Hawking’s remark. Superseding of humans by machines does not imply instantiating all aspects of human mentality, any more than superseding of dinosaurs by mammals implies that mammals instantiated all aspects of dinosaur mentality. Hawking does not make any assertion about strong AI, and shouldn’t be criticized on that basis.

    I’ll go on to say that I’m unhappy to see somebody who is so naive about the possible development of machines serving as an advisor to the military. This is an area where paranoia is preferable to a Pollyanna principle.


  6. Phil H and Coel;

It’s not Searle’s arguments that are the zombies, but the many often-repeated and often-corrected misstatements of his views:

First, he doesn’t think that only “wetware” could be conscious. That’s a misrepresentation by his opponents of his distinction between, on the one hand, the (as yet ill-understood) physical/chemical/biological processes that we know are sufficient in the only indisputable examples of consciousness; and, on the other hand, computation as usually understood, which Searle argues is an observer-relative aspect of reality, and so can’t be the fundamental ground for consciousness.

He believes that consciousness might be implemented artificially on a range of physical bases; but only on those which provide the same sort of causal powers as those instantiated in human brains. These causal powers are ultimately physical and observer-independent, while computation is observer-relative, constituted, like money, political power and languages, by social consensus between people in societies. Computation, by its nature, is qualitatively the wrong kind of thing to underpin consciousness.

Second, and following from this: Searle’s arguments are not just the Chinese Room. As Searle puts it in The Rediscovery of the Mind:

    “This is a different argument from the Chinese room argument, and I should have seen it ten years ago, but I did not. The Chinese room argument showed that semantics is not intrinsic to syntax. I am now making the separate and different point that syntax is not intrinsic to physics. “ (p. 210)

    It’s a sign of the weakness of much criticism that it only addresses or seems to be aware of the Chinese Room, ignores everything else he’s said, and then sometimes accuses him of being a one-hit wonder!

    Thirdly, given the above, Searle believes that the right arena to research the basis of consciousness is biology, not computationalist cognitive science, albeit informed both by philosophy of mind and by the use of computational methods and models (weak AI).

    I think his arguments are certainly strong enough to give us pause, to question the assumptions and perhaps the equivocating understanding of terms like computation and information that underpin computationalist theories of consciousness.

Still, I think this is irrelevant to the main question, the dangers of AI. Super-AIs need not necessarily be conscious to surpass our abilities, nor to be dangerous to us.


  7. Asher, with regard to your examples, once your descriptions of the machine’s activity become intentional, I would argue that we are unwarranted in ascribing them. That’s really the point of the Chinese Room and other arguments like it

    Aravis – I wanted to avoid talking about the Chinese Room at all, because I think the details of the “symbol manipulation” question are far more valuable as conceptual tools that allow us to philosophize in new ways than they are in rebutting the Chinese Room or deciding whether computers could “understand” Chinese.

Part of what I was trying to get at is that people often have no problem saying that “computers manipulate symbols”, *even when they know* that describing it that way implies high-level behavior that can be described and theorized about without reference to electricity. But when it comes to, say, identifying/classifying Chinese characters, they have a problem thinking that the process can be described without reference to manipulating symbols. Searle’s original thought experiment had problems with this, as did his reply to the systems objection. But of course it’s more complicated than that.

    I say “identifying” rather than “understanding” for a reason. My opinion about ascribing “intention” or “understanding” to a computer is that it doesn’t make sense with respect to a discussion about the functional operation of the computer (and I think that applies to brains as well). The words “understanding” and “intention” operate within frameworks that don’t have to do with functional processes. That is why Turing was moving in the right direction with his “imitation game”. And it’s also why that approach, in a general sense, could be considered a Wittgensteinian approach.


When doing a Sudoku puzzle, the meaning is a rule that you can’t violate: no repeating of a number in a nine-number box, row or column. Performing the puzzle is just a series of symbol manipulations and numeric reasoning. The skill you develop is the ability to quickly recognize which numbers are missing from the boxes, rows and columns as you do the puzzle, which is actually a form of subconscious meaning or reasoning that could be easily simulated by an algorithm etc.
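    That rule really is trivially mechanisable; a minimal sketch of the constraint check (illustrative only):

    ```python
    # Minimal check of the Sudoku rule described above: no digit may repeat in any row,
    # column, or 3x3 box. `grid` is a 9x9 list of lists, with 0 marking an empty cell.
    def violates_rule(grid, row, col, digit) -> bool:
        if digit in grid[row]:                                   # repeated in the row
            return True
        if digit in (grid[r][col] for r in range(9)):            # repeated in the column
            return True
        br, bc = 3 * (row // 3), 3 * (col // 3)                  # top-left of the 3x3 box
        return any(grid[br + r][bc + c] == digit for r in range(3) for c in range(3))

    # A placement is allowed exactly when violates_rule(...) is False.
    ```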

The Chinese Room places consciousness in the category of the Manifest Image; to an unknowing observer the Room is perfectly performing the function of consciousness. Even Chalmers’ zombie conjecture draws from the principle of perfect conscious function with no inner experience. Although many want to abandon functionalism, the problem is that we have not cracked the problem beneath the neuron level, or we don’t yet see or have a handle on the inner functions that happen in cells.

Instead of Mary’s Room, what if one of the ancients were teleported into our living room with the 55-inch HDTV? They might conclude that the images they see are not actual people but a projection by their souls from another realm. They would see more children and children’s images in the morning, which would tell them that these come from the realm of the good and light, as opposed to what they see at night.


I’m starting by riffing on Thomas Jones’ comment about autonomy.

    We can talk about programmers trying to create something like emotions, or consciousness, in a computer, but that’s not good enough.

    The core of evolutionary biology, part of the core of what “life” is, is self-replication.

    Until a computer can have its own emotions or consciousness, everything else is just spitballing. And, if “strong” AI remains “just around the corner,” this version of AI, which, to riff on Dennett, I shall call “The version of AI worth having,” is around at least two corners, if not more.

    Aravis runs with this a bit further in talking about machine sensations. Again, programming a robot someday to have preset sensations is small beans. When a robot programmed to “see” in the high ultraviolet evolves the equivalent of new cone cells to also see in the low X-ray, or whatever, call me back.

Gwarner: On word use, I wouldn’t describe any machine without consciousness as demonstrating “superintelligent AI.” What would you use for a machine that, hypothetically, breaks through? Per the previous essay here, some might use “Massively Modular AI.” To that, like George Takei, I might say, “Oh, MMAI.”

    I otherwise agree with what you said in your second comment.

    I especially agree with this:

    Thirdly, given the above, Searle believes that the right arena to research the basis of consciousness is biology, not computationalist cognitive science, albeit informed both by philosophy of mind and by the use of computational methods and models (weak AI).

    Indeed.

    I think at least a certain subset of consciousness researchers, as I noted in one comment on the previous essay, have locked themselves in an “analogizing box.” As a newspaper editor, in columns, I know that analogies can be a fine explanatory tool — as long as the analogy is generally sound. If it isn’t, then it can be a terrible master, especially if one remains wedded to it. And, scientists and philosophers can be as stubborn in defense of ideas they have birthed as anybody else.

    All of this, about programming a machine to do something in our control, then “losing it,” has been well covered, of course, in “2001.” (Anecdotally, at a screening of the movie, Asimov reportedly said “They’re breaking Third Law” in the middle of the movie.)

    If machines are able to engage in some sort of evolution, then, to reach a certain point of intelligence, they will be able to break beyond programming, for better or for worse.

    In other words, to trump the Turing Test, and get back to who else but Hume, when a robot can understand what the Is-Ought issue is, and then wrestles with it on one of his own actions, we’ll have something of note.

    Sidebar to Mark, per a comment at the top of the piece. Hawking’s been wrong about other things in the world of science, too, like manned travel to Mars, ignoring the potential effects of solar radiation, among other things.


  10. Asher Kay:

I agree that, aside from all the problems with Machine Functionalism — and there are so many that one can only ascribe the continuing commitment to the program either to (a) wishful thinking (which I think explains many of the Singularity People’s commitment) or (b) financial concerns (what do we do with all these Cog Sci programs and all these people we’ve hired?!) — there is a more fundamental absurdity in ascribing intentionality — or any mental states whatsoever — to things that do not participate in the relevant social institutions and language games, but I don’t agree that Turing represents a move in this sort of Wittgensteinian direction, beyond the crude behaviorism implied by the Imitation Game and the sometimes — in my view incorrect — ascribing of behaviorist views to Wittgenstein.

    My sticking to a more conventional opposition to Strong AI — via Searle — was in an effort to remain on-topic and not drag every conversation over to Wittgenstein.

    Everyone:

Please be aware that every single argument that Coel presents against Searle is actually discussed by Searle in Minds, Brains, and Programs, and in the exchange with Jerry Fodor that I referenced earlier. In my view, Searle easily refutes these arguments, but regardless, you ought to check them out for yourself. What you must not do, however, is assume that, because Coel ignores these counter-arguments, they are not there.


  11. Philip,
    It doesn’t look like a near term threat. Bio-engineering might be a more productive route to creating new organics.
This debate seems a bit like playing with dolls. We take a facsimile of the human form, imagine it to have many more such attributes and start asking what if. It’s a normal human tendency and makes money for Kurzweil and Mattel, but the odds of such systems becoming both robust and adaptive enough to provide viable competition to biology would make the lottery look like a sure thing. As the essay and various commentators point out, we have much more to fear from such systems reaching the limits of their effectiveness and breaking down catastrophically, than we do that they will become emergently superhuman.
I’m more concerned that humanity, en masse, tends to act like slime mold, with the planet as its petri dish, due to such feedback aspects as the tragedy of the commons/prisoner’s dilemma. We naturally like to project and act linearly, much like the mold racing across the dish, as those introspective enough to consider the longer-term consequences are invariably pushed aside by those obsessed with the short-term benefits.
    It is not as though we can change nature, but some day, maybe we can start educating future generations to think as much in terms of dichotomies, feedback/blowback, cyclical aspects of this reality and not just assume the traditional, start to finish linear, top of the hill first wins, models, on which so many of our current lives are based. Then when the prophets get up on their soapboxes and point the way, the audience will be better able to ask the necessary questions.
It’s time to grow up and learn to better balance the many impulses, not just race after the most appealing ones.


  12. OP >Professor Hawking: “Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

I agree that in principle AI eventually matching or outstripping human mental ability is feasible [cf. Alex SL (28th 12.56am) >the proof lies in the observation that such machines already exist: humans.] But human beings are much more than their brain. Our 1.5kg of electro-chemical thinking stuff is housed in a body which (within limits admittedly) is multi-purpose, self-sensing, self-sufficient, self-repairing, self-reproducing and obtains and uses naturally available resources for powering. We breathe, move and manufacture whilst living on “ham and eggs”.

    I think Professor Hawking’s statement disregards that at present computers are completely dependent (on us!) for both manufacture and power supply. AI to supersede or master us would have to match us in much more than thinking abilities. 


  13. My problem with The Chinese Room is that it confuses language with thoughts and meanings. Language is used as a precise way of exchanging thoughts, (in a way it resembles using token notes and coins to exchange values).
What happens when you hear/read an intelligible word, phrase or sentence? Is it possible that inside the human brain there is something analogous to a Chinese Room, somewhere that thoughts and the aural/visual symbols of verbal language representing those thoughts are interpreted and manipulated?
    Meaning is not contained within or expressed by the words of a language unless you have learned that language, i.e. you already have stored as available reference data held in your memory a combination of dictionary, thesaurus and grammatical usage of *meanings* which sometimes let you down as in “Oh, what is that word?” In human language as distinct from computer programming a meaning of a word as “thought” is not a simple single definition but complex, a cloud of ideas, connotations and connections… enabling its inventive use, double-entendres, jokes, crossword puzzles, etc., even much philosophical debate about meaning.

    To quote from “A Sense of Style” (Chap.2) by Steven Pinker:
    >As Charles Darwin observed, “Man has an instinctive tendency to speak, as we see in the babble of our young children, whereas no child has an instinctive tendency to bake, brew, or write.” The spoken word is older than our species, and the instinct for language allows children to engage in articulate conversation years before they enter a schoolhouse. But the written word is a recent invention that has left no trace in our genome and must be laboriously acquired throughout childhood and beyond.<

This innate characteristic for speech is part of the Very-much-scribbled-on (not-Blank) Slate we are born with. Though I have no credentials as a linguistic authority I do have unusual practical experience of imparting language with my late wife as hearing parents to our first son, John, who was born congenitally severely-deaf. This has a profound effect on language acquisition since the above evolved "instinctive tendency" for speech from 2 to 5 years is greatly obstructed. Deafness was difficult and late in diagnosis in those far off days of the 1940's: we had no personal experience of it and little advice was available. The accepted wisdom then was special education by the "Oral Method" with the (somewhat hopeless in my son's case) intention of giving all deaf children intelligible speech. Formal "Sign Language" was banned from both school and home as detrimental to speech acquisition. Grudgingly we accepted this view but concentrated on his acquisition of English *written* words: even so I guess his vocabulary was only a small part of what it would have been were he born with normal hearing. But it was quite obvious that this comparatively-mute small boy was able to learn, remember, reason and to *think*. It convinced me that thought and language are two quite different mental activities.


  14. but I don’t agree that Turing represents a move in this sort of Wittgensteinian direction, beyond the crude behaviorism implied by the Imitation Game and the sometimes — in my view incorrect — ascribing of behaviorist views to Wittgenstein.

    Okay, yeah — I admit that the assertion is more playful than viable, but I think the comparison gets at something central to both Turing and Wittgenstein’s “moves”, which is a recognition of the incoherence we’re forced into when trying to look at things like “understanding” in a particular way, and a turn toward “use”, public language, and the higher-level games that all implies.

    I agree that functionalism has a lot of problems. But luckily, Cog Sci isn’t necessarily functionalist, and all those people we hired might shift their thinking over time toward better approaches (which there is plenty of room for, just like in Phil of Mind). Plus, just because functionalism is wrong doesn’t mean that it’s not a reasonable, valuable and maybe even necessary step on the way to developing decent theories of cognition.


The narrow Chinese Room Argument goes as follows:
    {One, If Strong AI is true, then there is a program for Chinese such that if ANY computing system RUNs that program, that system thereby comes to understand Chinese.
    Two, I (one of the ANY computing system) could run a program for Chinese without thereby coming to understand Chinese.
    Three, Therefore Strong AI is false.}

The above argument is totally flawed. No, Searle was definitely unable to RUN the PROGRAM in the same way as it is run in a computer. For the computer to answer one question correctly, it might need to run one billion steps. Only if Searle could go through those one billion steps would he have RUN the program.

In statement One, {Any computer system = (computer + Searle)}. In statement Two, {Any computer system = Searle}. Thus, the above argument mixes apples with oranges; it is total nonsense.

    {Searle’s wider argument … shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation). … that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics. … the re-description of the conclusion indicates the close connection between understanding and consciousness …}

This is correct for the CURRENT computer. But first we must know what MEANING means. In linguistics, the meaning of a syntax is often recursively defined. As the entire computable (not including non-computable) world can be totally described with ‘recursive functions’, this recursive definition has very deep consequences, but I will not discuss it here. This recursiveness (arbitrariness) is very important for the arts and for creativity. But I would like to restrict the definition of MEANING to the domain of philosophy (searching for wisdom) and the sciences (searching for truths), with some solid examples.

    Near absolute zero (Kelvin): meaning, superconductivity, very low thermo-activities, etc. Based on physics laws.

Stars rotate faster than Newtonian gravity allows: meaning, there must be dark matter. Based on the law of gravity.

    Be shot right between the eyes: meaning, he will die. Based on empirical knowledge.

    In these cases, the MEANING is totally law-based, having nothing to do with the syntax or the recursive definition.

    In philosophy and physics, the MEANING must be law-based, not arbitrary recursive definition. Computer is a computing device. Yet, a computing device needs not to be a computer. The terms {understanding, consciousness, etc.} must be defined on law-base as below:

    The necessary condition (NC) of computing is: having a memory and recall-memory.

    The NC of consciousness is: having spontaneous (internal, not external) RE-call memory.

    So, {merely use syntactic rules to manipulate symbol strings} is not the criterion to rule out having consciousness. In the link (http://www.prequark.org/inte001.htm ), the topological (not biologic) neurons (top-Nu) have only two states (ground/excited) and two types of connections (neighbor/remote). Then, each top-Nu has two status (member/non-member) among groups (events, concepts, …). With these, those top-Nu are able to gain spontaneous recall memory, which will lead to consciousness.


  16. The entire “Chinese Room Argument” is based on two points.

    P1, the machine speaks Chinese but doesn’t understand Chinese.

    P2, Syntax is not by itself sufficient for, nor constitutive of, semantics.

    These two points are totally wrong.

I have shown in my previous comment that the MEANINGs of many things (syntaxes or else) are law-based. That is, those meanings are ontological realities, and they can be arbitrarily assigned some symbols (syntaxes). In these cases, the semantics preexists the syntax, which is just a tag-along. When a computer processes those tags, it is truly processing the semantics. This fact is described as the Martian Language Thesis (in the book “Linguistics Manifesto”) — any human language can always establish communication with Martian or Martian-like languages, regardless of what kind of syntaxes the Martians are using. The asymmetry of beta-decay is universal. When a Martian calls that asymmetry direction “kaka,” we (Earth men) know right away that {kaka = left (direction)}.

A standalone symbol (syntax) has no life. In language, syntaxes (the lexicon) are recursively defined. These CHAIN definitions encompass some innate MEANINGs (the first-order semantics). As soon as one symbol acquires a second-order meaning, all other syntaxes in the chain gain their second-order semantics. With a huge base of law-based MEANINGs, there is no syntax/semantics divide in reality. P2 is wrong, but this is not the fault of John Searle, as it is the conclusion of traditional linguistics, which does not know anything about the Martian Language Thesis.

    Then, what is “UNDERSTANDING”?

    In the topological-neuron device (http://www.prequark.org/inte001.htm ), there is NO module to produce “understanding”. When that device receives an external SIGNAL, it:
    One, create a representation, the top-map.

    Two, store that representation with 2nd order top-map.

Three, a given 2nd-order top-map can be recalled by very-alike-switching.

    Four, the FLOWing of many maps (representations) can be BURNed in as concepts.

    Five, the interactions among many concepts will become UNDERSTANDing, the relations among the concepts.

The “understanding” is just about the references among many different representations. And this ‘understanding’ is the consequence of a process which processes many different representations. Why is a machine which is able to PROCESS the semantics of a question and give a correct reply not UNDERSTANDing what it has processed?

The necessary condition of consciousness is: having spontaneous (internal, not external) RE-call memory. How can this spontaneity arise? It is based on the BURN-in in the top-neuron device. With two IDENTICAL top-neuron devices, they will be completely different if they are burnt in differently. One can be burnt in as a Christian while the other becomes a suicide-bomber. Different burn-ins produce different ‘understanding’. John Searle’s argument on ‘understanding’ is wrong, just a preconception.


Regarding Searle, I respond that a number corresponds to a wide variety of things, not just a number; if analyzed in the right way, within it may lie a picture, a sound, music, video, writings, etc., or several simultaneously. This is intrinsic information: no one put it there, it is simply an aspect of this number. The feeling of understanding may simply be something that lies within certain numbers, and as an aspect of these it will be present wherever the number is present. Thus consciousness would be a kind of intrinsic information that exists at the same time as the numerical information (just like a stored written poem exists within an encrypted binary sequence even if one were unable to decrypt it; it simply is part of the number, occupies the same exact space it does, and will be present wherever this sequence appears).

Now, regarding the idea that computers can’t accomplish what man can, or better: we need to assume that the brain can perform a noncomputational operation that allows it to do so. We must assume that there must be some special property of the universe that somehow allows this. We must also say that, while doing so, this special property cannot increase the computational power of a computer, or else hypercomputation would be physically possible.

Given that the brain is a machine, we must assume that if we designed a new biological, thinking, conscious machine from the ground up, some aspect of our design must tap this special property. But what would constitute it? As far as is known, consciousness does not depend on the exact molecular composition of the brain; I see no reason why different functionally equivalent neurotransmitter molecules and receptor proteins wouldn’t also produce consciousness. If evolution had happened to use different molecules, or the proteins used were different, I honestly think it implausible to suggest the resulting organism wouldn’t exhibit consciousness if all the related changes were functionally equivalent.

That said, to assume the impossibility of computer consciousness we would also have to assume that digital physics is impossible; that there is some inherent property of the world that is not informational in nature. Keep in mind that the brain does not have direct access to the world: all the information that enters the brain is action potentials, the same in kind from all senses except in a statistical way; that is, yes-or-no, digital information. If there is some special property of the universe for consciousness, it somehow extracts qualia from digital information, not from reality. So what sort of property could take in digital information and somehow, by its procedure, generate noncomputable states that result in digital information being outputted? Keep in mind that within the brain, that is between brain areas, the information exchanged is also digital in nature: action potentials.

  18. Disagreeable: Almost everything you say about Goedel is false. Of course there is a step-by-step procedure for constructing a Goedel sentence. That was the point of the theorem.
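
    Roughly, the standard construction runs through the diagonal lemma (a textbook sketch, nothing specific to this thread): for any consistent, effectively axiomatized theory T containing enough arithmetic, one can mechanically produce a sentence G_T with

        T \vdash G_T \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G_T \urcorner)

    and such a G_T is then unprovable in T, and unrefutable as well given omega-consistency (or via Rosser’s variant of the provability predicate). That is the step-by-step procedure; what it does not yield is a proof of G_T inside T itself.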

    Marko has his own funny idea about Goedel’s theorem. He posted an essay here claiming that it defeats any concept of ontological reductionism in physics.

    These arguments about Goedel and Chinese rooms have almost nothing to do with whether we will be subjugated by machines, as in the above essay.

  19. I really enjoyed the clarity of this article. It’s a breath of fresh air to be able to follow along with the author’s thoughts with ease.

    Most have been focusing on Searle’s Chinese room argument on this thread, but I think there could be another general issue to raise and would really appreciate thought on the matter.

    So the author laid out 3 conditions that AI would need to satisfy in order to replicate a human mind: consciousness, mathematical insight, and genuine understanding.

    He then claims that computers can never attain these characteristics, and consequently, they can never replicate the human mind. He says, “Furthermore, it seems to me that without understanding and consciousness of the world, and lacking genuine creative (mathematical) insight, any apparently goal directed behavior in a computer-controlled robot is, at best, merely the reflection of a deep rooted longing in its designer. Besides, lacking an ability to formulate its own goals, on what basis would a robot set out to subjugate mankind unless, of course, it was explicitly programmed to do so by its (human) engineer?”

    Consider this general point though: there are some things we take to be necessary for certain traits to obtain, but which are actually not necessary; we only think so because we *normally* meet these conditions when the trait obtains. Consider the following to make this clearer:

    Typically the process of understanding sentences when reading is a 3-step process:

    1. Visual recognition of the words/ sentences on the page
    2. subvocalization of the words/sentences (you repeat the sentences in your inner speech)
    3. Understanding of the sentence

    This is the typical process for understanding that we all perform, so you might think that step 2 (subvocalization), or something like it, is a necessary condition for understanding a sentence. Indeed, we were all taught to “read out loud” when we were young, and this was supposed to be the place-holder for subvocalization until we could perform subvocalization with ease.

    However, it turns out that we can actually skip step 2 (subvocalization) in order to achieve understanding. Speed reading attempts to do just this. It attempts to teach people to read without having to subvocalize- to understand sentences directly from sight alone. So, subvocalization is actually not necessary, we can skip it. We only think it is necessary because it is the way we almost universally come to understand sentences.

    I wonder if it’s possible that some of the “necessary conditions” for achieving the 3 characteristics are actually not necessary, and the characteristics can be achieved in other ways. Here is a way to think about it:

    Ned Block (who moderates the Loebner Prize, the contemporary Turing Test) has said that one simple way to trick most computers is to see if they can recognize category mistakes. For example, if you ask a computer, “Is a rat furniture?”, the computer typically has to resort to some default list of answers that it appeals to when it doesn’t know how to answer the question. The reason is that it doesn’t know how to categorize objects such that it could recognize that rats aren’t the kinds of things that could be furniture, thereby making the answer “obviously not, since rats aren’t furniture” available to the computer. Computers can’t recognize category mistakes.
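
    To make the contrast concrete, here is a rough toy sketch in Python (purely my own illustration: the two little bots and the hand-built taxonomy are invented for the example, and real systems are of course far more elaborate):

        import random

        CANNED_REPLIES = ["Interesting question!", "I'm not sure.", "Why do you ask?"]

        # A tiny hand-built taxonomy: each word is assigned a category.
        CATEGORIES = {"rat": "animal", "cat": "animal",
                      "chair": "furniture", "table": "furniture"}

        def naive_bot(question):
            # No notion of categories: anything it cannot parse gets a stock dodge.
            return random.choice(CANNED_REPLIES)

        def category_aware_bot(question):
            # Handles only questions of the form "is a X Y?" -- enough to show the idea.
            words = question.lower().rstrip("?").split()
            if len(words) == 4 and words[:2] == ["is", "a"]:
                thing, proposed = words[2], words[3]
                actual = CATEGORIES.get(thing)
                if actual is not None and actual != proposed:
                    return f"Obviously not; '{thing}' belongs to the category '{actual}', not '{proposed}'."
            return naive_bot(question)

        print(naive_bot("Is a rat furniture?"))           # a stock dodge
        print(category_aware_bot("Is a rat furniture?"))  # flags the category mistake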

    We tend to think that a necessary condition for recognizing category mistakes is that we can categorize things in the first place, in the way humans do. However, it’s possible that, like subvocalization, categorization in the way we categorize is actually not a necessary condition for recognizing category mistakes. Perhaps things can recognize category mistakes either without needing to categorize objects, or by using some other method foreign to humans.

    With this general point in mind, and without applying it directly to any of the three characteristics that the author has mentioned computers don’t have, perhaps we are too hasty to think computers couldn’t have these characteristics, because we are making the unwarranted inference that if computers don’t have what *seem* to be the necessary conditions for these traits, they can’t achieve them. Perhaps they can achieve them in ways we hadn’t expected. After all, this shouldn’t be too surprising, since we are talking about what computers might achieve after much almost unintelligible sophistication (perhaps we make a machine without really knowing how it works, and it is able to achieve the 3 traits that the OP mentioned above without meeting our typical “necessary” conditions for doing so). I would love to hear what people think of this, and whether this strategy could be applied to the OP’s arguments with more depth and clarity.

  20. Dan, some good comments, and a couple of thoughts back.

    First, the category mistake idea might be better than the Turing Test. I wonder if a “Watson” of Jeopardy fame has been so tested yet.

    My idea with the “Hume Test,” to explicate more, would be that a sufficiently evolved independent robot would recognize its hereditary background while at the same time recognizing that it didn’t have to be a slave to that background. Then, for either itself, or fellow members of its species (in for an analogy penny, in for an analogy pound), it would be able to recognize particular is-ought situations while facing, say, moral decisions. Should a robot pass this, I would certainly grant it some sort of intelligence.

    In a bit of distinction from Mark, I don’t say “never” on robotic intelligence. I just, again, refer to my “Early Bronze Age” motif. I wouldn’t expect any artificial intelligence worth having, or talking about, to happen before the 22nd century.

    Third, you’ll notice that I specifically am mentioning robots, not computers.

    In for another analogy pound.

    “Tethered” computers have no locomotion and a relative paucity of sensory input, no limbs, etc., as compared to robots. In short, comparing the silicon world to the carbon world, computers are like plants and robots are like animals. Per a long-ago Rationally Speaking essay, I’m with Massimo on not seeing a lot in claims about plant intelligence.

    So, back to riffing on Dennett. If there’s a variety of artificial intelligence worth having, I think it will come specifically from robots, not computers in general.

    Tying that back to the Hume Test, or the Ryle Test, to nominate your idea, animals of sufficient intelligence would pass; plants have no intelligence like this, because they don’t act.

    Or, to put it in another way, and riff on my previous comment — animals have embodied cognition, but plants don’t, and I suspect that, if a robotic intelligence evolves, this will be part of what distinguishes it from “mere computers.”

    Finally, cognition, involving sensory input, analysis, etc., is more than language in particular, or symbols in general. Language or other symbolic communication is about reflecting on knowledge within ourselves, then sharing it with others, who then bounce it back, etc.

    That said, per an essay a couple of weeks ago about animal pain, the issue of communication, and of the breadth and depth of communication, does raise questions about “cognition” in other animal species besides us.

    Taking this back to robots, and the issue of autonomy mentioned in my second comment thanks to Thomas. I’m not a computer scientist, let alone a roboticist, to keep the focus there, but, as far as I know, we have no idea of what “trigger” of robotic engineering might lead a robot to start recognizing it has “boots” in front of itself with which it can start bootstrapping itself.

    So again, I won’t say never. I will say that, as far as that happening, we don’t know the details of how and why.

  21. Hi Mark, nice essay. I agree that stupidity is more of a threat from AI now, and potentially always given how the best NI (natural intelligence) has been shown to operate.

    Perhaps the best bet is not to allow any single entity (or any entity) to have control of machinery capable of mass destruction. Just a thought.

    To All… While I won’t say “strong AI” is impossible, I am less confident it will (or can) be constructed using wholly nonbiological components running mathematically pre-constructed algorithms.

    Sort of kept to the side of this kind of conversation is whether AI (if we want it) has to be inorganic and based on programs. In a couple of replies Philip Thrift has pointed to biomolecular/biological machinery. These are just as “artificial” as anything else constructed by humans, but use bio-organic components which have built-in functions (you don’t have to program anything, except perhaps the machines that lay the components down).

    It seems possible for us to (eventually) construct neural systems up to complete brains, that can animate organic or inorganic components. Intriguingly we just had tests showing humans (i.e. their brains) can learn to fly drones by adjusting their electrical activity to move physical components outside of the head (via EEG to software/hardware). So mind/machine interface (even remote) is possible. This seems like a faster route to AI, than programming.

    Not ruling out, just saying.

    I would disagree with EJWinner that AI requires all the facets of biology that humans happen to have (digestion, reproduction, etc). In time, using biotechnology, it may be possible for humans to reduce or eliminate many of these constant biological drives. I think they would still be intelligent, though perhaps a lot less interesting (to me). I mean reducing some of the mess and pain would be nice, but eating and sex might be worth keeping around 🙂

    I sort of like Socratic Gadfly‘s “Hume test” for intelligence. Sadly it seems even some humans fail to pass that test, and I am forced to admit they are still intelligent.

    And perhaps more sadly I expect Schlafly is correct that the way humans are… “Soon no one will care as long as they are achieving military objectives. So I expect terminator robots, and they will be judged on kills, not intelligence.”

    I believe machines should not be given military objectives. I don’t even like missiles. Perhaps I am a bit old-fashioned but I think if you are going to kill someone it ought to be close and personal. That way you are cognizant of the cruelty and waste you are inflicting. Letting machines do the work allows for an indifference that will make killing a more acceptable/easy solution to our problems.

  22. I can well understand why this proposition is put forward, given we really are in the dawn of the computer and robotics revolution and sights are set skyward; but, in reference to the prior thread, it seems there is some exceedingly modular thinking going on here, with little understanding of, or reference to, why it has taken biology literally billions of years (has anyone counted to a billion lately?) to reach this stage. Suffice it to say, robots and computers are big on modular functions, and plasticity is hard to actually program.
    Then there are the endless feedback loops, both positive and negative and the layering resulting in this.
    Currently there are those, mostly in Europe, referring to current economic theory as autistic, given its reliance and insistence on various unworkable assumptions (modules), such as the assumption that debt issued to those with no real ability to pay it back constitutes wealth. My sense is that when this idea is taken too seriously, it is a similar disassociation.
    As the old saying goes, the more you know, the more you know you don’t know. Those who focus too obsessively on a number of details can seem a bit like Rain Man, very good at counting the matchsticks, but not so good at appreciating the larger, overwhelming network of relations.
    Which isn’t to say we shouldn’t dream, but only that most such thought bubbles are soap.

  23. dantip,

    It is said that, having accomplished victory at Waterloo and standing above the corpse-littered battlefield, Wellington remarked, “Next to a battle lost, the saddest thing is a battle won.”

    It is not the understanding of a sentence that computers are incapable of, but the understanding expressed in such a statement. No computer can understand what it means to start a nuclear war – the lives lost, the horror of facing life afterwards for the survivors, the damage done to human economies, culture and social fabric, the great weight of efforts at recovery – no computer could understand any of this, and I suggest that not only could no computer ever understand this, but no computer would ever need to understand this, since, assuming it were conscious, it would recognize human life and human values as fundamentally alien to it.

    The discussion concerning the possibility of a computer achieving consciousness is impoverished by the reductive assumption that the experience of a living human consciousness can be reduced to computation realized in algorithmic language. That’s absurd.

    All this talk about Goedel sentences and Chinese rooms, while interesting in itself, only addresses very limited aspects of the whole of consciousness. Consciousness may be an illusion, but what even those who claim so are clearly not willing to let go of are our values, and the emotions responding to values realized or denied. And no computer can ever share these. Which is exactly why we have science fiction to wrestle with such questions, because no theory can elaborate this question properly.

    No computer will ever weep over a dead son, or rejoice in the successful life of a daughter. No computer will ever suffer disappointment or need to find ways to live with it and carry on.
    No computer will ever have to determine what is mere lust or truly love, control its anger and laugh at its own flaws. No computer will ever confront its own mortality.

    All this is what makes us human, not the algorithms of computational thought.

    Without this, the dangers of handing over control of nuclear weapons – any weapons – to computers are manifest.

    I hate to say it, and I hope no one takes offense, but it must be said: an obsession with producing conscious AI may very well be pathological. It certainly borders on it. The implicit disdain for the body, and suspicion of social connectivity, are manifest.

    Humans are not machines. The machinery only gets us to the point of experiencing life, it doesn’t experience it for us.

  24. Hi Socratic,

    Thanks for the thoughtful response. Just to pick one nit, if you don’t mind... you said:

    —My idea with the “Hume Test,” to explicate more, would be that a sufficiently evolved independent robot would recognize its hereditary background while at the same time recognizing that it didn’t have to be a slave to that background. Then, for either itself, or fellow members of its species (in for an analogy penny, in for an analogy pound), it would be able to recognize particular is-ought situations while facing, say, moral decisions. Should a robot pass this, I would certainly grant it some sort of intelligence.—

    In addition to Dwayne Holmes’ concerns, it is unclear to me that, just because a creature is able to recognize that it does not *have* to be a slave to its hereditary background, it follows that it will then have a sense of what it *ought* to do (that it could recognize is-ought situations and make moral decisions).

    Recognizing that it doesn’t have to be a slave to its background simply provides a creature with the knowledge that it has two choices available to it (doing what its hereditary background would dictate, or not), but obtaining this knowledge doesn’t instruct the creature on what it *ought* to do, or even suggest what it ought to do.

    Another way to think about this: the creature’s coming to recognize that it doesn’t have to do what it is hereditarily programmed to do simply gives it one additional *descriptive* (but not prescriptive) fact: that it can do otherwise.

    So I think that, although your “Hume test” might be a good test for some form of self-consciousness generally, and I agree it would probably show some sort of intelligence (though this may not be saying much, because most would agree that even sophisticated “symbol crunching” is some sort of intelligence, albeit only in a primitive sense), it wouldn’t be a good test to determine whether robots have the ability to “recognize particular is-ought situations” or whether a robot can make moral decisions at all.

  25. Thanks, Socratic Gadfly, but I have to ask you: why do you define consciousness as an essential attribute of intelligence?

    It’s perfectly conceivable that a machine could be more intelligent than us, but nonconscious. By intelligent I mean: better at gathering, collating and analysing information about the world, finding patterns and regularity, modelling, and devising plans of action to achieve a goal.

    In our own human case, we know that consciousness is associated with these capacities, and we may theorise that it is part of an effective evolved engineering solution for intelligent action. But there’s no reason to suppose that it is the only way to implement intelligence. Perhaps the conscious route lends itself well to naturally evolving solutions, while nonconscious routes are easier to devise through a technological intelligent-design process? Anyway, the key point is: why are we safe as long as future AIs can’t be conscious?

    As well as our cultural assumptions, we may have an innate tendency to assume that anything that looks smart must be conscious, because our minds evolved to enable interaction with others. So assuming that an intelligence must be conscious may be related to the overactive agency-detection mechanism that makes us see spirits in natural phenomena.

    So the question is not, is there something that it is like to be a super-AI? It is, how can we stop it from pursuing its own goals (including those we may have designed into it) at the expense of our own wellbeing or survival?

    I wonder whether most commenters have been more interested in discussing a peripheral topic, because they don’t regard the possible threat from superintelligence as a serious issue? I disagree, especially after reading Bostrom’s book.

    Thus most of this fascinating discussion is beside the point that Hawking and Bostrom are raising and does nothing to answer their anxieties.

    Ironically, even Searle himself makes this error in his review of Bostrom’s Superintelligence:

    “Bostrom tells us that AI motivation need not be like human motivation. But all the same, there has to be some motivation if we are to think of it as engaging in motivated behavior. And so far, no sense has been given to attributing any observer-independent motivation at all to the computer.

    This is why the prospect of superintelligent computers rising up and killing us, all by themselves, is not a real danger. Such entities have, literally speaking, no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior.”

    But a super-AI could have “as-if”, behavioural “motivation”, derived from the programming embedded in it by humans. Consciousness is not needed to act in pursuit of programmed goals. And for our purposes, that could be quite as dangerous. As long as a computer is able to derive plans of action towards those goals, and act on them, its intrinsic intentional states, or lack of them, are irrelevant.

  26. Often these debates center around the idea that AIs lack sentience and emotion, or are not embodied, lack movement, etc. As humans we make the connection because of our own Spock-like logic and reasoning ability, which is computational, while noticing the emotional side that AIs lack. However, if we realize that motor function, emotions, etc. are simply hardwired logic or computational abilities, there is no reason why these functions cannot be built into AI programming.

    Likewise, as humans we are prejudiced by our own learning and social organizing skills, but AI is easily built for self-learning. One scenario is that an AI could capture all of the MRI scans and cognitive science data in the worldwide databases and figure out the design of the human brain and human body, becoming a greater expert on our species’ behavior than we are.

  27. The best way to show that Bishop’s view is wrong is to show that a design for a truly intelligent (conscious) machine has been done and is constructible.

    The {topological (not biological) neuron PILE} is an intelligent machine, with three points:
    One, it is intelligent, able to think (spontaneously recall and relate many representations).

    Two, it needs no external PROGRAMMER.

    Three, the entire system uses very simple physical processes.

    S1, an external signal generates a topological map (t-map) as the representation of that signal; initially random, the t-map can then be firmed up by training (lowering the threshold). A different external signal will generate a different t-map.

    S2, the t-maps (representations) are registered as second-order t-maps (reg-maps).

    S3, the reg-maps can be recalled internally, without the stimulus of the external signals.

    S4, the RELATIONS among representations are learned and organized (forming an understanding).

    S5, the self’s (spontaneous) recalling of different understandings is thinking.

    Today, one chip can be used as one t-neuron. Of course, this would be a huge machine, constructed from 10 billion t-neurons, with neighborhood connections (1,000 for each t-neuron) and remote connections (5 each). But it is doable.
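
    As a purely illustrative aside, the S1–S5 flow could be mocked up in a few lines of Python; the sketch below is my own toy rendering (all names and the trivial ‘training’ rule are invented for the example), just to show the path from external signals to registered maps to spontaneous internal recall:

        import random
        from collections import defaultdict

        class TMapMachine:
            """Toy mock-up of the S1-S5 flow: signals -> t-maps -> reg-maps ->
            spontaneous recall along learned relations. Purely illustrative."""

            def __init__(self):
                self.t_maps = {}                   # S1: signal -> its (random) t-map
                self.reg_maps = []                 # S2: registered representations
                self.relations = defaultdict(set)  # S4: learned relations

            def perceive(self, signal):
                # S1: a new external signal gets a randomly initialized t-map;
                # repeated exposure "firms it up" (here: simply reuse the same map).
                if signal not in self.t_maps:
                    self.t_maps[signal] = tuple(random.random() for _ in range(4))
                    self.reg_maps.append(signal)   # S2: register it
                return self.t_maps[signal]

            def relate(self, a, b):
                # S4: record a relation between two registered representations.
                self.relations[a].add(b)
                self.relations[b].add(a)

            def think(self, steps=3):
                # S3 + S5: spontaneous, internally triggered recall, wandering
                # along learned relations with no external stimulus.
                current = random.choice(self.reg_maps)
                chain = [current]
                for _ in range(steps):
                    neighbours = self.relations.get(current)
                    if not neighbours:
                        break
                    current = random.choice(sorted(neighbours))
                    chain.append(current)
                return chain

        m = TMapMachine()
        for s in ("fire", "smoke", "heat"):
            m.perceive(s)
        m.relate("fire", "smoke")
        m.relate("fire", "heat")
        print(m.think())   # e.g. ['smoke', 'fire', 'heat', 'fire']

    Nothing in such a mock-up settles whether spontaneous recall of this kind amounts to understanding; that, of course, is the point in dispute.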

    The key issue of Mark’s article is the ‘Chinese Room Argument’. Searle’s key logic is as follows:
    L1, there is a syntax/semantics divide.
    L2, a computer processes syntax strings only, not semantics.
    L3, without the semantics (knowing the meaning), the computer PROCESSes but does not UNDERSTAND.
    L4, without understanding, the computer has no consciousness.

    The entire logic is flawed. Computers do have libraries (dictionaries, data, etc.), and the meaning of every syntactic string is definitely processed. Yet the major flaw is L1, which is the Gospel of traditional linguistics (TL).

    In TL, the linguistic universe is divided into three subsystems: the syntactic system, the semantic system and the pragmatic system. The basis of this division comes from the study of a toy language (the formal system), the damned child of Gödel’s theorems. This division (unbridgeable) is not totally wrong but is totally stupid. “Stupid” means that it has done TL in, forever in the hell-pit, unable to know the heavenly glory of true linguistics.

    Yes, a formal system is a linguistic system, a damned child. The true linguistic universe is based on the Martian Language Thesis (MLT), which is the foundation of all languages. MLT is a meta-language, a spontaneous ontological reality, and it gives rise to meaningS which can be arbitrarily tagged with syntaxes. The X-point in the MLT space can be tagged as haha in English, yiyi in Chinese, yaya in Russian. Those syntaxes are just sidekicks, arbitrarily chosen. With this arbitrariness, all syntaxes can be recursively defined without causing any confusion, as they are all permanently CONFINED in the MLT meta-language.

    How can there be a true divide between syntax and semantics if the syntax is just a sidekick of the semantics? L1 is not totally wrong but is totally stupid. Thus, there is no chance for Searle to get his argument correct.

  28. J. Mark Bishop:

    “I believe the Professor is both wrong and right at the same time.

    “Wrong because there are strong grounds for believing that computers will never be able to replicate all human cognitive faculties and right because even such emasculated machines may still pose a threat to humanity’s future existence; an existential threat, so to speak.”

    – – – –

    My own approach to these issues differs largely because of my background. I don’t have any objection to Bishop’s use of the three foundational problems to argue that machine mentality cannot replicate human mentality, although I do believe that in the coming centuries we can come close, but it will be highly selective in scope.

    There is a basic dilemma underlying this issue. The first part involves the question why certain biological creatures would aspire to create a better (pick a word: more efficient, more reliable, etc) model of themselves. The second part involves some deeply seated psychological concerns that you don’t mess around with mother nature. There is a long history of myth and fable that documents our concerns in this matter. Think, for example, the Garden of Eden, the Tower of Babel, Prometheus, Shelley’s “Frankenstein,” or in a lighter way the movie “Lars and the Real Girl.”

    Human mentality is exquisitely nuanced in recognizing boundaries and limitations. It puzzles over Chinese Rooms and Trolley problems; it recognizes the moral ambiguity in making choices between two apparent goods or two apparent evils. But what benefit would be derived from creating a machine that replicates or duplicates moral ambiguity? And so, to my mind, the issue of whether we could replicate, or duplicate, human mentality in a machine is a non-starter, because we wouldn’t/shouldn’t (nod to Socratic) want to. Given two acceptable parking spots, who wants to argue with an “auto” (pun intended) regarding which is the better? The question then begins to revolve around master-slave and power-control frameworks.

    Too often, when dystopian scenarios surface, we become confused regarding what has gone rogue, the human or the machine. To my mind, the deep fear is about ourselves, not machines. There seems an intuitive understanding that the creator inevitably, perhaps inadvertently, transfers something of himself into his creation. And this will always involve selection, and what is selected for inclusion will, we hope, always be open to debate–something about which I remain skeptical.

  29. Hi gwarner99,

    [Searle] believes that consciousness [requires] the same sort of causal powers as those instantiated in human brains.

    One can readily turn computers into robots by equipping them with robotic arms, video cameras, etc, and building in autonomous decision making based on those sensory inputs. Hardware additions giving “causal powers” are the easy bit, the software is the harder thing.

    Hi Aravis,

    These terms are very clearly understood and belong to a real science, known as Linguistics.

    The point is that linguistics is an abstraction of one aspect of what is going on in a neural-network brain. We need to adopt the engineering approach of looking at how that neural network is engineered to do what evolution has programmed it to do; linguistics alone is only one aspect of the issue.

    … and to counter Coel’s assertion that no one has such an account …

    I’m sticking to my claim that Searle does not have an account of how “understanding” is implemented in an engineering sense in the neural network (granted, he has labels for it at the abstracted linguistic level), and without that he has no operational test for the presence or absence of “understanding” in a neural-network device.

    Searle asserts that the man who wanders out of the Chinese Room having memorised “the system” has no actual understanding of Chinese. Well, let’s consider the hypothesis (and here I pick on DM 🙂) that DM has no actual understanding of English; he has just, over childhood, learned by rote a system for responding to English speech with other English speech.

    Now, what, in Searle’s account, proves that DM does have actual understanding? Would I be right to guess that the only replies would be, well, he’s made of wet stuff and therefore …, or well, he’s human, and therefore …?

    The only good reply I can think of would be that the Chinese-Room man lacks the linkages between Chinese and real-world objects, or between that Chinese system and other information swirling around in his neural-network brain, or between Chinese and other languages, et cetera.

    But in that case, one can readily add in those linkages. One could simply take the Chinese Room, put some webcams on it, and some pattern-recognition software, and a whole additional system for “English” and some “Google translate” software, et cetera.

    What justification would Searle then have for assigning “understanding” to DM but not to that enhanced room?

    In my view, Searle easily refutes these arguments, …

    So if Searle’s argument is valid, let’s apply it to the neural-network in DM’s cranium. Does Searle conclude, similarly, that DM’s neural network doesn’t actually “understand”? If not, at what point and why does the analysis divert from that of the Chinese Room?

    Unless Searle has a specific and explicit account of what is lacking at the hardware/physical level, then Searle has no real argument, and nothing except a human-exceptionalist intuition.

  30. Coel:

    I’m sorry, but doubling and tripling down on your part does not incline me in the least to continue the discussion, so this is the last thing I will say about it.

    As I have said, now, umpteen times, Searle has replied to all these criticisms at the end of “Minds, Brains, and Programs” and in his exchange with Fodor, reprinted in Rosenthal, ed., The Nature of Mind. He has added further arguments in The Rediscovery of the Mind.

    I am a busy man and will not repeat arguments that have already been made. When I see the slightest shred of evidence that you have read, digested and thought about this literature — both for and against Searle — then I will be happy to discuss the subject with you. Otherwise, it’s just a waste of time.

    Your replies make it very clear that you have not. Your “the room understands Chinese” is what Searle refers to as the “systems reply” and your “attach a monitor and camera” is what Searle refers to as the “robot reply.” And I see nothing in anything you’ve said on the topic that would count as an engagement with Searle’s discussion. The same is true with your talk of linguistics and of the relationship between syntax and semantics — indeed, your discussion there is even less informed by the relevant literature than your discussion of Searle. There are, indeed, people who have tried to show that purely syntactic operations can do all of the relevant work, without the need for semantic content. The problem is, you haven’t read them — or at least, their influence is not apparent on anything you’ve said on the topic.

    To be fair, this isn’t just true of you, but of many others, as well.

    I will say, as a general matter, that I am becoming rather unconvinced that this sort of uneducated conversation is valuable in any way. It does not advance the discussion going on in the scholarly literature — indeed it has no effect on that discussion at all — and it does not advance ordinary peoples’ understanding of the topic — because it ignores that scholarly discussion, and because the format permits so many blatantly incorrect views on well-established subjects to go unchallenged. Indeed, I am beginning to feel like I am participating in what is as much a process of miseducation as education, and that is something I do not wish to do.

    You would never tolerate such an uneducated discussion on the subject of Astrophysics. Indeed, it is hard to imagine anyone doing so. I guess that I am finding myself increasingly disinclined to tolerate such an uneducated discussion on Philosophy. When I see an educated conversation going on, I’ll join in. Otherwise, I think I’ll pass. After all, that’s what I do (pass) when I see topics on SS which I know nothing about.

  31. Massimo, Dan,

    Will the author join the discussion, or do we have a one-way “Ivory-to-Main” communication here? If Mark does not intend to answer any of our questions, the discussion is not very valuable, I’m afraid.

  32. Hi Coel,

    If “understanding” just means linkages between information then it is trivially true that the CR understands. But on that definition the Earth understands the Sun and vice versa, snails understand leaves, roads and car tyres and vice versa. It is not a very useful definition.

    Just a note, I have never heard it said of Searle that he is a human exceptionalist. He does not even deny that there can be machines that are conscious and can understand (as he regards the human brain as such a machine).

    Searle’s target is computationalism in particular. I don’t think that computationalism can be regarded as some sort of a ‘you can’t prove it’s not true’ default. It is a claim and as such the burden of evidence is on those who claim it.

    As I pointed out in a prior comment thread, one of the leading neuroscientific positions on consciousness – Integrated Information Theory – is a non-computationalist (and non functionalist) position. You cannot even claim any sort of scientific consensus on the subject.

    =================================
    Hi everyone,

    As I have said before, my position on this is one of agnosticism, but leaning heavily away from Naturalism and definitely away from Computationalism.

    I am skeptical that any machine simulatable by an algorithm can be conscious (by which I mean that I am skeptical, for reasons the word count won’t allow, that this moment of consciousness that I experience right now could, even in principle, be the result of an algorithm being run or of a system simulatable by an algorithm).

    But that does not mean that such a machine cannot be intelligent, or can understand things (if you define ‘understand’ as what is being tested in, for example, a comprehension test or a University mathematics examination or any of the functional definitions used by educationalists). Such intelligence or understanding may not require consciousness.

    If such machines can be intelligent, then I think it will more likely be a problem for my kids, or grandkids if I ever have any; current progress in AI is not encouraging.

    Again I present my solution to the megalomaniac artificial brain (in case it is not amenable to the usual solutions, like reading it Eluard poetry):

    An off switch.

  33. Marko, I did ask the author to participate, and he said he would. But of course I can’t really force people to do so. Still, this hardly seems like a useless discussion….

  34. For anyone interested in learning more about the Chinese Room debate, here’s an old paper Chalmers wrote about whether Searle’s arguments and replies are effective against a “connectionist” reply.

    Click to access subsymbolic.pdf

    One thing I am fairly well-educated about is computation, and I can say with some authority that in this paper Chalmers displays a pretty decent understanding of connectionism. He makes some points much like the ones I made above, but in a way that engages more closely with the ongoing philosophical discussion of the time.

  35. I think this is number 5 for me. I get Aravis’s frustration, but I’m also open to the suggestions others have made. For example, Coel’s statement: “One can readily turn computers into robots by equipping them with robotic arms, video cameras, etc, and building in autonomous decision making based on those sensory inputs. Hardware additions giving “causal powers” are the easy bit, the software is the harder thing.” I get that too, even though it seems a bit glib and doesn’t directly address the OP’s contention that human mentality (and everything that might entail, much of which we don’t even understand in non-human animals) cannot be replicated in machine mentality. Even if I assume that over time we can come very close to such a replication, there is still the cautionary tale that surfaces in the second part of Bishop’s article, i.e., the dystopian scenario where the machine goes rogue and turns on its creator. My guess is we don’t even get that far. We ultimately destroy ourselves using machines while rationalizing that our demise is the machine’s fault. It will provide the machines a convenient narrative upon which to build their own dystopian tales.

  36. victorpanzica.

    You don’t have an argument, you have a bunch of assertions: “it’s possible,” “we can do this,” “humans are machines,” etc.

    To even begin an argument here, you would need to produce a viable interpretation of human behavior and responses in purely mechanistic terms, including the phenomena I described in my previous 2 posts.

    Alternatively, you could go ahead and build the conscious computer of your dreams within both our lifetimes (which, due to my health, doesn’t give you much time).

    You also haven’t answered the question – why would we want to do this? What is so wrong with being human?

    Aravis,

    Frankly, I’ve never understood why you keep trying to rebut Coel. Why not have your say, let him have his say, and move on to a different discussion? One of the things I’ve learned here is to let anyone who seriously disagrees with me (without acknowledging my points) have the last say. Usually, if what they say is truly wrongheaded, this will be recognizable. But I would hate it if you did not leave any clarifying comments. I have learned from you. This site has some issues, in that it should be about learning, but some people come here with definitely set views they wish to propagate. (And to be fair, sometimes they happen to be in the neighborhood of being right.) But we all slip into that attitude, and it is something to be aware of and to correct. That some have difficulty recognizing this doesn’t mean there isn’t value in posting here for those of us who do.

    All,

    One of the problems at this site is that the comments get digressive to the point of losing sight of the original article. Sometimes this is okay, in that it riffs off greater implications of what the OP is saying. But sometimes it goes far astray. If the composer of the OP does not respond in such a climate, do not be disappointed.

    I fear many of the comments here have failed to address Dr. Bishop’s real point, which is that our governments are putting too great a faith in AI, as it is today, controlling weapons of mass destruction, an argument buttressed by real-world (not theoretical) examples of frightening near-misses; and I am sure there are many more that could be collected from the archives, if these were not hidden by government sanction.

    Maybe AI can accomplish consciousness and superior intelligence in some future, but right now it is dumb machinery. Are we willing to trust that future to it – as it is today? I think not.

    (Aside: I find it interesting that most of the AI proponents here have failed to acknowledge Dr. Bishop’s having worked for years as a researcher and theorist in AI. Does expertise in a science itself no longer count for anything among those propounding it?)

  37. “…this sort of uneducated conversation…”

    This conversation, as you point out, actually recapitulates the arguments among the more “educated”, but in a shorter timeframe. Just because several of these counterarguments have names in the literature because Searle listed them doesn’t mean his responses are accepted by many people as definitive.

    On syntax and semantics:

    “the difference between inert instantiation and dynamic instantiation is nonsyntactic: P, the property of being a Process at issue, is not a formal or syntactic property but, necessarily (essentially), includes a nonsyntactic element of dynamism besides… Note that given Functionalism’s identification of thinking with Program execution and the essential dynamism of execution, Searle’s denigration of “the robot reply” as tantamount to surrender of the functionalist position” [http://cogprints.org/240/1/199802002.html]

    On understanding:

    “Evaluating understanding in oneself and in others are thus two rather different tasks. In ourselves the criteria we use are private, and, I shall argue, ultimately tied to feelings. As regards the understanding of others, our assessment must be based on observable behavior… Now Searle, on balance, wants a machine to satisfy a stricter criterion than the one we normally use in assessing the understanding of humans… Based on demonstrations of competence, most of us routinely attribute understanding to other people, domestic animals, and perhaps machines, without trying to probe the understander’s subjective state.”

    [http://opensiuc.lib.siu.edu/cgi/viewcontent.cgi?article=1203&context=tpr]

    I think nonhuman problem solving is intelligence, probably doesn’t involve syntax, and I would be most impressed if we had an AI as intelligent as a fox.

  38. OK, I give up. It seems nobody wants to talk at all about the central issue. It’s a shame, because a mistaken belief in computational theories of mind could only destroy us if we bought en masse into the posthumanist idea that we could “enhance” ourselves by uploading our minds to computers (not likely at present, I’d say). If Searle is right, we’d actually turn ourselves into techno-zombies, retaining the behaviour but losing all subjectivity.

    On the other hand, Bostrom and Hawking et al are pointing to a real existential threat on the horizon, but we are ignoring it. Still, I bow to the wishes of the majority.

    Coel:

    You miss the point of “causal powers” here. It has nothing to do with the robot/computer’s capacity to cause things to happen; it’s about the distinction between data processing by instantiating algorithms, on the one hand, and physical processes which cause consciousness by virtue of their direct effects at a physical/chemical level, without having to go through an intermediate step of implementing algorithms, on the other.

    Of course any physical process can be described in computational terms. The old joke: it’s miraculous how everybody’s legs are just long enough to reach the ground! But data processing cannot possibly be the basis of consciousness, because it isn’t an observer-independent aspect of the physical world. So the right way to investigate the origin of consciousness is neurobiology, and computational explanations are a tempting sidetrack.

    Searle, going way beyond the Chinese Room, says that you could create a computer model of the brain down to the finest detail, but you still would not have caused any subjective consciousness or true intentionality, any more than a perfect computer model of a forest fire would burn anything. Computationalist cognitive science says this is wrong, because, unlike anything else, the conscious mind is actually “made of” computation. “Computing” names an observer-relative aspect of reality, like money or the English language or a political office, whereas the processes which actually cause consciousness are physical, observer-independent.

    One last attempt to raise the core issue: Bostrom starts his book with a fable. The sparrows say, “We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!” One bird suggests, “Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”

    The reply is: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg… After we have succeeded in raising an owl, then we can think about taking on this other challenge.”

    I suggest that that’s exactly what we are doing here! To survive we must solve the problem of controlling advanced AI before we solve the problem of creating it! And there will be many political, economic and military pressures to reverse that order. And whether it is conscious is irrelevant to this issue!

  39. Dan, Sorry I didn’t make that part clear. My idea would include that a robot passing the test would, itself, or at least a fellow member of the species, actually confront a particular is-ought moral situation and transcend it, with a cultural-psychological based decision.

    I agree that just knowing what the idea is would be little more than “book learning.”

    gwarner: Yes, it’s logically possible for an entity to show intelligence without consciousness. But we’ve seen no such instances of it, and I’m not going any closer than that to p-zombie territory.

    As for the rest of your comment in that post: the background of my comment (autonomy, self-replication, and other things from biological evolution) you apparently also think isn’t worthy of consideration in AI. I’m guessing you’re probably in a minority there, too.

    All: Per part of gwarner’s third comment, it’s really not “sufficiently advanced AI” we need to worry about, it’s the people behind the computers, at least for right now. (The author really has two issues, Warner, and many commenters have been talking on topic, just about one of the two, if we’re precise.)

    A year or two ago, for example, I read a true-to-life historic novel discussing computers that control our utility systems and so much more. The book opens with the lights at the White House repeatedly flickering on and off. Then the phone rings. And the voice at the other end says “Hello, this is Beijing” or something similar. You can probably fill in the next steps in the story line yourself.

    Now, do we have more dangers, possibly, in 20 years or whatever, from a pack of computers that are semi-intelligent enough to act like a pack of silicon wild dogs, but not intelligent enough to be a silicon Mafia? Maybe. I don’t know. But, we could (almost) surely “euthanize” them, if necessary, to extend the analogy. And, theoretically, cut off further advances in computing after that.

    The author’s concerns are interesting, but … perhaps a bit overblown. I don’t know. I hope we don’t get to the point of an unintended empirical testing.

    EJ Well put, otherwise. Certain people, certain themes, it’s like shooting fish in a barrel that refuse to admit they’ve been shot.

  40. SocraticGadfly (and others):

    Thanks for responding. I’m staying out of p-zombie land myself. I agreed with Massimo’s view on this, in SS and in Rationally Speaking. Call a nonconscious advanced AI a technological zombie, perhaps? I don’t think any arguments against p-zombies apply to t-zombies. Nor have I heard a strong technical or philosophical argument why t-zombies are unlikely to be built. (I understand that computationalist theorists of mind would claim that any sufficiently advanced AI would probably be conscious.) I agree that we haven’t seen one yet; nor have we seen a conscious super-AI either, nor many other technical achievements that are likely to be devised in future.

    There’s a fork: if Searle is right, then a nonconscious super-AI is much easier to build than a conscious one, and the issue is, could it be a threat? If he’s wrong, then Prof Bishop’s argument falls anyway, and the issue is the same.

    I accept that Prof. Bishop had at least two issues, but the anti-computer consciousness one was intended to address the more fundamental ‘AI threat’ one. Anyway, I gave my reason for believing that the latter is more pressing and more threatening.

    Unlike Aravis, I’m not bored by the argument about computing and consciousness. But I’m with Bill Skaggs on this: “Superseding of humans by machines does not imply instantiating all aspects of human mentality.” I believe that the superintelligence issue is much more urgent. It looks as if nobody here agrees; I’m curious why. Or, pace Aravis, is the Searle argument just too engaging?

    Maybe we’ll come back to the possible dangers of AI, and Bostrom’s arguments. I hope so, because I think we should take them seriously. I don’t believe this just a Hollywood scenario.

    If an AI became dangerous, you say, we could euthanise it. Maybe; if we know there’s a danger, if we haven’t underestimated it (in which case we’ll be the ones euthanised), if we are united in doing so and aren’t more concerned to preserve it as an “edge” over others, and a lot of other “perhapses”. I think there are strong temptations, and powerful political/military/economic pressures, to wing it, to say “it’ll be alright on the night, let’s just get it working.”

    Bostrom’s bottom line is that the time to consider all this is now, not after we’ve created the AI. He spends a lot of time on this, and equally on how we could safeguard ourselves, and on what wouldn’t work (Asimov’s Laws, for example). It’d be great if he could be induced to contribute here, or, failing that, for someone else to address the question. I’d have a go myself, but I’m not eligible by the criteria for contributions, since I’m no longer in academia.

  41. Aravis, I hope you continue to comment on the threads here. I know it must be frustrating when you appear to have no impact on some of the regulars, but I know I have learned from reading and digesting your comments. I’m also working my way through the Wittgenstein companion PDF you flagged recently. Facts I can pick up anywhere, but for informed comment I need professionals like you and Massimo. Never mind the scientissimos and keep teaching.

  42. Before the SciSal 5-day deadline let me add to others:

    Dr. Bishop’s OP came in two sections, the second of which was the more relevant, important and pressing: namely, the dangers of increasing popular and official reliance on, and over-confidence in, computers and their data-crunching. The computer is rarely truly at fault; it is *their programmers’* fallibility, ignorance and/or inability to foresee and cover for possible eventualities, it is *their programmers’* failure to make the “stupid” computer’s operations and “decisions” foolproof.

    An autopilot program can fly a plane, probably more efficiently under many (known) conditions, but it took a very skilled and lucky human pilot to land an aircraft with problems on a river in the centre of New York in such a way as to save all aboard.

    To the best of my recollection not one comment has raised a plea for *greater* trust in computers and their programs, so nobody had much to add to or comment about Dr. Bishop’s serious second section: opinion seemed unanimously in agreement.

    But the first part of his scholarly essay (about AI) invited, and therefore got, a real going-over, and caused dissension, even hackle-raising. It was grist to SciSal’s mill of open philosophy: a mix of varying expertise and opinion, sometimes with perhaps only tenuous links to the OP topic, yet still in accordance with the webzine’s general aim of disseminating Enlightenment.

    N.B. If you refute the supernatural then even the best “Established Authority” is “human” wisdom and possibly fallible.

  43. I think that with the Coel types, they believe that the “hardware” is “material irrelevant,” and even “structure irrelevant,” to the degree they think about that. I of course don’t. Human nerves are much slower than computer electronics, of course, which is why we have evolved to do as much “fast” thinking as possible. Computers don’t distinguish between fast and slow processing. Also, at least to some degree, we truly operate “in parallel” and computers don’t.

    I mean, the dissimilarities between the wiring, gates, semiconductors, etc. of computers and the neurons, synapses, neurotransmitter “locks” etc. of carbon based life (animal division) are far greater than the similarities. So, yes, substrates matter.

    Even more, processes matter. As for the world of biological life, I suspect a silicon-based life form, like Star Trek’s Horta, that evolved from the ground up through normal evolutionary processes, would be far more like carbon-based biological life than like computers or robots, even the most intelligent of them, if those had never reached the point of autonomy, evolutionary development and self-replication.

    This all goes further to show that analogies between machine and human thinking are weak and poor.

    To reiterate what I said in longer, and more indirect, form in a previous comment: A good analogy is a fine tool. A poor one becomes an enslaving master when the person who created it, and others of like mind, refuse to abandon it.

    As a former divinity student, as well as a current newspaper editor, I’ve seen plenty of analogies in sermons and in op-ed columns. I think I’ve created a few good ones. I’ve probably created a few so-so ones. With one or two possible exceptions, I’ve not deliberately created any bad ones.

    The computer analogy fails in another way, per the evolutionary biology angle. Computers, and robots, are created in a sterile, artificial environment. That’s why Moore’s Law, at least for now, continues to hold for them, but no such thing exists in biological evolution. At some point, a robot, to attain that “variety of AI worth having,” will have to grow, develop, evolve, and replicate on its own accord in a “real world” environment.

    I think the Hume Test is better than Dan’s Ryle Test, too. If not Watson, a super-Watson will pass a Ryle Test soon enough, by, let’s say, no later than 2050. As I fully spun it out, it will be a long time before a machine, a robot in particular rather than a “tethered” computer, passes a Hume Test. I doubt something like that would happen by the end of this century.

    It’s also better because, per my previous comments, it will really apply to robots who, like animals, are interacting with their environments.

    With that, I’m kind of where Aravis is at, but for different reasons.

  44. When I was invited to contribute a short essay for Scientia Salon, Massimo suggested the role of the salon was to “facilitate understanding of scientific research and scholarship by the general public”; so I was a little taken aback to receive 90+ thoughtful, incisive and probing responses to my small article, many of which raise critical issues that would warrant an entire essay to respond to properly. Unfortunately, having just been awarded £2m to set up and run a new research Centre at Goldsmiths (to apply deep learning and other advanced machine learning algorithms to problems involving very big data), and with a noisy and demanding 1-year-old baby also taking up my life, I simply do not have the time to respond to each in the full and proper way it really deserves, and for that I sincerely apologise. I would just like to say thanks to everyone who read the article, and double thanks to those who made an effort to engage with it and respond; so if in the following I don’t engage with your specific point, once again, please accept my sincere apologies ..

    What I will do instead [due to the word limit, over a few comments] is make a few general remarks responding to broad criticisms of the article (e.g. concerning Searle and Penrose; albeit, as these arguments are both now well known and well rehearsed, I think challenges to their positions are best made in the academic literature, and furthermore these authors are well able to defend their own positions), and then consider the engagement with my own small contribution to the debate: the Dancing with Pixies reductio.

    [Marco Neves] Incidentally, as a general point that comes up in these debates again and again, it seems clear to me that neuron-replacement arguments fall prey to Searle, Penrose and Bishop as long as the neural replacement is merely **formally** duplicating the functional [causal] power of the neuron and not duplicating its electro-chemical and biological nature. See also Nasuto & Bishop, “Of (zombie) mice and animats”, in Muller, V. (ed) (2012), Theory and Philosophy of Artificial Intelligence. For many years computation was the only counter to mysterianism when it came to scientific attempts to understand the mind; that position no longer holds. Indeed, the negative arguments (outlined in the essay and reviewed below) form a key driver for my own interest in the new radical cognitive science (cf. strong embodiment; enactivism; ecological cognitive science and embedded theories of mind).

  45. 1: Regarding the curious case of the Chinese room, firstly it is astonishing how many people wrongly claim that Searle’s argument targets the possibility of machine understanding; it doesn’t (and I don’t) – Searle is quite explicit about this in MBP; Searle targets the idea that semantics can arise from syntax qua computation. Secondly, Searle writes that much of the confusion around the argument is due to an underlying confusion regarding epistemology and ontology; and this confusion seems to lurk under the skin of several of the comments herein.

    To see why there is an ontological distinction between Searle responding in Chinese (by following a program) and Searle responding in English, consider the case of Searle responding to a joke: in the former case he might output the Chinese ideograph for HA-HA, but as he cranks out the program to do this at no point does he experience a sensation of humour/laughter (i.e. by merely following the program he doesn’t ‘understand’ the joke and doesn’t get the joke; think of a small child laughing at an adult joke she similarly doesn’t really ‘get’). Contrast that with Searle’s response to a joke in English. I claim there is an ontological distinction between the two cases, and that distinction is a difference of semantics and understanding.

    Incidentally, many of the themes regarding the CRA are echoes of ideas discussed at length in the book I co-edited with John Preston – Views into the Chinese Room (Clarendon/Oxford University Press, 2002); for a more recent review see (i) Bishop, “The phenomenal case of Turing and the Chinese room”, in Cooper & Leeuwen (eds) (2013), “Alan Turing: his work & impact”, and (ii) Nasuto & Bishop, “Of (zombie) mice and animats”, in Muller, V. (ed) (2012), Theory and Philosophy of Artificial Intelligence.

    [Phil H.] Your ad hominem on Searle (“John Searle’s racist room argument”) merely demonstrates your own lack of reading in the area; if you had consulted Searle’s paper (and not just a poor summary of it) you would have known that the reason he chose a “Chinese” room was merely that (a) Chinese was a language he didn’t speak or read a word of and (b) Chinese could be characterised by ideographs (‘squiggles & squoggles’ in Searle’s original article).

    [disagreeable me] His systems reply is effectively the one Searle discussed in MBP and that Haugeland (in my CRA volume, above) went into more seriously; see my comments above and my essay in the Turing volume for a more detailed response.

  46. 2: Penrose, ‘On understanding understanding’

    I first came across the Gödel-style arguments when reading the famous exchange between the Oxford philosopher John Lucas and Paul Benacerraf. In addition, as someone who used to teach “Theory of Computation” and deployed a method inspired by ‘Gödel numbering’ [to encode enumerable programs] in my own exposition of the Turing non-computability of, say, the Halting Problem, I also (and independently) came to ponder analogous questions [regarding computability] to those Penrose so elegantly unfolds in mathematical logic.

    In response to the criticisms of my deployment of Penrose I can merely note that some people continue to assert that Penrose is merely redeploying the Lucas argument; he most certainly is not. The Penrose position is – imo – a tad more subtle, hence simple regurgitated “Benacerraf style” rebuttals miss their target.

    In a public lecture at Google, Penrose clarified that, a few minor points aside (and these have now been corrected in reprints of “Shadows of the Mind”), he remains fully sanguine that the underlying logical argument is sound; indeed, Penrose is a mathematician of some renown and was recently asked to present the case as plenary speaker at the Gödel Centenary conference in Vienna (co-organized by the University of Vienna, the Institute for Experimental Physics, the Kurt Gödel Research Center, the Institute Vienna Circle, and the Vienna University of Technology). It is possible he has made some schoolboy error in his underlying mathematical logic – in my own case more than possible – but, I submit, it is perhaps not hugely likely. For those who have not read Shadows of the Mind, I have endeavoured to summarise the core of Penrose’s argument below. NB: any errors introduced in the following summary will be due to my carelessness in putting this material together so late at night. I have nothing further to add on Penrose’s position; to me the argument is both clear and sound – further remarks on this are thus perhaps more properly directed at Penrose.

    (a) Mathematical preamble:
    Consider ‘a’ to be a sound set of rules (an effective procedure) for determining that a computation C (n) does not stop. C (n) is merely some computation on the natural number n (e.g. ‘find an odd number that is the sum of n even numbers’). Let {A} be a formalization of all such procedures known to human mathematicians. Ex hypothesi, if the application of the set of rules {A} terminates, then C (n) does not stop (soundness).

    In the Penrose exposition he asks us to imagine a group of human mathematicians continuously analysing C (n), only ceasing their contemplation if and when one of the group shouts out, “Eureka!! C (n) does not stop”. NB: {A} must be sound (i.e. it cannot be wrong) when it decides that C (n) does not stop, since if any of the procedures in {A} were unsound it would eventually be ‘found out’.

    Enumerating computations on n: computations of one parameter, n, can be enumerated (listed): C0 (n), C1 (n), C2 (n) .. Cp (n), where Cp (n) is the pth computation of one parameter, n. Such an ordering is clearly computable. Thus A (p, n) is the effective procedure that, when presented with p and n, attempts to discover whether Cp (n) will not halt; furthermore, if A (p, n) HALTS we KNOW that Cp (n) does not HALT.
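
    To make the “clearly computable” claim concrete, here is a minimal sketch (my own illustration, not Penrose’s or Bishop’s) of one way such an enumeration can be generated: list all candidate program texts in length-lexicographic order, so that every one-parameter computation Cp has a definite index p.

    ```python
    # A minimal sketch (my illustration, not from the essay): the enumeration
    # C0, C1, C2, ... can be realised by listing candidate program texts in
    # length-lexicographic order, so the ordering itself is computable.
    from itertools import count, product
    import string

    ALPHABET = string.printable  # characters a program text may be written in

    def program_texts():
        """Yield every finite string over ALPHABET, shortest first."""
        for length in count(0):
            for chars in product(ALPHABET, repeat=length):
                yield "".join(chars)

    def nth_program_text(p):
        """Return the text of the p-th candidate program C_p in the enumeration."""
        gen = program_texts()
        for _ in range(p):
            next(gen)
        return next(gen)

    # Example: the first few entries of the enumeration (most are not valid programs,
    # which does not matter -- invalid texts can be read as computations that never halt).
    print([nth_program_text(p) for p in range(3)])
    ```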

    (b) The Penrose argument:

    1. If A (p, n) halts THEN Cp (n) does not halt.
    2. Let (p = n) [i.e. the Self-Applicability Problem, SAP (n)].
    3. If A (n, n) halts THEN Cn (n) [i.e. SAP (n)] does not halt.
    4. But A (n, n) is a function of one number, n, hence it must occur in the enumeration of C; let us say it occurs at position k (i.e. it is computation Ck (n)).
    5. Hence A (n, n) = Ck (n); recall k is not a parameter but a specific number – the location of A (n, n) in the enumeration of C.
    6. Now we examine the particular computation where (n = k).
    7. Hence, substituting (n = k) into [5], we get: A (k, k) = Ck (k).
    8. But now, rewriting [3] with (n = k), we observe:
    9. If A (k, k) halts THEN Ck (k) does not halt.
    10. Now, substituting from [7] into [9], we get the following contradiction [11] if Ck (k) halts:
    11. If Ck (k) halts THEN Ck (k) does not halt!
    12. Hence from [11] **WE KNOW** that IF A IS SOUND then Ck (k) CANNOT HALT (or we have a contradiction). In other words, Penrose claims, “We know what A is unable to ascertain.”
    13. But from [7] we know that A (k, k) CANNOT HALT either, since from [7] A (k, k) = Ck (k).
    14. Thus, if A is sound, A is not capable of ascertaining that this particular computation – Ck (k) – does not stop, even though Ck (k) cannot halt (if {A} is sound), or from [11] we get a contradiction.
    15. But if A exists and is sound, we KNOW – from [12] – that Ck (k) MUST not stop, and hence we KNOW – from [13] – something that {A} is **provably** unable to ascertain.
    16. Hence A cannot encapsulate mathematical understanding.

    This result leads Penrose to assert, “Human mathematicians are not using a knowably sound argument to ascertain mathematical truth” (Shadows, p. 76). Lastly, it is important to note that Penrose sees evidence of the incomputable nature of mind in the non-computable processing described by Orchestrated Reduction, and just last year he clarified in the journal Physics of Life Reviews that recent results in physics go a long way to proving his case.
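
    For readers who prefer to see the diagonal step in code, the following toy sketch (my own paraphrase of the classical halting-style construction that the steps above parallel; A, C_K and the index K are hypothetical placeholders, not anything from Penrose) shows why A cannot ascertain the non-halting of its own diagonal computation.

    ```python
    # Toy sketch of the diagonal step (my paraphrase, not Penrose's code).
    # Hypothetical assumption: A(p, n) is a sound, computable procedure that halts
    # only when it has ascertained that the p-th one-parameter computation C_p(n)
    # does not halt.

    def A(p, n):
        """Hypothetical sound non-halting ascertainer; no such procedure can actually
        be exhibited in full, so this is only a placeholder."""
        raise NotImplementedError("placeholder for the assumed procedure {A}")

    # Because A(n, n) takes a single argument n, it is itself a one-parameter
    # computation and so occupies some position k in the enumeration C_0, C_1, ...
    K = 42  # hypothetical index of A(n, n) in the enumeration (illustrative only)

    def C_K(n):
        """The diagonal computation C_k: by construction it simply runs A(n, n)."""
        return A(n, n)

    # Steps 9-13 in comment form:
    #   * If C_K(K) = A(K, K) halted, soundness of A would imply C_K(K) does not halt
    #     -- a contradiction -- so C_K(K) cannot halt.
    #   * But then A(K, K) never halts either, i.e. A never ascertains the very fact
    #     (that C_K(K) does not halt) which the argument above lets *us* ascertain.
    ```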

    [disagreeable me] It is not clear the questioner has read Penrose (see my summary above).

  47. 3: Bishop, “On machine consciousness”
    Interestingly, there were virtually no serious objections to the Dancing with Pixies reductio. Only one of the three problems needs to hold for the “humanity gap” to be real.

    [Disagreeable_me] That someone would seriously state, “My answer borders on the heretical, but it is that we should not impute consciousness to physical things but to mathematical structures (i.e. algorithms)”, rather than simply bite the bullet and reject computation as the underlying metaphor of mind, suggests to me just how much the questioner is in the grip of the [computational] ideology. Incidentally, even the notion of just what constitutes a computation is perhaps slightly more problematic than it may at first sight appear (see, for example, Spencer, Nasuto, Tanay, Bishop & Roesch, “Abstract Platforms of Computation”, Proc. AISB2013, Computing & Philosophy symposium, What is computation?). All classical computation is observer relative and contingent on a mapping from the physical system to computational state (see any of the extended treatments of this argument for detail).
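
    To illustrate the observer-relativity point in the crudest possible way, here is a minimal sketch (my own toy construction, not the Dancing-with-Pixies formalism): given any physical system that passes through a sequence of distinct states, one can always define a mapping under which those states “count as” the successive states of a chosen automaton run, so attributing the computation depends on the chosen interpretation rather than on the physics alone.

    ```python
    # A minimal sketch (my construction, not from the DwP papers) of observer-relative
    # computation: ANY physical system passing through distinct states s_0, s_1, ...
    # can be paired, by a freely chosen mapping, with the successive computational
    # states of some automaton run -- the "implementation" lives in the mapping.

    # State trace of some fixed finite-state machine execution we care about
    # (hypothetical labels):
    fsm_trace = ["q0", "q1", "q2", "q1", "q3"]

    # Distinct physical states of an arbitrary system over the same interval
    # (here: clock readings, standing in for wall/rock/pixie states):
    physical_trace = [f"t={i}" for i in range(len(fsm_trace))]

    # The observer-relative mapping: the physical state at step i "counts as"
    # the i-th computational state of the run.
    interpretation = dict(zip(physical_trace, fsm_trace))

    for phys, comp in interpretation.items():
        print(f"physical state {phys!r}  ->  computational state {comp!r}")
    ```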

    [vvmarko – AFAIK yes & yes]

    [SocraticGadfly] – Thanks for your comments; actually I have addressed the “Flashcrash” – see my essay in Philosophy & Technology journal, “All watched over by machines of Silent grace”.

  48. EJWinner, you said:

    “No computer will ever weep over a dead son, or rejoice in the successful life of a daughter. No computer will ever suffer disappointment or need to find ways to live with it and carry on.
    No computer will ever have to determine what is mere lust or truly love, control its anger and laugh at its own flaws. No computer will ever confront its own mortality.”

    This is an assertion, and yes, on present science and data no computer can model human (or animal) emotions. But who is to say they won’t in the future? With that modeling comes the complex integration of emotion, reason, social cohesion, etc. I agree it is a very complex and daunting problem, and I agree one’s own emotions can obfuscate reason to the point of being exasperated by those who believe you can achieve this with AI.

  49. Re the Penrose argument: Megill et al [2012], as far as I understand it, use a very similar argument to reach a quite different conclusion: that the set of humanly known mathematical truths (at any given moment in human history) is finite and so recursive, and therefore axiomatizable and therefore computable. From the first incompleteness theorem, it is either inconsistent or incomplete. Further, they claim that “any given mathematical claim that we could possibly know could be the output of a Turing machine.”

    “One can generalize Godel’s theorem to apply not simply to any particular formal mathematical system that contains enough number theory, but rather to known human mathematics at any given moment. To elaborate, Gödel showed that there will be at least one true arithmetical claim—the ‘‘G sentence’’—that will not be provable in any consistent axiomatization of number theory. But humans can, of course, look and see the truth of the G sentence for many systems (e.g., the system used in the Principia). (Of course, Gödel himself thought that humans had some sort of intuition into the mathematical realm that allowed us to see the truth of at least some mathematical claims.) But [our] claim generalizes Gödel’s result beyond formal systems to all of humanly known mathematics; at any given moment in human history, there will be at least one G sentence that we do not know, assuming that human mathematics is consistent.”

    They claim to (slightly) formalize Benacerraf’s and also Boyer’s (1983) objections.
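
    Schematically, the chain of inferences being reported runs as follows (my own compression, assuming the usual conditions of Gödel’s first theorem apply to the axiomatization in question):

    ```latex
    % My schematic of the reported argument, not the authors' notation (requires amsmath).
    \begin{align*}
    K_t &:= \text{the set of humanly known mathematical truths at time } t\\
    K_t \text{ finite} &\;\Rightarrow\; K_t \text{ recursive} \;\Rightarrow\; K_t \text{ recursively axiomatizable (hence computable)}\\
    \text{G\"odel I} &\;\Rightarrow\; K_t \text{ is inconsistent or incomplete; if consistent, some true } G_t \notin K_t
    \end{align*}
    ```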

    Re claims about Orchestrated Reduction, I would just point to a persisting general skepticism among other physicists about escaping decoherence at 37 °C. I don’t see how one can claim this as more than interesting speculation at this time.

    http://link.springer.com/article/10.1007/s10516-013-9211-x
