The danger of artificial stupidity

by J. Mark Bishop

It is not often that you are obliged to proclaim a much-loved international genius wrong, but in the alarming prediction made recently regarding Artificial Intelligence and the future of humankind, I believe Professor Stephen Hawking is. Well, to be precise, being a theoretical physicist — in an echo of Schrödinger’s cat, famously both dead and alive at the same time — I believe the Professor is both wrong and right at the same time.

Wrong because there are strong grounds for believing that computers will never be able to replicate all human cognitive faculties and right because even such emasculated machines may still pose a threat to humanity’s future existence; an existential threat, so to speak.

In an interview on December 2, 2014, Rory Cellan-Jones asked how far engineers had come along the path towards creating artificial intelligence, and, slightly worryingly, Professor Hawking replied: “Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Although grabbing headlines, such predictions are not new in the world of science and science fiction; indeed my old boss at the University of Reading, Professor Kevin Warwick, made a very similar prediction back in 1997 in his book “March of the Machines.” In that book Kevin observed that even in 1997 there were already robots with the “brain power of an insect”; soon, he predicted, there would be robots with the brain power of a cat, and soon after that there would be machines as intelligent as humans. When this happens, Warwick claimed, the science fiction nightmare of a “Terminator” machine could quickly become reality, because these robots will rapidly become more intelligent than, and superior in their practical skills to, the humans that designed and constructed them.

The notion of humankind subjugated by evil machines is based on the ideology that all aspects of human mentality will eventually be instantiated by an artificial intelligence program running on a suitable computer, a so-called “Strong AI” [1]. Of course if this is possible, accelerating progress in AI technologies — caused both by the use of AI systems to design ever more sophisticated AIs and the continued doubling of raw computational power every two years as predicted by Moore’s law — will eventually cause a runaway effect wherein the artificial intelligence will inexorably come to exceed human performance on all tasks: the so-called point of “singularity” first popularized by the American futurologist Ray Kurzweil.
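The arithmetic behind this runaway scenario is easy to make concrete. The sketch below (an illustrative helper, not from the article) computes the raw growth factor implied by Moore’s-law doubling every two years:

```python
# Illustrative sketch: raw computational power doubling every two years,
# as predicted by Moore's law. `moore_factor` is a hypothetical helper.

def moore_factor(years, doubling_period=2.0):
    """Multiple of today's computing power after `years` years."""
    return 2 ** (years / doubling_period)

# Two decades of doubling every two years gives a 1024-fold increase:
factor = moore_factor(20)  # 2**10 == 1024
```

Note that this exponential concerns only raw computational power; whether more computation yields more of the relevant cognitive capacities is precisely what the three arguments below dispute.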

And at the point this “singularity” occurs, so Warwick, Kurzweil and Hawking suggest, humanity will have effectively been “superseded” on the evolutionary ladder and may be obliged to eke out its autumn days gardening and watching cricket; or in some of Hollywood’s more dystopian visions, be cruelly subjugated or exterminated by machine.

I did not endorse these concerns in 1997 and do not do so now; although I do share — for very different and mundane reasons that I will outline later — the concern that artificial intelligence potentially poses a serious risk to humanity.

There are many reasons why I am skeptical of grand claims made for future computational artificial intelligence, not least empirical. The history of the subject is littered with researchers who have claimed a breakthrough in AI as a result of their research, only for it later to be judged harshly against the weight of society’s expectations. All too often these provide examples of what Hubert Dreyfus calls “the first step fallacy” — undoubtedly climbing a tree takes a monkey a little nearer the moon, but tree climbing will never deliver a would-be simian astronaut onto its lunar surface.

I believe three foundational problems explain why computational AI has failed historically and will continue to fail to deliver on its “Grand Challenge” of replicating human mentality in all its raw and electro-chemical glory:

1) Computers lack genuine understanding: in the “Chinese room argument” the philosopher John Searle (1980) argued that even if it were possible to program a computer to communicate perfectly with a human interlocutor (Searle famously described the situation by conceiving a computer interaction in Chinese, a language he is utterly ignorant of) it would not genuinely understand anything of the interaction (cf. a small child laughing on cue at a joke she doesn’t understand) [2].

2) Computers lack consciousness: in an argument entitled “Dancing with Pixies” I argued that if a computer-controlled robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be present in all objects throughout the universe: in the cup of tea I am drinking as I type; in the seat that I am sitting on as I write, and so on. If we reject such “panpsychism,” then we must reject “machine consciousness” [3].

3) Computers lack mathematical insight: in his book The Emperor’s New Mind, the Oxford mathematical physicist Sir Roger Penrose deployed Gödel’s first incompleteness theorem to argue that, in general, the way mathematicians provide their “unassailable demonstrations” of the truth of certain mathematical assertions is fundamentally non-algorithmic and non-computational [4].

Taken together, these three arguments fatally undermine the notion that the human mind can be completely instantiated by mere computations; if correct, although computers will undoubtedly get better and better at many particular tasks — say playing chess, driving a car, predicting the weather etc. — there will always remain broader aspects of human mentality that future AI systems will not match. Under this conception there is a “humanity-gap” between the human mind and mere “digital computations”; although raw computer power — and concomitant AI software — will continue to improve, the combination of a human mind working alongside a future AI will continue to be more powerful than that future AI system operating on its own. The singularity will never be televised.

Furthermore, it seems to me that without understanding and consciousness of the world, and lacking genuine creative (mathematical) insight, any apparently goal-directed behavior in a computer-controlled robot is, at best, merely the reflection of a deep-rooted longing in its designer. Besides, lacking an ability to formulate its own goals, on what basis would a robot set out to subjugate mankind unless, of course, it was explicitly programmed to do so by its (human) engineer? But in that case our underlying apprehension regarding future AI might better reflect the all too real concerns surrounding Autonomous Weapons Systems, than casually re-indulging Hollywood’s vision of the post-human “Terminator” machine.

Indeed, in my role as one of the AI experts on the International Committee for Robot Arms Control (ICRAC), I am particularly concerned by the potential military deployment of robotic weapons systems — systems that can take decisions to militarily engage without human intervention — precisely because current AI is still very lacking and because of the underlying potential of poorly designed interacting autonomous systems to rapidly escalate situations to catastrophic conclusions; such systems exhibit a genuine “artificial stupidity.”

A light-hearted example demonstrating just how easily autonomous systems can rapidly escalate situations out of control occurred in April 2011, when Peter Lawrence’s book The Making of a Fly was auto-priced upwards by two “trader-bots” competing against each other in the Amazon reseller market-place. The result of this process is that Lawrence can now comfortably boast that his modest scholarly tract — first published in 1992 and currently out of print — was once valued by one of the biggest and most respected companies on Earth at $23,698,655.93 (plus $3.99 shipping).
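The dynamics of that incident are easy to reproduce. The sketch below simulates two bots repricing against each other once a day, using the multipliers reported by observers at the time (roughly 0.9983 for the undercutting bot and 1.270589 for the other); both figures, and the function itself, are illustrative assumptions rather than anyone’s actual trading code:

```python
# Hypothetical reconstruction of the 2011 "Making of a Fly" pricing spiral:
# two trader-bots repricing against each other once a day. The multipliers
# are those reported by observers of the incident; the code is a sketch.

def simulate_price_war(price_a, price_b, days):
    """Bot A always undercuts B slightly; bot B always prices above A."""
    for _ in range(days):
        price_a = 0.9983 * price_b     # locally sensible: just undercut
        price_b = 1.270589 * price_a   # locally sensible: sell high
    return price_a, price_b

# Because 0.9983 * 1.270589 > 1, both prices grow roughly 27% per day;
# starting from ~$35, both bots are asking millions within 45 days:
price_a, price_b = simulate_price_war(35.00, 35.54, 45)
```

Each rule is individually defensible; it is only their interaction, with no human in the loop, that compounds into exponential escalation — exactly the pattern of concern in the military examples below.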

In stark contrast, a terrifying real-world example of “automatic escalation” nearly ended in disaster on September 26, 1983, when an automatic Soviet military surveillance system all but instigated World War III. At the height of Cold War tension (NATO’s provocative “Able Archer” exercise in central Europe would follow only weeks later), a malfunctioning Soviet early-warning system alerted the duty officer, Lieutenant Colonel Stanislav Petrov, that the USSR was apparently under attack by multiple US ballistic missiles. Fortunately, Petrov had a hunch that his alarm system was malfunctioning, and reported it as such. Some commentators have suggested that his quick and correct human decision to over-rule the automatic response system averted East-West nuclear Armageddon.

In addition to the danger of autonomous escalation I am skeptical that current and foreseeable AI technology can enable autonomous weapons systems to reliably comply with extant obligations under International Humanitarian Law; specifically three core obligations: (i) to distinguish combatants from non-combatants; (ii) to make nuanced decisions regarding proportionate responses to a complex military situation; and (iii) to arbitrate on military or moral necessity (regarding when to apply force).

Sadly, it is all too easy to concur that AI may pose a very real “existential threat” to humanity without ever having to imagine that it will reach the level of superhuman intelligence that Professors Warwick and Hawking so graphically warn us of. For this reason in May 2014 members of the International Committee for Robot Arms Control travelled to Geneva to participate in the first multilateral meeting ever held on Lethal Autonomous Weapons Systems (LAWS); a debate that continues to this day at the very highest levels of the UN. In a firm, but refracted, echo of Warwick and Hawking on AI, I believe we should all be very concerned.

___

Mark Bishop is Professor of Cognitive Computing at Goldsmiths, University of London, was Chair of the UK Society for Artificial Intelligence and the Simulation of Behaviour (2010-2014), and currently serves on the International Committee for Robot Arms Control.

[1] Strong AI takes seriously the idea that one day machines will be built that can think, be conscious, have genuine understanding and other cognitive states in virtue of their execution of a particular program; in contrast, weak AI does not aim beyond engineering the mere simulation of (human) intelligent behavior.

[2] Searle illustrates the point by demonstrating how he could follow the instructions of the program (in computing parlance, we would say Searle is “dry running” the program), carefully manipulating the squiggles and squoggles of the (to him) meaningless Chinese ideographs as instructed by the program, without ever understanding a word of the Chinese responses the process is methodically cranking out. The essence of the Chinese room argument is that syntax — the mere mechanical manipulation (as if by computer) of uninterpreted symbols — is not sufficient for semantics (meaning) to emerge; in this way Searle asserts that no mere computational process can ever bring forth genuine understanding and hence that computation must ultimately fail to fully instantiate mind. See Preston & Bishop (2002) for extended discussion of the Chinese room argument by twenty well known cognitive scientists and philosophers.
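A toy illustration of the purely syntactic manipulation Searle describes (a hypothetical two-entry “rule book,” not Searle’s own formulation) might look like:

```python
# A toy "Chinese room": the rule book is a lookup table mapping input
# ideographs to responses. The entries are hypothetical; the point is
# that the procedure pairs uninterpreted shapes with other shapes, and
# nothing in it requires (or produces) understanding of Chinese.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "今天天气好。": "是的，很好。",  # "Nice weather today." -> "Yes, very."
}

def chinese_room(symbols):
    # Match the input shapes against the book; emit the paired shapes.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # fallback: "Please repeat."

reply = chinese_room("你好吗？")
```

A real conversational program would be vastly larger, but Searle’s point is indifferent to scale: however big the table (or however sophisticated the program), the operator manipulates symbols by their shape alone.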

[3] The underlying thread of the “Dancing with Pixies” reductio (Bishop 2002, 2005, 2009a, 2009b) derives from positions originally espoused by Hilary Putnam (1988), Tim Maudlin (1989), and John Searle (1990), with subsequent criticism from David Chalmers (1996), Colin Klein (2004), and Ron Chrisley (2006), amongst others (Various Authors 1994). In the DwP reductio, instead of seeking to secure Putnam’s claim that “every open system implements every Finite State Automaton” (FSA) and hence that “psychological states of the brain cannot be functional states of a computer,” I establish the weaker result that, over a finite time window, every open physical system implements the execution trace of a Finite State Automaton Q on a given input vector (I). That this result leads to panpsychism is clear as, equating FSA Q(I) to a finite computational system that is claimed to instantiate phenomenal states as it executes, and employing Putnam’s state-mapping procedure to map a series of computational states to any arbitrary non-cyclic sequence of states, we discover identical computational (and ex hypothesi phenomenal) states lurking in any open physical system (e.g., a rock); little pixies (raw conscious experiences) “dancing” everywhere. Boldly speaking, DwP is a simple reductio ad absurdum argument to demonstrate that: IF the assumed claim is true (that an appropriately programmed computer really does instantiate genuine phenomenal states) THEN panpsychism is true. However if, against the backdrop of our current scientific knowledge of the closed physical world and the corresponding widespread desire to explain everything ultimately in physical terms, we are led to reject panpsychism, then the DwP reductio proves that computational processes cannot instantiate phenomenal consciousness.
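The state-mapping step at the heart of the reductio can be sketched in a few lines (an illustrative toy with hypothetical names; the full construction is in the papers cited above):

```python
# Illustrative sketch of the Putnam-style state mapping used in the DwP
# reductio: over a finite time window, any sequence of distinct physical
# states can be paired one-to-one with the execution trace of an FSA Q
# on a given input vector I, so the physical system "implements" that trace.

def putnam_mapping(physical_states, fsa_trace):
    """Map each physical state to the FSA state occupied at the same tick."""
    assert len(physical_states) == len(fsa_trace)
    assert len(set(physical_states)) == len(physical_states)  # distinctness
    return dict(zip(physical_states, fsa_trace))

# A "rock" passing through four arbitrary distinct states thereby
# "implements" the trace q0 -> q1 -> q1 -> q2 of some FSA Q(I):
mapping = putnam_mapping(["rock_t0", "rock_t1", "rock_t2", "rock_t3"],
                         ["q0", "q1", "q1", "q2"])
```

The triviality of the construction is the point: if executing Q(I) sufficed for phenomenal states, the rock would have them too, and panpsychism follows.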

[4] Gödel’s first incompleteness theorem states that “… any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. In particular, for any consistent, effectively generated formal theory F that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory.” The resulting true but unprovable statement G(ǧ) is often referred to as “the Gödel sentence” for the theory (albeit there are infinitely many other statements in the theory that share with the Gödel sentence the property of being true but not provable from the theory). Arguments based on Gödel’s first incompleteness theorem — initially from John Lucas (1961, 1968), first criticized by Paul Benacerraf (1967), and subsequently extended, developed and widely popularized by Roger Penrose (1989, 1994, 1996, 1997) — typically endeavor to show that for any formal system F, humans can find the Gödel sentence G(ǧ) whilst the computation/machine (being itself bound by F) cannot. Penrose developed a subtle reformulation of the vanilla argument that purports to show that “the human mathematician can ‘see’ that the Gödel Sentence is true for consistent F even though the consistent F cannot prove G(ǧ).” A detailed discussion of Penrose’s formulation of the Gödelian argument is outside the scope of this article (for a critical introduction see Chalmers 1995; response in Penrose 1996). Here it is simply important to note that although Gödelian-style arguments purporting to show “computations are not necessary for cognition” have been extensively and vociferously critiqued in the literature (see Various Authors 1995 for a review), interest in them — both positive and negative — still regularly continues to surface (e.g., Bringsjord & Xiao 2000; Tassinari & D’Ottaviano 2007).
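In standard textbook notation (a conventional formalization, not quoted from the references), the Gödel sentence for a consistent, effectively generated theory F extending arithmetic arises from the diagonal lemma:

```latex
% Diagonal lemma: there is a sentence G_F such that
\[
  F \vdash \; G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)
\]
% where Prov_F is F's provability predicate and the corner quotes denote
% the Gödel number of G_F. If F is consistent, F does not prove G_F;
% read at face value, G_F asserts its own unprovability, and so is true.
```

The dispute between Penrose and his critics concerns whether the human mathematician genuinely “sees” the truth of G_F in a way no formal system bound by F could reproduce.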

References

Benacerraf, P. (1967) God, the Devil, and Gödel. Monist 51: 9-32.

Bishop, J.M. (2002) Dancing with Pixies: strong artificial intelligence and panpsychism. In: Preston, J. & Bishop, J.M. (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence, Oxford University Press, Oxford.

Bishop, J.M. (2005) Can computers feel?, The AISB Quarterly (199): 6, The society for the study of Artificial Intelligence and the Simulation of Behaviour (AISB), UK.

Bishop, J.M., (2009a) Why Computers Can’t Feel Pain. Minds and Machines 19(4): 507-516.

Bishop, J.M., (2009b) A Cognitive Computation fallacy? Cognition, computations and panpsychism. Cognitive Computation 1(3): 221-233.

Bringsjord, S., Xiao, H. (2000) A refutation of Penrose’s Gödelian case against artificial intelligence. J. Exp. Theoret. AI 12: 307-329.

Chalmers, D.J. (1995) Minds, Machines And Mathematics: a review of ‘Shadows of the Mind’ by Roger Penrose. Psyche 2(9).

Chalmers, D.J. (1996) The Conscious Mind: In Search of a Fundamental Theory, Oxford: Oxford University Press.

Chrisley R. (2006) Counterfactual computational vehicles of consciousness. Toward a Science of Consciousness April 4-8 2006, Tucson Convention Center, Tucson Arizona USA.

Klein, C. (2004) Maudlin on Computation (working paper).

Lucas, J.R. (1961) Minds, Machines and Gödel. Philosophy 36: 112-127.

Lucas, J.R. (1968) Satan Stultified: A Rejoinder to Paul Benacerraf. Monist 52: 145-158.

Maudlin, T. (1989) Computation and Consciousness. Journal of Philosophy (86): 407-432.

Penrose, R. (1989) The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press.

Penrose, R. (1994) Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford: Oxford University Press.

Penrose, R. (1996) Beyond the Doubting of a Shadow: a reply to commentaries on ‘Shadows of the Mind’. Psyche 2(23).

Penrose, R. (1997) On Understanding Understanding. International Studies in the Philosophy of Science 11(1): 7-20.

Putnam, H. (1988), Representation and Reality. Cambridge MA: Bradford Books.

Preston, J. & Bishop, M. (eds) (2002) Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford & New York: Oxford University Press.

Searle, J. (1980) Minds, Brains and Programs. Behavioral and Brain Sciences 3(3): 417-457.

Searle, J. (1990) Is the Brain a Digital Computer? Proceedings of the American Philosophical Association (64): 21-37.

Tassinari, R.P., D’Ottaviano, I.M.L. (2007) Cogito ergo sum non machina! About Gödel’s first incompleteness theorem and turing machines, CLE e-Prints 7(3).

Various Authors (1994) Minds and Machines 4(4), Special Issue: What is Computation?, November.

Various Authors (1995) Psyche, Symposium on Roger Penrose’s Shadows of the Mind. Psyche 2.


Comments

  1. victorpanzica:

    To say, ‘no computer will ever do this,’ is indeed an assertion. To couple this with an empirical reality – “weeping for a dead son,” etc., forms an enthymeme. An enthymeme is an abbreviated argument deployed rhetorically. The hidden (but not much) premises are that (1) this reality arises from the human experience of being human; and (2) conscious AI, whatever its ontological status, is categorically not human; therefore (3) conscious AI cannot replicate the full experience of these realities.

    It’s basically a problem similar to the question of whether an alien intelligence could recognize us as an intelligent life form. Computers could be ‘intelligent’ by some definition, and highly responsive, yet if they had consciousness, it would not be anything like our own.

    The empirically verifiable human reality informing the abbreviated claim in my enthymeme – this weeping, rejoicing, familial relationships, satisfaction with success, disappointment with failure, and all the values that ground these – is what a counter argument must account for. In order to do this one would need to first re-describe this reality in a purely digital, mechanistic schema.

    My personal emotions are irrelevant. But I suspect there are many people who might be offended to find their personal values and experiences reduced to unembodied, unsocialized machinery. That matters politically, in terms of getting funding for AI projects. The military wants unfeeling weapons programs, businesses want uncomplaining robots. But I don’t know if you can get funding based on the promise ‘we will replace you.’


  2. davidlduffy: the Magill paper looks interesting but is behind a paywall, so I can’t comment on it in any detail, other than to trenchantly observe once again that the Penrose argument — from Shadows of the Mind — is NOT the Lucas argument. This conflation is very common among people who haven’t read Penrose (specifically Shadows, where this is spelt out) but have simply commented on what they erroneously believe him to have said; see my summary of the Penrose argument above. If there is an error with Penrose, please address that, not the Lucas position.


  3. Yes, the key issue here is all about understanding/consciousness (including the Penrose argument), and I will discuss it at “Metaphysics and (lack of) grounding”.

    I have shown that the “Chinese Room Argument” is totally wrong in my comment (https://scientiasalon.wordpress.com/2015/02/27/the-danger-of-artificial-stupidity/comment-page-2/#comment-12430 ). Obviously, you did not get it. Let me try one more time.

    Lemarkle: “To see why there is an ontological distinction between …, consider the case of Searle responding to a joke: in the former case he might output the Chinese ideograph for HA-HA, but as he CRANKs out the program to do this at no point does he experience a sensation of humour/laughter …”

    A joke is an excellent way to address this CRA issue. But please take Searle out of the room. I will make four rooms.
    R1, a silicon-based machine (it does not have to be a current computer).

    R2, my brother (with Chinese language skills above 98% of Chinese natives, knowing the language and cultural background, and with a happy attitude).

    R3, a prominent Western Sinologist (able to read and write at Chinese college graduate level, but not knowing all culture-historical references).

    R4, me (Tinenzegong) (knowing more Chinese than my brother, and an enlightened Zen master).

    A joke, in general, must consist of three parts.
    P1, a syntax string.
    P2, a language semantic for that string.
    P3, a different interpretation from P2.

    P1 and P2 are bound together in a given language. Yet, there are a few ways to reach P3 from P2.

    The easy way: some syntax strings are KNOWN in the language as double entendres or ambigrams. The readers get the joke by knowing them.

    The more difficult way: the semantics of some syntax strings can refer to culture-historical events. Yet there must be a cue somewhere in the syntax/semantics to lead the readers to those different references. So a reader can get the joke only if he can pick up the cue and is well-versed in those references.

    Now, here is the Joke-X.
    R1 answered with “haha” and showing the process (finding out the cue and the references).

    R2 answered with “haha”, laughing to the point of tears.

    R3 answered with “nice story”, without knowing the cue and the reference.

    R4 (me) answered with “well…”, not funny at all in the Zen sense.

    This Joke-X case points out three issues.
    I1, knowledge-based: the great Western Sinologist simply does not have enough knowledge to know that it is a joke.

    I2, mentality-based: for a Zen master, a joke is not funny at all.

    I3, emotion-based: although R1 is fully knowledgeable (intelligent), identical to R2, no emotion ORGAN is implemented in it. An emotion organ is simply a different apparatus from intelligence and consciousness.

    In Searle’s CRA, what does he mean by {he RAN the program}? Punching a few keys is not running an INTELLIGENT program. Searle’s CRA is totally wrong and confused.

