Strong Artificial Intelligence

by Massimo Pigliucci

Here is a Scientia Salon video chat between Dan Kaufman and Massimo Pigliucci, this time focusing on the issues surrounding the so-called “strong” program in Artificial Intelligence. Much of the territory should be familiar to regular readers, though hopefully with enough twists to generate new discussion.

We introduce the basic strong AI thesis about the possibility of producing machines that think in a way similar to that of human beings; we debate the nature and usefulness (or lack thereof?) of the Turing test, and ask ourselves whether our brains could be swapped for their silicon equivalents, and whether we would survive the procedure. I explain why I think that “mind uploading” is a sci-fi chimera rather than a real scientific possibility, and then we dig into the (in)famous “Chinese Room” thought experiment, proposed decades ago by John Searle and still highly controversial. Dan concludes by explaining why, in his view, AI will not solve problems in philosophy of mind.


Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (Chicago Press).

Daniel A. Kaufman is a professor of philosophy at Missouri State University and a graduate of the City University of New York. His interests include epistemology, metaphysics, aesthetics, and social-political philosophy. His new blog is Apophenia.

Categories: video


104 replies

  1. Alex,

    “What is missing? How do we show that it is missing or present? […] Is it then really such an exotic or revolutionary idea that the claim that [placeholder] is present when [placeholder] cannot be shown to be either present or absent should be rejected on grounds of parsimony and burden of evidence?”

Not when the counter-claim, though simpler in itself, doesn’t account for the global evidence, but applies at most to a subset of the evidence, and even then only under a particular interpretation of that subset.

When I compare biological organisms and mechanical machines, I see something more like two continua: one running from hydra to Homo sapiens, the other from dead-fall traps (or something simpler still) to our most complex current machines.

In that context, as far as I can see, I have no reason to think we can get from mechanical machines, or from strictly mechanistic conceptualizations, to biological organisms. We are missing something; there is something we don’t understand. It’s not special pleading: whenever we try to do it, no matter how we conceive of it or attempt it, we can’t make it work in principle or in practice. So either we are missing something, or our way of conceptualizing the idea is inadequate, or both.

    That’s not to say that we don’t have lots of interesting and helpful mechanical analogies or that they’re not leading to a lot of research and a continuous wealth of useful discoveries.


  2. nick m,

“How best, in this case, do you engage an opponent (…) who thinks (e.g.) that the syntax/semantics distinction is an anthropocentric irrelevance” – Well, there’s the problem; one doesn’t get to engage, because those saying this say it precisely to shut such engagement down. They are insisting that we do not live in a world where ‘understanding’ of ‘meaning’ actually takes place, except as some profound illusion – Alex SL effectively asserts that such a world is a mystical fairy tale. This is not simply counter-intuitive, nor simply a conflict of intuitions – it is a denial of common human experience, and like all such ideological denials it leaves no room for possible disagreement or for more subtle clarifications.

It is not a matter of simply assuming that Searle’s argument is a “knock-down argument,” or of buying all the implications Searle wants us to draw from the CR. The CR is controversial because there are indeed difficulties with it, and there are problems and issues important to consider in response to it. But simply denying the difference between syntax and semantics does not engage any of those issues or problems; it is simply saying, “pooh on Searle! have faith that strong AI can overcome all objections!” Pardon me if I doubt.

    “(E)ngaging with the more difficult but rewarding task of trying to uncover the presuppositions that make it so much as possible for someone to say – non-metaphorically* – that an i-phone “thinks” or “believes” or “decides” or “desires” X?”

    You should definitely read the next article here, on the ‘Theory of Mind’ problem. An interesting issue here, only touched upon lightly, is that many strong AI enthusiasts are clearly making an anthropocentric mistake in attributing anthropomorphic properties, particularly counter-intuitive agency, to human-made objects, without letting the objects reveal their own unique properties – an uncritical ToM misreading, and quite similar to certain religious beliefs. One wonders who really does believe in fairies, or ghosts in machines?

    “Again, I would hate to sound as if I thought I could dismiss with my amateur internet comments a thought-experiment that has had as much impact as the Chinese Room.”

Well, I find some thought-experiments easy to dismiss (p-zombies, for instance) when they presume a world so obviously alien to the one in which we actually live. And I think it advisable to question the over-reliance on thought experiments in making arguments, exactly because they model ‘logically possible worlds,’ and not the world we’re stuck with. But Searle’s CR experiment is cogent because the room is clearly, and grossly, a machine for syntactical exchange without semantic understanding, and so raises real-world issues in an exaggerated way that is not wholly detached from common experience.


  3. gwarner:

    I echo your appreciation for “The Rediscovery of Mind” and I agree with you that the argument there is as strong as that presented in the Chinese Room. The Chinese Room is by far the more well-known, which is why Massimo and I focused on it.

Interesting aside: When I was in graduate school, I took a course taught by Jerry Fodor whose sole subject was The Rediscovery of the Mind, when it was still in manuscript and hadn’t yet been published.


  4. We are nearing the end of the discussion, here, and I want to thank so many of you for your productive and engaging comments.

    One thing I *do* hope is that people who really are interested in these issues read the relevant literature, written by those with the relevant expertise. Linguistics and the philosophy of language are rich, fascinating fields, ripe for exploration, and the areas of syntax, semantics, and pragmatics are some of the most interesting. Please do not be satisfied with handwavings at dictionaries and quick dismissals by people with zero expertise in these disciplines. Just as you wouldn’t look to an art historian to teach you about exoplanets, you shouldn’t turn to a cosmologist to teach you linguistics, regardless of how many times he repeats himself or shouts various inapt epithets at his interlocutors.

    Fortunately, with the internet, it has become very easy to access excellent resources. These, for example, are quite nice:

In reviewing the last link, I am reminded of how limited and gentle the Searlian critique really is. It makes its point purely on the basis of how computers process symbols and what is left over — and how what is left over is everything crucial to our common notion of “understanding the meaning of a word.” But there are much stronger points to make. In actual human communication, elements that belong to what is called “pragmatics” do a great deal of the semantic work, whether it’s by way of implicature, the various varieties of illocutionary and perlocutionary force, and the like. The capacity for competent linguistic performance, therefore, relies upon one’s participation in a complex web of social practices — on one’s participation in a “form of life” — an enormous dimension that is completely ignored by the Strong AI theorists.

Put another way, and entirely separate from anything specific to computation, the Strong AI people have exactly the same problem the Cartesians do, as well as, funnily enough, the reductive physicalists — namely, they think that a complete account of thinking can be given by appeal to nothing but a set of internal processes. Of course, this is exactly what Wittgenstein demonstrated to be impossible, by way of his arguments regarding Rule-Following and Private Language; and against those arguments, being a physicalist — even a “supervenience physicalist”! — rather than a dualist leaves you no better off.

    In the next day or two, my discussion with Ian Ground on Wittgenstein will be up on BloggingHeads/MeaningofLifeTV. We address precisely this issue in substantial detail, and I invite those who are really interested in this question of accounting for how we think to check it out.

