My philosophy, so far — part II

by Massimo Pigliucci

In the first part [19] of this ambitious (and inevitably, insufficient) essay I sought to write down and briefly defend a number of fundamental positions that characterize my “philosophy,” i.e., my take on important questions concerning philosophy, science and the nature of reality. I have covered the nature of philosophy itself (as distinct, to a point, from science), metaphysics, epistemology, logic, math and the very nature of the universe. Time now to come a bit closer to home and talk about ethics, free will, the nature of the self, and consciousness. That ought to provide readers with a few more tidbits to chew on, and myself with a record of what I’m thinking at this moment in my life, for future reference, you know.

Ethics, meta- and standard

I have written extensively on ethics, perhaps the most comprehensive example being a seven-part series that can be found at the Rationally Speaking blog [20]. Although in the past I have considered myself a realist, my position is probably better described as quasi-realism, or perhaps as a kind of bounded instrumentalism. Indeed, it is not very different — in spirit, if not in the details — from the way I think of math or logic (see part I).

So, first off, I distinguish three types of questions one can meaningfully ask about ethics or morality (I am using the two terms interchangeably here, even though some authors draw a distinction between them): where it comes from, how it works, and how it should work.

The first question is the province of evolutionary biology and anthropology: those are the disciplines that can tell us how a sense of right and wrong has evolved in our particular species of social, large-brained primates, and how it further diversified via cultural evolution. The second question is a matter of social and cognitive science: we want to know what sort of brain circuitry allows us to think about morality and make moral decisions, and we want to know how that circuitry is shaped not just by our biology, but also by our cultural milieu.

It is the third question, of course, that is most crucially philosophical in nature. Still, one can distinguish at least two levels of philosophical discourse on ethics: how we should think of morality in general (the so-called “meta-ethical” question), and which system(s) of moral reasoning are best suited for our purposes as social beings.

It is in terms of meta-ethics [21] that I am a quasi-realist (or a bounded instrumentalist). I don’t think that moral truths exist “out there,” independently of the human mind, which would be yet another example of Platonism (akin to the mathematical / ontic ones we encountered last time). But I also don’t accept the moral relativist position that there is no principled way in which I can say, for instance, that imposing genital mutilation on young girls is wrong — in a sense of wrong that is stronger than simply “I happen not to like it,” or “I have a strong emotional revulsion to it.”

Rather, I think of moral philosophy as a method of reasoning about human ethical dilemmas, beginning with certain assumptions (more or less analogous to axioms in mathematics, or postulates in logic), plus empirical input (from commonsense and/or science) about pertinent facts (e.g., what causes pain and how much, what policies seem to produce the highest amount of certain desiderata, like the ability to flourish, individual freedom, just distribution of resources, etc.), plus of course the basic moral instincts we have inherited from our primate ancestors (on this I’m with Hume: if we don’t care about X there is no reasoning that, by itself, could make us care about X).

This sounds a bit complicated and perhaps esoteric, but it’s really quite simple: if you want to see what I mean, just read one of Michael Sandel’s books on moral reasoning [22]. They are aimed at the general public, they deal with very practical questions, and yet they show exactly how the moral philosopher thinks (and, incidentally, why science informs, but simply cannot determine, our ethical priorities).

I haven’t forgotten about the second level of philosophical discourse concerning ethics: which ethical framework can best serve our aims as individuals within a broader society? Here the classical choices include deontology (Kant-style, not the Ten Commandments stuff) [23], utilitarianism-consequentialism [24], and virtue ethics [25], though there are others (ethics of care, communitarianism, and egalitarianism, for instance).

Although I have strong sympathies for much of what John Rawls [26] has written (from an egalitarian perspective) on justice, I decidedly embrace a neo-Aristotelian conception of virtue ethics. Actually, I maintain that the two can be brought together in “reflective equilibrium” (as Rawls would say) once we realize that virtue ethics addresses a different moral question from all the other approaches: for Aristotle and his contemporaries ethics was concerned not simply with what is the right thing to do, but with what is the right life to live, i.e., with the pursuit of eudaimonia (literally, having a good demon; more broadly, flourishing). So I think I can say, with little risk of contradiction, that when I ask myself what sort of life I want to live, my response is along virtue ethical lines; but when I ask the very different question of what sort of society I want to live in, then a Rawls-type quasi-egalitarianism comes to mind as the strongest candidate (practically speaking, it is the sort of society you find in a number of northern European countries).

Free will

Free will is one of the oldest chestnuts in philosophy, and it has lately come back into fashion with a vengeance [27], especially because of a new dialogue between philosophers of mind and cognitive scientists — a dialogue that at times is very enlightening, and at others just as frustrating.

If you consult the Merriam-Webster, its two definitions of the concept illustrate why the debate is so acrimonious, and why it so often goes nowhere:

1. voluntary choice or decision

2. freedom of humans to make choices that are not determined by prior causes or by divine intervention

Before we go any further, let me put my cards on the table: I believe in (1), and I think that (2) is incoherent.

Now, very briefly, there are basically four positions on free will: hard determinism, metaphysical libertarianism, hard incompatibilism, and compatibilism.

Hard determinism is the idea that physical determinism (the notion that the laws of physics and the universe’s initial conditions have fixed every event in the cosmos since the Big Bang) is true and therefore free will is impossible; metaphysical libertarianism (not to be confused with the political position!) says physical determinism is false and free will is possible; hard incompatibilism says that free will is impossible regardless of whether physical determinism is true or false; and compatibilism accepts the idea of physical determinism but claims that free will (of a kind) is nonetheless possible.

First off, notice that the four positions actually imply different conceptions of free will. For a compatibilist, for instance, free will of type (2) above is nonsense, while that is precisely what the metaphysical libertarian accepts.

Second, given the choices above, I count myself as a compatibilist, more or less along the lines explained at length by Daniel Dennett (see [27] and references therein), but with a fairly large caveat.

I am a compatibilist (as opposed to both a hard determinist and a hard incompatibilist) because it seems to me self-evident that we make choices or take decisions, and that we do so in a way different from that of a (currently existing) computer, or a plant (with animals, things become increasingly fuzzy the more complicated their nervous systems). I have definitely chosen to write this essay, in a much richer sense of “chosen” than my computer is “choosing” to produce certain patterns of pixels on my screen as a result of other patterns of keyboard hits that I created with my fingers. You may deny that, but then you would be left with a large number of interesting biological and psychological phenomena that go pretty much unexplained, unaccounted for, or otherwise swept under the (epistemic) carpet.

I am also a compatibilist (as opposed to a metaphysical libertarian) because I think that causality plays a crucial and unavoidable role in our scientific explanations of pretty much everything that happens in the universe above the level of sub-atomic physics (more on this in a second). You simply can’t do any “special” science (i.e., any science other than fundamental physics) without invoking the concept of causation. Since the scientific study of free will (I prefer the more neutral, and far less theologically loaded, “volition”) is the province of neuroscience, psychology and sociology — all of which certainly depend on deploying the idea of causality in their explanations — to talk of a-causal or contra-causal free will is nonsense (on stilts).

So, my (and Dennett’s) compatibilism simply means that human beings are sophisticated biological organisms (I reserve the word “machine” for human-made artifacts, as I have a problem with the deployment of machine-like metaphors in biology [28]) capable of processing environmental stimuli (including language) in highly complex, non-linear fashion, and of arriving at decisions about actions to take. The fact that, if exposed to the same exact stimuli, we would unfailingly arrive at the same exact decisions does not make us puppets or marionettes; those decisions are still “ours” in an important sense — which of course implies that we do deserve (moral) blame or praise for them [29].

What about the caveat at which I hinted above? Well, it’s actually three caveats: i) We still lack a good philosophical account (let alone a scientific theory, whatever that would look like) of causality itself [30]. That ought to make everyone in the free will debate at least a bit queasy. ii) Causality plays little or no explanatory role precisely where the determinist should expect it to play a major one: in fundamental physics. Again, someone should think carefully about this one. iii) Hard determinism is, let us not forget, a philosophical (indeed, metaphysical!) position, not a scientific theory. It is often invoked as a corollary of the so-called principle of the causal completeness of physics [31]. But “causal completeness” simply means that the laws of physics (in general, not just the currently accepted set) exhaust our description of the universe. The notion is definitely not logically incompatible with different, not necessarily reductionist, ways of understanding said laws; nor does it rule out even instances of strong emergence [32] (i.e., the possibility that new laws come into being when certain conditions are attained, usually in terms of system complexity). I am not saying that determinism is false, or that strong emergence occurs. I am saying that the data from the sciences — at the moment, at least — strongly underdetermine these metaphysical possibilities, so that hard determinists should tread a little more lightly than they typically do.

Self and consciousness

And we finally come to perhaps the most distinguishing characteristic of humans (although likely present to a degree in a number of other sentient species): (self)-consciousness.

Again, let’s start simple, with the Merriam-Webster: their first definition of consciousness is “the quality or state of being aware especially of something within oneself”; they also go for “the state of being characterized by sensation, emotion, volition, and thought.”

When I talk about (self)-consciousness I mean something very close to the first definition. It is a qualitative state of experience, and it refers not just to one’s awareness of one’s surroundings and simple sensations (like being in pain) — which presumably we share with a lot of other animal species — but also, more specifically, to the awareness of one’s own thoughts (which may be, but likely is not, unique to human beings).

The first thing I’m going to say about my philosophy of self & consciousness is that I don’t go for the currently popular idea that they are an illusion, an epiphenomenon, or simply the originators of confabulations about decisions already made at the subconscious level. That sort of approach finds a home in, for instance, Buddhist philosophy, and in the West goes back at least to David Hume.

The most trivial observation to make about eliminativism concerning the self & consciousness is that if they are an illusion, then who, exactly, is experiencing the illusion? This is a more incisive point, I think, than it is usually given credit for.

But, mostly, I simply think that denying — as opposed to explaining — self & consciousness is a bad and ultimately unsatisfactory move. And based on what, precisely? Hume famously said that whenever he “looked” into his own mind he found nothing but individual sensations, so he concluded that the mind itself is a loosely connected bundle of them. Ironically, current research in cognitive science clearly shows that we are often mistaken about our introspection, which ought to go a long way toward undermining Hume’s argument. Besides, I never understood what, exactly, he was expecting to find. And, again, who was doing the probing and finding said bundles, anyway?

Some eliminativists point to deep meditation (or prayer, or what happens in sensory deprivation tanks), which results in the sensation that the boundary between the self and the rest of the world becomes fluid and less precise. Yes, but neurobiology tells us exactly what’s going on there: the areas of the brain in charge of proprioception [33] become much less active, because of the relative sensory deprivation the subject experiences. As a result, we have the illusion (that one really is an illusion!) that our body is expanding and that its boundaries with the rest of the world are no longer sharp.

Other self & consciousness deniers refer to classic experiments with split-brain patients [34], in which individuals with a severed corpus callosum behave as if they housed two distinct centers of consciousness, sometimes dramatically at odds with each other. Well, yes, but notice that we are now looking at a severely malfunctioning brain, and moreover that this sort of split personality arises only under very specific circumstances: cut the brain in any other way and you get one dead guy (or gal), not multiple personalities.

All of the above, plus whatever else we know about neurobiology, plus the too often discounted commonsense experience of ourselves, simply tells me that there is a conscious self, and that it is an important component of what it means to be human. I think of consciousness and the self as emergent properties (in the weak sense; I’m not making any strong metaphysical statements here) of mind-numbingly complex neuronal systems, in a fashion similar to the way in which, say, “wetness” is an emergent property of large numbers of molecules of water, and is nowhere to be found in any single molecule taken in isolation [35].

Now, that conclusion most certainly does not imply the rejection of empirical findings showing that much of our thinking happens below the surface, so to speak, i.e., outside of the direct control of consciousness. Here Kahneman’s now famous “two-speed” model of thinking [36] comes in handy, and has the advantage of being backed by plenty of evidence. Nor am I suggesting that we don’t confabulate, engage in all sorts of cognitively biased reasoning, and so forth. But I am getting increasingly annoyed at what I perceive as the latest fashion of denying or grossly discounting that we truly are, at our best, the rational animal, as Aristotle said. Indeed, just to amuse myself, I picture all these people who deny rationality and consciousness as irrational zombies whose arguments obviously cannot be taken seriously — because they haven’t really thought about it, and at any rate they are just rationalizing…

The second thing I’m going to reiterate (since I’ve said it plenty of times before) concerns consciousness in particular. As many of my readers likely know, the currently popular account of the phenomenon is the so-called computational one, which draws a direct (although increasingly qualified as time goes by) analogy between minds and (originally digital) computers [37]. For a variety of reasons that I’ve explained elsewhere [38], I do think there are some computational aspects to minding (I prefer to refer to it as an activity, rather than a thing), but I also think that computationalists just don’t take biology seriously enough. On this, therefore, I’m with John Searle (and that’s quite irrespective of his famous Chinese room thought experiment [39]) when he labels himself a biological naturalist about consciousness. The basic idea is that — as far as we know — consciousness is a biological process, not unlike, say, photosynthesis. Which means that it may be bound not only to certain functional arrangements, but also to specific physicochemical materials. These materials don’t have to be the ones that happen to characterize earth-bound life, but it is plausible that they cannot be just anything at all.

I think the most convincing analogy here is with life itself. We can’t possibly know what radically different forms of life may be out there, but if life is characterized by complex metabolism, information carrying, reproduction, homeostasis, the ability to evolve, etc., then it seems it had better be based on carbon or on something with similar chemical flexibility. It is hard to imagine, for instance, helium-based life forms, given that helium is a “noble” gas with very limited chemical potentialities.

Similarly, I think, with consciousness: the qualitative ability to feel what it is like to experience something may require complex chemistry, not just complex functional arrangements of arbitrary materials. This is why I doubt we will be able to create conscious computers (which, incidentally, is very different from creating intelligent computers), and why I think any talk of “uploading” one’s consciousness is sheer nonsense [40]. Of course, this is ultimately an empirical matter, and we shall see. I am simply a bit dismayed (particularly as a biologist) at how the computational route — despite having actually yielded comparatively little (see the abysmal failure of the once much-trumpeted strong AI program) — keeps dominating the discourse by presenting itself as the only game in town (it reminds me of string theory in physics, but that’s a whole other story for another time…).

It should go without saying, but I’m going to spell it out anyway, just in case: none of the above should give any comfort to dualists, supernaturalists and assorted mystics. I do think consciousness is a biophysical phenomenon, one that we at least have the potential ability to explain, and perhaps even to duplicate artificially — just not, I am betting, in the way that so many seem to think is about to yield success any minute now.

The whole shebang

Do the positions summarized above and in part I of this essay form a coherent philosophical view of things? I think so, even though they are certainly not airtight, and they may be open to revision or even wholesale rejection, in some cases.

The whole jigsaw puzzle can be thought of as one particular type of naturalist take, of course, and I’m sure that comes as no surprise given my long-standing rejection of supernaturalism. More specifically, my ontology is relatively sparse, though perhaps not quite as “desert like” as W.V.O. Quine’s. I recognize pretty much only physical entities as ontologically “thick,” so to speak, though I am willing to say that concepts, mathematical objects being a subset of them, also “exist” in a weaker sense of the term existence (but definitely not a mind-independent one).

My take could also be characterized as Humean in spirit, despite my rejection of specific Humean notions, such as the illusory status of the self. Hume thought that philosophy had better take on board the natural sciences and get as far away as possible from Scholastic-type disputes. He also thought that whatever philosophical views we arrive at have to square with commonsense, not in the strict sense of confirming it, but at the very least in the sense that one pays a high price every time one arrives at notions that are completely at odds with it. In some instances this is unavoidable (e.g., the strange world of quantum mechanics), but in others it can, and therefore should, be avoided (e.g., the idea that the fundamental ontological nature of the universe is math).

Outside of Hume, some of my other philosophical inspirations should be clear. Aristotle, for one, at least when it comes to ethics and the general question of what kind of life one ought to live. Bertrand Russell is another, though that may have been less clear from what I’ve written here. Russell, like Hume, was very sympathetic to the idea of “scientific” philosophy, although his work in mathematics and logic clearly shows that he never seriously thought of reducing — Quine-style — philosophy to science. But Russell has been influential on me for two other reasons, which he shares with Aristotle and Hume: he is eminently quotable (and who doesn’t love a well-placed quote!), and he embodied the spirit of open inquiry and reasonable skepticism to which I still aspire every day, regardless of my obvious recurring failures.

Let me therefore leave you with three of my favorite quotes from these greats of philosophy:

Aristotle: Any one can get angry — that is easy … but to do this to the right person, to the right extent, at the right time, with the right motive, and in the right way, that is not for every one, nor is it easy. (Nicomachean Ethics, Book II, 1109.a27)

Hume: In our reasonings concerning matter of fact, there are all imaginable degrees of assurance, from the highest certainty to the lowest species of moral evidence. A wise man, therefore, proportions his belief to the evidence. (An Enquiry Concerning Human Understanding, Section 10 : Of Miracles Pt. 1)

Russell: Men fear thought as they fear nothing else on earth – more than ruin, more even than death. Thought is subversive and revolutionary, destructive and terrible; thought is merciless to privilege, established institutions, and comfortable habits; thought is anarchic and lawless, indifferent to authority, careless of the well-tried wisdom of the ages. Thought looks into the pit of hell and is not afraid. (Why Men Fight: A Method of Abolishing the International Duel, pp. 178-179)

Cheers!

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[19] My philosophy, so far — Part I, by M. Pigliucci, Scientia Salon, 19 May 2014.

[20] Here is the last entry, you can work your way back from there.

[21] Metaethics entry in the Stanford Encyclopedia of Philosophy.

[22] By Sandel, see both: Justice: What’s the Right Thing to Do?, Farrar, Straus and Giroux, 2009; and What Money Can’t Buy: The Moral Limits of Markets, Farrar, Straus and Giroux, 2012.

[23] Deontological ethics, SEP.

[24] Consequentialism, SEP.

[25] Virtue ethics, SEP.

[26] John Rawls, SEP.

[27] Here is one of my favorite examples. And here is the obligatory SEP entry.

[28] See my paper with Maarten Boudry, Why Machine-Information Metaphors are Bad for Science and Science Education, Science and Education 20 (453):471, 2011.

[29] I was recently having an enlightening discussion about this with my friend Maarten Boudry, and we came up with another way to conceptualize in what sense, say, I could have hit a penalty kick that I actually missed (soccer, you know), whereas I couldn’t have written Hamlet. The idea is to deploy the logical concept of possible worlds (see the pertinent SEP entry). It should be obvious that — given exactly identical circumstances — I would have kicked the penalty in exactly the same way. But there is (in the logical sense of “is”) a nearby possible world in which the circumstances are different, say because I focused more on the task at hand, and I do hit the ball correctly, thereby scoring a goal. However, the possible world in which I write Hamlet is so distant from the actual world that it makes no sense for me to say that I could have written Hamlet. If you find value in logical counterfactuals, this way of thinking about free will is very helpful. If not, I’ll try something else some other time.

[30] See Causal determinism, SEP; see also the SEP entries on Causal processes, The metaphysics of causation, Causation and manipulability, and Counterfactual theories of causation.

[31] On the causal completeness of physics, by M. Pigliucci, Rationally Speaking, 27 February 2013.

[32] On emergence, see a series of four essays I wrote for the Rationally Speaking blog.

[33] For the basics on proprioception, see the Wiki entry.

[34] See The split brain: A tale of two halves, by David Wolman, Nature 14 March 2012.

[35] Which is why, incidentally, I think Dennett’s famous model of consciousness as made possible by stupider and stupider robots all the way down to individual neurons is too simplistic. In the case of wetness, there is a level of complexity below which the property simply does not apply, and I think the same can be said for consciousness.

[36] Thinking, Fast and Slow, by D. Kahneman, Turtleback, 2013.

[37] The computational theory of mind, SEP.

[38] See the following essays from the Rationally Speaking blog: Philosophy not in the business of producing theories: the case of the computational “theory” of mind (29 July 2013); Computation, Church-Turing, and all that jazz (5 August 2013); Three and a half thought experiments in philosophy of mind (6 September 2013).

[39] The Chinese room argument, SEP.

[40] See David Chalmers and the Singularity that will probably not come, Rationally Speaking, 5 October 2009; and Ray Kurzweil and the Singularity: visionary genius or pseudoscientific crank?, Rationally Speaking, 11 April 2011.


  1. Give it a rest my friend. This is a forum for discussion, not a peer-reviewed journal. I’m sure you can find plenty of those around.


  2. DM, “in a manner indistinguishable”: the problem is making sense of this in terms that are not anthropocentric. You often hear people describe non-human behavior, say, a dog’s behavior, in human terms. But that doesn’t mean the dog’s behavior is in a manner indistinguishable from human behavior. They reinterpret the behavior so that it makes human sense. I don’t know what it means to say “this virtual person would behave in the same way as a physical person” because I can’t say with certainty how the physical person will behave. Humans take “mulligans,” they act in ways that surprise and seem unpredictable to me. So, let’s say you replicate yourself. How do you input your future behavior into the replicant? You will be endlessly revising up to the point you no longer exist. Then the replicant will have to do replicant things or lock-up. But this would be distinguishable from you because you are no longer existent. A third party would no longer be describing exact human behavior. It would be describing virtual or replicant behavior in human terms. The language breaks down and is insufficient to the task.


  3. I think I have answered the objections….

    Just giving an answer repeatedly without reflecting on the questions asked is hardly satisfactory. I would suggest you go back to the beginning and define your terms in a way that allows each to be tested and not in the vague manner in which you have done. I realize that you have convinced yourself you are correct, but you haven’t convinced many here. You believe that is because we misunderstand your arguments and if we just had your superior knowledge we would agree. Have you ever considered that you might be wrong?


  4. Give it a rest my friend

    Hmm, you call BMM ‘my friend’ and you call me ‘my friend’. This makes me worry that I come across like BMM does…


  5. Ah! Are you saying I should be more careful with the company I keep? 😉


  6. Hi Thomas,

    “in a manner indistinguishable”: the problem is making sense of this in terms that are not anthropocentric.

    I agree that the Turing Test is not ideal, and in particular I agree that it is anthropocentric. I still think it is the best we will ever be able to manage. So it is not supposed to be a one-to-one detector of consciousness, but a test with a high enough bar that only conscious entities (and specifically those which are behaviourally similar to humans, since humans are the only entities we can be sure are conscious) can pass it. People may describe a dog’s behaviour in human terms, but no dog can pass the Turing Test.

    I don’t know what it means to say “this virtual person would behave in the same way as a physical person” because I can’t say with certainty how the physical person will behave.

    I agree. This was not intended as an objective test, but as a thought experiment or intuition pump. If you believe in naturalism, and if you believe that physical laws can be simulated, then it follows that a faithful simulation of a person would behave as that person would in that situation. This is not intended as something that can be tested in practice but as a thought experiment that illustrates by example what it might mean for a virtual entity to behave like a human.


  7. Just that I worry that ‘my friend’ is a form of address you reserve for irksomely persistent commenters on your blog!


  8. Hi Michael,

    Just giving an answer repeatedly without reflecting on the questions asked is hardly satisfactory.

    I am not aware of having done that, but I realise that perspectives vary. What questions would you like me to answer?

    I would suggest you go back to the beginning and define your terms in a way that allows each to be tested

    I don’t think these are empirical questions, so I don’t agree that the terms ought to be testable. I have attempted to define my terms. For example, when I said that computer systems believe, I tried to clarify that I meant belief in a consciousness-agnostic sense. If I am not clearer still, it is because it is not always obvious to me what is likely to be misunderstood by my interlocutors. What I say seems perfectly clear to me, as I’m sure what you say seems perfectly clear to you, but sadly this clarity is often lost in transit.

    You believe that is because we misunderstand your arguments and if we just had your superior knowledge we would agree.

    Or if I had better ability to express myself. I don’t think that a belief that I’m right is a character flaw unique to me, is it? I think it could just as easily be said of the others in this thread, and if the majority is against me it may be because of the way this community has been selected. I think if you were to comment on lesswrong.com you would find the situation reversed.

    What I actually want is to have a conversation with a knowledgeable, committed opponent over Skype. I feel this medium is not working. It’s too disconnected and misunderstandings seem to propagate uncontrollably.

    Have you ever considered that you might be wrong?

    All the time. This is why I so desperately want to discuss these issues with those of opposing views, and why I am not commenting on lesswrong.com. If my beliefs are wrong I want to know they are wrong. Unfortunately I have not yet found any reason to think that they are, but I remain open to the possibility that someone could point out a flaw in my reasoning if there were a sufficiently in-depth conversation.


  9. My understanding is the blog author seeks a “rest” from mention of peer-reviewed information.


  10. No, you insist on willfully misunderstanding the purpose of this forum. It’s okay, no big.


  11. Well, since I see no evidence for free will or “personality” explanations of behavior, I think not. lol

    But likely everyone would benefit from further explanation of the purpose of the forum.


  12. Further explanation about the purpose of Scientia Salon is readily available on the About tab, not to mention in my first extended essay, which got the whole thing started.


  13. DM,
    Saying that something is “possible in principle” does not mean that “what is that principle” is a sensible question.
    Sorry, but I can’t agree with you. The statement means that there is an underlying principle explaining why it should be possible, but that the practical path is not known. Any other interpretation is meaningless. It is entirely reasonable to ask what that principle is.


  14. Thomas,
    that is a tough experience to go through.

    How will these concepts change over time as a result of technological advances? … How will we explain these matters? It would seem, perhaps unsurprisingly, that none of these notions have a static, fixed character.

    You are right, the world is changing in ways that overturn earlier notions. I think the answer is provided by philosophy in the study of ethics. Whatever one decides, it should be preceded by careful consideration that is well informed by an understanding of the best ethical thinking.

    Unfortunately ethical thinking has become a casualty of today’s culture of hedonistic happiness.


  15. DM,
    If we accept for a moment that a computer or robot can behave in the same way as a person, there is no reason to think that the same feelings of empathy and emotional connection could not be evoked in a human regarding the software. Either you believe that it is impossible for a human to be emotionally connected to a piece of software

    You have completely failed to examine my argument from Goleman. That is an important reference and you speak as if you did not read it. When one quotes an authoritative source it should be taken seriously. The clearest example of that is when you declare that the Goleman Test sets a much lower bar than the Turing Test. Nothing could be further from the truth. The Goleman Test sets a very high bar compared to the Turing Test. I can only conclude you paid no attention to my argument.


  16. DM,
    how would you treat such an argument coming from a person who believes that a piece of computer software is conscious?

    With laughter. The manner in which we detect consciousness in my dog is so dissimilar to the way we experience computers that there is no possibility of a comparison. The point of my Goleman quote is that there is a brain to brain link between people which is managed by a rich plethora of very subtle signals of which we are mostly unaware. My drill squad example illustrated this. We worked in perfect synchronism with ease and yet were completely unaware of how we did this. The way my dogs read my mind illustrated this.

    Goleman’s point is that we don’t just observe others; we are linked to others. We are in a social/neural web linking each other’s minds where ripples go out from person to person. This brain to brain link allows me to perceive other minds and experience them. It gives me a theory of mind. Dogs, after 13,000 years of co-evolving with us, are able to participate in our neural web, and so we intuitively recognise them as possessing a high level of consciousness. My dogs recognised my intent by reading a variety of very subtle signals I was not aware of emitting. It is by this ability to react to and share this rich set of signals that we recognise consciousness (among other things).

    My argument is that the only means we have of recognising consciousness is through the brain to brain link provided by the neural web. We will recognise consciousness in computers when computers can participate in this neural web in the same way. That is the Goleman Test. It is a very complex test because it requires the computer to have a finely honed sensitivity to all my emotions, reading them through the rich array by which I subconsciously broadcast them. It requires the computer to similarly feel such emotions and also to broadcast them in ways that I intuitively feel. It must do these things fluidly, spontaneously and quickly, responding to my emotions, echoing them and affirming them. This is how my dogs and I interact. It is a natural process that neither of us has to think about; it just happens. I recognise it as authentic because it mirrors what I experience with other people. Added to this is a clear sense of identity. Recognising distinctly different identities or personalities is essential to our perception of consciousness. What is identity? This is a rich subject on its own. The Turing Test is a crude concept developed long before the concepts described by Goleman were known.


  17. Hi labnut,

    It is entirely reasonable to ask what that principle is.

    Hmm, I don’t know. I still think “possible in principle” just means it ought to be possible notwithstanding practical considerations. Nevertheless I can try to answer your question by explaining why I think it ought to be possible.

    The principle is:

    If naturalism is true (as I and most computationalists believe), then whatever happens in the physical world is a result of physical laws. If physical laws are computable, then it is possible to build a computer to simulate them accurately, practical considerations aside. If the physical state of the particles in a human being (particularly a human brain) is fed into this simulation, the simulated human must necessarily behave in the same way as an actual human. If the behaviour of the simulated human is interpreted as the behaviour of the computer system, it follows that it must be possible to make a computer system which behaves like a human. The only reason we can’t do this in practice is because we lack the ability to scan a human brain in fine detail and we lack the computational resources to simulate such a complex system.

    This establishes the principle. My very strong suspicion is that there are more feasible ways to achieve human behaviour in a computer. The whole body (or whole brain) simulation approach is an upper bound on the amount of computing power required.


  18. Okay, on this one I tend to agree with labnut. I get irritated when people say that something is possible “in principle”; it’s too easy a move. I won’t go as far as asking what specific principle you are referring to, because I don’t think there is any such. But at the least you could clarify whether you consider this a biological possibility, a physical one, a metaphysical one (however cashed out), or a logical one. They are different.


  19. Hi labnut,

    Indeed I have not read Goleman, and there is no chance I will for months, by which time this conversation will be long gone. I’ve already spent most of my available time this weekend reading papers recommended by Aravis because they are more closely related to my chief interest. If you really really think I should read it, perhaps I may, but I won’t be able to talk to you about it until Christmas.

    So I am responding to what I understand of Goleman based on your account. I will explain your argument back to you so that you can explain to me how I misunderstand you, or alternatively accuse me of mischaracterising you.

    Goleman explains that we respond strongly to other people. We are social beings, and we crave to connect with other people. During social interactions, the brains of each participant are conducting a sort of dance whereby the brains are connected in some way and respond to each other in a complex and engaging fashion.

    I have no problem with any of this. I think this is correct.

    You extend this to your dogs. I also think this is correct.

    You then take this to mean that this intimate interaction means that we are capable of detecting consciousness. This does not follow as far as I can see, because I do not think that there is a direct link between minds (and I would be surprised if Goleman did either, based on your extract), but a link which is mediated by physical signals such as facial expressions, body language, tone of voice and verbal communication. As such, I think the intimate emotional connection you are talking about is vulnerable to trickery by an unconscious system which mimics the appropriate physical systems. In this view, only one party is participating in the dance while the other dancer is an illusion.

    As evidence to support my view, I offered the example of digital girlfriends, who seem to be able to elicit much the same kind of emotions in their boyfriends as your dogs do in you without being conscious themselves.

    As such, I think the digital girlfriends could pass (or almost pass) the Goleman test as might your dogs, whereas both would fail the Turing Test. This is why I think the Turing Test sets a higher bar.

    In summary, I do not dismiss Daniel Goleman’s views out of hand, and I have every reason to believe that what he writes is correct though I have not read it. I disagree with the conclusions you are drawing from it and I have explained why in detail (twice).


  20. Hi Massimo,

    I won’t go as far as asking what specific principle you are referring to, because I don’t think there is any such.

    Then we are substantially in agreement, because this is my objection to labnut. I simply don’t understand his question.

    But anyway, I think my answer to him ought to answer your question also? I think it is a physical possibility, given vast computational resources and perfect information regarding the physical state of a brain.


  21. How can something which you now claim is physical – not be testable? You once declared it abstract and not physical – and you can’t see why we are confused?


  22. DM,
    the phrase ‘it is possible in principle’ is a short hand form used between people who have a common understanding of the problem and both know the ‘principle’ perfectly well. It is a terse form that avoids unnecessary detail. In this case it is acceptable but dangerous. Dangerous because unstated assumptions are the cause of many errors in argument.

    But, when the ‘principle’ is the dispute then the phrase is little more than a dodge. It is a form of begging the question.

    In our discussion you and I certainly do not agree on the principle and so to use the phrase ‘possible in principle’ is inappropriate. You must spell out the argument and not hide it.

    To expose your inappropriate use of the phrase I challenged you to enunciate the principle. At first you refused and I challenged you again.

    You have spelled it out, as you should have done in the first place and this only confirms my disagreement.

    You can’t glide under the radar by claiming it is possible in principle when we disagree on the principles. That is merely a semantic dodge.


  23. Hi Michael,

    How can something which you now claim is physical – not be testable? You once declared it abstract and not physical – and you can't see why we are confused?

    Yes, people are confused, and I think this is a problem with the medium, particularly since I am engaging in a number of conversations in parallel.

    Your question reveals confusion on two fronts.

    Firstly, I have made at least two distinct claims. One is that there is no physical reason why a computer could not be as intelligent as a person. This is the claim that this latest comment pertained to. The second claim is that consciousness is not objectively detectable. So I draw a distinction between intelligence and consciousness, but then so do most people on this thread.

    The other way in which I think you may possibly be confused is the idea that something which is physically possible in principle ought to be testable. I don’t think it is, or at least the assertion that something is possible in principle is not always falsifiable. I can maintain until the end of time that it is possible in principle to build an intelligent machine without ever actually managing to do just that, and it seems to me almost inconceivable that it could ever be falsified (although perhaps I’m wrong here).


  24. Hi labnut,

    But, when the ‘principle’ is the dispute then the phrase is little more than a dodge. It is a form of begging the question.

    I did not understand your question. If you had asked me to explain why it was possible in principle, or what I meant by saying it was possible in principle, I would have been happy to oblige. “What is the principle” was not a phrasing I was able to interpret.

    In any case, the original usage of the phrase was not begging the question in context, because I was clarifying what computationalists believe and not defending that belief. If you read back to the origin of this argument you will see that I was correcting what I see as your “mischaracterisation” (if I may borrow the term) of the computationalist position.


  25. DM,
    If physical laws are computable, then it is possible to build a computer to simulate them accurately, practical considerations aside.

    I’m surprised you make this argument again. Massimo has already demolished this statement.

    Let me give you an example. I can write a program that simulates the action of gravity on a moving object. The mathematical laws are well known and I can exactly calculate every step of the simulation.

    But now there is a strange thing. I may have ‘simulated’ gravity but it is not gravity and will never be gravity.

    The problem lies in the use of the word ‘simulated’. What one is really doing is modelling the activity of gravity in another medium. When one uses the word ‘modelling’ it becomes clear that this is not gravity.

    What you are describing is ‘modelling’ consciousness and the model is merely a convenient representation of the real thing but it is not the real thing.

    You have a built in assumption that because the modelling takes place in a computer then it can be the real thing. Nothing could be further from the truth. A model of something on a different substrate cannot be the same thing. Modelling gravity on a computer is not gravity and exactly the same thing applies to consciousness.

    And we haven’t even begun to talk about the possibility of modelling consciousness. Edward Feser has already given very strong arguments why that is not possible.
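
    Labnut’s gravity example can be made concrete with a minimal sketch (the function name, step size, and initial conditions are illustrative choices of my own, not anything from the discussion):

```python
# A minimal sketch of the gravity example: modelling an object
# dropped under constant gravity with simple Euler integration.

def simulate_fall(height, dt=0.01, g=9.81):
    """Return the time (seconds) the model says an object dropped
    from `height` metres takes to reach the ground."""
    y, v, t = height, 0.0, 0.0
    while y > 0:
        v += g * dt   # gravity accelerates the object each step
        y -= v * dt   # the object moves down by its current speed
        t += dt
    return t

# The analytic answer is sqrt(2*h/g), roughly 4.5 s for h = 100 m;
# the Euler model lands close to that.
print(simulate_fall(100.0))
```

    However faithfully the numbers track the physics, the program only represents the fall in another medium; running it pulls nothing toward the Earth, which is the point of the modelling/simulating distinction above.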


  26. DM, now do you begin to see why I had to challenge your slipshod assertion ‘it is possible in principle’?


  27. Guys, especially labnut and DM. I’m calling for a temporary moratorium on this thread. I think you have exhausted the current possibilities, and everyone (including yours truly) needs a bit of rest. Please take a break, and recharge your batteries in preparation for tomorrow’s post…


  28. Hoping the moratorium is over now:)

    Thomas:

    I don’t know what it means to say “this virtual person would behave in the same way as a physical person” because I can’t say with certainty how the physical person will behave.

    Honestly this seems like a bit of a dodge to me, but it seems to come up often when this example comes up.

    If we were talking about modelling C. elegans would you have said “I don’t know what it means to say ‘this C. elegans would behave in the same way as a physical C. elegans’ because I can’t say with certainty how the physical C. elegans will behave”?

    Would you have said the same thing about an ant simulation? A mouse simulation?

    If C. elegans were being modelled, then we know that not all C. elegans go through precisely the same set of behaviours; it would depend on the particular environment and the particular configuration of the organism’s brain.

    But that would not mean that someone modelling the C. elegans could have no possibility of gauging success – of course they could. The same thing would go for an ant, for a mouse, etc.

    So why would anybody be coy about saying that the externally observable behaviour of a human organism could be modelled computationally?

    I think it is reasonable to say that a sufficiently accurate, sufficiently detailed computationally simulated human would probably report being conscious in the same way that most people do. It could have discussions about consciousness. If we suggested that it was a P-Zombie and was not conscious at all, then it would probably react as just about anybody would to that suggestion, with derision.


  29. Massimo wrote:

    I don’t see how that follows at all. Of course, one can write a clever program that behaviorally mimics a human being, like ELIZA. So what?

    However, ELIZA is not a simulation of our biological processes. The point of the question is not that a program might be written to mimic the behaviour of a human being, but that a simulation specifically of our biology would mimic the behaviour of a human being.

    Would you concede that Naturalism implies that there could be a computational simulation of a human that could produce the same kinds of observable behaviour as the physical system?

    If not then what is it about this particular configuration of atoms that would resist mathematical description?

    If so, then the question of whether or not the simulated being would be conscious (i.e., have conscious states like those you or I are experiencing right now) has to be asked.


  30. Massimo wrote:

    Okay, on this one I tend to agree with labnut. I get irritated when people say that something is possible “in principle,” it’s too easy a move. I won’t go as far as asking what specific principle you are referring to, because I don’t think there is any such. But at the least you could clarify whether you consider this a biological possibility,

    I assume that the “principle” in this case is the principle that we can describe physical systems mathematically.

    I had no idea that this principle would be controversial on this site.


  31. No, that can’t be the principle. Everyone agrees we can describe things mathematically. The issue is ontological, not epistemological.


  32. But the discussion never got to the ontological question because the pushback was on the epistemic question: the proposition that there could be a computational simulation of a human body and suitable environment that would demonstrate behaviour consistent with the externally observable behaviour of a physical human being.

    If everyone could have simply stipulated that such a simulation is possible, given sufficient computing power and enough understanding of the biology of the brain and the body, then the interesting ontological question could be addressed.


  33. Robin, that’s because I like my epistemology to line up with my metaphysics, otherwise the latter is pure speculation. At any rate, the pause is now in effect, so I’d like to really move on from this thread, if you don’t mind. Cheers.


  34. Ah, I wish I’d visited earlier, when this was first posted.

    I think your caveats about causality are spot-on and beautifully stated. And I think the general perspective expressed there applies to the discussion of consciousness as well. We find ourselves running into problems thinking about free-will because we lack the sort of conceptual scaffolding required to construct a coherent narrative of it (for one thing, discussing the causality of a conscious choice as if it were a simple extension of sub-molecular, deterministic, “billiard ball” causality is something close to a category error).

    The problems we have with our incomplete/undeveloped concepts of causality are mirrored by our computational metaphors for cognitive processes. With causality, we may not get very far, because causality operates at such a basic cognitive level (i.e., we tend to use causality as a building block for other concepts, but not the other way around, and when we try to examine our concept of causation, we find surprisingly little “content”). But I think the issue we have with consciousness and computation is that a better metaphor (a more apt/sufficient conceptual framework) is not available to us right now.

    You were saying on the SGU that philosophy often plays a larger role during the time when a scientific discipline is less developed. I think that philosophy has a big role to play in the neurological sciences by helping to develop the concepts that will scaffold a more satisfying theory of mind.


  35. DM,
    >>>Kind of. I suspect there is a continuum. I don’t think that small things are big, but that doesn’t mean that they have no size at all.

    Continuum of consciousness? Seems to me there is a difference in what we mean by consciousness then, as I see it as more the qualitative aspects of experience, not necessarily other things such as behaving. Otherwise, even a thermostat can be said to have some very small unit of consciousness (as Michio Kaku talks about). This, however, misses the central challenge of the qualitative/subjective part of conscious experience (which I’ll call qualia from here on out, even though I realize that might not be the most accurate use of the term), for which I don’t think there is a continuum based on more complex behaviors. There may be a continuum of qualia across organisms and throughout evolutionary history.

    >>>Just to be clear, in case it isn’t, I don’t think complexity automatically entails consciousness. I think consciousness is a property of certain complex computations that have a certain organisation, and I think this organisation is necessary to perform some of the tasks that humans are capable of.

    Thanks for the clarification. I actually think we can explain all of human behavior without explaining qualia at all, mainly because even simple behaviors of ours that have aspects of qualia, such as reacting to various stimuli (seeing red), can be explained by environment X behavior relationships (as well as in many other ways), and there is no explanatory gap left that qualia can fill. To me, understanding qualia is a distinct and actually less interesting question from why we do what we do. Ultimately, though, it’s an empirical question and we will have to see if we hit up against roadblocks in psychological/neuroscience research that would require us to understand qualia.

    >>>I think consciousness is bound up with having a sense of self, ability to introspect, etc. I think that computations which have a suite of such abilities are conscious. I think that existing computer systems with similar abilities may be very dimly conscious and what keeps them from being truly conscious is a lack of complexity and insufficient understanding of the world.

    I’d go as far as saying computers can have behaviors that would make it seem like they have a sense of self, but I don’t think that is the same as having a qualia-like experience of self rather than only the outward behavior. However, it seems this is a point neither one of us can really prove either way, but I think it’s more defensible to say we don’t know if computers have qualia because we simply don’t understand qualia. What we do know is that we (humans) are the only ones that we know for sure have it, and other organisms (to a lesser degree of certainty). As such, it’s a hard case to make that computers can also have qualia IMHO.

    >>>I agree that it ought to be possible in principle for us to make a machine which will behave exactly like a human, and I agree that we do not have to understand consciousness to do so. However, I believe that such a machine would be conscious because I think consciousness is a necessary property of any algorithm which can perform at a human level. Like complexity, consciousness is not an ingredient that needs to be added in; it is simply a property which it is impossible to build an intelligent system without, whether we understand this or not.

    This is perhaps the central disagreement between our views, in that I think we can and will develop the robot/machine that will behave exactly like humans but have no consciousness (or worse, we just won’t be able to tell; I’m not even sure where to begin with trying to figure out how to do that). I’d be more convinced if there was something that obviously pointed to the explanatory power of qualia impacting human behavior, but as far as I know, we don’t take that into consideration when furthering the scientific understanding of human behavior. I don’t see the reason why it would become an issue in the future either.

    >>>I should emphasise that I am not absolutely certain that all systems which behave like humans must be conscious. I am merely extremely confident. I am much more certain, however, that any algorithm which not only functions like a human brain but works analogously to a human brain will be conscious. As such, my preferred example is a hypothetical simulation of a human brain built from a brain scan, complete with input sensory data and output motor control. It is this virtual brain that I am certain must be conscious. This only serves to establish the most convincing example of a computer sustaining consciousness, but I think that simpler or more directly designed computer systems are probably also capable of consciousness.

    I want to agree here that a brain built artificially should also have consciousness, but my hesitation is that we would have to assume we know everything about the brain, and hence have replicated it fully. The other hesitation is that, again, I don’t see how we could test whether or not this artificial brain actually has qualia. The Turing Test, to me, doesn’t cut it.

    >>>So it seems we simply have contrasting intuitions. It seems at first glance as though both viewpoints are viable, however I think that further reflection shows that my view is more coherent, as I would be happy to discuss.

    Possibly, but I think for my primary interest, understanding human behavior scientifically to the best of my ability and towards the goal of influencing it to meet the goals of society, I have not found it convincing that qualia or other related topics are worth my professional research time. This is not to say they are not interesting in other ways, and I would love to see someone figure out qualia the same way I would love to have abiogenesis explained. I just don’t think it will affect my work by adding any explanatory power to my model of explaining human behavior, but I’m open to being proven wrong 🙂


  36. Hi imzasirf,

    I think that qualia are something of a misleading phenomenon. I think it’s wrong to say they do not exist, but I think that the impression that they are more than an ability to discriminate between different sensory data is something of an illusion.

    I don’t think we would need to understand anything much about the brain in order to simulate it, not if we have science fiction technology that can scan any physical system and simulate the atoms. I am not talking at all about what is practical or feasible. My position can be summed up with the view that such a “scanned” physical simulation of a brain would be just as conscious as a real brain. It’s not clear to me if this accords with your intuitions or not.

    I have not found it convincing that qualia or other related topics are worth my professional research time.

    Well, my view is that they are not worth anybody’s research time. My view is that we need to solve the intelligence problem (or the “easy” problems of consciousness), and when we’ve done that, what we’ve built will have qualia and consciousness because I don’t think qualia and consciousness are anything more than the abilities they afford. You are conscious because you believe you are conscious, you have qualia because you believe you have qualia, and any simulation of you would have the same beliefs and so the same consciousness and qualia. Any intelligent system needs to have qualia because it needs to be able to discriminate different sensory input, and qualia are just the labels by which these different senses are known.

