My philosophy, so far — part II

by Massimo Pigliucci

In the first part [19] of this ambitious (and, inevitably, insufficient) essay I sought to write down and briefly defend a number of fundamental positions that characterize my “philosophy,” i.e., my take on important questions concerning philosophy, science and the nature of reality. I have covered the nature of philosophy itself (as distinct, to a point, from science), metaphysics, epistemology, logic, math and the very nature of the universe. Time now to come a bit closer to home and talk about ethics, free will, the nature of the self, and consciousness. That ought to provide readers with a few more tidbits to chew on, and myself with a record of what I’m thinking at this moment in my life, for future reference, you know.

Ethics, meta- and standard

I have written extensively on ethics, perhaps the most comprehensive example being a seven-part series that can be found at the Rationally Speaking blog [20]. Although in the past I have considered myself a realist, my position is probably better described as quasi-realism, or perhaps as a kind of bounded instrumentalism. Indeed, it is not very different — in spirit, if not in the details — from the way I think of math or logic (see part I).

So, first off, I distinguish three types of questions one can meaningfully ask about ethics or morality (I am using the two terms interchangeably here, even though some authors make a distinction between them): where it comes from, how it works, and how it ought to work.

The first question is the province of evolutionary biology and anthropology: those are the disciplines that can tell us how a sense of right and wrong has evolved in our particular species of social, large-brained primates, and how it further diversified via cultural evolution. The second question is a matter of social and cognitive science: we want to know what sort of brain circuitry allows us to think about morality and make moral decisions, and we want to know how that circuitry is shaped not just by our biology, but also by our cultural milieu.

It is the third question, of course, that is most crucially philosophical in nature. Still, one can distinguish at least two levels of philosophical discourse on ethics: how we should think of morality in general (the so-called “meta-ethical” question), and which system(s) of moral reasoning are best suited for our purposes as social beings.

It is in terms of meta-ethics [21] that I am a quasi-realist (or a bounded instrumentalist). I don’t think that moral truths exist “out there,” independently of the human mind, which would be yet another example of Platonism (akin to the mathematical / ontic ones we encountered last time). But I also don’t accept the moral relativist position that there is no principled way in which I can say, for instance, that imposing genital mutilation on young girls is wrong — in a sense of wrong that is stronger than simply “I happen not to like it,” or “I have a strong emotional revulsion to it.”

Rather, I think of moral philosophy as a method of reasoning about human ethical dilemmas, beginning with certain assumptions (more or less analogous to axioms in mathematics, or postulates in logic), plus empirical input (from commonsense and/or science) about pertinent facts (e.g., what causes pain and how much, what policies seem to produce the highest amount of certain desiderata, like the ability to flourish, individual freedom, just distribution of resources, etc.), plus of course the basic moral instincts we have inherited from our primate ancestors (on this I’m with Hume: if we don’t care about X there is no reasoning that, by itself, could make us care about X).

This sounds a bit complicated and perhaps esoteric, but it’s really quite simple: if you want to see what I mean, just read one of Michael Sandel’s books on moral reasoning [22]. They are aimed at the general public, they deal with very practical questions, and yet they show exactly how the moral philosopher thinks (and, incidentally, why science informs, but simply cannot determine, our ethical priorities).

I haven’t forgotten about the second level of philosophical discourse concerning ethics: which ethical framework can best serve our aims as individuals within a broader society? Here the classical choices include deontology (Kant-style, not the Ten Commandments stuff) [23], utilitarianism-consequentialism [24], and virtue ethics [25], though there are others (ethics of care, communitarianism, and egalitarianism, for instance).

Although I have strong sympathies for much of what John Rawls [26] has written (from an egalitarian perspective) on justice, I decidedly embrace a neo-Aristotelian conception of virtue ethics. Actually, I maintain that the two can be brought together in “reflective equilibrium” (as Rawls would say) once we realize that virtue ethics addresses a different moral question from all the other approaches: for Aristotle and his contemporaries ethics was concerned not simply with what is the right thing to do, but with what is the right life to live, i.e. with the pursuit of eudaimonia (literally, having a good demon; more broadly, flourishing). So I think I can say, with little risk of contradiction, that when I ask myself what sort of life I want to live, my response is along virtue ethical lines; but when I ask the very different question of what sort of society I want to live in, then a Rawls-type quasi-egalitarianism comes to mind as the strongest candidate (to be practical, it is the sort of society you find in a number of northern European countries).

Free will

Free will is one of the oldest chestnuts in philosophy, and it has lately come back into fashion with a vengeance [27], especially because of a new dialogue between philosophers of mind and cognitive scientists — a dialogue that at times is very enlightening, at others just as frustrating.

If you consult the Merriam-Webster, its two definitions of the concept are illustrative of why the debate is so acrimonious, and often goes nowhere:

1. voluntary choice or decision

2. freedom of humans to make choices that are not determined by prior causes or by divine intervention

Before we go any further, let me lay my cards on the table: I believe in (1), and I think that (2) is incoherent.

Now, very briefly, there are basically four positions on free will: hard determinism, metaphysical libertarianism, hard incompatibilism, and compatibilism.

Hard determinism is the idea that physical determinism (the notion that the laws of physics and the universe’s initial conditions have fixed every event in the cosmos since the Big Bang) is true and therefore free will is impossible; metaphysical libertarianism (not to be confused with the political position!) says physical determinism is false and free will is possible; hard incompatibilism says that free will is impossible regardless of whether physical determinism is true or false; and compatibilism accepts the idea of physical determinism but claims that free will (of a kind) is nonetheless possible.

First off, notice that the four positions actually imply different conceptions of free will. For a compatibilist, for instance, free will of type (2) above is nonsense, while that is precisely what the metaphysical libertarian accepts.

Second, given the choices above, I count myself as a compatibilist, more or less along the lines explained at length by Daniel Dennett (see [27] and references therein), but with a fairly large caveat.

I am a compatibilist (as opposed to both a hard determinist and a hard incompatibilist) because it seems to me self-evident that we make choices or take decisions, and that we do that in a different way from that of a (currently existing) computer, or a plant (with animals things become increasingly fuzzy the more complicated their nervous system). I have definitely chosen to write this essay, in a much richer sense of “chosen” than my computer is “choosing” to produce certain patterns of pixels on my screen as a result of other patterns of keyboard hits that I created with my fingers. You may deny that, but that would leave you with a large number of interesting biological and psychological phenomena that go pretty much unexplained, unaccounted for, or otherwise swept under the (epistemic) carpet.

I am also a compatibilist (as opposed to a metaphysical libertarian) because I think that causality plays a crucial and unavoidable role in our scientific explanations of pretty much everything that happens in the universe above the level of sub-atomic physics (more on this in a second). You simply can’t do any “special” science (i.e., any science other than fundamental physics) without invoking the concept of causation. Since the scientific study of free will (I prefer the more neutral, and far less theologically loaded, “volition”) is the province of neuroscience, psychology and sociology — all of which certainly depend on deploying the idea of causality in their explanations — to talk of a-causal or contra-causal free will is nonsense (on stilts).

So, my (and Dennett’s) compatibilism simply means that human beings are sophisticated biological organisms (I reserve the word “machine” for human-made artifacts, as I have a problem with the deployment of machine-like metaphors in biology [28]) capable of processing environmental stimuli (including language) in a highly complex, non-linear fashion, and of arriving at decisions on actions to take. The fact that, if exposed to the same exact stimuli, we would unfailingly come up with the same exact decisions does not make us puppets or marionettes; those decisions are still “ours” in an important sense — which of course implies that we do deserve (moral) blame or praise for them [29].

What about the caveat at which I hinted above? Well, it’s actually three caveats: i) We still lack a good philosophical account (let alone a scientific theory, whatever that would look like) of causality itself [30]. That ought to make everyone in the free will debate at least a bit queasy. ii) Causality plays little or no explanatory role precisely where the determinist should expect it to be playing a major one: in fundamental physics. Again, someone should think carefully about this one. iii) Hard determinism is, let us not forget it, a philosophical (indeed, metaphysical!) position, not a scientific theory. It is often invoked as a corollary of the so-called principle of the causal completeness of physics [31]. But “causal completeness” simply means that the laws of physics (in general, not just the currently accepted set) exhaust our description of the universe. The notion is definitely not logically incompatible with different, not necessarily reductionist, ways of understanding said laws; nor does it even rule out instances of strong emergence [32] (i.e., the possibility that new laws come into being when certain conditions are attained, usually in terms of system complexity). I am not saying that determinism is false, or that strong emergence occurs. I am saying that the data from the sciences — at the moment, at least — strongly underdetermine these metaphysical possibilities, so that hard determinists should tread a little more lightly than they typically do.

Self and consciousness

And we finally come to perhaps the most distinguishing characteristic of humans (although likely present to a degree in a number of other sentient species): (self)-consciousness.

Again, let’s start simple, with the Merriam-Webster: its first definition of consciousness is “the quality or state of being aware especially of something within oneself”; it also offers “the state of being characterized by sensation, emotion, volition, and thought.”

When I talk about (self)-consciousness I mean something very close to the first definition. It is a qualitative state of experience, and it refers not just to one’s awareness of one’s surroundings and simple emotions (like being in pain) — which presumably we share with a lot of other animal species — but more specifically to the awareness of one’s thoughts (which may be, but likely is not, unique to human beings).

The first thing I’m going to say about my philosophy of self & consciousness is that I don’t go for the currently popular idea that they are an illusion, an epiphenomenon, or simply the originators of confabulations about decisions already made at the subconscious level. That sort of approach finds a home in, for instance, Buddhist philosophy, and in the West goes back at least to David Hume.

The most trivial observation to make about eliminativism concerning the self & consciousness is that if they are an illusion, who, exactly, is experiencing the illusion? This is a more incisive point, I think, than it is usually given credit for.

But, mostly, I simply think that denying — as opposed to explaining — self & consciousness is a bad and ultimately unsatisfactory move. And based on what, precisely? Hume famously said that whenever he “looked” into his own mind he found nothing but individual sensations, so he concluded that the mind itself is a loosely connected bundle of them. Ironically, current research in cognitive science clearly shows that we are often mistaken about our introspection, which ought to go a long way toward undermining Hume’s argument. Besides, I never understood what, exactly, he was expecting to find. And, again, who was doing the probing and finding said bundles, anyway?

Some eliminativists point to deep meditation (or prayer, or what happens in sensory deprivation tanks), which results in the sensation that the boundary between the self and the rest of the world becomes fluid and less precise. Yes, but neurobiology tells us exactly what’s going on there: the areas of the brain in charge of proprioception [33] become much less active, because of the relative sensory deprivation the subject experiences. As such, we have the illusion (that one really is an illusion!) that our body is expanding and that its boundaries with the rest of the world are no longer sharp.

Other self & consciousness deniers refer to classic experiments with split-brain patients [34], where individuals with a severed corpus callosum behave as if they housed two distinct centers of consciousness, sometimes dramatically at odds with each other. Well, yes, but notice that we are now looking at a severely malfunctioning brain, and that moreover this sort of split personality arises only under very specific circumstances: cut the brain in any other way and you get one dead guy (or gal), not multiple personalities.

All of the above, plus whatever else we know about neurobiology, plus the too often discounted commonsense experience of ourselves, simply tells me that there is a conscious self, and that it is an important component of what it means to be human. I think of consciousness and the self as emergent properties (in the weak sense — I’m not making any strong metaphysical statements here) of mind-numbingly complex neuronal systems, in a fashion similar to the way in which, say, “wetness” is an emergent property of large numbers of molecules of water, and is nowhere to be found in any single molecule taken in isolation [35].

Now, that conclusion most certainly does not imply the rejection of empirical findings showing that much of our thinking happens below the surface, so to speak, i.e. outside of the direct control of consciousness. Here Kahneman’s now famous “two-speed” model of thinking [36] comes in handy, and has the advantage of being backed by plenty of evidence. Nor am I suggesting that we don’t confabulate, engage in all sorts of cognitively biased reasoning, and so forth. But I am getting increasingly annoyed at what I perceive as the latest fashion of denying or grossly discounting the idea that we truly are, as Aristotle said, the rational animal. Indeed, just to amuse myself I picture all these people who deny rationality and consciousness as irrational zombies whose arguments obviously cannot be taken seriously — because they haven’t really thought about it, and at any rate are just rationalizing…

The second thing I’m going to reiterate (since I’ve said it plenty of times before) concerns consciousness in particular. As many of my readers likely know, the currently popular account of the phenomenon is the so-called computational one, which draws a direct (although increasingly qualified as time goes by) analogy between minds and (originally digital) computers [37]. For a variety of reasons that I’ve explained elsewhere [38], I do think there are some computational aspects to minding (I prefer to refer to it as an activity, rather than a thing), but I also think that computationalists just don’t take biology seriously enough. On this, therefore, I’m with John Searle (and that’s quite irrespective of his famous Chinese room thought experiment [39]) when he labels himself a biological naturalist about consciousness. The basic idea is that — as far as we know — consciousness is a biological process, not unlike, say, photosynthesis. Which means that it may be bound not only to certain functional arrangements, but also to specific physicochemical materials. These materials don’t have to be the ones that happen to characterize earth-bound life, but it is plausible that they cannot be just anything at all.

I think the most convincing analogy here is with life itself. We can’t possibly know what radically different forms of life may be out there, but if life is characterized by complex metabolism, information carrying, reproduction, homeostasis, the ability to evolve, etc., then it seems like it had better be based on carbon or something with similar chemical flexibility. It is hard to imagine, for instance, helium-based life forms, given that helium is a “noble” gas with very limited chemical potentialities.

Similarly, I think, with consciousness: the qualitative ability to feel what it is like to experience something may require complex chemistry, not just complex functional arrangements of arbitrary materials. This is why I doubt we will be able to create conscious computers (which, incidentally, is very different from creating intelligent computers), and why I think any talk of “uploading” one’s consciousness is sheer nonsense [40]. Of course, this is ultimately an empirical matter, and we shall see. I am simply a bit dismayed (particularly as a biologist) at how the computational route — despite having actually yielded comparatively little (see the abysmal failure of the once much-trumpeted strong AI program) — keeps dominating the discourse by presenting itself as the only game in town (it reminds me of string theory in physics, but that’s a whole other story for another time…).

It should go without saying, but I’m going to spell it out anyway, just in case: none of the above should give any comfort to dualists, supernaturalists and assorted mystics. I do think consciousness is a biophysical phenomenon, which we have at least the potential ability to explain, and perhaps even to duplicate artificially — just not, I am betting, in the way that so many seem to think is about to yield success any minute now.

The whole shebang

Do the positions summarized above and in part I of this essay form a coherent philosophical view of things? I think so, even though they are certainly not airtight, and they may be open to revision or even wholesale rejection, in some cases.

The whole jigsaw puzzle can be thought of as one particular type of naturalist take, of course, and I’m sure that comes as no surprise given my long-standing rejection of supernaturalism. More specifically, my ontology is relatively sparse, though perhaps not quite as “desert like” as W.V.O. Quine’s. I recognize pretty much only physical entities as ontologically “thick,” so to speak, though I am willing to say that concepts, mathematical objects being a subset of them, also “exist” in a weaker sense of the term existence (but definitely not a mind-independent one).

My take could also be characterized as Humean in spirit, despite my rejection of specific Humean notions, such as the illusory status of the self. Hume thought that philosophy had better take on board the natural sciences and get as far away as possible from Scholastic-type disputes. He also thought that whatever philosophical views we arrive at have to square with commonsense, not in the strict sense of confirming it, but at the very least always keeping in mind that one pays a high price every time one arrives at notions that are completely at odds with it. In some instances this is unavoidable (e.g., the strange world of quantum mechanics), but in others it can, and therefore should, be avoided (e.g., the idea that the fundamental ontological nature of the universe is math).

Outside of Hume, some of my other philosophical inspirations should be clear. Aristotle, for one, at least when it comes to ethics and the general question of what kind of life one ought to live. Bertrand Russell is another, though it may have been less clear from what I’ve written here. Russell, like Hume, was very sympathetic to the idea of “scientific” philosophy, although his work in mathematics and logic clearly shows that he never seriously thought of reducing — Quine-style — philosophy to science. But Russell has been influential on me for two other reasons, which he shares with Aristotle and Hume: he is eminently quotable (and who doesn’t love a well placed quote!), and he embodied the spirit of open inquiry and reasonable skepticism to which I still aspire every day, regardless of my obvious recurring failures.

Let me therefore leave you with three of my favorite quotes from these greats of philosophy:

Aristotle: Any one can get angry — that is easy … but to do this to the right person, to the right extent, at the right time, with the right motive, and in the right way, that is not for every one, nor is it easy. (Nicomachean Ethics, Book II, 1109.a27)

Hume: In our reasonings concerning matter of fact, there are all imaginable degrees of assurance, from the highest certainty to the lowest species of moral evidence. A wise man, therefore, proportions his belief to the evidence. (An Enquiry Concerning Human Understanding, Section 10 : Of Miracles Pt. 1)

Russell: Men fear thought as they fear nothing else on earth – more than ruin, more even than death. Thought is subversive and revolutionary, destructive and terrible; thought is merciless to privilege, established institutions, and comfortable habits; thought is anarchic and lawless, indifferent to authority, careless of the well-tried wisdom of the ages. Thought looks into the pit of hell and is not afraid. (Why Men Fight: A Method of Abolishing the International Duel, pp. 178-179)

Cheers!

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[19] My philosophy, so far — Part I, by M. Pigliucci, Scientia Salon, 19 May 2014.

[20] Here is the last entry, you can work your way back from there.

[21] Metaethics entry in the Stanford Encyclopedia of Philosophy.

[22] By Sandel, see both: Justice: What’s the Right Thing to Do?, Farrar, Straus and Giroux, 2009; and What Money Can’t Buy: The Moral Limits of Markets, Farrar, Straus and Giroux, 2012.

[23] Deontological ethics, SEP.

[24] Consequentialism, SEP.

[25] Virtue ethics, SEP.

[26] John Rawls, SEP.

[27] Here is one of my favorite examples. And here is the obligatory SEP entry.

[28] See my paper with Maarten Boudry, Why Machine-Information Metaphors are Bad for Science and Science Education, Science and Education 20 (453):471, 2011.

[29] I was recently having an enlightening discussion about this with my friend Maarten Boudry, and we came up with another way to conceptualize in what sense, say, I could have hit a penalty kick that I actually missed (soccer, you know), whereas I couldn’t have written Hamlet. The idea is to deploy the logical concept of possible worlds (see the pertinent SEP entry). It should be obvious that — given exactly identical circumstances — I would have kicked the penalty in exactly the same way. But there is (in the logical sense of “is”) a nearby possible world in which the circumstances are different, say because I focused more on the task at hand, and I do hit the ball correctly, thereby scoring a goal. However, the possible world in which I write Hamlet is so distant from the actual world that it makes no sense for me to say that I could have written Hamlet. If you find value in logical counterfactuals, this way of thinking about free will is very helpful. If not, I’ll try something else some other time.

[30] See the following SEP entries: Causal processes, The metaphysics of causation, Causation and manipulability, Counterfactual theories of causation, and Causal determinism.

[31] On the causal completeness of physics, by M. Pigliucci, Rationally Speaking, 27 February 2013.

[32] On emergence, see a series of four essays I wrote for the Rationally Speaking blog.

[33] For the basics on proprioception, see the Wiki entry.

[34] See The split brain: A tale of two halves, by David Wolman, Nature 14 March 2012.

[35] Which is why, incidentally, I think Dennett’s famous model of consciousness as made possible by stupider and stupider robots all the way down to individual neurons is too simplistic. In the case of wetness, there is a level of complexity below which the property simply does not apply, and I think the same can be said for consciousness.

[36] Thinking, Fast and Slow, by D. Kahneman, Turtleback, 2013.

[37] The computational theory of mind, SEP.

[38] See the following essays from the Rationally Speaking blog: Philosophy not in the business of producing theories: the case of the computational “theory” of mind (29 July 2013); Computation, Church-Turing, and all that jazz (5 August 2013); Three and a half thought experiments in philosophy of mind (6 September 2013).

[39] The Chinese room argument, SEP.

[40] See David Chalmers and the Singularity that will probably not come, Rationally Speaking, 5 October 2009; and Ray Kurzweil and the Singularity: visionary genius or pseudoscientific crank?, Rationally Speaking, 11 April 2011.

286 thoughts on “My philosophy, so far — part II”

  1. I just get confused, in these scientific areas, when people say “we should be able to do X eventually.” Being a complete, lay outsider in science, I don’t know heads or tails about the details, but I observe that it’s often pretty hard to be sure of the actual results that scientists have achieved right now, let alone what they might achieve eventually (which might be centuries from now, some say).

    For example, cosmologists got very excited about the apparent recent discovery about gravity waves and inflation in the beginning of the universe, but most of them were careful to caution us that this apparent discovery wasn’t yet confirmed, and that we should keep the champagne corked until it became a little clearer whether it was really a discovery or a mistake.

    Similarly, enthusiasts for faster-than-light travel point to a few people with good scientific credentials who are contemplating the possibilities in this area, but even those researchers (most of them, anyway) are quick to point out that there is no actual empirical evidence that there’s even a ghost of a chance that it is physically possible.

    “If nature can do it then we should be able to, eventually.” Yes, but where’s the hard evidence backing up that principle? There are zillions of things nature does that we’re nowhere near being able to do now, and there’s no good argument that I know of that shows that nothing will ever be discovered that would prevent us from doing everything nature does (with or without requiring huge amounts of energy).

    Scientists are always telling us lay people that humanity doesn’t even know what it doesn’t know. That makes sense to me.


  2. Hi Massimo,

    I completely understand, don’t worry. You are touching on a lot of points here, which is why I said it would be nice if you revisited them in more detail. I know you feel you have answered my points before, but the way it looks from my point of view is that you answer me once or twice, appearing to miss my point or failing to answer the critical issues, and then move on because the next post comes out or you hit your limits for how much time you are willing to devote to responses.

    If you want me to try for more concision, the crux of my post is the eight points where you seem to be inconsistent. Answer any of these you feel could be most productively answered. Here they are again:

    (1) Mathematical objects exist, but they do not exist mind-independently, yet mathematical objects (say tic tac toe) can be discovered independently by separate minds, so how are they not mind-independent?

    (2) You agree that mathematical objects can have relations without relata, but you deny that the physical world can be a mathematical object because it doesn’t make sense for it to have relations without relata. This is circular.

    (3) You insist that your morality is not founded on feelings such as revulsion, and yet you later concede that it is (ultimately at least) founded on evolved instincts.

    (4) Your naturalism is in tension with your view that human decision-making is fundamentally unlike the kind of decision-making that might be achieved by a computer simulating the natural processes occurring in a brain. Furthermore, you say you agree with Dennett’s account of compatibilist free will, but this account of free will is compatible with decision-making by computers.

    (5) You compare the products of consciousness to sugar, but you don’t acknowledge that sugar is a substance while consciousness (or the control and coordination it enables) is not. (The signals and chemicals produced by a brain can also be produced by a computer with some simple hardware attached, so this is not an adequate response.)

    (6) You don’t give much of a positive argument for why a computer could not be conscious, but you do present (dubious) reasons why it is not necessarily true that it could. Agnosticism would seem to be more reasonable than outright anti-computationalism.

    (7) Naturalism implies brain function could be simulated. Such a brain would effectively believe itself to be conscious (reporting that it conscious and so on). This shows a tension between your view that consciousness cannot be an illusion but that computers cannot support consciousness.

    (8) Following from (7), naturalism but anti-computationalism means it is possible to have all the outward behaviour of a conscious entity without being conscious. How come we are conscious then? What was the point of evolution producing consciousness if it was not intimately bound with the intelligence of human brains?


  3. Yup, I agree. Computationalism is dualism, at least to a first approximation, but it’s a respectable sort of dualism. It’s really the same sort of dualism as Platonism, since software is just an abstract mathematical object.

    Of course ultimately it becomes Monism once again once you realise that the physical world is just another abstract object on the MUH.


  4. It is in terms of meta-ethics [21] that I am a quasi-realist (or a bounded instrumentalist). I don’t think that moral truths exist “out there,” independently of the human mind (sorry, Kant!), which would be yet another example of Platonism (akin to the mathematical / ontic ones we encountered last time).

    Independent moral truth would not be like Platonism because the Platonic realm is not defined as having empathetic consciousness and morality is meaningless in any other context than empathetic consciousness (this is one thing that Sam Harris says with which I completely agree).


  5. If you will permit me to copy over a famous back-and-forth from the suttas, this may at least help shed a little light on the Buddha’s thinking about self. This is from the Saṃyutta Nikāya 44.10 (trans. from SuttaCentral):

    Then the wanderer Vacchagotta approached the Blessed One … and said to him:
    “How is it now, Master Gotama, is there a self?”
    When this was said, the Blessed One was silent.
    “Then, Master Gotama, is there no self?”
    A second time the Blessed One was silent.
    Then the wanderer Vacchagotta rose from his seat and departed.

    Then, not long after the wanderer Vacchagotta had left, the Venerable Ānanda said to the Blessed One: “Why is it, venerable sir, that when the Blessed One was questioned by the wanderer Vacchagotta, he did not answer?”
    “If, Ānanda, when I was asked by the wanderer Vacchagotta, ‘Is there a self?’ I had answered, ‘There is a self,’ this would have been siding with those ascetics and brahmins who are eternalists. And if, when I was asked by him, ‘Is there no self?’ I had answered, ‘There is no self,’ this would have been siding with those ascetics and brahmins who are annihilationists.
    “If, Ānanda, when I was asked by the wanderer Vacchagotta, ‘Is there a self?’ I had answered, ‘There is a self,’ would this have been consistent on my part with the arising of the knowledge that ‘all phenomena are nonself’?”
    “No, venerable sir.”
    “And if, when I was asked by him, ‘Is there no self?’ I had answered, ‘There is no self,’ the wanderer Vacchagotta, already confused, would have fallen into even greater confusion, thinking, ‘It seems that the self I formerly had does not exist now.’”


  6. Great, so you agree that (a) mental representation is necessary, for anything that is a candidate for “believing that P”; (b) we have no reason to believe that computers have the capacity to mentally represent propositions; and therefore (c) that whatever it is you are describing, it is not belief.

    I am happy to give you schmeliefs, although I suspect that what people working in this area are really interested in is whether or not a computational account of beliefs is possible, and there’s no way to get there without a computational account of mental content. I’d even be inclined to be generous on this front if one could provide merely a plausible *functionalist* account of mental content, which is a lower hurdle to jump.


  7. I don’t know. How is it making the sugar? If it is in some way analogous to biological photosynthesis it might be reasonable to call it a simulation of biological photosynthesis. If photosynthesis is defined as synthesizing sugar with photons, then it’s not even a simulation, it’s actual photosynthesis.

    But asking whether this is really photosynthesis is as futile as asking whether a submarine can swim. If by photosynthesis you very explicitly mean implementing the Calvin cycle in an actual chloroplast in a living cell, then your device is not actually photosynthesizing. Similarly, if you narrowly define consciousness to mean the function of a biological brain, then an artificial brain cannot be conscious, by definition. If you take a more reasonable definition I don’t see why it couldn’t be.


  8. Hi Aravis,

    “(a) mental representation is necessary, for anything that is a candidate for ‘believing that P’;”
    I kind of agree but not really. I believe that computers have actual beliefs supported by primitive informational representations, which I see as continuous with more complex mental representations. But if you don’t buy that, you can understand me to be talking about “schmeliefs” when I talk about what a computer believes (e.g. what is represented in its database).

    (b) “we have no reason to believe that computers have the capacity to mentally represent propositions;”

    I kind of vaguely agree that current computer systems have no capacity to represent propositions mentally, because I think their representations and information processing are too simple to justify the use of the adverb “mentally”. However I do think that their representations are continuous with the kinds of representations in human minds, and they are sufficient to justify the usage of the word “belief”.

    “(c) that whatever it is you are describing, it is not belief.”

    No, I don’t agree with this, because I don’t think belief requires a sophisticated mental representation. I think a belief that P only requires that P be represented in some way in the system such that the system behaves in a way consistent with P, e.g. reporting P to be true or taking P into consideration when choosing a course of action. But you don’t need to buy this to understand what I mean when I discuss what a computer system believes or knows.

    “I’d even be inclined to be generous on this front if one could provide merely a plausible *functionalist* account of mental content, which is a lower hurdle to jump.”

    I don’t expect you to be swayed, but I have at least convinced (or deluded) myself into thinking that such a plausible account exists in the links I posted above.

    Here they are again:

    http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-semantics-from.html
    http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-meta-meaning.html


  9. Also, I made the point that Platonism entails the existence of all mathematical objects, whereas moral realism is typically construed as the existence of one objectively correct moral framework to the exclusion of all others. Moral anti-realism is just another way of saying that all moral systems exist and are equally valid from a dispassionate objective standpoint.


  10. OK, so how is the benefit of an ethical statement measured? By whom? Over what period of time? Who are the stakeholders? It seems the dispute is solely about semantics – in academic English, mainly.

    Biology teaches us that the only “good” is more reproductive success, which leads to more reproductive success as the ultimate “good.” So are there core principles of right/wrong independent of biology? If so, how can they be reconciled if in conflict?

    Where is the evidence that the “human mind” even exists? In language?

    No Free Will
    Ah, so there is a strong human exceptionalism POV in all this. Fair enough.

    “it seems to me self-evident that we make choices or take decisions” OK, then it should be easy to find and prove in the lab. On all actions? Every key typed? How long does each of those take?

    Will stop here. There are no questions or counterarguments if human exceptionalism is presumed and no evidence against it is allowed.

    On the idea of human consciousness and felt experience being meaningful – how can it ever be separated from self-reports using language?


  11. Well, any prediction about what we might or might not eventually be able to do is uncertain. If you want iron certainty, don’t make any predictions. But where’s the fun in that? Once we decide that we will attempt it, the principle of whether or not it’s physically possible becomes an important one.

    Now, I’ll grant that we might not ever be able to make our own star, or our own black holes, which is why I specified the energy qualification. But if birds can fly, we can fly. If electricity can travel through the air and metals (lightning), we can harness it. If bats can use echolocation, we can build devices that can do it too. And if nature can build thinking brains, I see no reason why we won’t be able to eventually, at least short of discovering that there is a non-physical, irreducible, unanalyzable, mysterious ghost component, which we have no indication of at this point.


  12. The conversation in this passage does not suggest to me that Gotama is advocating the existence of a permanent self. He describes the inherent, unavoidable contradiction that either answer would imply. So he is silent on the question; he seems to me to say Mu.

    I believe that the 2nd Buddha (Nagarjuna) would respond with four answers, applying the Catuskoti.
    1) Yes, there is a self
    2) No, there is not a self
    3) There is neither a self nor not-self (which seems to be Gotama’s answer in the above passage)
    4) There is both self and not a self

    This makes interpretation of Nagarjuna difficult, but I like Graham Priest’s take, which is too detailed to cover here as I am leaving work now. According to Priest, the logic of the Catuskoti allows for and suggests an approach for dealing with such contradictions.

    Anyway, my point is that Buddhist thought on this topic is, I think, quite varied, and not all versions include a permanent self.


  13. “No, I don’t agree with this, because I don’t think belief requires a sophisticated mental representation.”
    —-
    You’re going to need some sort of argument, here, because virtually everyone working in the philosophy of mind thinks that propositional attitudes involve mental representation.

    And I should say that I *did* read your two links. Having had almost as much graduate training in linguistics as in philosophy, I can only say that your assertion that syntax=semantics is just plain wrong. Semantics, at a minimum, requires reference, whereas syntax involves nothing more than form.


  14. Massimo, thank you for all this work. Well done.

    “Of course, this [consciousness?] is ultimately an empirical matter, and we shall see about it. ”

    What would you need to see to decide that consciousness existed in a man-made machine?

    “I am simply a bit dismayed (particularly as a biologist) at how the computational route […] dominating the discourse by presenting itself as the only game in town.”

    When it seems, apparently to many people, that the computational route is sufficient for explanation, and there is no other explanation that is sufficient, how could it not be the only game in town?

    Keep up the good work.

    James


  15. “Hard determinism is the idea that physical determinism (the notion that the laws of physics and the universe’s initial conditions have fixed every event in the cosmos since the Big Bang) is true and therefore free will is impossible.”

    This is a genuine philosophical statement but is totally wrong in this ‘physical’ universe because of the following equation.

    Permanent confinement = Total freedom

    This is a ‘physics’ equation and cannot be evaluated philosophically. After all, there is something physical which is beyond the reach of philosophy. Furthermore, this equation goes much beyond the ‘asymptotic freedom’ in the strong force (while it is indeed a good example). In fact, the concepts of free-will, self, consciousness and intelligence must be described by (or linked to) the physics ‘laws’. Of course, it will take 100 to 1,000 years from now for physicists to ‘probe’ those concepts with any gadget. This is why I have shown the other epistemology-tools. With these tools, we can discuss the ‘physics’ of this equation.

    All BSM (beyond the Standard Model) theories (such as SUSY, M-string theory, multiverse, etc.) are theories which are ‘extensions’ of the SM but are not ‘languages’ describing the SM. Now, I want to show a new physics by using the second part of the “Principle of ‘necessary true’: the ‘language’ is necessary true if the ‘system’ which is described by that language is an established knowledge.”

    Although the SM is known to be incomplete, its quark/lepton structure is accepted as a fact of nature. The following is not a BSM but is a ‘language’ for that structure.

    A G-string language (symbolic representation) consists of three different line-strings (vocabulary). And, each string carries a (½ ħ).
    Line-string (1) = (r, y, b 1)
    Line-string (2) = (r, y, b 2)
    Line-string (3) = (r, y, b 3)

    Every line-string has three nodes (or chairs), and each node can be symbolically represented with two symbols, V and A (alphabets).
    V is transparent and carries 0 electric charges.
    A is opaque and carries 1/3 electric charge.

    With them, there are some rules (theorems or grammar) for this language system.
    1. (V, V, V) = (r, y, b) = white = colorless, as V is transparent.
    2. (A, A, A) = colorless = white, as A is opaque.
    3. (V, A, A) = (r, A, A) = red, (A, V, A) = yellow, (A, A, V) = blue
    4. (V, V, A) = (r, y, A) = blue (complement of r + y)

    With the above language, all 48 known quark/lepton particles can be ‘described’, as below,

    String 1 = (V, A, A 1) = {1st , red, 2/3 e, ½ ħ} = red up quark.

    String 2 = (-A, V, V 1) = {1st , red, -1/3 e, ½ ħ} = red down quark.

    String 3 = (A, A, V 1) = {1st , blue, 2/3 e, ½ ħ} = blue up quark.

    String 7 = (A, A, A 1) = {1st, white (colorless), 1 e, ½ ħ} = e (electron).

    String 8 = (V, V, V 1) = {1st, white, 0 e, ½ ħ} = e-neutrino.

    String 9 = (V, A, A 2) = {2nd, red, 2/3 e, ½ ħ} = red charm quark.

    String 48 = -(V, V, V 3) = {3rd, white, 0 e, ½ ħ} = anti-tau-neutrino.

    As this part of the Standard Model (the 48 matter particles, the quark/lepton structure) is an a-knowledge (already proven), its ‘describing’ language (not a theory) must be ‘necessary true’, and there can be no argument about it. Are physicists ever able to probe the meaning of those symbolic alphabets (V and A)? This is really a non-issue, as it is only a ‘language’, not a theory.

    Yet, with this language, both proton and neutron can be ‘described’ as glider of the game of Life while that glider is a base for constructing a Turing computer (see http://www.prequark.org/Biolife.htm ). Now, we have found that a computing device is embedded in the elementary particles. And, this provides the first linkage to the high level bio-intelligence with the laws of physics. So, the next task will be the linkage of consciousness with the laws of physics. After all these, the issue of free-will can also be addressed with the laws of physics.


  16. James,
    your example is not even remotely like strong AI.
    It is just clever programming. QED.

    As a computer professional I admire the clever programming that has solved a complex problem but I also recognize that it is just a rote mechanical process. The cleverness of the programmer does not automagically turn a rote mechanical process into intelligence.


  17. DM,
    “However I do think that their representations are continuous with the kinds of representations in human minds, and they are sufficient to justify the usage of the word ‘belief’.”
    You have to justify that statement, it is far too broad and makes sweeping assumptions.
    1) How do beliefs happen in our minds?
    2) What exactly is a belief?
    3) Compare that with the representation in a computer.
    4) In what way are they continuous?
    5) To be continuous they must be of the same kind. Can you show they are of the same kind?
    6) To be continuous the kind must be able to vary by degree. How does this happen?

    Read Aravis’ statement again. It is a terse, precise, complete and definitive statement of the problem. You have not answered it.


  18. DM,
    “I can only say that your assertion that syntax=semantics is just plain wrong. Semantics, at a minimum, requires reference, whereas syntax involves nothing more than form.”

    Once again Aravis hits the nail on the head with his terse and precise statements. Form does not contain meaning or belief. The computer contains symbols (form) which it merely manipulates and that becomes another symbol or form. Becoming another form has not solved the problem, it still does not contain meaning or belief.


  19. Coel,
    the difference is that we have no understanding of how the brain forms beliefs, and therefore we have no way of reproducing that process in another medium. We have no way of knowing that this process in a natural medium (the brain) can be reproduced in an artificial medium. So far it is plain wishful thinking that has produced no results.


  20. Then consider persistence of vision as an alternative hint of what illusion means. Or confabulation of memory. Or the sensation of emotions in the viscera. Or the confusion of taste misled by visual cues (or simply the confusion of taste and smell.) Visual errors when something is misidentified. Epiphenomenalism doesn’t come into it, unless you insist that if it’s just an illusion, it can’t have any importance. But the sensorium, by itself a major component of the consciousness, is “just” an illusion, an appearance, absolutely essential to locomotion. It is “just” a point of view, not Dennett’s Cartesian theater. Point of view is pretty much synonymous with consciousness. There is in truth no “just” that isn’t put there by religious and philosophical prejudices.

    And besides, we are at our most self-conscious, undistracted by outside influences and aware only of interior states, when we are dreaming. I suppose if you believe in a rational soul as the essence of self-consciousness this seems absurd. What really is absurd is the idea that we are only most self-conscious, aware of our interior state, when we are awake, engaged with the exterior world, and that this means there is some vague, undefined sort of entity called the ego or mind. I can only attribute this to the astonishing power of religion and philosophy to sow confusion.

    Chains are constraints. Neither a whole person nor the mystical will are free when they are constrained. Breaking habits or creating habits, changing your attitudes, forbidding yourself to panic or to sleep, choosing to deny emotional needs, deciding to change your desires are examples of the acts of volition that really matter. The constraints on them are exceedingly important. Choices made to achieve ends may be acts of volition but there is a sense in which they are not free will. The “free” will worth having is the one which conforms our choices to our reason. Doing what we want is obedience, doing what we think we should is will. Daily life has made it plain that simply deciding between two gratifications tests the limits of our so-called free will.

    I believe compatibilism is wrong for three big reasons. One, it’s not true. There is little control over our desires, and what we have is costly, not free. Two, any notions of pragmatic sanctions based on this false conception I believe will be unable to even judge which sanctions are necessary and effective. I know of no theory of moral responsibility which does not pragmatically advocate exemplary punishment (or adulation.) There is no concept of excessive punishment or reward. Worst, compatibilism implies that resistance or obedience to norms is always willful, to which the pragmatic response is to increase the sanctions. Three, efforts by individuals to live up to the moral responsibility enjoined by compatibilism sometimes constitute a kind of psychological violence, self alienation. I think there’s something ugly and nasty at the root.


  21. Hi Aravis,

    Beliefs certainly need representation. I would not say that everyone thinks they need ‘mental’ representation, which implies that beliefs can only exist in a mind. Dennett, for instance, defines beliefs in terms of the intentional stance, saying that a belief exists in an entity when it is useful to imagine that it believes in order to understand its behaviour.

    “I can only say that your assertion that syntax=semantics is just plain wrong. Semantics, at a minimum, requires reference, whereas syntax involves nothing more than form.”

    It’s not plain wrong, Aravis, not if you understand the post. If I’m wrong it is for subtle and sophisticated reasons which will take a little bit more explaining before I agree with you.

    Of course I know there is a definitional difference between semantics and syntax, but the idea that the two can never be unified is the whole reason for the confusion about how semantics can come from syntax. If we see how to unify the two, one of the greatest mysteries in consciousness research will be solved. This is not a problem for AI alone, but also for understanding biological brains. If you are a naturalist, you must agree that there must be some solution, and you can bet it will be unintuitive, so proposals should be considered with an open mind.

    And I think the approach I outline is the correct one. (Not that it is originally mine or anything — I believe the view I present here is somewhat mainstream within the AI community although I’m not sure I can back that up with references).

    Concepts in your head have semantics. Words on a page are just symbols or syntax. When we say the words have semantics, we really mean that you can associate those symbols with mental concepts, so semantics are really all about evoking concepts in minds. We therefore say that these concepts are referenced by the words.

    OK, so that’s what it means for words to have semantics. What about the mental concepts themselves?

    We imagine that these refer to objects in the world (or abstract objects, but let’s leave that for now). That’s certainly one way of looking at it. Another way of looking at it is that they refer only to each other, but that they are correlated with objects in the world, such that seeing the visual image of a tree evokes the activation of the corresponding mental concept and so on. I think this correlation is really all that associates mental concepts with the world, and such correlation is no mystery as we can achieve the same kinds of correlations with computer systems.

    But if semantics are about evoking concepts in minds, then concepts in minds necessarily have semantics by definition. The job of unifying semantics and syntax is therefore the job of understanding how they can also be viewed as a kind of syntax or formal structure.

    I would suggest that you cannot really think about an actual tree, you can only think about your mental concept of a tree, a subtle distinction but one which I think may be important as we try to unify semantics and syntax. So, when you learn that Bob loves trees, you are not really learning to associate ‘Bob’ himself with a ‘love’ of actual ‘trees’, but you are creating links between your symbols for ‘Bob’, ‘love’ and ‘tree’. In this view, none of your mental concepts need to refer to anything in the real world (as indeed sometimes they do not). They refer only to each other, but correlation with the real world is enough to make them useful.

    Syntax is only form, but there is much we can understand and think about with even bare forms, if only how different meaningless symbols relate to each other. This understanding can be thought of as the ‘formal semantics’ of a syntax, so not what the symbols refer to but what they mean in terms of the context of the formal structure alone. This kind of limited semantics is inherent in the syntax and can be inferred without any explanation of what concepts the syntax is supposed to represent.

    It is my view that the web of symbols in your head is so sophisticated, so complex, that the formal semantics inherent in this web accounts for all your understanding. Everything you know about a tree, every associated sensation or emotion, is represented as the relationships between these symbols and the meaning comes from the richness of this web and the correlations with the outside world which are made possible by the integration of sensory data. The semantics of your understanding is in my view nothing more than the formal semantics of an incredibly complex web of symbols, any of which taken alone is meaningless and references nothing (but the activation of which may be correlated with things in the outside world).
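    As an aside, this picture is easy to caricature in code. Here is a deliberately crude sketch (entirely my own illustration; the symbols and links are made up) of a web in which a symbol’s “meaning” is nothing but its relations to other symbols, plus a correlation between a sensory cue and a symbol:

```python
# Crude sketch (my own illustration): a web of symbols whose only
# "meaning" is their relations to other symbols in the web.
web = {
    "tree": {"is_a": "plant", "has": "leaves"},
    "Bob": {"loves": "tree"},
    "plant": {"is_a": "living-thing"},
}

# Correlation with the world: a sensory cue activates a symbol,
# without the symbol "referring" to anything outside the web.
sensory_correlates = {"green-branching-shape": "tree"}

def formal_semantics(symbol):
    """What a symbol 'means' within the formal structure alone:
    just its relationships to other symbols."""
    return web.get(symbol, {})

# Learning "Bob loves trees" links the symbol 'Bob' to the symbol
# 'tree', not Bob himself to actual trees.
print(formal_semantics("Bob"))  # {'loves': 'tree'}
```

    The real web in a head would of course be unimaginably richer, but the principle at issue is the same: the entries refer only to each other, and correlation with sensory input does the rest.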


  22. Hi Aravis,

    “So you don’t believe that moral judgments/imperatives are normative?”

    No I don’t. There is no objective standard of morality (which has been pretty obvious since Euthyphro, and is pretty obvious if you think about it from a biological perspective: the human moral system is just cobbled together to do a job in the same way that our aesthetic system or our immune system is); all there is is human feelings and opinions.

    The idea of moral realism is a complete red herring, and is likely an illusion programmed into us by evolution to make our moral system more effective by seeming to give our opinions greater weight (being an unwarranted extrapolation from “I want” to “we should want” or “God wants”).


  23. Hi labnut, just to clarify, I do not maintain that the database has a deep understanding of Irishness. It does not know about St Patrick’s day. It does not think I am likely to be alcoholic or catholic or pugnacious. It does not associate me with the colour green or with the Blarney Stone or with literary greats like Beckett, Yeats, Wilde or Joyce.

    Neither does it really understand the concept of nationality. It doesn’t understand cultural identity or nationalism or nation states.

    Nevertheless, I think it is reasonable to say that it believes that my nationality is Irish. What I mean by this is that it believes that the value of the “nationality” property of the object “DM” is the sequence of characters “Irish”.

    This is a very sparse, limited, dry sort of fact, but I do not think it is distorting the meaning of the words to say that the computer system believes this fact.
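    To make the sparseness concrete, here is a toy sketch (the names and structure are my own illustration, not any particular system) in which “believing that P” amounts to nothing more than P being represented in the store:

```python
# Toy illustration (hypothetical names): a "belief" as a stored property value.
database = {"DM": {"nationality": "Irish"}}

def believes(db, subject, prop, value):
    """The system 'believes that P' iff P is represented in its store:
    here, that the given property of the subject equals the given value."""
    return db.get(subject, {}).get(prop) == value

# The system "believes" my nationality is Irish, in this sparse sense...
print(believes(database, "DM", "nationality", "Irish"))  # True
# ...while representing nothing else about Irishness at all.
print(believes(database, "DM", "favourite_colour", "green"))  # False
```

    Whether that thin, behavioural sense deserves the word “belief” is, of course, exactly what is in dispute.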


  24. Massimo,
    “I don’t think I need a concept of telos, since it doesn’t play a part in my worldview. As for eudaimonia, the Aristotelian view is good enough as a first approximation: it means living a fulfilling, purposeful, moral life. Am I missing something?”

    I want to suggest that you do have a telos, in a narrower, restricted sense. Your telos is bound up, as far as I can see, in your role as a philosopher and educationist. It is this that gives focus to the virtues that exemplify your life. But hey, you are the expert on your life 🙂

    “it means living a fulfilling, purposeful, moral life. Am I missing something?”
    Agreed. No, you are not missing something. Your reply is necessarily broad and brief because you have so much material to cover (including our indefatigable DM 🙂 ).

    Julia Annas (Intelligent Virtue) maintains that the virtues are the template for flourishing. I agree with that but think it comes perilously close to being a circular definition. Moreover, such a sweeping definition makes understanding of eudaimonia more difficult.

    DM dismissively called the virtues a ‘laundry list’. I think he is missing something important. Our collective experience as a human species has revealed that certain character traits are of foundational importance to human flourishing or eudaimonia.

    These foundational character traits can be expanded into all of the many virtues we recognise. I maintain these foundational character traits collectively define what it means to be flourishing.

    They fall into three groups,
    A) other regarding,
    B) self regarding and
    C) balanced rational self-interest, which mediates between other regarding and self regarding values.

    A. Other Regarding
    1) Otherness, Ubuntu – I am because of you, collaboration, cooperation, responsibility, duty, tolerance, fairness, recognition, affirmation, consideration, law abiding.
    2) Compassion/empathy – I feel your pain, loss, harm and am motivated to prevent it or help.

    B. Self Regarding
    3) Excellence, we have an innate need to do things well. We admire excellence.
    4) Joy, we need to experience joy, wonderment, exhilaration and awe.
    5) Beauty, we have a deep need to experience beauty in many forms.
    6) Love, we need to give and receive love in its many forms.
    7) Creativity, we have an innate need to create and be creative.

    C. Balance
    8) Rational self-interest, I balance others’ interests with my own.
    9) Moderation, restraint, discipline, self-control, frugality, the golden mean.

    I believe, then, that these nine foundational virtues, taken together, define what it means to be flourishing. Unlike DM, I think they are not a laundry list or, as others maintain, merely a subjective starting point. I think they are the result of careful examination, by fine minds, of our collective experience, and represent our best understanding of who and what we are. They represent the best we can be.


  25. The invention and reinvention of tic-tac-toe can have a physicalist basis without introducing anything nonphysical: human brains (with their natural computational abilities) are game inventors, so it is not surprising that games with simple rules are reinvented.


  26. DM, I think that’s unfair. You understand well the difference between soft and hard AI programs, and you also know that the latter is dead; it hasn’t accomplished anything for decades. And, unfortunately, it is the one that matters for your purposes. This doesn’t mean it may not get started again in a new direction, but it seems to me that you have to face that fact; it’s the intellectually honest thing to do.


  27. Our intuition tells us that subjective experience, mediated only by language, is the most powerful cause of behavior, that we have free will, that gods exist, etc. It also tells us the earth is flat.

    How is one dead wrong intuition different from any other?


  28. Hi labnut,

    1) In human minds, beliefs are how the state of the world is represented. This is, in my view, no different from how connectionist computational models such as neural networks represent world state. In humans, beliefs are formed mostly by how models held in the mind are found to predict or explain sensory data, but they may also be arrived at by deduction from other beliefs. Some beliefs are instinctive or intuitive.
    2) A belief is a proposition that is held to be true by an information processing system such as a person or a database.
    3) The same.
    4) The only difference between a mental representation and a data representation is that the adjective ‘mental’ does not pertain to computer systems which are too simple to regard as having minds. The difference between small and large is also continuous, but I would not call a large object ‘small’ (unless in comparison to a larger object, I guess).
    5) I have argued that they are the same kind on my blog, and in particular in my reply to Aravis, which I think is perhaps clearer and reflects more mature thoughts on the subject than my blog does.
    6) Chiefly complexity of the system but also organisation. I think the mind is just a very sophisticated, complex information processing system with a certain organisation granting it abilities to deliberate, respond to the environment, introspect and so on. The representation of beliefs held by such a system can be called mental, but I would not use the adjective mental to describe anything about a simple information processing system even though I don’t really think there’s much of a difference in what a belief is in either case.

    Read my answer to Aravis’s assertion and come back to me.


  29. It’s not plain wishful thinking. Right or wrong, there is a reason to believe that a computer ought to be able to do whatever information processing a brain can do, because there is no reason to suppose that there is any fundamental principle prohibiting the simulation of a biological brain on a computer.


  30. Hi labnut,

    “DM dismissively called the virtues a ‘laundry list’. I think he is missing something important. Our collective experience as a human species has revealed that certain character traits are of foundational importance to human flourishing or eudaimonia.”

    Then we are in complete agreement. Consequentialism is about promoting human flourishing (or eudaimonia, I guess, although consequentialists tend not to use this term). In my view, the virtues are decided with this goal in mind and are a means of pursuing it. This is precisely why they are not a ‘laundry list’. They are only reduced to a laundry list if we ignore the consequentialist justification for them.


  31. No, it is not surprising at all. However, the belief that these games exist, but only mind-dependently, is incoherent.

    If you invent a game and I invent the same game, when did the game spring into existence? Who is the actual creator of the game?

    I do not think that the same thing can be created twice, and I don’t think that it makes sense to imagine that something springs into existence when you create it but not when I do the exact same thing a millisecond later. The only resolution I can think of is that it exists eternally and both acts of invention are in fact acts of discovery.


  32. DM, I think that’s unfair.

    OK, I’ll explain. I think that it is wrong-headed to think we can go directly for Strong AI. I don’t know how that idea even makes sense. It’s like trying to find a cure for all diseases by finding a magical potion.

    If we’re going to cure all diseases and achieve immortality (not that we ever will, or should), it is going to be by tackling one disease at a time. Each problem solved brings us one step closer.

    This is how I see AI. There can be no such thing as a strong AI research program because strong AI is just the summation of all the problems that need to be solved to augment the information processing powers of computers so as to equal those of humans. In my view, each problem solved is progress made towards that goal.


  33. How is human flourishing defined and measured, by whom, and over what period of time? Is getting everything you want, as much money and as many children as you want, flourishing?

    Start with animal flourishing, monkey flourishing: what would that look like? Should animal flourishing be included? For some people, clearly it should. Is flourishing defined by being in a good mood all the time? By everyone having the same money, things, and opportunities?

    Is there no cost to flourishing? Who and what should bear those costs?

    Is flourishing even a sensible ideal?


  34. Aravis,

    “If this is what you mean, then you are traveling pretty far away from what is commonly understood by ‘obligation’ and by a theory of obligation.”

    Yes, but I don’t see any sensible way to recover a strong sense of moral obligation — remember, I don’t think moral truths are out there any more than numbers are.

    “it’s pretty hard to see how one preserves the normativity of moral judgments and imperatives on a view like this.”

    In the same way in which one preserves the normativity of rationality: IF you want to act rationally THEN you should avoid committing logical fallacies.

    “All the other potential values and norms that one looks to ethical theory to ground — say, justice or liberty or “flourishing” — cannot plausibly be grounded in a purely biological description of human nature.”

    I’m not sure where this comes from. To begin with, I don’t go for a purely biological description of human nature, unless you count culture within biology. Second, it is, for instance, human nature to want to live harmoniously within a social group, to be treated fairly, to be able to pursue one’s own goals, and so forth. Indeed, we probably share at least the rudiments of these attitudes with other social primates.

    “as MacIntyre points out, is that we could treat persons as irreducibly social – that is as being partly defined by the social connections of individuals”

    See above, I don’t have a problem with that, depending on how exactly this is cashed out.

    “wondered if it would sit well with some of the other commitments you want to make, specifically, re: the Self and also, re: Rawlsianism.”

    Don’t see why not. One doesn’t need to deny the self in order to acknowledge that we are social beings (we are not ants, after all!). And Rawls’ philosophy is eminently social.

    SelfAware,

    “It’s why I was careful to say “substance dualism”. In my view, it’s this type of dualism (property dualism?, hardware/software dualism?) that makes uploading at all plausible.”

    Yes, that distinction is crucial. But as I wrote a number of times, I see no reason to endorse property dualism either, at this point.

    doug,

    “the self is instead a kind of cognitive construct out of those objects. Now, on your terms that might make it a kind of ’emergent property’, although not one that is experienced.”

    Yes, I’m fine with that. But while Hume is usually taken to deny the existence of the self, he certainly didn’t characterize it as an emergent property (which would have been stunning, given that the concept wasn’t around then!).

    “the Buddha explicitly rejects this move, I think because within his context again the self (ātman) was something that was unchanging, permanent, etc., and the aggregates are basically a causally connected bundle of mental and physical events.”

    All the Buddha has to do is to give up the idea that the self is definitionally permanent and we are home free.

    “to be fair the Buddha’s ‘self’ was thicker in certain respects than Hume’s, in that it was the origin and target of karmic reward and punishment.”

    For that to be the case, wouldn’t the self have to be pretty darn permanent?

    Coel,

    “The concepts of being “interesting” or “positive” and of humans “flourishing” can only be grounded in human opinions and feelings.”

    Sure, morality wouldn’t exist without humans capable of having opinions and feelings. My analogy with math was just that, an analogy — meant to highlight the similarities in reasoning, not to make the stronger claim that morality is just like math.

    Thomas,

    “I would envision Buddha as suggesting that a view of self as something persistent or enduring to be a mistake resulting from conceptual limitation.”

    But as I said above, I don’t think of the self as enduring and static. Also (see my comment above), I actually think the Buddha enters into some kind of tension when he needs an enduring self for karmic reasons.

    “There is reason to think that the difficulty of getting someone to change a belief, even when confronted with overwhelming evidence that a held belief is irrational/wrong/incorrect/immoral, is related to the importance of the belief as reflective of the other’s supposed and rigid sense of self-image or identity.”

    For sure, but see this recent article that explains just how much we can actually do to change people’s mind rationally, and hence why the whole “we are rationalizing animals” thing is a bit overblown: http://goo.gl/1gzhZE

    DM,

    “If you want me to try for more concision, the crux of my post is the eight points where you seem to be inconsistent.”

    Oh, only eight? Well, then it ought to be easy to clear up… 😉

    “Mathematical objects exist, but they do not exist mind-independently, yet mathematical objects (say tic tac toe) can be discovered independently by separate minds, so how are they not mind-independent?”

    There are many instances in the history of humanity when people have hit on the same useful idea multiple times, without having to bring heavy ontological commitments. In a sense, even scientific theories fall into this category: while natural selection occurs “out there,” the theory of natural selection is a human construct to make sense of the world as we observe it. Plus, the wheel. Or chairs.

    “You agree that mathematical objects can have relations without relata, but you deny that the physical world can be a mathematical object because it doesn’t make sense for it to have relations without relata. This is circular.”

    Not if you distinguish between physics and math, which you don’t. Besides, even in math relations have to be between something, in particular dimensionless points, or other such constructs.

    “You insist that your morality is not founded on feelings such as revulsion, and yet you later concede that it is (ultimately at least) founded on evolved instincts.”

    There is a large pertinent literature here (e.g., Primates and Philosophers, by F. de Waal and a number of philosophers who added commentaries). Feelings are the foundations of a sense of morality, as Hume recognized and as we know from contemporary primatology. But moral reasoning requires filtering these feelings, elaborating on the consequences of certain choices, and so forth. And don’t forget that I count cultural, not only biological, evolution as shaping human morality.

    “Your naturalism is in tension with your view that human decision-making is fundamentally unlike the kind of decision-making that might be achieved by a computer simulating the natural processes occurring in a brain.”

    That’s like saying that since the property of being wet emerges only when there is a sufficiently large aggregate of water molecules therefore there is a tension between solid state physics and quantum mechanics. It doesn’t follow. What I reject is what I see as your ontological confusion between objects and simulations of objects.

    “you say you agree with Dennett’s account of compatibilist free will, but this account of free will is compatible with decision-making by computers.”

    I honestly don’t see the problem. Sure, computers make decisions; so do ants. But if you think they are the same sort of decisions you made in answering this post I think you are missing a big chunk of the human experience.

    “You compare the products of consciousness to sugar, but you don’t acknowledge that sugar is a substance while consciousness (or the control and coordination it enables) is not.”

    And you willfully (at this point) insist on ignoring that consciousness is observed only in wet systems. The burden of proof is squarely on you to show that it is possible in virtual ones.

    “You don’t give much of a positive argument for why a computer could not be conscious”

    One more time: no wetware. See my analogy with other characteristics of biological systems. Do you actually disagree that life itself is limited to certain kinds of chemistry? Do you really think it possible to make it entirely independent of substrate? And if so, you don’t think the burden is on you to show how it can be done?

    “Naturalism implies brain function could be simulated. Such a brain would effectively believe itself to be conscious”

    Non sequitur. You can simulate a waterfall, but the waterfall wouldn’t be wet. Does that contradict naturalism, in your mind?

    “naturalism but anti-computationalism means it is possible to have all the outward behaviour of a conscious entity without being conscious.”

    I don’t see how that follows at all. Of course, one can write a clever program that behaviorally mimics a human being, like ELIZA. So what?
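
    For illustration, here is a toy, hypothetical ELIZA-style responder in Python: a handful of regex reflection rules that mimic conversational behavior with no understanding behind them (the rules and phrasings are my own invention, not Weizenbaum’s originals):

```python
import re

# A few toy reflection rules: match a fragment of the user's input
# and echo part of it back inside a canned template.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return a reflected response, or a generic prompt if nothing matches."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Strip trailing punctuation before reflecting the fragment back.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(respond("I feel confused about consciousness"))
# prints: What makes you feel confused about consciousness?
```

    Anything the rules fail to match falls through to a canned prompt, which is essentially all that this sort of behavioral mimicry amounts to.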

    “What was the point of evolution producing consciousness if it was not intimately bound with the intelligence of human brains?”

    What does that have to do with anything? First off, it isn’t at all clear why either consciousness or intelligence evolved in the first place, though one can speculate. Second, there is a difference between the two, depending on what one means by those words. I have no problem visualizing an intelligent computer that is nevertheless not conscious. And I think it is plausible that there are degrees of consciousness in animals that are not particularly intelligent.

    Robin,

    “Independent moral truth would not be like Platonism because the Platonic realm is not defined as having empathetic consciousness”

    But if moral truths were “out there” they would constitute a type of Platonic realm, albeit not a mathematical one.

    jamessseattle,

    “What would you need to see to decide that consciousness existed in a man-made machine?”

    Damn good question. I don’t know, and I’m open to suggestions. I know one thing I wouldn’t accept: the Turing test. It’s essentially behaviorism, and behaviorism justly went the way of the Dodo in psychological research.

    “When it seems, apparently to many people, that the computational route is sufficient for explanation, and there is no other explanation that is sufficient, how could it not be the only game in town?”

    First, there are dissenters. Second, there are other cases where the apparent only game in town turned out to be wrong (blended theory of inheritance before Mendel) or is currently sterile (string theory), and we still wait for someone to come along with a better idea.

    Steven,

    “Then consider persistence of vision as an alternative hint of what illusion means. Or confabulation of memory.”

    I didn’t say there are no illusions, I said people are too quick to dismiss too much as “just illusions.”

    “Epiphenomenalism doesn’t come into it”

    I believe I was reacting to the analogy with a rainbow, which really is an epiphenomenal illusion.

    “It is “just” a point of view, not Dennett’s Cartesian theater.”

    I’m not advocating for Cartesianism.

    “I can only attribute this to the astonishing power of religion and philosophy to sow confusion.”

    I’m not even sure at this point whether this comment was directed at me. Surely you know that I make a distinction between religion and philosophy, and that I definitely don’t endorse any soul-like stuff.

    “I believe compatibilism is wrong for three big reasons. One, it’s not true. There is little control over our desires, and what we have is costly, not free.”

    This seems confused. First, nobody who talks of free will claims that it comes without costs. Second, little control is not the same as no control. Third, the actual degree of control we have over our decisions is still very much open to debate.

    “efforts by individuals to live up to the moral responsibility enjoined by compatibilism sometimes constitute a kind of psychological violence, self alienation. I think there’s something ugly and nasty at the root.”

    I don’t know; I try to live up to my moral responsibilities, and I don’t feel any psychological violence or alienation for doing so.


  35. DM,
    I seek to promote whatever I personally interpret the term to mean, which might vary from time to time.

    Hah! That is such a revealing remark. You have just exposed the fatal weakness of consequentialism. I couldn’t have done it better myself.


  36. Many thanks for your answers Massimo.

    Some answers I feel satisfied with. Particularly on morality, I feel I agree with you; I just disagree with some of the way you describe your views (emphasis especially) in the original post. So scratch that one off the list.

    I also think the OSR/MUH problem is less significant than the others and can be dropped.

    Platonism is in my view a semantic discussion, and we probably just have different ideas of what existence entails, so though I think there’s still a bunch to be said on this, it can also be dropped.

    However I feel it is unlikely that we can resolve the issues pertaining to consciousness without a conversation or a more detailed correspondence you don’t have time for. I feel like you’re not getting my point and no doubt you feel like I’m not getting yours. There is clearly some amount of talking past each other going on. I’ll drop it for now in the hope that you revisit the subject in a more dedicated article at some point.


  37. Labnut,

    Virtue ethics has the same problem. There is no canonical, objectively correct set of virtues. Furthermore, you will struggle to find an objectively correct definition of flourishing to justify your virtues just as much as I do, although you may try valiantly. I might make similar attempts, but I have no prior commitment to the idea that this ought to be possible, so I don’t bother trying.

    Even if you do find an objectively satisfactory definition of flourishing, I will happily adopt it for my consequentialism and you will have solved the problem with consequentialism for me.


  38. The point is that vagueness and ambiguity in language (which are normal) are then falsely claimed to be fixed or reduced by magical claims or pretending. All animal brains are highly uncertainty avoidant (uncertainty averse, not risk averse, BTW). Magical claims and ideas reduce uncertainty by, magically, pretending that things exist which don’t: mainly, that the mind can control matter, or wishful thinking.


  39. Oh my, if there is anything that is not solipsistic that’s ethics!
    Hee hee, scientismists quickly lose their footing in philosophical waters.

