My philosophy, so far — part II

by Massimo Pigliucci

In the first part [19] of this ambitious (and, inevitably, insufficient) essay I sought to write down and briefly defend a number of fundamental positions that characterize my “philosophy,” i.e., my take on important questions concerning philosophy, science and the nature of reality. I have covered the nature of philosophy itself (as distinct, to a point, from science), metaphysics, epistemology, logic, math and the very nature of the universe. Time now to come a bit closer to home and talk about ethics, free will, the nature of the self, and consciousness. That ought to provide readers with a few more tidbits to chew on, and myself with a record of what I’m thinking at this moment in my life, for future reference, you know.

Ethics, meta- and standard

I have written extensively on ethics, perhaps the most comprehensive example being a seven-part series that can be found at the Rationally Speaking blog [20]. Although in the past I have considered myself a realist, my position is probably better described as quasi-realism, or perhaps as a kind of bounded instrumentalism. Indeed, it is not very different — in spirit, if not in the details — from the way I think of math or logic (see part I).

So, first off, I distinguish three types of questions one can meaningfully ask about ethics or morality (I am using the two terms interchangeably here, even though some authors make a distinction between the two): where does it come from, how does it work, and how should it work?

The first question is the province of evolutionary biology and anthropology: those are the disciplines that can tell us how a sense of right and wrong has evolved in our particular species of social, large-brained primates, and how it further diversified via cultural evolution. The second question is a matter of social and cognitive science: we want to know what sort of brain circuitry allows us to think about morality and make moral decisions, and we want to know how that circuitry is shaped not just by our biology, but also by our cultural milieu.

It is the third question, of course, that is more crucially philosophical in nature. Still, one can distinguish at least two levels of philosophical discourse on ethics: how we should think of morality in general (the so-called “meta-ethical” question), and which system(s) of moral reasoning are best suited for our purposes as social beings.

It is in terms of meta-ethics [21] that I am a quasi-realist (or a bounded instrumentalist). I don’t think that moral truths exist “out there,” independently of the human mind, which would be yet another example of Platonism (akin to the mathematical / ontic ones we encountered last time). But I also don’t accept the moral relativist position that there is no principled way in which I can say, for instance, that imposing genital mutilation on young girls is wrong — in a sense of wrong that is stronger than simply “I happen not to like it,” or “I have a strong emotional revulsion to it.”

Rather, I think of moral philosophy as a method of reasoning about human ethical dilemmas, beginning with certain assumptions (more or less analogous to axioms in mathematics, or postulates in logic), plus empirical input (from commonsense and/or science) about pertinent facts (e.g., what causes pain and how much, what policies seem to produce the highest amount of certain desiderata, like the ability to flourish, individual freedom, just distribution of resources, etc.), plus of course the basic moral instincts we have inherited from our primate ancestors (on this I’m with Hume: if we don’t care about X there is no reasoning that, by itself, could make us care about X).

This sounds a bit complicated and perhaps esoteric, but it’s really simple: if you want to see what I mean, just read one of Michael Sandel’s books on moral reasoning [22]. They are aimed at the general public, they deal with very practical questions, and yet they show exactly how the moral philosopher thinks (and, incidentally, why science informs, but simply cannot determine, our ethical priorities).

I haven’t forgotten about the second level of philosophical discourse concerning ethics: which ethical framework can best serve our aims as individuals within a broader society? Here the classical choices include deontology (Kant-style, not the Ten Commandments stuff) [23], utilitarianism-consequentialism [24], and virtue ethics [25], though there are others (ethics of care, communitarianism, and egalitarianism, for instance).

Although I have strong sympathies for much of what John Rawls [26] has written (from an egalitarian perspective) on justice, I decidedly embrace a neo-Aristotelian conception of virtue ethics. Actually, I maintain that the two can be brought together in “reflective equilibrium” (as Rawls would say) once we realize that virtue ethics addresses a different moral question from all the other approaches: for Aristotle and his contemporaries ethics was concerned not simply with what is the right thing to do, but with what is the right life to live, i.e., with the pursuit of eudaimonia (literally, having a good demon; more broadly, flourishing). So I think I can say with little fear of contradiction that when I ask myself what sort of life I want to live, my response is along virtue ethical lines; but when I ask the very different question of what sort of society I want to live in, a Rawls-type quasi-egalitarianism comes to mind as the strongest candidate (in practical terms, it is the sort of society you find in a number of northern European countries).

Free will

Free will is one of the oldest chestnuts in philosophy, and it has lately come back into fashion with a vengeance [27], especially because of a new dialogue between philosophers of mind and cognitive scientists — a dialogue that at times is very enlightening, at others just as frustrating.

If you consult the Merriam-Webster, its two definitions of the concept are illustrative of why the debate is so acrimonious, and often goes nowhere:

1. voluntary choice or decision

2. freedom of humans to make choices that are not determined by prior causes or by divine intervention

Before we go any further, I should say where I stand: I believe in (1), and I think that (2) is incoherent.

Now, very briefly, there are basically four positions on free will: hard determinism, metaphysical libertarianism, hard incompatibilism, and compatibilism.

Hard determinism is the idea that physical determinism (the notion that the laws of physics and the universe’s initial conditions have fixed every event in the cosmos since the Big Bang) is true and therefore free will is impossible; metaphysical libertarianism (not to be confused with the political position!) says physical determinism is false and free will is possible; hard incompatibilism says that free will is impossible regardless of whether physical determinism is true or false; and compatibilism accepts the idea of physical determinism but claims that free will (of a kind) is nonetheless possible.

First off, notice that the four positions actually imply different conceptions of free will. For a compatibilist, for instance, free will of type (2) above is nonsense, while that is precisely what the metaphysical libertarian accepts.

Second, given the choices above, I count myself as a compatibilist, more or less along the lines explained at length by Daniel Dennett (see [27] and references therein), but with a fairly large caveat.

I am a compatibilist (as opposed to both a hard determinist and a hard incompatibilist) because it seems to me self-evident that we make choices or take decisions, and that we do that in a different way from that of a (currently existing) computer, or a plant (with animals things become increasingly fuzzy the more complicated their nervous system). I have definitely chosen to write this essay, in a much richer sense of “chosen” than my computer is “choosing” to produce certain patterns of pixels on my screen as a result of other patterns of keyboard hits that I created with my fingers. You may deny that, but that would leave you with a large number of interesting biological and psychological phenomena that go pretty much unexplained, unaccounted for, or otherwise swept under the (epistemic) carpet.

I am also a compatibilist (as opposed to a metaphysical libertarian) because I think that causality plays a crucial and unavoidable role in our scientific explanations of pretty much everything that happens in the universe above the level of sub-atomic physics (more on this in a second). You simply can’t do any “special” science (i.e., any science other than fundamental physics) without invoking the concept of causation. Since the scientific study of free will (I prefer the more neutral, and far less theologically loaded, “volition”) is the province of neuroscience, psychology and sociology — all of which certainly depend on deploying the idea of causality in their explanations — to talk of a-causal or contra-causal free will is nonsense (on stilts).

So, my (and Dennett’s) compatibilism simply means that human beings are sophisticated biological organisms (I reserve the word “machine” for human-made artifacts, as I have a problem with the deployment of machine-like metaphors in biology [28]) capable of processing environmental stimuli (including language) in highly complex, non-linear fashion, and of arriving at decisions on actions to take. The fact that, if exposed to the same exact stimuli, we would unfailingly arrive at the same exact decisions does not make us puppets or marionettes; those decisions are still “ours” in an important sense — which of course implies that we do deserve (moral) blame or praise for them [29].

What about the caveat at which I hinted above? Well, it’s actually three caveats: i) We still lack a good philosophical account (let alone a scientific theory, whatever that would look like) of causality itself [30]. That ought to make everyone in the free will debate at least a bit queasy. ii) Causality plays little or no explanatory role precisely where the determinist should expect it to play a major one: in fundamental physics. Again, someone should think carefully about this one. iii) Hard determinism is, let us not forget, a philosophical (indeed, metaphysical!) position, not a scientific theory. It is often invoked as a corollary of the so-called principle of the causal completeness of physics [31]. But “causal completeness” simply means that the laws of physics (in general, not just the currently accepted set) exhaust our description of the universe. The notion is definitely not logically incompatible with different, not necessarily reductionist, ways of understanding said laws; nor does it rule out even instances of strong emergence [32] (i.e., the possibility that new laws come into being when certain conditions are attained, usually in terms of system complexity). I am not saying that determinism is false, or that strong emergence occurs. I am saying that the data from the sciences — at the moment, at least — strongly underdetermine these metaphysical possibilities, so that hard determinists should tread a little more lightly than they typically do.

Self and consciousness

And we finally come to perhaps the most distinguishing characteristic of humans (although likely present to a degree in a number of other sentient species): (self)-consciousness.

Again, let’s start simple, with the Merriam-Webster: their first definition of consciousness is “the quality or state of being aware especially of something within oneself”; they also go for “the state of being characterized by sensation, emotion, volition, and thought.”

When I talk about (self)-consciousness I mean something very close to the first definition. It is a qualitative state of experience, and it refers not just to one’s awareness of one’s surroundings and simple emotions (like being in pain) — which presumably we share with a lot of other animal species — but more specifically the awareness of one’s thoughts (which may be, but likely is not, unique to human beings).

The first thing I’m going to say about my philosophy of self & consciousness is that I don’t go for the currently popular idea that they are an illusion, an epiphenomenon, or simply the originators of confabulations about decisions already made at the subconscious level. That sort of approach finds a home in, for instance, Buddhist philosophy, and in the West goes back at least to David Hume.

The most trivial observation to make about eliminativism concerning the self & consciousness is that if they are an illusion, then who, exactly, is experiencing the illusion? This is a more incisive point, I think, than it is usually given credit for.

But, mostly, I simply think that denying — as opposed to explaining — self & consciousness is a bad and ultimately unsatisfactory move. And based on what, precisely? Hume famously said that whenever he “looked” into his own mind he found nothing but individual sensations, so he concluded that the mind itself is a loosely connected bundle of them. Ironically, current research in cognitive science clearly shows that we are often mistaken about our introspection, which ought to go a long way toward undermining Hume’s argument. Besides, I never understood what, exactly, he was expecting to find. And, again, who was doing the probing and finding said bundles, anyway?

Some eliminativists point to deep meditation (or prayer, or what happens in sensory deprivation tanks), which results in the sensation that the boundary between the self and the rest of the world becomes fluid and less precise. Yes, but neurobiology tells us exactly what’s going on there: the areas of the brain in charge of proprioception [33] become much less active, because of the relative sensory deprivation the subject experiences. As a result, we have the illusion (that one really is an illusion!) that our body is expanding and that its boundaries with the rest of the world are no longer sharp.

Other self & consciousness deniers refer to classic experiments with split-brain patients [34], where individuals with a severed corpus callosum behave as if they housed two distinct centers of consciousness, sometimes dramatically at odds with each other. Well, yes, but notice that we are now looking at a severely malfunctioning brain, and that moreover this sort of split personality arises only under very specific circumstances: cut the brain in any other way and you get one dead guy (or gal), not multiple personalities.

All of the above, plus whatever else we know about neurobiology, plus the too often discounted commonsense experience of ourselves, tells me that there is a conscious self, and that it is an important component of what it means to be human. I think of consciousness and the self as emergent properties (in the weak sense; I’m not making any strong metaphysical statements here) of mind-numbingly complex neuronal systems, in a fashion similar to the way in which, say, “wetness” is an emergent property of large numbers of molecules of water, and is nowhere to be found in any single molecule taken in isolation [35].

Now, that conclusion most certainly does not imply the rejection of empirical findings showing that much of our thinking happens below the surface, so to speak, i.e., outside of the direct control of consciousness. Here Kahneman’s now famous “two-speed” model of thinking [36] comes in handy, and has the advantage of being backed by plenty of evidence. Nor am I suggesting that we don’t confabulate, engage in all sorts of cognitively biased reasoning, and so forth. But I am getting increasingly annoyed at what I perceive as the latest fashion of denying or grossly discounting that we truly are, at best, the rational animal, as Aristotle said. Indeed, just to amuse myself I picture all these people who deny rationality and consciousness as irrational zombies whose arguments obviously cannot be taken seriously — because they haven’t really thought about it, and at any rate they are just rationalizing…

The second thing I’m going to reiterate (since I’ve said it plenty of times before) concerns consciousness in particular. As many of my readers likely know, the currently popular account of the phenomenon is the so-called computational one, which draws a direct (although increasingly qualified as time goes by) analogy between minds and (digital, originally) computers [37]. For a variety of reasons that I’ve explained elsewhere [38], I do think there are some computational aspects to minding (I prefer to refer to it as an activity, rather than a thing), but I also think that computationalists just don’t take biology seriously enough. On this, therefore, I’m with John Searle (and that’s quite irrespective of his famous Chinese room thought experiment [39]) when he labels himself a biological naturalist about consciousness. The basic idea is that — as far as we know — consciousness is a biological process, not unlike, say, photosynthesis. Which means that it may be bound not only to certain functional arrangements, but also to specific physicochemical materials. These materials don’t have to be the ones that happen to characterize earth-bound life, but it is plausible that they can’t be just anything at all.

I think the most convincing analogy here is with life itself. We can’t possibly know what radically different forms of life may be out there, but if life is characterized by complex metabolism, information carrying, reproduction, homeostasis, the ability to evolve, etc., then it seems like it had better be based on carbon or something with similar chemical flexibility. It is hard to imagine, for instance, helium-based life forms, given that helium is a “noble” gas with very limited chemical potentialities.

Similarly, I think, with consciousness: the qualitative ability to feel what it is like to experience something may require complex chemistry, not just complex functional arrangements of arbitrary materials. This is why I doubt we will be able to create conscious computers (which, incidentally, is very different from creating intelligent computers), and why I think any talk of “uploading” one’s consciousness is sheer nonsense [40]. Of course, this is ultimately an empirical matter, and we shall see. I am simply a bit dismayed (particularly as a biologist) at how the computational route — despite having actually yielded comparatively little (see the abysmal failure of the once much trumpeted strong AI program) — keeps dominating the discourse by presenting itself as the only game in town (it reminds me of string theory in physics, but that’s a whole other story for another time…).

It should go without saying, but I’m going to spell it out anyway, just in case: none of the above should give any comfort to dualists, supernaturalists and assorted mystics. I do think consciousness is a biophysical phenomenon, which we have at least the potential ability to explain, and perhaps even to duplicate artificially — just not, I am betting, in the way that so many seem to think is about to yield success any minute now.

The whole shebang

Do the positions summarized above and in part I of this essay form a coherent philosophical view of things? I think so, even though they are certainly not airtight, and they may be open to revision or even wholesale rejection, in some cases.

The whole jigsaw puzzle can be thought of as one particular type of naturalist take, of course, and I’m sure that comes as no surprise given my long-standing rejection of supernaturalism. More specifically, my ontology is relatively sparse, though perhaps not quite as “desert-like” as W.V.O. Quine’s. I recognize pretty much only physical entities as ontologically “thick,” so to speak, though I am willing to say that concepts, mathematical objects being a subset of them, also “exist” in a weaker sense of the term existence (but definitely not a mind-independent one).

My take could also be characterized as Humean in spirit, despite my rejection of specific Humean notions, such as the illusory status of the self. Hume thought that philosophy had better take on board the natural sciences and get as far away as possible from Scholastic-type disputes. He also thought that whatever philosophical views we arrive at have to square with commonsense: not in the strict sense of confirming it, but at the very least always keeping in mind that one pays a high price every time one arrives at notions that are completely at odds with it. In some instances this is unavoidable (e.g., the strange world of quantum mechanics), but in others it can, and therefore should, be avoided (e.g., the idea that the fundamental ontological nature of the universe is math).

Outside of Hume, some of my other philosophical inspirations should be clear. Aristotle, for one, at least when it comes to ethics and the general question of what kind of life one ought to live. Bertrand Russell is another, though it may have been less clear from what I’ve written here. Russell, like Hume, was very sympathetic to the idea of “scientific” philosophy, although his work in mathematics and logic clearly shows that he never seriously thought of reducing — Quine-style — philosophy to science. But Russell has been influential on me for two other reasons, which he shares with Aristotle and Hume: he is eminently quotable (and who doesn’t love a well placed quote!), and he embodied the spirit of open inquiry and reasonable skepticism to which I still aspire every day, regardless of my obvious recurring failures.

Let me therefore leave you with three of my favorite quotes from these greats of philosophy:

Aristotle: Any one can get angry — that is easy … but to do this to the right person, to the right extent, at the right time, with the right motive, and in the right way, that is not for every one, nor is it easy. (Nicomachean Ethics, Book II, 1109a27)

Hume: In our reasonings concerning matter of fact, there are all imaginable degrees of assurance, from the highest certainty to the lowest species of moral evidence. A wise man, therefore, proportions his belief to the evidence. (An Enquiry Concerning Human Understanding, Section 10: Of Miracles, Pt. 1)

Russell: Men fear thought as they fear nothing else on earth – more than ruin, more even than death. Thought is subversive and revolutionary, destructive and terrible; thought is merciless to privilege, established institutions, and comfortable habits; thought is anarchic and lawless, indifferent to authority, careless of the well-tried wisdom of the ages. Thought looks into the pit of hell and is not afraid. (Why Men Fight: A Method of Abolishing the International Duel, pp. 178-179)



Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[19] My philosophy, so far — Part I, by M. Pigliucci, Scientia Salon, 19 May 2014.

[20] Here is the last entry, you can work your way back from there.

[21] Metaethics entry in the Stanford Encyclopedia of Philosophy.

[22] By Sandel, see both: Justice: What’s the Right Thing to Do?, Farrar, Straus and Giroux, 2009; and What Money Can’t Buy: The Moral Limits of Markets, Farrar, Straus and Giroux, 2012.

[23] Deontological ethics, SEP.

[24] Consequentialism, SEP.

[25] Virtue ethics, SEP.

[26] John Rawls, SEP.

[27] Here is one of my favorite examples. And here is the obligatory SEP entry.

[28] See my paper with Maarten Boudry, Why Machine-Information Metaphors are Bad for Science and Science Education, Science & Education 20:453-471, 2011.

[29] I was recently having an enlightening discussion about this with my friend Maarten Boudry, and we came up with another way to conceptualize in what sense, say, I could have hit a penalty kick that I actually missed (soccer, you know), whereas I couldn’t have written Hamlet. The idea is to deploy the logical concept of possible worlds (see the pertinent SEP entry). It should be obvious that — given exactly identical circumstances — I would have kicked the penalty in exactly the same way. But there is (in the logical sense of “is”) a nearby possible world in which the circumstances are different, say because I focused more on the task at hand, and I do hit the ball correctly, thereby scoring a goal. However, the possible world in which I write Hamlet is so distant from the actual world that it makes no sense for me to say that I could have written Hamlet. If you find value in logical counterfactuals, this way of thinking about free will is very helpful. If not, I’ll try something else some other time.

[30] See Causal determinism, SEP; as well as the following SEP entries: Causal processes, The metaphysics of causation, Causation and manipulability, and Counterfactual theories of causation.

[31] On the causal completeness of physics, by M. Pigliucci, Rationally Speaking, 27 February 2013.

[32] On emergence, see a series of four essays I wrote for the Rationally Speaking blog.

[33] For the basics on proprioception, see the Wiki entry.

[34] See The split brain: A tale of two halves, by David Wolman, Nature 14 March 2012.

[35] Which is why, incidentally, I think Dennett’s famous model of consciousness as made possible by stupider and stupider robots all the way down to individual neurons is too simplistic. In the case of wetness, there is a level of complexity below which the property simply does not apply, and I think the same can be said for consciousness.

[36] Thinking, Fast and Slow, by D. Kahneman, Turtleback, 2013.

[37] The computational theory of mind, SEP.

[38] See the following essays from the Rationally Speaking blog: Philosophy not in the business of producing theories: the case of the computational “theory” of mind (29 July 2013); Computation, Church-Turing, and all that jazz (5 August 2013); Three and a half thought experiments in philosophy of mind (6 September 2013).

[39] The Chinese room argument, SEP.

[40] See David Chalmers and the Singularity that will probably not come, Rationally Speaking, 5 October 2009; and Ray Kurzweil and the Singularity: visionary genius or pseudoscientific crank?, Rationally Speaking, 11 April 2011.

286 thoughts on “My philosophy, so far — part II”

  1. the poorly conceived Turing test is the consequence of our failure to understand consciousness. Once we understand consciousness we will be able to devise a test for it.

    I disagree. I think we already understand it in broad strokes, or at least some of us do. If I’m right, there will never be such a test, because consciousness is not a physical phenomenon. The Turing Test is not ideal, but I believe it’s the best test we will ever have.


  2. Of course I’m guessing, and of course our guesses are different. But mine are founded on whatever little understanding of the biology of brain and consciousness we have. Yours seem to me just speculation.


  3. DM,
    I think we already understand it in broad strokes, or at least some of us do.
    I look forward to the imminent and momentous announcement of the Nobel prize. It may be the most important prize ever awarded.

    My dogs are for sure conscious. There cannot be the slightest doubt about that.


  4. DM,
    …or at least some of us do.
    I doubt that when they can claim ‘semantics = syntax’ or that today’s computers have beliefs.


  5. Hi Michael,

    I am not arguing that consciousness is complexity, I am arguing that complexity is a better analogy for consciousness than glucose is, so your criticism that complexity can mean anything doesn’t really stack up.

    Also, though glucose is complex on some level, it is not very complex at all. In fact it could be considered positively simple compared to other compounds. But yes, it is more complex than other things, so it has some amount of complexity, the way an atom has some amount of largeness.

    Similarly, I think that simple information processing systems may be on the consciousness spectrum even though I would not regard them as sophisticated enough to warrant calling them conscious. As with simplicity/complexity, I see no reason to doubt that there is a continuum from unconscious to conscious.

    Can you pick up sweetness and put it in a jar?

    No, because sweetness is not a physical substance.

    OK, so instead of saying virtual photosynthesis doesn’t produce real sugar, you’re changing it to say that virtual photosynthesis doesn’t produce real sweetness. This is a much better analogy, and now, instead of being embarrassingly, ridiculously bad, it is merely unpersuasive for reasons I will now explain.

    Sweetness is a physical property, like mass, temperature and charge, although like colour it is more a function of how something is interpreted by human senses than something inherent to it. But for our purposes, it is physical and not abstract, because we can build detectors to test for it and different observers will typically agree that it has this property.

    It is not an abstract property. Abstract properties include complexity, beauty, intelligence, organization, purposefulness, usefulness etc. There are no detectors for such properties and they are usually realizable by physical objects which bear no straightforward resemblances to each other at a physical level. Simulations of physical things which have abstract properties will also typically have those abstract properties, or at the very least we can say that they sometimes do.

    With regard to the photosynthesis analogy, it is only my contention that the analogy is unpersuasive because it only works if consciousness is a physical property like temperature and not an abstract property like intelligence. My arguments for why consciousness should be considered an abstract property are elsewhere, but since this possibility remains open, I grow tired of assertions that “a simulation of X is not X”.


  6. Of course I’m guessing, and of course our guesses are different. But mine are founded on whatever little understanding of the biology of brain and consciousness we have. Yours seem to me just speculation.

    Yes, that is how it seems to you, however to me it seems like mine is the only view that is coherent and that is why I believe what I do. I would be very happy to explain why but I don’t think you have the time to commit to such a discussion.


  7. DM,
    The Turing Test is not ideal, but I believe it’s the best test we will ever have.
    Why do you believe that? How is it possible to believe that when we don’t understand consciousness?


  8. I’d be happy to read yours or anyone’s impressions regarding the article at that I cited. This seems a fruitful approach regarding some of these questions. These were actual conscious beings who were reduced to so-called “minimally conscious” states. It seems to me these situations raise all sorts of compelling conceptual questions regarding issues such as what we mean by intention, awareness, and wakefulness along with the issues of self-identity, free will, and ethical action–all of which seem to me more pertinent to this post than whether computers can simulate what we don’t fully understand while I’ll concede that our understanding of such matters may in fact aid us in designing AI that might help to support life while biological life is prolonged to effect its own repairs.


  9. Wow, this is a very expansive set of claims with not a single bio/medical citation or even biological principle! Well done, that. But how is this set of claims different from claims for black magic, etc.?


  10. DM wrote:

    “So there is a real world out there, and interpretations of sentences really are true or false, but when we agree that a sentence is true it is only because of its mapping to our mental model and how that is analogous to the real world.”
    This is just such an eccentric view of the truth-conditions of very straightforward sentences that I really have nothing to say about it. It’s just not a view that’s on any radar I care to follow. The idea that the name “Fido” doesn’t refer to my dog, such that the statement “Fido is five years old” is true, by virtue of my dog being a specific age, but rather falls under some complex account of concepts and models is just not the sort of theorizing that I find either interesting or explanatorily useful.

    I’ll stick with the very common view in linguistics and the philosophy of language that singular terms refer to individuals and predicates to properties, and the equally common Tarski T-sentences.


  11. DM,
    Yes, that is how it seems to you, however to me it seems like mine is the only view that is coherent and that is why I believe what I do.

    You said that in reply to Massimo. This raises some important procedural points.

    One does not lightly dismiss the considered opinions of experts in the field under discussion. I am not saying they are infallible (I also disagree with Massimo from time to time, but very carefully and after much research). In view of their training and expertise, their considered opinions should be accorded respect. What this means in practice is that it is not enough to cavalierly dismiss their opinions and assert that your own are superior. There is an onus on you to carefully consider the reasons for their opinions, understand them, evaluate them, and then construct a reasoned response to them.

    I have not seen you do this. It is not enough for you to contradict them and argue that your own opinions are better.

    Turning to the way in which you interact with me: you should be careful not to misrepresent people’s opinions, and you should consider their entire argument, not ignore some parts of it.

    What I am advising is a great deal more care in your replies and the way you construct your arguments. Slow down and consider things more carefully.

    You are a natural contrarian and that makes for a stimulating conversation. I am not asking you to change that. I enjoy your lively interchanges but wish you would slow down and take the time to construct more considered arguments.

    I sympathise with your contrarian attitude because that characterised my corporate career. Mine was successful because I was always careful to pick my fights and fought them on favourable terrain where I would win. A contrarian attitude is valuable because it exposes all that is slipshod, false and fraudulent. It also exposes opportunities others thought were not possible. It stimulates new thinking and that is wonderful in itself.

    Remember that one of the meanings of the word ‘philosophical’ is an even handed, considered attitude that can be summed up as ‘on the one hand this, on the other hand that’.

    I wish you luck and success and look forward to more stimulating interchanges.


  12. Hi Aravis,

    There’s nothing wrong with the common view as an approximation, but we need more precision when we try to answer the question of how semantics might arise in an information processing system. I’m not interested in redefining semantics or in overturning the philosophy of language, but I am interested in explaining consciousness, and I think the account I present is robust.

    I’ll come back to you after I have read those papers, though. Perhaps they will illuminate some inconsistency in my view that I have not previously been made aware of.


  13. Hi labnut,

    One does not lightly dismiss the considered opinions of experts in the field under discussion.

    I do not recognise Massimo as an expert on consciousness. I recognise him as an expert in biology (not neuroscience) and in philosophy generally, as well as philosophy of science in particular. I honestly think I understand the issues regarding consciousness better than Massimo does, because his view comes across to me as incoherent. I could be wrong in this respect, but because I can’t actually discuss Massimo’s ideas with him in real time, I am unable to communicate with him properly. We’re talking past each other, so my impression is all I have to go on.

    I also disagree with Massimo from time to time, but very carefully and after much research

    This is the situation I am in. I have been thinking about and occasionally researching consciousness for about 12 years now, in an informal hobbyist capacity. It is one of my chief interests and my computationalism is certainly one of my most considered views.

    What this means in practice is that it is not enough to cavalierly dismiss their opinions and assert your own are superior.

    Like Massimo did when he called my views wild speculation? I am not offended by this, in particular because he was careful to say they “seem” like wild speculation. I understand that is how it looks from his perspective. I am just explaining how it looks from mine, and this is also why I have been careful to say that I see Massimo’s views as incoherent without asserting that they actually are.

    There is an onus on you to carefully consider the reasons for their opinions, understand them, evaluate them and then construct a reasoned response to them.

    I have done. Many times. The problem is that this is an asymmetrical communication. Massimo cannot dedicate the same amount of time to understanding me as I do to understanding him, so he doesn’t understand why I think his views are incoherent. There’s no point explaining it any more, so I am only making the point that I do not hold my views for arbitrary reasons but because all other views seem incoherent to me.

    Turning to the way in which you interact with me. You should be careful not to misrepresent people’s opinions and you should consider their entire argument, not ignore some parts of it.

    I understand this is your impression of the discussion, but it is quite unlike mine. I misunderstand you and you misunderstand me from time to time. This is the beginning and the end of the misrepresentation you are talking about. You do it to me as much as I do it to you, but I don’t misconstrue it. I do not deliberately ignore parts of your argument, but it is possible that I fail to see the significance of some parts of what you say and so do not respond. There is nothing I can do to keep from misrepresenting you or ignoring parts of your argument because both are unintentional and inevitable.

    I wish you luck and success and look forward to more stimulating interchanges.



  14. Since I believe that functionalism–and computationalism–have been roundly refuted, I have no interest in trying to figure out “how semantics might arise in an information processing system” (since people are not information processing systems).

    And I don’t think that the understanding that would result would be “more robust.” This was my point about horizontal rather than vertical accounts.


  15. It would be more convincing if you could cite some of your own peer-reviewed work on these subjects. Simply linking to pieces published on your own blog is not very persuasive.


  16. Thomas, it was a most disturbing read. Disturbing because so many are reduced to silent suffering. But it was also hopeful in the sense that science is turning its attention to these problems. Crucially, it shows the need to develop better tools for detecting consciousness in the brain. With better detection we can start developing and testing treatment methods that might restore some patients.

    Even so, it does not tell us anything more about the nature of consciousness. We know consciousness is located in the brain, and now we know which circuits in the brain are crucial for it. But that still says nothing about its nature.

    I return to my use of a CPU as an analogy for the brain. There is level 1: I can probe the CPU and see which areas are active when the computer performs different functions. The GPU comes into play at certain times and the APU at other times, etc. But I, the programmer, know that this says nothing at all about the purpose and deep structure of the program I wrote. To understand that you must examine my program itself. This is level 2, the functional level. You can examine the screen (the visible output) and see what the program does, but that still does not reveal how it works. This is level 3, the output level.

    Today we are in the position where we can examine blood flows in the brain with tools like fMRI. This is equivalent to level 1 in my computer analogy. We can examine the output of the mind, as the psychologist does, or as we do by introspection. This is equivalent to level 3, the output level. But what is completely missing is our ability to examine the mind at the programming level, that is, level 2. And yet this is the most important level, where we process thoughts, experiences and intents, and where we experience consciousness. Yet some people think that despite our complete lack of understanding of the programming level 2, we can replicate it in silicon. The mind boggles. Or that silicon can already do this (experience beliefs). The mind boggles in overdrive.

    We don’t know how to investigate the brain on level 2. Until we do, all questions are open.
    We may never know. Without the source code to my programs you cannot understand how they do their job. But the source code to my programs (level 2.2) is not contained in the binary, compiled and optimized version (level 2.1) running in the hardware (or wetware). Someone will object that the compiled and optimized version can be decompiled. That is true, and one of my teams in Shanghai did just this with a large program (to the dismay of the copyright holders). But when doing the decompiling you already have knowledge of what the decompiled program should look like. For this to work you need access to level 2.1 and to know what 2.2 should look like.

    In the case of the mind we do not have access to level 2.1 and don’t have even the beginning of the foggiest idea what 2.2 should look like. Can this problem be solved? It looks like a very, very tough problem. This is why I have no patience with proclamations that we already partially understand consciousness (we have no understanding at all) or that today’s computers have beliefs (total hogwash is the politest opinion I can give).

    Another way to express the problem is in terms of emergence. The solution to the problem may be impossible because we may be dealing with strong emergence. We cannot discern the properties of the upper, emergent layer by looking solely at the lower layer, which is what we are doing today.
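    To make the level distinction concrete, here is a minimal Python sketch (the example function is invented purely for illustration): even with full access to the compiled form that actually runs (level 2.1), the instructions it lists say nothing about the purpose that is explicit only in the source (level 2.2).

```python
import dis

# Level 2.2: the source, where the programmer's intent is explicit.
def monthly_interest(balance):
    """Apply one month of interest at 1.2% to an account balance."""
    return balance * 1.012

# Level 2.1: the compiled form that actually executes. Listing its
# instructions shows loads, a multiply and a return, but nothing about
# banking, months or interest -- that intent lives only in the source.
for instruction in dis.get_instructions(monthly_interest):
    print(instruction.opname)
```

    A decompiler can reconstruct something source-like from these instructions, but the meaningful names and comments, the carriers of intent, are gone for good.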


  17. Hi Aravis,

    It would be more convincing if you could cite some of your own peer-reviewed work on these subjects. Simply linking to pieces published on your own blog is not very persuasive.

    I’m an amateur. I have never attempted to get anything peer reviewed. I doubt I am saying anything that hasn’t been said before, so I doubt I could get published anyway. The arguments on the blog should stand on their own. If I were a lone crackpot you might have a point, but computationalism is far from a fringe view. If you’re not interested in finding flaws in my arguments that’s absolutely fine.


  18. And yet, some people think that despite our complete lack of understanding of the programming level 2, we can replicate it in silicon. The mind boggles.

    Nobody thinks that. The claim is that it ought to be possible in principle. Nobody is saying we can do it in practice without better understanding of human intelligence, and personally I’m not sure if it will ever be feasible.

    Or that silicon can already do this (experience beliefs). The mind boggles in overdrive.

    This is only because you interpret beliefs differently. In the language I use, a printed telephone directory believes that “ACME Ltd” has a telephone number of “01234567890” and this does not imply any kind of consciousness or understanding.
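    To make this deflationary usage concrete, here are a few lines of Python (just the directory example above, coded up as a sketch; the names and number are illustrative only):

```python
# A sketch of "belief" in the deflationary, functional sense: the
# directory "believes" a proposition iff its stored mapping agrees.
# No consciousness or understanding is implied.
directory = {"ACME Ltd": "01234567890"}

def believes(store, subject, claimed_number):
    """True iff the store's entry for subject matches the claim."""
    return store.get(subject) == claimed_number

print(believes(directory, "ACME Ltd", "01234567890"))  # prints True
print(believes(directory, "ACME Ltd", "00000000000"))  # prints False
```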


  19. Well, that’s a rather facile comment, though I’m going to assume you’re being sincere. If it does nothing except change your debate into a discussion of biological consciousness per se, preferably human, I’d be happy. Look at it this way: Given a hypothetical “last chance” grant, I’d rather make an ethical case for funding research into what these neurologists and neurosurgeons are doing than for funding studies in AI.


  20. DM,
    as Aravis said, it is problematic to cite your own articles.
    We cite other, authoritative sources to illustrate, confirm or buttress the point we are making. Citing oneself just does not do the job, unless one really is an expert in the field, with many other citations to the article to confirm one’s expertise.


  21. Since I believe that functionalism–and computationalism–have been roundly refuted, I have no interest in trying to figure out “how semantics might arise in an information processing system” (since people are not information processing systems).

    I find that to be a very strange attitude. I am convinced computationalism is true, and this makes me fascinated with learning about the viewpoints of people who disagree. This is why I am so interested in talking to you. I want to challenge my preconceptions and my interest is sparked by the views of those who can help me do that.

    I do not accept that computationalism has been roundly refuted. I have never come across a convincing argument against it, and as far as I’m aware it remains a popular position within philosophy of mind. For any refutation you can care to mention, I bet I can find a flaw, and I doubt I will be the first to do so (or else perhaps I really ought to publish).

    I’m still reading those papers. I don’t have access to “What Psychological States are Not” at the moment but I can probably get it on Monday. I’ve read about the first 30-40% of “The Troubles with Functionalism” so far and do not have any major issue with it yet, so I am interested to see whether it changes my mind, convinces me that I am not a functionalist after all but some other kind of computationalist, or whether I find a problem with it.

    I’ll come back to you after I’ve read them.


  22. DM,
    In the language I use, a printed telephone directory believes that “ACME Ltd” has a telephone number of “01234567890”.
    That is a trivial redefinition of belief that is not at all useful and it contradicts everyday understanding of the subject.

    Worse than that, you are caught on the horns of a dilemma. In the earlier post you were arguing for X-Phi, saying that opinion surveys of terms would usefully guide us to more accurate definitions.

    Now apply that reasoning to beliefs and ask what the expected opinion of the nature of beliefs is. I am pretty sure the normal definition of beliefs would not be your telephone directory example. In fact I suspect that most people would be gobsmacked by your definition.


  23. DM,
    Nobody thinks that. The claim is that it ought to be possible in principle.
    And what is that principle? Moreover you must answer the considerable objections as well.


  24. C Lqrvy,
    Good point. I agree that it is important to distinguish between the types of behaviorisms, and Massimo is certainly right that many forms of behaviorism, such as the three you mentioned, are not very active areas of philosophical or scientific research currently. However, behavior analytic traditions from Skinner onward (which is what I was mainly talking about) and neo-behavioral theories (which have largely been re-termed into the cognitive literature more recently) are very much alive and productive areas of research.

    As for how the Turing Test relates to behaviorism, or more specifically behavior analysis, I agree with Massimo that behaviorism has nothing to contribute to the topic of consciousness. However, behavioral psychology is not necessarily even aiming to contribute to that topic, so not understanding the nature of consciousness is not a reason to claim that behavioral psychology is dead as a scientific theory. Behavioral psychology tends to focus on functional accounts of behavior that look to find environmental variables that can be manipulated to better control and predict behavior, which suits clinical situations better but is also very relevant to basic research. As I mentioned before, work in areas such as relational frame theory has made very important advances in psychology and, more recently, has led to some great collaborations with cognitive science folks like Jan De Houwer (a researcher in implicit biases) and with neuroscience. We even have some collaborations happening with evolution science, on how behavioral psychology can be integrated into modern notions of evolution occurring across multiple levels (with people like David Sloan Wilson and Jablonka).


  25. I guess I question why we would need consciousness to behave in certain advanced ways. Presumably you would agree that simple machines or programs that can “behave” in certain limited ways are not conscious, so why do more complex machines/programs need to be conscious? Is there a specific action that you think we take that determines we are conscious?

    We have certainly come a long way in terms of explaining some of the variables involved in why we do what we do, but not understanding how our subjective experience is generated does not seem to impair our forward progress in better understanding why we do what we do. This makes me think we will eventually be able to create a machine that can behave exactly like a human being but have no consciousness. Moreover, I think we can create such a machine without ourselves ever understanding consciousness.

    For example, we could program a machine to react to all aspects of the color red in great detail, and based on all the physical properties the machine could calculate a behavioral response just like a human, but it would be no more conscious than a simple machine that was able to detect various wavelengths and read out what color it was.


  26. Thanks for reading the article and for your comments, labnut. Articles such as these tend to supply the relevance that many scientists suggest philosophic inquiry lacks. I happen to think such statements are hyperbolic. In fact, even some of the scientists in this article have ethical concerns. But it seems to me philosophers are more likely to frame and refine a whole host of issues by somehow participating in such research. After all, there seems to be agreement that the patients were once relatively high-functioning, conscious beings about whom life and death decisions must now be made. Consider the implications when it is reported that most lay people consider the “state” many of these individuals find themselves in as “worse” than death. You say, “That still does not say anything about the nature of consciousness.” Perhaps not. But perhaps we become lost in our own language when speaking about the “nature” of consciousness. These are documented cases in which humans considered by many to be clinically dead are proven to be otherwise. It seems to me this area opens the door to both philosophic and scientific inquiry and may present the best opportunity to explore issues such as awareness, consciousness, intent, free will, and self-identity.


  27. Sorry . . . p.s., such inquiries may help to determine a minimum “threshold” for possible emergence. I find the clinical terms personally offensive, but they are nevertheless perhaps critically important.


  28. Coel: If a person, in what is presumed to be a state of “coma,” can identify who he is and where he is, there is some reason to think this is evidence of belief, true or false. Granted, the method used to reach this conclusion may be disputed, but it seems an inference to the best explanation.


  29. I am well aware that you don’t think it has been refuted. That’s your prerogative. I was simply explaining why I don’t find interesting the issues that you think are pressing — like deriving a semantics from nothing more than syntax.


  30. You are quite right that XPhi would likely show me to be wrong if I were claiming to be defining belief universally. But I’m not. I am reserving the right to speak metaphorically, although later I would hope to justify that the metaphorical usage is continuous with what you think of as real beliefs, and is in fact more or less the same thing but in the context of a much simpler system.


  31. Hi labnut,

    And what is that principle? Moreover you must answer the considerable objections as well.

    Saying that something is “possible in principle” does not mean that “what is that principle” is a sensible question. Possible in principle just means that it ought to be possible, notwithstanding practical considerations and know-how. There is no single simple principle with which to answer your question.

    I think I have answered the objections, if not on this thread then on others on Scientia Salon, Rationally Speaking and my blog. I very much look forward to coming across new objections to consider.


  32. DM,
    We don’t detect that they are conscious. We assume that they are. But perhaps they are not. Who knows?
    Ask any dog lover and you will get back an unambiguous reply that they are conscious. They have a certainty that brooks no argument (I am one of them). What makes them so certain? They clearly believe they detect consciousness in dogs, and, as C says, the dogs would never pass the Turing Test.

    The answer was suggested to me by Daniel Goleman’s book Social Intelligence (well worth reading).

    But first, two anecdotes to illustrate the point I am going to make. I was a member of our regiment’s competition drill squad (we won the national inter-regimental competition!). As we relentlessly performed our practice drills I introspectively marvelled at what was taking place (drill squad philosophy!). How was it that we intuitively knew what the other was doing and so perfectly synchronized our movements? There I was doing it, and yet I did not understand how it was happening, how it was possible. Players in an orchestra report the same experience.

    I will shortly decide to take my two dogs for a walk, the greatest joy in their life. When I make the decision all bedlam breaks loose. They will careen around the house, wild with exhilaration and joy, spinning, cavorting and barking deafeningly. My family thinks my dogs are mind readers. How do my dogs unerringly know when I am going to take them for a walk?

    Goleman had this to say in his opening chapter:
    …we are wired to connect.
    Neuroscience has discovered that our brain’s very design makes it sociable, inexorably drawn into an intimate brain-to-brain linkup whenever we engage with another person. That neural bridge lets us affect the brain—and so the body—of everyone we interact with, just as they do us.
    Even our most routine encounters act as regulators in the brain, priming our emotions, some desirable, others not. The more strongly connected we are with someone emotionally, the greater the mutual force. Our most potent exchanges occur with those people with whom we spend the greatest amount of time day in and day out, year after year—particularly those we care about the most.
    During these neural linkups, our brains engage in an emotional tango, a dance of feelings. Our social interactions operate as modulators, something like interpersonal thermostats that continually reset key aspects of our brain function as they orchestrate our emotions.

    It seems we are all embedded in a social web of dense interconnections, where we read the most subtle indicators and transmit them as well. Our brains are wired to maintain ‘an intimate brain-to-brain linkup’, a ‘neural bridge’. It is the ability of a person to participate in this dense social web, to maintain a neural bridge, that tells us the other person is functioning consciously. Dogs, by co-evolving with humans, have acquired the ability to integrate themselves into our dense social web of interconnections. They read our signals and we read theirs, allowing us to unerringly conclude that they are conscious and feeling as we are.

    All of this suggests that instead of a Turing Test we should be using the Goleman Test (my term) to detect consciousness. When and if a machine participates in our dense social web fluidly, interactively, spontaneously and unhesitatingly, with the same emotions we experience and that we intuitively recognise as authentic (as happens with my dogs), then we will conclude it is conscious.

    Go at it, DM. See if you can program your machine to experience the fierce joy and exhilaration that my dogs will experience when I get up from my computer to take them for a walk. P.S.: I secretly believe they really are mind readers.


  33. Thomas,
    In fact, even some of the scientists have ethical concerns in this article.
    The ethical concerns are tricky.

    1) Should we prolong their lives when it is hardly life and can mean so much suffering?
    2) The impact on family can be devastating. Is it not better to end the life and allow the family closure, so that they can come to terms?
    3) Is the very high cost of maintaining minimal existence a good allocation of resources?
    4) Is the absolute value of life such that 1 to 3 are justified?
    5) Alternatively, if we answer no to (4) and thereby put a price on life, have we not opened the door to moral relativism, such that other much more questionable decisions are enabled?
    6) By stubbornly persisting in maintaining life, do we not motivate science to further research the matter and thus enable later breakthroughs of great significance?
    7) Is the high value we place on life not one of the most noble attributes of our species?
    8) Should we compromise this noble value? What would be the unintended consequences if we did? If we compromise it, do we not lose something defining about ourselves and degrade ourselves?


  34. Hi imzasirf,

    Presumably you would agree that simple machines or programs that can “behave” in certain limited ways are not conscious

    Kind of. I suspect there is a continuum. I don’t think that small things are big, but that doesn’t mean that they have no size at all.

    so why do more complex machines/programs need to be conscious?

    Just to be clear, in case it isn’t, I don’t think complexity automatically entails consciousness. I think consciousness is a property of certain complex computations that have a certain organisation, and I think this organisation is necessary to perform some of the tasks that humans are capable of.

    Is there a specific action that you think we take that determines we are conscious?

    I think consciousness is bound up with having a sense of self, ability to introspect, etc. I think that computations which have a suite of such abilities are conscious. I think that existing computer systems with similar abilities may be very dimly conscious and what keeps them from being truly conscious is a lack of complexity and insufficient understanding of the world.

    This makes me think we will eventually be able to create a machine that can behave exactly like a human being but have no consciousness. Moreover, I think we can create such machine without ourselves ever understanding consciousness.

    I agree that it ought to be possible in principle for us to make a machine which will behave exactly like a human, and I agree that we do not have to understand consciousness to do so. However, I believe that such a machine would be conscious, because I think consciousness is a necessary property of any algorithm which can perform at a human level. Like complexity, consciousness is not an ingredient that needs to be added in; it is simply a property without which it is impossible to build an intelligent system, whether we understand this or not.

    I should emphasise that I am not absolutely certain that all systems which behave like humans must be conscious. I am merely extremely confident. I am much more certain, however, that any algorithm which not only functions like a human brain but works analogously to a human brain will be conscious. As such, my preferred example is a hypothetical simulation of a human brain built from a brain scan, complete with input sensory data and output motor control. It is this virtual brain that I am certain must be conscious. This only serves to establish the most convincing example of a computer sustaining consciousness, but I think that simpler or more directly designed computer systems are probably also capable of consciousness.

    So it seems we simply have contrasting intuitions. It seems at first glance as though both viewpoints are viable, however I think that further reflection shows that my view is more coherent, as I would be happy to discuss.


  35. Learning a lot about philosophism on this blog and the notion that philosophy explains all and is meaningful in terms of human behavior or explaining experiences.

    The common strategy, like with theism and other magical beliefs, is denial of the ordinary.
    The reality of all experience is that it is just ordinary. Humans are just another ordinary descendant of prior animals. Daily experience is very ordinary. Nothing special, like consciousness, feelings, thinking, logic, gods, ghosts, demons, ethics/morality, etc. Certainly nothing magical.

    Human brain functions are ordinary, behavior is ordinary, etc. – there is nothing special about humans vs other animals.

    Apparently, this is a fear inducing reality.


  36. Yes, I always learn a lot on comment boards. Not much new in terms of data, facts, evidence or information sadly, but a lot about defensive written behavior and fear behavior around new ideas.

    My experience with web comment boards is that they are dominated by MA conservative-thinking white guys aggressively defending their old ideas. In the beginning there were more younger people and women; they have been scared away, also sadly. With no policing, that will happen.


  37. Agree, labnut. And this is why there is more at stake here than simply science. But these unfortunate events are to my mind especially suited for refining our notions of self, identity, emotion, intention, and awareness. To give you an example from my own life: my father lived to age 97, and gave very little evidence of senility for most of it. His kidneys simply gave out, so he was on dialysis (something, by the way, that might have been denied him, given his age, in other nations). About 18 months before he died, I took him to see his internist, who broached the subject of the DNR. My father initially wouldn’t sign it. But this was largely because he couldn’t immediately grasp what it meant to be machine-dependent with no possibility of being “aware” again. How will these concepts change over time as a result of technological advances? What will it mean when one signs a DNR? How will we explain these matters? It would seem, perhaps unsurprisingly, that none of these notions has a static, fixed character. We can only incrementally refine our explanations of them. No one of them seems to be a controlling factor. They are situational and contextual. And perhaps inherently elusive.


  38. Hi labnut,

    We cite other, authorative sources, to illustrate, confirm or buttress the point we are making.

    My purpose in linking to my blog is not to establish authority but to save myself repetition and typing, and to save Massimo the effort of having to read it for moderation. I do not think it is very different from Massimo’s purpose in linking to articles he has written previously on Rationally Speaking, on topics on which he is not an expert. You are right that perhaps I should make an effort to provide citations, but performing a literature review is difficult for a hobbyist without access to academic publications.

    The comments on my posts also serve to provide a more appropriate place for discussion of these ideas than does Scientia Salon.

    Ask any dog lover and you will get back an unambiguous reply that they are conscious.

    I am inclined to believe you, but how would you treat such an argument coming from a person who believes that a piece of computer software is conscious?

    If we accept for a moment that a computer or robot can behave in the same way as a person, there is no reason to think that the same feelings of empathy and emotional connection could not be evoked in a human by the software. Either you believe that it is impossible for a human to be emotionally connected to a piece of software (which is patently false if you’re familiar with the worrying and increasingly prevalent phenomenon of digital girlfriends in Japan), or you must believe it is possible for a human to be mistaken in such a belief. If you think that humans can be mistaken, then you have no grounds for certainty that your impression of your dogs is true while the impressions of those deceived by sufficiently advanced AI are not.

    However I must emphasize that I personally have no doubt that your dogs are conscious. I am only saying that similar grounds ought to be adequate for believing a computer to be conscious. To clarify, I do not think that digital girlfriends are conscious, because I do think that people can be mistaken in this regard and I think that such Japanese men who are emotionally connected to this software are mistaken, particularly if they believe the software loves them back.

    I have said this before, and I will say it again: I do not conceive of the Turing Test as a test that must have no false negatives, but as a test that must be calibrated so as to make false positives as difficult as possible, because it is the claim that a machine can be conscious that is controversial. As such, I am not moved by the point that your dogs would fail the Turing Test. The Goleman test you propose just sounds like a Turing Test with a lower bar, and I have no doubt that it could be passed by a computer, as is arguably already happening with digital girlfriends.


  39. imzasirf and DM: I guess some of us might have some reservations about what it means to say “behave exactly like a human being.” One might say, for example, “What do you mean when you tell me not to behave like an animal?” because one doesn’t recognize one’s own behavior as such. So it would seem impossible to do such a thing . . . exactly. You could only do, perhaps, what is done in “Blade Runner”: build replicants that entail the history of a particular human. Even if the replicant could then build on this foundation and add to its history, is this exact human behavior?


  40. “Useful Thing for Philosophers to Do: Show How Pretty Much All Our Beliefs Are Wrong – Including Philosophy” –

    What philosophers can do is stand between evidence based knowledge and cultural/personal beliefs/intuitions expressed in everyday language. Philosophers can analyze the disconnects – just that alone is useful. Maybe. Worth a try.

    The medical facts are in. Behavior is “decided” solely reactively, completely unconsciously, and instantly – within 140 ms. All subjective experience, including feelings, consciousness, deciding, thinking, etc., is probably just epiphenomenal. Maybe not, but the experimental work needs to be done. However, since other animals don’t need any of what supposedly makes humans “exceptional,” those things probably don’t matter much. Since all subjective experience consists of self-reports using local language, it looks like words don’t matter much either.

    What would definitely be useful is to show how our mistaken beliefs are embedded in our language. For example, the notion of a stand-alone, independent, free-thinking Me/I. Also cross-cultural differences in beliefs, language, etc.

    Currently, it seems philosophers are hyper-focused on protecting turf – which is normal. They are spending all their time looking backwards to old books and attacking new ideas from bench brain science. That appears to be the default of brains overall.

    So, using only everyday language, philosophy can now look forward and integrate medical/biological facts with everyday language – or try.


  41. From my perspective, behaving like a human can be interpreted as behaving in a manner indistinguishable from human behaviour – in other words, passing the Turing Test consistently.

    I can also give the example of a physical simulation of all the particles in a human body, and perhaps the environment too (although we might take a few shortcuts with the environment). This seems to me unlikely to be feasible, but it ought to be possible in principle, and I have no reason to doubt that this virtual person would behave in the same way as a physical person composed of the same pattern of particles.


  42. OK, so what independent evidence is there that philosophy can predict measurable events in the future – independent of cultural beliefs? What philosophical statement can be falsified? What is the proof for any philosophical statement?

    It is “obvious” the world is flat, too.

    BTW, is there going to be real “Scientia” discussed in the “salon?”


  43. “Science (from Latin scientia, meaning “knowledge” ) is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.” ??

    Have there been any testable explanations? Well, the ones I share have been tested.


Comments are closed.