Clarifying Sam Harris’ clarifications

by Dwayne Holmes

[Editor’s note: this essay is an expansion of, and follow-up to, the author’s submission to the contest organized by Sam Harris for the best criticism of his arguments on science and ethics, as laid out in The Moral Landscape.]

The semantics of “science” is important

In responding to Ryan Born’s essay [1] — which won the competition giving readers a chance to challenge the arguments Sam Harris made for merging science and ethics in his book The Moral Landscape (henceforth, TML) — Harris undermined most of the discussion concerning the “scientific” nature of his theory with this statement:

“The whole point of The Moral Landscape was to argue for the existence of moral truths … every bit as real as the truths of physics. If readers want to concede that point without calling the acquisition of such truths a “science,” that’s a semantic choice that has no bearing on my argument.”

Harris is right that the choice of calling TML theory a science (or not) is a semantic issue, which would not touch the validity or practical utility of his theory. However, that does not mean the decision is without serious consequence.

Harris’s expanded definition of “science” relies on a loosening of criteria that can become problematic for those in traditional scientific fields such as physics and biology. His most questionable claim is that the existence of answers in principle provides sufficient grounds for defining something as scientific. If that were true, Intelligent Design (ID) theory would be classified as a legitimate science, since there are answers in principle to the questions its proponents ask. The difference between ID theories and traditional scientific theories is that the methodology underlying ID cannot generate answers in practice. For many that is a critical distinction (and Harris admits his approach may not meet that criterion).

If we decide to accept a broad definition of science (just to let Harris’s moral theory “in”), future court cases regarding science education may then hinge on being able to explain the difference between science (for people in lab coats) versus science (for everyone else) such that they shouldn’t be taught together in a “science” class. Why make the difficult job of protecting legitimate science education any harder than it already is?

Harris should concede that TML theory is not science as most people use the term, perhaps adopting “scientia” instead, as Massimo Pigliucci advocates, as a term covering the building of rational knowledge beyond strict empirical approaches.

Distinguishing between moral systems is important 

Harris argues that the concerns of different moral systems reduce to concerns about consequences:

“Similarly, if virtues such as generosity, wisdom, and honesty caused nothing but pain and chaos, no sane person could consider them good. In my view, deontologists and virtue ethicists smuggle the good consequences of their ethics into the conversation from the start.”

By using the most generic conceptions of “consequence” and “good,” it is possible to force deontology and virtue ethics to fit into the category of consequentialist theory. But that would not change the fact that traditional consequentialist theories (like utilitarianism or TML theory) are characterized by vastly different ideas regarding what consequences are desired (including for whom) and how to compare choices when making a moral judgment. In fact, outside the extreme end of avoiding absolute misery for everyone, Harris has not made a case that alternative systems compare practical consequences at all while rendering moral judgments, much less in the same way as consequentialist theories do.

Different moral systems can legitimately place importance on the way people conduct their actions, with a view toward perfecting the individual or society (in an aesthetic sense) rather than toward overall gain (in a practical sense). For example, a well-done action that saves no one (and costs one’s own life) can be viewed as holding greater moral merit than a devious or slipshod action that saves lives.

One real-life example is the martial code of Bushido in Japan, exemplified in the story of the 47 Ronin [2]. This is clearly not a story about calculating the maximization of some practical ‘good’ or ‘flourishing’ (beyond that of an individual’s character). The gravitas of the story comes from a commitment to duties or beliefs that are held sacred (virtuous) in themselves, in spite of massive costs.

If “consequentialism” is broadened to such an extent that the interests, mechanics, and results of Bushido get lumped together with Mill’s utilitarianism, that term has lost much explanatory power. Eventually people will have to reinvent terms similar to the ones we already use to distinguish such systems, though now as subsets of consequentialism. Ultimately, accepting Harris’s argument simply shifts debate to what flavor of consequentialism is correct and so should be practiced.

Harris further argues that valuing something apart from its consequences is not even coherent:

“Ryan seems to believe that a person can coherently value something for reasons that have nothing to do with its actual or potential consequences… It is true that certain philosophers have claimed this… But I don’t find this claim psychologically credible or conceptually coherent.”

That an abstract principle might be chosen (credibly) over practical consequences can be seen with a simple hypothetical. Imagine that scientific evidence emerges that a false belief in wholly benign star fairies (who help when all natural/scientific measures have been exhausted, and require no other false beliefs or actions against others) leads to greater happiness, health, and longevity. According to traditional consequentialist theories (including Harris’s), it would be right to maintain that false belief and promote it in others. More importantly, it would be wrong to promote doubt in others.

However, many people would find that an unacceptable moral conclusion. Those practicing “atheism” regarding these fictional beings, because they prefer honesty (or curiosity, truth-seeking, etc.), are coherently valuing something other than practical consequences.

There are many more hypotheticals that can be considered, such as refusing to engage in cannibalism, killing children, or forcing a woman to become pregnant even if one of these actions were required to save humanity from extinction. It is psychologically and conceptually valid to say that a world that requires that to maintain its existence (even if temporarily) is not a world worth saving.

It may be objected that such a concern (take cannibalism) is still about consequences and does not reduce simply to “cannibalism is bad.” Specifically, it is taking into account the psychological consequences one would face from such an experience. However, that move simply supports the overall argument being advanced. A traditional consequentialist theory (including Harris’s) would not, and theoretically could not, prioritize the consequences of a specific act (or for a specific individual) over others. Traditional consequentialist theories are about maximizing a specific goal, in Harris’s case “well-being,” regardless of means.

If the fate of all humanity came down to one person having to choke down some man-flesh for a while, the choice would be crystal clear to a traditional consequentialist. In contrast, deontological theorists and virtue ethicists can take into account, and prioritize, specific methods or consequences to specific groups. This is true even in the face of extinction, making them very different moral systems indeed.

Distinguishing factual errors from moral errors is important 

“[T]he inner and outer consequences of our thoughts and actions seem to account for everything of value here. If you disagree, the burden is on you to come up with an action that is obviously right or wrong for reasons that are not fully accounted for by its (actual or potential) consequences… [and]… I don’t believe that any sane person is concerned with abstract principles and virtues — such as justice and loyalty — independent of the ways they affect our lives.”

Several examples of actions being judged right/wrong without appeal to practical consequence have already been given above. It seems especially hard to accept the label of insanity for those valuing truth over the beneficial delusion (placebo effect) of fictional beings.  Alternative challenges have been advanced by writers such as Massimo Pigliucci who flip the problem back to Harris. For instance, assuming that science found that cultural practices oppressing women actually resulted in net positives for societies, would Harris switch to accepting them [3]? To this I might add the question: in such cases, would it really require insanity to oppose them?

As it happens, even without hypothetical benefits, the practice of female genital mutilation (FGM) does shift the burden of proof while revealing a critical flaw in Harris’s moral theory. This is because it shows how, by basing moral judgment on outcomes, TML theory loses the ability to distinguish mistakes (factual errors) from intentions (commonly considered the basis of moral errors).

TML contains scathing criticism of FGM, suggesting a comparison between FGM cultures and a sadist cutting up young girls for pleasure [4]. However, there is a clear difference in intent (mental states) between the two. The intention of parents practicing FGM is to help their child and their society (even if they are horribly mistaken about what they factually achieve). This is obvious when one considers that those practicing FGM have sought modern medicine to remove any similarity between the inadvertent outcomes of the procedure (physical suffering and danger) and the intended results of the sadist. Indeed, the only remaining “problem” would be the injustice of altering the physical features of a child without their consent (which, of course, is something commonly accepted for males in the West). Intriguingly, some people have moved to block access to such medical services (in the West and abroad), with the intent of preventing acceptance of FGM, despite the fact that their actions inherently lead to the very suffering and death of innocent girls that FGM practitioners were seeking to avoid.

A purely results-based consequentialist theory (which ignores mental states) cannot discriminate between these alternatives, treating them as roughly morally equivalent despite the vast differences in intent. It is reasonable to find such a conclusion mistaken.

Indeed, for many, conflating factual error with moral error would seem to be a major misfire during the initial test run of any moral theory.

Distinguishing between descriptive and prescriptive ethics is important (and shouldn’t his fans care?)

Harris certainly did clarify a mistaken impression with these statements:

“I also disagree with the distinction Ryan draws between “descriptive” and “prescriptive” enterprises. Ethics is prescriptive only because we tend to talk about it that way… We could just as well think about ethics descriptively… In my view, moralizing notions like “should” and “ought” are just ways of indicating that certain experiences and states of being are better than others… There need be no imperative to be good — just as there’s no imperative to be smart or even sane…. [and separately]… Ryan, Russell [Blackford], and many of my other critics think that I must add an extra term of obligation — a person should be committed to maximizing the well-being of all conscious creatures. But I see no need for this.”

While it is clear that Harris (in TML) equated statements of how to achieve well-being with “oughts,” I (and others) apparently misread that as elevating factual claims to the level of moral imperatives (that one ought to do it). It was not obvious that Harris had intended a wholesale assault on prescriptive ethics, by going the other direction and reducing oughts to mere shorthand descriptions.

On the contrary, TML appeared (again, to me) to be an opening shot against moral relativism and anti-realism. Perhaps this confusion arose from Harris’ claim of being a moral realist, while repeatedly attacking both moral relativists and anti-realists. According to moral realism, right and wrong exist, and so do imperatives. Otherwise, how would this view differ in a practical sense from the anti-realists who challenge the objective existence of prescriptive moral claims?

It is also hard to square the emotionally charged language found throughout TML with the purely descriptive enterprise Harris now claims to be conducting:

“[p. 42, my emphasis] The difficulty of getting precise answers to certain moral questions does not mean we must hesitate to condemn the morality of the Taliban — not just personally, but from the point of view of science.”

If morality is about solving navigational problems, and judgments of “bad” are shorthand for failing to act intelligently or sanely, how exactly does “condemn” fit into the picture? Does one talk about condemning inadequate navigational charts? People with low IQs? People with neurological or psychological disorders? And what is the moral difference if one “hesitates” to condemn such people, given that the Taliban must be incapable of understanding the condemnation (analogous to lacking sufficient intelligence or sanity)? “Condemn” seems to carry a greater connotation than suggesting that they could improve their game (if they want), or that they lack sufficient ability to understand moral concepts.

And finally, TML showcases several direct instances of using prescriptive terms:

“[p. 49, emphasis in the original] We can think more clearly about the nature of moral truth and determine which patterns of thought and behavior we should follow in the name of morality.”

“[p. 80, italics are Harris, underline is mine] We have already begun to see that morality, like rationality, implies the existence of certain norms, that is, it does not merely describe how we tend to think and behave; it tells us how we should think and behave.”

Given statements such as these, people cannot be criticized for holding an impression that Harris had already built obligations into the structure of his moral theory. It seems hard to read “should” as meaning simply “if you happen to want to do X, then…”

But let’s take him at his word. This clarification means he just pulled the rug out from under all of his fans, who thought they could use scientific-sounding moral statements to promote or reject practices with some sort of moral force. It would be interesting to know how many people were surprised or disappointed by this clarification (or, if neither, then actually understood the implications).

Truly, “throwing acid in the faces of young girls is bad” is now directly translatable to (according to Harris’ TML 2.0) “throwing acid in the faces of young girls is not the best thing you could do, but there’s no reason you have to do any better if you’re fine with being kind of like stupid or crazy.” And while Nazis were factually not maximizing the well-being of the undesirables they were stuffing into crematoria, and so descriptively being “bad,” there was no moral imperative to be doing anything other than stuffing undesirables into crematoria.

Of course, in his clarification, Harris makes a comment that tries to have it both ways:

“[my emphasis] What does it mean to say that a person should push this button? It means that making this choice would do a lot of good in the world without doing any harm. And a disposition to not push the button would say something very unflattering about him… I think our notions of “should” and “ought” can be derived from these facts and others like them. Pushing the button is better for everyone involved. What more do we need to motivate prescriptive judgments like “should” and “ought”?”

His apparent return to advocating a prescriptive ethical theory will be left for readers (and Harris) to work out.

Admittedly, moral skeptics (and other anti-moralists such as myself) could be quite comfortable with the kind of limited motivation and weight for “prescriptive judgments” Harris outlines here. But this still seems incongruous with a position of moral realism, and the level of charges he wants to make against specific practices and cultures.

Does Harris really hold (and do his fans accept) a moral judgment that Nazis simply exhibited “unflattering” dispositions?

Harris’s known unknowns are important (and troublesome)

“Following Hume, many philosophers think that “should” and “ought” can only be derived from our existing desires and goals — otherwise, there simply isn’t any moral sense to be made of what “is.” But this skirts the essential point: Some people don’t know what they’re missing. Thus, their existing desires and goals are not necessarily a guide to the moral landscape. In fact, it is perfectly coherent to say that all of us live, to one or another degree, in ignorance of our deepest possible interests. I am sure that there are experiences and modes of living available to me that I really would value over all others if I were only wise enough to value them.”

It seems true that experiences may generate novel interests, or allow us to discover interests we had not realized were within us. These may become important in ways we could not have predicted earlier. But this does not counter the fact that we can only derive oughts from our existing desires and goals. It simply suggests that our existing desires could be transient, not permanent.

But let’s take those last two sentences at face value. If indeed Harris can be ignorant of his deepest possible interests, that there are other modes of living that he might value more, is it not possible that he is ignorant of his ability to value things other than gradable (and so maximizable) single-parameter consequences?

More importantly, if the argument he makes here is valid, how can any moral landscape map, much less conclusions from a moral landscape map, be made at this time? It seems implausible for anyone to have known all possible perspectives so as to discount any future changes in desires or goals. This is especially true for someone admitting ignorance on the topic, an admission that undercuts all of the moral judgments Harris made against different practices in TML.

Maximizing well-being for everyone will create conflicts in the real world

“And unless one were to posit, against all evidence, that every person’s peak on this landscape is idiosyncratic and zero-sum (i.e., my greatest happiness will be unique to me and will come at the expense of everyone else’s), the best possible world for me seems very likely to be (nearly) the best possible world for everyone else.”

It only takes an inherent conflict or incongruity between one person’s peak and another’s to challenge the mechanics of TML. I have addressed the impact of inherently conflicting interests/values on TML in my full response to Harris (using his own analogy of health to the value of well-being) and so will not repeat those arguments here [5]. Instead I will try a different tack on the same problem.

Some people require well-ordered environments, with a near, if not actual, military regimen (i.e., externally driven order), to enjoy a sense of personal flourishing. Such structure removes distractions, allowing for increased productivity, contentment, and happiness. Others find such conditions suffocating. Similarly, some find extensive social networks useful, while others need solitude. Some find value in conflict (even if nonviolent) while others demand calm cooperation. Some prefer tradition, others novelty.

It is impossible for these people to maximize their personal concepts of well-being in one another’s company. However, in a world of finite space, time, and resources it is necessary for all these people to interact. Some sacrifice of maximized well-being (based on personal assessment) will be necessary. Harris says that his best possible world will likely be (nearly) the best possible world for everyone else. That “nearly,” even if accepted as the limit of sacrifice required, is crucial. Will he accept (nearly) the best world for himself, so that someone else can reach the actual best for themselves?

More importantly, what is the objective procedure by which we can decide who should sacrifice and how much? Harris’ silence on these matters, in TML and in his clarification, points to holes in his theory, not merely frontiers to be explored.
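
To make the conflict concrete, here is a minimal toy sketch of my own (the preference curves, the shared parameter, and the weights are invented for illustration; nothing here is drawn from TML). When two people’s well-being depends on the same shared parameter (say, the degree of externally imposed order) but peaks in different places, no setting of that parameter reaches both peaks, and which compromise counts as “best” depends entirely on an aggregation rule that the landscape itself never supplies.

def well_being_a(s):    # hypothetical preference curve for person A; peaks at s = 0.8
    return 1.0 - (s - 0.8) ** 2

def well_being_b(s):    # hypothetical preference curve for person B; peaks at s = 0.2
    return 1.0 - (s - 0.2) ** 2

grid = [i / 100 for i in range(101)]    # candidate settings of the shared parameter s

def best(aggregate):
    # Return the setting of s that maximizes a chosen aggregation of the two curves.
    return max(grid, key=lambda s: aggregate(well_being_a(s), well_being_b(s)))

print(best(lambda a, b: a + b))      # utilitarian sum       -> 0.5 (neither person's peak)
print(best(min))                     # maximin               -> 0.5 here as well
print(best(lambda a, b: 3 * a + b))  # weight A three-to-one -> 0.65, pulled toward A's peak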

The false analogy between Harris’ “science of well-being” and the factual “science of medicine”  

“Ryan writes that “Science cannot show empirically that health is good,” but he admits that, without this assumption, “the science of medicine would seem to defy conception.” I believe morality is also inconceivable without a concern for well-being and that wherever people talk about “good” and “evil” in ways that clearly have nothing to do with well-being they are misusing these terms. In fact, people have been confused about medicine, nutrition, exercise, and related topics for millennia. Even now, many of us harbor beliefs about human health that have nothing to do with biological reality.”

Speaking as a researcher in medicine I can confirm that one does not have to assume that “health” is “good.” All medicine requires is a desire to achieve or avoid specific physical effects, and an instrumental curiosity regarding how to reach those goals. This desire does not have to be shared (assumed “true”) by everyone involved in the process. Granted, there are many common or colloquial concepts of “health” that would be shared by most people, but the differences defy uniform consensus.

Harris has used this vagueness in defining “health” to suggest that “well-being” holds some scientific merit. In the essay I submitted in response to Harris’ challenge [6] and in the full response [5], I explained why this is not the case. Briefly, “health” provides no useful unit of measure for the scientist. Assuming for the sake of argument that the goal of scientists were to improve health because they are motivated by a belief that it is “good,” they would not be using some landscape map with “health” as the single unit of measure. Even something as clear-cut as “death” can require multiple parameters to make a scientific judgment. If well-being truly is analogous to “health,” then a science of well-being would require specific, well-defined parameters that allow for relatively clear measurements. Improved well-being would be a cumulative moral “diagnosis” based on many different landscape maps keyed to different, well-defined parameters.
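
To illustrate what “well-defined parameters” look like on the medical side, here is a minimal sketch of my own (the parameter names and reference ranges are approximate, illustrative values, not a real clinical standard). Each parameter has a unit, a measurement, and a reference range, and the overall assessment is cumulative across parameters rather than read off a single “health” scalar; a science of well-being would need its moral parameters to be at least this well specified.

REFERENCE_RANGES = {                      # illustrative adult reference ranges (approximate)
    "systolic_bp_mmHg": (90, 120),
    "resting_heart_rate_bpm": (60, 100),
    "body_temperature_C": (36.1, 37.2),
}

def assess(measurements):
    # Flag each measured parameter as inside or outside its reference range.
    report = {}
    for name, (low, high) in REFERENCE_RANGES.items():
        value = measurements[name]
        report[name] = "ok" if low <= value <= high else "out of range"
    return report

print(assess({"systolic_bp_mmHg": 135, "resting_heart_rate_bpm": 72, "body_temperature_C": 36.8}))
# -> {'systolic_bp_mmHg': 'out of range', 'resting_heart_rate_bpm': 'ok', 'body_temperature_C': 'ok'}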

It is at this point that Harris’ mapping system for well-being falls apart. Moral parameters do not share an equivalent meaning across individuals in the way health parameters do across patients. Also, moral parameters affect more than one person, which is never the case for physical parameters in medicine (unless you are pregnant or a conjoined twin). My blood pressure levels, and what I must do to maintain them, do not affect you. An increase in concern for social order, however, will have a direct negative effect on everyone else’s personal autonomy.

It is true that many people have been confused for millennia about medicine, nutrition, et cetera. One of the primary sources of confusion has been listening to people who talk in vague, ill-defined but important-sounding terms, suggesting that they can be trusted to deliver solid results. The solution has been listening to other people who, even if not promising the world (and getting a bit boring with technical details), provide the clear definitions and evidence required to connect actions to results. The former deal in “health” as “healers” and “health experts,” the latter deal in the “science of medicine” as researchers and medical professionals.

Since Harris has conceded he is not referring to people in lab coats (and the strict criteria used by such people) when using the term “science,” he needs to retract the false equivalence between his “science of well-being” and the factual “science of medicine.” Modern medical science is driven exclusively by people in lab coats (except when wearing smocks, or in the office).

Empirical truth claims are not (necessarily) contingent on well-being

“I would argue that satisfying our curiosity is a component of our well-being, and when it isn’t — for instance, when certain forms of knowledge seem guaranteed to cause great harm — it is perfectly rational for us to decline to seek such knowledge… I’m not even sure that curiosity grounds most of our empirical truth-claims. Is my knowledge that fire is hot borne of curiosity, or of my memory of having once been burned and my inclination to avoid pain and injury in the future?”

The first sentence is challenged by the earlier example of maintaining atheism, despite benefits from false beliefs. While it can be rational to choose safety over curiosity (truth seeking), that does not mean it is irrational to choose the other way around. Different moral preferences can be equally rational and valid.

The second sentence only shows that truths can be discovered by happenstance rather than exploration, or that curiosity can be prompted by external events rather than being self-initiated. The kid who gets burned does not learn anything more about fire unless the experience stimulates his curiosity to discover more.

In any case, this argument seems to ignore that many people choose to learn about fire without ever getting burned. At least for them, “I want to know because it is interesting” can be the motivating factor. How many have pursued such knowledge, despite the fact that they could end up being burned in the process?

Reading Hume is important

“We have certain logical and moral intuitions that we cannot help but rely upon to understand and judge the desirability of various states of the world. The limitations of some of these intuitions can be transcended by recourse to others that seem more fundamental. In the end, however, we must work with intuitions that strike us as non-negotiable.”

If this is Harris’s position on morality, he might be interested in picking up a copy of An Enquiry Concerning the Principles of Morals. It was published in 1751, and in it David Hume argues basically the same thing. He went on to argue that the fundamental moral intuition of humans, the non-negotiable intuition into which the others collapse, is the promotion of the interests of mankind. This is also known as utility, and it bears a striking resemblance to Harris’s concept of well-being.

Here are three quotes from Hume on this subject:

“In all determinations of morality, this circumstance of public utility is ever principally in view; and wherever disputes arise, either in philosophy or common life, concerning the bounds of duty, the question cannot, by any means, be decided with greater certainty, than by ascertaining, on any side, the true interests of mankind.”

“Upon the whole, then, it seems undeniable, that nothing can bestow more merit on any human creature than the sentiment of benevolence in an eminent degree; and that a part, at least, of its merit arises from its tendency to promote the interests of our species, and bestow happiness on human society.”

and just to show he also lumps virtues in with consequentialist theory…

“… public utility is the sole origin of justice, and… reflections on the beneficial consequences of this virtue are the sole foundation of its merit…”

The key areas of incompatibility between Hume and Harris are that, for Hume: 1) moral judgments cannot be founded solely on logical intuition or empirical study (which is what the is/ought distinction means), and 2) in order to distinguish moral error from factual error, intentions rather than results must be the focus of moral judgment. These differences are of course critical, and could be used to fix some of the flaws in TML theory.

Distinguishing moral certainty from lapses in moral cognition is important

“The universe is whatever it is. To ask whether it is logical is simply to wonder whether we can understand it. Perhaps knowing all the laws of physics would leave us feeling that certain laws are contradictory. This wouldn’t be a problem with the universe; it would be a problem with human reasoning. Are there peaks of well-being that might strike us as morally objectionable? This wouldn’t be a problem with the universe; it would be a problem with our moral cognition.”

It is unclear how this admitted “problem with human reasoning/moral cognition” does not open the door to the very moral relativism Harris attacks in TML. For example, how is the Nazi attempt to maximize well-being for everyone left alive, by removing the undesirables (causing a permanent loss of well-being for a minority within a short window of time), not simply another “moral” peak that some find objectionable because they suffer from deficits in their moral cognition? Or at least how is this avenue of argument not open to Nazis?

Surely this kind of argument undercuts Harris’s critique of practical relativists (such as anthropologists), who actively assume the possibility that their moral cognition is wrong in order to better study, and so understand, the moral systems of others.

So, in summary

There seems to be a great discrepancy between the heated, objective-sounding judgments pouring forth from Harris on the attack, and the dispassionate, purely descriptive, potentially accommodating moral system Harris claims to use when pressed to defend his theory.

Given all the above, my hope is this:

1) Harris will retract his claim that TML theory is science, and that answers in principle are sufficient grounds for a scientific theory. Broadening the definition of science is unnecessary, and potentially damaging to traditional science education.

2) He will no longer use the false equivalence/analogy linking his moral theory of well-being to the science of medicine. Just because people are motivated by similarly vague terms does not mean they use equally precise methods.

3) He will actually read Hume’s works on moral philosophy to understand what he is trying to criticize. His criticisms have consistently been straw men, and in reality he might find fewer impediments and even some fixes for his theory.

4) He will stop claiming that moral judgments can be made regarding cultural practices using TML theory (or science) at this time. His mapping system based on polar extremes of well-being/misery is clearly incomplete and lacking relevant data.

_____

Dwayne Holmes is a PhD student in neuroscience at Amsterdam’s VU University Medical Center, with prior degrees in philosophy and molecular biology. He is particularly interested in how science and philosophy impact our understanding of ethics (from molecules to social norms), and when he gets a chance he runs a website (using an increasingly pointless pseudonym) devoted to a form of moral skepticism/antirealism (thegooddelusion.blogspot.com).

[1] The Moral Landscape Challenge: The Winning Essay.

[2] Forty-seven Ronin.

[3] About Sam Harris’ claim that science can answer moral questions, by Massimo Pigliucci, Rationally Speaking, 6 April 2010

[4] The Moral Landscape (2010), by Sam Harris, Free Press, p.46.

[5] Against a moral landscape: full response, The Good Delusion.

[6] Challenge essay, The Good Delusion.

275 thoughts on “Clarifying Sam Harris’ clarifications”

  1. As a Humean of sorts, I definitely appreciate how you’ve addressed Harris’ ongoing attempts to work around the is/ought issue. You’re right. He does need to pick up “An Enquiry,” because it seems he’s quite unfamiliar with it.

    On the utilitarian angle, also good. You quote him:

    “And unless one were to posit, against all evidence, that every person’s peak on this landscape is idiosyncratic and zero-sum (i.e., my greatest happiness will be unique to me and will come at the expense of everyone else’s), the best possible world for me seems very likely to be (nearly) the best possible world for everyone else.”

    Agreed that my “peak” may be maximized in a way far different than yours, and even conflicting with yours, and yet, neither of us is sociopathic, or close.

    More narrowly, on the issues of distributive and retributive justice, Walter Kaufmann crushed this line of reasoning in “Without Guilt and Justice.”

  2. I agree with Sam Harris. Nature exists and can be scientifically defined. Science predicts such truths, and such predictions, successfully repeated, create theory.

    Theory provides truth. Such scientific truths currently number in the billions. Call them truths, facts, or nature. Science predicts nature.

    Neuroscience predicts behaviour, factually. Sam Harris is a neuroscientist, not a philosopher.

    Seems that philosophy better check the best before date on its interpretation of science, using science, not ignorant words, ideas and error-based reason.

  3. I agree with much of what Harris writes in The Moral Landscape – indeed, I credit the book with transforming me from a moral relativist to a moral realist – but have long been concerned with his use of the term science in that context. The author’s suggestion that the “unity of knowledge” approach that Harris advocates would better be classified as “scientia” goes, I think, a long way toward remedying the conflict.

    Regarding the “star fairies” hypothetical, though, I still don’t think that gets around the problem. Harris’ atheism is built on the idea that holding false beliefs – such as those about star fairies – is fundamentally harmful. Harris may well be wrong about this, and if one thinks that he is wrong (particularly with evidence to back it up), that person would be able to argue that it is moral to teach about these star fairies within Harris’ Moral Landscape view. The moral argument would be between the evidence of immediate benefits in the star fairy belief vs. evidence of the long-term benefits of believing true things, but both sides of the argument are ultimately about the best consequences.

  4. The article is about morals, not the prediction of behavior. Your response is completely unresponsive to the substance of the article.

    Seems like you’d “better check” your own best-before date.

  5. Well put, other than noting that neuroscience doesn’t have much of a prediction track record yet. Should we trot out the dead salmon’s brain lighting up the fMRI? 🙂

  6. Hi, thanks for replying.

    I understand there are limits to my star-fairies example, which is why I supplied a couple of others along the same lines. In its defense, I am basically rephrasing what people like Dawkins (and I believe Harris) have stated regarding the possible benefits of false beliefs. In short, they would not want to believe in such things even if there were proven benefits (long or short term). So this hypothetical is constructed such that there can be no harm arising from it, and therefore, as you suggest, according to the TML it really would be moral to teach it.

    Also, I would argue it is a false dilemma to say just because you have to believe in one false thing (to gain certain benefits), you have to believe in other false things. I believe it was part of my hypothetical that beliefs in star-fairies did not require any other beliefs contrary to science. So there is no real trade off required/being considered.

    But let’s say you are correct. As I state in my essay, even if it were true that these decisions reduced to concerns about consequences, we’d still have to be able to define the other moral systems in some way, because they evaluate consequences in fundamentally different ways. That is, they would just become flavors of consequentialism.

    I hope that makes sense.

    As a heads up it is after midnight in my time zone so it is unlikely I will be able to get to many more answers before tomorrow (after work) which would be about noon on the east coast of the US, or 6pm in Europe. That said I intend to respond to replies so don’t be discouraged if I don’t answer right away.

    Thanks!

  7. Hi Graham, although I may not agree with all of your claims regarding how science functions, I agree wholeheartedly that science is an important tool for understanding nature and that we should be careful to avoid using ignorant words, ideas, or error-based reason.

    If it matters to you, Sam Harris and I share nearly identical educational backgrounds, with undergraduate degrees in philosophy followed by graduate work in science. He and I are both neuroscientists, and I am willing to discuss any findings from neuroscience that pertain to this topic.

    Of course, Sam Harris has described himself as being more or less a philosopher that happens to work in science (using it to find answers), and on this topic he is arguably working in philosophy alone. His clarification which I am addressing would seem to concede that point.

    As another person suggested, prediction of behavior does not seem to be pertinent to the issues addressed in TML, his clarification, or my own. If you feel it is, please explain further.

  8. Hi Andrew, I accidentally hit the wrong reply button when answering your email. Please see the general reply below regarding your concern about the star-fairies hypothetical.

  9. There’s quite a lot of evidence that suggests a correlation between religion and happiness, in fact. Presumably Harris would simply argue that when counterbalanced with all the harm religion does it’s heavily weighted towards the ‘negative’ side. I think that’s disputable, particularly considering how many people in the world follow some sort of religion and are, for the most part, entirely peaceful and tolerant individuals.

    Interestingly Harris himself, in a blog post a while back, defended not telling his daughter that she didn’t have free will, because “some knowledge is best left unsaid.” He himself has conceded that ‘delusions’ (by his own claims) are useful.

    You’re right, there are considerations to do with weighing long term versus short term benefits, but this highlights the enormous difficulty this kind of moral bean counting entails in trying to weigh these competing interests, difficulties that particularly rear their ugly heads when we’re talking about future predictions of a level of complexity like entire cultures and social relations, where we can’t really do the ‘experiment’ first to get the results and where ‘getting it wrong’ could be disastrous on an enormous scale.

  10. Hi Dwayne,

    I entered TML challenge myself, and I have many of the same criticisms of Harris. In particular, I criticise the analogy to health for similar reasons and I agree with you that well-being is not one thing — different people will have different ideas about what the moral landscape looks like.

    I don’t much care whether Harris calls for “science” or “scientia”. The main thing I take him to task for is moral realism, but since he seems to be backing off a little on that perhaps I have less to disagree with him on.

    In response to your post, I did want to say a few words in defence of consequentialism.

    Harris has not made a case that alternative systems compare practical consequences at all while rendering moral judgments, much less in the same way as consequentialist theories do.

    Like Harris, I view deontology and virtue ethics as being largely compatible with and even arising out of consequentialism. Consequentialism considers moral actions to be those with good consequences. However, it is not generally possible to know what the outcomes of any particular action will be. We can only guess. It is also difficult to conduct an analysis of all possible outcomes at every moment of our lives. As such, we need heuristics. Virtue ethics and deontology are such heuristic systems. The virtues and deontological rules we endorse should be those which have a good chance of producing good consequences. This saves us from having to think about consequences all the time.

    Sometimes, rigid adherence to virtue ethics or deontology can lead to bad consequences, as in your example from 47 Ronin. Whether this is a problem or not depends on the bigger picture. For the same reasons that sometimes we need to release dangerous criminals because of a technicality (a bad consequence), it may be that rigid adherence to deontology producing local bad consequences is globally for the good. If deontological rules are negotiable or flexible, then they may be corrupted and unable to produce the good consequences they are intended for.

    You are right that accepting Harris’s framework now just shifts the debate to which virtues or deontological systems are correct. But that, Harris would say, is exactly the kind of empirical question we should be investigating with science (or “scientia”). We should not, in other words, be content to sit back and assume there is no right answer.

    On the star fairies, I agree that Harris would probably prefer truth. But I think you may be mistaken in thinking that this is because of an a priori commitment to truth despite consequences. Rather, I think Harris is committed to truth precisely because he thinks this will yield better consequences. As such, I don’t think he accepts the premise of your argument.

    On distinguishing factual errors from moral errors, I don’t actually think this is important in the way you do. Consequentialism, in my view, is not about rendering after-the-fact judgements about whether some act was moral or not. That role is instead filled by a legal system (the laws and practices thereof to be chosen on consequentialist grounds). Consequentialism is not about judging but about deciding. Whether we say an act is moral or not after the fact has nothing to do with it.

    But the difference between a loving parent practicing FGM and a sadist is of course important. I think this difference is that the parent has concern for the consequences for their daughter but the sadist does not. As such, each case needs to be approached differently.

    Parents of children who are mutilated may be thinking consequentially (e.g. they may believe their daughters will be unable to marry unless they are mutilated). If so, it may be they are doing it wrong. The right approach here is education to help them do it right and legal sanction to add further deterrence.

    Of course, they may be doing it right. Maybe their daughters will indeed have better lives as a result of FGM, but in that case the problem is with their society as a whole. In this case, it is the society that needs to be educated so as to change the moral calculus for parents choosing whether to mutilate or not. Legal sanctions may again help to change views.

    So the difference is that education is important for FGM, but it is not particularly important for sadism, for the sadist knows the adverse consequences of what he does but simply doesn’t care.

    I would also draw a distinction between accidentally killing someone (e.g. by hitting them with a car) and deliberately killing someone. Though the consequences are the same, a consequentialist should endorse harsher punishments for the latter. Punishing someone for accidentally killing someone is not as important because the guilt is perhaps almost punishment enough — it is a crime that doesn’t especially need to be deterred because nobody wants to commit the crime in the first place. Piling many years in prison on top of that just adds negative utility to a bad situation. Instead, we would do better to concentrate on producing and enforcing deontological laws (e.g. traffic regulations) which mitigate the risk of accidental homicide.

  11. Great response! Just a little reference of potential interest for the section “Distinguishing between moral systems is important”: Some years ago Campbell Brown wrote a brilliant article explaining in detail why it is not possible to “consequentialize” any ethical theory.
    http://www.jstor.org/stable/10.1086/660696
    The article being brilliant is not only my opinion. According to Philosopher’s Annual it was one of the ten best philosophy articles published in 2011 (http://www.philosophersannual.org/).

  12. Hi Dwayne,

    4) He will stop claiming that moral judgments can be made regarding cultural practices using TML theory (or science) at this time. His mapping system based on polar extremes of well-being/misery is clearly incomplete and lacking relevant data.

    I would disagree here and say that the mapping system is an intrinsically flawed concept.

    Throughout the book Harris treats well-being as though it were a single valued thing, thus there can be a “continuum” and “peaks” and “troughs”.

    But all those things would imply there was an objective measure for comparing the points on the landscape.

    In the book he uses a thought experiment of a world with only two people, Adam and Eve. He treats of two possibilities – 1) they fight and both Adam and Eve’s well-being decreases and 2) they co-operate and both Adam’s and Eve’s well-being increases. In my own entry to the TML challenge, I point out that Harris has left out another possibility – 3) Eve exploits Adam and increases her own well being at the expense of Adam’s

    While we might make a simple comparison of 1 and 2 there is no such simple comparison of 2 and 3, although we can intuitively see that 2 is morally better than 3.

    Suppose Eve had to sacrifice a little of her own well being in order to get from 3 to 2, on what basis would she make that decision? The answer can only be her personal values.

    And if the comparison cannot be made for a two person world it certainly cannot be made for this world. Sadly the moral challenges that face us in the real world are full of such situations – one group increasing their own well being at the expense of another group.

    Harris seems to think that there can be no real argument about what is a peak and what is a trough but the world is full of examples of such disagreements. In a highly unequal distribution of wealth, the wealthy seem to consider it a peak, but the poor would clearly consider it a trough.

    So this continuum takes its shape, not from facts about nature, but from our own subjective values.

    Harris claims that we can ground our values in this continuum.

    But in fact we would just be grounding our values in a continuum that was grounded in our values.

    No amount of data is going to change that. The whole “Moral Landscape” concept is intrinsically flawed.

  13. For my own part I think that it is a fundamental mistake to use “well-being” or “utility” as the base variable for a system of morality.

    I think that the base variable must be freedom and that well-being is something which can be derived from that.

    If you think about it, a mentally competent adult is probably best placed to know where his or her well-being lies and if given the freedom to do so will seek it.

    Similarly we can assume that a suffering person would not actually choose to suffer (except in the cases where it is chosen for some benefit that can be derived from it, like studying hard or extreme exercising).

    So I would suggest that if we were looking for an axiom on which to base a moral system (and even Harris agrees that some values must be presupposed), the axiom would be something like: “The good of a society consists of each of its members having the maximum practically possible ability to do as they choose, so long as this does not negatively impact any other person’s ability to do the same”.

  14. I think various consequentialist systems in general suffer from not being able to provide a truly disinterested “view from nowhere,” and, on the time side, from not having a view from infinity, or, per the old Christian phrase, sub specie aeternitatis.

  15. “The good of a society consists of each of its members having the maximum practically possible ability to do as they choose, so long as this does not negatively impact any other person’s ability to do the same”

    By definition it seems to me that your definition is not only about ‘freedom’, but also about ‘constraint’. One person’s idea of freedom is constrained by how its resulting effects impact on others’ idea of freedom. I think this illustrates the problem with identifying a singular moral grounding value. I think looking at the way in which complementary opposites (like freedom and constraint), can coexist in a mutually supportive bigger picture dynamic is then a useful contemplation.

  16. Greetings Robin and thanks for the compliment above. I will answer your last two replies in this one.

    First, I certainly agree with you that the mapping system is intrinsically flawed. And I like your play on the Adam/Eve example. However, I am sort of taking short easy steps here, and my “wish list” was limited to what I think might reasonably be expected from Harris’s reading my arguments against his clarification. It might be a bit of a stretch for him to throw in the towel just based on that. Of course, I am hoping he will move from that to my larger response to view more direct and complete attacks on the system itself. But in the meantime, it would be useful for him to admit that even if it were a plausible system, it is currently nonfunctional.

    Second, I don’t agree with freedom being the base for morality. Don’t get me wrong though, I would want to live in a society that operates on the axiom you described. I really wish most people felt that way. But I don’t think many do. And some that believe they use that axiom still end up drawing lines on freedom, very different lines than I would, based on different concepts of what negatively affects someone else. I am also a bit worried that “freedom” starts crossing the line from morality to law. For example, I think legal systems should be based around that axiom, but I’m not sure moral judgments can/should.

    In future essays, I hope to describe my ideas regarding an appropriate moral system. I will say right now that I think basing moral judgment on any single value (while understandable) will end up falling apart (especially when applied by different individuals) based on a lack of clarity, given that any single value will have to be pretty generic. I guess this is to say that I think a common mistake is trying to find a unifying element across/underlying all moral judgments. The result is getting more general, rather than more precise terms.

    Of course I could very well be wrong!

  17. Hi Gadfly, thanks for your earlier compliments. I of course agree with your assessment. I think it would have been interesting (though doomed to failure) if Harris had decided to stake out some specific vantage point and temporal resolution and argued this is what morality should be based on due to how humans process information.

  18. “I think this illustrates the problem with identifying a singular moral grounding value. I think looking at the way in which complementary opposites (like freedom and constraint), can coexist in a mutually supportive bigger picture dynamic is then a useful contemplation.”

    Yes. I tend to take this view as more productive. And then I’d add more sets of opposing values!

  19. Dwayne Holmes: “The semantics of “science” is important … Harris should concede that TML theory is not science as most people use the term, … Distinguishing between moral systems is important … Distinguishing factual errors from moral errors is important … distinguishing between descriptive and prescriptive ethics is important (and shouldn’t his fans care?) … Empirical truth claims are not (necessarily) contingent on well-being …”

    Well, is ‘Morality’ (in general, not about Harris’ theory) a subject of science? Science is only a human endeavor, and thus, its scope goes all over the map. That is, this is not really an issue.

    The key issues are three:
    One, morality is a non-theory-loaded empirical fact (fact … fact …).
    Two, then what are the rules in this morality fact sphere to judge the goodness vs evil?
    Three, where the heck do these rules come from? That is, what is the morality?

    Obviously, the point three is the only issue. In my previous comments (in this Webzine), I stated two points.

    First, every point (all and each one) in the highest tier manifestation (such as mind) ‘must’ have a connection to the ‘base’ (laws of physics) via a ‘linking-thread’.

    Second, for the ‘mind’, it has at least three attributes.
    1. Intelligence (the linking-thread is ‘computing’. see https://scientiasalon.wordpress.com/2014/07/21/is-quantum-mechanics-relevant-to-the-philosophy-of-mind-and-the-other-way-around/comment-page-1/#comment-5018 )
    2. Consciousness
    3. ‘Well’ of morality

    For consciousness, it is defined as ‘the ability of distinguishing the self from all others’. If Mr. A and Ghost B are indistinguishable, Mr. A can never be conscious of Ghost B. So, the consciousness is hinged on the issue of “Can all the entities of this universe be tagged uniquely?” Obviously, all fermions are uniquely tagged with unique quantum numbers, as demanded by the Pauli exclusion principle. Yet, all bosons are not uniquely tagged (fortunately, they are not ‘matter’ particles). However, can ‘all’ the composite objects be uniquely tagged?

    Anything which is uniquely tagged is called a ‘self’. Yet, there are two types of self: the immutable self (ball-like; a closed system) and the temporal self (tube-like, with flows and changes). Can these two types of self be uniquely tagged? The answers are both yes.

    There is a ‘four-color-theorem’ which guarantees that all ball-like selves can be uniquely tagged.

    Then, there is a ‘seven-color-theorem’ which guarantees that all tube (torus)-like selves can be uniquely tagged.

    Thus, every individual life is uniquely tagged with four codes (A, G, T, C). And, every temporal life is uniquely tagged with seven codes {A, G, T, C, P (past), N (now), F (future)}. Every species is uniquely tagged with {A, G, T, C, M (male), F (female), K (kid)}. Of course, the entire ‘elementary particles’ are also uniquely tagged with seven (7) codes {Red, Yellow, Blue, White, G1, G2, G3}.

    Now, we have found the ‘linking-thread’ for consciousness, and it connects the human consciousness all the way back to the laws of physics. Yet, the most important thing about this connection is that it rules out all that fourth-generation quark nonsense.

    Now, let’s get to the point; what is the ‘linking-thread’ for morality? It is the ‘free-will” which in fact gives rise to the morality sphere. However, freewill has two parts; the free and the will. It is not too difficult to show and to derive the following equation.

    Intelligence + Consciousness = will

    But, I will skip this derivation here. Schopenhauer’s description on ‘will’ is good enough for this discussion. So, the point is now about ‘free’.

    In physics (also in natural language), the ‘free’ means free from any ‘external’ force. Yet, the only free particle in this entire universe can only be found in an ‘infinitely’ deep energy ‘well’ which means a permanent confinement. Thus, the equation for ‘free’ is as below.

    Permanent confinement = Total freedom

    In physics, this total freedom is expressed as ‘asymptotic freedom’. Thus, the free-will is the result that we are permanently confined by the laws of physics. Again, the connection between the top-most tier manifestation (the morality) and the laws of physics (permanent confinement, asymptotic freedom) is now clear. In fact, I have showed this free-will issue many times in this Webzine, see the links below.

    http://rationallyspeaking.blogspot.com/2014/03/this-isnt-free-will-youre-looking-for.html?showComment=1394690233246#c1832738504891602291

    https://scientiasalon.wordpress.com/2014/05/22/my-philosophy-so-far-part-ii/comment-page-1/#comment-2412

    https://scientiasalon.wordpress.com/2014/05/22/my-philosophy-so-far-part-ii/comment-page-1/#comment-2432

    https://scientiasalon.wordpress.com/2014/05/22/my-philosophy-so-far-part-ii/comment-page-1/#comment-2515

    As Sam Harris’ The Moral Landscape (TML) theory does not connect morality to the laws of physics directly, his theory is as good as anyone else’s in my view.

  20. Hi Disagreeable (it’s funny but your avatar does not look disagreeable, yet the one they assigned me by default does)! I will try to get to all of your points…

    1) I don’t believe that virtue ethics (VETs) and deontological theories (DTs) are merely heuristic systems for larger consequentialist theories. I see that some could operate that way, but I don’t believe they must, even if I were to agree that they were subsets of consequentialism. To be a heuristic for a general consequentialist target requires intention to maximize a specific goal for everyone (a global good), but that is simply not the case for all VETs/DTs. The 47 Ronin is a nice example. That is not a cautionary tale of a heuristic gone wrong. The idea is that this is what people should strive for, to better themselves (be honorable, dutiful). There is no concept that by doing this everyone else would get something out of it (global good). This is what gives such systems an aesthetic rather than practical flavor.

    2) I agree with your assessment of what Harris wants to do (questioning which flavor of consequentialism is best) and of how he would try to argue against the star-fairy hypothetical. While I could try to argue against his position on either of these, at this moment (meaning in this reply) I'd rather just accept them for the sake of argument and move to the next issues, which I think deliver the knockdown blows. For consequentialism, that would mean attacking the system he developed (which it seems you agree is flawed); and instead of star fairies I would turn to the example of having to accept FGM if it were shown that it did improve "well-being."

    3) Factual error vs moral error. First I should make clear that this subject was not supposed to be about consequentialism per se. The point I wanted to make is that these two types of error are very different things, and any functional theory will need to be able to distinguish between them. I like the fact that you are discriminating between legal and moral judgments. That said, I am not certain that moral systems never deal with past events and are only about making choices. FGM is a perfect example of where Harris (and the author he was quoting) were clearly labeling prior events as immoral… not just deciding whether parents should have it done on their own children.

    4) “Of course, they may be doing it right. Maybe their daughters will indeed have better lives as a result of FGM, but in that case the problem is with their society as a whole. In this case, it is the society that needs to be educated so as to change the moral calculus for parents choosing whether to mutilate or not. Legal sanctions may again help to change views.”

    I am not sure I understand how you made the move from the (assumed) fact that their daughters would be better off to the practice being a problem with their society. In that case, why would it not instead be our moral calculus that needs to change?

    5) “So the difference is that education is important for FGM, but it is not particularly important for sadism, for the sadist knows the adverse consequences of what he does but simply doesn’t care.”

    I would agree with this, as it shows how important the distinction between the two groups is, and so how mere results do not mean everything on their own. Intent is important to understanding the situation, not just results. But I want to know: do you agree with me that this is a problem with TML? Should FGM be considered identical to, or indistinguishable from, sadistic torture?

  21. Sam Harris is a gleaming example of why it requires more than a bachelor's degree level of study to do serious, professional work in a particular area of inquiry. Just as I wouldn't try to do neuroscience without having done advanced study in the area, Sam Harris really shouldn't try to do philosophy (and yes, Ethics is part of philosophy) without more study of… well… philosophy.

    Had he done this, he wouldn't make the sorts of mistakes for which I routinely take off points on introductory-level exams. To construe Kant's categorical imperative as really being consequentialist is to fundamentally misunderstand it. Of course, Mill makes this claim in the introduction to Utilitarianism, but that is commonly understood as a rhetorical opening shot. Unlike in the case of Harris, I am under no illusion that Mill fundamentally misunderstood Kant.

    Talk of Mill is really beside the point when speaking of Harris, however, because if we are to attribute any sort of Consequentialism to him, it will have to be of the Benthamite variety. You see, Mill is as much a eudemonist as a Utilitarian–the higher/lower pleasure distinction cannot be made empirically, regardless of Mill's hand-waving in the direction of "competent judges"–and as such cannot be employed by Harris, who wants to insist that moral knowledge is ultimately empirical. Bentham *might* make such a claim possible — his argument for Utilitarianism, at the beginning of the Principles of Morals and Legislation, is essentially naturalistic: i.e., that pleasure and the absence of pain are intrinsically good because they constitute two of the most fundamental imperatives that govern our behavior — even though it assumes an equivalence between "fundamental natural imperative" and "intrinsically valuable" that many would want to question. But Bentham won't help Harris much either, as his narrow, monochromatic conceptions of the Good and the Right are implausible in their own right and subject to any number of devastating arguments (there is a reason, after all, why Mill abandoned Benthamism in favor of a more eudemonistic Ethics).
    Of course, all of this flirting with Consequentialism and Benthamism serves only the "Ethics is empirical!" side of Harris's "theory." Given his appeal to 'well-being', which he says must be "defined as deeply and as inclusively as possible," and thus is clearly not limited to the sort of physical comfort definitive of Benthamism, he can only be some sort of eudemonist. The trouble, of course, is that eudemonism–and the virtues that comprise any eudemonist account of Ethics–cannot be grounded or defined empirically.

    All of this is pretty straightforward stuff, IF one has studied enough Ethics to know the difference between Millian and Benthamite Utilitarianism, IF one has studied enough Ethics to understand that no appeal to competent judges, and thus no empirical argument, can ground a purely qualitative distinction between pleasures, IF one has studied enough Ethics to understand that eudemonism is predicated on a teleological conception of the thing to which one is ascribing a particular virtue, which renders the ascription non-empirical, IF one has studied enough Ethics to know that the consequentialist reading of Kant is a misreading, etc. But if one has not–as Harris has not–then none of this is straightforward, and one makes the sorts of elementary mistakes that Harris makes in The Moral Landscape. Of course, if one has already created a popular industry around oneself, one can publish one's mistakes–though it'll have to be with a trade press, as The Moral Landscape was (since it wouldn't make it past the first round of editors at an academic press)–but that doesn't make them any less mistakes.

  22. Hi tien,

    “Science is only a human endeavor, and thus, its scope goes all over the map. That is, this is not really an issue.”

    Well, I'd argue that it does become an issue in certain places, like the US, where people are trying to replace scientific methodology with religious doctrine. This has made me a bit more interested in the accuracy of terms and definitions with regard to science. Harris is admittedly trying to broaden the definition of science, in order to get his moral theory labeled "science", by weakening methodological expectations. The result is that it starts reclassifying other things as science.

    You say its scope goes all over the map. The question is whether you agree that it does not require an ability to generate answers in practice.

    "As Sam Harris' The Moral Landscape (TML) theory does not connect morality directly to the laws of physics, his theory is as good as anyone else's, in my view."

    Do you believe that you could use it to make moral judgments, as it was described?

  23. Hi Dwayne,

    Thanks for the thought-provoking essay. You write:

    "That an abstract principle might be chosen (credibly) over practical consequences can be seen with a simple hypothetical. Imagine that scientific evidence emerges that a false belief in wholly benign star fairies (who help when all natural/scientific measures have been exhausted, and require no other false beliefs or actions against others) leads to greater happiness, health, and longevity. According to traditional consequentialist theories (including Harris') it would be right to maintain that false belief and promote it in others. More important, it would be wrong to promote doubt in others."

    I'm mostly on your side in these matters but I'm concerned that this line of argument is question-begging. You use an example in which there is a truth of the matter apart from consequences, which, regarding ethics, is just what Harris denies. What you need to argue, it seems, is that ethical claims can be true apart from consequences, that is, that ethical claims have a ground of truth other than consequences, in whole or in part.

    Also note that an example of reasonable belief in an abstract principle apart from consequences might not be enough to show that Harris is wrong, as reasonable belief might not entail that what is believed has a ground of truth other than consequences. Thus, this seems to be essentially a matter of truth rather than belief.

  24. Hi Dwayne,

    My avatar is a pun, being the hero from a cartoon series called Avatar (which I quite liked, incidentally).

    "I see that some could operate that way, but I don't believe they must, even if I were to agree that they were subsets of consequentialism."

    1) I actually completely agree. We could, for instance, imagine a deontological system which advocates slaughter and cruelty. FGM may be seen as something like this. In such cases, my position is that these are flawed moral systems. They may produce some benefits, but on the whole it would be better if they were eliminated and replaced with something more sound from a consequentialist perspective. So, my position is that those virtue ethical and deontological systems which should be endorsed are those which can be derived from consequentialism. Without such a derivation, there is no basis for preferring one set of ethics or rules over another.

    But I am not a moral realist. I don’t necessarily feel that there is one true way of looking at morality, and I don’t necessarily claim that my way of looking at it eliminates all paradoxes and problems. Rather, I am arguing for consequentialism as the most robust foundation for morality — the best of a bad lot, essentially.

    2) If you could show that FGM improved well-being, then Harris would have to accept it. But then so would I. I oppose it because I don't believe it improves well-being. Even if it does so for individuals, that is only because it is a harmful meme that has infected a society. Eradicate the meme and you also eradicate both the harms that come from not being mutilated and those that come from being mutilated.

    3) I agree that moral systems can make moral judgments about past events. My position is that this is beside the point of consequentialism. Whether we describe a past action as ‘moral’ or not is in my view ambiguous. I prefer to ask whether the intentions were good, then to ask whether the outcomes were good. I see no need to mush the two questions together into one.

    4) It may be clear from point (2), but I’m assuming FGM provides no direct benefits. What benefits there are, I assume, come from being spared the harms arising from the social effects of not having FGM. If this is the case, we should not institute the practice here but eradicate it everywhere. If FGM actually does provide direct benefits, then I’m for it. But I don’t for a second believe it provides benefits on balance.

    5) Intent matters, but I do not agree that this is a problem for TML. Harris is arguing that we should intend to maximise well-being and that we should use empirical methods and reason in order to meet this goal effectively. Sadists do not intend to maximise well-being, while FGM parents do not use empirical methods or reason. They each behave incorrectly for different reasons, and Harris opposes both, but would use different methods to address each problem.

  25. The thing that frustrates me most about the whole Moral Landscape debate (aside from Harris’ seeming inability to engage with his interlocutors’ actual points) is that, when you scrape away all the science vs philosophy, descriptive vs prescriptive stuff, Harris actually comes close to holding an interesting philosophical position.

    In his response to Born, Harris says:

    "The whole point of The Moral Landscape was to argue for the existence of moral truths—and to insist that they are every bit as real as the truths of physics."

    The idea, which Harris never seems to be able to really elucidate, is that the structure of reality is such that normativity (broadly speaking, things being “better” or “worse”) emerges from it naturally under certain circumstances.

    Again, strip away all the extra crap, and take yourself back to when the simplest organisms began to make copies of themselves. When that happened, a sort of proto-teleology emerged with respect to the organism itself. Certain outcomes became either better or worse for it. Normativity is an ontological entailment of teleology.

    The avoidance of dissolution is basically a tautological axiom of normativity, because any hypothetical organisms that disregarded it would cease to exist over time. Reproducing organisms are ontologically bound to it.

    All of the themes and variations that follow from this as organisms become more complex are still in the natural domain. There is a better and there is a worse, even if it’s nearly impossible to figure out sometimes in practice.

    This is the sense of moral “realism” that I think is at the heart of the whole thing.

    Now obviously to get from proto-teleology to the complex stew of interactions that is human society would require some serious philosophical chops. Better ones than I (or probably Harris) possess. But, at least to me, it’s a pretty danged philosophically intriguing idea.

  26. Oh, I agree. I also agree with your and Robert’s “Adam and Eve” take. Now, scale that up to 7 billion people, or, if you’re Singer, 7 billion people plus countless billions of animals … and the view from nowhere is simply impossible.

    In turn, this makes me riff on Kahneman, and wonder whether we have different "fast" and "slow" moral mental systems.

  27. …even though it assumes an equivalence between “fundamental natural imperative” and “intrinsically valuable” that many would want to question.

    This, in my opinion, is what Harris should have focused on like a laser. It is possible to make the case that what we call a value is, if not equivalent to, at least necessarily entailed by fundamental natural imperatives. Without that, the whole thing will not stand up. But, as you say, many would want to question it, and that horde is in possession of a pretty large pile of ammo. Making the case would require an engagement with a whole history of ideas that Harris seems to want to wave off as irrelevant or not “worth maintaining”.

    Also, as you're pointing out, Harris would have to abandon the simplicity of his consequentialism. Even if you believe that value springs from fundamental natural imperatives, it certainly doesn't stay fundamental or simple for long.

  28. It seems to me that the major point of contention lies with semantics. You say that, "Harris is right that the choice of calling TML theory a science (or not) is a semantic issue, which would not touch the validity or practical utility of his theory." Just as Harris concedes that his argument for a single epistemic sphere can be regarded as scientism, you more or less concede his position when you offer "scientia" as a candidate term instead. I'm not convinced that there are practical problems with delineating the boundaries between science in the professional and lay senses. I mean, public education and court rulings? That's grasping at straws.

    Whether or not TML theory is a science as most people understand the term is an empirical question, and it doesn't seem like you've given it a fair shake as a hypothesis. After all, you did admit to misreading his initial approach of reducing oughts to hypothetical imperatives, without adding a moral imperative component. Maybe we need some X-phi to weigh in, because my suspicion is that many of TML's philosophical critics are moral realists and naturalists who more or less agree with Sam, but nonetheless stand resolute in opposition to TML because of the semantics of terms like "science". If you can replace the word "scientia" with "science" and read TML as an argument for moral realism without too much objection, I'd say that your contentions are 'much ado about nothing', and that you're not looking from the angle the book is obviously directed toward: that of the lay person who has no idea what terms like 'moral realism' and 'epistemology' mean but is nonetheless involved in social/moral issues.

  29. “There’s quite a lot of evidence that suggests a correlation between religion and happiness.”
    Really? But most of the world's happiest countries according to the Happiness Index (Norway, Sweden, etc.) are also among the least religious.

  30. Well, to be fair, I myself only have a BA in philosophy. While I don't think addressing this topic requires a degree at all, I agree with your overall point that it does take a lot of reading. College just happens to be a great place to get the time to read and to build a foundation (of knowledge) from which to work. I'm surprised that most colleges (and especially liberal arts colleges) don't require some basic philosophy. My school actually required religion courses, which ironically cemented my atheism despite their being taught by a minister.

    It baffled me that a guy who has a degree in philosophy seemed so poorly informed about some of the main figures in ethics (not to mention the poor reasoning). OK, maybe he forgot some stuff, but for someone writing a book on the subject I would have expected him to hit the books to at least refresh his memory, if not move forward. Despite being a Hume fan, having read him a few times already, I still went back and reread his works just to be fresh on what I was going to write. I am admittedly weak on Bentham, but if I were going to discuss him I'd definitely put some time in. Though I admit it is always hard dragging myself to read more Kant 😉

    “Of course, if one has already created a popular industry around oneself, one can publish ones mistakes–though it’ll have to be on a trade press, as The Moral Landscape is (since it wouldn’t make it past the first round of editors, on an academic press), but that doesn’t make them any less mistakes.”

    Yeah, I totally agree with you on that. I’m really getting the feeling this is about a publicity machine. He puts out books with really weak reasoning, but panders to a certain demographic. It seems they sell. Of course I have to say after TML and Free Will, I am no longer planning on buying anything else by him. I helped fill his coffers and got too little in exchange.

    Still, even without an official academic review he has friends that should have helped. Why didn’t Dennett hammer him on TML? Eventually he got around to Free Will, but he seemed to let a lot go by for TML. I happen to like Dennett and I can’t believe he didn’t notice these errors.

  31. I think that the relationship between constraints and freedom is implied in what I said. Constraints are necessary on any choice that would negatively impact another’s ability to do as they choose.

    Note, I didn't say that freedom was the grounding value; I said that it is the grounding variable, a distinction which is, I think, important.

    Having the freedom to choose we can have the freedom to decide which are our grounding values, so long as these do not negatively affect others.

    I can’t see that any other variable can do that.

  32. Hi Dwayne,

    Thanks for the reply. I guess my motivation here is a realisation of what a nightmare it would be to live out someone else’s idea of human flourishing.

  33. Hi Paul, glad you liked the essay.

    You are correct that the star fairies hypothetical artificially separates truth from consequence. Certainly Harris would argue that this rarely occurs in the real world, and for the most part I would agree.

    However, he (as someone else has mentioned) has stated that certain truths could be denied because of a known negative consequence, which means that consequences can trump truths. I took this hypothetical as one step in a chain of arguments, trying to show that some people could value truth over consequence. To take his own case of not telling his child the fact (according to his mind, not mine) that there is no free will: he really is choosing ignorance over truth.

    He may believe that in this case (or rather at this time in her life) truth will have negative consequences, but that is true for the star fairies example as well. I think in either case, it would not necessarily be immoral to choose the other way. Or at least many people would choose truth, regardless of the negative outcome. I think Harris would still have to explain why they must be morally wrong.

    You are also correct that a better, or stronger, argument is that ethical claims can be true apart from consequences. That is what I hoped to get at with my examples of the 47 Ronin and of refusing cannibalism (or killing a child or raping a woman) even if it means extinction. I realize that those fit the form of choosing X in spite of bad consequences, and in essence were pitched that way, but I believe they also stand as examples of finding moral truths outside of consequence.

    Certainly the Bushido code found within the 47 Ronin sets out its expectations separate from consequence (beyond an aesthetic component for the self). I think it arguably stands on its own.

    Of course, the FGM example carries some slight separation of moral claim from consequence, though it is limited to consequences defined as practical outcomes. It does have moral claims linked to intended outcomes.

    But I take your point and will try to craft (or recraft) something more clearcut.

    I hope that goes some way to addressing your concerns, and I thank you for your input!

  34. With physicalism*, there is nothing outside the physical universe (including physical humanity, of course) that determines what is moral and immoral. This arguably leads to better decisions for both individuals and governments than entertaining nonphysical entities does. Perhaps Sam Harris could be satisfied with just that approach.

    * http://ukcatalogue.oup.com/product/9780199682829.do
    Human Interests: or Ethics for Physicalists

  35. You have an interesting view on ethics, and I mean that in a positive way.

    On FGM, if pain and risks to life were removed from the process would you still feel it had to be eradicated? Do you feel the same for male genital mutilation?

    I agree that there can't realistically be inherent benefits to the practice. But I have to say that as long as pain and risk were eliminated, I would find it hard to get too worked up about it. It would still be unfair, but a lot of permanent, unfair things happen to kids in the name of building societies, especially in creating unique identities. I don't see why physical alterations per se are necessarily out of bounds morally (meaning upsetting to the point of wanting the practice stopped).

    “Whether we describe a past action as ‘moral’ or not is in my view ambiguous. I prefer to ask whether the intentions were good, then to ask whether the outcomes were good. I see no need to mush the two questions together into one.”

    I think that is a nice distinction, and it then starts a sort of moral feedback mechanism to improve decisions (or make them more accurate for a set goal) in the future.

    “Sadists do not intend to maximise well-being, while FGM parents do not use empirical methods or reason. They each behave incorrectly for different reasons, and Harris opposes both, but would use different methods to address each problem.”

    That is an interesting counterargument, and took me a bit to think over.

    First, I am not sure I agree that FGM practitioners do not use empirical methods or reason at all. That they made a mistake is not to say they did not try to formulate a concept about the world and reason about how they should act toward a goal. More importantly, FGM could continue as a purely cultural institution, separate from the otherwise flawed beliefs for doing it. What if the cultures, after being made aware that they don't effect anything beyond creating a unique physical identity, decide that that is good enough? Of course, I am assuming Western medical technology is available to remove pain and risk. If creating a distinct, physical identity is desired by them, it does not seem to defy empirical methods or reason. And it is still different in intent from what a sadist does.

    Second, I think you are giving Harris too much credit. Or rather, you are granting his arguments in TML too much credit. The specific example he gives of FGM is not clearly delineated (between the sadist and the FGM practitioners) as you just did, with a discussion of the different reasons why each is wrong or of how to address each. As some sort of reconstruction of what he would want to say, I think your position is arguable. Maybe that's what he would like to have said, but that is not what I found in his argument.

    Perhaps I am wrong?

  36. Hi Dwayne,

    "Why didn't Dennett hammer him on TML? Eventually he got around to Free Will, but he seemed to let a lot go by for TML."

    Has Dennett said anything at all about TML? We know (from the acknowledgements) that Dennett read a draft of the book, but do we know how critical he was? After all, Harris has a long track record of not agreeing with people who disagree with him on this (and, as "Free Will" shows, a record of not agreeing with DD when DD doesn't agree with SH). Indeed, I'm not aware of anyone who does actually agree with TML.

    (By the way, are you going to write a SciSal piece on *your* conception of morals?)

  37. Perhaps Harris could avoid the need for future clarifications by changing his subtitle in future editions to: "How science (broadly defined) might, in principle, determine values – except for the ones it must presuppose".

  38. Not very catchy; it wouldn't sell. It would be as if Krauss had entitled his book "Some ideas from physics about the Universe, but not exactly how it came from nothing"…

  39. As far as I know Dennett has said nothing publicly about TML. And it is possible that he was quite critical in any personal commentary to Harris, which Harris then ignored.

    Clearly they disagreed on Free Will; even in that book Harris acknowledged the fact. Eventually Dennett wrote an article that seemed to be venting a lot of heat after having stayed silent for so long. I am surprised the same thing has not occurred for TML. Or maybe DD saw TML going nowhere (unlike FW) and so didn't get as hot and bothered?

    As far as people that openly agree with TML, it seems like Dawkins, Krauss, and to some lesser extent Coyne (though he does share my concern regarding calling it science), Steve Pinker, and Michael Shermer. Not that I put a lot of stock in their ideas regarding this subject, but they have some clout, which they do wield to some effect in the public sphere.

    Regarding my own conception of ethics… I would love to publish some pieces here. I’m going to start writing another essay that sets a strong foundation for my own fledgling system, uhmmm when I find a bit more time. I have a lot on my plate these days!

    But maybe soon. And then it’s my turn to catch hell 🙂

  40. Ahhh, yes well it is intriguing. But here is the rub: Harris states that our future depends on a unified morality. Where in any evolved species do we see a trend toward permanent unification of behaviors? So I think his concept is hampered on two levels. First there is the complexity of the “stew” itself, and then there is the fact that it is ever churning, with new ingredients being added all the time.

  41. Hi Dwayne,

    "As far as people that openly agree with TML, it seems like Dawkins, Krauss, and to some lesser extent Coyne (though he does share my concern regarding calling it science), Steve Pinker, and Michael Shermer."

    We should distinguish two different ways of agreeing with Harris. Many would agree that human morality is an entirely natural phenomenon that is therefore entirely within the domain of science, and thus that science can tell us a lot descriptively about human morals. I'm not sure how controversial that claim is, but certainly many disagree with it, and the above list are mostly agreeing with Harris on that claim.

    Then Harris goes further, adding an axiom about morals being about “well-being” and proceeding from there to a prescriptive scheme. That is what nearly everyone (as far as I’m aware) disagrees with, and I’m not aware that the above list have stated their agreement with that claim (though I’m open to correction). As an example, here is Coyne explicitly rejecting that view.

    My own opinion would be to agree with Harris on the first paragraph above but to reject the second.

  42. Hi Jon, well I think you are sort of right, and sort of wrong.

    We should start by realizing that while there is a semantic issue, it is his problem, not mine. Harris came out of the blue claiming to have a moral system that is based in science. It is only natural for people to look at it and see if it fits the normal criteria for scientific methodology. It doesn't, which is why he then began arguing that everyone else should change their conception of science to suit his personal definition.

    All I am arguing is that I don't see why it is necessary for me, or anyone else, to change generally accepted definitions of science simply so that Harris can get his theory considered part of science. Where is the benefit in this? Why should he not be happy calling it philosophy or scientia?

    And on the flip side there could be practical concerns to making such a move (again, just to suit his personal desire to call his theory science). I am uncertain whether you live in the United States, but there is a very real threat to science education from the intelligent design movement (aka new creationism). Despite losing a major court case, they continue to try to influence education so as to discount evolutionary theory and install forms of creationism. This does hinge on what is considered science.

    So I am not grasping at straws to exclude his theory from science. I am simply stating facts. And the argument is that we should not have to change, just to fit his whims. If anything it seems like he is grasping at straws to defend an inaccurate claim made in his book.

    I believe I did give his theory a fair shake, and it is not my problem that it fell to pieces. That said, I guess I did not make my point clearly enough in the essay above. I was trying to suggest that Harris did not in fact argue for a purely descriptive moral system in TML, and my statement that I had not understood it properly was muted sarcasm. I thought the evidence I provided afterward made the case that in TML he was talking like a prescriptive moral theorist.

    I am interested in knowing whether you are satisfied with a purely descriptive moral system. And if you are, how do you square this with a stance of moral realism (using the normal definition of moral realism, which I would agree is a misnomer)? Those don't seem compatible to me at all, which is why his clarification seemed to do nothing but muddy the waters.

    But I do agree that whether TML is science or not as most people understand it, is an empirical question. I think that has already been answered, by Harris admitting that his claim (that it is science) has become an albatross around his neck. Even his close friends in the sciences have issues with that claim.

    I take seriously your point that we need to keep lay people in mind. A moral theory that is inaccessible by the public is worthless. That is why I am concerned about Harris’s misuse of common language to make it appear he is saying one thing (to the public), when he is in fact saying something completely different. And then appealing for us to change our definitions to fit him.

    Do you see a reason we should change normal standards regarding science in this case?

  43. Hi Coel, there seems to be a reply limit so I am replying to myself, but this is to you (hope you see it).

    I was trying to distinguish between the two camps you described, using the term "lesser extent" to divide them. You are right about Coyne having practical issues with TML. I am actually in a later thread on the same topic where he repeated his concerns about measuring well-being. And I know that Shermer has his own moral theories (which I also intend to dismantle), so he can't be completely on board with TML. While Pinker has not to my knowledge supported the second (prescriptive) project, he did write a blurb for the book…

    “Harris makes a powerful case for a morality that is based on human flourishing and thoroughly enmeshed with science and rationality. It is a tremendously appealing vision, and one that no thinking person can afford to ignore.”

    That sounds slightly like he drank the kool-aid, but I am willing to give him the benefit of the doubt based on his relative silence on TML since.

    Dawkins and Krauss, on top of writing blurbs for the book, continued to drop references to it as if it were functional. At least I remember hearing them drop references, but it was so many YouTube clips ago that I can't say for sure whether I am remembering correctly.

    All I can say is that you definitely made me feel better. Maybe it really is going nowhere.

  44. Hi Dwayne,

    "Maybe it really is going nowhere."

    I think so. I read many of the "new atheist" blogs and there is little real support for TML — and over 400 people are willing to try to explain why he is wrong, while only he himself defends it. Harris achieved note with two well-written books catching a post-9/11 zeitgeist (EoF and LtaCN), but seems to be mostly living off those in reputation terms.

    "Harris states that our future depends on a unified morality."

    I’m not sure where Harris says this, but I think it’s incorrect. If what I’m saying is true, what we’d probably see in the world is a variety of approaches that interoperate at various levels. The “better” for your family is futile if your larger community is harmed to a significant degree. The “better” for the aggregate of humanity can conflict with your own well-being (or ability to reproduce or whatever).

    This is where empirical investigation, I think, does have a role. Science can look at how various mixes of approaches affect particular outcomes at the level of individuals, families, communities, species, etc.

    Where empiricism is involved, it may be enough to simply view moral propositions (as some philosophers have) as being always conditional. We can do empirical work when we formulate moral propositions in this pragmatic way — *If* you want Y outcome, *then* you should do X. It’s prescriptive but conditioned upon a particular set of values. The values themselves (the “meta-moral theory”) are a philosophical discussion. There are philosophers/scientists mixing the two, like Joshua Greene.

  46. Hi Dwayne,

    If pain and risk were removed from FGM, you’d still be left with the main purpose of it, which is to remove sexual pleasure from women. And there’s still the social harm for those who don’t want to comply with it for whatever reason. I don’t think either problem is as bad for male genital mutilation, but I’m no fan of male circumcision either.

    Removing pain and risk would reduce the harm of FGM, but I would be against it as long as the amount of harm it caused were non-zero (on balance). What might change a little is my strength of feeling (though I still think it is monstrous to remove or lessen the ability to experience sexual pleasure).

    "First, I am not sure I agree that FGM practitioners do not use empirical methods or reason at all."

    If they do, they're not very good at it. Or else I am not, because what they are doing seems to me to be unnecessary, misogynistic, and cruel.

    "What if the cultures, after being made aware that they don't effect anything beyond creating a unique physical identity, decide that that is good enough?"

    Again, you're ignoring the sexual aspect. It's monstrous, even without pain. Having a physical identity is simply not worth it.

    "Second, I think you are giving Harris too much credit."

    Perhaps. Nevertheless some version of his argument seems to me to make sense, as long as we drop the whole silly moral realism/objective morality angle. It’s not so important to me what he said as what kernels of truth may be there to be salvaged.

  47. Perhaps, rather than discuss this in terms of sexual mutilation, we could change the act to, say, a lobotomy. Would your viewpoints remain the same? How would one assess a lobotomy in terms of good intentions and good outcomes?
