Scientism: ‘Yippee’ or ‘Boo-sucks’? — Part I

[Editor’s Note: This essay is part of Scientia Salon’s special “scientism week” and could profitably be read alongside other entries on the same topic on this site, such as this one by John Shook and this one by yours truly. My take on the issue is very different from that of the authors who contributed to this special series, and indeed close to that of Putnam and Popper — as should be clear from a recent presentation I did at a workshop on scientism I organized. Also, contra the author of the third essay in this series (but, interestingly, not the author of the first two!) I think the notion that mathematics is a part of science is fundamentally indefensible. Then again, part of the point of the SciSal project is to offer a forum for a variety of thoughtful perspectives, not just to serve as an echo chamber for my own opinions…]

by Robert Nola

Charges of ‘scientism’ and ‘scientistic’ abound in writings about science, its character and scope. Most agree that these terms have a pejorative connotation, though others are more than willing to accept the pejorative epithet. What often remains elusive is the content of the claims alleged to be scientistic. Below is one attempt to investigate some broad theses towards which some adopt an anti-attitude, condemning them as ‘scientistic,’ while others adopt a pro-attitude and are happy to be called ‘scientistic.’ In general, scientism concerns whether or not there are limits to a scientific approach to the world. Some claim that there are no such bounds; others claim that there are limits to the application of science and that its over-extension to some domains is improper or illegitimate; hence the charge of scientism when science claims total dominion.

Some of the contested theses can be broadly characterized as follows. The first is an epistemic thesis which claims that there is just one way of knowing about the world and that is found in science. Those who oppose this thesis claim that there are ways of knowing other than those found in science. Call the condemned thesis ‘epistemic scientism.’ Science simply overextends itself as far as ways of knowing are concerned.

Secondly, there is a methodological thesis, which claims that the methods employed in the natural sciences can be extended to all the other sciences. Those who oppose this thesis claim that there are methods which are distinctive of the human and social sciences. Call the condemned thesis ‘methodological scientism.’ One form the controversy over methodological scientism takes is the debate about the extent to which laws and law-like explanations of the natural sciences can also be found in history and the human and social sciences; those who reject this kind of methodological scientism often advocate explanations which are teleological or purposive in character. This harks back to the old Naturwissenschaften-Geisteswissenschaften debate [1]. A more recent version of this can be found in the opposition to Darwin’s theory of evolution, a theory which eschews teleology and purposiveness [2]. Here a different version of methodological scientism will be discussed as it emerges in Hayek and Popper.

Thirdly, there is a metaphysical thesis that everything in the long run is material (especially the mental). This claim can take a number of forms such as: all the sciences can be reduced to physics; or alternatively all the items in science can be shown to supervene upon the physical. Opponents of this view would want to claim that the non-physical sciences are emergent, or have some form of independence from the physical sciences. Call the condemned thesis ‘materialistic scientism’ (or ‘physicalistic scientism’ if the older talk of materialism is no longer apposite). Opponents would also reject the attempt to find a “theory of everything” as part of the hubris of materialistic scientism. They might also wish to have a place in their ontology for souls or the supernatural; the invasion of science into these domains, and the domain of religion in particular, is simply more of the hubris of scientism. In what follows, these three kinds of scientism, and others, are explored further.

Contestations of Scientism

For some the use of the word ‘scientistic’ is deeply condemnatory of certain intellectual tendencies. For others it is an epithet worn as a badge of honor. In a highly condemnatory mood Putnam tells us:

… I regard science as an important part of man’s knowledge of reality; but there is a tradition with which I would not wish to be identified, which would say that scientific knowledge is all of man’s knowledge. I do not believe that ethical statements are expressions of scientific knowledge; but neither do I agree that they are not knowledge at all. The idea that the concepts of truth, falsity, explanation, and even understanding are all concepts which belong exclusively to science seems to me to be a perversion. [3]

It is hard to know what to make of the last sentence, as most are able to resist the perversion of thinking that the listed concepts belong exclusively to (some? all?) empirical science. Though much philosophical analysis has gone into each of them, they remain at best part of the meta-theory, or philosophy, of science — and of other endeavors as well. However, a substantial epistemic claim is said to be scientistic: ‘scientific knowledge is all of man’s (sic) knowledge.’ In addition, we find an expression of what we might call ‘ethical scientism,’ the illegitimate extension of the domain of science to the realm of ethics.

In contrast Rosenberg happily accepts the pejorative label of scientism which, he says, has two related meanings:

According to one of these meanings, scientism names the improper or mistaken application of scientific methods or findings outside their appropriate domain, especially to questions treated by the humanities. The second meaning is more common: Scientism is the exaggerated confidence in the methods of science as the most (or the only) reliable tools of inquiry …. [4]

Rosenberg’s first account of scientism, once its pejorative connotations are dropped, is closely related to the thesis of Descriptive Scientism (DS) discussed below. His second account is akin to Putnam’s epistemic scientism (I’ll get to this a bit later) — except Rosenberg endorses what Putnam condemns.

After spelling out their own stance concerning empiricism and materialism in opposition to much current metaphysics, Ladyman, Ross and Spurrett say: “Let us call the synthesized empiricist and materialist — and resolutely secularist — stance, the scientistic stance. (We choose this word in full awareness that it is usually offered as a term of abuse.)” [5] To further raise the hackles of those who would abuse them, they call the first chapter of the book ‘In Defence of Scientism’! True, those who adopt the scientistic stance have no truck with any kind of supernaturalism. But it remains an open matter whether any descriptive account of scientism ought to be empiricist in any sense. Perhaps a better case can be made for the scientistic also adopting materialism, or physicalism.

Ladyman et al. endorse a special kind of naturalized metaphysics. At a good guess (and setting the niceties of interpretation aside) it has some affinity with the doctrine of metaphysical materialism that Putnam excoriates:

… metaphysical materialism has replaced positivism and pragmatism as the dominant contemporary form of scientism. Since scientism is, in my opinion, one of the most dangerous contemporary intellectual tendencies, a critique of its most influential contemporary form is a duty for a philosopher who views his enterprise as more than a purely technical discipline. [6]

Strong stuff! Here the term ‘scientism’ is not merely pejorative (after all, it is materialistic scientism); it is also given some descriptive content by linking it to a version of materialism or physicalism. We may take it that the metaphysical materialism which Putnam condemns and the naturalized metaphysics which Ladyman et al. praise are, as doctrines, both in the same ball-park as what has been characterized here as “materialistic scientism.”

As a final example, consider Margolis’s “unravelling of scientism” when he takes on board two aspects of scientism already distinguished:

“Scientism” signifies the assured possession of a privileged methodology or mode of perception, or even the assured validity of a metaphysics deemed ineluctable or overwhelmingly favored by the self-appointed champions of “Science.” [7]

Here we can detect not only epistemic scientism but also a materialistic scientism which adopts a metaphysics based exclusively in science. But as with many doctrines that are said to be scientistic, considerable effort needs to be devoted to discovering just what positions are being attacked.

A helpful approach to these issues can be found in the on-line Oxford English Dictionary. It gives two broad uses of the term ‘scientism,’ one which is descriptive and the other negative and pejorative.

  (A) A mode of thought which considers things from a scientific viewpoint.
  (B) Chiefly depreciative. (1) (a) The belief that only knowledge obtained from scientific research is valid, and (b) that notions or beliefs deriving from other sources, such as religion, should be discounted; (2) extreme or excessive faith in science or scientists. Also: (3) the view that the methodology used in the natural and physical sciences can be applied to other disciplines, such as philosophy and the social sciences. [lettering and numbering added]

Both (A) and (B) need unpacking as they harbor a number of different claims which need to be distinguished. For example, B1(a) is a thesis of epistemic scientism, while B1(b) is at best an instance of what scientistic advocates of B1(a) would rule out. Finally, a picky point about B1(a): being valid is confused with being true. One of the features of the claim ‘a person knows that p’ is that p is true (and not valid, even when p is a logical truth).

Scientism as a Descriptive Claim

Sometimes ‘scientism’ is linked to the attributes of a particular kind of person, as Hayek points out:

Murray’s New English Dictionary knows both “scientism” and “scientistic,” the former as the “habit and mode of expression of a man of science,” the latter as “characteristic of, or having the attributes of, a scientist (used depreciatively)”. [8]

However, not all uses of the term ‘scientistic’ need be pejorative; having the characteristics or attributes of a scientist can be merely neutrally descriptive in some contexts. What needs to be added is that these characteristics (in their context) are misplaced or inappropriate (as in B2 of the definition). Here descriptive scientism will be taken to be a thesis independent of the characteristics of persons, as in the Dictionary definition (A) above.

(A) needs clarification. Are scientific considerations to be applied to all or only some things? And what is meant by ‘things’? Talk of things is too narrow. Let us say that each science is about some domain which includes not only things but also (observable or unobservable) happenings, facts, kinds, the properties and relations of things, and the like. We can also say that a scientific viewpoint is adopted towards such a domain. One example of such a domain is astronomy; this started with the Ancient Greeks who considered mainly the motion of the heavenly bodies and geometrical models of the motion, but it is now a vastly expanded domain with quite different models and theories. Other domains include human and animal physiology, the evolution of species, organic chemistry, and the like.

Not just anything ought to count as a domain for science, so restrictions need to be imposed. Is mathematics to be counted as a domain? If we understand the sciences to be an empirical investigation into our contingent world of items to be found in the space-time framework, then mathematics is not a domain for science so understood. Mathematics is, on the whole, a deductive enterprise (allowance needs to be made for probabilistic inference and for deductions using computers); it is not an empirical investigation into contingent mathematical items to be found in space-time (though for some extreme nominalists and empiricists about mathematics this would be open to dispute). Logic, too, is not to be counted as a domain of science, for the same reasons. Of course, both mathematics and logic play an important role in science, and without either the sciences would be impoverished. But from this it does not follow that mathematics and logic are domains of science. Other contentious domains will be mentioned shortly.

Let us introduce the idea of the scientization of some domain. This is the process in which a domain that had not, up to some time, been considered from a scientific point of view comes to be so considered after that time. What the process of scientization involves can be left open. But as a scientific stance is taken towards a domain, some of the features characteristic of a science will emerge, such as: the collection and/or classification of data; the construction of models or theories; the proposal of rival hypotheses and their testing; the carrying out of experiments; the application of mathematics; etc. Thus, to take an historical example, the domain of electrical happenings, which includes lightning, had not been scientized at the beginning of the 18th century; but shortly after that time the domain became progressively scientized (and this is still an ongoing project as we learn more about electrical happenings).

Now we can state a quite general version of (A) from the dictionary definition which we can call ‘Descriptive Scientism’ (DS):

(DS) For every domain of facts or happenings D there is some theory or scientific stance which yields a science of D (perhaps after a period of progressive scientization).
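To make the quantifier structure explicit (the symbolization is mine, with S simply abbreviating ‘is scientized by … at …’):

    (DS*)  ∀D ∃T ∃t S(D, T, t)

The existential quantifiers fall inside the universal one, so (DS) does not say that a single theory, or a single time, fits all domains; different domains may be scientized by different theories at different times.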

Now (DS) is quite general, unlike the weaker (A). (A person could endorse (A) but not (DS).) (DS) holds for those domains which have been scientized; and it also holds for those domains for which a scientific stance does not now exist and remains to be developed in the future (i.e., domains where the process of scientization has hardly commenced). The generality of (DS) entails that for any as yet un-scientized domain there is a time at which it will ultimately become scientized (as was the case for the domain of electricity at the beginning of the 18th century). (DS) expresses an as yet uncompleted project concerning the application of science to all domains.

There is much inductive evidence in support of (DS). Since the beginning of the scientific investigation of domains within the natural, life and social worlds, (DS) has many clear instances in its favor. (DS) should also allow for progressive scientization: a science might undergo a “revolutionary” change as it develops theories about its domain (for example the change from Newtonian physics to relativistic and quantum physics). And with progress it should allow for the subsumption of one domain within another (e.g., the domain of classical thermodynamics has been subsumed within the domain of mechanics).

(DS) understood in this way has strong affinities with Quine’s account of naturalism. For Quine the whole area of epistemology is a domain which is to be scientized, and so falls within the scope of (DS): “Epistemology, or something like it, simply falls into place as a chapter of psychology and hence of natural science.” [9] Quine adds other domains such as linguistics. In fact, the whole human subject is taken to be a domain which is to undergo scientization, thereby creating a further instance of (DS): “We are studying how the human subject of our study posits bodies and projects his physics [in fact any science] from his data, and we appreciate that our position in the world is just like his” [10]. Here the study goes reflexive since it will also involve how we do science; that is, how the subject goes from its input of stimulation of its perceptual apparatus to its output which is “a description of the three-dimensional external world and its history” [11].

Those with a pro-attitude to (DS) (like Quine), with its underlying programmatic character, can be said to be scientistic (in a descriptive sense). But those with a negative attitude to (DS) are condemnatory in calling it ‘scientism.’ For them it is not merely that the program of (DS) cannot be completed; there is something illegitimate in supposing that it could be. This pejorative sense is captured in B2 of the Dictionary definition: there is excessive faith in the total dominion of science.

Setting aside pro- and anti-attitudes to what is intended to be a descriptive thesis, are there any clear counterexamples which would falsify (DS)? The task is to find at least one (suitably restricted) domain D for which not merely is there no scientific theory now, but there is no such theory even in the longest of long runs, and thus no scientization is possible. Here contentious disputes can begin. One disputed domain not discussed here is the relation of science (which is meant to be descriptive) to the normative, as found in the norms of reasoning or the norms of ethics. If one accepts Hume’s claims about the invalidity of is-ought reasoning, then there seems to be an insurmountable hurdle for (DS) to overcome in the domain of ethical norms. But this does not mean that science cannot bear in other ways on normative matters. Another disputed domain is that of religion. Some have alleged that scientific inroads have been made in the following ways: textual and historical investigations into holy books like the Bible, and the ways in which they fail to comport with science [12], which support the view that such books are of fallible human origin; advances in evolutionary psychology and the mechanisms it proposes as causes of religious belief; investigations into the alleged efficacy of prayer for those who have undergone serious operations [13]; and so on. This will not be discussed here. However, in the final section of this essay I will briefly mention one contentious domain, viz., how (DS) has a positive bearing on aspects of literature and literary criticism. After I discuss the idea of alternative ways of knowing, a further important qualification will be made about how (DS) is to be understood.

_____

Robert Nola is a professor of philosophy at the University of Auckland in New Zealand. His interests include philosophy of science, metaphysics, epistemology, selected areas in social and historical studies of science, atheism and the relationship between science and religion. With David Braddon-Mitchell he co-edited the volume Conceptual Analysis and Philosophical Naturalism (2008).

[1] The Naturwissenschaften-Geisteswissenschaften debate in social science and history, and the role of different accounts of explanation and causality as opposed to teleology, will not be discussed here. Though he does not use the word ‘scientistic,’ see G. H. von Wright (1971) Explanation and Understanding (Ithaca, Cornell University Press).

[2] See Thomas Nagel (2012) Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False (Oxford, Oxford University Press).

[3] Hilary Putnam (1979) Mathematics, Matter and Method: Philosophical Papers, Volume 1 (Cambridge, Cambridge University Press, second edition): xiii.

[4] Alexander Rosenberg (2011) The Atheist’s Guide to Reality: Enjoying Life Without Illusions (New York, W. W. Norton): 6.

[5] James Ladyman, Don Ross, David Spurrett and John Collier (2007) Every Thing Must Go: Metaphysics Naturalized (Oxford, Oxford University Press): 64.

[6] Hilary Putnam (1983) “Why there isn’t a ready-made world,” Chapter 12 of Realism and Reason: Philosophical Papers Volume 3 (Cambridge, Cambridge University Press): 211.

[7] Joseph Margolis (2003) The Unravelling of Scientism: American Philosophy at the End of the Twentieth Century (Ithaca, Cornell University Press): 6.

[8] Friedrich A. Hayek (1942) “Scientism and the Study of Society, Part I,” Economica, New Series, Vol. 9, No. 35 (Aug., 1942): 267-291, at p. 269, fn 1. Reprinted in F. A. Hayek (1952) The Counter-Revolution of Science (Glencoe, IL, Free Press): 207, fn 6.

[9] W. V. Quine (1969) “Epistemology Naturalized,” in Ontological Relativity and Other Essays (New York, Columbia University Press): 82.

[10] Ibid: 83.

[11] Loc. cit.

[12] There are many books on the conflict between science and religion. There is an amusing take on science and the Bible in Steve Jones (2013) The Serpent’s Promise: The Bible Retold as Science (London, Abacus).

[13] See the interesting (negative) results of a clinical trial into the efficacy of prayer in: Herbert Benson, J. A. Dusek, J. B. Sherwood, P. Lam, C. F. Bethea, W. Carpenter, S. Levitsky, P. C. Hill, D. W. Clem, M. K. Jain (2006) “Study of the Therapeutic Effects of Intercessory Prayer (STEP) in Cardiac Bypass Patients: A Multicenter Randomized Trial of Uncertainty and Certainty of Receiving Intercessory Prayer,” American Heart Journal 151(4): 934-942.

210 thoughts on “Scientism: ‘Yippee’ or ‘Boo-sucks’? — Part I”

  1. Hi Asher,

    By that definition, every physical interaction in the universe has meaning, because they all follow this pattern. You’re defining meaning as simple causality.

    My definition is restricted to “information processing systems,” so it doesn’t apply to all physical interactions. By “information processing systems” I meant things like mammal brains and devices such as computers. The point about these is that they are goal-oriented, and I’d restrict “meaning” to such contexts.


  2. Morning Everyone,

    Back to the chess-playing computer. Let us, as a thought experiment, equip our chess-playing computer with a video camera and image-processing software. This enables it to “know about” the external chess board that we set in front of it. We’ll also give it a robotic arm, which enables it to move the chess pieces. Now let’s examine a few concepts about this robotic chess player.

    (Note that none of this is about linguistics, it’s simply about computation and function.)

    “Aboutness”: the robot is clearly “about” playing chess and “about” winning a game on the chess board in front of it.

    Reference: The computer has internal representations of things like “bishop on square e4”, and this obviously refers to the bishop on square e4 on the external board. The linkage between the two is through the video camera and image-processing software, feeding information to the information-processing unit.

    Quoting Aravis:

    What’s missing is that the computer is not aware of the world-symbol relation. It does not *know* that “squiggle” refers to dogs and “squoggle” refers to cats. The operations it performs upon symbols are purely in virtue of their shapes — i.e. their syntactic properties.

    It seems to me that by the addition of the video-camera sensory device and the robotic arm, we have now fixed that. The computer does “know” that the external bishop on e4 that it “sees” through its camera is related to its internal representation of the bishop on e4, and to the thing that its robotic arm then touches and moves. There is a clear link between its moving the bishop from e4 to d5, its seeing the bishop in a different place, and the internal information-processing of the position on the board.

    Awareness: Clearly the computer has awareness: it knows where the pieces on the board are. It is also aware of things like “if I were to move the knight to c3 then I’d fork the opponent’s king and rook and gain material.” Clearly it needs to be aware of that sort of thing, otherwise it could not play chess to any degree. In fact, when I play a chess computer, it is often a lot more “aware” of why a particular move is a bad idea than I am.

    Agency: Clearly the computer is an agent, since it can move a piece on the external board, as a result of its internal deliberations.

    Deliberation/reflection: What the program is doing is a whole host of thoughts along the lines of “if I do this then he can do that, and then I could do …”, so it is clearly deliberating and reflecting.

    Meaning: Clearly the computer knows about “meaning” in that it can evaluate positions on the chess board and choose moves towards some goal.

    Purpose/goal: Winning the chess game.

    Intentionality: Clearly the internal manipulation of symbols in the computer is a representation of and “about” the game on the external board.
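
    (Putting the above together, here is a minimal sketch of the robot’s loop in Python. Every function is a hypothetical stub standing in for a component; none of this is a real library:

        # Toy sketch of the perceive-deliberate-act loop described above.
        # All four component functions are hypothetical stand-ins.

        def capture_frame():
            """Read one image from the video camera (stub)."""
            raise NotImplementedError

        def detect_pieces(frame):
            """Image processing: map a frame to an internal
            representation, e.g. {"e4": "white_bishop", ...}."""
            raise NotImplementedError

        def choose_move(board):
            """Deliberation: search future positions and pick the
            move that best serves the goal of winning."""
            raise NotImplementedError

        def move_piece(move):
            """Send the chosen move to the robotic arm (stub)."""
            raise NotImplementedError

        def play():
            while True:
                frame = capture_frame()       # sensing the external board
                board = detect_pieces(frame)  # reference: internal state tracks the world
                move = choose_move(board)     # deliberation toward a goal
                move_piece(move)              # agency: acting back on the world

    Each of the attributes listed above is, on my view, just one of these data flows or loops.)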

    Does that cover everything? Is there, then, any reason to deny to the chess robot attributes such as “intentionality” that one would readily apply to a human?

    Possible counters:

    — But the robot is simply manipulating symbols/electrons.

    Reply: so are we! (What else is there, unless you’re a dualist?)

    — But the robot doesn’t really “understand”.

    Reply: on what basis do you assert that we humans do understand in a way that the robot does not? Certainly, the chess robot often “understands” the weaknesses of a position better than I do, since it can usually beat me.

    — But all of that was programmed into it by a human.

    Reply: So what? All of our attributes were programmed into us by evolution. Regardless of how we or the robot came to be in our state, we are both now functioning in all the above ways.

    — But the agency in the robot comes from the agency in the human, whereas Darwinian evolution is not a purposive agent in that sense.

    Reply: Agreed, but so what? The whole point of Darwin’s Dangerous Idea is that it explains the origin of agency, awareness, deliberation, et cetera, out of things that lacked such properties. If you think that agency can only come from agency, then you are thinking like a creationist.

    — But my intuition tells me that we humans are more than that, and are not just mechanistic devices of the chess-robot sort.

    Reply: well yes, that’s because human intuition is dualistic. Which means that human intuition is wrong and misleading on such matters. Go on, admit it, there is no good *reason* to deny the robot any of the attributes that you claim for humans — other than in matters of degree — there is only dualistic intuition for doing so.

    In fact, the above description could just as well *be* of a human! We, after all, have brains capable of playing chess and image-processing, and we have eyes and arms.

    In fact, humans are simply mechanical devices that have evolved precisely to do tasks in the manner of the above chess-playing, only those tasks were instead about catching prey and finding mates and so on.


  3. Hi Coel,

    I agree with this, but you perhaps take your own interpretation (and mine) a little too much for granted, as indicated pretty much everywhere you say ‘clearly’. It is far from clear to everyone. Statements such as “Clearly the robot has awareness” are not likely to be well received by those who believe that computers cannot be conscious. What they mean by awareness is something more than this, something they assume that we have and that robots don’t. They think there is an aspect of awareness which is more than mechanical information processing in response to stimulus. This is why this conversation is so aggravating to the likes of Massimo and Aravis — you appear to be blithely ignoring their problems with computationalism and asserting that your own vision is correct.

    I’m probably not much better, but I do think accounts of computationalism should take account of how unintuitive the things we say are, and share Asher’s concerns that this indifference to the views of the other camp is making us look bad.


  4. Hi DM,

    you perhaps take your own interpretation (and mine) a little too much for granted, as indicated pretty much everywhere you say ‘clearly’.

    Yes, granted, I was deliberately echoing the style of those who pronounce that “clearly” the chess-playing robot does not have these attributes (which comes back to the earlier point that this debate is as much about assertions based on intuition as about actual arguments). Admittedly that deliberate echoing is a slightly flippant style of arguing, but, heck, if the philosophers get to argue by declaring things “clear” then why can’t I?

    They think there is an aspect of awareness which is more than mechanical information processing in response to stimulus.

    Agreed, and I submit that they think that owing to dualist intuition.

    … and share Asher’s concerns that this indifference to the views of the other camp is making us look bad.

    How do you suggest that we word things?


  5. Asher Kay wrote:

    An example: a lot of people talk about “mental states”. If you’ve done any work at all with ANNs, one of the first things you realize is that if the brain works *anything* like that, mental states are a fiction.

    ======

    Well, you’ve just walked right into what Massimo calls the “It’s all an illusion” crowd, in our BHTV discussion. And it is fallacious, for all the reasons he describes.


  6. Hi Coel,

    How do you suggest that we word things?

    It’s a difficult one. Asher seems to have some opinions in this regard so he might help.

    My attempt would be something like this.

    There seems to be some sense in which one could say, perhaps speaking informally, that the robot “knows” or “is aware” of the positions of the pieces on the board. My feeling is that this ostensible awareness is essentially the same as actual awareness, perhaps differing only in degree, e.g. of complexity or intelligence. If you disagree, I would ask you to articulate clearly what pertinent attributes you know we have and the robot does not. If you cannot, my suggestion is that you are under the influence of a subtly dualistic intuition which prevents you from considering the possibility that the robot truly is aware in the same sense we are, that true awareness is just the subjective experience of what in the robot you would call pseudo-awareness.


    Coel, this attitude is hardly conducive to a constructive exchange of ideas. “The philosophers” here have simply been reminding you that there is a huge literature on these topics which you seem to ignore or dismiss out of hand, helping yourself to your own idiosyncratic definitions of well established concepts. As Aravis said, this would be like someone ignoring how astronomers define planets and then going on and on about his own “intuitions” about what planets really are. And I’m sorry to say, but some of the things you wrote are indeed *clearly* wrong, just like, I’m sure, things I would write about quantum mechanics would likely look that way to a quantum physicist. (Which is why every time I venture in that direction I pester friends like Sean Carroll to check and make sure I got at least the basics right.)


  8. Hi David,

    I am still confused by these attempts to split off mathematics and logic from the core beliefs of a scientism-ist – as being unscientific? I can’t think of a definition of science that won’t include scientific reasoning somewhere, and I thought the Quine-Putnam argument hinges on the inseparability of mathematics from physics.

    Well said!


  9. Hi Aravis,

    We start with the concept of linguistic meaning, which comes from Semantics.

    Since language is about communicating meaning, it follows that “meaning” is more fundamental than the communication of it, and thus it is justifiable to bypass the linguistics and go straight for a computational account of “meaning.”

    … but unfortunately, what every person working in the relevant fields is interested in, is …

    You continually, it seems to me, define “relevant fields” somewhat narrowly. There are whole worlds of science and computing and engineering out there beyond the somewhat narrow confines of academic philosophy. And, let’s remember that science, computing and engineering are all fields that make clear and rapid progress.

    Linguistics is interesting, but if I were interested in how human brains work, and their relation to other animal brains and to artificial computational devices, I think that computer scientists and neuroscientists are likely to tell us more than philosophers of language.


  10. Well, then, we really don’t have anything to discuss further with one another. That you willfully ignore what Linguistics–an empirical science–has to say about meaning and “bypass it” to run straight to computer science means that we can’t have a productive conversation on the subject of natural language, since computer science tells us exactly nothing about it. (And yes, I noticed the little backhanded slap, re: sciences that have made “clear and rapid progress,” presumably in contrast with Philosophy and Linguistics.) Fortunately, the serious study of language, by linguists, continues apace, regardless of what you choose to “bypass,” and will continue to grow and flourish without your contribution.

    You have fulfilled the description of a Scientismist that Massimo and I painted, to a “T”, indeed, almost to the point of caricature. I know that this doesn’t bother you — indeed, that you are proud of it — but to me, it confirms pretty much everything I am inclined to think about that particular orientation.


  11. Hi Aravis,

    That you willfully ignore what Linguistics–an empirical science–has to say about meaning …

    I’m not ignoring it at all, I just think that the concept “meaning” is wider than linguistics (which is about one form of communication of meaning).

    Fortunately, the serious study of language, by linguists, continues apace, … without your contribution.

    Fine, but I’m not trying to contribute to linguistics, I’m considering meaning and how that arises in computational devices.

    I also note that you are insisting that the linguists are the experts on “meaning” while also saying that they have not yet worked out how a physical brain manages to generate meaning.

    An impasse on a fundamental point like that is often a sign that one is conceiving of the problem wrongly.


  12. Coel, you are begging the question when you say “I’m considering how meaning arises in computational devices.” So far, there is no evidence that it has (which doesn’t mean it can’t), and the only sensible measure of whether computer scientists will succeed is to see if they fulfill linguists’ definition of meaning. If they just make up their own their game is irrelevant (and irritating).


  13. Hi Massimo,

    and the only sensible measure of whether computer scientists will succeed is to see if they fulfill linguists’ definition of meaning.

    But the problem is that the linguists’ definition of meaning cannot be precisely articulated — it seems to be a fundamentally ineffable intuition. As Aravis has illustrated, meaning can be defined as reference, which can be defined as picking out, and so on, but this is just a carousel of synonyms.

    The definition of meaning Coel is working with is not an idiosyncratic private language, though it may appear to be. It can be used to derive and explain the way the term is used by linguists.

    Let’s quickly sketch out an example.

    To the linguist, the word “dog” refers to, or picks out, a concept, [dog], which can also be picked out by ostension, e.g. by perceiving an actual dog. In this way, the word “dog,” the concept [dog] and actual dogs are all inter-related. The word “dog” is syntax, the concept [dog] is semantics and an actual dog is a physical object.

    But something very like this can also be achieved in a computer. A keyword “dog” in an input can pick out an internal symbolic representation [dog] which could also be picked out by its detecting (e.g. through a video feed) an actual dog. The actual dog has the same relationship to the symbolic representation [dog] as it does to the mental concept [dog], and the same relationship to the keyword “dog” as to the English word “dog”. To me and Coel, this suggests that “dog” means dog to the computer in the same way that it does to a person.
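
    (Here is a toy sketch of that in Python; classify() is a hypothetical stub, not any real vision API. The point is only that the keyword route and the perceptual route converge on the same internal symbol:

        # Toy sketch: a word and a percept both "pick out" the same
        # internal symbol. classify() is a hypothetical stub.

        LEXICON = {"dog": "DOG_CONCEPT", "cat": "CAT_CONCEPT"}

        def classify(image):
            """Hypothetical image classifier returning a label like 'dog'."""
            raise NotImplementedError

        def from_keyword(word):
            """Word -> internal symbol (the linguistic route)."""
            return LEXICON.get(word)

        def from_camera(image):
            """Percept -> internal symbol (the ostension route)."""
            return LEXICON.get(classify(image))

    If from_keyword("dog") and from_camera(dog_photo) return the same symbol, then the word, the symbol and the animal stand in the same triangle of relations as “dog”, [dog] and actual dogs.)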

    This sketch of meaning I think captures something of the linguistic concept. I don’t think it is completely unrelated in any case, though it presumably seems to you to be missing something.

    But can you articulate clearly, and without begging the question, what the difference is between this and the word “meaning” as you use it? We may be mistaken, but from here it looks very much like the difference you see is purely due to the fact that you find it unintuitive to imagine that a computer could be conscious and you make consciousness a criterion of intentionality. Meanwhile, the capacity to have intentionality is a criterion for consciousness.

    This leaves us at an impasse, because there are two solutions to these criteria — either computers can be conscious and intentional, or they cannot be conscious and they cannot be intentional. Computationalists prefer the former and anti-computationalists prefer the latter.


  14. Hi Massimo,

    So far, there is no evidence that it has …

    Since we know that the neural-network computational devices in our skulls generate meaning, isn’t the sensible conclusion that neural-network computational devices can indeed generate meaning, and isn’t the burden of proof firmly placed on anyone denying that to come up with a good reason why not, or some account of what else is going on inside our skulls?


  15. You are begging the question massively, again, by simply assuming that our brain is sufficiently like a computational device. In part, it certainly is, but so far that analogy is just that, an analogy, nothing like a scientific theory of brain functioning.


    Well, you’ve just walked right into what Massimo calls the “It’s all an illusion” crowd

    I am experiencing all sorts of mental states this morning. I can assure Asher that they are not fictional.

    You both could have read what I said more charitably, but let me rephrase.

    Our intuitions about mental states are very misleading. There is very little that could be called “stateful” about what we’re ostensively pointing to when we say “mental state”.

    I didn’t use the word “illusion”, but when something appears to be very different from what it is, it could reasonably be called illusory.

    What I was *not* saying is that when we say “mental state” we are pointing to something that doesn’t exist at all.


  17. Asher, okay, of course we can be wrong about our inner mental states, just as we can be wrong about sensory data from the outside. But unless one actually goes so far as to negate the very existence of mental states, I don’t see how that’s helpful in the context of this discussion.


  18. By “information processing systems” I meant things like mammal brains and devices such as computers. The point about these is that they are goal-oriented, and I’d restrict “meaning” to such contexts.

    A thermostat is not goal-oriented. As Massimo pointed out, any seeming goal-orientedness in a thermostat resides in the human beings who create and use it.


  19. But unless one actually goes so far as to negate the very existence of mental states, I don’t see how that’s helpful in the context of this discussion.

    The helpful thing I was trying to point out to DM was that the conceptual tools we use can change our intuitions. This was in response to his assertion that our misunderstandings here were intuition related.

    There is a reason why people who have a thorough understanding of neural networks have different intuitions than people who don’t. When philosophers talk about mental states, they can make all kinds of subtle category errors because they are following the intuition of a static thing.

    There are thousands of examples of this. One is to see mental operations as manipulations of symbols. One is to think of people as having “faculties” of imagination, judgement, etc. One is to think of reasoning as “pure”, abstract and a-priori. One is to think that there’s an in-kind difference between perception and cognition. And there are many, less pithily statable examples.

    I think it could be argued that a lot of these issues are at the heart of our disagreements here.


  20. Hi Asher,

    A thermostat is not goal-oriented. As Massimo pointed out, any seeming goal-orientedness in a thermostat resides in the human beings who create and use it.

    I beg to differ. The goal might have been programmed in by the human, but I would still regard the thermostat itself as being goal-oriented.

    (With the usual proviso that such things are continua, and that a thermostat is well down the scale on such things.)
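
    (For what it’s worth, the whole “program” of a thermostat fits in a few lines. This is only a sketch, with the hardware calls stubbed out, but it shows what I mean by the goal sitting inside the device, which acts autonomously to close the gap between the world and that goal:

        # Toy thermostat loop. Sensor and actuator calls are
        # hypothetical stubs; the setpoint is held inside the device.

        SETPOINT = 20.0    # degrees C: the internally held "goal"
        HYSTERESIS = 0.5   # dead band to avoid rapid on/off switching

        def read_temperature():
            raise NotImplementedError  # hardware sensor stub

        def switch_heater(on):
            raise NotImplementedError  # hardware actuator stub

        def run():
            while True:
                temp = read_temperature()
                if temp < SETPOINT - HYSTERESIS:
                    switch_heater(True)    # world below goal: act
                elif temp > SETPOINT + HYSTERESIS:
                    switch_heater(False)   # goal met: stop acting

    Minimal, certainly, but the goal-seeking loop is in the device itself, long after the maker has departed.)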


  21. The goal might have been programmed in by the human, but I would still regard the thermostat itself as being goal-oriented

    I think this way of thinking ends up being really misleading. If you’re talking about a cat or an ant, I’m good. Cats and ants are *self*-organised and *self*-sustaining systems, and their teleology arises from that.

    Take your halving and reverse it. Double the thermostat’s “goal-directedness”. It doesn’t work because the goal is not held by the thermostat.

    Deacon is really worth checking out on this issue.


  22. Hi DM,

    As Aravis has illustrated, meaning can be defined as reference, …

    And it seems to me that my chess robot (with the video-camera sensory device feeding into a neural network, and an output from the neural network linking to a robotic arm) does “reference” just fine, in that it directly links the neural-network state to the external-world chess pieces. I’m baffled as to what is missing so that that does not qualify as “reference”.

    Of course, I don’t think that such a reference can be a *sufficient* account of meaning, in that lots of other linkages are important.

    For example, if I see a lion hiding in some long grass then the “meaning” could include (depending on context):

    “I may be about to get eaten if I’m not careful”
    “The people who paid me to guide them on safari will be pleased”
    “I should phone the zoo to tell them where their lion got to”. Et cetera.

    Lots of those concepts do not have a straightforward relation to sense data (e.g. being “careful”, being “pleased”, and “their” lion). But no problem, this is all coped with by the neural network, because that is exactly what complex neural networks do, handling complex relations between all sorts of different things in the neural network (coupled with taking information from input devices and passing it to output devices).

    And all of that works just fine with my definition of “meaning”. Tracking all of that *linguistically* through the neural network would not be possible, because neural networks are not linguistic, they are computational. Which, again, is why seeking an answer to how computation generates meaning in terms of *linguistic* analysis may be the wrong way of thinking about it.

    As Asher asked, I wonder if people who don’t accept this way of thinking have actually played with neural networks and have a feeling for how they work?


  23. Hi Asher,

    Double the thermostat’s “goal-directedness”. It doesn’t work because the goal is not held by the thermostat.

    Why isn’t the goal held by the thermostat? Because it is not conscious? But then the ant might not be either. Functionally, the “goal” is there, internal to the thermostat (it can act autonomously, long after the maker has departed). Again, if we’re considering the very lowest level of “goal-oriented behaviour” and “aboutness” I think the thermostat qualifies, but to much lesser degree than the ant.


  24. Asher seems to have some opinions in this regard so he might help.

    I don’t have a good answer. I’ve tried quite a few different approaches and to be honest, I think a decent amount of what I write seems wack to people.

    I think it’s like two people in the woods separated by some large distance. Person A is standing by some cool thing and is trying to get person B over there to see it by standing at that spot yelling. What would work a lot better is for person A to go to where person B is and show them the way over from there. To do that, you have to both know exactly where person B is and also how to get from there to the cool place.


  25. The point about these is that they are goal-oriented, and I’d restrict “meaning” to such contexts.

    The question is – why would you so restrict it? And if you can restrict it in this way, why not accept others’ restriction that “goal-directed behavior” requires a context in which the intentional thing formulates its own goals?

    When you have an “information processing system”, you are describing it as a system in which events here cause changes over there. Just like you’re saying that the chess-playing program is essentially the same as a chess-playing person in this respect, I’m saying that any causal system – a box of air, say – is processing and transmitting information. There’s really no difference.

    The box of air has a goal – entropy. It takes discrete steps to get to that goal by bouncing molecules off of other molecules. Information about the total temperature inside the box is transmitted throughout the system.


  26. I did think of something else, which always strikes me when Aravis says things like, “reference is the most basic concept in Semantics”.

    If you were to go look at a gloss of “reference”, say at the Stanford Encyclopedia, you would see a lot of theories about how reference works. It’s all really well-known stuff — Quine, Putnam, Kripke, Searle, etc. What you would *not* see is any mention of facts about how mental processes physically work. The set of theories – the whole discussion – is bounded in an abstract area. If you were to bring up something like, say, the way an association might operate within a neural network as it pertains to someone referring to something, it just wouldn’t belong anywhere in the discussion.

    But then, from within this bounded area, Aravis starts talking about computers and how computers don’t know things or aren’t aware of things. But that discussion has to do with how physical processes (computers) work, and there’s nothing in those theories to connect that up to, because those theories don’t see reference as arising from physical processes. It doesn’t fit anywhere; it has nowhere to go. Reference is fundamental in those theories because there’s nowhere in that abstract space for it to come from.

    I personally can’t think about reference without thinking of a brain inside a person doing stuff that, when we talk or think about it, we call “reference”. To me, it’s something that arises within complex, self-organized systems (us) that have developed symbolic language. I can’t imagine that this wouldn’t seem like a really weird starting point to some people, but I can’t think where else there would be to start. If we were talking about some behavior or property that a machine exhibited, we’d discuss it by talking about how that machine operated, and how that operation might give rise to the behavior.

    The idea of reference from *that* starting point is really a completely different category of thing. It’s like the difference between talking about “causality” as the interaction of elementary particles and talking about “causality” as political forces that change societies.


  27. Hi Massimo,

    You are begging the question massively, again, by simply assuming that our brain is sufficiently like a computational device.

    Or, rather, I am adopting the standard scientific and parsimonious stance that the burden of proof is on anyone claiming that there is more to it than that to demonstrate why the computational account is insufficient.


  28. Coel, I think you got the burden of proof thingy exactly wrong: you are making the positive claim that computationalism is all that is needed to understand the mind, and yet there are plenty of mental phenomena for which you can’t do anything more than handwaving. The burden of proof is squarely on you.


  29. Hi Massimo,

    and yet there are plenty of mental phenomena for which you can’t do anything more than handwaving.

    Handwaving is in the eye of the beholder. I think you and Aravis are doing some handwaving too. For example your view seems to be, computer representations don’t have meaning because a computer cannot know, and a computer cannot know because it cannot represent meaning. Computationalists are only giving accounts of “simulated” or “virtual” knowing and meaning. What “actual” knowing and meaning are, if more than this, is specified only by handwaving, by definition in terms of equally unhelpful synonyms or by appeal to intuition.


  30. Hi Massimo,

    We may have to agree to differ on the burden of proof here. You are entirely right that neural networks with 10^14 synapses are so complicated that I/we have only “hand waving” accounts of them.

    But, standard parsimony says that the burden of proof is on anyone claiming that there is more to it than the 10^14-synapse neural network.


  31. Hello. I’m neither a scientist nor a philosopher; just a layman with an interest in ideas. While I have nothing to contribute to the discussion myself, I find the conversation fascinating and remarkably stimulating. I can see from some of your exchanges that you are beginning to frustrate one another, but rest assured watching you struggle is an educational treat for some of us. You should sell “Team Scientism” and “Team Scientia” t-shirts.


  32. Hi Asher,

    But that discussion has to do with how physical processes (computers) work, and there’s nothing in those theories to connect that up to, because those theories don’t see reference as arising from physical processes.

    I agree with your whole post here, and this relates to my standard complaint that a lot of philosophy is too compartmentalised and hamstrung by not seeing itself in the physical/scientific context.

    If we want to understand “meaning”, “reference” and “intentionality”, we should start by considering how those terms apply to C. elegans. That’s a system that we have more chance of wrapping our heads round (with only 302 neurons compared to our 10^11).

    If the reply is that C. elegans does not “do” meaning, reference and intentionality, then the reply immediately hits a big problem. We humans evolved out of things like C. elegans, so there is an unbroken succession of parent-child relationships linking something C.-elegans-like to us. You can then ask at what number of neurons the animal does start “doing” meaning, reference and intentionality, and ask what exactly changes between the parent and the child over that boundary.

    If philosophers are not asking themselves that sort of question then they are simply thinking about things the wrong way.

    I still have no idea why my chess-robot does not qualify as doing “reference” and “intentionality”. The photons bouncing off the external chess piece are “about” that chess piece; the signal entering the video camera is thus “about” that chess piece, as is the signal passing through the image-processing unit. The output of that is thus “about” the chess piece, and it feeds into the neural network, where that signal gets entwined with all sorts of other signals that are “about” all sorts of other things. The output from the neural network to the robotic arm is “about” the chess piece that the robotic arm is directed to move.

    What is missing from that account such that it lacks “aboutness”?


  33. “All such biological properties — intelligence, consciousness, intentionality, agency, complexity, etc — are continua, and evolved gradually along that continuum”

    Ok for biological, and the common understanding of intelligence, consciousness, intentionality, and agency. Not sure about complexity, depends on what you mean.

    “The philosophers’ trick is to point to the extreme ends of the continuum (pebbles and humans), ignore all the stuff in the middle (chess-playing computers, frogs, nematode worms, etc), and then invent a “hard problem” by declaring that one can’t get to one end (human minds) from the other (pebbles)”

    Seems to me you have two continua, biological machines (with things like bacteria, cells, hydras, worms, fish, reptiles, mammals) and non-biological machines. There is a huge gap between the two, and we don’t have any principle by which we could bridge it.

    Moreover biologically, the more we gain solid empirical knowledge, the more complex the systems become, and the more our fundamental ignorance appears to grow.


    Coel, standard parsimony says nothing of the sort; where do you get that? The reason your model doesn’t even begin to explain things is that you are helping yourself to an idiosyncratic definition of meaning, then proceeding to assure us that of course there is nothing else to meaning other than symbol manipulation. Again, this is a massive instance of begging the question.


  35. Hi Massimo,

    The reason your model doesn’t even begin to explain things is because you are helping yourself to an idiosyncratic definition of meaning, …

    Then let’s discuss “reference” and “aboutness” instead. Why is my chess-playing robot not doing “reference” and “aboutness”?


  36. Coel,

    I think C. elegans is doing “meaning”, “reference” and “intentionality”, I’d go as far as bacteria too.

    “I still have no idea why my chess-robot does not qualify as doing “reference” and “intentionality””

    It can qualify, but because there is no continuum between the biological and machines like a pump, a car, or a computer, there is no reason to suppose that the anthropomorphizing of machines is more than, at best, a convenient analogy.


  37. Hi marc,

    I think C. elegans is doing “meaning”, “reference” and “intentionality”, I’d go as far as bacteria too.

    Excellent! That advances the discussion.

    because there is no continuum between the biological and machines like a pump, a car, or a computer, there is no reason to suppose that the anthropomorphizing of machines is more than, at best, a convenient analogy.

    How about if I built a computationally equivalent replica of the C. elegans brain out of silicon? Would that be doing “meaning”, “reference” and “aboutness”?

    Bear in mind that a 302-neuron neural network is within our capability to replicate.
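
    (For scale, a toy 302-unit recurrent network is a few lines of NumPy. This is emphatically not C. elegans’s actual wiring, just an indication that nothing about the size is out of reach:

        import numpy as np

        # Toy 302-unit recurrent rate network: an illustration of
        # scale, not a model of C. elegans's real connectome.

        N = 302
        rng = np.random.default_rng(0)
        W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))  # random "synaptic" weights
        state = np.zeros(N)

        def step(state, external_input):
            """One update: each unit sums its weighted inputs and
            squashes the result through tanh."""
            return np.tanh(W @ state + external_input)

        for t in range(100):
            state = step(state, rng.normal(0, 0.1, N))

    The hard part of replication is getting the right weights and dynamics, not the raw size.)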

    If the answer is “no” then why not?


  38. Coel, you have been told about this several times, as Aravis pointed out. It really does feel like Groundhog Day, and I’ve got other things on my to-do list today, sorry.


  39. Hi Massimo,

    Coel, you have been told about this several times, as Aravis pointed out. It really does feel like Groundhog Day, and I’ve got other things on my to-do list today, sorry.

    You have made several comments such as this. Aravis has made several comments such as this. But when pressed, the best Aravis could do was to offer a number of synonyms for semantics: meaning, reference, picking out and so on, synonyms which could be applied just as well to a computer as to a person. When asked to justify the difference, he said it was because a computer does not know what the references mean, but since this is precisely the point in question it is circular.

    So as much as you think you have answered Coel’s question, as far as I can see you haven’t, not properly. It’s been skirted around plenty but it hasn’t actually been properly addressed. It might be good fodder for an article if you want to give it proper treatment some time — what is it that intentionality is and how can it be defined so that it is not obviously just as applicable to computers as to humans?


    DM, this is going to sound uncharitable, but as Aravis said, I’m not paid for doing this. If you or Coel are interested in a full-fledged course on reference, semantics, etc., this isn’t the place. We have given plenty of explanations, repeatedly, as well as references for further readings. In the end, everyone is entitled to his opinion, but I’m also entitled to consider a conversation closed if I don’t think there is progress, or if I have more pressing demands on my time, no?


  41. Of course, Massimo.

    I’m not objecting to your abandoning of the conversation. I’m objecting to your claim that you have answered these questions many times already. I understand that you believe you have, but I’m making the point that I don’t think any of your answers have been substantive.


  42. Hi Coel,

    “How about if I built a computationally equivalent replica of the C. elegans brain out of silicon? Would that be doing “meaning”, “reference” and “aboutness”? … If the answer is “no” then why not?”

    No, for many reasons. Among them: I don’t think anybody can build such a machine, let alone define it in a functional way, and I think it takes more than a simplified aspect of the brain modeled on silicon to do what we mean when we are speaking about the kind of biological systems I mentioned.

    “Bear in mind that a 302-neuron neural network is within our capability to replicate”

    That it is within our capability is not the case. What we have now is a completely non-functional description of some aspects of all the neurons in its nervous system. And I mentioned bacteria, so again I’m not assuming that neurons alone, or their firings, are enough to reliably encompass what we mean by being able to do things like “meaning”, “reference” or “aboutness” when speaking of biological systems.

    I also posted this comment but it may be easy to miss and I forgot to say who I was speaking to:

    https://scientiasalon.wordpress.com/2014/08/18/scientism-yippee-or-boo-sucks-part-i/comment-page-1/#comment-6428


  43. Hi marc,

    Seem to me you have two continua, biological machines (with things like bacteria, cells, hydras, worms, fish, reptiles, mammals) and non biological machines. There is a huge gap between the two, …

    So what is the gap, in your eyes? Is it simply complexity, that our technology is insufficient to make things of biological complexity (or that we currently don’t understand things of biological complexity?), or is there something more to the distinction than that?

    I readily accept that there is more to a brain than the neurons; for example, some of the other cells in the brain could be playing a role in information processing, so I don’t want to focus on neurons alone. However, it seems to me that there is nothing about a C. elegans that could not in principle be replicated in some other physical substrate, and anyhow I’m not at all sure why the physical substrate is relevant for “intentionality” and “aboutness”, which are more about information processing. So can you expound on why you see the biological and the non-biological as so distinct?


  44. Hi Asher,

    The question is – why would you so restrict it?

    I would restrict “meaning” to information-processing systems, rather than applying it to any and all physical systems, since that, it seems to me, is what “meaning” is about.

    And if you can restrict it in this way, why not accept others’ restriction that “goal-directed behavior” requires a context in which the intentional thing formulates its own goals?

    Do we humans formulate our own goals? If I am very hungry I have a goal to eat and not feel hungry. Did I formulate that goal myself, or is it a goal programmed into me by evolution?

    When you have an “information processing system”, you are describing it as a system in which events here cause changes over there. Just like you’re saying that the chess-playing program is essentially the same as a chess-playing person in this respect, I’m saying that any causal system – a box of air, say – is processing and transmitting information. There’s really no difference.

    By “information-processing systems” I am referring to things with aims and goals, systems that are deliberating. I don’t think that “meaning” is a sensible word to apply to any and all physical systems, it only makes sense to me in the context of life forms and purposive devices created by life forms. Thus “meaning” is a product of Darwinian evolution (like consciousness, intelligence, awareness, desires, etc, which are all likewise).

    The box of air has a goal – entropy. It takes discrete steps to get to that goal by bouncing molecules off of other molecules.

    Well, no, that usage of the term “goal” seems too weird to me. A brick falls under gravity, and it has an end-state of having fallen under gravity, but I don’t think that it has a “goal” of falling under gravity.


  45. Well, no, that usage of the term “goal” seems too weird to me.

    Yeah but so what? Your definition seems weird to a lot of people too.

    A brick falls under gravity, and it has an end-state of having fallen under gravity, but I don’t think that it has a “goal” of falling under gravity.

    A box of air is “programmed” by the 2nd law of thermodynamics to seek entropy, just like the chess program is programmed by a person and a person is programmed by evolution. I fail to see the difference between them. If you add a heat source to the side of the box, the air will adjust to that and still relentlessly seek entropy.

    Sure, maybe it has less intentionality than the chess program, but it’s still non-zero. You just have to halve the chess program until you arrive at a box of air.


    “So, we get in deep confusion if we think that the problem with scientism is its metaphysics (physicalism). Physicalism may be wrong, but it’s not what is wrong with scientism.” – Gregory Gaboardi

    I absolutely agree that physicalism may be wrong, but it’s not what is wrong with scientism (and I don’t think physicalism is wrong, FWIW). Would you agree that one example of scientism, and an example that illustrates how it’s wrong, is when someone says that 1) physicalism has been shown to be true scientifically, 2) one can derive physicalism from science, or 3) science is incompatible with the claim that physicalism is false?

    I’ve heard versions of each.

