On the (dis)unity of the sciences

by Massimo Pigliucci

As a practicing scientist I have always assumed that there is one thing, one type of activity, we call science. More importantly, though I am a biologist, I automatically accepted the physicists’ idea that — in principle at the least — everything boils down to physics, that it makes perfect sense to go after a “theory of everything.”

Then I read John Dupré’s The Disorder of Things: Metaphysical Foundations of the Disunity of Science [1], and that got me to pause and think (which, of course, is the hallmark of a good book, regardless of whether one rejects that book’s conclusions).

I found John’s book compelling not just because of its refreshing and admittedly self-consciously iconoclastic tone, but also because a great deal of it is devoted to subject matters, like population genetics, that I actually know a lot about, which puts me in a good position to judge whether the philosopher got it right (mostly, he did).

Dupré’s strategy in The Disorder of Things is to attack the idea of reductionism by showing how it doesn’t work in biology. The author rejects the notion of a unified scientific method (a position that is nowadays pretty standard among philosophers of science) and goes on to advocate a pluralistic view of the sciences, which he claims reflects both what the sciences themselves are finding about the world (with a multiplication of increasingly disconnected disciplines and the production of new explanatory principles that are stubbornly irreducible to each other) and a more sensible metaphysics (there aren’t any “joints” at which the sciences “cut nature,” so that there are a number of perfectly equivalent ways of thinking about the universe and its furnishings).

But this essay isn’t primarily about John’s book. Rather, it took form while I re-read Jerry Fodor’s classic paper, “Special sciences (or: the disunity of science as a working hypothesis)” [2], together with Nancy Cartwright’s influential book, How the Laws of Physics Lie [3] — both of which came out before The Disorder of Things and clearly influenced it. Let me explain, beginning with Fodor, and moving then to Cartwright.

Fodor’s target was, essentially, the logical positivist idea (still exceedingly common among scientists, despite the philosophical demise of logical positivism a number of decades ago) that the natural sciences form a hierarchy of fields and theories, each (potentially) reducible to the level below it, forming a chain of reduction that ends with fundamental physics at the bottom. So, for instance, sociology should be reducible to psychology, which in turn collapses into biology, the latter into chemistry, and then we are almost there.

But what does “reducing” mean, anyway? [4] At least two things (though Fodor makes further technical distinctions; you’ll have to check his original article for those): let’s call them ontological and theoretical.

Ontologically speaking, most people would agree that all things in the universe are made of the same substance (the exceptions, of course, are substance dualists), be it quarks, strings, branes or even mathematical relations [5]; moreover, complex things are made of simpler things. For instance, populations of organisms are nothing but collections of individuals, while atoms are groups of particles, etc. Fodor does not object to this sort of reductionism, and neither do I.

Theoretical reduction, however, is a different beast altogether, because scientific theories are not “out there in the world,” so to speak; they are creations of the human mind. This means that theoretical reduction, contra popular assumption, most definitely does not logically follow from ontological reduction. Theoretical reduction was, of course, the holy grail (never achieved) of logical positivism: it is the ability to reduce all scientific laws to lower level ones, eventually reaching a true “theory of everything,” formulated in the language of physics. Fodor thinks that this too won’t fly, and the more I think about it, the more I’m inclined to agree.

Now, typically when one questions theory reduction in science one is faced with both incredulous stares and a quick counter-example: but look at chemistry! It has successfully been reduced to physics! Indeed, there basically is no distinction between chemistry and physics! Turns out that there are two problems with this move: first, the example itself is questionable; second, even if true, it is arguably more an exception than the rule.

As Michael Weisberg and collaborators write in the Stanford Encyclopedia of Philosophy entry on the Philosophy of Chemistry [6]: “many philosophers assume that chemistry has already been reduced to physics. In the past, this assumption was so pervasive that it was common to read about ‘physico/chemical’ laws and explanations, as if the reduction of chemistry to physics was complete. Although most philosophers of chemistry would accept that there is no conflict between the sciences of chemistry and physics, most philosophers of chemistry think that a stronger conception of unity is mistaken. Most believe that chemistry has not been reduced to physics nor is it likely to be.” You will need to check the literature cited by Weisberg and colleagues if you are curious about the specifics, but for my purposes here it suffices to note that the alleged reduction has been questioned by “most” philosophers of chemistry, which ought to cast at least some doubt on even this oft-trumpeted example of theoretical reduction. (Oh, and closer to my academic home field, Mendelian genetics has not been reduced to molecular genetics, in case you were wondering [7].)

The second problem, however, is even worse. Here is how Fodor puts it, right at the beginning of his ’74 paper:

“A typical thesis of positivistic philosophy of science is that all true theories in the special sciences [i.e., everything but fundamental physics, including non-fundamental physics] should reduce to physical theories in the long run. This is intended to be an empirical thesis, and part of the evidence which supports it is provided by such scientific successes as the molecular theory of heat and the physical explanation of the chemical bond. But the philosophical popularity of the reductivist program cannot be explained by reference to these achievements alone. The development of science has witnessed the proliferation of specialized disciplines at least as often as it has witnessed their reduction to physics, so the widespread enthusiasm for reduction can hardly be a mere induction over its past successes.”

I would go further than Fodor here, echoing Dupré above: the history of science has produced many more divergences at the theoretical level — via the proliferation of new theories within individual “special” sciences — than it has produced successful cases of reduction. If anything, the induction goes the other way around!

Indeed, even some scientists seem inclined toward at least a bit of skepticism concerning the notion that “fundamental” physics is so, well, fundamental. (It is, of course, in the trivial ontological sense discussed above: everything is made of quarks, or strings, or branes, or whatever.) Remember the famous debate about the construction of the Superconducting Super Collider, back in the ‘90s? [8] This was the proposed antecedent of the Large Hadron Collider that recently led to the discovery of the Higgs boson, and the project was eventually nixed by the US Congress because it was too expensive. Nobel physicist Steven Weinberg testified in front of Congress on behalf of the project, but what is less well known is that some physicists testified against the SSC, and that their argument was based on the increasing irrelevance of fundamental physics to the rest of physics — let alone to biology or the social sciences.

Hard to believe? Here is how solid state physicist Philip W. Anderson put it as early as 1972 [9], foreshadowing the arguments he later used against Weinberg at the time of the SSC hearings: “the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science.” So much for a fundamental theory of everything.

Back to Fodor and why he is skeptical of theory reduction, again from his ’74 paper:

“If it turns out that the functional decomposition of the nervous system corresponds to its neurological (anatomical, biochemical, physical) decomposition, then there are only epistemological reasons for studying the former instead of the latter [meaning that psychology couldn’t be done by way of physics only for practical reasons, it would be too unwieldy]. But suppose there is no such correspondence? Suppose the functional organization of the nervous system cross cuts its neurological organization (so that quite different neurological structures can subserve identical psychological functions across times or across organisms). Then the existence of psychology depends not on the fact that neurons are so sadly small, but rather on the fact that neurology does not posit the natural kinds that psychology requires.” [10]

Just before this passage in the same paper, Fodor argues a related, even more interesting point:

“If only physical particles weren’t so small (if only brains were on the outside, where one can get a look at them), then we would do physics instead of paleontology (neurology instead of psychology; psychology instead of economics; and so on down). [But] even if brains were out where they can be looked at, as things now stand, we wouldn’t know what to look for: we lack the appropriate theoretical apparatus for the psychological taxonomy of neurological events.”

The idea, I take it, is that when physicists like Weinberg (for instance) tell me (as he actually did, during Sean Carroll’s naturalism workshop [11]) that “in principle” all knowledge of the world is reducible to physics, one is perfectly within one’s rights to ask (as I did of Weinberg) what principle, exactly, he is referring to. Fodor contends that if one were to call the epistemic bluff, the physicists would have no idea where to even begin to provide a reduction of sociology, economics, psychology, biology, etc. to fundamental physics. There is, it seems, no known “principle” that would guide anyone in pursuing such a quest — a far more fundamental issue than the one imposed by merely practical limits of time and calculation. To provide an analogy: if I told you that I could, given the proper amount of time and energy, list all the digits of the largest known prime number, but then declined to actually do so because, you know, the darn thing’s got 12,978,189 digits, you couldn’t have any principled objection to my statement. But if instead I told you that I can prove to you that there is an infinity of prime numbers, you would be perfectly within your rights to ask me for at the least the outline of such a proof (which exists, by the way), and you should certainly not be content with any vague gesturing on my part to the effect that I don’t see any reason “in principle” why there should be a limit to the set of prime numbers.
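The outline in question is Euclid’s classic argument, which fits in a few lines:

```latex
% Euclid's proof that there are infinitely many primes (sketch)
Suppose $p_1, p_2, \dots, p_n$ were a complete list of all the primes, and consider
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$. But every
integer greater than $1$ has at least one prime divisor, so $N$ has a prime divisor
missing from the list, contradicting the assumption that the list was complete.
```

Notice that this is exactly the sort of thing the “in principle” reductionist cannot produce: not a calculation too long to carry out, but a sketch of how the reduction would go at all.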

Fine, but does anyone have any positive reasons to take seriously the notion of the impossibility of ultimate theory reduction, and therefore of the fundamental disunity of science (in theoretical, not ontological, terms)? Nancy Cartwright does (and so does Ian Hacking, as exemplified in his Representing and Intervening [12]). Cartwright has put forth a view that in philosophy of science is known as theory anti-realism [13], which implies a denial of the standard idea — almost universal among scientists, and somewhat popular among philosophers — that laws of nature are (approximately) true generalized descriptions of the behavior of things, especially particles (or fields, doesn’t matter). Rather, Cartwright suggests that theories are statements about how things (or particles, or fields) would behave according to idealized models of reality.

What’s the big deal? That our idealized models of reality are not true, and therefore that — strictly speaking — laws of nature are false. Of course the whole idea of laws of nature (especially with their initially literal implication of the existence of a law giver) has been controversial since it was championed by Descartes and opposed by Hobbes and Galileo [14], but Cartwright’s rather radical suggestion deserves a bit of a hearing, even though one may eventually decide against it (I admit to being a sympathetic agnostic in this regard).

Cartwright distinguishes between two ways of thinking about laws: “fundamental” laws are those postulated by the realists, and they are meant to describe the true, deep structure of the universe. “Phenomenological” laws, by contrast, are useful for making empirical predictions, and they work well enough for that purpose, but strictly speaking they are false.

Now, there are a number of instances in which even physicists would agree with Cartwright. Take the laws of Newtonian mechanics: they do work well enough for empirical predictions (within a certain domain of application), but we know that they are false if they are understood as being truly universal (precisely because they have a limited domain of application). According to Cartwright, all laws and scientific generalizations, in physics as well as in the “special” sciences, are just like that: phenomenological.

Funny thing is that some physicists — for example Lee Smolin [15] — seem to provide support for Cartwright’s contention, to a point. In his delightful The Trouble with Physics Smolin speculates (yes, it’s pretty much a speculation, at the moment) that there are empirically intriguing reasons to suspect that Special Relativity “breaks down” at very high energies [16], which means that it wouldn’t be a law of nature in the “fundamental” sense, only in the “phenomenological” one. (Smolin also suggests that General Relativity may break down at very large cosmological scales [16].)

But of course there are easier examples: as I mentioned above, nobody has any clue about how to even begin to reduce the theory of natural selection, or economic theories, for instance, to anything below the levels of biology and economics respectively, let alone fundamental physics.

If Cartwright is correct, then, science is fundamentally disunified, and its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each one of which, of course, would be characterized by its proper domain of application.

Here is how Cartwright herself puts it, concerning physics in particular: “Neither quantum nor classical theories are sufficient on their own for providing accurate descriptions of the phenomena in their domain. Some situations require quantum descriptions, some classical and some a mix of both.” And the same goes, a fortiori, for the full ensemble of scientific theories, including all those coming out of the special sciences.

So, are Dupré, Fodor, Hacking and Cartwright, among others, right? I don’t know, but it behooves anyone who is seriously interested in the nature of science to take their ideas seriously, without dismissing them out of hand. We have already agreed that it is impossible to achieve reduction from a pragmatic epistemic perspective, and we have seen that there are good reasons to at the least entertain the idea that disunity is fundamental, not just epistemic. True, we have also agreed to the notion of ontological reduction, but I have argued above that there is no logically necessary connection between ontological and theoretical reduction, and it is therefore a highly questionable leap of (epistemic) faith to simply assume that because the world is made of one type of stuff therefore there must be one fundamentally irreducible way of describing and understanding it. Indeed, ironically it is the anti-realists who claim the mantle of empiricism to buttress their arguments: the available evidence goes against the idea of ultimate theory reduction (it can’t be done in most cases, and the number of theories to reduce is increasing faster than the number of successful reductions achieved so far), so it is a metaphysically inflationary (i.e., unnecessary and undesirable) move to assume that somehow such evidence is deeply misleading. And most physicists wouldn’t be caught dead admitting that they are engaging in metaphysics…

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[1] The Disorder of Things: Metaphysical Foundations of the Disunity of Science, by J. Dupré, 1993.

[2] Special sciences (or: the disunity of science as a working hypothesis), by J. Fodor, Synthese, 1974.

[3] How the Laws of Physics Lie, by N. Cartwright, 1983.

[4] Scientific Reduction, by R. van Riel, Stanford Encyclopedia of Philosophy, 2014.

[5] Rationally Speaking podcast #69: James Ladyman on metaphysics; Rationally Speaking podcast #101: Max Tegmark on the mathematical universe hypothesis.

[6] Philosophy of Chemistry, by M. Weisberg et al., Stanford Encyclopedia of Philosophy, 2011.

[7] On the debate about the reduction of Mendelian to molecular genetics, see: Molecular Genetics, by K. Waters, Stanford Encyclopedia of Philosophy, 2007.

[8] Superconducting Super Collider, Wiki entry.

[9] More Is Different, by P. W. Anderson, Science, 177:393-396, 1972.

[10] A “natural kind” in philosophy is a grouping of things that is not artificial, that cuts nature at its joints, as it were. A typical example is a chemical element, like gold. See: Natural Kinds, by A. Bird, 2008, Stanford Encyclopedia of Philosophy. Notice that Fodor here is in tension with Dupré, since the latter denies the existence of natural kinds altogether.

[11] Moving Naturalism Forward, an interdisciplinary workshop, 25-29 October 2012.

[12] Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, by I. Hacking, 1983.

[13] Which she couples with “entity” realism, the idea that unobservable entities like genes and electrons are (likely) real. This position is therefore distinct, and in between, the classical opposites of scientific realism (about both theories and entities) and scientific anti-realism (about both theories and entities). See: Scientific Realism, by A. Chakravartty, Stanford Encyclopedia of Philosophy, 2011, and Constructive Empiricism, by B. Monton and C. Mohler, Stanford Encyclopedia of Philosophy, 2012.

[14] Are there natural laws?, by M. Pigliucci, Rationally Speaking, 3 October 2013.

[15] The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next, by L. Smolin, 2006.

[16] For Special Relativity, see chapter 13 of Smolin’s book. This has to do with the so-called GZK prediction, which represents a test of the theory at a point approaching Planck scale, where quantum mechanical effects begin to be felt. Regarding General Relativity, the comment is found in chapter 1.



Categories: essay


127 replies

  1. Aravis-

    This is *not* merely a Token Physicalism and is clearly false. Sea shells, gold nuggets, pieces of paper, and electronic currents may all be instances of “currency,” but it’s not true that currency is “composed of” a “pattern” of these distinct physical types. And notice that the conjunction of these physical types is not itself a physical type.

    Yes, precisely. Fodor himself has a humorous take on this in his paper when he discusses what he calls the “immortal econophysicist”:

    “…an immortal econophysicist might, when the whole show is over, find a predicate in physics that was, in brute fact, coextensive with ‘is a monetary exchange’. If physics is general – if the ontological biases of reductivism are true – then there must be such a predicate. But (a) to paraphrase a remark Donald Davidson made in a slightly different context, nothing but brute enumeration could convince us of this brute co-extensivity, and (b) there would seem to be no chance at all that the physical predicate employed in stating the coextensivity is a natural kind term, [Fodor notes that should it be the case that ‘is a monetary exchange’ matches a physical natural kind, it “would be an accident on a cosmic scale”] and (c) there is still less chance that the coextension would be lawful (i.e., that it would hold not only for the nomologically possible world that turned out to be real, but for any nomologically possible world at all).” p. 104


  2. Alexander, Newtonian mechanics treats space and time as distinct, fixed entities. Relativity treats them as two aspects of the same thing, and as continuously dynamic. In what sense of the word “approximate” is the first an approximation of the second?

    The fact that you can’t see how a theory could be empirically adequate and yet false is interesting, but it is pretty well supported by the history of science, and it is a phenomenon caused by the underdetermination of theories by the evidence.


  3. People have been unhappy with my choice of Weinberg as a champion of indefensible reductionism, even though that is precisely the position he was defending at Carroll’s naturalism workshop.

    Here, then, is a much better and clearcut example: EO Wilson, particularly (but not only) in his book, Consilience.

    This is Fodor’s review of the book: http://www.lrb.co.uk/v20/n21/jerry-fodor/look

    This is biologist Allen Orr’s review: http://bostonreview.net/archives/BR23.5/Orr.html
    Notice that Allen is one of the most sophisticated scientists I know when it comes to understanding of philosophy (it even rubbed off on his colleague of many years, Jerry Coyne, who recently took Wilson to task).

    And of course here is a recent article of mine criticizing Wilson’s consilience program in detail:
    http://aeon.co/magazine/philosophy/massimo-pigliucci-on-consilience/


  4. Hi PanSci,

    Thanks for your response.

    As usual it will all come down to definitions of words like “unity” or “simulation” and there might be several reasonable definitions, but given the scope of the discussion I do not find yours helpful. Let me try to explain:

    Of course, if you would only ever refer to a universe as “unified” if you could describe it by a state vector and a set of transitions, then this kind of describability not only follows from unity, it is the same claim!

    Alexander instead was talking about a special kind of “connectedness” in the physical universe, maybe that there are no two “wholly separated domains over space, time and scale”.
    Now, it does not at all follow for me that your type of describability would by necessity be possible in a universe that is unified in this latter sense. Think of a connected geometrical figure whose local neighborhood is smooth and well defined at any point, but the “total structural information” of the figure can nevertheless not be compressed significantly, so that the set of all “accurate representations” contains only the figure itself. Now it would be very interesting to me to understand that such a case could be ruled out for our universe, I just don’t see how.

    A similar issue occurs in some simulation arguments. I assume that their defenders have something like a numerical simulation in mind, in other words a computational process according to some computational model. In recent decades we have really learned a lot about the limits of computation — not only limits in practice, but also limits in theory! I have no problem, for instance, describing a state transition as some non-computable function using well defined mathematical notation, yet this transition cannot be computationally simulated. Imagine that the time development of the universe were governed by a very sensitive (chaotic) system of differential equations with some (infinite) subset of the digits of π as parameters. These transitions would be perfectly well-defined, but could not be computed, and the time development of the system could hence not be computationally simulated.
    “Well, but how does the universe get from t to t+1 then?” one could reasonably ask… “Duh, by some non-computational process of course.”

    Again, if you are prepared to refer to “whatever the universe does” as a computation, you haven’t said much of interest beyond conceding that you currently don’t have much of a clue what a computation actually is. Additionally, you would need to be ready to accept that the only way to “simulate” a universe would be to “build” a universe and “run” it. Then again, “building a house” is not “simulating a house,” nor “simulating building a house,” in my book — actually building a house is what it is.


  5. Well, the discussion went pretty much as I thought it would — and pretty much as every discussion on this topic has gone in the past. Plenty of contempt for philosophy and philosophers — for the life of me, I can’t understand why people with so little regard for philosophy and philosophers keep reading and participating in a philosophy webzine — plenty of evasion of all the difficult questions, by way of constant re-framing, and plenty of denial — denial that so-and-so actually said such-and-such; denial that anyone ever said such-and-such; charges of Straw-man raising; and so on and so forth.

    So, in my last comment on this thread, let’s take stock.

    1. Token Physicalism and supervenience are weak, uninteresting theses, which pretty much everyone agrees with. They don’t get you anywhere near anything like the unity of the sciences. Thus far, no one has disputed this or offered any argument of any kind against this point–though I’ve made it over and over again–and yet, the pro-reductionists insist that this is all they and anyone in science means. That’s all fine and well, but it’s still….well….uninteresting and doesn’t get you any of the stuff you want.

    2. Protestations and denials aside, plenty of scientists — including quite prominent ones — have advanced what are clearly versions of Type Physicalism and Strong Reductionism, on behalf of some version of the unity of the sciences thesis. E.O. Wilson wrote a whole book along these lines, which received an enormous amount of attention, and Massimo has recounted *personal* conversations with scientists, in which they have expressed these views. It is a central topic of Ernest Nagel’s “The Structure of Science,” which is likely the most important work of philosophy of science written in the 20th century (IMHO).

    The constant refrain, then, from the reductionist-gallery that “no scientist thinks this” and “strong reductionism is a Straw-man” is not only demonstrably false, but insulting, insofar as it suggests that Massimo and those of us on the other side are either outright lying or arguing in bad faith. (Straw-men are typically *deliberately* constructed in arguments, conducted in bad faith.)

    3. None of the defenders of reductionism here have countered a single one of the very specific arguments against Type Physicalism/Strong Reductionism/Unity of the Sciences. It’s been crickets chirping all along, on that front. (Unsurprising, considering that the reductionist gallery has chosen the “we’re not strong reductionists and never were” strategy.) Even Coel has declined to revive his “money is a neurological state” line.

    Fodor’s thesis stands. Strongly.


  6. Final brief note to all who defend computationalism in general, and specifically, the von Neumann machine that can supposedly do anything, and simulate any “property/action X” it is said it can’t do.

    Long, long refuted.

    Read Gödel, Escher, Bach, specifically, the one dialogue by Hofstadter about unbreakable record players.


  7. @David: “To be a reductionist you have to believe the physical description does completely describe the animal”.

    Not quite. Consider a canister of gas. The microscopic state is in terms of molecular velocities etc. We then arbitrarily choose to define temperature, a high-level concept. Analyzing the microscopic state in this high-level way reveals the emergent temperature of the gas. The physical, microscopic, description does not talk about temperature, and that concept doesn’t logically follow from it.

    This is how all reduction works – high-level concepts are “reducible” in the sense that they are defined in terms of lower-level ones, but they are “not reducible” in the sense that they do not arise from the lower-level but rather are imposed arbitrarily by us. To paraphrase Democritus – teeth exist by convention, temperature exists by convention, in truth there is only quarks and leptons. (The idea isn’t that teeth don’t exist, but rather that the choice to describe reality in such terms is an arbitrary ‘convention’.)

    The low-level description is “complete” in the sense that it contains all the information needed to calculate the higher-level description. But not “complete” in the sense that it already includes said calculations. Temperature can be calculated from the microscopic state, but the microscopic state does not list what the temperature is.
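    The canister-of-gas point can be made concrete with a short sketch (illustrative only: the helium-atom mass and the list of molecular speeds below are made-up numbers, not anything from the comment). The microscopic state is just a list of speeds; “temperature” appears only once we impose a definition on top of it:

```python
# Sketch: temperature as a defined, high-level quantity over a microscopic state.
# Assumes a monatomic ideal gas; the mass and speeds are illustrative choices.

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_HE = 6.6e-27       # approximate mass of a helium atom, kg

def temperature(speeds):
    """Kinetic temperature from (3/2) k_B T = <(1/2) m v^2>."""
    mean_ke = sum(0.5 * M_HE * v * v for v in speeds) / len(speeds)
    return (2.0 / 3.0) * mean_ke / K_B

# The microscopic description is just a list of speeds (m/s);
# nowhere does it mention "temperature".
speeds = [1300.0, 1100.0, 1500.0, 1250.0]
print(round(temperature(speeds), 1))  # prints 267.4
```

    The low-level state contains everything needed to compute the temperature, but the resulting number is nowhere listed in that state; it exists only relative to the definition we chose to impose.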

    @Aravis: “[Reduction] in the sense meant by those making Unity of the Sciences style arguments, entails Type [Reduction].”

    No. Temperature isn’t part of the lower-level description. Reductionists maintain that high-level concepts are manifested because of the underlying dynamics, not that the high-level concepts exist in the underlying levels.

    Such weakly-emergent behaviors can be multiply realizable. Numerous particles and bodies can have “high energy”, for example.

    They can also establish autonomous fields. It doesn’t matter how computation is physically done, as long as the results match the principles of computation theory. Yet, the emergence of the higher-level behavior (whose presence the autonomous field relies on) is always due to the underlying dynamics.

    From Democritus to Weinberg (through me), you are misunderstanding reductionists. When we tell you that in truth there is only atoms and void, we do not mean that currency is an atom. When we tell you that “in principle all knowledge of the world is reducible to physics” we do not mean that you can just crank up the calculators and derive a result in economics. Your “protestations and denials aside”, you are pinning positions on people who don’t hold them.

    @miramaxime:

    Yes, we implicitly assume no Gödel-like complications, and the simulation criterion is inapplicable if they hold. However, even then we may have practically-computable reduction, e.g. an infinite series of ever-deeper physical theories, each computable and reducing to a deeper one. “Entailment” can also replace computability. Ultimately, if there is “unity” of the kind we define then there is reduction (duh! we define to fit our purposes…).

    @All: This is my final (5th) comment in this thread; if you wish to continue the discussion, you are welcome to do so in my own blog http://thebiganswers.wordpress.com/2014/12/04/reduction-in-two-easy-steps/


  8. I suppose this is my last, so..
    This is a bit of a chicken vs. egg argument. We know the complex chicken arises from the simple egg, and so complex reality must arise from some simple form. The problem is that the reductionistic, measurement-based methods used, which result in assuming physics is the egg, really only give us the bones of the chicken, the hard patterns left over once all the soft tissue of context and feedback (not to mention the inherent dynamics, reduced to static measures) has been distilled away.
    For one thing, contextual reality is far more thermodynamic, than temporally linear. Keep in mind that entropy is those energies seeking their own thermal equilibrium and not imposed by our convention. Time also emerges as an effect of this activity, as form evolves, future becomes past. The linear narrative of sequence is in fact our imposition and only really apparent in hindsight.
    In fact, eastern philosophies, which are contextual, as western ones are object oriented, view the past as being in front, since both are observed and the future behind, since both are hidden. This is a contextual paradigm, rather than one in which the object moves against context and thus forward into the future.
    In this view, thermodynamics is more fundamental, because the observer is just one of the particles, not the temporal point of reference moving through its context, from one event to the next.
    So it is not just atoms and the void, it’s nodes and the network. If we look at it from the left side of our brain, the nodes seem predominate and if we look at it from the right side of our brain, the network seems predominate.
    Panpsychist, I’ll post this to your blog, since it was partially addressed to you.


  9. SciSal:

    “..Newtonian mechanics treats space and time as distinct, fixed entities. Relativity treats them as two aspects of the same thing, and as continuously dynamic. In what sense of the word “approximate” is the first an approximation of the second?”

    I like your word “aspects”, very succinct.

    As you know, in Special Relativity, once a frame of reference is chosen, space and time are each fixed, but not before. I don’t think there is much credibility to the doubt expressed about the possibility of approximation. And expressing that doubt as in the last sentence is rather close to the old ‘argument by incredulity’: humans can’t have evolved from bacteria, and Newtonian physics can’t be approximated by relativity, because I can’t think of any way for those to happen!

    Ask, and I (or many others more knowledgeable) can give you lots of sources where Newtonian kinematics is derived as a limiting case of SR’s treatment of spacetime, Newtonian dynamics when SR’s energy-momentum tensor is included, Newtonian gravity as a limiting case of general relativity. (And similarly for quantum mechanics.) Alternatively, just put “newtonian relativity limit” into Google. About fifth down are the excellent Caltech notes of your acquaintance Sean (the non-biologist) Carroll, and several other entries there appear okay as well.
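    A quick numerical check of the limiting-case claim (a sketch of my own, not taken from Carroll’s notes; the function names are mine): relativistic kinetic energy, (γ − 1)mc², reduces to the Newtonian ½mv² when v is much smaller than c, and visibly departs from it as v approaches c.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def relativistic_ke(m, v):
    """Relativistic kinetic energy: (gamma - 1) * m * c^2."""
    return (gamma(v) - 1.0) * m * C ** 2

def newtonian_ke(m, v):
    """Newtonian kinetic energy: (1/2) * m * v^2."""
    return 0.5 * m * v ** 2

m = 1.0  # kg
# At a jet-like 3 km/s the two agree to within a few parts per million
# (floating-point cancellation limits the comparison)...
slow = relativistic_ke(m, 3000.0) / newtonian_ke(m, 3000.0)
# ...but at half the speed of light they visibly disagree.
fast = relativistic_ke(m, 0.5 * C) / newtonian_ke(m, 0.5 * C)
print(slow, fast)  # slow is ~1.0; fast is ~1.24
```

    The ratio of about 1.24 at half the speed of light is the footprint of the γ factor; at everyday speeds it is indistinguishable from 1, which is what “limiting case” means here.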

    I had mistakenly guessed that this ‘theory falseness’ and ‘non-approximability’ had long been abandoned. The quibbles (not the word a philosopher might use) of Kuhn, Feyerabend and Jammer concerning the mass difference between the above theories have no credibility at all with physicists now, and never really did, it seems. Matt Strassler’s blog is a really good one for people like me with a reasonable education in some mathematical things, but no grad school in physics. For such particle physicists, mass is always what used to sometimes be called rest mass, and energy is what comes out of the mathematician Emmy Noether’s famous theorem when applied to time-invariance symmetry. I won’t claim that that theorem would be quoted by all 3,000 scientists working at the LHC in Geneva, but Strassler and his theorist colleagues there certainly would, and they would not ‘equate’ mass with energy. But they would ‘equate’ the space of an observer in SR with the space of Newton, though not the kinematics.

    I dug a bit and got the impression that Fodor had nothing new to say, beyond the above discounted philosophizing, about the reduction of biology/chemistry to physics or of Newtonian to later forms of physics; and not much beyond Putnam on a topic I’d avoid pretending to opine about, namely the reduction of thinking to anything more basic.

    As far as Newtonian laws being not ‘true’, of course a huge lesson was learned with the advent of the relativity and quantum theories. But that lesson is partly that physicists now always include error bars and limited domains of applicability with everything (except a claimed “final theory”, of course). If you wish to say that Newton’s theory with those provisos should no longer be called Newton’s theory, that’s up to you. But with them, the claim of ‘not true’ is simply silly, unless and until stricter provisos are shown to be needed. And I doubt that agonizing over ‘true’ versus ‘empirically adequate’ is of much interest to people working on fundamental physics.


  10. Panpsychist,

    “Consider a canister of gas.” I thought we were doing cheetahs but ok.
    “The physical, microscopic, description does not talk about temperature, and that concept doesn’t logically follow from it.” Well, yes it does, in the relevant way. Temperature is one of the few things that does seem to reduce neatly (I’m not an expert, though, so I can’t be sure). You can define temperature in terms of energy states. So we can identify being 40 degrees with being in a certain thermodynamic state. That is reduction. Any instance of temperature description can be exchanged for a completely physical description according to rules. Given these rules, “bridge laws” in the philosophical vernacular, being in thermodynamic state A *logically* entails being at temperature T. Just as, given F=ma, a body’s having a force of 1 newton and an acceleration of 1 m/s² *logically* entails its having a mass of one kilogram. I think you have the notion that reductionism requires that the ordinary-language concepts must become embedded in the physics, or that in running a simulation ordinary-language concepts would somehow appear in the simulation. Not at all. What matters is that we have rules of replacement so we can reduce one kind of description to the other.
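    For what it’s worth, the gas-canister bridge law fits in a few lines of code (a sketch of mine, not Panpsychist’s, assuming an ideal monatomic gas; the helium numbers and function names are my own choices). The kinetic-theory identity (3/2)·k_B·T = mean kinetic energy per molecule computes the “emergent” temperature purely from the microscopic state:

```python
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature(mass, velocities):
    """Bridge law for an ideal monatomic gas:
    (3/2) * k_B * T = mean kinetic energy per molecule."""
    mean_ke = sum(0.5 * mass * (vx**2 + vy**2 + vz**2)
                  for vx, vy, vz in velocities) / len(velocities)
    return (2.0 / 3.0) * mean_ke / K_B

# Microscopic state: helium atoms with Maxwell-Boltzmann velocities at 300 K.
random.seed(0)
m_he = 6.6465e-27                    # kg, mass of one helium atom
sigma = (K_B * 300.0 / m_he) ** 0.5  # per-component velocity spread, m/s
state = [(random.gauss(0.0, sigma),
          random.gauss(0.0, sigma),
          random.gauss(0.0, sigma)) for _ in range(20_000)]

print(round(temperature(m_he, state)))  # close to 300
```

    Nothing labeled “temperature” appears anywhere in the microscopic state; the bridge law is exactly the rule of replacement that licenses the reduction.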

    “The low-level description is “complete” in the sense that it contains all the information needed to calculate the higher-level description.”
    Well, what you seem to have in mind by “calculation” is computing one kind of description from another by bridge laws, where you put, say, mental description in and get physical description out; that is philosophical reductionism pure and simple. Anti-reductionists, like me, think we have reasons to think such calculations impossible. Aravis quite correctly pointed out that you haven’t addressed any of them.

    On Aravis “pinning” views on you:
    When you say that natural kind terms like “tooth” and even “beauty” are definable in purely physical terms you are literally giving the definition of type reductionism. There just is no ambiguity on this point. I’m sorry to be blunt, but if you continue to maintain that you are not a type reductionist, or that you hold a supervenience view, you are simply proving you do not know what these words mean. Aravis understood you perfectly well.

    You (and Coel) are equivocating on whether you mean logical or causal entailment. I see it happening moment to moment. “Reductionists maintain that high-level concepts are manifested because of the underlying dynamics”: causal. “The low-level description is ‘complete’ in the sense that it contains all the information needed to calculate the higher-level description”: logical.

    The frustration sets in because you, Coel, DM and the rest come in and declare that philosophers are using terms arbitrarily, mistakenly or just in unhelpful ways, when it is exactly the philosophers who have spent great time and energy trying to frame and answer these questions. Many scientists seem to reflexively think they have the answers merely because they are scientists, yet in discussion many of the scientists here make basic errors that philosophy diagnosed some time ago. They demand that we suddenly reframe the debate in their improvised framework. I am going to choose the carefully articulated and examined framework every time. One of the traditional roles for philosophy is teaching people they know less than they thought. I certainly see a need for that here.

    Coel,
    You didn’t really get a chance to respond; if you want to comment on Panpsychist’s blog I will see it there. Otherwise I’m sure Groundhog Day will come again. 🙂


  11. Panpsychist-

    “Not quite. Consider a canister of gas. The microscopic state is in terms of molecular velocities etc. We then arbitrarily choose to define temperature, a high-level concept. Analyzing the microscopic state in this high-level way reveals the emergent temperature of the gas. The physical, microscopic, description does not talk about temperature, and that concept doesn’t logically follow from it.”

    “To paraphrase Democritus – teeth exist by convention, temperature exists by convention, in truth there is only quarks and leptons.”
    _____________________________________

    When I’m freezing in January in Minneapolis, it’s not arbitrary or “not real,” I can assure you. To say that it is, is essentially to fall into Massimo’s “it’s all an illusion crowd” category, which helps us not at all, for all the ontological, epistemological, and other reasons regarding explanation and understanding articulated by Aravis and Massimo below:

    http://bloggingheads.tv/videos/30523

    The more I hear these interminable arguments about reductionism, the more I think Massimo was correct in describing this as a kind of monism, the result, he says, of a combination of ignorance of philosophy of science and an Enlightenment hangover. Fodor, too, highlighted this resistance at the end of his 1997 sequel, “Special Sciences: Still Autonomous After All These Years.”


    ph, the claim that a Newtonian theory of space-time is wrong is not silly at all. And this has nothing to do with error bars: the picture of space-time you get from Newton is qualitatively different from the one you get from Einstein. This may not fit the neat realist view of steady progress in science, but then so much the worse for the neat realist view.


  13. SciSal, Newtonian mechanics certainly is an approximation to relativity, and this is commonly explained in relativity textbooks and on Wikipedia. Space and time are not two aspects of the same thing in relativity. In spite of your claim that causality does not appear in fundamental physics, space-time in relativity has a causal structure that treats space very differently from time.

    It is also not quite right to say that Newtonian mechanics is different because space and time are fixed. Both Newtonian and relativistic mechanics can use coordinate changes in space and time. You say that relativity space-time is continuously dynamic, but it is really the gravitational potential that is continuously dynamic, and a Newtonian potential is also continuously dynamic.

    It is true that relativistic gravity has an interpretation in terms of curvature, but so does Newtonian gravity.

    You also claim that classical and quantum mechanics are radically different in how they treat space and time, and hence cannot be applied to the same problems. I disagree. See the correspondence principle for how classical mechanics is a macroscopic approximation to quantum mechanics, contrary to Cartwright.
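    The usual statement of that correspondence (my summary, not something Cartwright discusses) is Ehrenfest’s theorem: quantum expectation values of position and momentum obey equations of Newtonian form,

```latex
\frac{d\langle x\rangle}{dt} = \frac{\langle p\rangle}{m},
\qquad
\frac{d\langle p\rangle}{dt}
= \left\langle -\frac{\partial V}{\partial x} \right\rangle
\approx \left. -\frac{\partial V}{\partial x} \right|_{x = \langle x\rangle}
```

    where the final approximation is good precisely when the wave packet is narrow compared to the scale over which the force varies, i.e., in the macroscopic regime.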

    Alexander is quite right that calling Newtonian physics “wrong” is a rather strange use of the word wrong. Perhaps the historian/philosophers use the word to mean something different from everyone else. Phoffman56 explains this correctly, and he is right that the philosopher quibbles have no credibility at all with physicists.

    SciSal argues that Einstein space-time is some sort of exception to the view of steady progress in science. I disagree. The history of relativity was one of steady progress over decades, and was not really so radically different. You can still talk about forces as being the rate of change of momentum, and many other Newtonian concepts carry over.


    Physicists (scientists) who know what they’re talking about (philosophically) are very critical of the idea that all the sciences can be unified with, or reduced to, physics. Others, not so much. That’s the impression I get from following this discussion.

    But I still have the feeling that philosophy sometimes is a blunt instrument to study physics.

    > the picture of space-time you get from Newton is qualitatively different from the one you get from Einstein.

    In the limit of small v/c the Lorentz transformations (special relativity) become the Galilei transformations (classical mechanics). Taking this limit “decouples” space and time (in a certain sense). The fact that a classical picture of space-time is “qualitatively” different from the SR picture becomes irrelevant. The distinction between the two becomes a distinction without a measurable difference, physically speaking. In the limit of small v/c, the “qualities” we’re talking about, and that are supposed to be different, are not physical qualities.

    Insisting on a certain qualitative difference when v/c is small seems to me to be taking a philosophical distinction too far, using it as a blunt instrument. After all, the instruments of philosophy have a limited domain of application too, just like the instruments of the sciences. Using the concept of a qualitative difference in this case is probably perfectly acceptable from the point of view of a philosopher (a good whack with a hand axe is perfectly acceptable if a hand axe is all you have), but it misses what’s important for physics.
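    Concretely (a two-line sketch of the standard limit): the Lorentz boost along x is

```latex
x' = \gamma\,(x - vt), \qquad
t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
```

    As v/c goes to 0, γ goes to 1 and the vx/c² term vanishes, leaving x' = x - vt and t' = t: the Galilei transformation, with a single universal time. The “coupling” of space and time lives entirely in the terms that disappear in this limit.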


    The gravitational property of a physical particle or a dust particle in space plays a minimal role in its behavior versus the gravitational property of the earth; likewise, the electrons in my styrofoam coffee cup play a minimal role versus the ones flowing in this computer. It all has to do with the environments created, which lead to the various scientific disciplines, and the inability to factor in those environments causes the disunity. Massimo’s background in biology is similar to that of engineers like myself who are interested in mechanisms, whether natural (biological, cosmological, geological, etc.) or manmade. Any mechanism creates its own environment and specific set of behavioral laws.


  16. Patrick, schlafly,

    “In the limit of small v/c the Lorentz transformations (special relativity) become the Galilei transformations (classical mechanics).”

    Yes, I know about the limit approximation. So here is another opportunity to show that philosophy is not the blunt instrument that you think: the most sophisticated available version of scientific realism is called structural realism, and it argues that what’s “real” about scientific theories is their mathematical structure. What this means is that structural realists can acknowledge that the qualitative picture emerging from Newtonian mechanics is — physically — wrong, but that mathematically there is continuity between it and the picture emerging from relativity.

    Of course, structural realism faces its own problems, such as explaining why most scientific theories do not have a sophisticated mathematical structure, or a mathematical structure at all (think of the theory of natural selection, or continental drift in geology). Structural realists also face some issues within physics itself: for instance, people have argued that there is mathematical continuity between Ptolemaic and Copernican astronomy. Maybe so, but talk about two theories that most definitely present qualitatively distinct, and physically incompatible, pictures of the world!


  17. > people have argued that there is mathematical continuity between Ptolemaic and Copernican astronomy. Maybe so, but talk about two theories who most definitely present qualitatively distinct — and physically incompatible — pictures of the world!

    Of course there is mathematical continuity between Ptolemaic and Copernican astronomy on the kinematic level (the description of movements). It’s just a choice of coördinates (on the kinematic level).

    And again, I have the feeling of a hand axe hitting something. I apologize, I don’t want to be rude or provocative. I am sincere.

    > Of course, structural realism faces its own problems, such as explaining why most scientific theories do not have a sophisticated mathematical structure, or a mathematical structure at all.

    Again, this seems like a philosophical concept taken outside its domain of applicability. It works in some instances in the mathematically structured sciences. It obviously doesn’t work when you’re studying natural selection or continental drift. Structural realism looks like a pretty blunt instrument to me.


    By the way, I have to clarify something. I am very, very sceptical of the unification of science, etc. In a certain sense I agree with much of what you write. I’m absolutely no fan of the Weinbergs of this world. Weinberg has a hammer, and a pretty good one (the Standard Model), but that doesn’t mean everything is a nail.

    But … sometimes I feel that philosophers are guilty of the same attitude, albeit with less arrogance and perhaps not to the same degree. You have your own hammers.


    Patrick, I don’t doubt your sincerity, and this is an open discussion anyway. But — as a scientist *and* a philosopher — I find it intriguing when philosophers are accused of using blunt thinking tools, given that much of the point of philosophy is to make subtle conceptual distinctions.


    “…one thing, one type of activity, we call science.” I don’t think reducing “science” to a method instead of a body of knowledge really helps us focus on the issue of how we can recognize knowledge. Forgetting that what we think should be based on our best knowledge, and instead wielding an abstract criterion (whether “scientific method” or logic or reason in general or revelation or intuition) to select the propositions/facts we use, seems to me a disastrous approach to any serious issue.

    “…rejects both the notion of a unified scientific method …a pluralistic view of the sciences …a more sensible metaphysics (there aren’t any ‘joints’ at which the sciences ‘cut nature,’ so that there are a number of perfectly equivalent ways of thinking about the universe and its furnishings).” Dupré isn’t available in the public library, nor are there any inexpensive editions immediately available, so I’ll have to let this speak for him. Dupré’s rejection of a unified science seems to ignore the unity of nature, despite claiming to believe nature has no “joints.” Science in general is unified by its subject, with individual subdivisions a matter of practicality. To assume, in contradiction with himself, that nature really does have joints, so that one part of nature has to be described with one science while another requires a separate science, is not just illogical but an enormous assumption to foist on people. Nobody has a clue how the universe could operate if it were ontologically variegated. The last phrase about “sensible” metaphysics might be read as implying that the different subdivisions of science provide different descriptions of nature. However, it is doubtful in what sense these ways of thinking are different if they are supposed to be “perfectly equivalent.” If they are, we could say that they reduce to each other, except under a different, somehow less offensive name. I would guess that Dupré is trying to avoid explicitly denying the unity of nature, since that is such an extreme position.


  21. As to Fodor?

    The number of specialized disciplines taught in colleges is not driven by theoretical reductionism, but by practical considerations, such as vocational utility or possibly social benefit. Fodor did not need to demonstrate the practical difficulties in theoretical reductionism. After all, that’s why most everybody who talks about theoretical reductionism says “in principle.” What he needed to explain is why the practical difficulties are not sufficient to set his mind at rest. He needed to explain what principle, other than the ontological disunity of nature, would make theoretical reductionism an absurdity. Or at least he needed to give an argument as to how the unity of nature could imply that a unified description of it is impossible. That seems quite illogical to me, as opposed to the notion that the unity of nature implies that a successful description of it will also be unified. My guess is that he’s basically thinking that science is incapable of describing reality. But cloaking antirealism as antireductionism serves as more effective rhetoric.

    I think a much more useful contribution to the critique of theoretical reductionism would be a discussion of emergentism. As is, antireductionism appears as the public face of a magical emergentism, where metaphysical entities are posited at will, whether it’s EP’s propensities and modules, or the Austrian economist’s time preference or interest rate. The interesting, nontrivial question is whether emergentism is compatible with materialism, or with the braver forms of “naturalism.” Given that ontological reductionism is not, contra Fodor, trivial, I can’t see much benefit in this paper.


    As to Cartwright? She also is not available at the public library, nor are inexpensive editions easily obtained, so again I will go by what’s here. She is an open antirealist. Well, the world has been down that road before. Antirealists are the people who informed us… that atoms did not exist, but that the atomic hypothesis was merely a way to organize experimental results. That electromagnetic fields are just computational or heuristic conveniences. That genes and species are merely labels. That culture is an abstraction over individuals. That history is a narrative imposed by the historian on disconnected events. These are the people who can’t see why the nonexistence of the poles is a problem for the Ptolemaic theory, because, after all, Ptolemy’s model gave good predictions, better initially, before Kepler used elliptical orbits, than did Copernicus’ model, or so I understand. (Antirealists seem to usually reduce science to an experimental method, viewing theories as instruments to correlate data from acceptable experiments, with success measured by prediction of new results.) So, I must say I can’t see why it behooves us to take this seriously.

    “the standard idea …that laws of nature are (approximately) true generalized descriptions of the behavior of things, especially particles (or fields, doesn’t matter). Rather, Cartwright suggests that theories are statements about how things (or particles, or fields) would behave according to idealized models of reality.”
    Oh my. How can these be separate things? Cartwright seems to be hinting that there is only a coincidental relationship between the causal “things” in theories and the things in the world. Well, this is clearly not the case in physics, where controlled experiments have established the laws of motion, such as conservation of momentum, etc. I do not think we can meaningfully talk, as Cartwright does, about the laws of motion being rules deduced from an imaginary game of billiards. And it’s not just physics. I don’t think we could view Mendel’s laws as rules deduced from an idealized model of inheritance. Theories are simplified versions of reality because that’s what it means to explain, to highlight the causes. I think Cartwright has a problem with the whole concept of brute matter causing things.

    Cartwright’s antirealist distinction between fundamental and phenomenological laws, aside from ignoring scientific knowledge whenever it wishes, imposes a false dichotomy. On the one hand, we have the instrumentalist view, which is tacitly taken to be the really scientific one because it is experimentalist and predictivist, but which has nothing to do with ultimate reality (which appears to be unknowable, just because). On the other, there are fundamental laws, which are false because they are not predictive of the actual measurements obtained in everyday reality. The real dichotomy, I think, is between fundamental theory that hypothesizes causal entities and provides explanations, yet may be erroneous, versus phenomenological theory that provides predictions instead of explanation, and which indeed cannot provide grounds even for simple induction. It doesn’t appear that Cartwright has anything useful to offer on this dilemma.

    As for Cartwright’s notion that the phrase “laws of nature” implies a lawgiver, i.e., God, given that “laws of nature” doesn’t just imply a moral purpose but also the possibility of disobedience and the promise of exemption (aka “miracle”), I will agree with her. As of now, I think it’s just confusionism.


  23. Steven, that’s not quite right. Cartwright is an anti-realist about scientific theories, but is a realist about the entities postulated by those theories. It’s a position in-between standard realism and anti-realism (as, in a different fashion, is the case also for so-called structural realism).


  24. No doubt Cartwright is just crushed that I misunderstood her. My apologies to her, and thanks for the correction.

    That said, it seems to me that seeking and finding these causal entities (whether Mendelian genes or tectonic plates or whatever,) is the scientific reductionist program in action. On a theoretical level, it seems to me that emergent phenomena, like temperature or minds or the market, will leave the so-called special sciences necessary to engage with reality. But, like Fodor, Cartwright doesn’t address the nature of valid rather than mystical emergentism.


  25. “…or a mathematical structure at all (think of the theory of natural selection, or continental drift in geology)”

    I don’t think either of these is a good example. “Origin of Species” doesn’t have any equations, but it did present a model and an algorithm, which have since been mathematized in evolutionary genetics – the recent arguments about group selection seem to swing between mathematical models and observational data. In the case of continental drift, we have straight geometry (fit of coastlines), species distributions, and arguments about the physics.

    I have some sympathy with Mario Bunge’s pragmatic way of thinking: his ontological “levels ladder” (physical, chemical, living, thinking, social, artificial) looks sensible, and highlights emergence (P being a property of an object not possessed by any entity on the preceding level, although every object on the current level is composed of objects from the previous level). One does wonder whether these levels will proliferate indefinitely. And it doesn’t make much of the fact that “universal” mathematical models can apply to phenomena at many levels, e.g. diffusion or computation.


    David, I assure you that I understand population genetics theory very well; it’s my field of research. But it doesn’t get you the sort of formalism that is required by structural realism. Not to mention that there are plenty of aspects of evolutionary theory that simply don’t make it into population genetics.

