On the (dis)unity of the sciences

by Massimo Pigliucci

As a practicing scientist I have always assumed that there is one thing, one type of activity, we call science. More importantly, though I am a biologist, I automatically accepted the physicists’ idea that — in principle at the least — everything boils down to physics, that it makes perfect sense to go after a “theory of everything.”

Then I read John Dupré’s The Disorder of Things: Metaphysical Foundations of the Disunity of Science [1], and that got me to pause and think (which, of course, is the hallmark of a good book, regardless of whether one accepts that book’s conclusions).

I found John’s book compelling not just for its refreshing and self-consciously iconoclastic tone, but also because a great deal of it is devoted to subjects, like population genetics, that I actually know a lot about, which puts me in a good position to judge whether the philosopher got it right (mostly, he did).

Dupré’s strategy in The Disorder of Things is to attack the idea of reductionism by showing how it fails in biology. The author rejects the notion of a unified scientific method (a position that is nowadays pretty standard among philosophers of science) and goes on to advocate a pluralistic view of the sciences, which he claims reflects both what the sciences themselves are finding about the world (a multiplication of increasingly disconnected disciplines, producing new explanatory principles that are stubbornly irreducible to each other) and a more sensible metaphysics (there aren’t any “joints” at which the sciences “cut nature,” so there are a number of perfectly equivalent ways of thinking about the universe and its furnishings).

But this essay isn’t primarily about John’s book. Rather, it took form while I re-read Jerry Fodor’s classic paper, “Special sciences (or: the disunity of science as a working hypothesis)” [2], together with Nancy Cartwright’s influential book, How the Laws of Physics Lie [3] — both of which came out before The Disorder of Things and clearly influenced it. Let me explain, beginning with Fodor, and moving then to Cartwright.

Fodor’s target was, essentially, the logical positivist idea (still exceedingly common among scientists, despite the philosophical demise of logical positivism a number of decades ago) that the natural sciences form a hierarchy of fields and theories that are (potentially) reducible to the next level down, forming a chain of reductions that ends with fundamental physics at the bottom. So, for instance, sociology should be reducible to psychology, which in turn collapses into biology, biology into chemistry, and then we are almost there.

But what does “reducing” mean, anyway? [4] At least two things (though Fodor makes further technical distinctions, for which you’ll have to check his original article): let’s call them ontological and theoretical.

Ontologically speaking, most people would agree that all things in the universe are made of the same substance (the exceptions, of course, are substance dualists), be it quarks, strings, branes or even mathematical relations [5]; moreover, complex things are made of simpler things. For instance, populations of organisms are nothing but collections of individuals, while atoms are groups of particles, etc. Fodor does not object to this sort of reductionism, and neither do I.

Theoretical reduction, however, is a different beast altogether, because scientific theories are not “out there in the world,” so to speak; they are creations of the human mind. This means that theoretical reduction, contra popular assumption, most definitely does not logically follow from ontological reduction. Theoretical reduction was, of course, the holy grail (never achieved) of logical positivism: it is the ability to reduce all scientific laws to lower level ones, eventually reaching a true “theory of everything,” formulated in the language of physics. Fodor thinks that this too won’t fly, and the more I think about it, the more I’m inclined to agree.

Now, typically when one questions theory reduction in science one is faced with both incredulous stares and a quick counter-example: but look at chemistry! It has successfully been reduced to physics! Indeed, there basically is no distinction between chemistry and physics! Turns out that there are two problems with this move: first, the example itself is questionable; second, even if true, it is arguably more an exception than the rule.

As Michael Weisberg and collaborators write in the Stanford Encyclopedia of Philosophy entry on the Philosophy of Chemistry [6]: “many philosophers assume that chemistry has already been reduced to physics. In the past, this assumption was so pervasive that it was common to read about ‘physico/chemical’ laws and explanations, as if the reduction of chemistry to physics was complete. Although most philosophers of chemistry would accept that there is no conflict between the sciences of chemistry and physics, most philosophers of chemistry think that a stronger conception of unity is mistaken. Most believe that chemistry has not been reduced to physics nor is it likely to be.” You will need to check the literature cited by Weisberg and colleagues if you are curious about the specifics, but for my purposes here it suffices to note that the alleged reduction has been questioned by “most” philosophers of chemistry, which ought to cast at least some doubt on even this oft-trumpeted example of theoretical reduction. (Oh, and closer to my academic home field, Mendelian genetics has not been reduced to molecular genetics, in case you were wondering [7].)

The second problem, however, is even worse. Here is how Fodor puts it, right at the beginning of his ’74 paper:

“A typical thesis of positivistic philosophy of science is that all true theories in the special sciences [i.e., everything but fundamental physics, including non-fundamental physics] should reduce to physical theories in the long run. This is intended to be an empirical thesis, and part of the evidence which supports it is provided by such scientific successes as the molecular theory of heat and the physical explanation of the chemical bond. But the philosophical popularity of the reductivist program cannot be explained by reference to these achievements alone. The development of science has witnessed the proliferation of specialized disciplines at least as often as it has witnessed their reduction to physics, so the widespread enthusiasm for reduction can hardly be a mere induction over its past successes.”

I would go further than Fodor here, echoing Dupré above: the history of science has produced many more divergences at the theoretical level — via the proliferation of new theories within individual “special” sciences — than it has produced successful cases of reduction. If anything, the induction goes the other way around!

Indeed, even some scientists seem inclined toward at least a bit of skepticism concerning the notion that “fundamental” physics is so, well, fundamental. (It is, of course, in the trivial ontological sense discussed above: everything is made of quarks, or strings, or branes, or whatever.) Remember the famous debate about the construction of the Superconducting Super Collider, back in the ‘90s? [8] This was the proposed predecessor of the Large Hadron Collider that recently led to the discovery of the Higgs boson, and the project was eventually nixed by the US Congress because it was too expensive. Nobel physicist Steven Weinberg testified in front of Congress on behalf of the project, but what is less well known is that some physicists testified against the SSC, and that their argument was based on the increasing irrelevance of fundamental physics to the rest of physics — let alone to biology or the social sciences.

Hard to believe? Here is how solid state physicist Philip W. Anderson put it as early as 1972 [9], foreshadowing the arguments he later used against Weinberg at the time of the SSC hearings: “the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science.” So much for a fundamental theory of everything.

Back to Fodor and why he is skeptical of theory reduction, again from his ’74 paper:

“If it turns out that the functional decomposition of the nervous system corresponds to its neurological (anatomical, biochemical, physical) decomposition, then there are only epistemological reasons for studying the former instead of the latter [meaning that psychology couldn’t be done by way of physics only for practical reasons, it would be too unwieldy]. But suppose there is no such correspondence? Suppose the functional organization of the nervous system cross cuts its neurological organization (so that quite different neurological structures can subserve identical psychological functions across times or across organisms). Then the existence of psychology depends not on the fact that neurons are so sadly small, but rather on the fact that neurology does not posit the natural kinds that psychology requires.” [10]

Just before this passage in the same paper, Fodor argues a related, even more interesting point:

“If only physical particles weren’t so small (if only brains were on the outside, where one can get a look at them), then we would do physics instead of paleontology (neurology instead of psychology; psychology instead of economics; and so on down). [But] even if brains were out where they can be looked at, as things now stand, we wouldn’t know what to look for: we lack the appropriate theoretical apparatus for the psychological taxonomy of neurological events.”

The idea, I take it, is that when physicists like Weinberg (for instance) tell me (as he actually did, during Sean Carroll’s naturalism workshop [11]) that “in principle” all knowledge of the world is reducible to physics, one is perfectly within one’s rights to ask (as I did of Weinberg) what principle, exactly, he is referring to. Fodor contends that if one were to call the physicists’ epistemic bluff, they would have no idea of where to even begin to provide a reduction of sociology, economics, psychology, biology, etc. to fundamental physics. There is, it seems, no known “principle” that would guide anyone in pursuing such a quest — a far more fundamental problem than the one imposed by merely practical limits of time and calculation. To provide an analogy: if I told you that I could, given the proper amount of time and energy, list all the digits of the largest known prime number, but then declined to actually do so because, you know, the darn thing’s got 12,978,189 digits, you couldn’t have any principled objection to my statement. But if instead I told you that I can prove that there is an infinity of prime numbers, you would be perfectly within your rights to ask me for at least the outline of such a proof (which exists, by the way), and you should certainly not be content with any vague gesturing on my part to the effect that I don’t see any reason “in principle” why there should be a limit to the set of prime numbers.
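Incidentally, the digit count itself makes the “in principle vs. in practice” contrast vivid: one can know exactly how many digits that prime has without ever listing a single one of them. A minimal sketch (in Python, purely for illustration; the prime in question is the Mersenne prime 2^43112609 − 1, which held the record at the time):

```python
import math

# Number of decimal digits of 2**43112609 - 1, computed without
# ever materializing the number itself. For n >= 1, 2**n is never
# a power of 10, so 2**n - 1 has the same digit count as 2**n.
exponent = 43112609
digits = math.floor(exponent * math.log10(2)) + 1
print(digits)  # 12978189
```

The point of the analogy survives the computation: knowing *that* there are 12,978,189 digits (a principled, checkable claim) is very different from merely waving at the possibility of writing them all down.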

Fine, but does anyone have any positive reasons to take seriously the notion of the impossibility of ultimate theory reduction, and therefore of the fundamental disunity of science (in theoretical, not ontological, terms)? Nancy Cartwright does (and so does Ian Hacking, as exemplified in his Representing and Intervening [12]). Cartwright has put forth a view that in philosophy of science is known as theory anti-realism [13], which implies a denial of the standard idea — almost universal among scientists, and somewhat popular among philosophers — that laws of nature are (approximately) true generalized descriptions of the behavior of things, especially particles (or fields, doesn’t matter). Rather, Cartwright suggests that theories are statements about how things (or particles, or fields) would behave according to idealized models of reality.

What’s the big deal? That our idealized models of reality are not true, and therefore that — strictly speaking — laws of nature are false. Of course the whole idea of laws of nature (especially with their initially literal implication of the existence of a law giver) has been controversial since it was championed by Descartes and opposed by Hobbes and Galileo [14], but Cartwright’s rather radical suggestion deserves a bit of a hearing, even though one may eventually decide against it (I admit to being a sympathetic agnostic in this regard).

Cartwright distinguishes between two ways of thinking about laws: “fundamental” laws are those postulated by the realists, and they are meant to describe the true, deep structure of the universe. “Phenomenological” laws, by contrast, are useful for making empirical predictions, and they work well enough for that purpose, but strictly speaking they are false.

Now, there are a number of instances in which even physicists would agree with Cartwright. Take the laws of Newtonian mechanics: they work well enough for empirical predictions (within a certain domain of application), but we know that they are false if understood as truly universal (precisely because they have a limited domain of application). According to Cartwright, all laws and scientific generalizations, in physics as well as in the “special” sciences, are just like that: phenomenological.

Funny thing is that some physicists — for example Lee Smolin [15] — seem to provide support for Cartwright’s contention, to a point. In his delightful The Trouble with Physics Smolin speculates (yes, it’s pretty much a speculation, at the moment) that there are empirically intriguing reasons to suspect that Special Relativity “breaks down” at very high energies [16], which means that it wouldn’t be a law of nature in the “fundamental” sense, only in the “phenomenological” one. (Smolin also suggests that General Relativity may break down at very large cosmological scales [16].)

But of course there are easier examples: as I mentioned above, nobody has any clue about how to even begin to reduce the theory of natural selection, or economic theories, for instance, to anything below the levels of biology and economics respectively, let alone fundamental physics.

If Cartwright is correct, then, science is fundamentally disunified, and its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each one of which, of course, would be characterized by its proper domain of application.

Here is how Cartwright herself puts it, concerning physics in particular: “Neither quantum nor classical theories are sufficient on their own for providing accurate descriptions of the phenomena in their domain. Some situations require quantum descriptions, some classical and some a mix of both.” And the same goes, a fortiori, for the full ensemble of scientific theories, including all those coming out of the special sciences.

So, are Dupré, Fodor, Hacking and Cartwright, among others, right? I don’t know, but it behooves anyone who is seriously interested in the nature of science to take their ideas seriously, without dismissing them out of hand. We have already agreed that it is impossible to achieve reduction from a pragmatic epistemic perspective, and we have seen that there are good reasons to at the least entertain the idea that disunity is fundamental, not just epistemic. True, we have also agreed to the notion of ontological reduction, but I have argued above that there is no logically necessary connection between ontological and theoretical reduction, and it is therefore a highly questionable leap of (epistemic) faith to simply assume that because the world is made of one type of stuff therefore there must be one fundamentally irreducible way of describing and understanding it. Indeed, ironically it is the anti-realists who claim the mantle of empiricism to buttress their arguments: the available evidence goes against the idea of ultimate theory reduction (it can’t be done in most cases, and the number of theories to reduce is increasing faster than the number of successful reductions achieved so far), so it is a metaphysically inflationary (i.e., unnecessary and undesirable) move to assume that somehow such evidence is deeply misleading. And most physicists wouldn’t be caught dead admitting that they are engaging in metaphysics…

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[1] The Disorder of Things: Metaphysical Foundations of the Disunity of Science, by J. Dupré, 1993.

[2] Special sciences (or: the disunity of science as a working hypothesis), by J. Fodor, Synthese, 1974.

[3] How the Laws of Physics Lie, by N. Cartwright, 1983.

[4] Scientific Reduction, by R. van Riel, Stanford Encyclopedia of Philosophy, 2014.

[5] Rationally Speaking podcast #69: James Ladyman on metaphysics; Rationally Speaking podcast #101: Max Tegmark on the mathematical universe hypothesis.

[6] Philosophy of Chemistry, by M. Weisberg et al., Stanford Encyclopedia of Philosophy, 2011.

[7] On the debate about the reduction of Mendelian to molecular genetics, see: Molecular Genetics, by K. Waters, Stanford Encyclopedia of Philosophy, 2007.

[8] Superconducting Super Collider, Wiki entry.

[9] More Is Different, by P. W. Anderson, Science, 177:393-396, 1972.

[10] A “natural kind” in philosophy is a grouping of things that is not artificial, that cuts nature at its joints, as it were. A typical example is a chemical element, like gold. See: Natural Kinds, by A. Bird, 2008, Stanford Encyclopedia of Philosophy. Notice that Fodor here is in tension with Dupré, since the latter denies the existence of natural kinds altogether.

[11] Moving Naturalism Forward, an interdisciplinary workshop, 25-29 October 2012.

[12] Representing and Intervening: Introductory Topics in the Philosophy of Natural Science, by I. Hacking, 1983.

[13] Which she couples with “entity” realism, the idea that unobservable entities like genes and electrons are (likely) real. This position is therefore distinct from, and in between, the classical opposites of scientific realism (about both theories and entities) and scientific anti-realism (about both theories and entities). See: Scientific Realism, by A. Chakravartty, Stanford Encyclopedia of Philosophy, 2011, and Constructive Empiricism, by B. Monton and C. Mohler, Stanford Encyclopedia of Philosophy, 2012.

[14] Are there natural laws?, by M. Pigliucci, Rationally Speaking, 3 October 2013.

[15] The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next, by L. Smolin, 2006.

[16] For Special Relativity, see chapter 13 of Smolin’s book. This has to do with the so-called GZK prediction, which represents a test of the theory at a point approaching Planck scale, where quantum mechanical effects begin to be felt. Regarding General Relativity, the comment is found in chapter 1.

127 thoughts on “On the (dis)unity of the sciences”

  1. If I understand correctly, then I would comment that no physicist should buy into theoretical reductionism for the simple reason that there is a glaring and fundamental example of an “emergent phenomenon” in classical physics: the emergence of the second law of thermodynamics (with monotonically increasing entropy), which is not inherent in Newtonian mechanics (which is symmetric under time reversal). There are many other examples, but already in 1900 it would seem that theoretical reductionism – and in the most fundamental of the sciences – should have been dead. Am I missing something?

  2. Hi Massimo,

    I automatically accepted the physicists’ idea that — in principle at the least — everything boils down to physics, that it makes perfect sense to go after a “theory of everything.”

    We should be clear what the “physicists’ idea” actually is (by that I mean the idea that physicists actually hold, as opposed to the idea attributed to them by some philosophers!).

    To a physicist, “reductionism” and “everything boiling down to physics” means: (1) ontological reductionism, and (2) the idea that a complete simulation of a system at a lower level would manifest the emergent higher-level properties. The first is explicitly accepted in this essay, and I don’t see any dispute over the second.

    That does not say that higher-level concepts are superfluous, nor that they can be sensibly re-written in lower-level terms. The phrase “theory of everything” (always a somewhat tongue-in-cheek phrase) is a commitment to those ideas (1) and (2). But it does not imply the — utterly absurd — idea that you can start with a model of particle physics, do a few lines of algebra, and produce a cheetah stalking an antelope.

    You could, in principle, put together a sufficiently complete, comprehensive and large-scale particle-physics simulation, let it run for a few billion years of simulated time, and out of that might emerge a cheetah-like entity in the same way that it actually did in the real world. But that is not at all the same thing as a neat and simple theoretical model “bridging” the different levels. That latter idea doesn’t work, since the world is far too messy and contingent for that. And I don’t think that that idea is at all prevalent among physicists or any scientists.

    Thus, the idea of “theoretical reduction” — if it is supposed to mean something in addition to the above two ideas — is close to a strawman, which the philosophers are welcome to set fire to, if it pleases them.

    But, it seems to me that reductionism in the sense of the above two ideas is sufficient to regard the sciences as “unified”. Further, if physicists are resistant to talk of abandoning “reductionism”, it is because by that term they mean the above two ideas (does anyone want to attack either of those?). As ever, there is a risk of miscommunication between scientists and philosophers if they mean different things by the same terms.

  3. The essay above expresses how I see it playing out too from a programming-language theoretic perspective.

    Domain-specific languages (DSLs) are being created within all the various sciences, e.g. in biology[1] (just to pick one example). When various sciences (even various parts of a science) are expressed in their own DSL, then there is no one language of everything.

    [1] http://ceur-ws.org/Vol-724/paper3.pdf
    International Workshop on Biological Processes & Petri Nets (2011)
    GReg (Gene Regulation Language) is a Domain-Specific Language designed to describe genetic regulatory mechanisms. We built it in order to illustrate the DSL approach, and the benefits it provides to research in the life sciences domain.

    I wanted to pick up on Coel’s comment above me. I would suggest the title of “computational reductionism” for the stance that he advocates; “theoretical reductionism” would then just be “computational reductionism” but with a very efficient reduction, where the last use of “reduction” is in the CS-theory sense of the word. I would agree that — if pressed — most physicists would probably subscribe to computational reductionism over theoretical reductionism; however, I would argue that many of them greatly underestimate the difficulty of simulation. Even in a hypothetical world where the laws of physics were Newtonian, simulating physics with any hope of “getting cheetah-like entities” out would be nearly impossible, for the usual mathematical and computational issues with prediction.

    Main point: if you are going to commit yourself to computational reductionism then you have to take computation (and its theoretical limitations) seriously.

    A second point that I wanted to touch on (that I wish Pigliucci would have raised) is why do people like to say things like “everything is reducible to physics”? As we see, it is seldom because they actually then proceed to reduce the thing in question to physics and offer us some new and deep insight. Instead, it often seems to be an act of oppression, a way to belittle the special science being discussed. Although not directly related to theoretical reductionism — instead related to the broader theme of plurality of methods — I often see this abused in misuses of ‘this is not falsifiable’. Of course, taking this further will move us into the demarcation of science from pseudoscience and other areas on which Pigliucci can give us much insight.

  5. Hi Massimo,

    Great article! The only comment I have after reading it is that somehow you seem to be tiptoeing around the actual conclusion, without ever stating it plainly — reductionism is a failed idea.

    “So, are Dupré, Fodor, Hacking and Cartwright, among others, right?”

    Yes, of course they are. Harry Ellis nailed it in the first comment above — the second law of thermodynamics is irreducible to the fundamental laws of physics, and thus constitutes a (by now famous) example of strong emergence in physics and a counterexample to theoretical reductionism. There are also other examples.

    Moreover, there are also examples against the “ontological” reductionism, and their presence is usually nothing short of spectacular when physicists stumble upon them (they typically imply Nobel prizes etc). The solar neutrino problem is my favorite, but historically the most well-known one was the discovery of spin. Finally, at the very conceptual level, Goedel’s first incompleteness theorem essentially kills any idea of a possible “theory of everything”, and with it both ontological and theoretical reductionism.

    As a side note, incidentally I am halfway in the process of writing my second article for SS, on precisely the topic of reductionism and emergence — but from an angle of a physicist. Your article seems to have jump-started the discussion about reductionism that I was planning to get involved into… 🙂

  6. Hi Massimo,

    I wonder how many people actually hold the rather extreme reductionist views that you’re criticising. As you say, it’s not clear what people mean by “reduce”, but you yourself don’t spell out what interpretation of “reduce” you are targeting. Reading between the lines, it seems to me you are implicitly targeting an extreme interpretation: that higher-level models can be strictly deduced from lower-level models. Once stated explicitly, I think we can refute that position much more straightforwardly than you do. Higher-level models must use concepts that are not mentioned by lower-level models, and you can’t strictly deduce a conclusion about X from premises which don’t mention X.

    But the very fact that the position is so obviously false when it’s spelled out explicitly reinforces for me the conclusion that people probably don’t mean that. They are saying something vaguer, which might be interpreted that way, but doesn’t have to be. I’m inclined to be more charitable, and call their position vague rather than wrong.

    You wrote: The idea, I take it, is that when physicists like Weinberg (for instance) tell me (as he actually did, during Sean Carroll’s naturalism workshop [11]) that “in principle” all knowledge of the world is reducible to physics, one is perfectly within one’s rights to ask (as I did of Weinberg) what principle, exactly, is he referring to.

    You’re certainly entitled to ask for clarification. But if he failed to give any, I don’t think you are entitled to assume he was taking the extreme position. As described here, he didn’t even say he was talking about “theoretical reduction”.

    I also think the expression “theory of everything” is being misinterpreted. As far as I can see, in physics it doesn’t mean a theory from which everything else can be deduced. It’s only supposed to unify several aspects of fundamental physics, and even within that limited scope I don’t think “unify” should be read as “find something from which they can be deduced”. I think that a lower-level model can help us make sense of multiple higher-level ones, and in that sense it “unifies” them. But making sense of is not the same as deducing.

    The idea that most physicists are naive scientific realists, who don’t understand that what they are doing is modelling the world, seems belied by the fact that they call their theories “models”. I think the philosophical debate over scientific realism vs anti-realism is itself confused. I for one am inclined to say that all our descriptions of the world are models, both in and out of science. But I won’t say that they are “only” models, because that implies that there is something else for them to be. The question “truth-capable descriptions or models?” is making a false distinction.

  7. Hi Massimo,

    Cartwright has put forth … theory anti-realism … a denial … that laws of nature are (approximately) true generalized descriptions of the behavior of things. Rather, Cartwright suggests that theories are statements about how things (or particles, or fields) would behave according to idealized models of reality.

    I’m struggling to see the difference (though I haven’t read Cartwright’s book). Physical theories are models of how the world works. They are models that may be simplified and may be approximately true, but they are adopted because they work (at least, they work better than known alternatives).

    Whether the models are globally true or true in some region of the universe or some region of parameter space, and whether more general models apply more generally, are matters to be decided by empirical evidence (that is, by the standard scientific method of continually seeking to improve models).

    I don’t see anything “anti-realist” in what I’ve just said, but I also don’t see anything “rather radical” about Cartwright’s view. Everyone accepts that the phrase “idealised and approximate model” operates in between “physicist’s theory” and “how the world really is”.

    Hi Artem,

    … most physicists would probably subscribe to computational reductionism over theoretical reductionism, however, I would argue that many of them would greatly underestimate the difficulty of simulation.

    Maybe, but I’m not so sure. I think that most of them fully realise the difficulty of simulation and the fact that they’re working with very simplified and inadequate models — after all, computing that sort of simulation is what physicists spend their time doing these days.

    Taking an example, if an astrophysicist wants to know about, say, massive stars going supernova, they don’t try “theoretical reductionism”; that’s a non-starter: the problem is way too hard to have any sort of theory linking basic physics to the full complexities of a supernova explosion.

    So what they do is simulate it (aka, compute it). They throw in all the physics at a low level, and then watch the simulation go bang. They may gain physical insight such that they can report a hugely simplified account in words or equations, but there is no pretence at “theoretical reduction”.

    Ever since the widespread adoption of computers, the whole mindset in the physical sciences has been “simulate the low level and watch the emergent phenomena emerge”.

  8. What really struck me in this essay was the absence of the words “cause”, “causal” and “causality”.

    When people like Weinberg talk about “in principle” reductionism, are they simply denying emergent or “high-level” causality?

    In other words, rather than being an issue of everything being “made of quarks or whatever”, is ontological reductionism really an issue of whether all causes are reducible to these small, local, “fundamental” causes?

    Like

  9. Some initial thoughts, based on a quick “grokking” of the piece.

    I would, like Massimo, distinguish between Fodor and Cartwright, and would not go as far as Cartwright. (If we understand Smolin as going further than Fodor but not as far as Cartwright, I might be in his territory.)

    Perhaps we could call Fodor a non-realist (if that) if we use anti-realist for Cartwright?

    That said, on Cartwright’s definitional difference, aren’t we looping back to Hume and the Problem of Induction once again? In one sense, every law of nature is ultimately a phenomenological law, not a fundamental law. I don’t know if Cartwright directly trots in Ye Olde Problem of Induction, but it seems like that’s the bottom line here.

    I suppose one could try to trot in Academic Skepticism and the use of probabilities to distinguish “fundamental” from “phenomenological,” but an objector could raise questions about a prioris and other things.

    Massimo: One sidebar question. Can you give at least a thumbnail sketch of how a more generic “philosophy of science” has brachiated into a “philosophy of chemistry” and other individual sciences?

    Marko: With my caveat about not being ready to go all the way out with Cartwright, it nonetheless sounds like you have some good thoughts here, and I’m looking forward to it.

    And, per Harry Ellis:

    Something else irreducible? Time.

    Physics can define a second as a fixed number of cycles of the radiation emitted at a certain electromagnetic frequency when an electron changes orbits. Meanwhile, many living creatures have some sort of internal biological clock, one that can often be pretty accurate, but clearly doesn’t reduce to electron shell radiation.

    RichardWein: On Weinberg, if he can’t clarify why his idea, in his mind, doesn’t actually quack like a duck, but other people think it does quack like a duck, I think it’s a legitimate inference to say that it does indeed quack like a duck.

    Artem: I don’t believe I’ve seen you comment here before, but some very good thoughts. Your second link, about “intimidation” from physics, is very interesting. I would overall agree with your assessment of Coel, including, per your first link, that while this might be achievable in theory, it’s not in reality.

    Like

  10. Isn’t the very act of thinking reductionism? That we take a mass of sensory input and distill out some coherent order.
    As such this process doesn’t produce fundamentals, but only ordered structures. When we boil away all the soft tissue, what is left is the skeleton, not the initial seed. It seems that frequently one is mistaken for the other.
    Such as treating measurements (spacetime) as more fundamental than what is being measured. That the function of measuring distance and duration under similar conditions creates a tautology doesn’t mean the resulting mathematical formula is foundational to the reality it models. Distance is a measure of space. Time is a measure of action. They are as related as the ideal gas laws relate measures of temperature, pressure and volume. We wouldn’t consider temperature or pressure as dimensions of volume.
    If we think of time as the point of the present moving from past to future, and physics codifies this as particular measures of duration, it overlooks the broader reality that these events are being created and dissolved, the future thus becoming past, as tomorrow becomes yesterday because the earth turns. So every action is its own clock, and what we measure are frequencies of specific oscillations. While amplitude en masse is temperature, frequency en masse is noise, and so to measure time we need to isolate the particular signals of specific actions. Just as all knowledge is isolating a particular insight, frame, model, theory, etc. out of the infinite cacophony in which we exist.
    Therefore the very essence of knowledge is to define and thus limit and it is those limits which create definition. Knowledge is finitude. The other side of the coin is not chaos, but energy, which creates and destroys.
    Knowledge is of the past, while the future pushes up through the cracks.

    Like

  11. Harry,

    “no physicist should buy into theoretical reductionism for the simple reason that there is a glaring and fundamental example of an “emergent phenomenon” in classical physics: the emergence of the second law of thermodynamics (with monotonically increasing entropy), which is not inherent in Newtonian mechanics (which is symmetric under time reversal).”

    Yes, but I purposely stayed away from any talk of emergence, because it is in itself controversial, and at any rate raises precisely the same issues that I discuss in the main article: is the appearance of emergent properties an ontological statement about how things are (strong emergence) or an epistemic one about how we understand them (weak emergence)?

    Coel,

    “by that I mean the idea that physicists actually hold, as opposed to the idea attributed to them by some philosophers!”

    I really wish you could make a comment without dissing philosophers, but be that as it may, I mentioned physicists (e.g., Weinberg) who definitely hold the view I attributed to them, and proudly so.

    “means: (1) ontological reductionism, and (2) the idea that a complete simulation of a system at a lower level would manifest the emergent higher-level properties”

    I have actually never heard a physicist frame (2) the way you do. Weinberg isn’t looking for a simulation of the universe, whatever that might look like, but rather for as simple a fundamental theory — expressed in equations — as he can find. Doesn’t sound at all like a simulation to me.

    “The phrase ‘theory of everything’ (always a somewhat tongue-in-cheek phrase) is a commitment to those ideas (1) and (2).”

    I think you are just wrong about this. A theory of everything implies (1) and it says nothing about (2), as mentioned above.

    “You could, in principle, put together a sufficiently complete, comprehensive and large-scale particle-physics simulation, let it run for a few billion years of simulated time, and out of that might emerge a cheetah-like entity in the same way that it actually did in the real world”

    Here we go again with the simulation thing. A simulation isn’t a theory of anything. Simulations are not scientific theories, though they may embed, or be derived from, scientific theories. Theoretical physics simply is not in the business of producing simulations in this sense, it is in the business of producing theories.

    “And I don’t think that that idea is at all prevalent among physicists or any scientists.”

    Funny, I find it almost ubiquitous.

    “the idea of ‘theoretical reduction’ — if it is supposed to mean something in addition to the above two ideas — is close to a strawman, which the philosophers are welcome to set fire to, if it pleases them.”

    And there you go again. It’s getting tiresome, frankly.

    “I’m struggling to see the difference (though I haven’t read Cartwright’s book). Physical theories are models of how the world works.”

    You may want to read Cartwright. She is using the words “theory” and “model” to indicate distinct activities / objectives, and the distinction is pretty standard in philosophy of science. Your use of the word model is actually closer to Cartwright’s use of the word theory. To use an example, a map of the NYC subway would be a theory, not a model — sensu Cartwright — of the real thing. The difference is that a theory intends to describe the way things are (at least approximately), while a model is entirely pragmatic: there is no need to assume that it approaches reality, as long as it works. For example, fitting a data set with a polynomial equation may give you accurate predictions but tell you nothing about what is really going on, physically, with whatever phenomenon produced the data in question. This is a situation that actually occurs rather frequently in a number of branches of biology.
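    Massimo’s polynomial example can be made concrete with a small toy (invented data, not from any real study): a cubic fit to data generated by an exponential process predicts accurately inside the sampled range while embodying nothing true about the mechanism, as its extrapolation failure shows.

```python
import numpy as np

x = np.linspace(0.0, 2.0, 20)
y = np.exp(x)                      # the (hidden) process that made the data

coeffs = np.polyfit(x, y, deg=3)   # the pragmatic "model": a cubic
fitted = np.polyval(coeffs, x)

# Inside the sampled range the cubic is an excellent predictor...
print(np.max(np.abs(fitted - y)) < 0.2)                  # True
# ...but it captures no exponential mechanism, so extrapolation fails badly.
print(abs(np.polyval(coeffs, 5.0) - np.exp(5.0)) > 10)   # True
```

    The fit “works” in Cartwright’s pragmatic sense without approaching the truth about what generated the data.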

    Artem,

    “‘theoretical reductionism’ would just be ‘computational reductionism’ but with a very efficient reduction”

    Again, I simply don’t see physicists doing anything along the lines of what Coel describes whenever they talk about fundamental theories. Quantum mechanics, general relativity, superstring theory and loop quantum gravity are not simulations of anything, they are mathematically formulated theories, just like Newtonian mechanics.

    “why do people like to say things like ‘everything is reducible to physics’? As we see, it is seldom because they actually then proceed to reduce the thing in question to physics and offer us some new and deep insight. Instead, it often seems to be an act of oppression, a way to belittle the special science being discussed.”

    Yep. Or cultural imperialism. That was, of course, implied in my essay.

    Marko,

    “I am halfway in the process of writing my second article for SS, on precisely the topic of reductionism and emergence — but from an angle of a physicist. Your article seems to have jump-started the discussion about reductionism that I was planning to get involved into…”

    Looking forward to it!

    Richard,

    “I wonder how many people actually hold the rather extreme reductionist views that you’re criticising.”

    I don’t have statistics, but in my career as a scientist I found it to be the default position of pretty much any colleague with whom I’ve had a conversation on these matters.

    “you yourself don’t spell out what interpretation of ‘reduce’ you are targeting”

    I thought I did when making the distinction between ontological and theory reduction. The latter can be further articulated as Fodor does: one can deduce higher level theories and laws by means of lower level ones plus “bridge” laws or theories. The SEP article on the philosophy of chemistry discusses this in some detail.

    “Higher-level models must use concepts that are not mentioned by lower-level models, and you can’t strictly deduce a conclusion about X from premises which don’t mention X.”

    Unless one is an eliminativist about higher level concepts, i.e. one thinks that once the proper lower level theory is in place the higher level concepts will no longer be necessary. Fodor (and I) think this ain’t gonna happen, of course.

    “You’re certainly entitled to ask for clarification. But if he failed to give any, I don’t think you are entitled to assume he was taking the extreme position. As described here, he didn’t even say he was talking about ‘theoretical reduction’.”

    Actually, from my follow up conversation with Weinberg it seems to me that’s exactly what he was talking about. But he had a rather unsophisticated grasp of what theory reduction is or how it would work.

    “It’s only supposed to unify several aspects of fundamental physics, and even within that limited scope I don’t think “unify” should be read as ‘find something from which they can be deduced’.”

    Agreed. But at the very least this means that fundamental physicists need a serious lesson in humility and they ought to rename what they are after. But, again, I have observed a number of them sliding from the proper interpretation — the one you give — to far more broadly reaching ones, which are clearly unsustainable.

    “The idea that most physicists are naive scientific realists, who don’t understand that what they are doing is modelling the world, seems belied by the fact that they call their theories ‘models’”

    I disagree. First off, there is a difference between theories and models (in the strict sense), so they ought not to be confused. Second, again, pretty much any scientist I’ve encountered — not just physicists — turned out to be a realist as defined in philosophy of science.

    “I think the philosophical debate over scientific realism vs anti-realism is itself confused”

    I beg to differ. It is one of the clearest debates one can find in philosophy.

    Asher,

    “rather than being an issue of everything being ‘made of quarks or whatever’, is ontological reductionism really an issue of whether all causes are reducible to these small, local, ‘fundamental’ causes?”

    Ah, yes, causality! I stayed away from that one too, partly because I’m thinking of a separate essay about it. The interesting thing is that the concept of causality plays an irreplaceable role in the special sciences, but hardly appears at all in fundamental physics. That’s something to explain, of course, and I’d like to hear a physicist’s opinion of it.

    Socratic,

    “on Cartwright’s definitional difference, aren’t we looping back to … Hume and the Problem of Induction once again?”

    Indeed. Cartwright’s (and Hacking’s) view of laws of nature is definitely Humean. I should add that a survey of philosophers’ opinions about a number of questions within their profession reveals that the majority of philosophers of science are Humean about laws of nature (which is not to say that they endorse the whole of Cartwright).

    “In one sense, every law of nature is ultimately a phenomenological law, not a fundamental law”

    That is precisely what Cartwright thinks. But if that’s true, there is nothing fundamental about fundamental physics, in terms of theory production.

    “Can you give at least a thumbnail sketch of how a more generic “philosophy of science” has brachiated into a “philosophy of chemistry” and other individual sciences?”

    That would likely require a separate post, but my general idea about how philosophy makes progress (forthcoming book from Chicago Press!) is that once a given field — say, science — spins off from its philosophical womb, philosophy uses its tools to develop a new discipline along the lines of “philosophy of.” As that field (science) itself splits up into a number of more specialized ones, so does, in parallel, the corresponding “philosophy of.” Indeed, nowadays there isn’t even a philosophy of physics per se; there is philosophy of quantum mechanics, of time and space, etc.

    Like

  12. Hi [b]Massimo[/b], interesting article. This is an issue I really hadn’t given much thought to and had a general “feeling” that all of science could be collapsed to physics level descriptions. I like the split between ontological and theoretical reduction and you’ve managed to plant a seed of doubt in my mind that scientific theories can be reduced (to that extreme degree).

    That said I don’t know if I agree with your claim that: “Mendelian genetics has not been reduced to molecular genetics, in case you were wondering”

    It certainly hasn’t been reduced at this time, but unlike other scientific theories considered in your article I don’t see why Mendelian (or classical) genetics could not reduce theoretically to molecular genetics. The whole field of bioinformatics seems contingent on that possibility.

    Hi [b]Coel[/b], first I want to note that I had a late reply in your last thread which contained a question. You didn’t reply and I wasn’t sure if you missed it or not.

    On this subject, the idea that a perfect simulation of low level rules or entities would produce emergent properties (presumably even the new rules of a “higher” scientific theory) is seductive. I also sort of had that feeling. But I think there may be real bars to this. A functional theory at a certain level may be useful yet totally incongruous with the rules describing the underlying features that make it up. Or, perhaps equally problematic, there may be so many ways that lower level phenomena can be arranged to produce the higher level phenomena that the emergence of the higher theory would not itself lend credence to the validity of the lower level model. I guess I’m talking about non-unique solutions.

    Hmmm. I hope that makes sense.

    I also was not clear about the difference between theoretical reductionism and “throwing in all the low level physics” within the following statement: “They throw in all the physics at a low level, and then watch the simulation go bang. They may gain physical insight such that they can report a hugely simplified account in words or equations, but there is no pretence at “theoretical reduction”.”

    Like

    “Here we go again with the simulation thing. A simulation isn’t a theory of anything. Simulations are not scientific theories, though they may embed, or be derived from, scientific theories. Theoretical physics simply is not in the business of producing simulations in this sense.”

    I don’t think a simulation is supposed to be a theory in Coel’s formulation. A simulation in the sense he’s talking about it simply serves as a test for causal reductionism (and more widely, a test of theories themselves). If the same higher-level causal behavior that exists in the world emerges when only low-level causes are at work in the simulation, it shows that the physical theory is causally complete.

    It also – arguably – shows theoretical completeness, because no other theoretical structures besides the low-level ones at work in the simulation are required for the simulation to act as the world acts.

    Coel seems to be arguing that it is this kind of causal/theoretical completeness that Weinberg is embracing. Artem, I think, has it right about how sure we can be about it at this point.

    Like

    “The interesting thing is that the concept of causality plays an irreplaceable role in the special sciences, but hardly appears at all in fundamental physics. That’s something to explain, of course, and I’d like to hear a physicist’s opinion of it.”

    I’d very much like to hear a physicist’s opinion too. I’ve read a bit about it, but I don’t really get how causality “disappears”. If it’s just considered to be some flavor of “patterned behavior”, then causality is patterned behavior, which is fine by me if it does the job.

    Like

    “Fodor’s target was, essentially, the logical positivist idea (still exceedingly common among scientists, despite the philosophical demise of logical positivism a number of decades ago) that the natural sciences form a hierarchy of fields and theories that are (potentially) reducible to each next level, forming a chain of reduction that ends up with fundamental physics at the bottom. So, for instance, sociology should be reducible to psychology, which in turn collapses into biology, the latter into chemistry, and then we are almost there.”

    I question that this is a logical positivist idea – in fact they rejected that this could be stated as an assumption. Otto Neurath was said to shout out “Metaphysics!” any time it was suggested.

    We use the term “Physicalism” these days to refer to a metaphysical thesis and it has been lost that the term was originally coined for a radically anti-metaphysical thesis. Otto Neurath originally proposed that the way to unify science was at the observational level – in fact at the language level. He proposed to unify science around language about the observation of physical things.

    He wasn’t rejecting the idea of theoretical reductions, only that this could be a starting assumption for science. We know from his essay “Physicalism” that other members of the Vienna Circle (such as Carnap, Schlick, Gödel) signed onto this idea, and major scientists such as Bohr were also on board.

    The theoretical unity of science, they held, was something that had to be established empirically, not assumed from the start. If he had heard Sean Carroll say that, in principle, all knowledge could be reduced to physics, Neurath would have asked him to show how an experiment could be designed to demonstrate this.

    Like

  16. Hi Asher, you said:

    “If the same higher-level causal behavior that exists in the world emerges when only low-level causes are at work in the simulation, it shows that the physical theory is causally complete.”

    Unfortunately that isn’t true, which is what I was trying to get at in my answer to Coel above. Within complex systems there is the possibility of non-unique (i.e. multiply valid) solutions. Failure to produce an emergent property could be used as a way to reject or point to insufficiencies of a theory, but success does not mean the theory is correct or “complete.” There could be others.

    I just attended a conference where a modeler (from the Blue Brain project) admitted this very thing. Several theories regarding biological function could produce in silico the same results found via experiments.
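    The non-uniqueness worry can be sketched with a toy pair of dynamical systems (hypothetical, and of course nothing like the Blue Brain models): two different micro-dynamics that generate exactly the same macro-observable, so reproducing the macro-behaviour cannot by itself validate either micro-model.

```python
import numpy as np

# Both transition matrices have column sums of 0.9, so the "macro"
# observable M = x + y decays by exactly 10% per step in each model,
# even though the micro-dynamics differ (no coupling vs. strong coupling).
A = np.array([[0.9, 0.0],
              [0.0, 0.9]])
B = np.array([[0.6, 0.2],
              [0.3, 0.7]])

state_a = state_b = np.array([1.0, 2.0])
for _ in range(10):
    state_a = A @ state_a
    state_b = B @ state_b

print(np.isclose(state_a.sum(), state_b.sum()))  # True: identical macro-history
print(np.allclose(state_a, state_b))             # False: different micro-states
```

    Both systems are fully deterministic; matching the emergent trajectory of M underdetermines which micro-dynamics actually produced it.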

    To anyone, how are people making names bold? I think that is a nice way of calling attention to who you are replying to. Unfortunately my attempt to manually insert the format didn’t work out right.

    Like

  17. @brandholm:

    “Within complex systems there is the possibility of non-unique (i.e. multiply valid) solutions.”

    Can you give me an example? All of the systems I’m familiar with are deterministic, albeit sensitive to initial conditions.

    I’m careful to say “causally complete” because if two sets of models produce precisely the same behavior, then I’d say those two theories are the same theory. There are many, many problems with levels of detail, indeterminism, etc., but I’m not following how there can be multiply valid solutions in a deterministic system.

    I think people use the standard HTML “b” tag to bold things. There’s no preview, so I can’t test that out. I don’t think BBCode works on this blog.

    Like

    I’m not a professional physicist but the impression I have of the discipline is something along these lines. Physicists are certainly ontological reductionists—they think all material things are complex assemblies of vast numbers of elementary parts of relatively few kinds. They also think they know how these elementary parts interact, that is, their dynamics. This makes them Laplacians. They believe that the dynamics of complex wholes is completely accounted for by the dynamics of the elementary parts and nothing else. To them, the complex thus ‘reduces’ to the simple.

    This, however, is rather an act of faith. The not terribly complex assemblies whose dynamical equations can be solved do seem to behave according to these solutions. Likewise the somewhat more complex assemblies that are amenable to numerical solution (Coel’s simulations). At the other end of the scale very large assemblies can be successfully treated by statistical methods or by approximating them to continua. In between lies everything of interest to the rest of science, and relatively little of this succumbs to the physicist’s methods.

    Perhaps we could describe this approach as ‘methodological reductionism’ by analogy with methodological naturalism, but it seems to have little contact with the philosopher’s notion of theoretical reduction. Is this picture roughly right, and if so, how does it amount to ‘imperialism’?

    Like

    “The idea, I take it, is that when physicists like Weinberg (for instance) tell me (as he actually did, during Sean Carroll’s naturalism workshop [11]) that ‘in principle’ all knowledge of the world is reducible to physics, one is perfectly within one’s rights to ask (as I did of Weinberg) what principle, exactly, is he referring to.”

    There appear to be some very obvious counterexamples. Computation, for example. Computation forms a very important part of our knowledge of the world, not only for computers that we have built but for naturally occurring computers, i.e. brains. And yet we can completely understand computation without any reference at all to the physical substrate upon which it might be instantiated.

    We can instantiate computations on a wide variety of physically disparate systems; computations do not even require the laws of physics to be the way they are, or even slightly similar.

    Or did Weinberg mean “except for our knowledge of mathematics”?

    “Ontologically speaking, most people would agree that all things in the universe are made of the same substance (the exception, of course, are substance dualists), be it quarks, strings, branes or even mathematical relations [5]; moreover, complex things are made of simpler things. For instance, populations of organisms are nothing but collections of individuals, while atoms are groups of particles, etc. Fodor does not object to this sort of reductionism, and neither do I.”

    As you might expect, I am not one of the “most” people who would agree with this.

    For a start I am not sure if “substance” is really meaningful in this context.

    Quarks, strings, branes are, from our point of view, mathematical relations in any case, so this really boils down to “there may or may not be something that these mathematical relations are describing”.

    “Complex things are made of simpler things” seems to suggest that there is a completely non-complex entity that is capable of accounting for everything. That goes beyond what we can claim to know and seems to me slightly problematic in any case.

    Like

  20. Hi Massimo and Asher Kay,

    “The interesting thing is that the concept of causality plays an irreplaceable role in the special sciences, but hardly appears at all in fundamental physics. That’s something to explain, of course, and I’d like to hear a physicist’s opinion of it.”

    Ok, I’ll bite (and I’m a physicist), what exactly is the problem? What type of causality do you say is present in special sciences and not present in fundamental physics? I fail to understand the problem, care to give me an example perhaps?

    By the way, note that the notion of causality is related to the notion of determinism, which I already wrote about. When a ball is kicked, it changes its state of motion. The cause of this change is the force that acted on the ball (interaction with the foot). OTOH, when some unstable atomic nucleus decays, its state is changed without any cause whatsoever (the decay is random and causeless). So if we are dealing with a system that can be described deterministically (at least to a certain level of approximation), one can identify the causal chain of events. However, if we are dealing with a system that cannot be described deterministically, identifying the concept of a “causal chain of events” mostly ceases to make sense. The latter scenario is typical in fundamental physics, given the quantum nature of phenomena involved.

    But I am not sure that this is the answer you are looking for, since I don’t really understand the question precisely enough.

    Like

  21. These are more examples of modern philosophers who are anti-science. You should be immediately suspicious when philosophers announce that scientists do not know what they are doing. Scientists at least have a track record of accomplishments. None of these philosophers have contributed anything worthwhile.

    Cartwright is writing nonsense with: “Neither quantum nor classical theories are sufficient …” Classical mechanics is a macroscopic approximation to quantum mechanics. The disunification she describes does not exist.

    SciSal says: “nobody has any clue about how to even begin to reduce the theory of natural selection”. Sure they do. Darwin said “survival of the fittest”. Then fitness was defined in terms of survival, making it a tautology. Others applied the concept to genes. If someone wrote a computer simulation, such as what Coel describes, no extra programming would be needed for natural selection.
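    For what it’s worth, the claim that selection needs no extra programming can be tested with a minimal replicator sketch (an invented toy, nothing more): survival odds are an inherited, mutating trait, nothing in the code mentions “selection,” and yet mean fitness climbs.

```python
import random

random.seed(0)

def generation(pop):
    # each individual's value IS its survival probability;
    # survivors copy themselves with small mutations
    survivors = [v for v in pop if random.random() < v]
    children = [min(1.0, max(0.0, v + random.gauss(0, 0.02)))
                for v in survivors]
    pool = survivors + children
    return random.sample(pool, min(len(pool), 200))  # fixed carrying capacity

pop = [0.5] * 200
for _ in range(100):
    pop = generation(pop)

# "Natural selection" was never coded as a rule, yet mean survival
# probability has risen above its starting value of 0.5.
print(sum(pop) / len(pop) > 0.55)   # True
```

    Whether this vindicates the “no extra programming” claim, or merely shows another case of emergence read off a simulation, is exactly the question at issue in this thread.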

    Ellis, you are missing something with your entropy argument. There are reductionist arguments, as you can easily find in the Wikipedia article.

    Vojinovic’s examples are even more nonsensical, but I guess he will elaborate in his own article.

    The philosophers of chemistry say that chemistry has not been fully reduced to physics. Okay, but did someone say that university chemistry departments were subsets of physics departments? This is no argument against reductionist reasoning.

    No, I do not see where Dupré, Fodor, Hacking and Cartwright have made any points that need to be taken seriously. What they say has very little to do with modern science.

    Like

  22. Coel,

    “Thus, the idea of “theoretical reduction” — if it is supposed to mean something in addition to the above two ideas — is close to a strawman, which the philosophers are welcome to set fire to, if it pleases them.”

    Coel, it’s very hard to understand your dismissal of this idea when you have defended it so many times yourself. Consider your comments on The Scientism Yippee or Sucks pt 1 article: “Instead scientism says that the different areas mesh seamlessly, without any abrupt ontological or epistemological divides.” (pg 1) Again: “Currencies would be social contracts, and thus ontologically would be about brain states. Brain states are then patterns of physical material.” (pg 2) Philosophical anti-reductionism is concerned with denying exactly these kinds of claims, and so the position philosophers term reduction is not one defended exclusively by straw men, excepting that you yourself are made of straw. (Other scientists not clearly made of straw defended similar, if somewhat vacillating, positions at the Moving Naturalism Forward conference often referred to here at SciSal.) Of course, Coel, the trouble is that you have systematically equivocated on what you understand by reductionism. Philosophers who would help you clarify that have been suggested to you many times, but then that would mean taking philosophy seriously…

    Like

    I’m sure that M-theory makes many physicists cringe, but for a moment let’s assume it holds water. If so, it is implicit in M-theory that physics as we know it may in fact be, and by definition would be, situational / “phenomenological” rather than “fundamental”. The laws of physics, and possibly matter as we know it (both at the sub-atomic level and the physical level), could be and most likely are qualitatively different in qualitatively different universal planes. Hence, physics cannot form the basis of all science / life / nature. Expanding our notions of physics into outlier dimensions moves the unifying point away from physics and into situational context.

    I know it sounds ridiculous to think of imaginary universes other than the ones we observe, but then again that’s what they said about gravity, and quantum physics, and even simple germs at one point.

    Like

  24. SciSal: “The interesting thing is that the concept of causality plays an irreplaceable role in the special sciences, but hardly appears at all in fundamental physics.”

    I am baffled by this comment, as causality is absolutely central to fundamental physics. I cannot think of any part of physics that can function without it. Saying that causality does not appear in physics is like saying energy does not appear in physics. What is physics without energy and causality?

    Perhaps you have been influenced by some misguided philosopher like Bertrand Russell. He wrote a 1913 essay saying that “the law of causality, as usually stated by philosophers, is false, and is not employed in science.” He also said that physics never even seeks causes. I do not know how he could say anything so silly, as all the physics textbooks use causality.

    Vojinovic: “notion of causality is related to the notion of determinism”

    Not really. There are people who work on deterministic and non-causal interpretations of quantum mechanics, and non-deterministic and causal interpretations. So they are not so tightly related.

    I also disagree with your claim that radioactive decay is causeless, and that a causal chain of events makes no sense with quantum phenomena. The atomic nucleus consists of oscillating quarks and gluons, and the decay might be predictable if we could get a wave function for everything in that nucleus. That seems impossible, but there is still a causal theory for how that decay takes place, and the decay can be analyzed as a causal chain of events. Saying that radioactive decay is causeless is like saying a coin toss is causeless. It may seem like it has a random outcome to the casual observer, but there are causal explanations for everything involved.

    Quantum mechanics is all about finding causal explanations for things like discrete atomic spectra, where such an explanation must have seemed impossible.

    Yes, there are philosophers of physics today who work on those non-causal interpretations. As far as I know, nothing worthwhile has ever come out of that work.

  25. Well, Hume has been mentioned repeatedly here, but the philosopher receiving validation here is Kant. It was Kant who determined that the regularities of nature are what define for us ‘the laws of nature’ (since what we can know are phenomena, not any ‘essences’). Thus such laws must be constructs of the mind. Who knows what the physical thing ‘actually’ is or ‘actually’ does? And that greatly pleases me, because, as the end of ‘classical’ philosophy (from Aristotle), Kant must also be the beginning of modern philosophy.

    Even if we reject Kant, we must admit we are still within the sphere of his problematics.

    There are also other issues; this matter has greater import than first appears. If the sciences are not reducible, then a number of problems come to the fore. Some have already been noted. Here, I will note only one (because I am writing an essay on it): If the sciences are not reducible to physics, then strict incompatibilist determinism has no foundation.

  26. There seem to be three different issues here:

    First, whether there is an underlying unity to what happens in the universe. I am seriously unable to even so much as conceive of that not being the case, because how could e.g. what happens at the population genetic level be disconnected from what happens at the quark level if the populations consist of quarks? And as far as I understand, and as discussed in this essay, most people seem to agree that there is unity to the universe.

    Second, whether we can achieve the unity of scientific theories, or the reduction of all to a few fundamental rules, in practice. I understand that it appears arrogant of somebody to say that we should be able to do that in principle even if they currently have no idea where even to start. The thing is, it quite simply follows logically from the previous consideration. If there is unity to the universe, then one should in principle be able to describe it in a unified fashion, although one might need greater capabilities to achieve that aim than humans will ever realistically have at their disposal.

    I am also rather unconcerned about this issue because reductionism is quite simply impractical. I would never dream of describing phylogenetic relationships in terms of particle physics because a simple tree graph is a much more useful summary.

    Third, whether scientific theories are “wrong”. This is where I am most puzzled, even slightly exasperated. Many philosophically minded people seem to consider the observation that science is always tentative as some awesome insight that they just discovered and that all the scientists are naively ignorant of.

    Yes, of course any theory or model we have is only an approximation of what is really happening out there. But we know that, always, and we are always trying to build the next better one. And we will never be able to be sure that our model or theory is now so exactly, precisely and exhaustively a description of what goes on that we will never find a divergence; and we know that!

    All of that does not mean that one should call these theories “wrong”. If they are wrong, then nothing is ever correct, and the two terms have no meaning outside of pure arithmetic.

    So essentially, the claim that all science is wrong is merely a deliberately hyperbolic description of what every half way competent scientist knows: science attempts to describe reality as well as currently possible. It certainly does not demonstrate any fundamental disunity.

  27. schlafly,
    Your arguments are basically equivocations; the most glaring of which is, “The philosophers of chemistry say that chemistry has not been fully reduced to physics. Okay, but did someone say that university chemistry departments were subsets of physics departments?” Should one take this seriously? The question is whether the epistemology of chemistry, and the knowledge acquired thereby, can be reduced to that of physics.

    But you had to know this when you wrote what you did, surely. Flippancy does not constitute argumentation.

    Coel,
    the problem with your ‘simulation’ stipulation is that what you are suggesting is that physicists no longer build a theory and test it, they just chug information into the simulation and report what occurs. Such a process could provide us with new information, but it doesn’t really have much to do with the issue at hand. (After all, such simulations are tantamount to thought experiments; chug information compatible with silicon based life, and after several billion years we might get a silicon cheetah out of it.) Massimo is quite right that a number of physicists have argued for a far stronger ToE than you assert they do (the quotes are too numerous, and I am too busy to collect them here, now; but really, they are out there and known).

    Marko Vojinovic.
    “However, if we are dealing with a system that cannot be described deterministically, identifying the concept of a “causal chain of events” mostly ceases to make sense. The latter scenario is typical in fundamental physics, given the quantum nature of phenomena involved.”
    Well, you kind’a answered your own question.

    David Brightly,
    The question is not whether physics can be reduced to a certain methodology, but whether other sciences can be reduced to physics. This would include chemistry and biology, but also, at the outer limits, sociology, psychology, perhaps even economics, etc.

    But if even chemistry cannot be reduced to physics, then the whole game is over.

    Personally, I think the universe is not only more probabilistic and pluralistic than we imagine, but more so than we can imagine.

    We may need to settle for an agglomeration of theories concerning various domains, without any unification possible. If so, the question becomes, why ever did we want such a unification in the first place?

  28. @Coel
    About your “reduction by simulation” idea:
    – theoretical models (=interpreted math structures which make the theory true) are not computer models (you need to discretize space-time, approximate things, etc.)
    – even for theoretical models there are usually no analytic solutions to the equations; you need semi-classical approximations and the like (e.g. perturbations in QM). You also need to isolate your system from its environment.
    – even then you cannot deduce chemical properties of molecules from pure physical models without injecting chemical assumptions (i.e. the simulation fails at the first stage). I doubt you could get a cheetah: too much approximation already.
    – if you could, you’d then have to solve the measurement problem, or you’d have infinitely many worlds where, perhaps, cheetahs are hiding. (Perhaps spontaneous collapse would help, but we don’t know today.)
    – even then you’d need to inject initial conditions where your cheetah already exists (unless you want to model the emergence of life, with an infinitesimal chance of actually getting a cheetah)
    – even then I wonder if you don’t need inter-theoretical reduction to say “this is cheetah-like” and declare that the simulation actually worked

    You might want to assume idealistic principles to answer these points (ideal computing efficiency, ideal physics, computing the whole universe…) but then your view will collapse into ontological reductionism simpliciter.

    For these reasons, I think the notion of theoretical reduction (type reduction, not token) is more relevant.

  29. Hi Massimo,

    I find much to disagree with, and would also endorse the comments already made by Coel and Richard Wein. Like Coel, I fear that a straw man is being attacked, and the kinds of criticisms being levelled are the kind that my view of scientific reductionism would seem to be immune from.

    > it is the ability to reduce all scientific laws to lower level ones

    You need to specify more clearly what it is to reduce a scientific law to a lower level one. I understand Fodor discusses bridge laws, but I don’t think that is how most reductionists think of it. Rather, I think Coel got it right. Reductionism is the view that any model built with appropriate low-level laws will exhibit the behaviour described by high-level laws, i.e. that strong emergence does not exist in nature.

    > the history of science has produced many more divergences at the theoretical level.

    This is not surprising! Even if complete unification were achieved tomorrow, the different disciplines would continue to proliferate indefinitely. For practical reasons it is usually impossible to study high level phenomena using low level descriptions. Disciplines will proliferate as new high level phenomena or ways of studying high level phenomena continue to be identified. This point is therefore entirely useless for the anti-reductionist.

    > the less relevance they seem to have to the very real problems of the rest of science.

    Also unsurprising! I am a computer programmer — every line of code I write reduces to a sequence of ones and zeroes, but I have no idea of the precise details of how this process works, and I don’t need to have such an understanding. It is irrelevant to me in exactly the same way as pointed out by this quote, but that doesn’t mean that there is no such theoretical reduction. Similarly, it is not useful or even possible to use knowledge about quarks in making psychological diagnoses. Simplified models built at higher levels are precise enough for our ends while being vastly more practical. This is only an epistemic limitation and says nothing about theoretical reductionism in the sense that is actually endorsed by anyone.
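    The programmer’s analogy is easy to make concrete. In Python, the standard-library `dis` module displays the lower-level bytecode that a one-line function reduces to; the exact instruction names vary between interpreter versions, and none of them are needed to reason about the function’s high-level behavior (the function name here is mine, purely illustrative):

```python
import dis

def fitness(survivors, total):
    """High-level description: a simple ratio."""
    return survivors / total

# The same function at a lower level of description; the opcodes
# shown (LOAD_FAST, BINARY_OP, ... on recent CPythons) are what
# actually runs, yet nobody reasons about ratios in those terms.
dis.dis(fitness)
print(fitness(75, 100))
```

    Both levels describe the same thing; the point at issue is only whether the lower-level description is ever the useful one.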

    > one is perfectly within one’s rights to ask … what principle, exactly, is he referring to.

    If I were to say “I quite like green tea and believe it to be healthy, but in practice I usually drink coffee”, would it make sense to ask “what practice?” I wouldn’t know how to understand your question. “In principle” is simply a synonym for “all practical considerations aside”. The only answer to “what principle” might be something unenlightening like “the principle of reductionism”, or “the principle that all scientific theories reduce to fundamental physics”, since that is simply the principle the original statement expresses.

  30. Hi Massimo (continued)…

    > But if instead I told you that I can prove to you that there is an infinity of prime numbers

    But nobody is claiming to have such a proof regarding reductionism. It’s more like the assertion that, in principle, one can always find a pair of prime numbers that add up to any even integer. There is no proof for this (Goldbach’s) conjecture, but it is widely believed to be true. “In principle” does not mean that there is a proof, it just means that practical considerations are disregarded. I may not be able to find prime numbers that add up to 123234556456456456, but that is only because the number is too big and doing so is too difficult for a lowly mortal like me.
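    For concreteness: verifying Goldbach for any particular even number is purely mechanical, even though no general proof exists, and that is exactly the “in principle vs. in practice” gap at issue. A minimal sketch (function names are mine, purely illustrative):

```python
def is_prime(n):
    """Trial division; slow but sufficient for illustration."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(even_n):
    """Return a pair of primes summing to even_n, or None.
    No proof exists that a pair always exists; it has simply
    never failed for any even number ever checked."""
    for p in range(2, even_n // 2 + 1):
        if is_prime(p) and is_prime(even_n - p):
            return p, even_n - p
    return None

print(goldbach_pair(100))  # → (3, 97)
```

    The check is trivial for 100 and hopeless by hand for a 30-digit number, yet nothing about the claim itself changes: that is all “in principle” means here.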

    Cartwright’s argument seems rather beside the point to me. Nobody is asserting that the current best scientific models are the fundamental laws of nature, rather the realist view is that there are such laws, whether we can ever find them or not. Our current scientific models are approximations. It is possible that those approximations are completely correct in some regards, but we know they cannot be perfect because of the disunity between GR and QM.

    > its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each one of which, of course, would be characterized by its proper domain of application.

    This misses the point that our current best understanding completely fails in situations where both GR and QM would apply at the same time, and these situations do occur in nature. If we manage to develop a theory which covers such situations, then we will have presumably found a theory of everything.

    A quick review of comments written since my last post shows you are not impressed by the simulation argument. The simulation stuff is only an illustration of what reductionism entails. It doesn’t mean that physicists necessarily think in terms of simulations. They think of models. A simulation is one way of representing or manifesting a model. It has the benefit of taking the manual work out of it and letting you easily appreciate the emergent patterns at work, but the same points could be made of any model even on paper.

    > Funny, I find it almost ubiquitous.

    I don’t think that reductionism as you present it is ubiquitous. If Coel rejects it (and Coel is one of the most scientistic reductionist guys out there, I suspect), then I suggest that there has been a miscommunication.

  31. “If Cartwright is correct, then, science is fundamentally disunified, and its very goal should shift from seeking a theory of everything to putting together the best patchwork of local, phenomenological theories and laws, each one of which, of course, would be characterized by its proper domain of application.”

    The pendulum has now swung too far in the opposite (and wrong!) direction. For many of us this amounts to a reductio ad absurdum of Cartwright. For science clearly is not ‘fundamentally disunified’. The special sciences may not be theoretically reducible to physics and chemistry, but nor do they exist in the glorious isolation that that phrase suggests. Think what a ‘fundamentally disunified’ bunch of sciences would look like. There is a clear and obvious sense in which we require the special sciences to be at least compatible with the fundamental sciences. A biology that broke the second law of thermodynamics would not do at all. So something has gone wrong somewhere in the philosophising.

  32. Robin Herbert wrote: “Otto Neurath originally proposed that the way to unify science was at the observational level – in fact at the language level. He proposed to unify science around language about the observation of physical things.”

    It seems that the currently increasing number of scientific domain-specific languages runs counter to that.

    Possibly the issue with causality is that it gets equated with temporal sequence. Yesterday doesn’t cause today; the sun shining on a spinning planet causes the sequence of events that we, at one spot on the surface, perceive as days.
    Energy exchange is causal, and causation is not the present moving from past to future, but change turning future into past, i.e. potential into actual into residual.
    While the outcome of an event might be determined, the input cannot be fully known prior to the event, since information, and the energy carrying it, travels at finite speeds.

  34. brodix,

    “Isn’t the very act of thinking reductionism? That we take a mass of sensory input and distill out some coherent order.”

    That’s not what reductionism means in this context.

    brandholm,

    “It certainly hasn’t been reduced to that at this time, but unlike other scientific theories considered in your article I don’t see why mendelian (or classical) genetics could not reduce theoretically to molecular genetics. The whole field of bioinformatics seems contingent on that possibility.”

    No, I don’t think it is. The issue, broadly, is that in Mendelian genetics genes are hypothetical functional units, while in molecular genetics they are informational pieces of particular substances (nucleic acids). The problem is that there is no one-to-one correspondence between the two, which makes reduction impossible. Of course, this doesn’t mean that molecular genetics is incompatible with Mendelian genetics, or vice versa.

    Asher,

    “I don’t think a simulation is supposed to be a theory in Coel’s formulation. A simulation in the sense he’s talking about it simply serves as a test for causal reductionism”

    But then Coel is changing the conversation. And in doing so he is introducing a thought experiment (the universal simulation) that simply will never be possible, thus making his further point entirely moot.

    Robin,

    “I question that this is a logical positivist idea – in fact they rejected that this could be stated as an assumption. Otto Neurath was said to shout out “Metaphysics!” any time it was suggested.”

    Well, I’m not sure what Neurath was shouting about, but it is a matter of introductory texts in philosophy of science that the logical positivists believed in the (scientific) unification of knowledge, about the same concept that, in a much less nuanced and sophisticated way, is still bandied about by people like EO Wilson (whom even Jerry Coyne took to task recently!).

    “Otto Neurath originally proposed that the way to unify science was at the observational level – in fact at the language level. He proposed to unify science around language about the observation of physical things.”

    Correct, but I’m not sure why this would be in contradiction with my statement that the positivists sought a unification of the sciences.

    “He wasn’t rejecting the idea of theoretical reductions, only that this could be a starting assumption for science.”

    Right, but it was a goal.

    “If he had heard Sean Carroll say that, in principle, all knowledge could be reduced to physics, Neurath would have asked him to show how an experiment could be designed to demonstrate this.”

    I knew there was something I liked about good ol’ Otto…

    “We can instantiate computations of a wide variety of physically disparate systems, computations do not even require the laws of physics to be the way they are, not even slightly similar. Or did Weinberg mean ‘except for our knowledge of mathematics’?”

    I don’t know, and I don’t want to make this a discussion about what Weinberg specifically said at the naturalism workshop. Still, the status of mathematics, and its relationship with physics, is certainly part of the debate. Next week physicist Max Tegmark (whom I interviewed for the Rationally Speaking podcast) will be giving a talk at CUNY’s Graduate Center Philosophy Program, and he is one who defends the idea that the universe is actually *made* of mathematical objects, whatever that means ontologically…

    “‘Complex things are made of simpler things’ seems to suggest that there is a completely non-complex entity that is capable of accounting for everything.”

    While I expected you to reject ontological monism, I’m not sure why this follows.

    David,

    “In between lies everything of interest to the rest of science, and relatively little of this succumbs to the physicist’s methods. Perhaps we could describe this approach as ‘methodological reductionism’ by analogy with methodological naturalism, but it seems to have little contact with the philosopher’s notion of theoretical reduction.”

    That sounds about right to me. And nothing of what I wrote (or Fodor, or Cartwright) should be construed as an attack on methodological reductionism, which has a pretty good (though sometimes a bit overblown) track record in the sciences. As for the “imperialism” part, I was not referring to methodological reductionism, but to the greedy theoretical variety, along the lines of “it’s all about (fundamental) physics, folks!” As Anderson argued, even most of physics is not about fundamental physics…

  35. Marko,

    “What type of causality you say is present in special sciences and not present in fundamental physics?”

    It isn’t a question of what type of causality, it is an issue of causality period. According to my understanding of fundamental physics (and yes, I did check this with several physicists, Sean Carroll among them) the concept of causality plays little or no role at that level of description / explanation. Quantum mechanical phenomena “occur” with this or that probability, following the predictions of a set of deterministic equations, but one doesn’t need to deploy talk of causality at all.
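    That contrast can be put in textbook form: the evolution of the quantum state is fully deterministic, while measurement outcomes are only probabilistic, and neither statement needs the word “cause” (standard notation, nothing here is controversial):

```latex
% Deterministic evolution of the state (Schrödinger equation):
i\hbar \frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat{H}\,|\psi(t)\rangle
% Probabilistic outcomes (Born rule): the probability of obtaining
% eigenvalue a_k in a measurement is
P(a_k) = \bigl|\langle a_k | \psi(t) \rangle\bigr|^{2}
```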

    On the contrary, one simply can’t do anything in the special sciences without bringing up causes. This is true for non-fundamental physics, chemistry, biology (especially ecology and evolutionary biology), and so forth.

    I suspect this has to do with the fact that equations in fundamental physics are time symmetric, while causality is an inherently time-asymmetric phenomenon, or principle, or whatever. And this, of course, without even touching on the delicate issue that people still don’t even agree on what, exactly, causality *is*. For Hume it wasn’t anything special, it was a creation of the human mind when it observes that Y regularly follows from X in short order. For others causal interactions are physical interactions where an invariant quantity (like energy, or momentum) is exchanged. But there are other accounts.

    “When a ball is kicked, it changes its state of motion. The cause of this change is the force that acted on the ball (interaction with the foot). OTOH, when some unstable atomic nucleus decays, its state is changed without any cause whatsoever (the decay is random and causeless).”

    Precisely. But notice that you just moved from classical to quantum mechanics.

    schlafly,

    “These are more examples of modern philosophers who are anti-science”

    No, it is your comment that is one more example of groundless condescension while refusing to engage with the actual argument in a way that may persuade others to actually take you seriously. Nonetheless, I’ll try…

    “Cartwright is writing nonsense with: “Neither quantum nor classical theories are sufficient …” Classical mechanics is a macroscopic approximation to quantum mechanics. The disunification she describes does not exist.”

    I’m afraid you either don’t get what Cartwright is saying or you don’t understand physics. Or possibly both. The equations of classical mechanics can be derived as approximations of the equations of quantum mechanics, I understand, but the physics put forth by the two theories is radically different, for instance in the way they treat space and time. Besides, Cartwright was simply making the eminently empiricist point that, as a matter of fact, we can’t do everything we want by using only classical or quantum mechanics, hence her comment about their respective insufficiency.
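    For what it’s worth, the standard illustration of “derived as approximations” is Ehrenfest’s theorem: quantum expectation values obey Newton-like equations, but Newton’s law proper is recovered only under an extra assumption:

```latex
\frac{d}{dt}\langle \hat{x} \rangle = \frac{\langle \hat{p} \rangle}{m},
\qquad
\frac{d}{dt}\langle \hat{p} \rangle = -\left\langle \frac{\partial V}{\partial x} \right\rangle
% Newton's second law for <x> follows only if <dV/dx> can be replaced
% by dV/dx evaluated at <x>, i.e. for potentials that vary slowly
% across the wave packet -- an approximation, not an identity.
```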

    “SciSal says: ‘nobody has any clue about how to even begin to reduce the theory of natural selection’. Sure they do. Darwin said ‘survival of the fittest’. Then fitness was defined in terms of survival, making it a tautology.”

    Apparently you don’t see that what you wrote after “sure they do” does nothing whatsoever to support what comes before. And no, the theory of natural selection is not a tautology. I recommend Doug Futuyma’s introductory textbook on the matter, or even Jerry Coyne’s much more accessible Why Evolution is True.

    “The philosophers of chemistry say that chemistry has not been fully reduced to physics. Okay, but did someone say that university chemistry departments were subsets of physics departments? This is no argument against reductionist reasoning.”

    No, it isn’t. It’s only a flagrant demonstration that you are not paying attention.

    “I do not see where Dupré, Fodor, Hacking and Cartwright have made any points that need to be taken seriously”

    Because you are not even trying, I’m afraid.

    “I am baffled by this comment, as causality is absolutely central to fundamental physics. I cannot think of any part of physics that can function without it.”

    See above.

    “Perhaps you have been influenced by some misguided philosopher like Bertrand Russell.”

    Groan.

    cm3,

    “If so, it is implicit in M-Theory that physics as we know it may in fact and by definition would be situational / ‘phenomenological’ rather than ‘fundamental’. The laws of physics, and possibly matter as we know it (both at the sub-atomic level and physical level) could be and most likely are qualitatively different in qualitatively different universal planes.”

    Yes, that sounds about right to me. Another way to put it would be that the laws of physics are contingent, and they would have a limited domain of application, though that domain may be as broad as an entire universe.

    ej,

    “Here, I will only notice one (because I am writing an essay on it): If the sciences are not reducible to physics, then strict incompatiblist determinism has no foundation.”

    Or at the least it becomes a much less straightforward view than popularly thought. A similar implication is that if Cartwright is correct, then strict physicalism as a metaphysical notion also goes out the window. Here is a quote from the textbook I’m currently using to teach my philosophy of science course (Brown, James Robert, 2012, Philosophy of Science: The Key Thinkers, p. 222):

    “The metaphysical viewpoint attendant upon this [Cartwright’s] view is potentially quite radical. Standard physicalism requires that the world have a fundamental structure which is, in principle, describable in terms of our best theories. If our ‘best theories’ form an inchoate and inconsistent set, more or less applicable across irreducibly diverse domains, then physicalism must be radically mistaken. This aspect of Cartwright’s views have not gone unnoticed by scientists.”

  36. Alexander,

    “how could e.g. what happens at the population genetic level be disconnected from what happens at the quark level if the populations consist of quarks?”

    It isn’t a question of it being “disconnected,” but rather of basic principles being insufficient. This could happen in a variety of ways, one of which of course is via strong emergence, but also if it turns out that laws of nature have limited domains of application.

    “as discussed in this essay, most people seem to agree that there is unity to the universe”

    Yes, ontologically. But one needs to argue, not assume, a unity of explanatory principles.

    “I understand that it appears arrogant of somebody to say that we should be able to do that in principle even if they currently have no idea where even to start. The thing is, it quite simply follows logically from the previous consideration.”

    Well, no. You are assuming a particular view of the laws of nature, which may or may not hold.

    “I am also rather unconcerned about this issue because reductionism is quite simply impractical.”

    But as a thinking and curious being, are you concerned only with practicalities? Because if so, most of fundamental physics, and, say, all of cosmology, is utterly non-practical…

    “whether scientific theories are “wrong”. This is where I am most puzzled, even slightly exasperated. Many philosophically minded people seem to consider the observation that science is always tentative as some awesome insight that they just discovered and that all the scientists are naively ignorant of.”

    C’mon, you can do better than schlafly. Both realists and anti-realists about scientific theories are well aware of the tentativeness and fallibility of science. Their arguments do not in the least depend on that. The discussion is really about how we should think of scientific theories (as approximating truth, or as empirically adequate), and even the very goals of science itself (is it after truth, or after empirical adequacy?). Even some scientists are having that sort of discussion, for instance when some quantum physicists talk of belonging to the “shut up and calculate” school of thought (anti-realism), or when some string physicists talk about a “post-empirical” physics (which I would call mathematical metaphysics).

    “All of that does not mean that one should call these theories ‘wrong’”

    Cartwright is much more specific than that. She begins by noting that we *know* that some scientific theories are wrong because they are based on idealizations that do not belong to the real world (think Galileo’s frictionless and perfectly flat inclined planes). There isn’t even a pretense to say that one is describing reality, only that the fiction is useful for real experiments. Her argument is that *all* theories are like that, empirically adequate fictions of the human mind.

    DM,

    “I fear that a straw man is being attacked, and the kinds of criticisms being levelled are the kind that my view of scientific reductionism would seem to be immune from”

    Once more: no, there is no straw man here. The views attacked by Cartwright, Fodor and others are actually held by actual professional scientists. Whether your particular view is or is not immune from this sort of criticism is a different matter, unless you are actually representative of the consensus in the physics community, which — knowing a bit about your approach (and Coel’s) — I do not think is the case.

    “You need to specify more clearly what it is to reduce a scientific law to a lower level one.”

    That’s why I linked to plenty of SEP articles about reductionism, as well as to Fodor’s original paper.

    “Reductionism is the view that any model built with appropriate low-level laws will exhibit the behaviour described by high-level laws”

    No, Coel is simply wrong about this, as argued above. Computer models are not scientific theories.

    “Disciplines will proliferate as new high level phenomena or ways of studying high level phenomena continue to be identified. This point is therefore entirely useless for the anti-reductionist.”

    Yes, I bet you wish it were, but no, the point is eminently empirical: instead of seeing more and more successes of (theoretical) reductionism we see more and more special principles and laws being deployed by scientists. This is not, in itself, sufficient to clinch the case, but it needs to be taken seriously, not shrugged off as “obvious.”

    “I am a computer programmer — every line of code I write reduces to a sequence of ones and zeroes, but I have no idea of the precise details of how this process works, and I don’t need to have such an understanding.”

    No, because you are a computer programmer. But if science isn’t about fundamental understanding I don’t know what it is about. And at any rate, nobody is asking for the “precise details,” only for general principles. You would make a pretty bad programmer if one of your clients asked you for a flow chart of your program and you responded with “well, it works, and it’s all zeros and ones.”

    “it is not useful or even possible to use knowledge about quarks in making psychological diagnoses.”

    Then stop making grandiose claims about how it is all about physics. It clearly, empirically, isn’t. It just makes physicists feel good to say so.

    “This is only an epistemic limitation and says nothing about theoretical reductionism”

    This, it seems to me, is a statement of faith, not the result of either empirical observations or sound deductive reasoning.

    “If I were to say ‘I quite like green tea and believe it to be healthy, but in practice I usually drink coffee’, would it make sense to ask ‘what practice?’”

    No, but your example seems to me to have nothing at all to do with the sort of question I asked Weinberg.

    “The only answer to ‘what principle’ might be something unenlightening like ‘the principle of reductionism’”

    Which would be begging the question.

    “But nobody is claiming to have such a proof regarding reductionism”

    Which is precisely (one of) the problem(s).

    “There is no proof for this (Goldbach’s) conjecture, but it is widely believed to be true.”

    Good analogy, but I’m not convinced. Mathematicians believe the conjecture because it holds up empirically, with no exceptions. This is most certainly not the case for theory reduction in science; quite the opposite.

    “Cartwright’s argument seems rather beside the point to me. Nobody is asserting that the current best scientific models are the fundamental laws of nature, rather the realist view is that there are such laws, whether we can ever find them or not”

    Cartwright is not claiming the former, but rather questioning the latter.

    “It is possible that those approximations are completely correct in some regards, but we know they cannot be perfect because of the disunity between GR and QM.”

    Right. Cartwright is merely raising the possibility that such disunity may turn out to be fundamental, rather than provisional. Smolin seems attracted by the same idea too.

    “situations where both GR and QM would apply at the same time, and these situations do occur in nature. If we manage to develop a theory which covers such situations, then we will have presumably found a theory of everything.”

    Or we would have found yet another phenomenological law / theory which applies to yet another domain. Look, I stated clearly in the essay that I don’t buy Cartwright’s position, I’m agnostic. But I find it interesting to see how vehement the reaction to her writings has been by some in the physics community. It smells of wounded pride…

    “The simulation stuff is only an illustration of what reductionism entails. It doesn’t mean that physicists necessarily think in terms of simulations.”

    That’s why I’m not impressed, the argument entirely misses the point, changing the conversation. I’d like the conversation to stay where I put it in the essay.

    “I don’t think that reductionism as you present it is ubiquitous. If Coel rejects it (and Coel is one of the most scientistic reductionist guys out there, I suspect), then I suggest that there has been a miscommunication.”

    How many professional scientists have you (or Coel) talked to about this? In my case, it’s a lot, since I’ve had more than a quarter century of a career as a scientist interested in epistemological issues.

    David,

    “For many of us this amounts to a reductio ad absurdum of Cartwright. For science clearly is not ‘fundamentally disunified’. The special sciences may not be theoretically reducible to physics and chemistry, but nor do they exist in the glorious isolation that that phrase suggests.”

    I think you are overplaying my words. “Fundamentally” here doesn’t mean that the pieces are “in glorious isolation,” but only that there is no fundamental theoretical unification possible, as I stated in my essay, which I think does reflect Cartwright’s view fairly accurately.

    “There is a clear and obvious sense in which we require the special sciences to be at least compatible with the fundamental sciences. A biology that broke the second law of thermodynamics would not do at all. So something has gone wrong somewhere in the philosophising.”

    Here we go again with the dissing of philosophizing. Hmpf. No, nothing has gone wrong, because nobody, certainly not Cartwright, nor myself, has ever said that the special sciences are incompatible with anything at all that comes from fundamental physics. If they were, this really would be an extraordinary claim, and there is neither evidence nor argument to back it up.


  37. Dear Everyone,

    First, comment limits prevent me replying to everyone who has replied to or about me. Second, I still think there is a huge amount of miscommunication here owing to different concepts of “reductionism”. So, here I try to clarify what seems to me the dominant view about reductionism among physicists:

    … but look at chemistry! It has successfully been reduced to physics!

    If one knew physics perfectly, and used that knowledge to make a perfect simulation of sets of atoms at the level of physics, then that simulation would manifest chemical behaviour. It would manifest emergent properties such as “benzene ring” that are not part of the physics-level description. In that sense, chemistry is entailed by (“reduced to”) physics. That is reductionism as understood by physicists.

    But, that “perfect simulation” is in practice impossible, and, further, any good-enough simulation would be totally unwieldy and impractical for most chemistry purposes. Therefore chemistry uses higher-level descriptions and models. These higher-level descriptions are often arrived at empirically, even if, in principle, one could reproduce them with a physics-level simulation.

    Thus, chemists do not spend their time working with physics-level laws and physics-level theory, but instead use chemistry-level theory, which is just much more useful to them. It’s a bit like programming in a high-level language such as Java rather than programming in machine code.

    Granting that, physics and chemistry are still “unified” in the sense that the chemistry-level description and the physics-level description do have to be entirely compatible and consistent (just as a Java program does have to be consistent with the machine-code version that actually runs).

    Thus reductionism, as physicists understand it, does not imply that chemists can or should spend all their time working solely with fundamental physics. Nor does it require single as opposed to multiple realisation (it is a claim that a complete low-level description entails the high-level emergent phenomena; it is not a claim that for any high-level phenomenon there can be only one low-level implementation, just as many different machine-code versions can map to the same Java code). Nor is reductionism a claim that the simulation of a higher level is always practically possible. Most of the time it won’t be.

    (Note, all the above seems to me consistent with Weinberg’s essay “reductionism redux” and with this Sean Carroll post.)

    Also, Massimo, I agree entirely that the computer “simulation” is not the fundamental theory. One takes the fundamental low-level theory, and then uses that to simulate the higher-level system, and the claim is that a complete simulation at the low-level would manifest all emergent phenomena. My use of this “simulation” argument is an attempt to make clear and explicit what I mean by “reductionism”.
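    To make this “simulation” claim vivid, here is a toy sketch of my own (Conway’s Game of Life standing in for the low-level physics; nothing here is a real physical simulation): the update rule mentions only cells and neighbour counts, yet it entails the behaviour of a higher-level object, the “glider,” that the rule itself never names.

```python
# Toy illustration: a low-level rule that never mentions "gliders"
# nonetheless entails glider behaviour at the higher level.

def step(cells):
    """One Game of Life update over a set of live (x, y) cells."""
    counts = {}
    for (x, y) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    key = (x + dx, y + dy)
                    counts[key] = counts.get(key, 0) + 1
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# A "glider": a higher-level object nowhere named in the rule above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After 4 low-level updates the same shape reappears, shifted by (1, 1).
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

    The point of the sketch: nothing over and above the low-level rule is needed for the glider to do what gliders do, even though “glider” is indispensable for describing it.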

    Two questions: Does anyone dispute reductionism in the sense I’ve just outlined? (One obvious way it could be wrong would be if vitalism or dualism were true, in which case the low-level simulation would leave out something vital; I’m not suggesting that anyone holds to those, I’m pointing out how reductionism could be wrong.)

    Second, by “theoretical reduction”, do people mean something different from the above account? If so, what do they mean by it? And, can they show, by explicit quotes, that the view is prevalent among physicists?


  38. Hi Massimo,

    Regarding Cartwright:

    The discussion is really about how we should think of scientific theories (as approximating truth, or as empirically adequate), …

    Both. The only handle we have on scientific truth is through empiricism. By being empirically adequate our theories approximate truth. We can’t do better than that.

    … and even the very goals of science itself (is it after truth, or after empirical adequacy?).

    Both. We have no way of getting at truth other than by empirical adequacy. By “true” we mean “empirically adequate”.

    [Cartwright’s] argument is that *all* theories are like that, empirically adequate fictions of the human mind.

    And it seems to me that she is right.

    The difference is that a theory intends to describe the way things are (at the least approximately), while a model is entirely pragmatic: there is no need to assume that it approaches reality, as long as it works.

    Here again I don’t see the distinction between “it approaches reality” and “it works”. We have no handle on reality other than “it works”. (Of course if something works only partially, say an idealised model where we deliberately neglect some aspect of the system, then it will be an incomplete picture of reality for that reason.)

    For example, fitting a data set with a polynomial equation may give you accurate predictability but tell you nothing about what is really going on, physically, with whatever phenomenon produced the data in question.

    If that polynomial gives explanatory and predictive power then it does tell you something (as opposed to “nothing”) about what is “really” going on. Of course it alone will not tell you much. It may be an ad-hoc heuristic that has very limited explanatory and predictive power. In that case it is not a very good theory/model. It may be that some other theory/model gives much more explanatory and predictive power, in which case that theory/model would be preferred.

    But that’s not because there is something fundamentally different between the two attempts to approximate reality, it’s just that one has more explanatory and predictive power than the other and is thus better.

    That is precisely what Cartwright thinks. But if that’s true, there is nothing fundamental about fundamental physics, in terms of theory production.

    That’s correct, there isn’t. Perhaps you are reading into the word “fundamental” a lot more than most physicists do (after all, most physicists don’t themselves work on “fundamental” physics). For example, your sentence:

    “Indeed, even some scientists seems inclined toward at least some bit of skepticism concerning the notion that “fundamental” physics is so, well, fundamental. (It is, of course, in the trivial ontological sense discussed above: everything is made of quarks, or strings, or branes, or whatever.)”

    I’d word as: “Most scientists know full well that “fundamental” physics is only “fundamental” in the (call it “trivial” if you wish) sense that everything is made of quarks, or strings, or branes, or whatever.”


  39. Coel hit the nail on the head. Just because the reductions of various special sciences to lower-level theories don’t conform to Nagel’s (or someone else’s) idea of “reduction” doesn’t mean that reductionism has failed and should be abandoned.

    Thermodynamics has been reduced to Newtonian/quantum physics via statistical mechanics, as any bachelor’s level physicist can tell you.

    And Cartwright’s book is a complete disaster. She wants to claim that the laws of physics are not even APPROXIMATELY true. But she seems to (willfully?) misunderstand basic physics. See my comments here:
    http://somewhatabnormal.blogspot.com/2013/12/physics-lies.html


    Robert, besides the fact that I keep thinking Coel missed the nail rather than hit it, so Cartwright *willfully* misunderstands physics? To begin with, that’s an oxymoron: one cannot misunderstand willfully. Second, so much for charitable reading, at least of one’s opponent’s intentions, if not arguments.


  41. Schlafly,

    “I also disagree with your claim that radioactive decay is causeless, and that a causal chain of events makes no sense with quantum phenomena. The atomic nucleus consists of oscillating quarks and gluons, and the decay might be predictable if we could get a wave function for everything in that nucleus.”

    I disagree. Even if you had a detailed wavefunction of the nucleus, all it could possibly tell you would be the probabilities of this or that happening inside the nucleus. So the wavefunction is not going to predict a decay; it is only going to predict a probability of a decay. And this isn’t enough to establish a causal chain between a decay and any potential “cause” or “trigger” for the decay.
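    A toy sketch of my own may help here (a fixed per-tick decay probability standing in for the wavefunction; nothing quantum is actually being computed): the “theory” fixes the statistics of the ensemble exactly, yet it says nothing about when any individual nucleus decays, which is why no causal chain can be attached to a single decay event.

```python
import random

random.seed(0)

# Toy model: every nucleus has the same per-tick decay probability p.
# The "theory" fixes only the statistics, not the moment of any one decay.
p = 0.01

def decay_time():
    """Ticks survived before this nucleus happens to decay."""
    t = 0
    while random.random() >= p:
        t += 1
    return t

times = [decay_time() for _ in range(20000)]
mean = sum(times) / len(times)

# The ensemble obeys the predicted mean lifetime, (1 - p) / p = 99 ticks...
assert abs(mean - (1 - p) / p) < 5

# ...yet identically prepared nuclei decay at wildly different moments.
assert min(times) < 5 and max(times) > 500
```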

    Massimo,

    “According to my understanding of fundamental physics (and yes, I did check this with several physicists, Sean Carroll among them) the concept of causality plays little or no role at that level of description / explanation. Quantum mechanical phenomena “occur” with this or that probability, following the predictions of a set of deterministic equation, but one doesn’t need to deploy talk of causality at all.”

    Agreed. 🙂

    “On the contrary, one simply can’t do anything in the special sciences without bringing up causes. This is true for non fundamental physics, chemistry, biology (especially ecology and evolutionary biology), and so forth.”

    Agreed again. 🙂

    “I suspect this has to do with the fact that equations in fundamental physics are time symmetric, while causality is an inherently time-asymmetric phenomenon, or principle, or whatever.”

    I am not sure that time asymmetry plays a fundamental role in identifying causality (although it must be taken into account when present). For example, Newtonian mechanics is time-symmetric (to a large extent), but a causal chain can always be identified. When one reverses the time flow, the cause becomes an effect, while the effect becomes a cause. The causal chain is the same, you just read it upside-down. IOW, it’s a matter of specifying the initial conditions versus the final conditions for a given system.

    That said, I agree that in the vast majority of situations in special sciences, the second law of thermodynamics cannot be ignored, which breaks time-reversal symmetry. The causal chain can again be established, only now there can be no symmetry between causes and effects.

    “And this, of course, without even touching on the delicate issue that people still don’t even agree on what, exactly, causality *is*.”

    Fair enough, a more rigorous definition of causality is necessary for a more serious discussion. 🙂

    “Precisely. But notice that you just moved from classical to quantum mechanics.”

    Oh, now I think I understand the problem you are pointing at: causality appears to be important at the classical level, while absent at the (more fundamental) quantum level. Is that the problem?

    I can give two answers to this. First, the transition from quantum to classical theory is not fully understood in physics, courtesy of the measurement problem in QM. As one of the consequences, the appearance of causality in the classical theory is also not fully understood. It’s an open problem.

    Second, while the transition from quantum to classical is not understood completely, it is understood to a certain extent, and we have a good handle on certain aspects of it. So, to the extent we understand that transition, I can say the following — causality is a (weakly) emergent phenomenon, present in a certain subset of classical systems. Specifically, one can define a causal chain for every system that can be described using deterministic effective equations (i.e. the effective classical equations that are local in time coordinate). IOW, causality can be defined for a system which displays deterministic evolution, at a certain level of approximation. One can say that for such a system the stochastic (nondeterministic, nonlocal, quantum…) terms in the equations are small and can be neglected (for some given error-bars), and in those situations one can talk about causes and effects within that system.

    So one can say that causality is a contingent, system-dependent and regime-dependent concept.
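    A toy sketch of my own to illustrate (a drifting random walk standing in for the stochastic microdynamics; the numbers are arbitrary): no single trajectory is deterministic, but the coarse-grained variable obeys a deterministic effective law, and it is at that level that talk of causes becomes applicable.

```python
import random

random.seed(1)

# Toy model: each "particle" follows a noisy rule, x -> x + drift + noise.
# Individual trajectories are stochastic; the averaged variable is not.
drift, steps, n = 0.2, 500, 2000
positions = [0.0] * n
for _ in range(steps):
    positions = [x + drift + random.gauss(0.0, 1.0) for x in positions]

mean = sum(positions) / n

# The deterministic effective law: mean position = drift * steps = 100.
assert abs(mean - drift * steps) < 5
```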

    HTH. 🙂


  42. Massimo,
    “That’s not what reductionism means in this context.”
    Which goes to the point I was trying to make. What is sought is the “seed” from which all higher order processes emerge, but the top-down logical distillation (if you prefer) only gives us the skeleton: the most hard and consistent patterns (and entities). Which is basically what (particle!) physics is all about. Consider the above comments about energy and causality. Is their dismissal by physics because they are not fundamental, or because it is difficult to model them precisely? Why is time treated as symmetric block time? Is it because that is the most effective physical description, or because it is the most efficient mathematical description?
    Is it any wonder, then, that these seemingly foundational theories don’t fit together, much as discrete bones don’t fit together without the connecting tissue?


    Massimo: Thanks for your brief note that Cartwright’s views have not gone unnoticed by scientists. How much wrestling is done with her ideas, to your knowledge? How much credence is given to them?

    And, now, a philosophy tie-in follow-up. Yesterday I mentioned Hume and the problem of induction. It also seems that Cartwright is tackling what might be called “Platonism” in the natural sciences, and in philosophy of them. That particular sense of anti-Realism (capitalizing for the Platonic connection) I can indeed accept. The universe is not Ideal, and that is probably part of why “all turtles, all the way down” reductionism fails. And, to take the Platonism further, it certainly doesn’t reduce to mathematics.

    And, that’s not just a measurement problem, I don’t think. It’s a possible knowledge problem, in part. It’s also, parallel to the fundamental quantum graininess of the universe, a reality issue. (I don’t call that last one a problem.) I do think this differentiates between “it approaches reality” and “it works,” contra Coel. The “approaches” is based on idealizing, or Idealizing, reality, but seems to keep in place that there is an Idea “out there” to be Idealized.

    Doesn’t this get to philosophy of science (overall, not individual sciences) in another way?

    If we accept that there are no fundamental laws, to go to yet another school of philosophy, do we accept quasi-fundamental laws on pragmatic grounds if nothing else? Arguably, don’t most laws of various sciences start that way before becoming used, and theoretically confirmed, enough to get called “fundamental”?

    That’s not to say that “elevating” some theories on pragmatic grounds excludes other philosophical considerations, of course.

    Finally, I’ll tie this to Kuhn and his idea of paradigm shifts. When a particular science shifts ground on some issue, do such philosophical considerations come into conscious play? Should they, if they don’t currently?

    David Brightly: At the same time, as I noted in my first comment, I’m with you. A “fundamental disunification,” versus what we might call “a loose unification” (Fodor?), does seem to be a bridge too far, regardless of Cartwright’s other stimulating thought.

    EJ: I like some of the plurality thought, or at least something like that. I think a good philosophy of science, and philosophies of the individual sciences as well, need to wrestle with the issue of fundamental (sorry, no other word to use) restraints on human knowledge. I think this is most true in physics today, but it may become more true in other sciences.

    And, this has been something I’ve thought for years. This is why I think that “dark energy” wasn’t discovered sooner. A positive expansion number for the universe puts paid to the idea (not the reality, just the idea) that theoretically, humans can “take it all in.” The secular version of a quasi-immortality gets killed. Some of this connects with my response to Massimo.

    Marko: Nice and interesting follow-up thoughts, especially on the issue of causality, which is itself, arguably, at least in part a philosophical issue.


  44. Coel,

    The Sean Carroll post you link to, in turn, references a very good debate on anti-reductionism/reductionism between John Dupre and Alex Rosenberg. Having seen the debate, I was surprised by Carroll’s take on it. Using the idea of two elephants with identical atomic configurations, but perhaps light years apart, Rosenberg posits (and Carroll agrees) that there is no way in which the two elephants are not precisely the same object in kind. Carroll then says this about Dupre’s response, which is rather astonishing to me, so I quote:

    “Dupré doesn’t give a very convincing answer, except to suggest that you would also need to know the conditions of the environment in which the elephant found itself, to know how it would react. That’s fine, just give the states of all the particles making up the environment. I’m not sure why this is really an objection.”

    This seems to me the central difference between the two positions on this. If one is required to ‘give the states of all the particles making up the environment’ in order to completely explain any particular configuration of atoms, this is the opposite of reductionism. And in the debate which I link to below, Dupre wins the argument hands down beginning at 25:30 where he ‘goes deep inside the elephant’. At 30:00 Dupre asks Rosenberg “So the strongest thesis you can endorse is that everything supervenes on the physical state of the universe?” To which Rosenberg answers “I’m comfortable with that”. He basically concedes the most anti-reductionist position imaginable.
    The debate link: http://www.philostv.com/john-dupr-and-alex-rosenberg/


  45. @Marko

    I think it’s worth burning my last comment to thank you for the explanation 🙂

    Would this mean that what Quentin said above about the measurement problem has an effect on our ability to model a classical world (computationally) using only fundamental theories? And if so, is the problem such that it’s impossible to model or just impractical (even given near-infinite computing power)? And if it’s impossible, is it impossible in theory or could another kind of fundamental theory be model-able?

    Those answers, I think, are what really would affect Coel’s argument. Whether causality is present or not is not really important — the important question is whether whatever *is* occurring could be (computationally) modeled. If it could be modeled, then presumably, causality would emerge at whatever level it emerges at in the real world.


  46. DM and I seem to be the only people who’ve understood Coel’s simulation definition of reductionism (at least out of the people who’ve mentioned it). I guess it’s always easier to understand people who have similar views to oneself.

    As I see it, this simulation definition is based on something like supervenience. Our lowest-level models are the most precise. If we could simulate reality at a low enough level of abstraction, that simulation would include everything that makes our higher-level models correct. Anything with the same arrangement of atoms as a wet man is also a wet man. So any sufficiently accurate simulation of those atoms is a simulation of a wet man and (in simulation) behaves like a wet man, including saying “I’m wet”. The simulated atoms in the simulated air behave like atoms carrying the sound of his voice. The simulation is complete: no macroscopic difference without an atomic difference. The macroscopic supervenes on the atomic. But I think we should avoid saying that wetness “emerges at the higher level”. Better to say that wetness is a concept that only comes in useful when we model (or describe) reality (or the simulation) at a higher level.

    (I chose the atomic level for ease of speech, but depending on what you’re simulating, you might need to go to a lower level.)
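    A minimal sketch of my own (the “atoms” here are just labelled tokens, and “is_wet” is a made-up predicate, nothing chemical): supervenience says there is no high-level difference without a low-level difference, while multiple realisation allows many low-level states to fix the same high-level fact.

```python
# Toy supervenience sketch: "is_wet" is a hypothetical high-level
# predicate defined entirely over a low-level description.
def is_wet(atoms):
    return atoms.count("H2O") >= 3

# Identical low-level states cannot differ at the high level:
a = ("H2O", "H2O", "H2O", "C")
b = ("H2O", "H2O", "H2O", "C")
assert a == b and is_wet(a) == is_wet(b)

# ...but distinct low-level states can realise the same high-level fact
# (multiple realisation):
c = ("H2O", "H2O", "H2O", "N")
assert is_wet(a) and is_wet(c) and a != c
```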

    According to Wikipedia:

    Ontological reductionism is the claim that everything that exists is made from a small number of basic substances that behave in regular ways (compare to monism). Ontological reductionism denies the idea of ontological emergence, and claims that emergence is an epistemological phenomenon that only exists through analysis or description of a system, and does not exist on a fundamental level.

    The second sentence sounds similar to what I’m saying. And it’s the point I would concentrate on. I think much confusion in this area arises from people thinking there is something more going on (or that could be going on) than humans modelling reality in various different ways.


  47. Massimo: “I beg to differ. It [scientific realism vs anti-realism] is one of the clearest debates one can find in philosophy.”

    SEP: “It is perhaps only a slight exaggeration to say that scientific realism is characterized differently by every author who discusses it, and this presents a challenge to anyone hoping to learn what it is.”

    Sorry, Massimo, I couldn’t resist.

