Reductionism, emergence, and burden of proof — part I

by Marko Vojinovic

Introduction

Every now and then, the question of reductionism is raised in philosophy of science: whether or not various sciences can be theoretically reduced to lower-level sciences. The answer to this question can have far-reaching consequences for our understanding of science both as a human activity and as our vehicle to gain knowledge about reality. From both ontological and epistemological perspectives, the crucial question is: are all real-world phenomena that we can observe “ultimately explainable” in terms of fundamental physics? What one typically imagines is something like a tower of science, where each high-level discipline can be reduced to a lower-level one: economics to sociology to psychology to neurology to biology to biochemistry to chemistry to molecular physics to fundamental physics. Is such a chain of reductions possible, or desirable, or necessary, or important, or obvious, or tautological, or implicit in our very concept of science?

The opposing ideas of reductionism and emergence lie at the core of these questions. The first thing to do, then, is to clear up what is actually meant by the ideas of reductionism and emergence in science. Given that fundamental physics is usually located at the bottom of any proposed chain of reduction, it somehow has a privileged position — not only for being the “most fundamental,” but because its mathematical rigor can be employed to make the meaning of reductionism and emergence clearer. The purpose of this article is to shed some light on these matters from the perspective of theoretical physics, hopefully answering some of the above questions. The essay is split into two parts [1]: the first one mainly deals with epistemological reductionism, while the second one tackles ontological reductionism.

Preliminaries

Let me start with some definitions. One of the crucial concepts for reductionism is that of “theory,” as reductionism will be understood here as a particular relation between two theories. For the purpose of this article, I will define the notion of a theory in a rather loose descriptive fashion — as a set of mathematical equations over certain quantities, whose solutions are in quantitative agreement with the experimentally observable phenomena which the theory aims at describing, to some given degree of precision, in some specified domain of applicability. This is a reasonable, generic description of the kind of theories that we typically deal with in physics.

There are several important points to note about this definition. First, if the solutions of a theory are not in quantitative agreement with experiment, the theory is considered wrong and should be either discarded or modified so that it does fit the experiment. Second, the requirements of mathematically rigorous formulation and quantitative (as opposed to qualitative) agreement with experimental data might appear too restrictive — indeed, our definition rules out everything but physics and certain parts of chemistry and biology. For example, the theory of evolution is not really a theory according to such a definition (although its population genetics rendition is). Nevertheless, there is a very important reason for both of these requirements, which will be one of the main points of this essay, discussed in the next section. Finally, the phrase “a set of mathematical equations” is a loose description with a number of underlying assumptions. I will mostly appeal to a reader’s intuition regarding this, although I will provide a few comments on the axiomatic structure of a theory in part II of the essay.

In order to introduce reductionism, let us consider two scientific theories, and a relation of “being reducible to” between those theories. In order to simplify the terminology, the “high-level” theory will be called the effective theory, while the “low-level” one will be called the structure theory. These names stem from the general notion that every physical system is constructed out of some “pieces” — so while the effective theory describes the laws governing some system, the structure theory describes the laws governing only one “piece” of the system. Of course, if each such piece can be divided into even smaller pieces, the structure theory can in turn be viewed as effective, and it may have its own corresponding structure theory, thus establishing a chain of theories, based on the size and type of phenomena that they describe. This chain always has a bottom — a theory which does not have a corresponding structure theory, to the best of our current knowledge. I will call that theory fundamental. Note that this definition of a fundamental theory is, obviously, epistemological [2].

It is important to point out one particular relationship between an effective theory and some corresponding structure theory: given that the physical system (described by the effective theory) consists of pieces (each of which is described by the structure theory), it follows that the domain of applicability of the effective theory is a subset of the domain of the structure theory. That is, as much as the effective theory can be applied to the system, the structure theory can also be applied to the same system — simply by applying it to every piece in turn (of course, taking into account the interactions among the pieces). Put differently, the domain of applicability of a structure theory is usually a superset of the domain of applicability of the effective theory. Thus, the structure theory is said to be more general than the effective theory.

Finally, we are ready to define the relation of “being reducible to” between the effective and the structure theory. The effective theory is said to be reducible to the structure theory if one can prove that all solutions of the effective theory are also approximate solutions of the structure theory, in a certain consistent sense of the approximation, and given a vocabulary that translates all quantities of the effective theory into quantities of the structure theory.

The procedure for establishing reductionism, then, goes as follows. First, the effective and structure theories are often expressed in terms of conceptually different quantities (i.e., variables). Therefore one needs to establish a consistent vocabulary that translates every variable of the effective theory into some combination of structure theory variables, in order to be able to compare the two theories. As an example, think “temperature” in thermodynamics (effective theory) versus “average kinetic energy” in kinetic theory of gases (structure theory). Second, once the vocabulary has been established, one needs to specify certain parameters in the structure theory so that a particular solution of the latter can be expanded into an asymptotic series over those parameters. If the effective theory is to be reducible to the structure theory, such parameters must exist, and they often do — they are typically ratios between the characteristic quantities of each piece and those of the large system. Finally, once the asymptotic parameters have been identified and the solution of the structure theory expanded into the corresponding series, the dominant term in this series must coincide with the solution of the effective theory, and so on for all quantities and all possible solutions of the effective theory, always using the same vocabulary and the same set of asymptotic parameters [3].
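
As a minimal illustration of such a vocabulary entry (a standard textbook relation, quoted here only for concreteness), the temperature of an ideal monatomic gas translates into the average kinetic energy of its molecules,

\[
\frac{3}{2} k_B T \;=\; \left\langle \tfrac{1}{2} m v^2 \right\rangle ,
\]

where $k_B$ is Boltzmann's constant, $m$ is the molecular mass, and the average runs over all molecules. One natural small parameter for the asymptotic expansion in this case is the inverse of the (enormous) number of molecules, $1/N$.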

If the above procedure is successful, one says that the effective theory is reducible to the structure theory, that phenomena described by the effective theory are explained (as opposed to being re-described) by the structure theory, and that these phenomena are weakly emergent from the structure theory. Conversely, if the above procedure fails for some subset of solutions, one says that the effective theory is not reducible to the structure theory, that phenomena described by those solutions are not explainable by the structure theory, and that these phenomena are strongly emergent with respect to the structure theory. In the next section I will provide examples of both situations.

Examples

Arguably the most well-known example of reductionism is the reduction of fluid mechanics to Newtonian mechanics [4]. As an effective theory, fluid mechanics is a nonrelativistic field theory, whose basic variables are the mass density and the velocity fields of the fluid, along with the pressure and stress fields that act on the fluid. The equations that define the theory are a set of partial differential equations that involve all those fields. As a structure theory, Newtonian mechanics deals with positions and momenta of a set of particles, along with forces that act on each of them. The reduction of fluid mechanics to Newtonian mechanics then follows the procedure outlined in the previous section. We consider the fluid as a collection of a large number of “pieces” where each piece consists of some number of molecules of the fluid, contained in some “elementary” volume. We establish a vocabulary roughly as follows: the mass density field is the ratio of the mass and the volume of each piece, at the position of that piece, in the limit where the volume of the piece is much smaller than the typical scale of the motion of the fluid. The ratio of the two sizes is a small parameter, convenient for the asymptotic expansion. Also, the velocity field is the velocity of each piece of the fluid, at the position of that piece, in the same limit. Similarly, the pressure and stresses are described in terms of average forces acting on every particular piece of the fluid. Finally, we apply the Newtonian laws of mechanics for each piece, and expand them into an asymptotic series in the small parameter. The dominant terms of this expansion can be cast into the form of partial differential equations for the fluid, which can then be compared to the equations of fluid mechanics. It turns out that the two sets of equations are equivalent, which means that fluid mechanics (effective theory) is reducible to Newtonian mechanics (structure theory).
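
As a rough sketch of what the dominant term of that expansion looks like (an illustration of textbook material, not a substitute for the full derivation), applying Newton's second law to a small fluid parcel of mass $\rho\,\delta V$, subject to pressure and external forces, yields at leading order the Euler equation of fluid mechanics,

\[
\rho \left( \frac{\partial \vec{v}}{\partial t} + (\vec{v} \cdot \nabla)\,\vec{v} \right) \;=\; -\nabla p + \vec{f} ,
\]

where $\rho$ is the mass density field, $\vec{v}$ the velocity field, $p$ the pressure, and $\vec{f}$ the external force per unit volume. Keeping the viscous stress terms in the same expansion produces the Navier-Stokes equations instead.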

In this sense, the motion of a fluid is described by fluid mechanics, explained by Newtonian mechanics, and all properties of a fluid are weakly emergent from the laws of Newtonian mechanics. Of course, all this works only for phenomena for which the approximation scheme holds.

There are many other similar examples, such as the reduction of the first law of thermodynamics to statistical mechanics, reduction of Maxwell electrodynamics to the Standard Model of elementary particles, reduction of quantum mechanics to quantum field theory, or reduction of Newton’s law of gravity to general relativity. The essence here is that in each case the former can be reconstructed as a specific approximation of the latter.

In contrast to the above, the situations in which reductionism fails are much more interesting. In fact, they are nothing short of spectacular, since they often point to new discoveries in science. For the purpose of this article, I will focus on three pedagogical examples: the dark matter problem, the Solar neutrino problem, and the arrow of time problem. Each of these examples illustrates a different way in which reductionism can (and does) fail. Of course, other examples can be found as well, but the analysis of these three should be sufficient for subsequent discussion.

The first example is the failure to reduce the Standard Model of cosmology [5] (SMC) as the effective theory, to the Standard Model of elementary particles [6] (SMEP) as the corresponding structure theory. Aside from the fact that SMEP does not describe any gravitational phenomena that SMC contains, SMC describes the presence of so-called dark matter, in addition to the usual matter. The presence of dark matter particles cannot be accounted for by any of the matter particles in SMEP. Therefore SMC cannot be reduced to SMEP already at the qualitative level. In order to make SMC reducible to some structure theory, SMEP needs to be modified (in a non-obvious way) in order to account for dark matter particles. In other words, the mere presence of dark matter in cosmology requires us to rewrite the fundamental laws of physics. Here SMEP is considered fundamental because we do not yet have any structure theory for SMEP [7].

According to the terminology defined in the previous section, then, in this example the evolution and properties of the Universe (at large scales) are described by SMC, are not explainable by SMEP, and the existence of dark matter is strongly emergent.

The second example of failure of reductionism is even more interesting. The effective theory that describes our Sun, sometimes called the Standard Solar Model (SSM) [8], also fails to reduce to SMEP. As far as we know, the Sun is composed of ordinary particles that SMEP successfully describes. So both SSM and SMEP can be used to describe the Sun, and qualitatively they in fact do agree. Moreover, they also agree quantitatively, except for a single factor of three in one of the observables: the fusion process in the core of the Sun generates an outgoing flux of neutrinos, some of which reach the Earth and are successfully measured; all else being equal, the measured flux of neutrinos (as described by SSM) is roughly one third of the flux predicted by SMEP. At first, physicists looked for various ways to account for this discrepancy (essentially by checking and re-checking the error bars of everything involved in both SSM and SMEP), but the discrepancy persisted, and became known as the Solar neutrino problem [9]. Over time, it became increasingly obvious that the Solar neutrino problem is nontrivial, and eventually all mathematical possibilities to reduce SSM to SMEP were exhausted. This generated even more interest, and subsequent experiments finally showed that the neutrino sector of SMEP needs to be modified (again in a non-obvious way) in order to account for that factor of three. So again, the missing factor of three in one of the observables of one effective theory required us to rewrite the fundamental laws of physics.
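
For readers curious about the eventual resolution, here is a sketch of the textbook mechanism (added purely for illustration, not as part of the SSM/SMEP comparison above). Once neutrinos are assigned nonzero masses, an electron neutrino produced in the Sun oscillates between flavors on its way to the detector; in the simplest two-flavor vacuum approximation (and in natural units) its survival probability is

\[
P(\nu_e \rightarrow \nu_e) \;=\; 1 - \sin^2(2\theta)\,\sin^2\!\left( \frac{\Delta m^2 \, L}{4E} \right) ,
\]

where $\theta$ is the mixing angle, $\Delta m^2$ the squared-mass difference, $L$ the distance traveled, and $E$ the neutrino energy. Averaged over the long and variable baseline, and refined by the full three-flavor treatment with matter effects inside the Sun, this is what accounts for the “missing” factor of roughly three.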

According to the adopted terminology, in this example the properties of the Sun are described by SSM, are not explainable by SMEP, and the amount of neutrino flux is strongly emergent.

There is a very important difference between the above two examples that needs to be emphasized. While SMC is not reducible to SMEP already at the qualitative level, SSM and SMEP do agree qualitatively, but not quantitatively. There is an important lesson to be learned here: qualitative agreement between the effective and structure theory is not enough for reductionism. The consequences of this are rather grave, and I will discuss them in the next section.

Finally, the third example of failure of reductionism is what is popularly called the arrow of time problem. It is essentially equivalent to the statement that thermodynamics (as the effective theory) cannot be reduced to any time-symmetric structure theory, nor to SMEP. The second law of thermodynamics [10] implies that the entropy of an isolated system cannot decrease in time, which means that thermodynamics has a preferred time direction, and is not time-reversible. Moreover, the amount of this irreversibility is copious: every physical system with a large number of particles displays time-irreversible behavior. This property makes thermodynamics automatically non-reducible to any time-symmetric structure theory, due to something called Loschmidt’s paradox [11]. As for SMEP, its equations are not completely time-symmetric — technically (pardon the jargon), K-mesons violate CP symmetry, which implies that they also violate the T symmetry due to the exactness of the combined CPT symmetry. However, the amount of time-irreversibility in K-meson processes is extremely small, and nowhere near enough to quantitatively account for the irreversibility of thermodynamics. Moreover, the particles that we most often discuss in thermodynamics (protons, neutrons and electrons) are not the ones that violate time-symmetry in SMEP, so the incompatibility is actually qualitative. Finally, in order to be able to reduce thermodynamics to a viable structure theory, we need to rewrite the fundamental laws of physics, again in a completely non-obvious way.
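
The core of Loschmidt's argument can be stated in one line (a standard observation, included here for illustration). If the structure theory is invariant under time reversal, then for every solution $\vec{x}_i(t)$ describing the particles of the system there is an equally legitimate reversed solution,

\[
\vec{x}_i^{\,\prime}(t) = \vec{x}_i(-t), \qquad \vec{v}_i^{\,\prime}(t) = -\vec{v}_i(-t),
\]

so for every trajectory along which entropy grows there is a mirror trajectory along which it shrinks. The time-symmetric structure theory by itself therefore cannot single out the direction in which entropy actually increases.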

As in previous examples, we say that the entropy increase law is described by thermodynamics, is not explainable by any time-symmetric structure theory nor SMEP, and that the consequent “arrow of time” is strongly emergent.

The lesson to be learned from this example is that “complexity” can be a source of strongly emergent phenomena. Despite the fact that every particle in a gas can be described by, say, Newtonian mechanics, the gas as a collective displays behavior that is special, and explicitly not a consequence of the laws of Newtonian mechanics. And going further down to SMEP does not help either. The complexity can be manifested via a large number of particles, or because of strong/nonlinear/nonlocal interactions between them. As the level of complexity of a physical system increases, “more” becomes “qualitatively different” and stops being more of the same, so to speak.

Analysis

The examples discussed in the previous section guide us towards a set of criteria one must meet in order to establish reductionism between two theories. In particular, one must have:

(a) a well-defined quantitative effective theory,

(b) a well-defined quantitative structure theory, and

(c) a rigorous mathematical proof of quantitative reducibility.

If any of these desiderata is missing, one cannot sensibly talk of reductionism as defined at the beginning.

Perhaps the main point here is the necessity of quantitative formulation of a theory, and the example of Solar neutrinos is a sharp reminder that qualitative analysis may not be good enough. In order to stress this more vividly, let us consider a highly speculative thought experiment.

Imagine that we have managed to construct some hypothetical fundamental theory of elementary particles (one that is more fundamental and better than SMEP). Moreover, suppose that we have also managed to establish reductionism, in the above rigorous quantitative sense, of all physics, chemistry, etc. to this fundamental elementary particle theory. Reductionism all the way up to neurochemistry. Further, suppose that we have even constructed some effective quantitative theory of consciousness, that describes well all relevant observations. The natural idea is to reduce that effective theory of consciousness to the structure theory of neurochemistry, and consequently to our fundamental theory of elementary particles. Suppose that we attempt to do that, and find that neurochemistry, when applied to a human brain, qualitatively predicts all aspects of our effective theory of consciousness, but that there is a missing factor of two somewhere in the quantitative comparison. For example, suppose that the structure theory predicts a certain minimum total number of synapses in a brain in order for it to manifest consciousness. However, the effective theory tells us that consciousness can appear with half as many synapses. All else being equal (and rigorous), this single observation of the number of synapses in a conscious brain would falsify our theory of elementary particle physics!

As ludicrous as this scenario might seem to be, there are no a priori guarantees that it will not actually happen. In fact, a similar scenario has already happened, in the case of the Solar neutrino problem. It should then be clear that nothing short of rigorous quantitative agreement between two theories could ever be enough to establish reductionism.

There are many places in physics (let alone chemistry and other sciences) where this quantitative agreement has not yet been established, some of those places being pretty fundamental. For example, the mass of the proton has not yet been calculated ab initio from SMEP [12]. Now imagine that someone gets a flash of inspiration and finds a way to calculate it. What will happen if this value turns out to be two times as big as the experimentally measured proton mass? It would mean that the periodic table of elements, the whole of chemistry, and numerous other things are not reducible to SMEP. While nobody in the physics community believes that this is likely to happen, the actual proof is still missing. Thus, strongly emergent phenomena may lurk literally anywhere. Another example of a phenomenon that resists attempts at reductionism is high-temperature superconductivity [13]. The jury is still out, but it might yet turn out to be a strongly emergent phenomenon, due to the complexity of the physical systems under consideration, in analogy to the strong emergence of the arrow of time.

Regarding the analysis of the examples of the previous section, one more point needs to be raised. From the epistemological point of view, whenever we are faced with the failure of reductionism, we can try to modify the structure theory in order to make the effective theory reducible. This approach is reminiscent of the idea of parsimony — do not assume any additional fundamental laws until you are forced to introduce them. However, the three examples of reductionist failure above are a sharp reminder of the level of rigor necessary to claim that we are not forced to introduce a new fundamental law when faced with a complicated phenomenon. This means that we must be wary of applying parsimony too charitably, and it opens the question of where the burden of proof actually lies: is it on the person claiming that an emergent phenomenon is strongly emergent, or on the person claiming it is merely weakly emergent? All discussion so far points to the latter, in spite of the “parsimonious intuition” common to many scientists and philosophers. To this end, I often like to point to the following illustrative quote [14]:

“When playing with Occam’s razor, one must make sure not to get cut.”

My conclusion regarding the burden of proof seems obvious from the examples discussed so far. Nevertheless, it is certainly instructive to discuss it from a more formal point of view, detailing the axiomatic structure of theories and the logic of establishing reductionism. Part II of this essay is devoted to this analysis, as well as to the issue of ontological reductionism.

_____

Marko Vojinovic holds a PhD in theoretical physics (with a dissertation on general relativity) from the University of Belgrade, Serbia. He is currently a postdoc with the Group of Mathematical Physics at the University of Lisbon, Portugal, though his home institution is the Institute of Physics at the University of Belgrade, where he is a member of the Group for Gravitation, Particles and Fields.

[1] The article is split in two parts mainly due to size constraints and to facilitate overall readability. However, the two parts should be considered an organic unit, since the arguments given in one are fundamentally intertwined with the arguments given in the other.

[2] The definition of a fundamental theory is epistemological since we may yet discover that the most elementary “pieces” we currently know of can be described in terms of even smaller entities, and thus give rise to another structure theory. From the ontological perspective, the existence of a fundamental theory is dubious, since there is always a logical possibility that the “most elementary” particles do not exist. There are also other issues regarding an ontologically fundamental theory, and I will discuss some of them in part II of the essay.

[3] Any typical effective theory has infinitely many solutions, and we cannot efficiently establish reductionism by comparing solutions one by one. Instead, this is done in practice by comparing the actual defining equations of two theories. Namely, one uses the equations of the structure theory, the vocabulary and the set of asymptotic parameters to “derive” all equations of the effective theory through a single consistent approximation procedure. This ensures that all solutions of the effective equations are simultaneously the approximate solutions of the structure equations, in the given approximation regime.

[4] It is usually the first example of reductionism that an undergraduate student in physics gets to learn about in a typical university course.

[5] The Standard Model of Cosmology is commonly called the Lambda-CDM model.

[6] The Standard Model of elementary particles.

[7] There are many speculative proposals for such a structure theory, but so far none of them can be considered experimentally successful.

[8] Standard solar model.

[9] Solar neutrino problem.

[10] Second law of thermodynamics.

[11] Loschmidt’s paradox.

[12] And before someone starts to complain — no, really, it has not been calculated, despite what you might read in the popular literature about it. If you dig deep enough into the actual research papers, you will find that the only thing that was established in the numerical simulations of lattice QCD is the ratio of proton mass to the masses of other hadrons. These ratios are in good agreement with experimental data, but the masses are all determined up to an overall unknown multiplicative constant, which cancels when one calculates the mass ratios. And this constant has not yet been calculated from the theory. For further information, read the “mass gap in Yang-Mills theories,” one of the Clay Institute Millennium Problems.

[13] High-temperature superconductivity.

[14] I always fail to find an appropriate reference for this statement. If anyone has any, please let me know!


67 thoughts on “Reductionism, emergence, and burden of proof — part I”

  1. I enjoyed this essay a lot and found it quite illuminating. However, I was a bit confused by your usage of the term “weak emergence”. I would have thought that fluid mechanics supervenes on Newtonian mechanics, since the two theories can be used almost interchangeably to explain the same set of phenomena in a given domain of applicability.

    Weak emergence, I thought, is supposed to describe situations in which no such epistemological reduction is possible (=epistemological emergence), whereas strong (or causal) emergence implies the claim that no ontological reduction is possible. It would be great if you could clear up the use of terms for this essay.

    Thanks and Cheers!


  2. I don’t think the “Solar neutrino problem” is a problem in reductionism. Rather, it simply seems to be a problem with the current Standard Solar Model, which is another kettle of fish. Figuring out what is wrong with current solar theory will be all the “fix” that is needed.

    So, I wouldn’t consider this an example, unlike trying to harmonize cosmology and quantum-based standard particle theory. Even there, I think “harmonize” is a better word than “reduce,” Marko, because it may be that neither one “collapses” into the other, but that both are ultimately but parts of some larger theory.

    I do think the third issue, time symmetry, is more amenable to reduction, rather than harmonization, but, it too may turn out to be solvable by some Venn diagram type of overlap.

    Beyond that, the “big ticket” items in reductionism have traditionally been along the lines of “Can biology be reduced to chemistry?” or “Can chemistry be reduced to physics?”

    That’s because, in these cases, classical reductionists — or to use the term that Dan Dennett uses without either irony or self-reflection — “greedy reductionists” — have traditionally presented the different major divisions of science as hierarchical. And, even Marko’s better example isn’t really hierarchical.

    Also, I don’t see issues of emergence on the first example, and per parallels, don’t see them as necessarily all that likely on the second example.

    I think Marko’s final analysis section still has some degree of truth for “big ticket” reductionism, but … while it may be necessary (that’s what this discussion will get into, surely), it is probably not sufficient.

    And, since we’re now using philosophy words, probably philosophy of science is part of what’s needed for cross-science issues. Of course, Massimo told us a month ago that, more and more, we’re getting “philosophers of science X” rather than “philosophers of science.”

    We’re not totally losing generalist philosophers of science, are we, Massimo?

    That said, I know that Marko noted this is part one of a two-parter. I don’t know if he is addressing cross-disciplinary issues in the second piece.

    And, that’s my initial observations to get comment rolling.


  3. Hi Marko,

    First, with regard to epistemology, I would agree that it is not the case that there will always be linkage-statements between theoretical levels, of the sort that you discuss — at least, not unless such linkages are allowed to be arbitrarily lengthy and complex, which rather defeats the point of them.

    But, on your three examples:

    (1) The reduction of cosmology to the Standard Model of particle physics: As you say, the SM does not contain any account of gravity, and cosmology is all about gravity. From that alone we know that the SM is incomplete and cannot serve as a “structure” theory underpinning cosmology. Further, either there is a dark-matter particle that needs to be added to the SM, or there is something we don’t understand about gravity — in either case the “structure” theory is wrong or incomplete.

    We would surely all agree that if the “structure” theory is wrong or incomplete then the attempt to reduce an “effective” theory to it will not work (or work only partially).

    (2) On solar neutrinos. Again, as you say, the answer was that the “structure” model, the standard model of particle physics, was wrong and incomplete. The models that predicted a solar neutrino flux three times the observed value assumed massless neutrinos and thus no neutrino oscillations. Correcting the particle-physics model solved the discrepancy.

    What you are pointing to is a strength of the reductionist conception, namely that we can use it as a tool to find errors. Here the high-level modelling of the Sun exposed flaws in the underlying model.

    (3) On the arrow of time. As we were discussing in a recent thread, I would argue that (1) Newtonian mechanics, plus (2) some degree of random non-determinacy, are sufficient to produce weakly-emergent second-law behaviour and thus the arrow of time.

    Hi Socratic,

    Figuring out what is wrong with current solar theory will be all the “fix” that is needed.

    Actually, the flaw was in the particle physics (treating neutrinos as massless).


  4. I would agree with SocraticGadfly that harmonize, rather than reduce, would be a more useful term. Specifically that thermodynamics is not reducible to kinetic energy, but that they are two sides of a larger whole. Energy is both conserved and relativistic, in the Newtonian sense that for every action, there is an equal and opposite reaction. When direct kinetic energy is applied, the effect is to disperse the energy, like ripples away from a stone in the water and the broader environmental effects eventually serve to balance the initial force. Such as a boat moving through water, with it being pushed away in front and filling in behind, causes an equal amount to move the opposite direction of the boat.
    So, yes, we can distill kinetic energy out of thermodynamics, but it is still only part of a larger whole and it is primarily due to our perspective as mobile organisms that we would view it as primary. Would a plant view kinetic energy as primary to thermodynamics? Lol.
    When direct contact is made and the energy imparted is dispersed throughout the receiving body/field, this is the basis of entropy, the loss of effective energy for an isolated system, yet in thermodynamic cycles, energy is imparted back into that system/frame, from the ones around it and balance/equilibrium, ie. a thermal medium, is created.
    As for entropy creating the arrow of time, it should be noted that as energy is conserved, it is constantly creating new frames/forms and dissolving old ones. So energy goes from past events to future ones, while these events go from being in the future to being in the past.
    Much of this can be extended to the differences between western, object oriented philosophy and eastern, context oriented views. In fact, the western view of time is that since we move into the future and forward in space, the future is in front of us and the past is behind. While the eastern view is that since we can see what is in front of us and know what has happened, but cannot see behind us and don’t know what the future holds, then the past is in front of us and the future is behind.
    This translates to a western view that we are individual entities, moving through our context and thus against what we encounter, while the eastern view is that one is integrally part of one’s context and what goes round, comes round. The whole chicken and egg thing.


  5. I’m curious to see what Marko has to say about this, but I find it interesting that people want to shift the vocabulary from reduction to harmonization. IF two theories are describing the same phenomena at two different levels, then reconciling the two theories is by definition a case of reduction. Also, “harmonization” is neither a philosophical nor a scientific term in this context, as far as I know.


  6. Massimo,
    So which would be the more fundamental level? If energy is eternally conserved and possibly even the concept of beginning and endings only apply at the level of form, then one might argue that kinetic energy is derived from thermodynamic processes.
    I wouldn’t try being that radical and would argue that, like nodes and networks, they are opposite sides of the same coin. Consider as well, what is required, in terms of motion, expression/absorption, medium, etc, required for even the effect of kinetic energy to occur.


  7. Harmony is probably not the best term for it but I think what people are trying to convey is a view where there is no privileged one true way of describing the world. There are many different “true descriptions” of the world but only one real world, and to say one is more fundamental than another is mistaken.

    The real conflict would be when the different true descriptions are in conflict, as the neutrino example highlighted and in that case, one has to be wrong as the different descriptions have to at the very least not contradict each other.

    At least that is my view, heavily influenced by Susan Haack. I’m not quite sure where that places me on the epistemological/ontological reductionism spectrum but it seems to agree fairly well with Marko’s views presented here, which I assume is some form of weak emergence.


  8. imzasirf, I never bought the not one privileged way of describing the world notion. Yes, it’s true that there are essentially always multiple ways to describe the world (underdetermination of theory by data), but a lot of them are pretty crappy. And at any rate I don’t think that gets one out of the pickle Marko has been describing.

    brodix, I don’t buy the opposite sides of same coin analogy either. First, it’s an analogy, not a rigorous description of what scientists are trying to do. Second, to *explain* temperature, say, in terms of kinetic energy, and thus thermodynamics in terms of the underlying mechanics of particles, is *precisely* what is meant by theory reduction, it is the way both philosophers and physicists themselves use the term (when they do, in the case of the latter).


  9. A few notes, primarily on vocabulary, a synonym and … Wittgenstein!

    First, primarily to Massimo: This relates tangentially to some comments on your “change of course” thread, as well as to simple disagreement as to how to analyze the examples Marko presents.

    Are we supposed to find online, or create ourselves, a “Philosophical English” to “everyday English” dictionary and find a technical term for “harmonization”? In everyday English, it’s the right word for what Marko’s proposing, as I see it. Per his first example, and referencing Venn diagrams, I noted that current cosmology and current particle theory may have neither one “reducing” to another but instead both being part of a larger theory.

    Let’s go back to physics and look at fundamental forces.

    Would the unification of the weak force and electromagnetism be called “reduction”? Certainly NOT on the physics side of the coin, at least. (I’ve never heard it called such.) Would you call it that on the philosophy side? If “unification” is a better word for what I said in my first post than “harmonization,” fine. The concept I’m presenting is the same. On Marko’s first example, it’s quite possible that the eventual outcome is a unification or harmonization, no “reduction” involved.

    That said, you did qualify your observation about Marko with an “if.” Well, what I’m saying is that your “if” is not what’s at stake, as I see it. Since I don’t see it as being what’s at stake, I stand by either “harmonization” or “unification.”

    So, no, I know the word I was using. (Well, if I had thought of the example from fundamental forces, I would have used “unification” instead of “harmonization,” perhaps.) So, since I’m saying I reject your “if,” I’ll stand by one or the other of the two words.

    “Unification” is a perfectly appropriate word in physics.

    And, in your own scientific field of biology, per an essay of yours a month ago, “synthesis” is an appropriate word.

    And, thus we’ve covered the “everyday to science” translation at least, in terms of everyday language. Since this is a scientific issue first and only secondarily a philosophical issue, and one based on a scientific issue, I don’t worry whether “harmonization” (or “unification” or “synthesis”) is in The Philosopher’s Dictionary or the SEP.

    But, finally, to make it philosophical, let’s turn to our friend Wittgenstein. Brodix clearly understood what I meant, therefore the communication was clear. No need to be silent on use of “harmonization” (or “unification” or “synthesis”) and no private language games involved.

    ==

    Second and briefly, Coel, thanks. I thought that the solar neutrino problem had been solved, but forgot to look it up. That said, I’d disagree with you somewhat on interpretation of Marko’s second example.


  10. Hi Marko,

    Thank you for an interesting, well-written article.

    I agree with you that it is not always possible to talk of the entities of an effective theory in terms only of the entities of a structural theory. I don’t for example believe that it is possible to define the concept of love in terms of neurons, synapses and neurotransmitters (if you’ll permit me to relax your definition of theory). I agree that this kind of strong reductionism is misguided, but then I doubt that many would disagree.

    However, I do disagree with a number of points in the article.

    In particular, I disagree that the failure of strong reductionism implies “that phenomena described by those solutions are not explainable by the structure theory, and that these phenomena are strongly emergent with respect to the structure theory.”

    Firstly, on explainability, I would argue that there are ways of deriving explanations of effective theories from structural theories without bridge laws or strong reductionism. Careful examination may sometimes be enough.

    An example might be the generalised concept of a glider in Conway’s game of life (GoL). There are an infinite number of kinds of glider, so that it is probably not possible to define in the basic language of the structural theory (which speaks only of cells having neighbours and being alive or dead) what constitutes a glider. And yet the basic laws of GoL entail that all these gliders can exist. By examining these shapes (and particularly by simulating them), we can see that the effective theory we produce to describe these gliders emerges from the structural theory, but not in any strong sense. There is no need for any truly novel laws, rather the effective theory is necessarily true given that particular structural theory. I think this kind of insight amounts to an explanation of the effective theory by the structural theory even if it does not meet the stringent requirements of strong reductionism.

    Secondly, you have argued that disagreements between fundamental theories and experimental data show that high level phenomena are strongly emergent with respect to the fundamental theories. I believe this to be a very unusual way to see things. I hazard that most physicists would instead interpret this state of affairs as indicating that the structural theory is simply incorrect. Indeed, it has to be as it makes predictions which are not consistent with observation.

    You and others seem to imply that the failure of strong reductionism is fatal to the idea of a hierarchy of scientific theories, but I don’t think that is justified. Even without strong reductionism, we can still see that high level descriptions supervene on lower level descriptions. Even if we cannot reduce psychology to neuroscience, that doesn’t mean that the entities of psychology do not supervene on those of neuroscience. There is still a hierarchy, even without reductionism.

    I will save my thoughts on your analysis of the emergence of the arrow of time for another comment.


  11. Hi Marko,

    Excellent article – clear and thorough. I will be interested to hear your response to Coel but I would have thought that, as a rule of thumb, the burden of proof is always on the person making a claim, whatever it is. That is to say the ‘default’ for any case is not ‘strongly emergent’ or ‘weakly emergent’ but instead ‘we do not know yet’. But that may be my agnostic bias at play.

    I have always said that general reductionism is a hypothesis and not an axiom. I think that this was the position of most scientists prior to about 1950. The situation now appears to have reversed.

    There is something which puzzled me from previous discussions and seems relevant here and that is the time reversibility of Newtonian physics. I may be misunderstanding but it seems to me that there are situations where it is not.

    For example, to take a classic exercise in physics, if I have the equation for a mogul slope and given Newtonian mechanics and the position and mass of an object on a slope and some figure for friction, I can (perhaps after refreshing my rusty maths) derive an expression for the path that it takes down the slope. But I cannot time reverse this and derive an expression for an object going up a slope because there is more than one path which could have resulted in this position.

    Even if I have the object’s velocity I still cannot do it because there is more than one prior path which could have resulted in particular vectors.

    There appears to be an asymmetry between “where it will go” and “how it got there”.

    As I say, my understanding of the situation may well be wrong.


  12. Hi Marko,

    Thank you for posting your thoughts on the arrow of time.

    The issue you have brought to our attention is that the effective theory of the second law of thermodynamics (that entropy always increases) is not reducible to any time-symmetric structural theory. That may be correct for a strong interpretation of “reducible”, but I completely disagree that we cannot explain the second law in terms of a time-symmetric theory. Indeed I think we can derive it.

    I expect you agree that “entropy … cannot decrease in time” is a simplification. Rather, entropy tends not to decrease. Any successive state of a system is likely to be more entropic than any preceding state, but the reverse situation can also arise. If you wait long enough you are effectively guaranteed to see a rise in entropy because the probability of any other result rapidly diminishes.

    This is a subtle point but I think it is important.

    The next point to bear in mind is that entropy can be thought of as a measure of how probable a particular state is, given all the ways the system could be. Having all the molecules of gas gathered into one corner of a room is improbable (low entropy). Having them evenly distributed is probable (high entropy). There are simply more ways of arranging them in the latter configuration than the former.

    Statistically, each successive state is likely to be more probable than each preceding state just because that’s what probability means. As a consequence, whatever the structural laws of a system, if you start them out in a very low entropy state, they will necessarily tend to reach higher entropy states, simply because of regression to the mean.

    This is true whether you play the structural laws forward in time or backward. If you start with a model of the gas particles gathered in a corner of a room and apply the structural theory forward in time or backward, either way you will see them rapidly spread out. You seem to imply that this is incorrect — we don’t see gas collapsing to a corner — but I would say it is correct and this collapse describes a very low probability but possible event.

    There are two ways to express the key point. The first is that the second law of thermodynamics can indeed be explained by a time-symmetric theory because we can prove that it must obtain given even time symmetric theories.

    The second, perhaps more controversial way, is to claim that even the second law of thermodynamics is time-symmetric. The law is actually that systems tend to evolve away from states of low entropy, whether that evolution happens forward or backward in time. It only appears to us to be time asymmetric because we are at one particular temporal side of that moment of minimal entropy we call the Big Bang. If it turns out that time extends before the Big Bang we might expect to see entropy increasing in the opposite direction.


  13. Philip, that doesn’t seem right to me. Translating from one language to another is just that, translating, it’s like a horizontal move. Reduction is a process of explaining higher level phenomena in terms of lower level ones, a different type of task.


  14. Massimo,
    Yes, but whence come these “underlying mechanics of particles?” I can understand theorists being reluctant to question basic conceptual tools, but isn’t that within the scope of the philosophy?
    Given that we are mobile entities and our mental process is a function of distilling out sequences of concise frames from large amounts of non-linear energy/information, shouldn’t there be some degree of skepticism, when we reduce/distill out “moving particles” as foundational, that some degree of subconscious bias might be at work? What powers this motion? Often it would seem, at some level, there are polarities, whether electrical or mechanical, in which something pushing/charged in one direction is balanced by something pushing/charged the opposite direction.
    Thermodynamics does examine potential dichotomies of action, though its most useful aspects are how it organizes our understanding of the order of non-linear processes, while kinetic energy is linear.
    I apologize for using the colloquialism. Next time I’ll refer to it as a dichotomy.
    I seem to be using up my post allotments pretty quickly. Fortunately it is broken into two parts.


  15. I did say ‘rule of thumb’, and there is obviously no stone tablet about the burden of evidence.

    When we say ‘X is true’ ( or ‘X is probably true’) then we like to think that there is a reason why we believe that X is true and that this is a good reason.

    I cannot imagine being satisfied when claiming ‘X is true’ and then introspecting for the reasons why I think that and coming up blank and then saying – ‘but I don’t have the burden of evidence’.

    That only seems to add a new proposition which I am stating as true and which I don’t know why I believe that it is true.

    It seems to me that if I had a good reason for believing the new proposition – ‘I don’t have the burden of evidence’ – was true then this would count as my reason for believing the first proposition.

    Of course this applies to claims that I would maintain in the face of questioning. As a matter of practicality there are innumerable everyday things I have to believe without evidence in order to make it through each day.


  16. Miramaxime,

    Thanks! I am painfully aware that various people understand and use the words reductionism and emergence in different ways. That is why I provided a definition in the article — and that is the definition I use throughout. Other definitions are of course possible, but I will not use them in this article. 🙂

    SocraticGadfly

    See below for my understanding of the “harmonize” versus “reduce” debate.

    [Regarding] cosmology and quantum-based standard particle theory […] it may be that neither one “collapses” into the other, but that both are ultimately but parts of some larger theory.

    If I may reformulate your statement as a question — does a “larger theory” exist such that both cosmology and SMEP are reducible to it? This is the issue of ontological reductionism, to be discussed in part II of the article. Short answer: it might exist, but it would always be incomplete (see also my response to Coel below).

    “Can biology be reduced to chemistry?” or “Can chemistry be reduced to physics?”

    Epistemologically, it’s almost impossible, since neither chemistry nor biology currently satisfies criterion (a) (see the Analysis section of the article), and it is unlikely that they ever will. Ontologically, the situation is even worse, but please wait for part II regarding ontological issues. So overall, I’d answer “no” to both questions. 🙂

    Would the unification of the weak force and electromagnetism be called “reduction”?

    Definitely not. Unification is an entirely different concept from reduction. I haven’t mentioned unification anywhere in the article, not even implicitly.

    Coel,

    We would surely all agree that if the “structure” theory is wrong or incomplete then the attempt to reduce an “effective” theory to it will not work (or work only partially).

    Wholeheartedly agreed! 🙂 But be aware, in part II of the article I will argue that any structure theory must always be incomplete — rendering reductionism as an always-partial endeavour at best.

    What you are pointing to is a strength of the reductionist conception, namely that we can use it as a tool to find errors. Here the high-level modelling of the Sun exposed flaws in the underlying model.

    When a structure theory is used to find an error in the effective theory, then I’d call it a strength of reductionism. When an effective theory is used to find an error in the structure theory, then I’m not so sure what to call it. It is certainly useful for finding errors, but it is somehow weird for “failure to reduce” to be called the strength of reductionism.

    On the arrow of time […] I would argue that (1) Newtonian mechanics, plus (2) some degree of random non-determinacy, are sufficient to produce weakly-emergent second-law behaviour and thus the arrow of time.

    It is plausible that this could be proved, provided that one introduces your requirement (2) into some suitable structure theory. But note that (2) is nowhere to be found in any known fundamental theory, with the weird exception of the collapse postulate in QM (which happens to be an even bigger problem than the arrow of time itself). I’ve heard of attempts to derive the second law of thermodynamics from irreversibility of measurement in QM, but no successes.

    And as a side note, please be careful when you combine randomness and non-determinism in a single sentence — one must always be mindful not to confuse them. Laypeople may even get confused enough to identify the two concepts. 🙂

    Labnut,

    Thanks a lot! 🙂

    Brodix,

    Just two short comments. First, I understand and appreciate the liberal use of analogies for the purpose of finding inspiration. But note that I try to avoid using them for anything but inspiration — in particular, they can only go so far when one wants explanation and rigor. So be mindful that physicists have very precise definitions for many terms you use, and would find some of your statements incorrect, unless they read very charitably. That said, I liked the idea of being unable to see the future connected to the idea of being unable to see behind one’s back. 🙂

    Second, note that (fundamentally speaking) energy is not really conserved. Sometimes I disagree with Sean Carroll (or at least think that he overstates some things), but his piece on nonconservation of energy is something that is really well written, and I recommend it:
    Energy is not conserved.

    Massimo and everyone,

    Regarding harmonization — my intuitive feeling for that word assumes some much weaker notion than reduction. Namely, reduction implies that the effective theory is a rigorous consequence of the structure theory, while harmonization could just mean that the two theories are not contradicting each other. So harmonization is implied by reduction, but the opposite clearly does not hold. There are many theories which are mutually “harmonious” (i.e. logically compatible) but still not reducible to each other.

    So I would not consider reduction and harmonization as synonyms or such. Moreover, in the article I have defined what I mean by the former, and haven’t even introduced the latter — so my suggestion is to stick with “reduction” and drop “harmonization”. Intuitively I don’t feel it conveys the meaning that “reduction” is supposed to convey.

    (More replies follow soon…)


  17. Marko, your 3 examples of reductionism failure are all invalid.

    1. While dark matter is a mystery, there are proposed explanations that would not violate the SM. For example, modified gravity, axions, or an undercounting of known particles. It could also be a supersymmetric particle, and that is favored by those who believe that supersymmetry is needed to complete the SM anyway. If dark matter turns out to be some sort of substance that is only meaningful at a galactic level, then that might be a failure of reductionism, and Nobel prizes would go for showing that. No evidence so far. You have just taken an unknown question and blamed it on reductionism.

    2. It is a gross exaggeration to say that the solar neutrino problem “required us to rewrite the fundamental laws of physics.” All it did was to imply that neutrinos had a tiny nonzero rest mass instead of a zero rest mass.

    3. The arrow of time is not in conflict with the SM. As well-explained by Disagreeable Me, time symmetric laws can and do lead to entropy increasing with time, and hence an arrow of time. Furthermore, it is not true that the SM is (approximately) time symmetric. Besides the CP violation, the SM has what some people call “collapse of the wave function” and others call “decoherence”. This is certainly not time symmetric, and has a distinct arrow of time.

    Your complaint about calculating the proton mass is very strange. Of course the calculation depends on other constants of nature. Do you really want to calculate all those constants of nature from first principles, and not use any measured values? I guess that some string theorists had that unlikely goal. But if one of them claims to achieve the goal, but computes a wrong value for the mass of the electron, are you going to claim that reductionism failed?


  18. Marko,
    Thanks for the reply, but those were not really my point. I’m asking whether thermodynamics can unequivocally be described as emergent from the kinetic activity of particles, or is there a dichotomous relationship, in which such feedback loops of action are required to explain how these particles come to be and what gives them motion. Presumably every action has a reactive balance and any particle substantial enough to manifest kinetic energy must have originated from somewhere, such as coalescing out of a field and that would seem the sort of action, coalescing, condensing, etc. which non-linear processes might help explain.
    Then there is the fact that once expressed, kinetic energy disperses back out, like ripples away from a stone hitting water.
    Now if we consider thermodynamic processes, they are very effective at creating kinetic energy. Hurricanes and tornados come to mind, while actual observations of such linear actions are always in some larger context. So if kinetic energy is foundational to thermodynamics and thermodynamics is only emergent from it, can you offer examples of where it does manifest in a pure form, outside of those larger feedback loops?


  19. Hi DM,

    An example might be the generalised concept of a glider in Conway’s game of life (GoL). There are an infinite number of kinds of glider, so that it is probably not possible to define in the basic language of the structural theory (which speaks only of cells having neighbours and being alive or dead) what constitutes a glider. And yet the basic laws of GoL entail that all these gliders can exist. By examining these shapes (and particularly by simulating them), we can see that the effective theory we produce to describe these gliders emerges from the structural theory, but not in any strong sense.

    You need something more than the rules of GoL to get gliders, you need the right initial condition (program).

    But since GoL is a universal computer we can also have the Sieve of Eratosthenes on it. Would you then say that the basic laws of GoL entail the Sieve of Eratosthenes? You would also have to say that the Sieve of Eratosthenes is entailed by the rules of Rule 110 or the rules of a Turing Machine, etc.

    Similarly, the mathematical formulation of any mathematical or physical theory is implementable on GoL or Rule 110 or a Turing Machine.

    But if you say anything that runs on GoL is entailed by the rules of GoL, then GoL is entailed by the rules of Rule 110 and vice versa. So it all comes down to everything being the weakly emergent behaviour of (or everything being reducible to) the abstract concept of a universal computer.

    Or perhaps a NAND gate, from some combination of which you could build a universal computer. But would you also say that nothing more is required than the rules of a NAND gate to get quantum physics, or that the rules of a NAND gate entail quantum physics?


  20. Reductionism itself is an emergent phenomenon.

    Reductionism is something that is observed to happen, theory after theory. This being said, reductionism is perforce limited, as mathematics and physics are very far from being reductive, or reduced, themselves. They both have an “s” at the end of their names for the simple reason that both are hydras floating in the air, with bodies that may, or may not, be connected.

    Nobody knows very well how one gets from Quantum Theory to Classical Mechanics, or from Axiomatics in Logic to any field of mathematics, for example.

    To those who do not know mathematical and physical theories in maximum detail, they appear to reduce to sets of equations. But, in truth, they never do. Even mathematics is not logical, in the sense that all the logic therein is glued together by traditional hand-waving.

    This is not obvious, and I drove somebody who had just got a Fields Medal into a fury with these ideas. The very fact that this towering mathematician, who was actually a friend, was so deranged by the notion was proof enough of the correctness of my point of view.

    Famously, Russell said that, when he told a famous mathematician that he wanted to become one too, the professor told him “mathematics is the subject in which we know neither what we are talking about nor whether what we say is true.” Russell, decades later, although he tried to establish foundations, admitted that was true.

    The ascent of Category Theory is the ultimate compliment that way: it’s simply a catalogue of complicated recipes that work. The Foundations are no worry. Reduction to fundamentals is no worry. It’s all about prepared recipes, cooked, ready to eat.

    Fluid Mechanics does not reduce to Classical Mechanics. Not yet. From Classical Mechanics, one can deduce an equation for fluid flow, the Navier-Stokes equation. It is an open problem, one of the Clay Millennium Prize problems, whether smooth solutions of the Navier-Stokes equations always exist. (I guess they do not, because Quantum Mechanics has to be taken into account).

    Overall, saying a theory reduces to a set of equations is not correct. Equations are sentences, propositions. However, in any logic, the propositional calculus has to be accompanied by a “universe”, in which terms of the logic are found.

    One logic is not more fundamental than another. They have different universes, they cannot be reduced one into the other. Trying to “reduce” Plate Tectonics to the so-called “Standard Model” is of no significance. Actually the Standard Model cannot be reduced to Quantum Mechanics, although it includes most of the latter.

    What am I driving at? Reductionism is a chase after the goose that lays the golden eggs. And it does produce golden eggs: namely, the supposedly more fundamental theory is often found incomplete. It is, for example, not clear that Newton really demonstrated Kepler’s laws from his set-up. But out of the main difficulty came, much later, Gauss’s theorem.


  21. DM,

    Thanks! 🙂

    on explainability, I would argue that there are ways of deriving explanations of effective theories from structural theories without bridge laws or strong reductionism. Careful examination may sometimes be enough.

    It might be enough, if you are lucky. 🙂 But without a rigorous quantitative reductionism (I am guessing that is what you mean by “strong reductionism”), you can never be sure that your structure theory is not off by a factor of two somewhere, and thus requires a whole section to be completely rewritten (as has happened with SMEP regarding Solar neutrinos).

    Regarding Game of Life,

    I think this kind of insight amounts to an explanation of the effective theory by the structural theory even if it does not meet the stringent requirements of strong reductionism.

    I cannot but notice the absence of the effective theory here. What is it specifically? What are the quantitative rules of the effective theory (of, say, gliders)? To say that gliders exist does not constitute an effective theory — you need rules which govern their behavior, determine their motion, size, and other properties. I don’t know of such an effective description, and it would be necessary in order to discuss its reducibility to the GoL structure laws.

    you have argued that disagreements between fundamental theories and experimental data show that high level phenomena are strongly emergent with respect to the fundamental theories. I believe this to be a very unusual way to see things. I hazard that most physicists would instead interpret this state of affairs as indicating that the structural theory is simply incorrect. Indeed, it has to be as it makes predictions which are not consistent with observation.

    This is similar to Coel’s remarks, but I should warn you that it is very slippery to call a theory “simply incorrect”. At a formal level, every theory is “incorrect” in the sense that it is always incomplete (it does not predict all phenomena that might occur in Nature), and it must always be incomplete (I’ll explain why in part II). So we formally always deal with “incorrect” theories.

    Correctness is an ontological issue, so I suggest that you revisit your remark after reading part II of the article. As far as epistemology goes, the only thing we can ask is the following: given two theories, can one of them (effective theory) be regarded as a rigorous consequence of the other (structure theory)? This is a mathematical relationship, a well-defined question irrespective of any (in)correctness of the two theories.

    You and others seem to imply that the failure of strong reductionism is fatal to the idea of a hierarchy of scientific theories, but I don’t think that is justified. Even without strong reductionism, we can still see that high level descriptions supervene on lower level descriptions.

    Regarding hierarchy of scientific theories — I haven’t said that it doesn’t exist. On the contrary, there is a natural hierarchy of theories, where each describes phenomena at a certain scale (i.e. geometric size or level of complexity of the objects being described). The question of reductionism is not to challenge the existence of these theories, but to challenge their mutual (in)dependence — is the effective theory (defined for a given scale) an exact consequence of a structure theory (defined for a smaller scale) or not? They should certainly not contradict each other, but the effective theory could in principle impose additional constraints on the phenomena, which the structure theory does not impose (while remaining compatible with them). Those additional “laws” are called strongly emergent. But strong emergence does not deny the existence of a hierarchy of theories.

    I have to admit that I am not familiar enough with the notion of supervenience, and I have a feeling that it can have a whole spectrum of various meanings, with various degrees of precision. So I cannot really tell what you mean by it.

    Regarding entropy and the arrow of time,

    Any successive state of a system is likely to be more entropic than any preceding state, but the reverse situation can also arise.

    Sure. It just arises much more rarely, i.e. the frequencies of gas expanding in a box and gas retracting into the left compartment are not equal. The problem is that there is nothing in the fundamental laws that specifies these frequencies. In technical terms, the fundamental laws do not specify a probability measure over the phase space of the system in question.

    The probability measure and the entropy are very related, almost equivalent (S = log W). So you cannot reduce one by appealing to the other, it would be circular.
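
    To make the circularity concrete with a toy count (assuming equal a priori probabilities for microstates, which is exactly the extra postulate at issue), coarse-grain N molecules only by which half of the box each one occupies. Then

        W(all N in the left half) = 1 ,
        W(evenly split) = \binom{N}{N/2} \approx 2^N / \sqrt{\pi N / 2} ,

    so S = log W assigns the evenly split configuration an entropy larger by roughly N log 2, and “expanding” histories vastly outnumber “retracting” ones. But every step of that counting is done relative to the assumed uniform measure; the structure theory itself does not supply it.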

    whatever the structural laws of a system, if you start them out in a very low entropy state, they will necessarily tend to reach higher entropy states, simply because of regression to the mean.

    But the position of the “mean” is defined as a highest-entropy state. So you are saying that if one starts from a low-entropy state, one will reach a high-entropy state because that state is defined… erm… to have high entropy. I think the circularity is obvious. 🙂

    If you start with a model of the gas particles gathered in a corner of a room and apply the structural theory forward in time or backward, either way you will see them rapidly spread out. You seem to imply that this is incorrect — we don’t see gas collapsing to a corner — but I would say it is correct and this collapse describes a very low probability but possible event.

    What makes it a low probability event? The structure theory is silent on that question. It is of low probability because we observed it experimentally to be such, i.e. we have observed the second law of thermodynamics in action. So it’s circular argumentation every time.

    [The second law of thermodynamics] only appears to us to be time asymmetric because we are at one particular temporal side of that moment of minimal entropy we call the Big Bang.

    Sure, as long as you accept that the Big Bang has minimal entropy. But that statement (even if true) is not a consequence of any (known) fundamental laws of physics. It is an independent assumption, just as the second law itself is.

    Robin,

    Thanks! 🙂

    the burden of proof is always on the person making a claim

    Let me just say that “irreducible until proved reducible” has an analogy to “innocent until proved guilty”. As Massimo suggested, this will be more clearly explained in part II of the article. Stay tuned! 🙂

    if I have the equation for a mogul slope and given Newtonian mechanics and the position and mass of an object on a slope and some figure for friction, I can […] derive an expression for the path that it takes down the slope. But I cannot time reverse

    Sure, that is correct. Friction is not a time-reversible force, by its definition. It is also not implied by Newton’s laws of mechanics, you need to postulate it as a force acting between parts of the system.

    It should be noted that Newtonian mechanics (i.e. the three laws of motion) does not form a closed system of equations, since you need to specify which forces act on each body, and this does not come out of Newton’s laws automatically. The specification of forces goes over and beyond Newton’s laws, and can introduce time-irreversibility if you want it to.
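
    A one-line illustration (just the textbook damped oscillator, not anything specific to our discussion): Newton’s second law plus a postulated spring force and a postulated friction force gives

        m \ddot{x} = -k x - c \dot{x} .

    Under t -> -t the acceleration term is unchanged but \dot{x} flips sign, so the equation turns into m \ddot{x} = -k x + c \dot{x}, a different law. The irreversibility lives entirely in the postulated friction term (the coefficient c), not in the three laws of motion themselves.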

    Massimo,

    On the issue of burden of proof […] Maarten Boudry and I have written a paper on this in Philosophia

    This sounds very interesting, thanks for the reference! 🙂

    Philip,

    The ability to “reduce” one theory (in a mathematical language) to another corresponds to the ability to translate programs in one programming language to another.

    No, it doesn’t. A theory in physics is more than just a language — it contains a set of statements that are considered true. What you describe is that the algorithm given in one language can be translated to another language. What reductionism entails is the question whether two different algorithms give the same output (for a certain class of inputs). You can specify the two algorithms in different languages, or in the same language, but neither case can establish reductionism simply by “translating” between languages.
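
    A toy sketch of the distinction (a hypothetical Python example, not anything from the article):

        # "Translation" keeps one algorithm and changes its language; a reduction-style
        # claim is that two *different* algorithms agree on their outputs.

        def sum_by_iteration(n):
            # step-by-step bookkeeping, playing the role of an "effective" description
            total = 0
            for k in range(1, n + 1):
                total += k
            return total

        def sum_by_formula(n):
            # a closed-form law, playing the role of a "structure" description
            return n * (n + 1) // 2

        # Rewriting sum_by_iteration in another language proves nothing about
        # sum_by_formula; the reduction-like claim is that the two always agree:
        print(all(sum_by_iteration(n) == sum_by_formula(n) for n in range(1000)))  # True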


  22. Marko: Thanks for the response. That said, per the physics word I used, re your first example, I think “unification,” not “reduction,” would be the word I would use for any theory of quantum gravity, which of course is the endpoint of meshing general relativity with quantum mechanics. Again, I would not consider this “reduction” any more than I would the unification of the electromagnetic and weak nuclear forces.

    I mean, the goal of “joining” gravity and the other three forces is referred to as the Grand Unification Theory. So, with dark matter as a subpart of that, I still would talk of failed unification, not failed reduction.

    ==

    Massimo: Thanks for the link. That said, I’m personally kind of leery of the prudential burden of proof, especially if Bayesian stats is part of it. That’s primarily from seeing how Mark Carrier has folded, spindled and mutilated Bayesianism.

    For starters, what if two sides disagree on what the relative costs are? So, do we then go, a la Doug Hofstadter, to the prudential burden of proof of the prudential burden of proof? On some of the examples you cite, exactly such disagreements happen in the political sector on a regular basis. To take another example, consider the “loser pays” idea in civil law, since your paper at first only references criminal law. Loser-pays ideas greatly shift the prudential burden of proof, but different political groupings will argue over how heavy a prudential burden that creates, how much they shift it, and how much we should weight alleged benefits.

    You then cite Pascal’s wager. Well, if, existentially, for a secularist, this life IS eternity, he’s going to put much different value on the terms of the wager than a theist.

    I think the idea sounds promising, but, in reality? It looks like it could be a barrel of Gordian knots. I know you go on to note “the intersubjective agreement among relevant experts is the most suitable candidate to play this role (of avoiding subjective arbitrariness),” but that assumes a degree of intersubjective agreement that may not always be available.

    Let’s look again at law. Nine U.S. Supreme Court justices are theoretically all very expert in constitutional jurisprudence. We see plenty of 6-3 and 5-4 decisions and, except on truly cut-and-dried issues, few 9-0 or even 8-1 votes.

    Per my Hofstadter riff, that’s another “meta” issue. How do two different parties decide on what’s a reasonable cost for trying to establish the prudential burden of proof?

    Beyond your example of pseudoscience or my high-octane partisan politics, prudential burdens of proof may also spark disagreement on questions more closely linked to science. One example would be how much we should spend on fusion energy research; another, how much the government should spend on Obama’s BRAIN Initiative (http://en.wikipedia.org/wiki/BRAIN_Initiative). How much of a burden of proof should I bear to show that a super-ITER (http://en.wikipedia.org/wiki/ITER) will produce a macro-scale sustainable energy surplus in the next 50 years, or that Obama’s brain project will cure schizophrenia in the next 50 years?


  23. Dr. Vojinovic,

    Could you comment on how your views, here, might apply to reductionism with respect to the social and physical sciences? (I understand entirely if you don’t have a view on the subject — it just happens to be where *I* find the most interesting questions regarding reductionism.)


  24. In a related vein, Rapaport [1998]
    http://philpapers.org/archive/RAPIIS.1.pdf
    discusses the question of *implementation* of an algorithm or program as a physical process: he decides that implementation is not reduction (down to physics), instantiation, or supervenience (I found his discussion of these three terms very useful).

    Another approach linking computation and emergence in physics is James Crutchfield’s (Santa Fe Institute) work on computational mechanics and statistical complexity. He claims that the description of the intrinsic computational capacity of a given physical system, for example one undergoing a phase change, requires moving up one level in a hierarchy of representation.


  25. Hi Marko!

    Thanks for your response. Given that you wish to stick to your terminology, which is certainly fine, it might be helpful to point out that in your use of the word, the existence of “strong emergence” is perfectly consistent with the reductionist project being correct. After all, we have no a priori reason to expect that we should have figured out all fundamental and effective theories by now.
    I am afraid that this might cause some confusion otherwise.

    In regard to your exchange with Coel, I do not understand why you would not regard a successful modification of a fundamental theory due to an incompatibility with an effective theory as a success for reductionism. After all, it was never the reductionist claim that was at fault (that all effective theories can be reduced to fundamental structure theories) but the fundamental theory. Isn’t the reductionist stance responsible for giving us any reasonable expectation that we can learn something about structure theories while studying effective theories in the first place?


  26. Aravis,

    I’ll let Marko address your question, of course, but my guess is that he will not have a view on this because he confines himself to reduction between theories that are expressed in a rigorously mathematical way. This pretty much excludes all of the social sciences (with the possible exception of some parts of economics?), much of biology (except population genetics), all of geology, etc.


  27. > But without a rigorous quantitative reductionism […] you can never be sure that your structure theory is not off

    All features of an effective theory might be perfectly derivable from a structural theory even without being able to make a direct mapping between the language or terms of the two theories. Not having a direct mapping does not mean that we cannot make rigorous quantitative predictions in terms of the effective theory by working from the structural theory.

    > I cannot but notice the absence of the effective theory here. What is it specifically? What are the quantitative rules of the effective theory (of, say, gliders)?

    The basic idea here is not unlike Newton’s first law, and can be put on a mathematical footing if desired. Gliders travel through a homogeneous space in a constant direction with a certain periodicity forever or until they encounter a disruption to the background homogeneity. Gliders can be seen as “particles” which instead of having spin, mass, charge and so on have x-velocity, y-velocity, phase, periodicity, area and so on.
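
    As a minimal sketch (Python, assuming the standard GoL rules; the coordinates below are just one phase of the glider), the effective regularity of constant diagonal velocity with period 4 can be checked directly against the structural rules:

        from itertools import product

        def step(live):
            """One generation of the GoL structural theory on an unbounded grid."""
            counts = {}
            for (r, c) in live:
                for dr, dc in product((-1, 0, 1), repeat=2):
                    if (dr, dc) != (0, 0):
                        key = (r + dr, c + dc)
                        counts[key] = counts.get(key, 0) + 1
            # birth on exactly 3 live neighbours, survival on 2 or 3
            return {cell for cell, n in counts.items()
                    if n == 3 or (n == 2 and cell in live)}

        glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}   # one phase of the glider

        state = glider
        for _ in range(4):                                  # one full period
            state = step(state)

        print(state == {(r + 1, c + 1) for (r, c) in glider})   # True: displaced by (1, 1)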

    It is also possible to build Turing machines as structures within the GoL. Given those configurations and the GoL structural theory, the behaviour of the Turing machine is necessary and entailed, and therefore explained, even though the effective theory describing Turing machines is not strongly reducible to GoL. In particular we do not need to posit any additional laws to determine the evolution of the system. Whatever new laws we come up with are useful to us as agents with limited ability to cope with the fine detail of the structural theory.

    > can one of them […] be regarded as a rigorous consequence of the other […]?

    OK. But a false answer does not imply that the effective theory is strongly emergent with respect to the structure theory. At most it just means that it doesn’t reduce to it. Strong emergence is (I feel) an altogether different ontological claim.

    > The problem is that there is nothing in the fundamental laws that specifies these frequencies.

    That’s why I concede that strong reduction may not be possible. There is no direct mapping using only the language of the structural theory. But that doesn’t mean that these frequencies cannot be derived from the fundamental laws by analysis (e.g. comparing the number of ways for the system to be in one macroscopic state as compared to another).

    If you program a system with a simple naive time-symmetric set of laws and you start it off in a low-entropy state, it will automatically reach higher levels of entropy without any additional laws needing to be programmed in.

    > It is of low probability because we observed it experimentally to be such

    Experimental observation is not required. We can predict it is a low probability state from the structural theory alone. Similarly, the structural theory of a coin-flipping system is silent on whether a string of 10 tails in a row is a low-probability outcome, but it is trivial to derive this result.
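
    For instance (a quick Python sketch, with the number of trials chosen arbitrarily):

        import random

        analytic = 0.5 ** 10            # P(ten tails in a row) for independent fair flips
        trials = 200_000
        hits = sum(all(random.random() < 0.5 for _ in range(10)) for _ in range(trials))
        print(analytic, hits / trials)  # both come out near 0.001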


  28. Philip, not according to the definition in the Wiki entry, which talks about translations and compiling, not about different algorithms. Not the same thing, as I understand it.


  29. Hi Marko,

    I will argue that any structure theory must always be incomplete — rendering reductionism as an always-partial endeavour at best.

    Yes, I agree, but the same is true about science overall. We will never have complete knowledge, so this isn’t a flaw of reductionism so much as a feature of science itself.

    When an effective theory is used to find an error in the structure theory, then I’m not so sure what to call it. It is certainly useful for finding errors, but it is somehow weird for “failure to reduce” to be called the strength of reductionism.

    It seems to me that it is! The doctrine of reductionism (by which I mean supervenience physicalism) requires that the low-level and the high-level account be entirely consistent. That requirement produces a powerful tool. If we have a “failure to reduce” then we know that something is wrong either with our high-level account or with our low-level account. So we have to keep tweaking one or both until it does reduce.

    With regard to epistemology this always goes both ways. Indeed, epistemologically we go from the “effective” theory to arrive at the “structure” theory. That’s because we don’t have direct access to the lowest microscopic level. What we obtain from CERN is macroscopic-level data — plots and other human-scale outputs. From those a whole set of theories connects us to the microscopic particle-physics level.

    Thus, ontologically the lowest-level particles are “fundamental” and all else supervenes on that. Yet epistemologically the starting point is human sense data, from which we deduce all else. Reductionism (by which, again, I mean supervenience physicalism) is then a powerful tool for that purpose, so yes I would say that this is indeed a strength of reductionism.

    It is plausible that this could be proved, provided that one introduces your requirement (2) into some suitable structure theory. But note that (2) is nowhere to be found in any known fundamental theory, with the weird exception of the collapse postulate in QM (which happens to be an even bigger problem than the arrow of time itself).

    I am presuming that the randomising element comes from quantum mechanics. This discussion could get into the different interpretations of QM, but any QM variant that reproduces what we observe involves probabilistic behaviour. That’s all that is necessary for the 2nd law, which says (essentially) that it is more probable that systems evolve into more probable states. That is sufficiently tautological that it is entailed whenever we introduce a probabilistic element. (As, e.g., in this simulation that I pointed to, which uses a “rand” function.)
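
    A minimal sketch of that near-tautology (the Ehrenfest urn model in Python, not the simulation I linked): N balls, two urns, and at every step one ball picked at random is moved to the other urn. Start with every ball in urn A and the count simply relaxes towards the most probable value, N/2.

        import random

        random.seed(1)
        N, steps = 100, 2000
        in_A = N                          # the "low entropy" start: all balls in urn A

        for t in range(steps + 1):
            if t % 400 == 0:
                print(f"step {t:4d}: balls in urn A = {in_A}")
            # a uniformly random ball is currently in urn A with probability in_A / N
            if random.randrange(N) < in_A:
                in_A -= 1
            else:
                in_A += 1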

    But, we should note, a probabilistic behaviour is not encodeable in simple algebra (famously, computer code cannot produce a truly random “rand” function, only pseudo-random ones). Thus it follows that there will not always be simple algebraic links between one set-of-equations theory and another-level set-of-equations theory. This is one reason why I don’t advocate the stronger notions of reductionism, but only supervenience physicalism.

    Thus, to summarise: (1) I don’t see that any of your examples show flaws in supervenience physicalism and the weak emergence of higher-level phenomena; (2) I’m quite happy if no stronger notions of “reductionism” are tenable — (1) is entirely sufficient as a powerful tool of science.


  30. Hi Coel,

    I would just like to note that I think randomness is a red herring. Even deterministic time-symmetric structural theories can lead to time-asymmetric results such as the second law of thermodynamics. If you simulated the bouncing around of molecules of a gas in a simple deterministic 2D kinetic model, you would see results consistent with the second law of thermodynamics even without any kind of randomness at all.

    Have a look at this as an interactive example.

    http://esminfo.prenhall.com/science/BiologyArchive/lectureanimations/closerlook/diffusion.html

    As far as I can see, it’s perfectly deterministic. It starts in a low entropy state, with yellow and blue particles being separated, yet before long it reaches a high entropy state, with them being mixed together. No randomness required.
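
    The same behaviour is easy to reproduce in a fully deterministic toy model (a rough Python sketch, not the linked animation; the particles do not even interact, they just bounce off the walls of a 1D box with fixed speeds):

        L, N, dt, steps = 1.0, 200, 0.01, 2000

        # the "yellow" particles start in the left half, with an assorted spread of speeds
        pos = [0.5 * L * (i + 0.5) / N for i in range(N)]
        vel = [0.3 + 0.7 * i / N for i in range(N)]

        for t in range(steps + 1):
            if t % 500 == 0:
                left = sum(1 for x in pos if x < 0.5 * L)
                print(f"step {t:5d}: fraction in left half = {left / N:.2f}")
            for i in range(N):
                pos[i] += vel[i] * dt
                if pos[i] > L:              # reflect off the right wall
                    pos[i], vel[i] = 2 * L - pos[i], -vel[i]
                elif pos[i] < 0:            # reflect off the left wall
                    pos[i], vel[i] = -pos[i], -vel[i]

    The printed fraction starts at 1.00 and settles around 0.5, with no random numbers anywhere.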


  31. “But note that (2) is nowhere to be found in any known fundamental theory, with the weird exception of the collapse postulate in QM (which happens to be an even bigger problem than the arrow of time itself).”

    As far as I know statistical mechanics, one doesn’t need indeterminism or “random behaviour” to reduce the second law to a time-symmetric structure theory. The underlying theory can be perfectly deterministic (and time-symmetric). The only thing you need is a specific boundary condition, in other words: the system has to start in a state of low entropy.


  32. Patrick, my guess is that Marko will see (rightly, I think) any talk of introducing boundary conditions as simply restating, not solving, the problem, because those boundary conditions are nowhere to be found in the underlying theory.


  33. No, I don’t think so. What you are talking about is semantic equivalence. Reduction is about epistemic efficiency, and possibly ontological equivalence. I doubt they are the same thing.


  34. Massimo, I think Patrick is perfectly right, depending on what you think the important question is.

    If the question is whether the second law of thermodynamics can be reduced (according to Marko’s definition) to the underlying time-symmetric theory, then I am willing to grant Marko’s point that it cannot, and I think Coel would too. As you say, those boundary conditions are nowhere to be found in the underlying theory.

    If the question is whether the second law of thermodynamics can be derived from or explained by the underlying time-symmetric theory, then I think it can. We can introduce the concept of boundary conditions in order to describe macroscopic states of the underlying structure, and furthermore this does not require empirical observation but can be done from the armchair. One can predict such high level results by analysis of the low-level structure without the kind of strong one-to-one reduction Marko is looking for. Indeed, the second law of thermodynamics is a particularly good example precisely because it is so easy to derive. As pointed out already, all we need do is model the structural laws (e.g. in a simulation) and we see that the predictions of thermodynamics are an inevitable and necessary consequence.

    Whether these kinds of explanations of effective theories can be derived from structural theories is to my mind the more interesting question, and it is wrong to say that explanation in terms of the structural theory is impossible just because we can’t come up with bridge laws and the like.

    It strikes me that the idea of strong reductionism as presented here and elsewhere (Fodor etc) arises out of failed attempts to articulate what it is we are doing when we arrive at these kinds of explanation. Such accounts of reductive explanation when expressed in terms of bridge laws or one-to-one correspondences do indeed describe an endeavour which is often impossible, and so the anti-reductionists are correct to point out these flaws.

    But I think they go too far when they reject reductionism as a whole. I think a better approach is to return to the drawing board and try to provide a better account of reductionism, which I take to be the view that all high level theories must be explainable and in some sense derivable (in principle if not in practice) from low-level theories. This is what I take most scientists to mean when they make claims about reducibility, even though this may not accord with how philosophers have come to interpret the term.


  35. What is the main idea of reductionism? That everything can be deduced from a finite number of axioms. Just there, it’s known it cannot be done, per the Incompleteness Theorems of logic. Those say that arbitrary decisions have to be taken, at some point about whether some axiom is true or not.

    But let’s ignore Mathematical Logic for a moment. Let’s suppose, by abuse of thinking, that there are axioms from which all of physics can be deduced. What would those be?

    Before unreason took over physics, one of the major principles was energy conservation. However, the would-be reductionist Dr. Marko, following today’s fashion, does away with this:

    “…Sean Carroll … piece on nonconservation of energy is something that is really well written, and I recommend it:
    Energy is not conserved.”
    Well written, indeed. Carroll glibly asserts that “see, it was not so hard” (to throw away the most fundamental principle of physics, energy conservation).

    That energy is NOT conserved is essential to enable the creation of universes at the drop of a hat. Nothing is really true anymore, even energy is not conserved, as it costs nothing to create a universe.

    So take two galaxy clusters, G1 and G2. Suppose they separate due to the expansion of the universe. Carroll, following the Multiverse fashion, asserts that it cost no energy to separate said galaxies. Then he has a photon P travelling from G1 to G2, and he sees it has lost energy, so energy is not conserved. Multiversists repeat this argument ad nauseam.

    In truth, what they stumbled upon is that the definition of mass-energy in the Theory of Gravitation is not clear. That’s all. The difficulty has been known for generations. However, it does not mean that physics reduces to dust.

    It just means one has to go back to Riemann’s intuition of the 1860s, and reconsider it carefully. Riemann tried to reduce force to geodesic separation. I would suggest reducing energy to a function related to geodesic density. As geodesics separate, energy is put into the system. With this notion, the fact that it costs nothing to create a universe disappears. https://patriceayme.wordpress.com/2013/08/08/quantum-trumps-spacetime/

    Physicists can’t reduce the universe just to physics, physics has to reduce to mathematics, too, at least in part.

    Amusingly, one may wonder what the Multiversists reduce physics to. Apparently, having done away with energy conservation, a fundamental axiom, they replace it by universe creation. They reduce all of physics to the creation of universes.

    Dark Energy, the accelerated expansion of the universe, questions the entire scheme of present day cosmology. Starting the conversation (logos) by throwing out the most sacred principle of physics (energy conservation), and replacing it with instant karma is as glib as glib gets.

    Instant karma? Thanks to the alleged non-conservation of energy, the creation of trillions of universes per second per cubic meter is eminently reasonable.

    Does that make Middle Ages theology, with angels sitting on pinheads, a plausible outcome? This is reductio ad absurdum, if I ever saw it.


  36. DM, forgive me, but you have a tendency to change questions so that you can fit your preferred answers. The question is the first one, not the second one, so Patrick’s solution doesn’t apply and Marko is still on target. Cheers!


  37. Patrice, what on earth makes you think that Marko is a “would-be reductionist”? His entire essay (and part II, out tomorrow morning) clearly argues the opposite.


  38. “Patrick, my guess is that Marko will see (rightly, I think) any talk of introducing boundary conditions as simply restating, not solving, the problem, because those boundary conditions are nowhere to be found in the underlying theory”

    I don’t really get this, at least not in the context of the second law. If you don’t start in a situation of low entropy but, say, with a system that’s in equilibrium then the entropy is not going to increase. The entropy is already maximal. The boundary condition is necessary to be able to speak about increasing entropy.

    But neither do I understand this in a broader context. If one refuses to take boundary conditions into consideration, then one can reduce nothing at all. Even in classical mechanics you need boundary conditions to describe the movement of one point particle in a field-free environment.

    Is Marko genuinely saying that because we need boundary conditions we can’t reduce the movement of this particle to … well, the movement of this particle?


  39. No, Marko is saying, I think, that because of the necessity of empirically determined boundary conditions we cannot reduce the theory of thermodynamics to the theory of particle kinetics. Nobody is denying that temperature is the result of particles’ movements.


  40. Hi DM (and Patrick),

    I would just like to note that I think randomness is a red herring.

    On this particular point I’m siding with Marko and Massimo.

    The problem is that you have to specify the initial starting positions and velocities. In an entirely deterministic and time-symmetric system it would be possible to specify an initial state such that the system then totally violated the second law (to do that, all you’d need to do is take any system that had evolved to higher entropy, reverse every velocity of the “final” state, and start from there). The point of invoking non-deterministic probability is that the above would no longer work; you then get second-law behaviour regardless of your starting state.
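
    Here is a bare-bones illustration of that reversal trick (a Python sketch; randomness is only used to pick the initial cluster, the dynamics itself is deterministic free flight with no walls):

        import math, random

        random.seed(0)
        N, T, dt = 400, 200, 0.05
        pos = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(N)]   # tight cluster
        vel = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

        def coarse_entropy(points, cell=1.0):
            """Shannon entropy of the occupation numbers of coarse-grained cells."""
            counts = {}
            for x, y in points:
                key = (math.floor(x / cell), math.floor(y / cell))
                counts[key] = counts.get(key, 0) + 1
            return -sum((n / len(points)) * math.log(n / len(points))
                        for n in counts.values())

        def evolve(steps):
            for _ in range(steps):
                for p, v in zip(pos, vel):
                    p[0] += v[0] * dt
                    p[1] += v[1] * dt

        print("start          :", round(coarse_entropy(pos), 3))
        evolve(T)
        print("after spreading:", round(coarse_entropy(pos), 3))   # entropy has gone up
        vel = [[-vx, -vy] for vx, vy in vel]                        # reverse every velocity
        evolve(T)
        print("after reversal :", round(coarse_entropy(pos), 3))   # back down near zero

    Started from that reversed “final” state, the very same deterministic laws drive the coarse-grained entropy down; to rule such states out you have to say something about which starting states are probable.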

    Have a look at this as an interactive example. As far as I can see, it’s perfectly deterministic.

    In that case I could just “halt” it, then reverse every velocity, and it would then violate the second law by returning to its previous lower-entropy state (and I bet the programmer has a “rand” function in there somewhere).

    If you start […] say, with a system that’s in equilibrium then the entropy is not going to increase.

    But it could decrease (in an entirely deterministic system)! All you’d need to do is pick a very particular set of starting conditions, being the reverse of the motion into that state, and your system would violate the second law.

    There are two possible replies to that. First, you could say, ok, true, but that set of starting positions is vanishingly improbable — but that rebuttal means you have to introduce probability. Or you just introduce some element of probabilistic randomising into the low-level description.

    If you do the latter then, whatever the starting points, the system will then maximise entropy. That’s because maximising entropy is simply heading for the most probable configuration, regardless of starting point, and if you introduce some non-deterministic dice-throwing then that is where you will (overwhelmingly likely) end up.

    Note, by the way, that we already know that our best theory for the low-level particles is probabilistic (namely QM, and yes, you do need to put probabilistic behaviour in there somewhere, even with the esoteric interpretations of it; even if you invoke many-worlds you still have to obey the Born rule for what you observe), so appealing to probability is no issue.


  41. Suppose you take that animation and do a 3D plot of it (2D + time) for a sufficient number of iterations. You will see it mostly at an even distribution; very infrequently it will reach a state like the one where it started, or a state with all the circles lined up in a corner, following which there will be a period of increasing entropy until it reaches that even distribution again.

    That would be a toy illustration of his second option, the suggestion that we might just happen to be in that period following one of those points where entropy reaches a minimum.


  42. But it could decrease (in an entirely deterministic system)! All you’d need to do is pick a very particular set of starting conditions, being the reverse of the motion into that state, and your system would violate the second law.

    Reverse it and keep going – it will violate the second law for a while and then start acting according to the second law again. If we are viewing that universe from the point after it has reached the minimum then the second law holds.

    That is how I understand DM’s second suggestion – we might be viewing a symmetrical process from one side of a local minimum.

    There are two possible replies to that. First, you could say, ok, true, but that set of starting positions is vanishingly improbable — but that rebuttal means you have to introduce probability.

    But “vanishingly improbable” in this context just means that these local minima are very infrequent rather than being about any actual indeterminism.

    Again, think of the 3D plot of that animation through many iterations. If you could see a long enough section of them you would see many of these minima at random (in the sense of distribution rather than indeterminism) intervals with decreasing entropy before and increasing entropy after.

    If the universe was like this then observers would arise close to these minima, on one or the other side, and so there would be a 50% chance that we would be in an increasing entropy world rather than a decreasing entropy world, even though we can’t imagine what an observer in a decreasing entropy world would be like.

    Seems, on the face of it, to be a perfectly respectable conjecture.

