*Introduction*

Every now and then, the question of reductionism is raised in philosophy of science: whether or not various sciences can be theoretically reduced to lower-level sciences. The answer to this question can have far-reaching consequences for our understanding of science both as a human activity and as our vehicle to gain knowledge about reality. From both ontological and epistemological perspectives, the crucial question is: are all real-world phenomena that we can observe “ultimately explainable” in terms of fundamental physics? What one typically imagines is something like a tower of science, where each high-level discipline can be reduced to a lower-level one: economics to sociology to psychology to neurology to biology to biochemistry to chemistry to molecular physics to fundamental physics. Is such a chain of reductions possible, or desirable, or necessary, or important, or obvious, or tautological, or implicit in our very concept of science?

The opposing ideas of reductionism and emergence lie at the core of these questions. The first thing to do, then, is to clear up what is actually meant by the ideas of reductionism and emergence in science. Given that fundamental physics is usually located at the bottom of any proposed chain of reduction, it occupies a privileged position — not only for being the “most fundamental,” but because its mathematical rigor can be employed to make the meaning of reductionism and emergence clearer. The purpose of this article is to shed some light on these matters from the perspective of theoretical physics, hopefully answering some of the above questions. The essay is split into two parts [1]: the first one mainly deals with epistemological reductionism, while the second one tackles ontological reductionism.

*Preliminaries*

Let me start with some definitions. One of the crucial concepts for reductionism is that of “theory,” as reductionism will be understood here as a particular relation between two theories. For the purpose of this article, I will define the notion of a theory in a rather loose descriptive fashion — as a set of mathematical equations over certain quantities, whose solutions are in quantitative agreement with the experimentally observable phenomena which the theory aims at describing, to some given degree of precision, in some specified domain of applicability. This is a reasonable, generic description of the kind of theories that we typically deal with in physics.

There are several important points to note about this definition. First, if the solutions of a theory are not in quantitative agreement with experiment, the theory is considered wrong and should be either discarded or modified so that it does fit the experiment. Second, the requirements of mathematically rigorous formulation and quantitative (as opposed to qualitative) agreement with experimental data might appear too restrictive — indeed, our definition rules out everything but physics and certain parts of chemistry and biology. For example, the theory of evolution is not really a theory according to such a definition (although its population genetics rendition is). Nevertheless, there is a very important reason for both of these requirements, which will be one of the main points of this essay, discussed in the next section. Finally, the phrase “a set of mathematical equations” is a loose description with a number of underlying assumptions. I will mostly appeal to a reader’s intuition regarding this, although I will provide a few comments on the axiomatic structure of a theory in part II of the essay.

In order to introduce reductionism, let us consider two scientific theories, and a relation of “being reducible to” between those theories. In order to simplify the terminology, the “high-level” theory will be called the *effective theory*, while the “low-level” one will be called the *structure theory*. These names stem from the general notion that every physical system is constructed out of some “pieces” — so while the effective theory describes the laws governing some system, the structure theory describes the laws governing only one “piece” of the system. Of course, if each such piece can be divided into even smaller pieces, the structure theory can in turn be viewed as effective, and it may have its own corresponding structure theory, thus establishing a chain of theories, based on the size and type of phenomena that they describe. This chain always has a bottom — a theory which does not have a corresponding structure theory, to the best of our current knowledge. I will call that theory *fundamental*. Note that this definition of a fundamental theory is, obviously, epistemological [2].

It is important to point out one particular relationship between an effective theory and some corresponding structure theory: given that the physical system (described by the effective theory) consists of pieces (each of which is described by the structure theory), it follows that the domain of applicability of the effective theory is a subset of the domain of the structure theory. That is, as much as the effective theory can be applied to the system, the structure theory can also be applied to the same system — simply by applying it to every piece in turn (of course, taking into account the interactions among the pieces). Put otherwise, the domain of applicability of a structure theory is usually a superset of the domain of applicability of the effective theory. Thus, the structure theory is said to be *more general* than the effective theory.

Finally, we are ready to define the relation of “being reducible to” between the effective and the structure theory. The effective theory is said to be reducible to the structure theory if one can prove that all solutions of the effective theory are also approximate solutions of the structure theory, in a certain consistent sense of the approximation, and given a vocabulary that translates all quantities of the effective theory into quantities of the structure theory.
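This definition can be written schematically. Denoting by V the vocabulary map and by ε the small parameter(s) of the approximation (symbols introduced here purely for illustration, not drawn from the text above):

```latex
% For every solution \phi of the effective theory, there must exist
% a solution \Phi of the structure theory such that
\Phi \;=\; V[\phi] \;+\; O(\varepsilon) , \qquad \varepsilon \to 0 ,
% with one and the same vocabulary V and one and the same set of
% parameters \varepsilon for all solutions \phi.
```

The requirement that V and ε be the same across all solutions is what makes the reduction a consistent explanation, rather than a case-by-case re-description.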

The procedure for establishing reductionism, then, goes as follows. First, the effective and structure theories are often expressed in terms of conceptually different quantities (i.e., variables). Therefore one needs to establish a consistent vocabulary that translates every variable of the effective theory into some combination of structure theory variables, in order to be able to compare the two theories. As an example, think “temperature” in thermodynamics (effective theory) versus “average kinetic energy” in kinetic theory of gases (structure theory). Second, once the vocabulary has been established, one needs to specify certain parameters in the structure theory so that a particular solution of the latter can be expanded into an asymptotic series over those parameters. If the effective theory is to be reducible to the structure theory, such parameters must exist, and they often do — they typically are the ratios between the quantities of the large system and the quantities of each of its pieces. Finally, once the asymptotic parameters have been identified and the solution of the structure theory expanded into the corresponding series, the dominant term in this series must coincide with the solution of the effective theory, and so on for all quantities and all possible solutions of the effective theory, always using the same vocabulary and the same set of asymptotic parameters [3].
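The thermodynamics-to-kinetic-theory vocabulary mentioned above can be illustrated with a small numerical sketch (the gas parameters and sample size below are my own assumptions, chosen only for illustration): sampling molecular velocities from a Maxwell–Boltzmann distribution and translating the average kinetic energy back via T = (2 / 3 k_B) ⟨E_kin⟩ recovers the thermodynamic temperature.

```python
import numpy as np

# Toy "vocabulary" check: thermodynamic temperature (effective theory)
# versus average kinetic energy per molecule (kinetic theory).
# All numbers below are illustrative assumptions, not a real simulation.

k_B = 1.380649e-23   # Boltzmann constant, J/K
T_target = 300.0     # temperature we sample at, K
N = 200_000          # number of molecules
m = 4.65e-26         # approximate mass of an N2 molecule, kg

rng = np.random.default_rng(0)
# Maxwell-Boltzmann: each velocity component is Gaussian with
# variance k_B * T / m.
sigma = np.sqrt(k_B * T_target / m)
v = rng.normal(0.0, sigma, size=(N, 3))

# Vocabulary: T = (2 / 3 k_B) * <kinetic energy per molecule>
mean_ke = (0.5 * m * np.sum(v**2, axis=1)).mean()
T_reconstructed = 2.0 / (3.0 * k_B) * mean_ke

print(T_reconstructed)  # close to 300 K, up to sampling noise
```

The translated quantity agrees with the effective-theory temperature up to statistical fluctuations, which shrink as the number of "pieces" grows — the same asymptotic logic described in the procedure above.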

If the above procedure is successful, one says that the effective theory is reducible to the structure theory, that phenomena described by the effective theory are explained (as opposed to being re-described) by the structure theory, and that these phenomena are weakly emergent from the structure theory. Conversely, if the above procedure fails for some subset of solutions, one says that the effective theory is not reducible to the structure theory, that phenomena described by those solutions are not explainable by the structure theory, and that these phenomena are strongly emergent with respect to the structure theory. In the next section I will provide examples of both situations.

*Examples*

Arguably the most well-known example of reductionism is the reduction of fluid mechanics to Newtonian mechanics [4]. As an effective theory, fluid mechanics is a nonrelativistic field theory, whose basic variables are the mass density and the velocity fields of the fluid, along with the pressure and stress fields that act on the fluid. The equations that define the theory are a set of partial differential equations that involve all those fields. As a structure theory, Newtonian mechanics deals with positions and momenta of a set of particles, along with forces that act on each of them. The reduction of fluid mechanics to Newtonian mechanics then follows the procedure outlined in the previous section. We consider the fluid as a collection of a large number of “pieces” where each piece consists of some number of molecules of the fluid, contained in some “elementary” volume. We establish a vocabulary roughly as follows: the mass density field is the ratio of the mass and the volume of each piece, at the position of that piece, in the limit where the volume of the piece is much smaller than the typical scale of the motion of the fluid. The ratio of the two sizes is a small parameter, convenient for the asymptotic expansion. Also, the velocity field is the velocity of each piece of the fluid, at the position of that piece, in the same limit. Similarly, the pressure and stresses are described in terms of average forces acting on every particular piece of the fluid. Finally, we apply the Newtonian laws of mechanics for each piece, and expand them into an asymptotic series in the small parameter. The dominant terms of this expansion can be cast into the form of partial differential equations for the fluid, which can then be compared to the equations of fluid mechanics. It turns out that the two sets of equations are equivalent, which means that fluid mechanics (effective theory) is reducible to Newtonian mechanics (structure theory).
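To give a flavor of the end result, the dominant terms of such an expansion can be cast as the familiar continuity and Euler equations (written here for an ideal fluid, as an illustration of the outcome rather than the derivation itself):

```latex
% Conservation of the mass of each piece:
\partial_t \rho + \nabla \cdot ( \rho\, \vec{v} ) = 0 ,
% Newton's second law applied to each piece
% (\vec{f} denotes the external force density):
\rho \left( \partial_t \vec{v} + ( \vec{v} \cdot \nabla )\, \vec{v} \right)
  = - \nabla p + \vec{f} .
```

Viscous fluids acquire additional stress terms on the right-hand side, but the logic of the expansion is the same.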

In this sense, the motion of a fluid is described by fluid mechanics, explained by Newtonian mechanics, and all properties of a fluid are weakly emergent from the laws of Newtonian mechanics. Of course, all this works only for phenomena for which the approximation scheme holds.

There are many other similar examples, such as the reduction of the first law of thermodynamics to statistical mechanics, reduction of Maxwell electrodynamics to the Standard Model of elementary particles, reduction of quantum mechanics to quantum field theory, or reduction of Newton’s law of gravity to general relativity. The essence here is that in each case the former can be reconstructed as a specific approximation of the latter.

In contrast to the above, the situations in which reductionism fails are much more interesting. In fact, these are nothing less than spectacular, since they often point to new discoveries in science. For the purpose of this article, I will focus on three pedagogical examples, called the *dark matter problem*, the *Solar neutrino problem* and the *arrow of time problem*. Each of these examples illustrates a different way in which reductionism can (and does) fail. Of course, other examples can be found as well, but the analysis of these three should be sufficient for subsequent discussion.

The first example is the failure to reduce the Standard Model of cosmology [5] (SMC) as the effective theory, to the Standard Model of elementary particles [6] (SMEP) as the corresponding structure theory. Aside from the fact that SMEP does not describe any gravitational phenomena that SMC contains, SMC describes the presence of so-called dark matter, in addition to the usual matter. The presence of dark matter particles cannot be accounted for by any of the matter particles in SMEP. Therefore SMC cannot be reduced to SMEP already at the qualitative level. In order to make SMC reducible to some structure theory, SMEP needs to be modified (in a non-obvious way) in order to account for dark matter particles. In other words, the mere presence of dark matter in cosmology requires us to rewrite the fundamental laws of physics. Here SMEP is considered fundamental because we do not yet have any structure theory for SMEP [7].

According to the terminology defined in the previous section, then, in this example the evolution and properties of the Universe (at large scales) are described by SMC, are not explainable by SMEP, and the existence of dark matter is strongly emergent.

The second example of failure of reductionism is even more interesting. The effective theory that describes our Sun, sometimes called the Standard Solar Model (SSM) [8], also fails to reduce to SMEP. As far as we know, the Sun is composed of ordinary particles that SMEP successfully describes. So both SSM and SMEP can be used to describe the Sun, and qualitatively they in fact do agree. Moreover, they also agree quantitatively, except for a single factor of three in one of the observables: the fusion process in the core of the Sun generates an outgoing flux of neutrinos, some of which reach the Earth and are successfully measured; all else being equal, the measured flux of neutrinos is roughly three times smaller than the flux predicted by SMEP (given the Sun as described by SSM). At first, physicists looked at various ways to account for this discrepancy (essentially by checking and re-checking the error bars of everything involved in both SSM and SMEP), but the discrepancy persisted, and became known as the *Solar neutrino problem* [9]. Over time, it became increasingly obvious that the Solar neutrino problem is nontrivial, and eventually all mathematical possibilities to reduce SSM to SMEP were exhausted. This generated a whole lot more interest, and subsequent experiments finally showed that the neutrino sector of SMEP needs to be modified (again in a non-obvious way) in order to account for that factor of three. So again, a missing factor of three in one of the observables of one effective theory required us to rewrite the fundamental laws of physics.
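The averaging at work in neutrino oscillations can be sketched with a toy two-flavor vacuum model. This is purely illustrative: the real solar deficit involves three flavors and matter effects, and the parameter values below are rough assumptions rather than a fit.

```python
import numpy as np

# Two-flavor vacuum oscillations: survival probability of an electron
# neutrino after traveling a baseline L. Parameter values are rough
# assumptions for illustration only.
theta = np.deg2rad(33.0)   # mixing angle, roughly the solar value
dm2 = 7.4e-5               # mass-squared splitting, eV^2
E = 1e-3                   # neutrino energy, GeV (about 1 MeV)

# Standard oscillation phase: 1.267 * dm2[eV^2] * L[km] / E[GeV].
# Baselines spread around one astronomical unit (in km):
L = np.linspace(1.45e8, 1.50e8, 1_000_001)
phase = 1.267 * dm2 * L / E
P_survive = 1.0 - np.sin(2.0 * theta)**2 * np.sin(phase)**2

# The oscillations are extremely rapid over this range, so they
# average out, leaving 1 - sin^2(2*theta)/2:
P_avg = P_survive.mean()
print(P_avg)
```

In this toy model the averaged survival probability comes out around 0.58; recovering the observed factor of roughly three requires the full three-flavor treatment with matter (MSW) effects — which is exactly the kind of non-obvious modification of the neutrino sector mentioned above.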

According to the adopted terminology, in this example the properties of the Sun are described by SSM, are not explainable by SMEP, and the amount of neutrino flux is strongly emergent.

There is a very important difference between the above two examples that needs to be emphasized. While SMC is not reducible to SMEP already at the qualitative level, SSM and SMEP do agree qualitatively, but not quantitatively. There is an important lesson to be learned here: qualitative agreement between the effective and structure theory is not enough for reductionism. The consequences of this are rather grave, and I will discuss them in the next section.

Finally, the third example of failure of reductionism is what is popularly called the arrow of time problem. It is essentially equivalent to the statement that thermodynamics (as the effective theory) cannot be reduced to any time-symmetric structure theory, nor to SMEP. The second law of thermodynamics [10] implies that the entropy of an isolated system cannot decrease in time, which means that thermodynamics has a preferred time direction, and is not time-reversible. Moreover, this irreversibility is ubiquitous: every physical system with a large number of particles displays time-irreversible behavior. This property makes thermodynamics automatically non-reducible to any time-symmetric structure theory, due to something called Loschmidt’s paradox [11]. As for SMEP, its equations are not completely time-symmetric — technically (pardon the jargon), K-mesons violate CP symmetry, which implies that they also violate T symmetry, due to the exactness of the combined CPT symmetry. However, the amount of time-irreversibility in K-meson processes is extremely small, and nowhere near enough to quantitatively account for the irreversibility of thermodynamics. Moreover, the particles that we most often discuss in thermodynamics (protons, neutrons and electrons) are not the ones that violate time-symmetry in SMEP, so the incompatibility is actually qualitative. Finally, in order to reduce thermodynamics to a viable structure theory, we would need to rewrite the fundamental laws of physics, again in a completely non-obvious way.

As in previous examples, we say that the entropy increase law is described by thermodynamics, is not explainable by any time-symmetric structure theory nor SMEP, and that the consequent “arrow of time” is strongly emergent.

The lesson to be learned from this example is that “complexity” can be a source of strongly emergent phenomena. Despite the fact that every particle in a gas can be described by, say, Newtonian mechanics, the gas as a collective displays behavior that is special, and explicitly not a consequence of the laws of Newtonian mechanics. And going further down to SMEP does not help either. The complexity can be manifested via a large number of particles, or because of strong/nonlinear/nonlocal interactions between them. As the level of complexity of a physical system increases, “more” becomes “qualitatively different” and stops being more of the same, so to speak.
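This qualitative jump can be made vivid with a toy model (entirely my own construction, with arbitrary parameters): particles in a box, each obeying simple reversible motion, collectively drive a coarse-grained entropy upward when started from a low-entropy configuration.

```python
import numpy as np

# Toy "gas": free particles in a unit box with reflecting walls.
# Each particle follows reversible dynamics, yet the coarse-grained
# (histogram) entropy of the collective increases. Arbitrary
# illustrative parameters throughout.
rng = np.random.default_rng(1)
N, nbins = 100_000, 20
x = rng.uniform(0.0, 0.1, N)   # start: everyone in the left tenth
v = rng.normal(0.0, 1.0, N)    # random velocities

def coarse_entropy(x):
    """Shannon entropy of the coarse-grained position distribution."""
    counts, _ = np.histogram(x, bins=nbins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def evolve(x, v, dt=0.05, steps=200):
    """Free streaming with reflecting walls at 0 and 1."""
    x = x + v * dt * steps
    x = np.abs(x) % 2.0                # fold the trajectory back
    return np.where(x > 1.0, 2.0 - x, x)

S0 = coarse_entropy(x)               # about log(2): two bins occupied
S1 = coarse_entropy(evolve(x, v))    # close to log(20): well mixed
print(S0, S1)
```

The entropy climbs toward its maximum, log(nbins), even though reversing every velocity would, under the same reversible rules, march it back down — which is precisely Loschmidt’s observation.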

*Analysis*

The examples discussed in the previous section guide us towards a set of criteria one must meet in order to establish reductionism between two theories. In particular, one must have:

(a) a well-defined quantitative effective theory,

(b) a well-defined quantitative structure theory, and

(c) a rigorous mathematical proof of quantitative reducibility.

If any of these desiderata is missing, one cannot sensibly talk of reductionism as defined at the beginning.

Perhaps the main point here is the necessity of quantitative formulation of a theory, and the example of Solar neutrinos is a sharp reminder that qualitative analysis may not be good enough. In order to stress this more vividly, let us consider a highly speculative thought experiment.

Imagine that we have managed to construct some hypothetical fundamental theory of elementary particles (one that is more fundamental and better than SMEP). Moreover, suppose that we have also managed to establish reductionism, in the above rigorous quantitative sense, of all physics, chemistry, etc. to this fundamental elementary particle theory. Reductionism all the way up to neurochemistry. Further, suppose that we have even constructed some effective quantitative theory of consciousness, that describes well all relevant observations. The natural idea is to reduce that effective theory of consciousness to the structure theory of neurochemistry, and consequently to our fundamental theory of elementary particles. Suppose that we attempt to do that, and find that neurochemistry, when applied to a human brain, qualitatively predicts all aspects of our effective theory of consciousness, but that there is a missing factor of two somewhere in the quantitative comparison. For example, suppose that the structure theory predicts a certain minimum total number of synapses in a brain in order for it to manifest consciousness. However, the effective theory tells us that consciousness can appear with half as many synapses. All else being equal (and rigorous), this single observation of the number of synapses in a conscious brain would falsify our theory of elementary particle physics!

As ludicrous as this scenario might seem to be, there are no a priori guarantees that it will not actually happen. In fact, a similar scenario has already happened, in the case of the Solar neutrino problem. It should then be clear that nothing short of rigorous quantitative agreement between two theories could ever be enough to establish reductionism.

There are many places in physics (let alone chemistry and other sciences) where this quantitative agreement has not yet been established, some of those places being pretty fundamental. For example, the mass of the proton has not yet been calculated *ab initio* from SMEP [12]. Now imagine that someone gets a flash of inspiration and finds a way to calculate it. What will happen if this value turns out to be two times as big as the experimentally measured proton mass? It would mean that the periodic table of elements, the whole of chemistry, and numerous other things are not reducible to SMEP. While nobody in the physics community believes that this is likely to happen, the actual proof is still missing. Thus, strongly emergent phenomena may lurk literally anywhere. Another example of a phenomenon that resists attempts at reductionism is high-temperature superconductivity [13]. The jury is still out, but it might yet turn out to be a strongly emergent phenomenon, due to the complexity of the physical systems under consideration, in analogy to the strong emergence of the arrow of time.

Regarding the analysis of the examples of the previous section, one more point needs to be raised. From the epistemological point of view, whenever we are faced with the failure of reductionism, we can try to modify the structure theory in order to make the effective theory reducible. This approach is reminiscent of the idea of parsimony — do not assume any additional fundamental laws until you are forced to introduce them. However, the three examples above of reductionist failure are a sharp reminder of the level of rigor necessary to claim that we are not forced to introduce a new fundamental law when faced with a complicated phenomenon. This means that we must be wary of applying parsimony too liberally, and it raises the question of where the burden of proof actually lies: is it on the person claiming that an emergent phenomenon is strongly emergent, or on the person claiming it is merely weakly emergent? All discussion so far points to the latter, in spite of the “parsimonious intuition” common to many scientists and philosophers. To this end, I often like to point to the following illustrative quote [14]:

“When playing with Occam’s razor, one must make sure not to get cut.”

My conclusion regarding the burden of proof seems obvious from the examples discussed so far. Nevertheless, it is certainly instructive to discuss it from a more formal point of view, detailing the axiomatic structure of theories and the logic of establishing reductionism. Part II of this essay is devoted to this analysis, as well as to the issue of ontological reductionism.

_____

Marko Vojinovic holds a PhD in theoretical physics (with a dissertation on general relativity) from the University of Belgrade, Serbia. He is currently a postdoc with the Group of Mathematical Physics at the University of Lisbon, Portugal, though his home institution is the Institute of Physics at the University of Belgrade, where he is a member of the Group for Gravitation, Particles and Fields.

[1] The article is split into two parts mainly due to size constraints and to facilitate overall readability. However, the two parts should be considered an organic unit, since the arguments given in one are fundamentally intertwined with the arguments given in the other.

[2] The definition of a fundamental theory is epistemological since we may yet discover that the most elementary “pieces” we currently know of can be described in terms of even smaller entities, and thus give rise to another structure theory. From the ontological perspective, the existence of a fundamental theory is dubious, since there is always a logical possibility that the “most elementary” particles do not exist. There are also other issues regarding an ontologically fundamental theory, and I will discuss some of them in part II of the essay.

[3] Any typical effective theory has infinitely many solutions, and we cannot efficiently establish reductionism by comparing solutions one by one. Instead, this is done in practice by comparing the actual defining equations of two theories. Namely, one uses the equations of the structure theory, the vocabulary and the set of asymptotic parameters to “derive” all equations of the effective theory through a single consistent approximation procedure. This ensures that all solutions of the effective equations are simultaneously the approximate solutions of the structure equations, in the given approximation regime.

[4] It is usually the first example of reductionism that an undergraduate student in physics gets to learn about in a typical university course.

[5] The Standard Model of Cosmology is commonly called the Lambda-CDM model.

[6] The Standard Model of elementary particles.

[7] There are many speculative proposals for such a structure theory, but so far none of them can be considered experimentally successful.

[8] Standard solar model.

[9] The Solar neutrino problem.

[10] Second law of thermodynamics.

[11] Loschmidt’s paradox.

[12] And before someone starts to complain — no, really, it has not been calculated, despite what you might read in the popular literature about it. If you dig deep enough into the actual research papers, you will find that the only thing that was established in the numerical simulations of lattice QCD is the ratio of the proton mass to the masses of other hadrons. These ratios are in good agreement with experimental data, but the masses are all determined up to an overall unknown multiplicative constant, which cancels when one calculates the mass ratios. And this constant has not yet been calculated from the theory. For further information, read about the “mass gap in Yang–Mills theories,” one of the Clay Institute Millennium Problems.

[13] High-temperature superconductivity.

[14] I always fail to find an appropriate reference for this statement. If anyone has any, please let me know!

Hi

Marko, this was an interesting essay and I am curious to see how you expand your ideas to address ontological issues. I definitely have issues with reductionism, though tend to stick to epistemological analyses. I would say that my gut reaction to your first two physics examples was similar to DM’s. It seemed to be a leap to say that since our current theory X cannot account (qualitatively or quantitatively) for Z, that Z is a strongly emergent property. The theory/model may just not be correct. I saw your reply that we never have complete models, but I’m not confident that justifies the conclusion of strong emergence. Being able to discount X as the right reductionist model does not empower Z as having a distinct character only acting/describable at that level.

I can’t say anything about the third (arrow of time), because while I think I grasp the issue my brain doesn’t seem able to hold onto it long enough to draw a conclusion. Ah, theoretical physics.

Your brain example was not far off from something I discussed on the earlier emergence thread at SS. In this case the modelers on the Blue Brain Project really did hit a wall, where some specific neural activity could be explained by multiple arrangements according to physical laws. There was no way to discriminate which mechanism should be correct (or actually is correct). In short, brain/cellular activity was resisting modeling from the next level down.

And so they (or at least the modeler talking to us) felt that there were rules of behavior regarding arrangement of neurons and synaptic activity that were being generated at that level and had to be described at that level. So you could say cellular concerns, or perhaps proper brain function to fit a specific cognitive task concerns, trumped neurochemical/physical concerns and so constituted an emergent property. It was not as simple as the number of synapses, but the result was the same.




Patrice,

Thank you. Having tangled with the good Professor Carroll over at Cosmic Variance enough, I thought I’d just leave that alone.

As for Dark Energy, if one were to consider the possibility that redshift is due to an optical effect compounding on itself, this change in its rate would simply be the point it goes parabolic. The original premise was the rate would decrease steadily, but instead it appeared to drop off quickly and then flatten out. So some factor, other than the initial “bang” had to be invoked to explain this seeming background expansion of the space closer to us.

So if we look at it from the other direction, that it is due to a compounding optical effect, it would start slowly and then accelerate, until it reached the point at which the sources appear to recede at the speed of light. Now light from even further away would still reach us, but it would be shifted entirely off the visible spectrum and be black body radiation. Which is what is currently described as the cosmic background radiation. So basically the CMBR is the solution to Olbers’ paradox.

We accept that gravity is equivalent to acceleration, but the surface of this planet doesn’t seem to actually rush out in all directions, in order to keep us stuck to it, so maybe there is an optical effect, “equivalent” to recession, that is the source of redshift. Since it does serve to balance the effect of gravity, such that what expands between galaxies, is proportional to the collapse into them, it really would be Einstein’s cosmological constant.

Since this is my last post on this thread, I will add one more rant to the idea that “particles in motion” are at all fundamental. The most elemental state would seem to be the quantum foam of positive and negative charge fluctuating around the equilibrium state of absolute zero. It would be far more conceptually simple to consider this as wave action and the “virtual particles” as the crests and troughs of these waves. As such, their primary characteristics would be frequency and amplitude, ie. time and temperature. It is a long way from there to the Brownian motion that is the original basis of particles in motion as the basis of thermodynamics.


Brodix: Thanks. The condescending way with which Professor Carroll exposed the weird Multiversist logic (which he did not invent) riled me up. I want to thank Dr. Marko for providing the irritating, but instructive, link.

Non-conservation of energy is the crux of reducing physics to the Bible (also known as the Multiverse Faith; Pope told me today I better respect all and any faith, or I will get punched in the face from lack of faith, so I present my respects to those affected by Multiversism). Maybe we should call that the Multibible?

Massimo: Nothing on Earth, but everything in the Sky, or in the Sea, got me to think of Dr. Marko as a reductionist.

I was so busy being irritated by the claim that Fluid Dynamics reduced to Classical Mechanics (one of the most famous open problems), and by the claim that physics reduced to non-conservation of energy, that I failed to seriously consider that Dr. Marko thought he was arguing against reductionism.

Other physics examples also irritated me, like the Solar Neutrino problem (which has been fixed with oscillating, non-zero rest mass neutrinos; indeed, it’s an interesting case where the Standard Model had to back off, and be modified, after a confrontation with Sun physics).

The fact that present cosmology plus “General Relativity” leads to violating non-conservation of energy, is, in my opinion, one of the most splendid, REDUCTIO ad absurdum I ever heard of.

I often do not answer, because I am afraid to waste an entire comment on a two line reply (so maybe we should be allowed five lines replies…5 comments, 500 words, 5 replies of 5 lines…)


Marko,

That made my day, and I’m definitely looking forward to part II.

Some related thoughts that I’m not clear on.

“However, the three examples above of reductionist failure are a sharp reminder of the level of rigor necessary to claim that we are not forced to introduce a new fundamental law when faced with a complicated phenomenon.”

Similarly :

“Let me just say that ‘irreducible until proved reducible’ has an analogy to ‘innocent until proved guilty’”

If I’m following, a lot of things may appear reducible, but a closer and more rigorous inspection may show that they are not; and until they are shown to be clearly reducible, they should be referred to as strongly emergent, meaning that they haven’t so far been reduced (and may never be, due to our limitations, not due to their irreducibility)…

“but the effective theory could in principle impose additional constraints on the phenomena, which the structure theory does not impose (while remaining compatible with them). Those additional “laws” are called strongly emergent. But strong emergence does not deny the existence of a hierarchy of theories.”

Are you saying there are cases, independent of our abilities to reduce, of forever irreducible strongly emergent phenomena? If so, could you give an example? Or does “in principle” mean there is no reason to assume irreducible strong emergence does exist, or doesn’t exist, or both…

Along the same lines, do you mean that because models aren’t complete, weak emergence cannot be ubiquitous, and therefore we can assume irreducible strong emergence exists?


Folks, sorry for being a bit silent, that pesky real world interferes again with my online life… 🙂 Ok, on to the next batch of Q&A:

Brodix,

In short, no — unless you postulate the second law of thermodynamics (or something that implies it) as an additional piece of the kinetic theory of gases.

Philip,

Massimo is right, neither program translation nor transformation is the same as reductionism. If you are looking for the computer-science equivalent of reductionism, I’d say that program refinement comes close — reductionism could be phrased as the following question: given two algorithms, is one a refinement of the other? That said, I am not that versed in the terminology of computer science, so take this formulation of reductionism with a grain of salt. But what I don’t understand is why you are looking for a computer-science reformulation of reductionism in the first place?
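To make the refinement analogy concrete, here is a toy sketch (the function names and the square-root example are invented for illustration, not standard refinement-calculus notation): a specification fixes a set of allowed outputs, and a refined program makes one deterministic choice among them, so every behaviour of the program is permitted by the specification, but not conversely.

```python
import math

def spec_sqrt(n):
    """Abstract specification: any x with x*x close to n is allowed."""
    def allowed(x):
        return abs(x * x - n) < 1e-6
    return allowed

def refined_sqrt(n):
    """Refined program: one deterministic choice among the allowed outputs."""
    return math.sqrt(n)  # always picks the non-negative root

# Refinement check: every output of the refined program satisfies the
# specification. The converse fails: -2.0 also satisfies spec_sqrt(4.0),
# yet refined_sqrt(4.0) never produces it.
for n in [1.0, 2.0, 4.0, 9.0]:
    assert spec_sqrt(n)(refined_sqrt(n))
```

In this analogy, asking whether one theory reduces to another is like asking whether every behaviour the effective theory allows is also a behaviour the structure theory allows.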

DM,

Really? All features? I don’t see how you can prove that, without performing reduction in the way I described. Note that simulating the structure theory doesn’t cut it, since simulations can cover only a finite number of solutions, while you said *all* features of the effective theory. For this reason the reduction proof needs to be analytic, and for that you need a vocabulary (i.e. a direct mapping of the effective variables to structure variables).

Note that I have defined my terms — strong emergence is, by definition, the situation where reductionism does not hold. You seem to be ascribing some different meaning to the term “strong emergence”. For the purpose of the discussion of the article, I suggest that we stick to one terminology, since otherwise the readers could get confused. 🙂

Finally, I’ll address the second law of thermodynamics below, in a single answer for everyone.

SocraticGadfly,

Unification is a process of constructing a new fundamental theory, which hopefully has fewer free parameters than the theories which are “being unified” (if the number of free parameters remains the same, this is called “trivial unification”). Constructing a new unified fundamental theory is highly nontrivial, and it always involves ontological stuff. Thus it is out of the scope of the epistemological analysis being discussed here. The ontological issues (i.e. a potential “theory of everything”) will be addressed in part II of the article, so just be patient. 🙂

Aravis and Massimo,

It is indeed true that I am discussing reductionism only between theories which can be expressed in a rigorous mathematical way. In addition to that, I have argued that for other theories (i.e. those that we haven’t yet expressed using math) the issue of reductionism cannot be answered. Moreover, the burden of proof is on the claim that reductionism holds. So to answer your question — given that I don’t see social sciences being expressed using math any time soon, I’d say they do not reduce to physical sciences (epistemologically, to our best knowledge so far). Ontology is a different story, but my answer will be “no” in that case as well. Part II… 🙂

Miramaxime,

Of course we don’t. But that is again about ontology, not epistemology.

Because the process of “correcting” the fundamental theory (to keep reductionism working) does not converge. This will also become more obvious in part II of the article.

Coel,

First, I am not sure I understand what exactly you mean by the term “supervenience physicalism”. Consistency between structure and effective descriptions is a weaker requirement than what I have defined as reductionism. Somebody mentioned the Venn diagrams analogy — I take consistency to mean that two sets are compatible at the intersection, and reductionism to mean both consistency and the additional requirement that one set is a subset of the other.
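The Venn-diagram picture can be written out as a toy (the labels are invented; each “theory” is simply modeled as the set of solutions it allows):

```python
# Hypothetical solution sets over a shared domain of phenomena.
structure = {"s1", "s2", "s3", "s4"}   # solutions of the structure theory
effective = {"s2", "s3"}               # solutions of the effective theory
other     = {"s3", "s5"}               # a theory that is merely consistent

# Consistency: compatible at the intersection (no contradiction where
# both theories apply).
assert structure & other == {"s3"}

# Reductionism (the stronger requirement): every solution of the
# effective theory is also a solution of the structure theory.
assert effective <= structure          # subset: reduction holds
assert not (other <= structure)        # consistent, but not reduced
```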

Second, I never said that studying reductionism between theories is not useful. On the contrary! What I have said is that it is not automatically established, as some people seem to advocate.

Oh yes, and such discussions tend to get very ugly very fast. So let me just skip to the technical summary of all such discussions — the reduction of the second law to QM randomness is very sensitive to where exactly one cuts the von Neumann chain (i.e. which set of interactions is to be assumed irreversible). And QM does not provide any answer to where this cut should be made — this is the essence of the measurement problem in QM. So if you cut the chain at a convenient place, you can obtain the second law, but then someone is bound to ask “why cut there and not here?”, at which point you have to concede that by specifying the cutting point you are adding an additional axiom to QM, thereby changing the structure theory to accommodate the second law. So again no reductionism. As I noted before, many people have tried these scenarios in various ways, and failed.

But please, please, let’s not get into all that here, it’s too technical and not very illuminating for others. 🙂

And by the way, I completely agree that random and pseudorandom are very different things. 🙂

Patrick,

Massimo is right — any specific boundary condition is not derivable from the theory, nor preferred to other boundary conditions.

Dbholmes,

Thanks! 🙂

The point you make with brain behavior is precisely the answer to your question regarding that “[the] theory/model may just not be correct”. Namely, whenever you stumble upon a strongly emergent phenomenon, you can simply say that the fundamental theory is missing something. Whether it is just dark matter, or a “conscious mind”, or something else is just a matter of perspective. Dark matter seems trivial enough to just say that we failed to take it into account, while a “conscious mind” might seem complicated enough to say that it constitutes strong emergence — but there is no conceptual difference between the two. It’s just that we are not used to thinking of both from the same angle.

Everyone,

Regarding the issue of the second law of thermodynamics, boundary conditions, etc. — Coel is right about deterministic theories: there are boundary conditions that generate the second law, and there are boundary conditions that generate its opposite. If you think a gas in a box is too complicated or quantum effects are important, consider instead the example process of billiard balls being scattered from the initial “triangle” position, and the reverse process. That should be convincing enough that Newton’s mechanics can predict both the second law and its opposite.
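To make the reversal point concrete, here is a minimal sketch (free non-interacting particles instead of billiard balls; all numbers and variable names are invented for illustration). A clustered cloud of particles spreads out under the dynamics; reversing all velocities, an equally legitimate boundary condition, makes the very same dynamics shrink the spread back again.

```python
import random

def step(xs, vs, dt):
    # Free ballistic motion: each particle keeps its velocity.
    return [x + v * dt for x, v in zip(xs, vs)], vs

def spread(xs):
    # Variance of positions: a crude stand-in for "entropy".
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(0)
n = 100
xs = [random.uniform(-0.01, 0.01) for _ in range(n)]  # clustered start
vs = [random.uniform(-1.0, 1.0) for _ in range(n)]

s0 = spread(xs)
for _ in range(50):
    xs, vs = step(xs, vs, 0.1)
s1 = spread(xs)  # the cloud has spread out

vs = [-v for v in vs]  # reversed-momenta boundary condition
for _ in range(50):
    xs, vs = step(xs, vs, 0.1)
s2 = spread(xs)  # the cloud has re-collapsed to (nearly) its initial spread
```

Both runs use identical dynamics; only the boundary condition differs, which is why the dynamics alone cannot prefer the spreading run over the collapsing one.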

Let me just add one general pedagogical comment — physics always consists of two pieces: dynamics and boundary conditions. Dynamics is all about “restrictions” — laws that a physical system must obey. Boundary conditions are all about “freedom” — all configurations of a physical system that are possible. Given this setup, it is very important to understand that dynamics does not “predict” boundary conditions — quite the contrary, boundary conditions represent all the stuff that dynamics “fails to predict” (in a sense, there is a lot to “unpack” in these statements).

Given the above, it should be clear that preferring some particular boundary condition over the others counts as a “restriction”, i.e. a new dynamical law. It always goes over and above the previously existing laws, it is independent of them, while also compatible with them. Such an additional law is therefore always irreducible and strongly emergent (per my definition). In particular, any “explanation” of the second law of thermodynamics by appealing to any specific boundary condition cannot establish reductionism.

I hope this clears up that issue.


Well written, well argued, informative article so far.

I confess my own interest, like that of Aravis, lies in the problematic relationship between such issues in physics, and similar questions concerning psychology and the social. However, I don’t expect Marko to address that directly. Yet I suspect some others are thinking along the same lines as well.


Marko, your argument about the Second Law now boils down to saying that it can be stated in a way that makes it false. Your example is the reverse billiard triangle scatter. Because your statement of the Second Law has counterexamples, you deny that it is reducible to more fundamental physics.

This is like complaining that 1=2 cannot be proved from the math axioms.

If you state the Second Law correctly, so that it actually holds, then it is reducible. You can find the derivations on the Wikipedia page you referenced. If you think that those derivations are wrong, then please show us the errors in them.

As it is, you have no example of something true and not reducible.

I note also that you have no defense against the criticisms of your other anti-reductionist examples.


This discussion actually demonstrates why I – and, I suspect, more than one physicist – find the philosophy of science sometimes difficult to understand. My natural reaction is to apply a theory to a simple, even trivial example. If it doesn’t work on that level, I don’t see a reason why it should work in more complex situations.

If I understand Marko and Massimo correctly, a reduction is not really a reduction if it needs boundary conditions, because these conditions are not specified by the theory.

Now take the simple statement “I moved my arm”. The most trivial and uncontroversial reduction I can imagine is the reduction of a statement to itself. The fact that I moved my arm is the necessary and sufficient condition for the fact that I moved my arm.

Now the statement “I moved my arm” automatically contains a boundary condition. It refers to a starting position X1 at a time T1, and states that X2 at a time T2 is different.

If one doesn’t allow boundary conditions in reductions, then the trivial and uncontroversial reduction of “I moved my arm” to “I moved my arm” is not allowed, because the second statement – being identical to the first one – presupposes a boundary condition X1 at a moment T1.

This is quite weird.


Marko,

Thanks for the comment on looking more to program refinements! Program refinement (“a generalization of semantic equivalence”) is also referred to in the program transformation wiki article. This could be very useful, I think, to define “reduction” (in a computationalist context).

“But what I don’t understand is why you are looking for a computer-science reformulation of reductionism in the first place?”

Because everything in the universe is code (programming), right? I thought that was a given! 🙂


Patrick, it may be weird, but besides the fact that “I moved my arm” is a statement of fact, not a theory in any meaningful sense of the term (and, therefore, there is nothing to “reduce”), your worry is dispelled by the fact that there are, as Marko pointed out, successful cases of reduction. So it’s not like he is setting things up in such a manner that reduction is impossible in principle.


Hi Marko,

OK, accepted. But your argument that time-symmetric determinism doesn’t give the 2nd law only works if we do have complete determinism. Any degree of probabilistic non-determinism gives the 2nd law as weakly emergent. And here I’ll quote you: “Farewell to determinism” :-). In order to argue against the 2nd law being weakly emergent, you need to argue for complete determinism!

“Supervenience physicalism” is purely a statement about ontology (that everything is composed of and supervenes upon physical stuff), but it does not imply the “strong” epistemological linkages of the sort that you define here as “reductionism” (though, on the semantics, I don’t accept that the term “reductionism” should be associated solely with the “stronger” forms).

Hi Robin,

The problem with this is that the 2nd law doesn’t only apply globally, it also applies locally (so long as the locality is sufficiently isolated, with little input of energy). Thus, if anti-2nd-law behaviour were generally allowed by the laws then we’d see plenty of examples of it locally, and we never do. For such reasons I don’t agree that the 2nd law (and arrow of time) are only a matter of the starting conditions of the Big Bang.

Hi Patrice,

Non-conservation of energy in General Relativity has nothing to do with multiverse models. In classical physics one invokes a concept of “gravitational potential energy” in order for energy to be conserved. One can do the same in General Relativity, and if one does then energy is conserved in GR. But, for reasons that Sean Carroll explains, some theorists prefer not to, and instead invoke only a more general energy–momentum conservation. Either way works, so long as you’re consistent, so it’s really a matter of preference about how one defines terms (see Carroll’s article).

But, whichever of these you do, it applies just as much to GR in general and to our-universe cosmology as to any multiverse model. So this issue is irrelevant for whether one argues for a multiverse model.


Hi all, my last comments on this thread.

Marko,

> Really? All features?

Yes, in that I don’t think there is any feature that is in principle impossible to derive. I don’t mean that there is a systematic way of discovering all features.

> Note that simulating the structure theory doesn’t cut it, since simulations can cover only a finite number of solutions

A simulation reveals the same kinds of behaviour we see by empirical observation of the real world. If we can derive the features of an effective theory by observing the real world, we can in principle derive the same features by study of the simulation. Not sure what you mean by “solutions” in this context, but it seems clear to me that covering all possible situations and starting conditions is not necessary.

> Note that I have defined my terms — strong emergence is, by definition, the situation where reductionism does not hold.

You did not make clear that this is a definition. Instead you seemed to me to be arguing that where reduction is not possible (e.g. because the structural theory is wrong) then we must have strong emergence. By the usual definition of strong emergence, this does not follow. I think your definition is quite misleading and idiosyncratic.

> If you think a gas in the box is too complicated or quantum effects are important, consider instead the example process of billiard balls being scattered from the initial “triangle” position, and the reverse process. That should be convincing enough that Newton’s mechanics can predict both the second law and its opposite.

Funnily enough this kind of scenario is exactly what makes it clear to me that Newton’s mechanics predict the second law. Given any arbitrarily chosen starting position, the billiard balls get more entropic. The only way to get the reverse effect is to start from a position of low entropy, allow the system to become entropic, and take the resulting state with all momenta reversed as your starting position for your demonstration of a violation of the second law. There is no other way to get decreasing entropy in a Newtonian simulation. Since all bar such contrived starting positions result in the second law, it is evident to me that Newtonian mechanics predict the second law and not its opposite.

Massimo,

> The question is the first one, not the second one, so Patrick’s solution doesn’t apply and Marko is still on target.

So you say, but it seems to me that nobody is challenging Marko on the first question. If this is so, then I don’t think this is the point in dispute. On the other hand, Marko is claiming that we cannot explain how the second law of thermodynamics arises from the structural theory without strong emergence and I think we can. I’m not changing the question, I am challenging a point made by Marko himself, and so is Patrick.


Hi

Marko,

“The point you make with brain behavior is precisely the answer to your question regarding that ‘[the] theory/model may just not be correct’. Namely, whenever you stumble upon a strongly emergent phenomenon, you can simply say that the fundamental theory is missing something.”

In your brain behavior case (the incorrect number of synapses) that would be true, but I think there is a subtle difference between the cases you gave and the one I am discussing.

Your examples have models resulting in wrong descriptions/predictions (qualitatively or quantitatively) at the higher level. In the case I am discussing the model can both describe and predict the actual phenomenon. The problem is that the lower level model (the structural) allows for so many ways to produce the higher level phenomenon. However, in reality (a specific system) we keep finding a specific solution being used, rather than all the others. The question then is where is the selection for that particular solution being ‘made.’

It is harder to make the case that it is an incomplete lower level model, since it does allow for what is seen, and in fact one could likely arrange cells into the other configurations and get the predicted results. It just isn’t occurring in the specific system under study. So the organizing principle really seems to be coming from concerns at the higher level (the system) rather than the lower (how the parts work). I hope that distinction makes sense?


Folks, I’ve been travelling, sorry for the delay…

Marclevesque,

Thanks! 🙂

It’s not like there is one particular phenomenon that cannot ever be reduced to any structure theory. I am not saying that.

Instead, given any particular strongly emergent phenomenon, we can always reformulate the structure theory to make it reducible and weakly emergent. However, there can always be other phenomena that are still strongly emergent even for that new structure theory. So any particular example of strong emergence can be made reducible, but there are infinitely many such examples. This is discussed in part II.

Yes.

Dbholmes,

You are saying that the structure theory predicts many possibilities, but the effective theory always observes only one of those, and always the same one. This looks like the type of irreducibility that is discussed in the arrow of time problem — the effective theory has one law (always the same solution appearing in the system) while the structure theory is giving various alternatives to that “law”, none of which ever happen. I am not familiar with neurochemistry or with the details of your example, but it sounds precisely like the arrow of time irreducibility.

Ejwinner,

Thanks! 🙂

I have answered Aravis regarding the social sciences above; I hope that answers your question too.

Patrick,

Massimo is right — moving your arm does not constitute a theory. I have explained the status of boundary conditions in my previous comment above; I hope that is clearer now.

Coel,

Wow, are you trying to use my own argument against me? 🙂 I’m impressed! But I am afraid it won’t work, sorry. 🙂

My point with the von Neumann chain was that you don’t need just *any* type of probabilistic behavior. Rather, for the second law you need a very *specific* type of probabilistic behavior, namely you need to cut the chain at the level of intermolecular collisions. And nothing in QM says that the “measurement” happens precisely there. In fact, the interference experiments with C60 done by Zeilinger et al show that intermolecular collisions do not always collapse the wavefunction, so this possibility is almost ruled out experimentally.

I do agree with your general claim that the second law could be reduced to some probabilistic theory. But the “devil is in the details” in the case of QM — no proof has been constructed, and all attempts point to the fact that QM needs a face-lift upgrade (solve the measurement problem!) in order for reducibility of the second law to be properly addressed. Ontologically it could work if we can figure out the details, but in QM as we know it today (epistemology!), the probabilistic nature of the structure theory is too vague to allow for any reduction proof of the second law.

I hope we have reached a consensus on this topic, as far as epistemology goes.

DM,

A simulation is the evaluation of one particular solution of the equations of the theory. If a theory has uncountably infinitely many solutions (which is usually the case), simulations can never evaluate all of them. And if you look at my definition of reductionism, it asks for *all* solutions of the effective theory to be also solutions of the structure theory (see also footnote [3] of the article). You may consider this to be a far too strong definition and advocate relaxing the “all solutions” requirement into “only typical solutions” (for some definition of typical), but then you may run into trouble with the Solar-neutrinos type of irreducibility problems.
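The point about simulations evaluating single solutions can be sketched with a toy theory (the harmonic oscillator; all code and names are illustrative only): the general solution is a two-parameter family, a simulation picks out one member via its initial conditions, while the analytic solution map covers the whole family at once.

```python
import math

# Toy "structure theory": x'' = -x. Its solutions form an uncountable
# two-parameter family, one solution per initial condition (x0, v0).

def simulate(x0, v0, dt=1e-4, t_end=1.0):
    # A simulation evaluates ONE solution: the one selected by (x0, v0).
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        v -= x * dt   # semi-implicit Euler step
        x += v * dt
    return x

def analytic(x0, v0, t=1.0):
    # The analytic map covers ALL solutions at once:
    # x(t) = x0*cos(t) + v0*sin(t).
    return x0 * math.cos(t) + v0 * math.sin(t)

# Any finite batch of simulations samples only finitely many members of
# the family; the closed form quantifies over every one of them.
for x0, v0 in [(1.0, 0.0), (0.0, 1.0), (0.3, -0.7)]:
    assert abs(simulate(x0, v0) - analytic(x0, v0)) < 1e-3
```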

There is nothing in Newtonian mechanics that states that such a situation is “contrived”. That is precisely the problem — Newton’s laws predict both the second law and its opposite on equal footing, never saying that one situation is contrived and the other is common. This distinction is the statement of the second law itself, not Newton’s laws.

Everyone,

It’s been my pleasure to participate in this discussion. 🙂 I suggest we move on to part II of the article.


Reblogged this on Skander Hannachi, Ph.D and commented:

Check out the following article:

Reductionism, emergence, and burden of proof — part I and part II

A concise yet rigorous analysis of the concept of reductionism, the idea that all natural laws can be reduced to the fundamental laws of physics (e.g. how chemical interactions between substances can be reduced to the quantum mechanics describing their molecules).

I was a little bit frustrated by the absence of any treatment of what I expected to be the usual candidates for the question “does reductionism always hold?”: Can psychology be reduced to neurochemistry? Can history be reduced to population dynamics?

Instead, all the examples he provides are cases where reductionism is intuitively expected, like reducing fluid mechanics to Newtonian mechanics or cosmology to particle physics (I’m oversimplifying the terms used for clarity and brevity’s sake).

More importantly, the author fails to mention some ideas by M. Gu et al. [1], who provide evidence of emergent behavior in Ising lattices. They prove this behavior by showing that some macroscopic properties of the lattices are undecidable when considered as computations derived from their microscopic configuration. Since undecidability in Turing machines is a direct consequence (arguably a corollary) of Gödel’s incompleteness theorem, Gu et al.’s proof is just a more rigorous version of Dr. Vojinovic’s statement that Gödel’s theorem is evidence of strong emergent behavior in our universe.

Finally, I really wish that towards the very end, he hadn’t gone in the direction of “Gödel’s incompleteness theorem as proof of the Mystical”. There are more comprehensive discussions of that point of view, and most logicians would probably disapprove anyway.

[1] Mile Gu, Christian Weedbrook, Álvaro Perales, Michael A. Nielsen, Physica D: Nonlinear Phenomena, Volume 238, Issues 9–10, 15 May 2009, Pages 835–83.


Skanderhannachi,

Just for the record: I am aware of that paper. 🙂 I just think that the general audience is not really familiar with Ising lattices, and that such an example of emergence would be too obscure and too technical to explain. The three examples I discussed are usually considered common knowledge, and thus more accessible to a wide audience.
