Reductionism, emergence, and burden of proof — part II

by Marko Vojinovic

In part I of this essay I introduced and discussed the idea of reductionism from an epistemological point of view. In what follows we will go one step further and discuss the idea of ontological reductionism [15]. However, much of what follows will actually be devoted to recasting the discussion of part I in a more formal framework, since this will give us a clearer picture of reductionism and allow us to discuss the idea of an ultimate fundamental theory, the so-called “theory of everything.” The formal axiomatic framework will enable us to invoke Gödel’s first incompleteness theorem to argue that such a theory cannot exist, thereby defeating any concept of ontological reductionism.

Axiomatic structure

The axiomatic structure of any scientific theory is very complex, so complex that the axioms are virtually never all spelled out explicitly. The reason is that there are too many of them, and that it is not always easy to figure out what the minimal set of independent axioms underpinning a given theory is. Nevertheless, any quantitative theory has the following gross axiomatic structure, with the axioms classified into several groups:

(1) axioms of logic

(2) axioms of set theory

(3) axioms about correspondence of basic theoretical quantities to experiment

(4) axioms about the theory’s range of validity

(5) axioms about laws that Nature upholds

Group (1) is typically the set of axioms of first-order predicate calculus [16], establishing a formal language and rules which define what is meant by deductive reasoning within the theory. Group (2) typically consists of the axioms of Zermelo-Fraenkel set theory [17]. Together with (1), these axioms establish the basis for the rest of mathematics, necessary for the quantitative description of any theory. Group (3) is the set of axioms concerning the kinematics of the theory: the properties of the basic physical quantities, i.e., the variables characterizing the theory, their postulated observability, and the experimental concepts necessary to observe them. Group (4) postulates the range of applicability of the theory and introduces fundamental error-bar estimates for the variables. Finally, group (5) contains the axioms about dynamics, the “meat” of the theory — statements about the laws which Nature is supposed to uphold, in the context of all the previous axioms.

Given the above structure for both an effective theory and its corresponding structure theory, one can reformulate reductionism as a simple formal statement: the effective theory is reducible to the structure theory if and only if all axioms of the effective theory are theorems of the structure theory, given appropriate approximation semantics. Axiom sets (1) and (2) are most often identical in the two theories, so they are automatically theorems of the structure theory. The axioms in (3) tend to differ between the effective and the structure theory, and establishing the former as theorems of the latter amounts to specifying a consistent “vocabulary” between the two sets of variables, as I discussed in part I. Proving that the axioms in (4) of the effective theory are theorems of the structure theory amounts to specifying a consistent approximation scheme and choosing a set of parameters suitable for asymptotic expansions. Finally, proving that the axioms in (5) of the effective theory are theorems of the structure theory (under the constraints of the established approximation scheme) establishes that the dynamics of the effective theory is a rigorous consequence of the dynamics of the structure theory. This establishes the reduction of the effective theory to the structure theory.

Such an axiomatic description of reductionism is useful for discussing some of its properties. For example, it is easy to see that reduction can be regarded as a relation of partial order among theories. First, every theory is reducible to itself, since all its axioms are also theorems by definition. This establishes reflexivity. Second, if the effective theory is reducible to the structure theory, and the structure theory is in addition reducible to the effective theory, then each set of axioms can be proved from the other, meaning that the two theories are in fact equivalent; this establishes antisymmetry. Finally, if some effective theory is reducible to an intermediate structure theory, which is in turn reducible to another structure theory, then the first theory is also reducible to the third, since its axioms can be proved by appealing to the “middle” theory as an intermediate step in the proof. This establishes transitivity.
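As a toy illustration of this partial order, the whole setup can be caricatured in a few lines of code. The sketch below (mine, and deliberately oversimplified) treats axioms as opaque labels, “provability” as membership in a deductive closure generated by a fixed list of inference rules, and reduction as inclusion of the effective theory’s axioms in that closure; the single rule shown stands in for a genuine approximation scheme and is not actual physics:

    # Toy model: theories are sets of axiom labels, "provability" is
    # membership in a deductive closure. The closure is a placeholder for
    # real proof theory, just enough to exhibit the partial-order properties.

    def closure(axioms, rules):
        """Smallest set containing the axioms and closed under the rules.
        Each rule is a (frozenset_of_premises, conclusion) pair."""
        theorems = set(axioms)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= theorems and conclusion not in theorems:
                    theorems.add(conclusion)
                    changed = True
        return theorems

    def reduces(effective, structure, rules):
        """The effective theory reduces to the structure theory iff every
        axiom of the former is a theorem of the latter."""
        return set(effective) <= closure(structure, rules)

    # One made-up rule plays the role of an approximation scheme deriving
    # an effective law from the structure theory's laws.
    rules = [(frozenset({"dirac_eq", "large_N_limit"}), "navier_stokes")]
    structure = {"dirac_eq", "large_N_limit"}
    effective = {"navier_stokes", "large_N_limit"}

    print(reduces(effective, effective, rules))  # reflexivity: True
    print(reduces(effective, structure, rules))  # reduction holds: True
    print(reduces(structure, effective, rules))  # converse fails: False

Reflexivity, antisymmetry and transitivity of reduces() then follow from elementary properties of set inclusion, mirroring the argument above.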

Another aspect that can be usefully discussed in the axiomatic context is the burden of proof for reductionism. Namely, given two sets of axioms, describing the effective and the structure theory, one cannot simply claim that the effective theory must a priori be reducible to the structure theory. It is not valid to merely assume (or, even worse, postulate as some metaphysical principle) that the axioms of the effective theory must always be theorems of the structure theory. This can hold only if one manages to prove that the axioms of the former are theorems of the latter. Moreover, this proof must be mathematically rigorous; otherwise there may be substantial loopholes, as exemplified by the Solar neutrino problem discussed earlier. Therefore, the burden of proof is clearly on the one who claims that reduction holds, the criteria for such a proof are very high, and a priori one must always start from the assumption that reduction does not hold between the two theories. This is popularly phrased as the statement that a reductionist must “walk the walk,” i.e., explicitly provide the proof for each pair of theories, before reductionism can be considered to hold.

One more useful aspect of the axiomatic definition of reductionism is the proof that a “theory of everything” cannot exist. Given the axiomatic structure (1)-(5) outlined above, this is a straightforward consequence of Gödel’s first incompleteness theorem [18]. In short, the theorem states that, given a set of axioms defining some theory, if the theory meets some general requirements [19], there will always exist statements which are true but unprovable as theorems within that theory. Such statements can be incorporated into the theory only as additional independent axioms, and there are infinitely many of them, which, loosely speaking, makes any set of axioms forever incomplete. This is guaranteed already at the level of logic and set theory, and the existence of additional independent-but-true laws of physics (like the arrow of time) only provides an additional source of this incompleteness. So-called Gödel statements correspond to what we have described as strongly emergent phenomena — if such a statement is added to the set of axioms of an effective theory, the latter becomes non-reducible to the structure theory, unless we add the same axiom to the structure theory as well.
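For reference, one standard modern formulation of the theorem invoked here (in Rosser’s strengthened form, paraphrased rather than quoted from any particular textbook) reads:

    If $T$ is a consistent, recursively axiomatizable first-order theory
    that interprets enough arithmetic (Robinson arithmetic $Q$ suffices),
    then there is a sentence $G_T$ such that $T \nvdash G_T$ and
    $T \nvdash \lnot G_T$. If $T$ is in addition arithmetically sound,
    then $G_T$ is true.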

One can rephrase these conclusions as follows: given that we have epistemological access to only a finite set of phenomena in Nature, there is no way we can construct a “theory of everything.” At best, we can construct a “theory of everything so far,” which is fundamentally incomplete in the sense that there will always exist strongly emergent phenomena in Nature that have not been accounted for by the theory, and that are therefore not reducible to our fundamental theory.

The prime example of such a strongly emergent phenomenon is the arrow of time, as discussed in part I. It cannot be reduced to the behavior of individual elementary particles, and one must consider it an additional axiom in a fundamental theory. One can postulate it as it stands, or through an initial condition at the Big Bang, or through fine-tuning of the “inflaton potential” in some suitable inflationary model, but one way or another it has to be postulated. And it is just one of infinitely many such strongly emergent phenomena, as guaranteed by Gödel’s theorem.

Ontological reductionism

So far we have discussed the topic of epistemological reductionism. I have described in what sense one theory can be said to be reducible to another, and I have argued that both the effective and the structure theory must be well-defined as quantitative mathematical models which are not in contradiction with experiments within their respective domains of validity. I have given examples of cases where reductionism can and cases where it cannot be established, with respect to quality, quantity and complexity. I have argued that the burden of proof lies with the claim of reduction — any phenomenon must be considered strongly emergent until proved otherwise [20]. I have demonstrated that Gödel’s first incompleteness theorem excludes the existence of a “theory of everything,” and allows only for an epistemologically incomplete “theory of everything so far.” While we may try to keep redefining the fundamental theory by including each newly-discovered Gödel statement again and again, this process does not converge, and therefore no well-defined theory of everything can exist. Finally, I have outlined a formal axiomatic treatment of all of the above.

As far as epistemological reductionism is concerned, the whole analysis has one final message: the program of establishing reductionism across all sciences is completely hopeless. Moreover, Gödel’s theorem guarantees that it will remain hopeless in perpetuity, regardless of the level of mathematical proficiency we may ever reach in the future.

All that said, there is one more important issue to be addressed — the possible ontological validity of reductionism. Namely, one could argue that the futility of epistemological reductionism does not imply the absence of ontological reductionism. In particular, one can claim that, despite Gödel’s theorem, one could “in principle” imagine a theory containing the collection of all (infinitely many) Gödel-statements as axioms, thereby covering all phenomena that could ever exist in Nature, strongly emergent or otherwise. Just for the sake of the argument, and despite the fact that any expert in mathematical logic would immediately begin yelling at us, let’s assume that such a theory can exist. As the last point of this essay I want to give an example to argue that this metaphysical assumption is intrinsically sterile and useless for any philosophical discussion.

The example goes as follows. Suppose that I get a flash of inspiration, and manage to mathematically formulate a theory of everything. Arguably, it will contain the specification of the most elementary “building blocks” of matter, the specification of all their interactions, the specification of all possible phenomena that can emerge from complexity, and the specification of, indulge me, eight uncomputable functions that provide the “input interface” for eight self-conscious Deities. The uncomputable functions are coupled to the rest of the theory in such a way that these eight Gods can influence any outcome of any physical process, as they see fit. For the sake of the argument, imagine that I can authoritatively claim that this is the ultimate theory of everything, describing our real world and all phenomena in it.

The most obvious feature of such a theory is its “anything goes” property. At best, it can be used to claim that ancient Greek religious mythology was wrong, since that mythology claims that there exist more than eight gods, which my theory demonstrates to be false. Outside of such silly arguments, it would be completely useless for any and all discussions whatsoever, including the most abstract metaphysical ones. But wait — a naturalist reductionist might ask — can we instead construct a theory which does not feature such a high level of arbitrariness, for example one which does not contain any deities? Well, that would mean that we should, say, remove the uncomputable functions from the theory. This in turn means that we are already restricting ourselves to a certain specific subclass of “all possible” theories (namely the subclass of recursive theories [21]), and any theory from that subclass runs the danger of being incomplete in the sense of Gödel’s theorem. We then again run into the problem of the burden of proof — we need to explicitly prove that the excluded features of the theory (i.e., the presence of the eight gods) are not necessary for its completeness. And any such proof is, of course, missing.

This example illustrates that one cannot consistently discuss a theory of everything while at the same time insisting on parsimony. Parsimony requires us to assume the smallest possible number of axioms for a theory, while any hypothetical theory of everything must contain infinitely many axioms, due to Gödel’s theorem.

Conclusion

The moral of the story is that the concept of ontological reductionism is too elusive to be useful for anything — we can either accept the anything-goes theory, which is useless, or try to be more specific about the properties of the fundamental theory, which is burdened by the absence of a proof of reductionism, i.e., one cannot prove it to be the theory of everything. Thus, the only reasonable way out of this conundrum is to give up on any notion of ontological reductionism whatsoever. Together with the futility of epistemological reductionism, the overall argument of the article is that one should abandon the metaphysical idea that all sciences, and Nature in general, are reducible to any imaginable theory of fundamental physics. While it is important for our general knowledge to establish reductionism between various theories whenever possible, there are stringent criteria for doing so, and it is not possible in general.

At the end I would like to raise a friendly criticism of the proponents of reductionism in Nature. The conclusions of this essay stand in sharp contrast to the popular opinion among scientifically-oriented people (even some practicing scientists) that reductionism unquestionably holds in science. The reason for this popularity arguably lies mostly in scientists’ ignorance of the full axiomatic structure of the theories they study, and in the lack of education in mathematical logic, especially its less trivial aspects. Despite being popular, the reductionist opinion is actually a heavy metaphysical assumption, virtually indefensible on both epistemological and ontological grounds. While reductionism can indeed be established in certain particular cases (which is always a useful thing to know), a sizable number of scientifically-oriented people generalize reductionism from these special cases to the full-blown level of a scientific tautology (or something to that effect), completely disregarding a glaring lack of evidence and consistency. This is what Dennett labeled “greedy reductionism” [22]. If anything, this approach can be called “scientistic,” since it demonstrates both an unwarranted overconfidence in scientific results and a superficial level of knowledge about the actual statements of science. Science tells us far less than what is being attributed to it by such people, and one must be careful not to get carried away when interpreting scientific results.

Giving up the idea of reductionism essentially amounts to accepting strong emergence as a fundamental property of Nature — a physical system might display behavior that is more than the behavior of the sum of its parts. Proponents of reductionism might find this at odds with their favorite ideology (physicalism, naturalism, atheism, etc.), but there are actual examples of strong emergence in Nature, the arrow of time being the most prominent one. It would be interesting to see how many people would actually agree to change their minds when faced with this kind of approach, as giving up reductionism generally weakens the arguments that a physicalist may have against dualism, a naturalist against the supernatural, an atheist against religion, etc. Philosophy teaches one to keep an open mind, while science teaches one to appreciate the seriousness of experimental evidence. When these two combine to demonstrate that certain parts of a physicalist/naturalist/atheist belief system are just unfounded prejudices, even downright wrong, it would be interesting to see how many people will actually give them up. After all, these are precisely the people who boast about both open-mindedness and the scientific method, and invoke them to criticize dualists/supernaturalists/theists. Now they are challenged with giving up one of their cherished beliefs, and I would like to see how truly open-minded and scientific they can be in such a situation.

_____

Marko Vojinovic holds a PhD in theoretical physics (with a dissertation on general relativity) from the University of Belgrade, Serbia. He is currently a postdoc with the Group of Mathematical Physics at the University of Lisbon, Portugal, though his home institution is the Institute of Physics at the University of Belgrade, where he is a member of the Group for Gravitation, Particles and Fields.

[15] I have not provided a precise definition of ontological reductionism, and there are many different attempts in the literature. But briefly, ontological reductionism is the assumption that there exists a fundamental “theory of everything” to which everything else could be epistemologically reduced, given enough effort and rigor. We might not be in possession of a full formulation of such a theory (so epistemologically it might be out of reach), but the claim is that it exists, in the sense that it can be approached as a limit by formulating epistemologically ever more precise fundamental theories of nature. The assumption of ontological reductionism is that such a limiting procedure is convergent. I argue that most of the definitions of ontological reductionism found in the literature boil down to this one, operationally.

[16] First-order logic.

[17] Zermelo–Fraenkel set theory.

[18] Gödel’s incompleteness theorems.

[19] All theories discussed in physics and beyond are powerful enough to satisfy these requirements.

[20] One can draw a loose analogy with the principle of “innocent until proved guilty.”

[21] Recursive language.

[22] Greedy reductionism.

81 thoughts on “Reductionism, emergence, and burden of proof — part II”

  1. Hi Marko,

    If I’m understanding correctly, I think our disagreement boils down to how we see parsimony and the burden of proof.

    You are saying that, given what we know of Quantum Mechanics, we can’t be sure that the 2nd law is weakly emergent from QM. Therefore we have to postulate an extra axiom that gives the 2nd law, and you are calling this “strong emergence”. You are then placing the burden of proof on anyone asserting that the 2nd law is weakly emergent from QM. Is that a fair summary?

    My reply is that, yes, we don’t know for sure that the 2nd law is weakly emergent from QM, because our understanding of QM is incomplete. But, nor can we say that it isn’t! I then assert that the parsimonious position is that it is, since that avoids your extra axiom.

    Further, let’s suppose that the 2nd law were not weakly emergent from QM. Have you considered how this would then work? You’d have an extra axiom (or more), necessary to produce 2nd law behaviour. But that means that individual particles would have to be in different places and moving differently than as predicted by QM, which means additional forces of some sort. At that point I’d very much place the burden of proof on those advocating such forces to find experimental evidence of particles behaving in ways inconsistent with QM. So far we have no such evidence.

    This is essentially the point Sean Carroll was making in the articles you linked to. All the evidence is that our current low-level theories are “complete” for the behaviour of atoms and molecules whizzing around in gasses, in physics labs, and in our brains. The burden of proof is then on anyone arguing for extra forces, or for the claim that their ensemble behaviour is not weakly emergent from the low-level.

    All of these examples appeal to reductionism precisely as I have defined it, …

    Well, do they? As I see it, Carroll’s and Weinberg’s remarks are compatible with my form of reductionism, namely “supervenience physicalism”. That is the doctrine that everything consists of patterns of low-level physical entities (particles or whatever), that their behaviour is described by physical laws, that everything else supervenes as an ensemble of these physical particles, and that the behaviour of the ensemble results from the aggregated behaviour of the low-level entities.

    That, again, is about ontology, and you’re right that owing to epistemological limitations we cannot fully establish it (any more than we can in science generally), but for the above reasons I consider it favoured by parsimony.

    I don’t see anywhere in those articles where Carroll claims the strong epistemological linkages that are needed in your definition of “reduction”. Indeed, Carroll explicitly disclaims that!

    In short, I don’t see that any of your arguments refute supervenience physicalism. What you have done is (1) argue that supervenience physicalism is not fully proven (I agree, but I argue that it is easily the best current model and the best framework for ongoing research), and (2) demonstrate that much stronger variants of reductionism, involving strong epistemological linkages, are not tenable (I agree there also!).

    PS to Massimo, Schlafly, ejwinner & Robin, it was an analogy, an adopting of mathematical Platonism as a thought experiment. Don’t worry, DM has not converted me to mathematical Platonism!


  2. I’ve wondered (for many years) why a TOE (or close enough to one) couldn’t just be put together by trial-and-error by a swarm of hackers using Agda* (or another programming language in its general category, with perhaps more advanced features). “TOE” would just be a program that works (perhaps not totally understood) in that it accurately makes correct predictions for (almost) all observational and experimental data. I’m not sure what is really wrong with that.

    (If the conclusion is that “TOE” can never be made, then that’s that.)

    * http://en.wikipedia.org/wiki/Agda_%28programming_language%29
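    In Python rather than Agda, the shape of what I have in mind is just a search loop over candidate programs scored against observations. Everything below (the candidates, the data, the scoring) is invented purely for illustration:

        # Caricature of "TOE by trial and error": candidate theories are
        # programs (plain functions here); the winner is whichever predicts
        # the observations best. All numbers are made up for illustration.
        observations = [(1, 1.0), (2, 4.1), (3, 8.9), (4, 16.2)]  # (input, measured)

        candidates = {
            "linear":    lambda x: 2.0 * x,
            "quadratic": lambda x: float(x * x),
            "cubic":     lambda x: 0.25 * x ** 3,
        }

        def score(theory):
            """Sum of squared prediction errors; lower is better."""
            return sum((theory(x) - y) ** 2 for x, y in observations)

        best = min(candidates, key=lambda name: score(candidates[name]))
        print(best, score(candidates[best]))  # "quadratic" wins on this data

    The obvious objection, in the spirit of Marko’s Gödel argument, is that nothing guarantees this loop ever converges, or that the candidate space contains an adequate program at all.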


  3. > “All you need is for the low-level particle behaviour to be in some degree probabilistic rather than fully deterministic.”

    This is popping up all the time, but unless something has changed since I studied statistical physics, it is wrong. The *description* is probabilistic. One does not have to assume that the behaviour is probabilistic. The particle behaviour can be completely deterministic (and time-symmetric). Jean Bricmont (of Sokal and Bricmont fame) gives an interesting, relatively simple example in his article “Science of Chaos or Chaos in Science” with the “Kac ring model” (in the appendix).
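    For readers who want to see the point concretely, here is a minimal simulation of that model (my own quick sketch; the parameters are arbitrary, but the update rule is exactly Kac’s: each tick, every ball hops one site clockwise and flips colour iff it crosses a marked edge):

        # Kac ring: N sites on a ring, each carrying a black (1) or white (0)
        # ball. A fixed random subset of edges is "marked". Each step, every
        # ball hops one site clockwise and flips colour iff it crosses a
        # marked edge: fully deterministic and exactly periodic (period 2N),
        # yet the colour imbalance relaxes as if irreversibly.
        import random

        random.seed(0)
        N = 10000
        marked = [random.random() < 0.1 for _ in range(N)]  # 10% of edges
        balls = [1] * N                                     # start all black

        for t in range(8):
            imbalance = (2 * sum(balls) - N) / N
            print(f"t={t}: (black - white)/N = {imbalance:+.3f}")
            balls = [balls[i - 1] ^ marked[i - 1] for i in range(N)]

    The map returns the system exactly to its initial state after 2N steps, yet the imbalance printed above decays roughly like (1 - 2u)^t for marker density u: reversible, time-symmetric microdynamics with apparently irreversible relaxation, which is Bricmont’s point.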

    I agree with Marko if he states that Weinberg, Carroll et al. are way too confident of the powers of reductionism, but I also believe that, to make his point, he is unnecessarily restrictive in his definition of what counts as a reduction. On the other hand, I also find that he’s somewhat loose. I don’t understand why he claims that fluid dynamics is reducible to a Newtonian (particle) approach. In fluid dynamics one takes a certain “continuity limit” to have things like “the pressure at a point of the fluid”. This works, but with the tacit understanding that the “point” has, in fact, macroscopic dimensions compared with the microscopic dimensions of the particles. The same is true for things like the “density” of a fluid etc. All these considerations are definitely not part of Newtonian mechanics; they are approximations added afterwards – Newtonian mechanics doesn’t have anything to say about them. In other words: the continuum limit actually takes things beyond the applicability of Newtonian particle mechanics (but we don’t notice, because as an approximation it’s good enough). In a sense, the continuum limit is a simple version of the thermodynamic limit we need in order to reduce phase transitions etc. in a mathematically nice way to particle theories. I’m truly surprised that Marko considers fluid dynamics to be reduced, but not the second law.

    The best way to counter Weinberg, Carroll et al. is a simple question: “What’s the reduction? Show me the formulas.” If they can’t show them, their opinion is a belief and not science.


  4. Since this discussion about the percentage of academics who agree with Chomsky and whether or not the language acquisition device hypothesis that Chomsky posited is completely correct is tangential to the OP, I don’t wish to go down that argumentative road. Instead, I want to quickly remind you that you haven’t really responded to my initial argument (I actually suspect that the points brought up about Chomsky’s views are actually red herrings).

    I was arguing that it is wrong to say that any claim opposing views held by most scientifically-oriented people makes that claim (or the utterer of the claim) “anti-science.”
    I did this by pointing out intuitive cases where people opposed widespread views in the scientific community and are not only not considered to have made “anti-science” claims, but are considered “science friendly/helpful.”

    First, I pointed out plenty of examples besides Chomsky where people made claims that were contrary to popular beliefs in the scientific community and that facilitated scientific progress (Firestone and Scholl, Bert Vogelstein, perhaps even Newton and Einstein).

    Second, whether or not Chomsky was wrong, and whether or not he is believed by most linguists today, still doesn’t show that his claims which went against popular scientific opinion were “anti-science.” After all, Newton’s views weren’t adopted when he first put them forth (they were contrary to popular scientific opinion), and they aren’t popular today (in the sense that people don’t believe Newtonian mechanics accurately represents the way the world is; quantum mechanics does, at least more so). However, Newtonian mechanics had its time and provided valuable insights that were useful in coming up with quantum mechanics. They were certainly far from “anti-science.”

    Similarly, even if he was wrong, Chomsky’s claims brought in valuable insights that the linguistics community still considers and employs, and so intuitively his claims should not be considered anti-science (in some critical/negative sense of the term).

    If you want to claim that he was “anti-science” because he didn’t employ the scientific method to reach his conclusions, then that is just an odd use of the term “anti-science,” since then history and literary criticism would be considered “anti-science” as well… (and it would be weird to criticize history and literary criticism because they don’t employ the scientific method…)

    If you define an “anti-science” claim as a claim that goes against the popular opinion of scientifically-oriented people, then okay, but you are just begging the question. (Sorry for not commenting directly on the content of your paper, Marko. Unfortunately I don’t know much about this subject, so I’m remaining silent on the actual content of your essay.)

    Perhaps it’s just really not clear what you mean by anti-science, but it seems like a difficult road to travel to argue that because a claim goes against popular scientific opinion, it is “anti-science” in a critical and negative sense of the term.

    Lastly, you said that “anyone who says that science progresses by paradigm shifts is also anti-science. The paradigm shift is defined to be a change in views that has no rational or measurable advantages.”

    Perhaps once again I am just not understanding what you mean by anti-science, but even if we assume you are right that paradigm shifts are defined to be changes in views that have no rational or measurable advantages (which is not really right, since Kuhn clearly states that there is room for rational disagreement during scientific revolutions; it’s just that rationality underdetermines theory choice, and value judgments in the community play a role; see http://plato.stanford.edu/entries/thomas-kuhn/#4.1 for more on this), it is not clear why even this is anti-science…

    The paradigm-shift thesis is meant to be a descriptive view about how science progresses and what it aims at… how is this anti-science? Since Kuhn attempted to argue against the view that science builds on previous theories and approximately gets at the truth, perhaps you think that someone is anti-science if they are not a scientific realist? Or perhaps somebody is anti-science if they don’t agree that scientific theory choice is solely determined by rational criteria? If so, this is a very controversial claim… and at any rate this looks like some different sense of “anti-science” than how the term was being used before…

    You also said that only philosophers and not scientists endorse the paradigm-shift view of scientific progress. If you are trying to argue that because most scientists disagree with Kuhn’s analysis of how science progresses, anybody who endorses Kuhn’s views is anti-science, then you are just begging the question again. This is the very thing I am disputing…

    Not to mention, do scientists have privileged authority over a sociological/historical/philosophical analysis of how science progresses, such that if philosophers agree with Kuhn and scientists disagree, we ought to assume that the scientists’ view is to be adopted by default? This also needs arguing for…


  5. Coel,

    You are saying that, given what we know of Quantum Mechanics, we can’t be sure that the 2nd law is weakly emergent from QM. Therefore we have to postulate an extra axiom that gives the 2nd law, and you are calling this “strong emergence”. You are then placing the burden of proof on anyone asserting that the 2nd law is weakly emergent from QM. Is that a fair summary?

    Yes. 🙂

    My reply is that, yes, we don’t know for sure that the 2nd law is weakly emergent from QM, because our understanding of QM is incomplete. But, nor can we say that it isn’t! I then assert that the parsimonious position is that it is, since that avoids your extra axiom.

    Saying that the second law is weakly emergent from QM is saying that the statement of the second law can be phrased as a theorem in the axiomatic system of QM. You cannot just appeal to parsimony and simply “assert” that some statement is a theorem of a given axiomatic system — a theorem needs to be proved. Mathematicians don’t just go around formulating various statements and calling them theorems. Instead, the most crucial step is to prove that something is a theorem.

    In an axiomatic system the burden of proof cannot be shifted around arbitrarily. It always sits with the person claiming that a given statement is not independent of the axioms. If the claim is that the second law is a consequence of the axioms of QM, this claim needs proof. Parsimony doesn’t play any serious role here, and the claim cannot simply be “asserted”.

    For example, Euclid couldn’t just “assert” that the fifth postulate of geometry is a consequence of the other postulates. This statement needed proof, and Euclid couldn’t formulate one, so he had to keep the fifth postulate as an independent axiom. Later on it turned out that such a proof cannot exist and that this axiom is indeed independent of the others. A similar scenario might yet play out with the second law of thermodynamics and the axioms of QM. It is an open problem.

    Further, let’s suppose that the 2nd law were not weakly emergent from QM. Have you considered how this would then work? You’d have an extra axiom (or more), necessary to produce 2nd law behaviour. But that means that individual particles would have to be in different places and moving differently than as predicted by QM, which means additional forces of some sort.

    No, the axiom would not require the introduction of new forces. It can work as a superselection rule, stating that any initial conditions which would violate the second law (through subsequent dynamic evolution) are forbidden.

    A most celebrated example of such an “interaction without force” is the Pauli exclusion principle — it says that two fermions are not allowed to be in the same quantum-mechanical state, but there is no “force” of any kind (in the equations of motion) that is pushing them apart or something. It is a purely kinematical restriction, acting on the space of possible initial conditions, completely independent of the laws of dynamics.

    The same thing can be done for the second law, and indeed there are attempts like postulating that the initial condition at the Big Bang has very low entropy, or some similar ideas along those lines. If we accept (according to the working assumption) that the second law is not reducible to QM, postulating some kind of a superselection rule is not much of a problem.
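    If it helps, here is a minimal numerical illustration of “restriction without force” (my own toy, not a derivation): build a two-fermion state as a Slater determinant of single-particle orbitals, and observe that its amplitude vanishes identically whenever the two orbitals coincide, with no force term anywhere:

        # Two identical fermions in a finite single-particle basis. The
        # physical two-particle state is the antisymmetrized (Slater
        # determinant) amplitude; assigning both particles the same orbital
        # gives the zero state. Nothing dynamical enforces this: it is a
        # restriction on which states are allowed at all.
        import numpy as np

        rng = np.random.default_rng(42)
        dim = 4                              # size of one-particle basis
        phi = rng.normal(size=dim)           # two arbitrary (unnormalized)
        chi = rng.normal(size=dim)           # single-particle orbitals

        def slater(a, b):
            """Antisymmetrized two-particle amplitude matrix psi[i, j]."""
            return np.outer(a, b) - np.outer(b, a)

        print(np.abs(slater(phi, chi)).max())  # generic pair: nonzero
        print(np.abs(slater(phi, phi)).max())  # same orbital: exactly 0.0

    The zero in the last line is enforced purely by the antisymmetry of the allowed state space, which is exactly how a superselection rule acts.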

    All the evidence is that our current low-level theories are “complete” for the behaviour of atoms and molecules whizzing around in gasses, in physics labs, and in our brains.

    But I don’t see how you can know this. Particles in the human brain certainly do obey the Dirac equation. But in addition, they also obey the second law of thermodynamics, which is arguably not a consequence of the Dirac equation, but an independent additional law (manifest only for a large number of particles). Could there be any more such additional laws governing the behavior of the brain? How do you know that there aren’t any? The set of Gödel statements is infinite; how do you know that none of those statements could apply to the human brain, over and above the Dirac equation? Such a law has to be consistent with the Dirac equation, sure, but that doesn’t mean that it doesn’t exist, or that it must be a consequence of the equation.

    How many such additional rules can we specify, on top of the Dirac equation? Well, quite a lot, actually. In one of the comments on part I of the article, I noted that physics has two parts — dynamics, which restricts the behavior of the system, and boundary conditions, which represent the freedom, the set of all possible states of the system that dynamics fails to restrict. As long as the set of boundary conditions is nonempty, we can impose additional restrictions on it. This means that we can specify almost an infinity of other laws over and beyond the known dynamics. These other laws are called “superselection rules”, or otherwise, depending on the context. There are lots of names out in the literature.

    As I see it, Carroll’s and Weinberg’s remarks are compatible with my form of reductionism, namely “supervenience physicalism”. That is the doctrine that everything consists of patterns of low-level physical entities (particles or whatever), that their behaviour is described by physical laws, that everything else supervenes as an ensemble of these physical particles, and that the behaviour of the ensemble results from the aggregated behaviour of the low-level entities.

    That, again, is about ontology, and you’re right that owing to epistemological limitations we cannot fully establish it (any more than we can in science generally), but for the above reasons I consider it favoured by parsimony.

    Of course, the supervenience physicalism that you describe is a metaphysical position, one that you are welcome to uphold. Parsimony is also a metaphysical guiding principle, and you are welcome to uphold that as well.

    But you cannot claim (and I guess you don’t, but I think Carroll and Weinberg do) that such a metaphysical position has been “proved”, “supported”, or in some sense “established” — by science. Science has absolutely nothing to say on such metaphysics, until you specify all the details and properties of elementary particles, and the fundamental laws governing them. And if you do that, you run into all sorts of epistemological problems (that we have already discussed), as you said yourself.

    So I think we agree (surprise! we seem to have reached a consensus!) — my arguments indeed do not refute supervenience physicalism in any way, as long as we are talking metaphysics. But on the other hand, any connection of such a metaphysical stance to science and experimental knowledge is overloaded with epistemological problems, which are only partially tractable at best.

    It looks to me that I have merely rephrased here what you have already said in your last comment. 🙂

    Patrick,

    This is popping up all the time, but unless something has changed since I studied statistical physics, it is wrong. The *description* is probabilistic. One does not have to assume that the behaviour is probabilistic. The particle behaviour can be completely deterministic (and time-symmetric).

    No, it isn’t just description. The behavior itself is not fully deterministic. The reasons are quite unrelated to the topic of reductionism, and I have spelled them out in my previous article, see here: Farewell to determinism.

    On the other hand, I also find that he’s somewhat loose. I don’t understand why he claims that fluid dynamics is reducible to a Newtonian (particle) approach.

    Because I’ve seen a relevant proof of the reduction? I have actually done it myself, as an exercise in theoretical mechanics, during my undergraduate days studying physics. In part I of the article I sketched the rough idea, and I am surprised that you are raising any issues regarding this. This is a well-established example of successful reductionism, and as far as I know nobody has ever disputed the reduction of fluid dynamics to Newtonian mechanics. The details are of course beyond the scope of the article, but there is no controversy about this.


  6. Marko Vojinovic: “I have given examples of cases where reductionism can and cases where it cannot be established, … One more useful aspect of the axiomatic definition of reductionism is the proof that a “theory of everything” cannot exist.”

    What proof? I have no sympathy for reductionism in its current definition. But there is a ‘real’ universe here which does encompass ‘everything (physics, math, biology, intelligence, etc.)’. Regardless of whether we know it or not, it is here. Your above statement is wrong.

    I will use an analogy to define two key words.
    There are 10 castles in a valley. In every castle, there are some men, horses, cats and even some cockroaches. The castles define the ‘structure’ of the valley while those men and cockroaches play the ‘dynamics’.

    In nature (not human) physics, the fermions construct the ‘structure’ while all others (such as bosons) are playing the cockroach role (the dynamics). While the SR (special relativity) is very important for building the colliders, it plays no ‘structure’ role.

    While the fermions define the micro (fine) structure, the macro structure is defined with only one thing, the ‘event horizon (EH)’. Now, we have two big issues.
    Q1: where is that EH? Eight billion light years away from us? Or…?

    Q2, inside the EH, it is a ‘causal’ world. But, what sits beyond the EH, the ‘yonder’? Is the ‘yonder’ a reality?

    Without giving the detailed proof, I state that the ‘yonder’ is real because this universe is expanding (regardless of whether it is accelerating or not). It expands from ‘now’ into the ‘yonder’.

    So, the ‘STRUCTURE’ of this universe has two parts: the EH (the now, here) and the ‘YONDER’. The EH can be almost totally governed by causality. But, what is the linkage between the EH and the yonder? And, how? By definition, that linkage must be non-causal, and the ‘instantaneous action (IA)’ could be a good answer. Yet, how can such an IA be implemented in physics?

    First, is gravity playing any role in this Yonder game? If it does, it must ‘PUSH’ this entire universe from {HERE, NOW} into the Yonder. While GR (general relativity) gives a good description of gravity inside the EH, it does not play the Yonder game with ease.

    Second, I have shown the mechanism which pushes this universe into the yonder (see http://prebabel.blogspot.com/2013/11/why-does-dark-energy-make-universe.html ). The force which PUSHES the universe is F = ħ/(delta S x delta T), and it is the ‘source’ of the quantum principle {delta P x delta S >= ħ}.

    From the above, it is not too difficult to see that {quantum action (ħ) ‘IS’ the gravity (instantaneous action)} if we can show the IA implementation mechanism (next). For now, I will just show two indirect pieces of evidence.

    One, causality: e (electric charge) = F(ħ c), where c is the light speed and F is a function.

    Two, source of non-causality: m (mass charge) = F(ħ/c).


  7. I use the term “anti-science” just to mean going against the scientific establishment, for reasons other than scientific evidence.

    I agree that occasionally someone like Chomsky is right, and that his insights were valuable even if they are wrong. He would be a fine example of someone who can be brilliantly wrong.

    I just heard a radio talk show caller say that he does not believe in global warming because the theory depends on computer climate models, and they are just garbage in garbage out. I regard this as an anti-science argument, because scientists nearly all believe that there is at least some validity to the data and models, and the caller has not identified any specific flaws.

    Kuhn’s paradigm shift theory is similarly anti-science. It is a direct attack on what most scientists believe about science. You say that his thesis is meant to be a descriptive view, but that is a common misconception about Kuhn. He did write descriptions of science, such as a whole book on the early history of quantum theory, but he was never able to relate those descriptions to his paradigm thesis. Quantum theory had many rational and measurable advantages that everyone else recognized.

    We are getting a little off-topic here, but this site seems to have a theme where scientists somehow got science all wrong on issues like reductionism, realism, free will, causality, positivism, unity, infinity, axiomatic math, paradigms, emergence, and empiricism. Maybe it should be renamed Anti-Scientia Salon. Okay, I sometimes think that the conventional wisdom is wrong also.


  8. It seems to me that Marko is arguing implicitly against the so-called “strong” or “physical” Church-Turing Thesis, which I think Deutsch first stated (1985):

    `every finitely realizible physical system can be perfectly simulated by a universal model computing machine operating by finite means’. Classical physics and the universal Turing machine, because the former is continuous and the latter discrete, do not obey the principle, at least in the strong form above. A class of model computing machines that is the quantum generalization of the class of Turing machines is described, and it is shown that quantum theory and the `universal quantum computer’ are compatible with the principle.

    Given I fail miserably at the snarXiv test
    http://snarxiv.org/vs-arxiv/
    I won’t quote any of the pro and contra literature… but

    Zenil (2013) A Computable Universe: Understanding and Exploring Nature as Computation

    marshals papers from both sides.

    The “exclusion argument” apparently suggests, given a particular definition of causality, that higher-order “macroscopic” entities that are supervenient on microscopic mechanisms are not causative of each other. Nevertheless, at the level of epistemology, a physical model skillfully predicting atmospheric CO2 levels in 2050 will most parsimoniously include high-level models of how groups of humans, organised into industries and countries, make decisions and carry out plans.


  9. Thanks Marko, for acknowledging Category Theory, and its relevance. I feel less weird already.

    Schlafly, with his usual robust prose, second only to mine before I got defanged, asserts that whether mathematical conjectures turn one way or another in the matter of veracity (whatever notion of Truth one exactly uses) will have no impact on physics. Perhaps. Indeed, at this point, mathematics and physics are viewed as not reducing to each other. So this is the standard view (semi-pun intended).

    However, it’s also plausible that mathematics is a form of physics we cannot conceive yet. So we can’t really be sure. It’s clearly unprovable at this point (LOL).

    I will offer a new argument for anti-reductionism, which I rolled out over dinner to a top life science academic (to fit the new academic mood).

    What’s the universe made of? Standard Modellists roll out their Standard Model, and express their Standard satisfaction at having explained 4% of the universe (minus gravity, where so-called General Relativity explains very little, besides tinkering with GPS).

    It is a curious thing, as it is not even clear to me that Quantum Field Theory applies to anything but the tiny regions of space where giant accelerators dump gigantic quantities of energy.

    OK, let’s cut to the chase. So what’s the universe made of? Forget the Dark Stuff; just concentrate on the matter Standard Modellists claim to understand. Well, they don’t (if they did, we would already have full-blooded Quantum Computers).

    Standard Modellists consider the universe to be made of fields and particles. But this is not the case for low-energy physics. Consider the hydrogen atom: it’s made of a (more or less) particle, the nucleus, and an electron. But the electron is not really a particle; it’s delocalized at low energies (at very high energies, electrons become point-like).

    There goes the reduction of complicated objects to their constituent parts. Delocalization and entanglement are parts of complicated objects that do not resist deconstruction. Delocalized particles can get entangled even more readily. Entanglement can propagate, it has its own architecture.

    Thus a complicated object, such as a virus, or a piece of DNA is not just a mix of m particles and n fields. It has an entanglement architecture, which is more than m + n.

    More is different, because more is entangled, and what’s less, or separated, is not. Entanglement emerges in objects, not in their separated parts.
    In the language of Category Theory, Quantum Entanglement creates new objects and new morphisms. Using the definition of reduction I gave, a large object A cannot reduce to a smaller B, because B cannot be a full subcategory of A.

    Standard Modellists have ignored entanglement (it does not provide the implicit military funding that mighty accelerators enjoy). As a famous one explained to me: “It just gives headaches”. Well, it also keeps the universe together.


  10. Marko, I’m sorry but you’re wrong when you write “the behavior itself is not fully deterministic”. The issue is quite technical, but a good start is the Bricmont article.

    And yes, I took fluid mechanics as an undergraduate student. I’m baffled that you call this a true reduction. It involves an almost magical mathematical limit, going from small volumes filled with particles (particles that have a small but finite mass) to infinitesimal volumes of some fluid (and these infinitesimal volumes have infinitesimal mass). When you take the continuum limit, there’s an infinity of those infinitesimal volumes – although every real system is finite and only has a finite number of particles. And so on and so on. Strictly speaking, it’s nonsense – but it’s nonsense that works beautifully.

    Now, personally I think that the explanation of fluid mechanics by Newtonian particle mechanics counts as a reduction (and a good one). But the requirements you postulate for true reductions are so limiting that there’s not much left to reduce.

    > “as far as I know nobody has ever disputed the reduction of fluid dynamics to Newton’s mechanics.”

    And that is surprising, because it would be a nice subject for a paper to spell out all the physical assumptions made implicitly or explicitly in the derivation of the equations of fluid dynamics. How do you go from a particle model to something like “the pressure at a point of a fluid”? It simply can’t be done if you rigorously apply a particle model, unless a point is not really a point. I suspect there’s no dispute because physicists routinely make similar assumptions and approximations and don’t even realize what they are doing.
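    One can even exhibit the hidden mesoscopic assumption numerically. In this quick sketch (mine, with made-up numbers), “the density at a point” is estimated by counting particles of a uniform 1D gas within a window of half-width w around the point; the estimate is noise when w is comparable to the interparticle spacing and only settles on a mesoscopic plateau:

        # "Density at a point" for a 1D gas of discrete particles: count the
        # particles within half-width w of the point and divide by 2w. Small
        # windows see too few particles; the continuum value appears only on
        # a mesoscopic plateau of window sizes.
        import numpy as np

        rng = np.random.default_rng(1)
        particles = rng.uniform(0.0, 1.0, size=100_000)  # true density 1e5

        point = 0.5
        for w in [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]:
            count = np.sum(np.abs(particles - point) < w)
            print(f"w={w:.0e}: estimated density = {count / (2 * w):,.0f}")

    The “point” at which fluid dynamics evaluates pressure or density is tacitly a window on that plateau, and that is precisely the extra physical assumption which Newtonian particle mechanics by itself does not supply.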


  11. schlafly, sorry, but no. Your own example of climate change denialism is a good one: that definitely is anti-science, for the reasons you mentioned. But Marko has been engaging in a philosophical discussion based on the available science, and he has taken positions that are compatible with the established science. Nothing to do with “anti-science.” Second, Kuhn was a physicist and historian of science. He described what he saw as a historical pattern. Since scientists don’t do history of science, it is pretty much irrelevant whether what Kuhn said did or did not go along with the opinions of most scientists: even if it didn’t, it wouldn’t be anti-science.


  12. On the whole, the feeling is that proofs and theorems are being used to disguise the fact there is something we do not fully understand about physical computation. … So, Turing’s Titanic machine* is showing good signs of unsinkability, but within a turbulent natural environment, embodied, full of emergent wonders, and a computational structure reaching far into the hazy distance. Turing, as we know, anticipated much of this, and, we hope, is smiling down on us …

    * http://cacm.acm.org/magazines/2012/3/146259-turings-titanic-machine/fulltext


  13. Schlafly, I just want to note that I am a research technician in a cancer research lab, so I wouldn’t consider myself to be against the scientific community. I consider myself to be part of the scientific community. There are plenty of times where I listen to the scientific community’s claims on certain subjects and take them on authority.

    The reason I am taking time to carefully pick at your arguments is because I get the sense that you are anti-philosophy and place all epistemic powers/authority on scientists. My personal opinion is that both methods of inquiry have their advantages and can complement each other very well when respected properly and applied in proper ways.

    So when I see what appear to be anti-philosophy and pro-scientism claims, I am compelled to counter them, so that others don’t feel like scientism is the standard, well-established view. Otherwise we might see what I think is a counter-productive attitude in the public and the scientific community: that philosophical methods are useless and secondary to scientific methods on all subject matters.

    Additionally, I see that Marko put a lot of careful thought and effort into this paper, and to dismiss it offhand because it is a philosophical analysis that goes against popular scientific opinion seems really unwarranted. If you are going to dismiss him, perhaps try being charitable to his paper first. Instead of simply stating that he got certain physics facts wrong in his examples and that therefore his entire thesis is wrong, how about attempting to think of other examples that could get his points across? Try to see if there is actually something to be considered in his argument. I mean, you already put in the time to read the article, so why not at least try to see what insights there could be in it?

    I do appreciate, however, that at the end of your last comment you said that you do think going against conventional wisdom is sometimes okay, and that Chomsky’s insights were valuable even if they may be mistaken. This is the kind of attitude I personally would like to see more of, as opposed to the scientism I frequently encounter (not just from you, but from others too).


    All the previous discussion will just be talk if I cannot show how the IA (instantaneous action) is implemented in physics.

    What is gravity? For a particle A (pa) with m (energy-mass, not rest mass), it can be expressed with three parts.
    P1, instantaneity (timelessness): pa projects its gravity force to other particles with instantaneity; that is, without a carrier (bosons, the slowpokes which are confined to light speed).

    P2, simultaneity: pa interacts with ‘all’ other particles in this universe simultaneously. If it needed to throw out bosons for this, it would need to throw out 10^90 bosons at the same time, eternally.

    P3, bullying: it pushes itself from HERE to yonder.

    With P1 and P2, we can obviously define the IA. But, what is its implementation mechanism in physics?

    I have shown that P3 is done by the ‘quantum action (ħ)’. Now, there are two issues.
    Q1, why should Nature use two mechanisms {quantum action and IA} for a single function? Is quantum action the same as IA?

    Q2, while ħ is the source for P3, what the heck is the quantum action (ħ) anyway? Simply, what the heck is ‘QUANTUM’ anyway?

    A quick answer for Q2 is that ‘quantum’ is all about quantum ‘uncertainty’. This is not totally wrong (just very much wrong), but it is totally useless. The ‘quantum’ is all about the ‘quantum spin’. The ‘structure’ particles (the fermions) carry an honor badge of {1/2 ħ}. All the slowpokes (the bosons) carry a badge of {1 ħ, or integer ħ}.

    Now, what the heck does this {1/2 ħ} mean? It means that all fermions see ‘two’ copies of the universe while all bosons see only one copy (see https://tienzengong.wordpress.com/2014/02/16/visualizing-the-quantum-spin/ ). That is, in addition to this REAL universe, there is a GHOSTLY copy. And every ‘structure’ particle is bouncing between these ‘TWO’. In fact, it is this ‘BOUNCING’ which gives the particle its mass. With a mass-charge, there is gravity. The actual equation for this bouncing is available at http://www.prequark.org/Gravity.htm . [Note: on that same page, it also shows the way of calculating ħ.]

    Deriving the gravity equation is not a big deal; Newton did it, and it can be a special case in GR. But this derivation has a special meaning. It is derived from the ‘self-bouncing’ of an m-particle between two universes {the REAL (timed) and a GHOST-point (timeless)}. As every m-particle meets every other at the same spot (the ghost-point) eternally:
    One, there is no need for a messenger (boson) for their communication, thus the instantaneity.

    Two, the simultaneity is the natural outcome, as ‘ALL’ m-particles are meeting at the same spot.

    Finally, this Real/Ghost symmetry is isomorphic to the EH (event horizon)/yonder structure. With this R/G symmetry, we are just one step from constructing a TOE (see https://scientiasalon.wordpress.com/2015/01/05/apa-2014-4-emergence-and-complex-systems/comment-page-1/#comment-10743 ), next.


  15. Hi Marko,

    Saying that second law is weakly emergent from QM is saying that the statement of the second law can be phrased as a theorem in the axiomatic system of QM.

    You are again asking for much stronger linkages than I am advocating. For example, I would assert that the phenomenon of a star is weakly emergent from stuff obeying QM and gravity — in the sense that if you take a universe filled with physical stuff obeying QM and gravity, then you get stars, and you don’t need any additional physical laws in order to get stars (though you do need to specify your stuff) — and yet a description of a star is not an axiom of QM or GR.

    … the Pauli exclusion principle … is a purely kinematical restriction, acting on the space of possible initial conditions, completely independent of the laws of dynamics.

    Really? You want to explain the exclusion principle as resulting solely from initial conditions? Let’s take a cloud of hydrogen gas collapsing into a star under gravity. This would be a turbulent gas, with molecules bouncing off each other, for tens of millions of years. Surely this will scramble all the initial states of each particle? After all, as you’ve argued, in the long run things are not deterministic. Non-determinism breaks the links between initial conditions and final states, such that local and recent “initial states” are the product of past dynamical rules and non-deterministic contingency.

    And yet, this proto-star is going to end up, tens of millions of years later, with the core going degenerate and the Pauli exclusion principle governing the disposition of the electrons. Are you really arguing that that is some sort of hold-over from the original “initial conditions”? I’d argue for it being part of the nature and behaviour of the particles themselves, and their interactions with each other.

    Could there be any more of such additional laws governing the behavior of the brain? How do you know that there aren’t any?

    I don’t know that for sure, but I’m putting the burden of proof on those arguing for them.

    Of course, the supervenience physicalism that you describe is a metaphysical position […] But you cannot claim […] that such a metaphysical position has been “proved”, “supported”, or in some sense “established” — by science.

    Not fully “proved”, no, but established and supported, yes! I take the line that all justifications of models in science take the form of a Quinean-style web of intertwined ideas (rather than there being direct and specific evidence for different ideas independently). I’d argue that the ideas both of parsimony and of supervenience physicalism are part of that theoretic web, and that they are indeed supported by evidence, namely the overall success of science’s web. Indeed, you can argue for parsimony and Occam’s razor on probabilistic grounds, in a similar way to how you defend other theories in science.

    Thus parsimony and supervenience physicalism are not “metaphysical” but rather they are successful scientific theories (whereas, e.g., vitalism and dualism are unsuccessful ones). Of course, as with all scientific theories, they are open to revision given better data. But, I’m placing the burden of proof on those wanting to over-turn them.

  16. Schafly,
    but this site … should be renamed Anti-Scientia Salon.

    You seem not to understand what it means to be anti-science. So let’s look at the matter a little more carefully.

    We can begin by understanding what it means to be pro-science. Being pro-science means subscribing to the following four tenets:

    1. that science is a powerful and effective form of enquiry within the appropriate domain.
    2. that science uses certain rigorous methods of enquiry such as observation, measurement, test, verification, peer review, etc, within the appropriate domain.
    3. that science tolerates many points of view in the early stages of a theory until they are slowly replaced by a strong, evidence-based consensus as the theory matures.
    4. that science is strictly impartial, following the evidence where it leads and is not led by ideological prejudice.

    Being anti-science means rejecting one or more of these four tenets. Violating tenet (4) is perhaps the most important evidence of being anti-science. For example, Young Earth Creationists begin with an ideological supposition, that the Bible is inerrant and has revealed the true age of the earth. Consequently they question and bend the scientific evidence in order to support their ideological bias. The rejection of vaccinations is an example of violating tenet (3): a solid scientific consensus based on overwhelming evidence is rejected. Climate change denialists fall into this category as well. Many researchers have been disgraced for fraud in their work; we can call them anti-science because they have violated tenet (2). Astrologers and tarot card readers are anti-science because they violate tenet (1).

    The litmus test for anti-science is tenet (4). The anti-sciencer invariably starts out with a strongly held ideological view and then interprets everything else in ways that support this view. So, for example, certain ardent activist atheists (AAA) are eager to demonstrate that science rules out God. They will argue that the Universe randomly happened out of nothing, that strict, greedy reductionism is true, that free will is an illusion, that consciousness is an illusion, and that eliminative materialism is true. If all these things were true, we could safely conclude that God is an unlikely hypothesis. The problem is that none of these positions is supported by the evidence. This violates tenets (4), (3) and (2), and thus we can justifiably call them anti-science. Krauss, Greene, and Hawking, among others, qualify. I can already hear the howls of protest wailing like air raid sirens. And yet it is undeniable that they have violated tenet (4).

    Turning now to Marko: I cannot see how he has violated any of these four tenets, so I am afraid the charge against him, of being anti-science, does not stand. Now, if you wish to level this charge against Scientia Salon, you should substantiate it in detail by appealing to my list above. I think you have a hopeless task. Massimo subscribes to all four tenets.

    If anything, you tend to extend science beyond its natural domain and thus you violate tenet (1). Thus you also display anti-science tendencies. Ironic, no?

  17. Very helpful articles and inspired discussion!

    It is gratifying to see that reductionism, strictly defined, is not likely at all. That comports with what biologists have strongly suspected for a very long time: identifying the operational characteristics of parts under certain circumstances answers some questions, but usually yields many more questions as a by-product. As our understanding increases, the subject becomes more complex. The real question is whether there will come a time when things become less complicated – probably.

    Marko’s reasoning identifies some fundamental sources of the problem. Another source is the limitation of our mental computing power when set against the size of the problem. I cannot evaluate the complexity of the issues in physics and maths, but judging from the above thread it seems overwhelming. Looking at these problems from the biological perspective, I suspect that the complexity of biological processes is greatly underestimated by non-biologists. The difference is that biologists have no doubt about the gaps in their understanding of the basic issues. Since we do not understand consciousness or what drives biological processes, it is not surprising that our fellow discoverers, the philosophers, physicists and metaphysicists, are also lost at sea.

    Deriving the gravity equation from quantum spin (self-bouncing in a Real/Ghost symmetry) has several significant consequences.
    One, quantum and gravity are unified.

    Two, the essence of gravity is now known: the instantaneity and the simultaneity.

    Three, the essence of quantum is now known, and it has two expressions: a) causality (ħ = ΔP × ΔS, as a viewing window, see https://tienzengong.wordpress.com/2014/12/27/the-certainty-principle/ ), and b) non-causality (ħ, the expression of R/G symmetry). As spacetime is quantized, there is no superposition issue.

    Four, although GR (general relativity) is one expression of gravity, it is a very bad idea, as it excludes the essence of gravity (the instantaneity and simultaneity). GR is the sole culprit preventing the unification of quantum and gravity.

    With this Real/Ghost symmetry, we are only one step from the Ultimate Reality (the timelessness and the immutability). I have shown half of that step in a previous post: the rise of the G-string (the structure particles, the fermions), see https://scientiasalon.wordpress.com/2014/10/31/mark-english-on-philosophy-science-and-expertise-a-naive-reply/comment-page-2/#comment-9360 . The other half (the timelessness-to-arrow-of-time process) was discussed in a few articles on my own blogs; it is the framework for calculating the Alpha, and I will discuss it when there is a chance to do so. Right now, I would like to use the remaining allocated words to give a brief outline of a lively TOE.

    The math is based on one simple equation (zero = numbers/infinity). It is not too difficult to show that zero and infinity sit at the same point (named the ghost-point). Then the entire number line becomes a (real numbers/ghost-point) symmetry, which is identical to the physics R/G symmetry.

    Obviously, the math and physics are now isomorphic structures. Yet, I also showed that the math structure can be rewritten as a 7-code expression: {zero (countable), zero (HC), zero (uncountable), real numbers, C (countable), HC (pseudo uncountable), U (uncountable)}, see https://scientiasalon.wordpress.com/2015/01/05/apa-2014-4-emergence-and-complex-systems/comment-page-1/#comment-10743 . That is,

    The 7-code system = R/G symmetry

    Then, I have shown the following:

    First, the 7 code is the base (necessary condition) for consciousness, see https://scientiasalon.wordpress.com/2014/12/23/free-will-skepticism-and-its-implications-an-argument-for-optimism-part-2/comment-page-2/#comment-10472 , and the quark universe can be expressed as {Red, Yellow, Blue, White, G1, G2, G3} 7-codes.

    Second, the base for intelligence is a counting device (such as a Turing computer), see the same link above. The G-strings (the result of the immutability process) show that a Turing computer is embedded in both the proton and the neutron.

    Third, the dynamics of economics depends wholly on the R/G symmetry, see http://www.chinese-word-roots.org/econom01.htm .

    Fourth, the dynamics of politics can be described with a 7-code system, see http://www.chinese-word-roots.org/cwr016.htm .

    Fifth, the structure of linguistics also depends on a 7-code system, see http://www.chinese-word-roots.org/cwr018.htm .

    These should be enough to show the reality of a lively TOE. But no, a TOE is not constructed or reached via reductionism.

  19. Hi Marko and others,

    I’ve glanced through the comments, and do not think the gist of the following has been mentioned, but apologize if it has.

    Not any comments by others, but rather the claim in Marko’s second essay,

    “One more useful aspect of the axiomatic definition of reductionism is the proof that a “theory of everything” cannot exist. Given the axiomatic structure (1)-(5) outlined above, this is a straight consequence of Gödel’s first incompleteness theorem [18]. …”,

    is what I’d stick to commenting on.

    Firstly, just prior to that there seems to be overconfidence that first-order set theory, to which Gödel incompleteness certainly applies, is quite finished in mathematicians’ minds (it, more than any physical science, has the appearance of a finished mathematical framework, yes, but …). There also seems to be overconfidence that it would, in that form, be a basis for EVERY (not just a final) scientific theory, at least a theory within physics/chemistry/biology. Marko had more or less said this a few paragraphs earlier.

    But let’s not dispute this now, but get to the main point.

    Marko’s remark above, in that simple form, would, if correct, show far more than it says: it would show that every such theory, not merely a final one, “cannot exist” in his sense. That leaves the whole question of reductionism in his sense meaningless, since nothing now exists for it to apply to, until more subtle definitions are proposed for the meaning of ‘theory’ and ‘reduction’. No sensible set of axioms, in the sense below, would logically imply all the propositions true in the real-world model of the theory.

    Let’s be more thorough about this. For those clinging to the desire for the non-existence of a final theory (and possibly having other good reasons for this, though I’ve heard none), the supposed reason, using Gödel’s biggest theorem (he had at least 5 or 6 additional theorems worthy of a Fields Medal!), is in my opinion simply not thought out enough. Even using much less than full-blown set theory, surely any theory adequate to get anywhere in even a small part of science would need enough logic and math for incompleteness to apply. Every collection of axioms should be ‘axiomatic’, as they say: consistent and, most important, recursively enumerable, NOT merely finite as Marko expresses it; even for pure first-order logic alone this finiteness needs more thought by him, in particular some pondering of the difference between an axiom and an axiom-scheme. Such an axiom collection is going to have the property that some propositions which are true are NOT among the propositions derivable in ‘the’ standard first-order proof system. In itself, this profound result of Gödel has nothing to do with humans and their intelligence or lack thereof, and little or nothing to do with epistemology. If philosophers are nervous that physicists actually think various other scientific disciplines are, or even will become, redundant to humans (is that what is meant by epistemological reduction?), I think they are far too sensitive to imagined slights. Please read in detail what people like Weinberg actually write in their popular books.
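    To make the axiom/axiom-scheme distinction concrete, here is a standard textbook example (mine, not from the comment): the induction principle of first-order Peano arithmetic cannot be written as a single first-order axiom. It is a scheme, with one instance per formula φ, hence infinitely many axioms, yet still a recursively enumerable set:

    ```latex
    % Induction scheme of first-order PA: one axiom for every formula \varphi(x,\vec{y})
    \forall\vec{y}\,\Bigl(\bigl(\varphi(0,\vec{y})\,\land\,\forall x\,(\varphi(x,\vec{y})\rightarrow\varphi(x+1,\vec{y}))\bigr)\rightarrow\forall x\,\varphi(x,\vec{y})\Bigr)
    ```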

    What Gödel does tell us is that we have to understand the supposed ‘finality’ of a putative final theory in a more sophisticated way than seems to occur either in Marko’s article or in any comments on it, as far as I can see. I think the question is entirely “ontological”, if I understand what people here (and elsewhere in their armchairs) mean by it. Finality (or not) must relate to the fact (or otherwise) that, for any axiomatic choice for any Marko effective theory, with the reinterpretation of all concepts in the effective theory as ‘combinations’ from the final theory, each axiom in that choice for the effective theory (and then trivially each of the axioms’ logical consequences) follows logically from some axiomatic choice of true propositions in the purported final theory, those axioms of the final theory claimed to exist as a ‘function’ of the axioms of the effective theory. Surely this is what is meant, and no more than this, by post-Gödel theoreticians when they say that all truths in the effective theory are logically deducible from the final theory. A simpler, equivalent, but less convincing way to put it is that any one truth in the effective theory can be deduced from truths in the final theory (deduced from finitely many of them, just by the meaning of “deduce” in logic; perhaps Marko’s “finitely many” came from this, but it certainly wouldn’t apply normally to the set of axioms).

    The physicist Steven Weinberg, who earlier in Scientia Salon seemed to be somewhat misunderstood (and not here! or even precisely quoted!), and dismissed perhaps not entirely fairly, may or may not have had similar ideas about the nature of such a theory when writing his popular book “Dreams of a Final Theory”. But I’m sure he was well aware of the defects of the supposed Gödel argument. And the book was written mainly to promote the proposed super-collider in Texas which never happened, not to expostulate technically on what he meant by a final theory. I think his remarks about philosophy, and about any supposed redundancy of other parts of science resulting from a final theory, are very fair and quite different from what a number of comments here seemed to imply several months ago.

    I know that I am being a bit ad hominem in using words like ‘defensive’ above about opposition to reduction, but to me the lack of good arguments against reduction does cry out for some psychological explanation. And I do think the onus is on those opponents to find such good reasons, which Marko has attempted, but utterly unsuccessfully in my view. And that onus is there because of science’s extraordinary partial successes of reduction, the explanation of life, at least here on earth, being perhaps the most striking example.

    No one claims we are close even to formulating the precise language, and some hopefully fundamental truths, of a final theory. There certainly is, or has been, some arrogance there. And of course there will never be any complete assurance to humans that a final theory has been found in the sense above, both for logical reasons as above, and for reasons of observation, as many have noted for the 99th time. But there is never any complete assurance of truth for even a single claim of science which invokes a universal quantifier, though many things such as evolution have a probability so high that they are considered facts by sensible people.

  20. Davidlduffy,

    I am not really familiar with the detailed statement of the Church–Turing–Deutsch principle, so I cannot really comment. Reading the wikipedia entry on CTD, it sort-of kind-of “smells like” I am arguing against it, but I cannot firmly confirm or deny, either way.

    Patrick,

    The continuum limit of a fundamentally discrete system can indeed be rigorously formulated, as an asymptotic series expansion of Newton’s laws, where the expansion parameter is the ratio between the volume of a “piece” of the fluid and the total volume where the fluid resides. The first term in the series is the continuum variable of the fluid (say, the density function), while the subsequent terms are corrections due to the discrete nature of the molecules of the fluid. For suitably chosen error bars in measuring the properties of the fluid, these corrections are small enough to be invisible. In such situations they can be dropped, while the remaining first term establishes the continuum limit.

    What might seem counter-intuitive is the fact that we are dealing with a double limit: the volume of the fluid “piece” is considered small enough to be infinitesimal, while at the same time big enough to contain a large number of fluid molecules. But despite intuition, these kinds of limiting procedures can be established consistently and rigorously, as described above.
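    Schematically (my notation, not from the comment: v is the volume of a fluid “piece”, V the total volume, n the molecular number density, and f any fluid observable), the double limit amounts to

    ```latex
    % An "infinitesimal yet many-molecule" piece of fluid:
    n^{-1} \;\ll\; v \;\ll\; V ,
    \qquad
    f \;=\; f_0 \;+\; f_1\,\frac{v}{V} \;+\; f_2\Bigl(\frac{v}{V}\Bigr)^{\!2} \;+\;\cdots
    ```

    where f_0 is the continuum term (e.g. the density field) and the higher-order terms are the discreteness corrections that fall below the postulated error bars.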

    Coel,

    I would assert that the phenomenon of a star is weakly emergent from stuff obeying QM and gravity […] and yet a description of a star is not an axiom of QM or GR.

    I agree it isn’t an axiom, but only because it was proved to be a theorem. That is one part of what physicists do for a living (although such phrasing is uncommon). 🙂

    I assume here that your “QM” actually stands for “SMEP with a modified neutrino sector”, since the existence of our Sun (with its observed properties) is not a theorem of just “SMEP+GR”. In the latter formal system, the existence of the Sun is a Gödel statement (i.e., strongly emergent), since it is experimentally true (the Sun does exist), while at the same time unprovable in SMEP+GR. So we either need to postulate the existence of the Sun as an additional axiom in SMEP+GR, or we need to postulate some other axioms (which modify SMEP) so that ModSMEP+GR implies the existence of the Sun as a theorem. Either way, the observed existence of the Sun (with the properties we measured) implies that we must introduce some additional axioms beyond SMEP+GR. This is the strong-emergence example from the Solar neutrino problem, rephrased in the language of formal theories.

    You want to explain the exclusion principle as resulting solely from initial conditions? Let’s take a cloud of hydrogen gas collapsing into a star under gravity. […] this proto-star is going to end up, tens of millions of years later, with the core going degenerate and the Pauli exclusion principle governing the disposition of the electrons. Are you really arguing that that is some sort of hold-over from the original “initial conditions”?

    Actually, yes. Remember that in QM observables evolve nondeterministically, while the wavefunction evolves sometimes deterministically (Schrödinger equation) and sometimes nondeterministically (collapse due to measurement). Namely, the Pauli exclusion principle restricts the initial condition for the wavefunction, not for the observables. Then the linearity of the Schrödinger equation will preserve the PEP throughout the deterministic part of the evolution (theorem), while the measurement process will collapse the wavefunction into a new initial state, which is again constrained by the PEP (axiom). Therefore the PEP is “preserved” throughout the evolution of the wavefunction, from any initial to any final moment, despite nondeterminism.

    At the level of observables, PEP is a statement of the indistinguishability of certain particles. So lack of distinguishability at the initial moment will map into lack of distinguishability at the final moment, regardless of nondeterministic evolution of observables. Lack of determinism of effective equations of motion will not evolve indistinguishable particles into distinguishable particles. If anything, it would do the opposite. 🙂

    So yes, the Pauli exclusion principle is preserved throughout nondeterministic evolution in QM, if postulated as an initial condition. And no dynamical forces are involved; otherwise they would have to be introduced as additional terms in the Schrödinger equation.
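    As a numerical sanity check of the linearity argument (a minimal sketch of my own, not anything from the article: a toy two-particle system with an exchange-symmetric Hamiltonian), antisymmetry imposed as an initial condition survives the deterministic evolution exactly:

    ```python
    # Toy check (my illustration, not Marko's formalism): an antisymmetric
    # two-particle state stays antisymmetric under any exchange-symmetric
    # linear (Schrodinger-type) evolution.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    d = 4                                    # single-particle dimension

    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H1 = (A + A.conj().T) / 2                # random Hermitian one-body term
    H = np.kron(H1, np.eye(d)) + np.kron(np.eye(d), H1)  # symmetric under swap

    P = np.zeros((d * d, d * d))             # particle-swap operator P|i,j> = |j,i>
    for i in range(d):
        for j in range(d):
            P[j * d + i, i * d + j] = 1.0

    phi1, phi2 = rng.normal(size=d), rng.normal(size=d)
    psi = np.kron(phi1, phi2) - np.kron(phi2, phi1)       # antisymmetrized state
    psi = psi / np.linalg.norm(psi)

    psi_t = expm(-1j * H * 0.7) @ psi        # deterministic evolution
    print(np.allclose(P @ psi_t, -psi_t))    # True: antisymmetry is preserved
    ```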

    But while these aspects of QM are interesting, we are getting off-topic with this. 🙂

    [Regarding the nonexistence of the additional laws governing the brain] I don’t know that for sure, but I’m putting the burden of proof on those arguing for them.

    Well, Gödel’s theorem says that there are infinitely many potential candidates for such a law. So you say that the burden of proof is on me to find one such law (out of infinitely many potential candidates), while I say that the burden of proof is on you to show that none of the laws from that infinite set is actually realized in Nature. Who would you bet on to have a better chance of success? Your job is to prove a very strong conjecture as a theorem, while my job is to find a single counterexample.

    Note also that (on top of the Dirac equation) I arguably already have one example of an independent law governing the brain: the second law of thermodynamics. A Bayesian probabilist could count this as evidence increasing my chances. 😉

    I’d argue that ideas both of parsimony and of supervenience physicalism are part of that theoretic web, and that they are indeed supported by evidence, namely the overall success of science’s web.

    Oh, but this is highly dependent on what counts as “success”, and on the choice of the pool of scientific ideas contained in the web. If I remember correctly, Massimo has already suggested in one of the previous posts that the observed proliferation of the special sciences is precisely an argument against reductionism, not in favor of it. But as a physicist, I am not qualified to make statements on that topic; it’s a job for the philosophers of science, statistical meta-analyses, and such. 🙂

    Liam,

    Thanks! 🙂

    The difference is that biologists have no doubt about the gaps in their understanding of the basic issues.

    I wish I could say the same for physics, but it would be a stretch — a lot of us display unwarranted overconfidence in understanding our respective areas of physics. And mathematicians are usually the first in line to point this out in a vocal way. 😉

    Phoffman56,

    Just two small points. First, I am aware of the difference between an axiom and an axiom scheme, but I didn’t want to complicate the article with such details; it isn’t really necessary and would only create confusion. Second, it is of course true that both the structure theory and the effective theory are incomplete in the Gödel sense. Yet despite that, reductionism can be defined as a relation between them: if you look at the definition, I do not require completeness of either theory to introduce the reduction relation.

    This is the first time I have heard supervenience physicalism described as a scientific theory. None of the definitions I can find sound particularly scientific. There is “Everything supervenes on the physical”.

    Or the SEP suggests the following for supervenience physicalism: “Physicalism is true at a possible world w iff any world which is a physical duplicate of w is a duplicate of w simpliciter.”, among other fairly similar constructs.

    Both seem to cry out for, at least, a workable definition of “physical”.

    If there is another definition used by scientists, it would seem to require a different definition of “supervene” than the SEP suggests.

    There may be different definitions of these used by scientists. But, again, I have never heard of this.

  22. Hi Marko,

    I agree it isn’t an axiom, but only because it was proved to be a theorem.

    Does your assertion then amount to the statement that there will not always be derivable theorems? (I would agree that there will not be, though, again, I don’t see a requirement for such theorems as entailed by supervenience physicalism.)

    Lack of determinism of effective equations of motion will not evolve indistinguishable particles into distinguishable particles. If anything, it would do the opposite.

    Yes, it’s the latter I would argue for, that distinguishable particles become indistinguishable through close interaction. If we don’t allow that then we have to argue that particles are obeying the PEP in a stellar core because they were indistinguishable, and thus entangled, tens of millions of years earlier when they might have been separated by vast distances.

    While that is in line with Copenhagen, that is the sort of reason why (at the risk of accusations of having a metaphysical belief!) I’d go for some brand of decoherence which doesn’t involve people doing “measurements”.

    Your job is to prove a very strong conjecture as a theorem, while my job is to find a single counterexample.

    If you did find a counter-example I’d just incorporate it into the model, and then challenge you to find another counter-example! Essentially this is parsimony, we add new features into the model when forced to by evidence. I’m not sure how else science could proceed.

    Anyhow, this is my fifth and last, so thanks for the interesting discussion.

    Hi Robin,

    This is the first time I have heard supervenience physicalism described as a scientific theory.

    Sure it is! (though scientists more usually use the vaguer term “reductionism”). Any chemist is going to assume supervenience physicalism in doing chemistry; it’s one of the many ideas that are part of the “web” of science’s world model.

    Both seem to cry out for, at least, a workable definition of “physical”.

    Good point, and we can’t really define “physical” because we don’t have any non-physical stuff to contrast it with. Similarly we can’t define “natural” because we don’t have anything “supernatural” to contrast it with. But, again, it’s really about modelling things in terms of known physical stuff, and of putting the burden of proof on anyone wanting anything else.

    Hi labnut,

    activist atheists … argue, that, …

    It’s always a good idea to quote your opponents actually making the argument you attribute to them! “Atheist activists” are well aware that it’s always possible to postulate an unobservable and undetectable god. The counter is not the argument you give, rather the counter is Russell’s Teapot (or variants involving pasta) coupled with Occam’s razor.

  23. Hi Marko,

    Again a very clear and interesting essay.

    But as PHoffman points out, there is a difference between saying a) “it is not possible to explain everything via reduction, since there are statements that cannot be reduced” and b) “everything that can be explained (by some effective theory) can be reduced to the same fundamental structure theory”. You seem to have tackled a), while b) seems to be at the center of the philosophical discussion as traditionally conceived. Note that a) and b) are not incompatible either.

    Let’s look at your definition of “reduction”. You write that any axiom of an effective theory has to be a theorem in the structure theory, and you seem to say that an initial condition that cannot be derived from a fundamental axiomatic system is yet another axiom and hence “strongly emergent”. While this may follow from your definitions, it seems to me to somewhat miss the point of the (traditional) philosophical discussion.

    According to this position, any brute fact (for instance a low-entropy state at the beginning of the universe, if it was a brute fact) would count as a strongly emergent phenomenon, even though no effective theory could ever explain it any better than a fundamental theory (and in fact you seem to be using this as an example).

    While I am certainly skeptical of any “greedy reductionism”, this seems to set the bar too high, in my view. A “fairer” challenge for reductionism would be to successfully reduce any initial condition of an effective theory to a theorem OR a corresponding initial condition in the structure theory. Since fundamental brute facts or constants of nature could hardly be called “emergent” in any effective theory, at least on the traditional understanding of the term, I do not think that their existence threatens the reductionist project. What are your thoughts on this?

  24. Hi Marko,

    Thanks for the reply.

    “First, I am aware of the difference between an axiom and an axiom scheme, but I didn’t want to complicate the article with such details, it isn’t really necessary and it would only create confusion. ”

    Well, not confusion for me, I hope! The question of finitely many axioms or not is a minor one here, so I won’t dispute much more about that, other than saying this. You say that you wish to use, in an essential way, something as technical and precise as Gödel incompleteness to claim some kind of knock-down argument against the existence of any final theory. But now, to be nice to readers, you are avoiding technicalities as straightforward as the above, by actually speaking as though the axiom set is finite. Taken together, there does seem to be a fairly severe disconnect happening.

    “Second, it is of course true that both the structure theory and the effective theory are incomplete in the Goedel sense. Yet despite that reductionism can be defined as a relation between them — if you look at the definition, I do not require completeness of either theory to introduce reduction relation.”

    This more important part of your reply misses my point, perhaps due to my writing unclearly. I quoted your dramatic assertion about final theories (the theory doesn’t exist because of Gödel) to point out that whatever you think you can conclude about final theories would surely apply equally well to any physical theory.

    To put my point another way, and to try to write more clearly: firstly, that point has nothing to do with whether or not your definition of reduction between theories depends on assuming completeness of one or the other of them. It is simply that, because of Gödel, with your definition it seems that no theory could reduce to any theory; that includes not even reducing to itself. So it is hardly surprising that you can deduce from this definition that no final theory could exist!

    Now if that is not the case, I would like to see what the definition of reduction really is. As I think Coel may have already said, there is nothing in article II here which, in the more precise terms of your attempted general definition of theories, spells out the reduction definition, whether ontological or epistemological. So we are forced back to your article I, where the definition is quite confusing, talking about “solutions”, and it is not easy to see what it means in terms of the talk at the start of article II about axioms, first-order logic, Zermelo-Fraenkel, etc. That earlier definition is the following, where, to repeat, ‘solutions’ of theories seems pretty foreign to the attempted employment of mathematical logic:

    “The effective theory is said to be reducible to the structure theory if one can prove that all solutions of the effective theory are also approximate solutions of the structure theory, in a certain consistent sense of the approximation, and given a vocabulary that translates all quantities of the effective theory into quantities of the structure theory.”

    If “solutions” of the effective theory simply means all propositions true in the real-world model of that theory, then Gödel incompleteness applies to show that your definition of reduction is vacuous, as below. If it means instead the propositions deducible from some axiom choice for the effective theory, then I think your argument is simply fallacious; incompleteness proves nothing of the kind you are claiming. And of course you oughtn’t to change horses in mid-stream, where a horse is a reduction definition. To avoid that, we need the following before this claim of non-existence for a final theory can even be discussed rationally.

    I’m attempting to get you to say precisely what you mean by reduction between theories in terms of this mathematical logic, and why you claim to be able to use incompleteness to establish the non-existence of a theory to which every theory reduces, without your argument being easily generalized to show that no theory whatsoever can reduce, in your sense, to any theory, even to itself. (We seem to agree that “theory” here means one to which incompleteness applies.)

  25. Hi Marko,

    Thanks again for a clear and thorough exposition, although it may be some time before this slow old autistic brain of mine can come to any conclusion about it.

    It would be interesting to hear, perhaps in some future article, from someone who disagrees with you who can put the opposing view as clearly and thoroughly as this, and including a similarly robust definition of a “reduction” if they disagree with yours.

    “Supervenience physicalism” does not cut it as an alternative view, because none of the available definitions even touches on the concept of reducibility. The world could consist of intrinsically irreducible elements and still meet any definition of “supervenience physicalism” that I can find.

    I don’t even know what a statement of metaphysical ontological reductionism would look like.

    But I guess the alternate view might say that undecidability is not a show-stopper for a theory of everything because undecidability is part of the nature of everything.

    I note, as an analogy, that some of the most useful electronic circuits you can build are achieved by connecting logic gates as a paradox.
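    As a toy rendering of that analogy (my sketch, not the commenter’s circuit): an inverter wired to its own input encodes the “paradox” X = NOT X, which has no stable solution; add a gate delay and it becomes a ring oscillator, i.e. a clock:

    ```python
    # An inverter feeding its own input: X = NOT X has no fixed point,
    # so with one gate delay per step the output oscillates (a clock).
    state = False
    trace = []
    for step in range(8):        # each step = one gate delay
        state = not state        # X := NOT X
        trace.append(int(state))
    print(trace)                 # [1, 0, 1, 0, 1, 0, 1, 0]
    ```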

  26. Coel,
    supervenient physicalism is a metaphysical position. It is a claim about fundamental ontology, and thus can never be a scientific theory.

    I have grown to like much of what you say on social issues. You have strong arguments concerning the rights of non-theists in the current situation.

    But when you mix metaphysics and physics, frankly I find this embarrassing. The term ‘metaphysics’ arose because Aristotle wrote on such issues ‘after’ (‘beyond’) his ‘Physics’ – it was beyond empirical demonstration, and demanded logical, psychological, and sociological discussion.

    This has never changed. The question of any metaphysics is, what does it get us? What is wanted is a complete description of reality that encompasses both science and human experience.

    I draw the term ‘fundamental ontology’ from Heidegger, who understood (his) contemporary physics and mathematics better than his critics recognize, and who had a firm grasp on the history of metaphysics as well. Believe me, you are as a child (and so am I) before Heidegger’s grasp on the issues you raise.

    But let’s get to what we can both understand.

    If ‘supervenient physicalism’ is a scientific theory, show me the mathematics. Otherwise it is a metaphysical claim dependent on an a priori assumption.

    And no, all the empirical evidence will actually mean nothing, because, in fundamental ontology, the evidence can be interpreted differently.

    Scientists may need to assume ‘supervenient physicalism’ to pursue their research. But that is not proof. Other explanations are available.

  27. Coel,
    It’s always a good idea to quote your opponents

    Try reading The Grand Design by Hawking or A Universe from Nothing by Krauss and you will discover an embarrassment of riches to support my contention. But then you know that; you have also read the books. The Grand Design has 77 references to God and A Universe from Nothing has 69 references to God.

    That is strangely obsessive for books that purport to be about physics and cosmology. If I didn’t know better I would think that Hawking and Krauss were renegade theologians.

    For a striking contrast, consider these examples. Brian Greene, in his much longer book The Hidden Reality, uses the word ‘God’ a mere three times. Turok and Steinhardt (Endless Universe) could manage only seven mentions of God. Roberto Unger and Lee Smolin, in their rather long book The Singular Universe and the Reality of Time, mention God ten times. Alex Vilenkin (Many Worlds in One), the most forthright in this group, manages fifteen mentions.

    So I stand by my contention, except that I withdraw Brian Greene from my original list. His book (The Hidden Reality) mostly sticks to speculative science.

    are well aware that it’s always possible to postulate an unobservable and undetectable god

    Any postulated God would be the creator of the Laws of Nature and would therefore be necessarily unobservable by science, just as postulated multiverses are unobservable. Throwing teapots around is a colourful diversion from this essential fact. But in any case, your teapot argument applies with even greater force to the multiverse because you would need an infinite number of teapots. An infinite number of teapots imposes on you the infinite burden of proof 🙂

  28. Miramaxime,

    “everything that can be explained (by some effective theory) can be reduced to the same fundamental structure theory”

    I don’t really understand why you think this statement has not been addressed, or otherwise I fail to understand the statement properly. Namely, given a bunch of effective theories (and assuming they do not contradict each other), one can always simply put all their axioms together and thus construct a “fundamental” theory to which each of the effective theories can be reduced. This is trivial in a sense, but always possible.

    A less trivial thing to construct would be a fundamental theory which has greater explanatory power than just the conjunction of all effective theories. That is also probably possible, but it requires more work — one needs to construct the fundamental theory using research independent from the effective theories, and afterwards prove that each effective theory reduces to the fundamental one, as I described in the article. This would be a much harder thing to do, and the resulting properties of that fundamental theory are not obvious.
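    To make the “union of axioms” construction concrete, here is a toy sketch of my own (not Marko’s actual formalism): theories are finite axiom sets, theorems are the forward-chaining closure under rules of the form premises ⊢ conclusion, and reduction means every effective axiom is a theorem of the structure theory.

    ```python
    # Toy model of "reduction" between theories (my illustration).
    def theorems(axioms, rules, max_rounds=100):
        """Deductive closure: apply (premises, conclusion) rules until fixpoint."""
        known = set(axioms)
        for _ in range(max_rounds):
            new = {c for prem, c in rules if prem <= known and c not in known}
            if not new:
                break
            known |= new
        return known

    def reduces_to(effective, structure, rules):
        return set(effective) <= theorems(structure, rules)

    thermo = {"entropy never decreases"}
    optics = {"light follows straightest paths"}
    rules = [(frozenset({"entropy never decreases"}), "heat flows hot to cold")]

    fundamental = thermo | optics            # conjunction of all effective axioms
    print(reduces_to(thermo, fundamental, rules))   # True: the trivial reduction
    print(reduces_to(optics, optics, rules))        # True: reflexivity
    print("heat flows hot to cold" in theorems(fundamental, rules))  # True
    ```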

    According to this position any brute fact […] would count as a strongly emergent phenomenon
    […]
    A more “fair” challenge for reductionism would seem to be to successfully reduce any initial condition of an effective theory to a theorem OR a corresponding initial condition in the structure theory.

    There are two types of brute facts: those that depend on initial conditions (I think in philosophy these are called “contingent”, although I’m not entirely certain that this is the correct use of terminology), and those that do not depend on initial conditions (i.e., “non-contingent”). An example of the former would be the current position of the Earth in our Solar system, while an example of the latter would be that gold is yellow (as opposed to other metals).

    The current position of the Earth formally counts as strongly emergent, but this is in a sense trivial, due to the fact that the space of initial conditions for that solution is huge. On the other hand, the color of gold is merely weakly emergent, since we can calculate it from the theory.

    Finally, note that the second law of thermodynamics is not itself an initial condition, but rather a *law* claiming that some initial conditions are forbidden. This is also strongly emergent, but in a very nontrivial sense — while we can observe other solar systems and see different configurations of planets (contingent on different initial conditions than our solar system), we *never* observe a physical system having one of the initial conditions “forbidden” by the second law of thermodynamics.

    In other words, you should always make a distinction between “facts” and “laws” in Nature — either can be strongly or weakly emergent. 🙂

    Phoffman56,

    [Regarding the precision of the exposition of Goedel’s theorem as a key argument …] there does seem to be a fairly severe disconnect happening.

    Yes, there is a disconnect — it is a tradeoff between clarity and precision. When I write a serious research article, I aim at precision at the expense of clarity. When I write a blog post, I do the opposite. The target audience is different in the two cases, so I try to adapt…

    I’m attempting to get you to really say what you mean by reduction between theories in terms of this mathematical logic

    I am still not sure I understand what you are asking, but I’ll try… 🙂

    In part I, I gave the definition of reductionism in terms of solutions of the theory. In part II, I gave the definition of reductionism by saying that all axioms of the effective theory should be theorems in the structure theory (given suitable approximation semantics). It might not be immediately obvious that these two definitions are equivalent, but this should become clear if you understand the “solution of the theory” to be the same thing as the “theorem in the axiomatic system of the theory”.

    Indeed, I failed to explicitly make this connection in the article — it appeared too obvious to mention. 🙂

    So given this, it should not be hard to convince yourself that the part I definition and part II definition are equivalent. Consequently, you can use the part II definition to figure out what reductionism means in the context of two (incomplete) formal theories. As a trivial example, every theory is reducible to itself, since all its axioms are also theorems by definition (this was mentioned as reflexivity in the article). So the concept of reductionism between incomplete theories is not vacuous in any sense.

    Hope that helps! 🙂

    Robin Herbert,

    Thanks! 🙂

    I don’t even know what a statement of metaphysical ontological reductionism would look like.

    That was actually one of the motivations for me to write this article. As a physicist, I wanted to see how much of the metaphysical idea of reductionism can actually be formulated in the language of science, and what it would look like. The definition of reductionism given in the article (and the subsequent analysis) represents just how far one can get with mapping metaphysics to physics. As it turns out, nowhere near what people would naively expect. 🙂

    Everyone,

    It was a pleasure to engage in the discussion! 🙂

    I can’t deny myself the pleasure of one comment, though it may be too late: at least on Main Street, pursuit of the Theory of Everything is the justification for super-colliders. Is their great cost then perhaps unjustified? If Marko believes this, then I can understand why one might call him anti-science. Also (this seems rather facile), isn’t the idea of a Theory of Everything somehow circular? It would have to explain itself.

  30. Hi Marko,

    In the end, maybe we just disagree in that I think you should claim “A final theory could not be a cre theory” rather than “A final theory could not exist.” I deliberately use a goofy temporary new notation below, avoiding even the words ‘axiom’ and ‘axiomatic’, just to preclude misconceptions based on differing notation.

    So, very temporarily, the notation: a theory is just any set of sentences closed under deduction. A c theory is a consistent one (this eliminates only the set of all sentences, for a given language). A cre theory is a recursively enumerable c theory (‘cre’ is really just what is called ‘axiomatic’). You seem to use ‘theory’ in the sense of cre theory, except that your axioms would generate the sentences under deduction, rather than necessarily being all such sentences.

    Everything here is first-order, as you said, and assumed ‘rich enough’ (e.g., it defines the natural numbers), so that the cre theories are the ones to which Gödel’s incompleteness theorem applies. The c theory which is just the set of all sentences true in your model (the real world, or part of it) wouldn’t be r.e.
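    As a loose analogy for “recursively enumerable” (a toy of my own, nothing like a real proof system): the theorems of a cre theory can be listed by a program, even though membership in the list cannot in general be decided by inspecting finitely many entries.

    ```python
    # Breadth-first enumeration of everything derivable from an "axiom"
    # by two "inference rules"; may run forever, which is why r.e. sets
    # need not be decidable.
    from collections import deque

    def enumerate_theorems(axiom, rules):
        seen, queue = {axiom}, deque([axiom])
        while queue:
            t = queue.popleft()
            yield t
            for rule in rules:
                u = rule(t)
                if u not in seen:
                    seen.add(u)
                    queue.append(u)

    rules = [lambda n: n + 7, lambda n: 2 * n]   # stand-ins for inference rules
    gen = enumerate_theorems(3, rules)
    print([next(gen) for _ in range(8)])         # [3, 10, 6, 17, 20, 13, 12, 24]
    ```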

    Now, in the 1920s, people like Einstein and Hilbert might have considered the possibility of a final theory in your form which was cre, if “re” had been explained to them. But “re” really came from Gödel and mainly Turing in the 1930s. And by the mid-1930s, surely no physicist who really thought about it and had digested Gödel incompleteness would be considering it. I don’t talk to string theorists, and perhaps some of them still naively think of their hoped-for final theory as being in your cre-theory form. You never even mentioned “recursively enumerable”, and in any case I suppose few people at the LHC, Perimeter, etc. think much along these lines, the pattern of first-order logic, Z-F, etc. that you give being not too likely to be sufficient, when one thinks about the state of quantum field theory, for example.

    Anyway, since the 1930s, I would imagine that most serious ‘final theory people’ would expect such a theory to be the c (but definitely not cre) theory consisting of all sentences true in the real-world model for some language, where that language is still far from specified, and where also some basic true sentences are given (‘their fundamental equation of strings’, say). They would be well aware that neither these nor anything better produced later will generate all true sentences; i.e., they are aware of Gödel’s theorem. And reductiveness in this case is the never-known-for-certain fact that any truth in any scientific theory would be deducible from truths of their final theory. In particular, they would try to prove that as much as we now know in physics could be deduced from the basic true sentences which they gave when breaking the news to colleagues that they really thought their theory was final.

    So we probably do not differ much. I realize of course that cre (i.e., axiomatic) theories are the main interest in much of mathematical logic. But I just do not think it is generally held that the claim that Gödel incompleteness eliminates the possibility of a final theory has much veracity.
