APA 2014-4: Emergence and complex systems

[Image: Gosper’s glider gun, a pattern from Conway’s Game of Life]

by Massimo Pigliucci

This session of the Eastern Division meetings of the American Philosophical Association (part of my ongoing series of commentaries on the meeting) was chaired by Emily Parke (University of Pennsylvania), and the speakers were Mark Bedau (Reed College) and Paul Humphreys (University of Virginia).

Bedau’s talk was far easier to follow, so I apologize if I have not been able to do justice to Humphreys, though in the end I think I got where he was going…

Briefly, Bedau wants to reframe the debate about emergence. It is usually presented as an attempt to understand the true nature of emergence, which implies that if one’s view of it is right the other one has to be wrong, which leads to acrimony and generally unproductive discussions. Bedau favors instead a type of pragmatic pluralism about emergence.

He proposed that there are two hallmarks of emergence: the whole depends on its parts, but it is also somehow autonomous from its parts — the problem is that the two at the very least seem inconsistent with each other, so that we need a way to reconcile or resolve such apparent inconsistency.

For Bedau emergence can be weak, nominal, or strong (and then there is a fourth type to be addressed later), depending on how the dependence / autonomy relationship between wholes and parts is supposed to work. A pluralist project has to ask whether the dependence and autonomy hallmarks in any particular concept of emergence are in fact inconsistent: if they are, that concept is out. But, depending on how the two are cashed out, there are a number of ways of making them not mutually contradictory. A second pluralist test for any given concept of emergence has to do with whether there are real examples of the types of emergence being examined (if not, then the concept is provisionally ruled out).

Bedau then introduced the concept of bottom-up whole: the idea that the whole is nothing but the organized combination of its parts. An example is provided by enclosed vesicles that spontaneously form when certain lipids are dropped in an aqueous environment, well known from biophysics: the whole looks a lot like a cell membrane and displays membrane-type “holistic” properties, but is also made of a countable and known number of individual components, assembled in a particular fashion. Another example is cellular automata: we write the program, so we know exactly how it works, what rules determine the behavior of the system, and how many pixels form each cellular automaton evolving on the screen. Nonetheless, the automata also have ensemble-level properties, such as movement (the automata “move,” not the individual pixels).
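
To make the ensemble-level “movement” concrete, here is a minimal sketch of a glider in Conway’s Game of Life (my illustration, not Bedau’s; written in Python with NumPy, with the grid size and step counts chosen arbitrarily). The glider’s displacement is a property of the five-cell pattern as a whole; the individual cells only switch on and off in place.

```python
import numpy as np

def step(grid):
    # Sum the eight neighbors of every cell (with toroidal wrap-around).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Conway's rules: a cell is alive next step if it has exactly three live
    # neighbors, or if it is alive now and has exactly two.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((20, 20), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # the standard glider
    grid[y, x] = 1

for t in (0, 4, 8, 12):
    ys, xs = np.nonzero(grid)
    print(f"step {t}: glider centroid at ({ys.mean():.1f}, {xs.mean():.1f})")
    for _ in range(4):
        grid = step(grid)
```

Every four steps the same shape re-forms one cell down and one cell to the right: the “speed” and “direction” mentioned below are defined for the pattern, not for any single cell.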

Let me now turn to Bedau’s classification of the three basic types of emergence, as he sees them:

Strong emergence:

whole-part dependence > properties of wholes supervene on properties of parts

autonomy of the whole > properties of the whole are undefinable by / irreducible to properties of the parts

examples > conscious minds?

Bedau thinks this is a coherent concept (I don’t, as I see inconsistency between the ways in which whole-part dependence and autonomy of the whole are defined), but he claims that there are no actually well understood real examples of it. The conscious mind, the only example usually brought up in this context, is very much in question, and to accept it without further analysis would come close to begging the question.

Nominal emergence:

whole-part dependence > bottom-up whole, as defined above

autonomy of the whole > the concept applies only to wholes, some properties of wholes are undefined for parts (e.g., “gliders” in the game of life move around, have speed and direction, but individual cells don’t, as mentioned above)

examples > cellular automata, lipid layers, lots of others

Weak emergence:

whole-part dependence > bottom-up whole, as above

autonomy of the whole > patterns of behavior of whole caused by incompressible causal web among its parts (i.e., brute-force calculation is required to go from behavior of parts to behavior of whole)

examples > some types of cellular automata, those for which one cannot tell arbitrarily far into the future what is going to happen by just examining the meaning of the rule (i.e., not trivial automata); lots of others
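
Wolfram’s elementary cellular automaton Rule 110 is a standard illustration of the non-trivial case (my example, not one from the talk): it has been proved Turing complete, so there is in general no shortcut for knowing its state far in the future other than running it step by step. A minimal sketch:

```python
RULE = 110  # the bits of 110 (binary 01101110) encode the output for each neighborhood

def rule110_step(cells):
    # Each cell's next state is looked up from its (left, self, right) triple,
    # read off the bits of the rule number (periodic boundary conditions).
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1] + [0] * 40  # start from a single live cell
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = rule110_step(cells)
```

The rule fits in one line, yet the long-run behavior of the whole row is, as far as anyone knows, incompressible in Bedau’s sense: brute-force simulation is the only general route from the behavior of the parts to the behavior of the whole.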

At this point one of the attendees commented that it looks like what the author calls bottom-up whole is actually indistinguishable from supervenience [1]. I think she was right. Also, Bedau used essentially the same examples to illustrate nominal and weak emergence: traffic jams, vesicles, protocells, origin of life, cellular automata, and agent-based models, thus introducing a palpable degree of confusion between the two concepts.

Now to the fourth kind of emergence, which seems to be original with the author: he called it continual creative emergence. Examples include complex life forms, mind, culture, and technology. It is characterized by autonomous “door-opening” processes (e.g., a biological or technological innovation) whereby new and more complex kinds of wholes emerge over time. It is also characterized by iterations of higher-order nominal and weak emergence.

I think I know what Bedau means here. The phenomena he is referring to are often described as “major transitions” in evolution [2], and include for instance the origin of cells, of multicellularity, and of language, among others. They are best understood — biologically — as transitions made possible by changes in multi-level selection dynamics, as postulated by Samir Okasha [3]. I do agree that they also represent interesting examples of emergence, though whether they are best characterized the way Bedau did remains to be seen.

Essentially, Bedau was attempting to make room for less acrimonious and more constructive discussions of the various concepts of emergence on the table, a laudable goal to be sure, though of course there may be other ways of being pragmatic, as he put it, or even ecumenical, about it.

Humphreys’ talk was on what he termed “transmutational emergence,” and much of what follows comes straight from the author’s handout at the talk, which was fairly dense and difficult to follow (no slides!). The author began by pointing out that the philosophical literature on emergence has recently focused on synchronic emergence and compositional ontology, maintaining that a lot of resistance to emergence depends on a number of commitments and inclinations: endorsement of fundamental physicalism, causal closure of the physical domain, the deployment of supervenience, and a sense that emergence is “scientifically superfluous.”

Humphreys suggested that some of the open issues may be ameliorated by paying attention to diachronic emergence. He also proposed that there are several kinds of emergence, and that paying attention to mental phenomena (a very common approach) is a bad move precisely because it is so controversial and one risks begging the question in reaching one’s conclusions. On this, as we have seen, he agrees with Bedau.

It is a common assumption in discussions of emergence that the constituent parts of a whole are synchronically and diachronically stable, meaning that if taken apart they would retain their identity, and that such identity does not change over time. The latter, according to the author, is not always the case.

Humphreys proposed a number of criteria to identify emergence: the emergent entity has to be novel, it is autonomous from its origins, it results in some way from the dynamics of its original domain (after all, it “emerges” from something!), and some holistic feature is present.

The first example of diachronic emergence presented by Humphreys is admittedly artificial: he called it checkers world. Consider a universe that looks like a checkers board and imagine that the rules of the game are actually laws regulating interactions between white and black “particles” in that universe. The individual particles are immutable, in the sense that their properties remain the same regardless of the state of the system (though any given particle may be able to exercise a property only in certain circumstances, such as annihilating a particle of a different color in its proximity, etc.).

Now imagine that — unlike the case of actual checkers — the rules of checkers world allow for the evolution over time of new kinds of particles (say red and blue), and that the new particles follow different rules of behavior. The idea is that the properties of the later universe cannot be predicted from the laws regulating the initial black/white universe, unless one also knows the additional law(s) that made the transition to the new universe possible. (Note that the original laws are still active, but after a while there are no longer any black and white particles to instantiate them.)
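
A toy rendering of the idea might look like the following (my construction, loosely based on the talk’s description; all particle names and rules are invented). The base laws are complete about the B and W particles, yet the later universe is opaque unless one also knows the meta-law:

```python
# Base laws: 'B' particles drift right, 'W' particles drift left on a line.
# Meta-law (switched on later, and not derivable from the base laws): a B
# directly to the left of a W transmutes the pair into new kinds of particles,
# R and U, which obey a different law (here: they are inert). A loose sketch
# only; the starting configuration is chosen so simultaneous moves never clash.

def step(world, meta_law_active):
    new = ["."] * len(world)
    for i, p in enumerate(world):
        if p == "B":    # drift right unless blocked
            new[i + 1 if i + 1 < len(world) and world[i + 1] == "." else i] = "B"
        elif p == "W":  # drift left unless blocked
            new[i - 1 if i - 1 >= 0 and world[i - 1] == "." else i] = "W"
        elif p != ".":  # R and U follow their own (trivial) law: they stay put
            new[i] = p
    if meta_law_active:
        for i in range(len(new) - 1):
            if new[i] == "B" and new[i + 1] == "W":
                new[i], new[i + 1] = "R", "U"
    return new

world = list("B...B....W...W")
for t in range(12):
    print(f"t={t:2d}  {''.join(world)}")
    world = step(world, meta_law_active=(t >= 6))
```

Run with the base laws alone, the simulation never shows an R or a U; watching B and W behave for any amount of time before the meta-law kicks in reveals nothing about the constitution of the later universe.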

The second example was drawn from real science, but the author labelled it as tentative: the Standard Model of physics includes a certain number of particles (six quarks and their anti-particles; six leptons and their anti-particles; and four force-carrying particles; plus the Higgs boson). Leptons and quarks are non-composite, they are really fundamental. Interestingly, however, they can decay, i.e. they diachronically transform into other particles (e.g., a charm quark transforms into a strange quark plus a W boson, and the latter then transforms into an up quark and a down antiquark).

Humphreys suggested that there is a parallel of sorts with checkers world, with fundamental entities that do not synchronically supervene on anything, and yet transform themselves into other fundamental entities over time. Also note that quarks are not found in isolation, which means that their properties cannot actually be maintained when the entities are considered in isolation (because they are never found in such a state).

His third example was a classic one in discussions of emergent behavior: mobs. A mob is a loosely structured group of humans displaying socially disruptive behavior. Mobs have distinct group-level properties, such as levels of violence, and the discussion usually centers on whether these properties are or are not reducible to the properties of individuals, with some authors suggesting that the group-level behavior might best be described by (irreducible) sociological laws that are distinct from the (psychological) laws describing the behavior of individuals.

But here Humphreys proposed that a better way of looking at mobs is that the properties of the group (and of the individuals interacting within the mob) change over time during the evolution of the system. Rather than invoking sociological holism, one can focus on the temporal psychological transformations of the individuals, who experience diminishing levels of rationality and increasing levels of violence, while at the same time of course maintaining physical individuality.

Interestingly, the case is different from, say, that of flocks of birds, or crowds of humans. In these latter instances, the psychological characteristics of the individuals do not change over time (well, at least for humans; we don’t really have much access to the psychology of birds).
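
Humphreys’s reading lends itself naturally to an agent-based sketch (mine, not his; all parameters are arbitrary): track a psychological variable of each individual as it is transformed by interactions, and read the mob’s group-level property off the ensemble.

```python
import random

random.seed(42)

class Person:
    def __init__(self):
        # A crude proxy for the individual's psychological state:
        # 0 = calm and rational, 1 = maximally aroused/violent.
        self.arousal = random.uniform(0.0, 0.3)

    def interact(self, other):
        # Diachronic transformation: each encounter pulls both agents toward
        # the more aroused of the pair, plus a small overall excitation.
        m = max(self.arousal, other.arousal)
        self.arousal = min(1.0, 0.8 * self.arousal + 0.2 * m + 0.02)
        other.arousal = min(1.0, 0.8 * other.arousal + 0.2 * m + 0.02)

crowd = [Person() for _ in range(100)]
for t in range(31):
    if t % 10 == 0:
        mean = sum(p.arousal for p in crowd) / len(crowd)
        print(f"t={t:2d}  mean arousal = {mean:.2f}")
    for p in crowd:
        p.interact(random.choice(crowd))
```

The group-level “violence” here is nothing over and above the evolving states of the individuals, which is exactly the alternative to sociological holism that Humphreys proposed.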

In conclusion, according to Humphreys, it is useful to distinguish dynamic from simple fundamentality: the first one requires that “a fundamental object, a property, or a fact must be present during the initial state of a system”; the second one requires that “an object, property, or fact not have components,” and the two are logically independent. Since the laws operating at the initial state of a system (checkers world, the universe, the crowd about to turn into a mob) may not fix the laws at all times, there is a distinction between causal closure and nomological closure.

I wish there had been time to explore this idea of a (possible) distinction between causal and nomological closure, which obviously connects with our discussion of causality in physics vs the special sciences from a few days ago [4], but that’s the best I can do to summarize the session, I’m afraid.

_____

Massimo Pigliucci is a biologist and philosopher at the City University of New York. His main interests are in the philosophy of science and pseudoscience. He is the editor-in-chief of Scientia Salon, and his latest book (co-edited with Maarten Boudry) is Philosophy of Pseudoscience: Reconsidering the Demarcation Problem (University of Chicago Press).

[1] Supervenience, Wiki entry.

[2] The Major Transitions in Evolution, by J. Maynard Smith and E. Szathmáry, Oxford University Press, 1998.

[3] Evolution and the Levels of Selection, by S. Okasha, Oxford University Press, 2007.

[4] APA 2014-2: Against causal reductionism, by M. Pigliucci, Scientia Salon, 31 December 2014.

48 thoughts on “APA 2014-4: Emergence and complex systems”

  1. The relationship between vesicles and their component surfactant molecules seems to me similar to the relationship between those molecules and their component atoms. And chemists do spend some time discussing, at least over coffee, in what sense chemistry is reducible to physics and whether we should regard such properties as polarity as emergent.


  2. Hi Massimo,

    Consider a universe that looks like a checkers board and imagine that the rules of the game are actually laws regulating interactions between white and black “particles” in that universe.

    We’re told that the “rules of the game” regulate interactions between black and white particles. Presumably these laws are complete about black and white particles and indeed about the game (in that there are no properties of the game that are not encapsulated in the laws). OK so far.

    Now imagine that […] the rules of checkers world allow for the evolution over time of new kinds of particles (say red and blue), and that the new particles follow different rules of behavior.

    Thus it is the laws-of-B&W that “allow for” the evolution of red and blue particles that “follow different rules”. OK, fine.

    The idea is that the properties of the later universe cannot be predicted from the laws regulating the initial black/white universe, unless one also knows the additional law(s) that made the transition to the new universe possible.

    But, if it were the B&W laws that “allow for” the new behaviour, then surely we can “predict” the later behaviour, even if only by a suck-it-and-see, brute-force simulation. If we programmed all the B&W laws into a perfect simulation then we could watch and say “oh look” as the simulation started manifesting all of the R&B behaviour. In that sense the properties of the late universe can indeed be predicted from the initial B&W laws.

    We would thus not need “additional law(s) that made the transition possible” since this would already be “allowed for” in the B&W laws. This would only not be the case if the initial laws of the B&W world were incomplete or imperfect.

    Now, if the laws of the initial B&W world were indeed incomplete, then obviously we can’t then predict the R&B behaviour. But that is surely a trite and uninteresting point that is not relevant to “emergence”.

    At least, that’s how it seems to me from my stance of supervenience physicalism (which physicists usually term “reductionism”).

    To make the point more generally, yes, one needs to allow for the dynamic properties of the low-level entities. But, if one has a complete understanding of the low-level entities then that must include their dynamic behaviour. And, if one doesn’t have sufficient understanding of the low-level entities, then it’s obvious that one will struggle to predict emergent phenomena when the low-level entities start interacting.


  3. Emergence is an awfully complex subject to deal with in 500 words.
    It seems though that the first requirement would be to completely wash out all concepts which are emergent and that would have to include many of the usual suspects, like whole/part, top down/bottom up, nodes/networks, cellular automata, etc.
    If I were to start at the beginning, it would be with the void and then see what has to be added. For one thing, I would argue, against current physics theory, that the void is synonymous with space and that three dimensions are simply a mapping device, like longitude, latitude and altitude, so there is no elemental frame. Without any physical properties to define it, space would have two non-physical characteristics, in that it would be a state of equilibrium, as well as infinite, since neither of these has form to need explanation. I think General Relativity admits to the equilibrium, by using the speed of light in a vacuum as the constant. We could conduct a conceptually simple, though physically difficult, experiment to measure the opposite effect: place clocks in various situations in space until we find the one which runs the fastest; it would be closest to that state of equilibrium.
    So the next concept/property we would have to admit is some form of energy, as a fluctuation of this equilibrium. This would then allow positive and negative, and as such a wave action over the void, and since the void is infinite, with no common or universal frame, there could be multiple waves. This then produces the top-down forms of amplitude and frequency, which then, as I’ve argued elsewhere, give the effects of temperature and time.
    Now in the interactions of these fluctuations, there are the forces of attraction/balance and repulsion/conflict, which would build larger peaks and troughs of this energy. I suspect that at this level the node/network dichotomy will start to emerge, with whole/part as a function of that. Then let the feedback loops build up and break down forms and structures for a very long time and you get these incredible spikes of complexity, in which states and entities even loop back to wonder how this all came to be.
    I would also have to say this doesn’t fully explain the state of consciousness, though the top down forms it manifests, both thoughts and beings, are explicable. So it might have to be an additional ingredient.


  4. Hi Coel,

    But, if it were the B&W laws that “allow for” the new behaviour, then surely we can “predict” the later behaviour, even if only by a suck-it-and-see, brute-force simulation. If we programmed all the B&W laws into a perfect simulation then we could watch and say “oh look” as the simulation started manifesting all of the B&R behaviour. In that sense the properties of the late universe can indeed be predicted from the initial B&W laws.

    To put this into context, it is like saying that you could predict quantum physics using Wolfram’s Rule 110.

    Rule 110 is something you could teach a child to implement. Give it your brute-force simulation, throw as many initial conditions at it as you can, and eventually it will ‘predict’ quantum physics.

    But you couldn’t just watch and say ‘oh look, quantum physics’. All you would see is a pattern of black and white squares, unless you understood quantum physics.


  5. Hi Massimo, sadly I missed the entry on causal reductionism, but here I make it in time for emergence! This is ironic given that I just got done retiring my last site and pseudonym (Brandholm) to go with my real name (dbholmes) and new site… emerging mind! Actually I wanted ‘evolving mind’ but it was taken and I thought what the heck, emergence is just as good as evolving. Was I wrong?

    No. Or at least I don’t think so.

    It sort of felt like the speakers you described were over-thinking, or perhaps over-defining what I always took as a rather simple concept.

    Emergent properties to me refers to systems whose behavior can be better (more simply) described:

    1) at the higher level of organization (holistically), and

    2) without reference to the parts which make it up or the mechanics defining those parts.

    Maybe it was sloppy language but I think Bedau is mistaken to consider the ‘whole’ as autonomous from or not reducible to the properties of the ‘parts’. Just because the ‘whole’ may be operating in reaction to totally different features not seen at the ‘parts’ level (hence emergent), that does not mean that they act contrary to everything at the lower level.

    The problem is that it becomes exhaustive trying to describe actions of the ‘whole’ at the level of the ‘parts’ and (more importantly) understanding the significance of the higher level features being acted upon. That last point means prediction basically goes out the window.

    This may be better understood through example.

    One can presumably model at the cellular, chemical, and brute physical level a body processing light from paper covered with ink marks, changes in its neurochemistry, movement of the body, all leading in a necessary chain of physical events to the physics of a small chunk of lead hurtling into and disrupting another body. That in retrospect can and must (in a deterministic universe) be describable.

    However, at the higher level it will be much faster and easier to say that Joe was enraged on finding a love letter from his wife to his best friend and so shot her.

    And arguably, unless one is a Maxwellian Demon, there would be no way of knowing exactly what would happen next at the level of the parts.

    At the emergent level of minds prediction is straightforward. Joe is in big trouble and the police will be looking for him. Perhaps he’ll run down south to Mexico.

    The emergent property is something operating on a new level, not free from its parts, but with additional operable meaning for the entities at that level.

    And I don’t mean just consciousness. A cell reacting to an energy source, or apoptotic signals, can be treated at that cellular level of meaning, or it can be reduced to pure chemistry (ignoring biology entirely) that make the higher level events possible.

    Humphreys seems more like he’s discussing ‘change’ than any emergent property, though I found the mob example interesting.


  6. Here is an example of independence between high and low level processes.

    We have a set of rules on black and white squares, and upon those rules we implement the Sieve of Eratosthenes algorithm.

    Let’s say I implement a register machine on the black and white squares rule and then write a Basic interpreter on that machine and write the Sieve of Eratosthenes in Basic.

    Someone could completely understand the algorithm without knowing anything at all about the black and white squares rule (since it could be implemented on any substrate that was a universal computer, the fact that it is presently implemented on the black and white squares rule does not add to our understanding of the algorithm).

    Similarly we can completely understand the black and white squares rule without knowing anything about the Sieve of Eratosthenes. That is because once we have understood that the black and white squares rule is a universal computer, we know that it can do any computation, and therefore details of specific computations that it can do add nothing to our knowledge of it (only to our knowledge of computation in general).
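
    For concreteness, here is the algorithm in question (a standard Sieve of Eratosthenes; my sketch, in Python rather than Basic). Note that nothing in it refers to the substrate executing it, which is the point about conceptual independence:

    ```python
    def eratosthenes(limit):
        # Mark every number as potentially prime, then cross out multiples.
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for n in range(2, int(limit ** 0.5) + 1):
            if is_prime[n]:
                # Cross out every multiple of n, starting from n*n.
                for m in range(n * n, limit + 1, n):
                    is_prime[m] = False
        return [n for n, p in enumerate(is_prime) if p]

    print(eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    ```

    Whether this runs on silicon, on a register machine, or on a universal cellular automaton makes no difference to the algorithm’s description.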

    So I would suggest that high level and low level processes are independent when:

    1. You can completely understand the high level process without knowing anything about the low level process
    2. You can completely understand the low level process without knowing about the high level process.

    This also makes an important distinction about dependence on low level processes. This particular instance of the algorithm is completely dependent on the black and white squares rule, but the principle by which it operates is independent of the black and white squares rule.

    So complete physical dependence upon a low level process does not imply conceptual dependence on that process.


  7. Great Reductio. But now I must ask: Does quantum physics demonstrate emergent behavior, or was it embedded all along? Is the motion of the golf ball after being hit by the club emergent? Or a function of the known behaviors of forces in the known universe?
    If both are emergent, the label means very little. But neither can be emergent, because the clear predictable elements have been there all along (even if we have to work backwards to see it).
    So now we enter the philosophers’ realm where things are imbued with a life and truth of their own, and something can be ’emergent’ by being both ‘from within’ and ‘not from within’.


  8. Something I’ve read about recently is the connection Jeremy England is making between emergence and thermodynamics [1,2], based on a principle that matter will “gradually restructure itself in order to dissipate increasingly more energy.” Maybe it will turn out to be a useful idea.

    [1] http://www.yalescientific.org/2014/07/origins-of-life-a-means-to-a-thermodynamically-favorable-end/
    [2] http://www.salon.com/2015/01/03god_is_on_the_ropes_the_brilliant_new_science_that_has_creationists_and_the_christian_right_terrified/


  9. Leptons and quarks are non-composite, they are really fundamental.

    This seems to depend on what “fundamental” could mean in this context. Leptons have properties like electric charge, spin and mass. What could these things possibly mean except in the context of the laws of physics of which they are a part? These things are only descriptions of how they interact with each other and other entities. So they are not fundamental in that they depend upon there being these laws whereby they interact.

    It seems to me that whenever you have this “transmutational emergence” all you are doing is describing different behaviours of something more fundamental.

    In the case of checkers any individual checker is not fundamental, the rules and something capable of implementing the rules are more fundamental.

    A checker piece, taken apart from the game of checkers, loses its identity. It is just an object. I could play a game of checkers with, say, two types of pasta, spiral and shell. It would still be checkers, but an individual piece of spiral pasta does not have the properties of a checker piece. What gives the individual checker particle its properties is the rules of the game, not anything intrinsic.

    I would suggest that leptons and quarks are somewhat analogous to this.


  10. I have the following properties:

    1. Being a citizen of the United States and a resident of the state of Missouri.
    2. Being a professor at Missouri State University.
    3. Belonging to the congregation, Temple Israel.
    4. Being Jewish.
    5. Liking Marvel Comics better than DC comics.
    6. Liking Aristotle better than Plato.
    7. Owning property in both Missouri and New York.
    8. Being married for fifteen years.

    Are these “emergent properties”? And if so, what is their relationship to the “underlying” properties of strings, quanta, or whatever? (Aside from Coel’s complete “supervenience reductionism” view, which is a non-starter.)


  11. You may also like the work of Arto Annila

    http://www.helsinki.fi/~aannila/arto/

    From natprocess.pdf:

    “Emergence can be understood as a natural process when entities of nature are described as actions that all are composed of some integral number of quanta. Then, the central connection between the symmetry of action and the qualities of a system, given in terms of conserved quantities and motional modes, can be analyzed mathematically to conclude that novel characteristics will emerge due to quanta that are either acquired or lost at a dissipative step of evolution. Conversely, no new qualities can appear in an isolated system or in a system that has attained a thermodynamic stationary state in its respective surroundings. The evolving system demands a holistic description whereas the stationary system suffices with a reductionist account.”


  12. “Now imagine that … the evolution over time of new kinds of particles (say red and blue), and that the new particles follow different rules of behavior.”

    This ‘imagination’ gives rise to two emergences: 1) the immutable stones are mutated, 2) the initial laws are also mutated. Is this ‘imagination’ a reality of Nature? If not, it can only be a good novel, not a scientific or philosophical study.

    Massimo Pigliucci: “… to explore this idea of a (possible) distinction between causal and nomological closure, which obviously connects with our discussion of causality in physics vs the special sciences from a few days ago.”

    Amen!
    In the last discussion, there was seemingly one confusion remaining: the ‘causal event’ vs. the ‘causal law’.
    Every causal event is definitely the result of a ‘cause’. Yet, under a ‘causal law’, a ‘cause’ does not always produce an effect.

    {A big house is burnt down by a ‘small’ fire}, but a ‘small’ fire is not always able to burn down a big house. {A beautiful girl walking by causes a car crash}, but beautiful girls walking by do not always cause car crashes.

    In a causal ‘event’, a given cause is only the ‘necessary’ condition, not the sufficient condition (which includes the boundary conditions).

    The ‘emergence’ of Humphreys and Bedau is not too different from the ‘causal’ discussion, and it points out two important features of emergence: 1) emergence is synchronically and diachronically stable, with some holistic feature; 2) it is autonomous from its origins. But they did not explicitly give the detailed ‘mechanism’ and the ‘source’ (the fundamental) of the emergent process.

    All emergence consists of two parts.
    One, the emergence ‘source’, the fundamental.
    Two, the emergence ‘process’, the mechanism.

    All emergence processes consist of three steps.
    Step one: begin with a ‘formal’ system A. [Note: every chaotic system can always be formalized.]
    Step two: system A evolves via a Gödel process (which goes on ad infinitum).
    Step three: this ad infinitum is reined in by a ‘renormalization’ process (making the emergence a finitude).

    However, this emergence process is not the ‘source’ of the emergence. What is the source of the beginning ‘formal system’? Of course, for many emergent systems, their ‘sources’ are themselves emergences. Then, is there a ‘first’ emergence? These questions can be answered in two parts.

    Part one: the ‘source’ of an emergence is always reached from the ‘first emergence’ via ‘similarity transformations’.

    Part two: the ‘last emergence’ is the ‘mutual immanence’ of its ‘source’, via an ‘indivisible principle’. While the two are obviously different, they are indivisible {as in, e.g., quark confinement}.

    Obviously, this cannot be discussed in detail in this short comment. But it is discussed in detail in the book “Linguistics Manifesto” (ISBN 978-3-8383-9722-1), which is available at some university libraries (such as Cornell, Columbia, Harvard, etc.; see http://www.worldcat.org/title/linguistics-manifesto-universal-language-the-super-unified-linguistic-theory/oclc/688487196&referer=brief_results ).


  13. Yes, they are emergent properties. Their relationship to strings, quanta etc is not straightforward. The properties you list are high level descriptions of patterns that can be discerned in a world which is composed of strings, quanta or whatever at the finest grain scales. It’s no different from the relationship between gliders and their properties (velocity, size, periodicity) and the basic rules and cells of Conway’s Game of Life. It’s not so hard to understand in this latter simple case, so it escapes me what the problem is in applying the same ideas to the real world. This stuff does not seem to me to be that difficult to grasp and smells somewhat of a manufactured problem.

    This is supervenience reductionism, I think, and I don’t agree at all that it is a non-starter. If you think it is then I imagine you don’t interpret Coel correctly.


  14. Hi Robin,

    I don’t think you understand Coel’s point, which is itself, I think, a misunderstanding of the original thought experiment, which was intended to illustrate a case where some rules of the universe are unknown because they are not instantiated anywhere.

    Coel’s interpretation, I believe, is that if the rules of the B&W universe truly allow for blue and red entities to emerge, then those entities will be implicit in the original rules. Coel is therefore assuming that we have all the rules of the universe, whether or not they are instantiated. Sufficiently detailed study of the rules will reveal red and blue entities (even if it is infeasible as a practical matter to study it in sufficient detail). I agree with Coel’s point regarding this interpretation.

    Your answer regarding rule 110 may be apposite, but you didn’t spell out why this is. The crucial point about rule 110 is that it is Turing complete, so it can be used to compute any function, including a simulation of the laws of quantum physics. Since quantum physics can therefore supervene on a rule 110 substrate, your argument would be that Coel’s position would seem to imply that quantum physics can be derived from rule 110. I agree that this seems absurd on the face of it. This is because deriving quantum physics from rule 110 would essentially mean enumerating all possible computable functions, and the laws of quantum physics will be in there somewhere. This kind of works in the abstract, but stands as an example of a study which is infeasible to carry out in sufficient detail (to put it mildly).

    Taking it back to the black and white, blue and red analogy, it seems to me the reasonable conclusion is that if the blue and red world is an inevitable evolution of the black and white world, then this should indeed be possible to infer from a complete description of the laws of the black and white world. If, on the other hand, it is a very unlikely outcome or one of a near infinite series of possible outcomes, then it is not so easy to infer it from the laws of the black and white world, though we should instead be able to tell something about the set of possible outcomes (which would include it), like the set of Turing computable functions for rule 110.

    In answer to the original thought experiment, if our description of the rules of the black and white world is incomplete, being silent on the latent rules that will bring blue and red entities into existence, then of course we cannot see that they will arise, but only because we have incomplete knowledge of the rules.


  15. Coel,

    “Presumably these laws are complete about black and white particles and indeed about the game”

    Not necessarily, otherwise checkers world would become a trivial example. I think the point is that there are higher level laws, which regulate the change over time, which are not part of the basic rules of the game, those that regulate the behavior of the black/white pieces.

    “then surely we can “predict” the later behaviour, even if only by a suck-it-and-see, brute-force simulation”

    As I said above, not necessarily. I find it interesting that every time the topic of emergence comes up some people find it necessary to immediately focus on a defense of nomological reductionism, which as far as I can tell is a reasonable *metaphysical* position. How about all the interesting issues that emergence poses from an epistemic perspective, or even more fundamentally, as a window on how the world works by interlacing different hierarchical levels of interaction between objects?

    “if the laws of the initial B&W world were indeed incomplete, then obviously we can’t then predict the R&B behaviour. But that is surely a trite and uninteresting point that is not relevant to “emergence””

    Why would that be “trite” or “uninteresting”? If we could confirm this sort of view empirically it would amount to a spectacular discovery that would upend the entire way we look at science and the natural world. Far from being either trite or uninteresting. Incidentally, I take it that Lee Smolin, in his just-released book, is proposing something like a diachronic view of the laws of nature. Speculative, yes. Trite or uninteresting? No.

    dbholmes,

    “It sort of felt like the speakers you described were over-thinking, or perhaps over-defining what I always took as a rather simple concept”

    Perhaps, that’s a professional hazard for philosophers… However, those that you describe are one kind of emergent system; there are others, so perhaps the speakers weren’t overthinking after all.

    “I think Bedau is mistaken to consider the ‘whole’ as autonomous from or not reducible to the properties of the ‘parts’. Just because the ‘whole’ may be operating in reaction to totally different features not seen at the ‘parts’ level (hence emergent), that does not mean that they act contrary to everything at the lower level.”

    First, that was one of the types of emergence described by Bedau. Second, and more crucially, there is no contradiction between the two levels, only independence, they are not the same thing.

    “at the higher level it will be much faster and easier to say that Joe was enraged on finding a love letter from his wife to his best friend and so shot her.”

    Right, epistemically. But the further issue is whether there is something different ontologically between the two levels of description. As far as I’m concerned that’s an open question about which one best be agnostic.

    “Humphrey seems more like he’s discussing ‘change’ than any emerging property”

    Not exactly. When the rules and make up of checkers world change because of meta-laws this is more than just change over time, although of course it is that as well.

    DM,

    “This stuff does not seem to me to be that difficult to grasp and smells somewhat of a manufactured problem.”

    Only because you assume a priori that nomological reductionism must be true, so everything else is either understandable that way or it becomes a non-problem.

    “if our description of the rules of the black and white world is incomplete, being silent on the latent rules that will bring blue and red entities into existence, then of course we cannot see that they will arise, but only because we have incomplete knowledge of the rules.”

    I’m sorry but that truly is, to use Coel’s phrase, trite and uninteresting. Of course if we have perfect knowledge of a system, including the laws regulating its emergent properties, then we have perfect knowledge of that system (predictability still doesn’t follow, unless one also has perfect knowledge of the initial conditions and perfectly accurate computational methods).

    The interesting question here is whether there are higher-level laws, or whatever one wants to call them, that are compatible with (in the sense of not contradicting) lower-level ones, and that are nevertheless not derivable from the latter. I know that most physicists think that’s impossible; okay, but a lot of biologists think this is obviously true. As a philosopher, as I said above, I’m agnostic and I keep following the pertinent literature with interest.


  16. On reflection, a bad example. Molecules should strictly be considered, not as ensembles of atoms, but as ensembles of atomic nuclei and electrons. Then there is nothing puzzling about the fact that molecules can show polarity, although atoms do not.

    A better example is thermodynamics. Notoriously, the physical laws covering individual particles show (near enough for our purposes) time-reversal symmetry, whereas thermodynamic properties such as entropy (with its implication of a direction of time) and even pressure and temperature are only meaningful for large ensembles.
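
    The point can be made concrete with a toy simulation (my sketch; the dynamics and the coarse-graining are arbitrary choices): the unbiased random walk below is statistically time-symmetric, a stand-in for reversible microdynamics, yet a coarse-grained entropy, defined only for the ensemble, rises from a low-entropy start.

    ```python
    import math
    import random

    random.seed(0)

    N, L = 1000, 100
    # All particles start in the left half of a box of L sites: a low-entropy state.
    positions = [random.randrange(L // 2) for _ in range(N)]

    def coarse_entropy(positions):
        # Entropy (per particle) of the left/right partition: S = -sum p ln p.
        p = sum(1 for x in positions if x < L // 2) / len(positions)
        return -sum(q * math.log(q) for q in (p, 1 - p) if q > 0)

    for t in range(0, 501, 100):
        print(f"t={t:3d}  S = {coarse_entropy(positions):.3f}")
        for _ in range(100):
            # Unbiased +/-1 steps with walls at 0 and L-1: no direction is
            # preferred, yet S climbs toward its maximum, ln 2.
            positions = [min(L - 1, max(0, x + random.choice((-1, 1)))) for x in positions]
    ```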


  17. Hi Massimo,

    As far as I can see the thing Coel called trite and uninteresting in his comment is the same thing you call trite and uninteresting in mine. I agree. I never meant to imply that it was interesting. I am echoing Coel’s point. It seems to be a trivial observation that if we don’t have a complete description of the system then we cannot predict red and blue, which makes the thought experiment pretty irrelevant. It seems to me that you misunderstood the intent of Coel’s point.

    I think the confusion may be that I am assuming that there are no extra laws that describe the emergent properties. I am assuming that any such laws themselves emerge necessarily from the fundamental laws. You are more open to the possibility that emergent laws are something over and above the fundamental laws. So when I consider having a complete knowledge of the fundamental laws of the black and white universe, I am assuming that means we have complete knowledge of all the laws of the black and white universe that are needed to predict its evolution.

    You ask whether it is possible that there are higher level laws which are compatible with lower level laws. My view is that if they are independent of the lower level laws, as seems to be required, then it is hard to see in which sense they are higher level. Lower level or higher level are concepts that apply when we want to reduce higher level laws to lower level laws. If the laws are independent then these new laws are not higher level, just different. It is logically possible that there are laws which have not been observed and cannot be observed because the physical situations where they would apply do not occur naturally, e.g. in the context of faster than light travel.

    But if these laws are supposed to be in operation day to day, then it seems that this is inconsistent with compatibility with low level laws. If low level laws say particles obey Newtonian mechanics (say), but higher level laws say that certain high-level assemblages of particles do not obey Newtonian mechanics, then we have a clear contradiction. If simulated at the level of the particles, we get one behaviour, but if simulated macroscopically we get another.

    It is logically possible that such contradictory high level laws exist, but if they do they are not compatible with the laws describing individual particles. This would mean that the laws describing individual particles are incorrect approximations to a much more complex general law. This seems to be unlikely and I reject it for reasons of parsimony.


  18. Hi Robin,

    Here is an example of independence between high and low level processes.

    I agree that one can completely understand the high level independently of the low level. The high level can have multiple implementations (running the same App on different chip architectures). But, I don’t agree that we can completely know the low level without “knowing” the high level, if we allow that a brute-force simulation counts as “knowing”.

    … we know that [a universal computer] can do any computation and therefore details of specific computations … add nothing …

    Specific computations have to be implemented at the low level, and thus to be complete the low-level description needs to include the specific computation.

    You are dividing the low-level into “general” rules plus specific information about “starting positions”. That’s fine (and useful for human-built computers), but a complete low-level description includes both. A low-level account of chess includes the allowable moves *and* the starting positions of the pieces.

    Given [Wolfram’s Rule 110] your brute force simulation, throw as many initial conditions at it as you can and eventually it will `predict’ quantum physics.

    That’s like “predicting” next week’s lottery by listing every possible number! To “predict” quantum mechanics you’d need to specify the particular initial conditions that lead to it, and those are just as much part of the low-level description as the dynamical rules.

    So [leptons] are not fundamental in that they depend upon there being these laws whereby they interact.

    I think it’s a mistake to see physical laws as “telling” particles how to interact. The physical laws are, rather, descriptions of the nature of the leptons, and it is indeed their intrinsic nature that is determining how they act.

    Hi Massimo,

    I think the point is that there are higher level laws, which regulate the change over time, which are not part of the basic rules of the game, …

    In that case the “basic rules of the game” are incomplete. Isn’t the topic of “emergence” about when we do have a complete low-level description?

    The interesting question here is whether there are higher-level laws … that are however not derivable from the latter.

    We need to clarify the nomenclature. To me a “higher-level” law is one that is emergent from and thus supervening upon the lower-level description. If you are adding laws that are totally independent of the lower-level description then you’re adding in new *fundamental* laws, not “higher-level” laws. Essentially you’re just saying that the initial low-level description was incomplete.

    These extra laws are not “emergent” or “higher level”, they’re basic and fundamental. Thus we need to distinguish the concepts: “Emergent 1” = “supervenes on the low-level”, and “Emergent 2” = “occurs later in time”.

    How about all the interesting issues that emergence poses from an epistemic perspective, or […] as a window on how the world works by interlacing different hierarchical levels …?

    Absolutely, those are of prime interest! Just as soon as everyone is agreed on the underlying principle of supervenience physicalism! 🙂 (Since agreement on that hugely clarifies and facilitates that discussion.)


  19. DM:

    Just to clarify, there is no such thing as “supervenience reductionism.” Supervenience is a physicalist thesis that tends to be much weaker, in its commitments, than reductionism…that is, if we are engaging in non-Humpty-Dumpty semantics.

    As for the rest, you are much more confident than I am that there is some clear relationship between the “lower” and “higher” levels. You are certain that if we knew every last thing there was to know about quanta or strings or whatevers, we’d know everything there is to know about economics, politics, poetry, and love affairs; I’m just as certain that we would not. The mere fact that there are laws at many of these higher levels of description—and thus, types and kinds—that are not, in any way, reducible to laws, types, or kinds of chemistry or physics should tell you that.

    I would actually go farther than this and maintain that chemistry and politics consist of entirely different language games with substantially different rules–in the form of grammars, conceptions of warrant, etc.–and that consequently even relations as weak as supervenience are wildly overstated, but one needn’t go that far to make the point.


  20. Bedau: “emergence:
    whole-part dependence > …,
    autonomy of the whole > …,
    examples > …”

    The whole and its parts are of course related by definition. Yet the ‘essence’ of emergence is that the whole ‘must’ be larger than the sum of its parts. This additional something is the essence of the emergence. Thus the entire issue is about this ‘additional something’: 1) what is it? 2) where does it come from? These two questions are answered in two parts (only).

    Part one, the general emergent process: {an initial axiomatic system (initial condition) + a Gödel process (evolution) + renormalization (cutting out the infinities, making a finitude)}.

    Part two, the ‘first’ emergence vs its fundamental: the result of mutual immanence and the principle of indivisibility.

    These two parts are ‘implemented’ with two processes (principles).

    P-one, the ‘Spider Web Principle’: the initial condition is a ‘total’ symmetry {where to cast the first spider thread is totally symmetric}. When the first thread is cast, the total symmetry is ‘broken’ and the location of the web is locked. When the second thread is cast, the ‘center’ of the web is fixed. When the third thread is cast, the ‘size’ of the web is defined. This symmetry-breaking sequence is the ‘emergence’ process.

    P-two, the Martian Language Thesis: any human language can always establish communication with Martian or Martian-like languages. This language universe consists of three parts: 1) continent A (the meta-language, the world events), 2) continent B (the meaning space about those world events), 3) a grand canyon (separating these two continents). Language is the ‘bridge’ linking the two continents. All languages are ‘emergences’ of this language universe. While English can never be reduced to Chinese, they can always be translated into each other. That is, reductionism is meaningless in this language universe.

    Emergence is not an effect of a cause but something ‘additional’ that pops out from a fundamental. The following is a simpler example than the language case.

    A ring-string is a total symmetry. When a ring is cut, it becomes a line-string which consists of three parts (two ends and the middle) = (A, C, B) or = (red, yellow, blue).

    As the line-string can be written as (red, yellow, blue), the ring-string = (white). That is, the (white) is fundamental while the (red, yellow, blue) is the emergence. Yet there is something additional in this emergent process: the ‘cut’ (symmetry breaking).

    Any ‘emergence theory’ which does not discuss this ‘cut’ is wrong. What is this ‘cut’? For the ‘first emergence’, the ‘cut’ is an innate part of the fundamental (mutually immanent and indivisible).

    So, the ‘arrow of time’ is the emergence of the {cut of timelessness (an indivisible)}. How does one cut an indivisible? This is a big issue. But the ‘result’ is that this cut gives rise to the Alpha equation. Another ‘cut’, on immutability (another indivisible), gives rise to the Standard Model fermions.


  21. I wish there had been time to explore this idea of a (possible) distinction between causal and nomological closure

    Me too, because that seems like a distinction that might give us a foothold in these discussions.

    The question would seem to be whether the causal constraints of a new entity are fully determined by the old rules. If they are not, then simulation as Coel and others have been talking about it would not work. There are things that you would not see emerging in those simulations.

    In the sort of simulation we’ve been discussing, causal factors are all fully determined. You see “new” behaviors at higher level (gliders, for example, in GoL), but there’s nothing new causally at work — it’s all just “weak emergence”. And in fact simulations like this have been used to argue for a causal reductionist/weak emergentist/supervenience view. The argument basically says: “if all the behaviors and entities that we see in the universe emerge in the simulation, nothing more is needed causally”.

    But a simulation in which new causal factors emerge would have to be one in which the behaviors of its entities are capable of changing the program that governs their behaviors — a sort of “meta-causal” layer (because the initial encoding of the rules would have to contain rules that change the rules, and this process itself would have to be “nomological”). If our actual universe works like that, we are in a weird place indeed.


  22. Is the premise of law itself emergent?
    Are there laws in the void, where there would be nothing to govern? Is there some platonic realm of laws? If so, then presumably all behavior would be governed by these pre-existing platonic laws and that would make them very complex, so why not consider the possibility of the opposite position, that laws are an expression of what happens, not the formula dictating what happens.
    For example, the most elementary law would be repeatability: that the same cause always yields the same effect, because that is the essence of what a law requires and how we would discover it. Without that, there is no such thing as law.
    If we consider this position, it might go a long way toward explaining why there seems to be a certain chaos to discovering the nature of laws and why there seems to be a disconnect between levels. Lower-level laws, those of the hardware/medium, seem disconnected from the higher level of software/message, yet maybe in our search for that linear connection we are missing a lot of feedback loops of structure building up and breaking down, and one of the next levels of law, beyond repeatability, would be a cycling of expansion/creation and contraction/dissolution of complex structure. Thus higher levels feed back through more elementary levels, much as basic waves pass, as message, through any number of complex mediums, and as extremely complex biological organisms are motivated by very basic impulses.
    Not that I expect this to get much initial consideration, because education, with which everyone here is well endowed, is a product of just such processes, otherwise known as trial and error, and the result is what is known as knowledge and “laws.” This goes back to the previous discussion of intuition and how everyone uses it, but then forgets it, like waking from a dream, when they discover the solution to whatever it is they were trying to solve. Hindsight is 20/20, otherwise known as being determined and lawful. Foresight is chaos and fog. Reductionism gives us laws, but from what are they formed?


  23. Hi Massimo, I understood that what I described likely fell within the categories you listed for Bedau, though I was not certain why all the distinctions of weak to strong were necessary. If there are truly other kinds of emergence, then it is likely I was ‘underthinking’ the issue. But I guess I am still not exactly clear what these are (hang on to that thought).

    “there is no contradiction between the two levels, only independence, they are not the same thing.”

    So I think maybe the language was throwing me off, and my own was a little sloppy as well (I meant to say “they can act contrary” not “they act contrary”). The ‘autonomous’ label combined with ‘not reducible’ suggested to me that one level could act in ways that could violate expectations from lower level mechanisms, which I do not believe is possible.

    “Right, epistemically. But the further issue is whether there is something different ontologically between the two levels of description. As far as I’m concerned that’s an open question about which one best be agnostic.”

    Ok, so I completely agree about staying agnostic, which is why I tend to restrict my analysis to the epistemic. ‘Seems as if there is something new’ is probably the best I can ever get toward an ontological claim on this. I wouldn’t know how anyone can move further (at least I haven’t seen anything convincing). Referring back to your statement about different kinds of emergence, is this what you were referring to?

    Also, I wondered if you could expand on the statement in your essay: “The conscious mind, the only example usually brought up in this context, is very much in question…”

    On Humphreys…

    “When the rules and make up of checkers world change because of meta-laws this is more than just change over time…”

    This was not convincing to me as described. ‘Meta-laws’ suggests there are rules in the system that consider (direct/allow) changes in the system over time. To me (but perhaps I am wrong) emergence involves an evolution of new entities/rules, not organic to the original system, out of normal interactions, when certain select environments/arrangements created by those parts produce novel considerations.

    If the example given was that during play, certain arrangements of checkers on the board produced novel ‘forms’ that one could describe as playing by another set of rules, it might sound more emergent to me.


  24. Hi Aravis, I agree with your statement that:

    “You are certain that if we knew every last thing there was to know about quanta or strings or whatevers, we’d know everything there is to know about economics, politics, poetry, and love affairs; I’m just as certain that we would not.”

    But I’m not certain DM was suggesting the lower level would deliver explicit knowledge of the higher-level entities/rules. Rather, the functions of the lower level ultimately underlie those things. Knowledge of the lower alone could (in theory) produce models of the behaviors of the higher-level entities seen at the lower level. Of course I think that would require such computation/knowledge as to be almost useless.

    I do disagree with DM that the properties you listed were emergent properties. While those are properties you can use to describe yourself, I don’t see how they are ‘emergent properties’. Emergent properties are a bit more generalized (class level), so the descriptors you give are merely elements of the ‘emergent property’ of mind, and perhaps societies. If you suddenly decided to convert to Xianity, that would be a change for you (as a mind) but not the emergence of a new property.

    If DM or Coel are suggesting that knowledge of quarks/atoms/etc. would allow explicit knowledge of the higher level entities/rules (like: oh yes, now we know there are ‘states’ and ‘schools’ and ‘teachers’, and what those do) then I am totally with you.

    I thought Robin’s description of the independence of two levels was accurate:

    “1. You can completely understand the high level process without knowing anything about the low level process
    2. You can completely understand the low level process without knowing about the high level process.”

    I’d only add that on top of being able to ignore the other level, it can be extremely difficult (if not practically impossible) to deliver an accurate prediction/account of what will happen on one level when activity is occurring at the other.


  25. Coel,

    In that case the “basic rules of the game” are incomplete. Isn’t the topic of “emergence” about when we do have a complete low-level description?

    Define “complete”. I’d say we never have a complete low-level description, since there are laws (which you and I may call fundamental) that live intrinsically in a high-level description, and cannot even be formulated at the low level. Besides, there is Goedel’s first incompleteness theorem, just waiting to jump on you if you try to claim that any imaginable “complete low-level description” of the real world could even exist.

    Regarding the above, note that I use the terms “low-level” and “high-level” in a different sense than you do — please see below.

    To me a “higher-level” law is one that is emergent from and thus supervening upon the lower-level description. If you are adding laws that are totally independent of the lower-level description then you’re adding in new *fundamental* laws, not “higher-level” laws. Essentially you’re just saying that the initial low-level description was incomplete.

    These extra laws are not “emergent” or “higher level”, they’re basic and fundamental.

    Define “higher level”. When physicists talk about higher level laws, they do not mean higher in some theoretical hierarchy of basic laws and derived laws. Instead, they talk about laws that apply to systems that are bigger in “size”, i.e. number of particles or complexity of interactions.

    The issue is that a certain observed law may be fundamental, while at the same time requiring high-level description for its own formulation. In other words, a large collective may uphold a law that is not a consequence of laws that each member of the collective upholds. You are correct that the collective-law is then fundamental (since it is not derivable from member-only-laws), but it also does not apply to any member individually. That is called “strong emergence” — a collective may be more than just the sum of its parts — and that is what I think Massimo was talking about.

    Everyone,

    I can see a lot of confusion in the comments, mostly regarding terminology and people talking past each other. Several days ago (before this discussion started) I submitted an article on reductionism and emergence to Massimo. I guess he has yet to read it and decide whether to post it on SS or not, but if he does, the article should help clear up some of the confusion. Hopefully. 🙂


  26. Hi Robin Indeededo,

    Does Quantum Physics demonstrate emergent behavior, or was it embedded all along? Is the motion of the golf ball after being hit by the club, emergent? Or a function of the known behaviors of forces in the known universe?

    Yes, this is sort of what I am exploring.

    I wonder if the philosophers who opine on emergence and reductionism would agree with statements like “Quantum physics is an emergent behaviour of Rule 110” or “Lagrangian Mechanics is an emergent behaviour of Game of Life”? Or perhaps “General Relativity is reducible to Lambda Calculus”.

    If so then I agree that the terms kind of lose meaning.
    Hi Coel,

    But, I don’t agree that we can completely know the low level without “knowing” the high level, if we allow that a brute-force simulation counts as “knowing”.

    So I get 2 facts in this order:

    Game of Life is a universal computer
    Game of Life can implement Eratosthenes’ Sieve

    What did fact 2 add to my knowledge about GoL? Nothing. Because by knowing fact 1 I already knew that it could do any computation, which, by definition, includes Eratosthenes’ Sieve.
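    (For concreteness, the Sieve itself is just an ordinary, fully specified computation, exactly the kind of thing universality already guarantees GoL can run; a minimal Python sketch, offered only as an illustration:)

        # Sieve of Eratosthenes: an ordinary, fully specified computation,
        # i.e. the kind of thing any universal computer, GoL included, can run.
        def eratosthenes(n):
            """Return the primes up to and including n."""
            is_prime = [True] * (n + 1)
            is_prime[0] = is_prime[1] = False
            for p in range(2, int(n ** 0.5) + 1):
                if is_prime[p]:
                    for multiple in range(p * p, n + 1, p):
                        is_prime[multiple] = False
            return [p for p in range(2, n + 1) if is_prime[p]]

        print(eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]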

    So now when I learn “GoL can implement X” I learn nothing new about GoL.

    That’s like “predicting” next week’s lottery by listing every possible number!

    Yes, that was my point, I was commenting on your concept of a brute force prediction. If we were to explore every possible behaviour of Rule 110 then quantum physics would be one of them.
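    To make “exploring the behaviour of Rule 110” concrete, here is a minimal Python sketch of the automaton itself (an illustration only; the universality result is Matthew Cook’s, and the code is not from any commenter):

        # Rule 110: each cell's next state is a fixed function of its
        # three-cell neighbourhood; bit n of the number 110 gives the
        # output for neighbourhood value n. A trivially simple rule that
        # is nonetheless Turing-complete.
        RULE = 110

        def step(cells):
            """One synchronous update, with fixed zero cells at the edges."""
            padded = [0] + cells + [0]
            return [(RULE >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
                    for i in range(1, len(padded) - 1)]

        row = [0] * 40 + [1]  # a single live cell at the right edge
        for _ in range(20):
            print("".join("#" if c else "." for c in row))
            row = step(row)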


  27. There was more I was going to say, but I might wait to see if Marko’s article is published.

    But the theme that seems to be emerging (so to speak) is that all we mean by ‘B is an emergent property of A’ or ‘B is reducible to A’ is ‘B is something that can happen in A’.


  28. Hi Aravis,

    I think dbholmes got a lot right in his comment describing how he understands my position.

    > there is no such thing as “supervenience reductionism.”

    “Supervenience reductionism” is the label you have given to Coel’s view (and mine, I guess), so it exists insofar as Coel and I hold it. What it really is, is a form of weak supervenience, and not reductionism at all as you use the term; but it is, Coel maintains (plausibly, in my view), what most self-identifying “reductionist” scientists actually believe.

    > you are much more confident than I am that there is some clear relationship between the “lower” and “higher” levels.

    I’m not at all confident that there is a clear relationship. The relationship is rather indirect and unclear. My view is just that all higher levels supervene on the lower levels in much the same way as the gliders in Conway’s Game of Life. There is nothing straightforward or one-to-one about that relationship. Really, I think our view on this is pretty trivial and uninteresting, and is being misunderstood as the kind of strong, wrong-headed reductionism it isn’t.

    > As certain as you are that if we knew every last thing there was to know about quanta or strings or whatevers, we’d know everything there is to know about economics, politics, poetry, and love affairs, but I’m just as certain that we would not.

    I think you misunderstand my view and Coel’s. We do not think that knowing everything about fundamental physics entails knowing everything about human affairs on planet earth. Rather, it implies (extremely indirectly) a set of possible states of affairs in our universe. So perhaps it does entail the laws that govern economics and poetry, but it would do so only by entailing all possible cultural configurations on any conceivable planet in any hypothetical society of biological (or machine) intelligences that could exist in this universe. This is a set of possibilities so vast as to be pretty useless as a field of study in itself. There is no getting around the fact that if we want to study economics or poetry as they work on earth, then we need to study these as fields in themselves. And that is not even mentioning the fact that deriving high-level laws from low-level laws, even when we do know everything about fundamental physics, demands more computation than could ever be feasible.


  29. DM,

    “I am assuming that there are no extra laws that describe the emergent properties. I am assuming that any such laws themselves emerge necessarily from the fundamental laws”

    I know. But the question is why? You (and Coel) have good reasons to think so, but you are making a strong, and in my mind still unwarranted, metaphysical leap by taking that position, while you guys seem to think that it, ahem, emerges straight out of the science. It doesn’t.

    “if they are independent of the lower level laws, as seems to be required, then it is hard to see in which sense they are higher level”

    Because they manifest themselves only at higher levels of complexity of matter. The levels here refer to organization of matter, not to the laws themselves.

    “if these laws are supposed to be in operation day to day, then it seems that this is inconsistent with compatibility with low level laws”

    I honestly don’t see why. Let me ask the question another way: where did the lower level “laws” come from? What’s their source? At the moment there is no particularly good answer to that question either, so I don’t see why it should be any more troubling to admit that we don’t have answers to how the (allegedly) emergent laws describing, say, biological or economic systems themselves arise.

    “a form of weak supervenience”

    To be clear: supervenience doesn’t come in degrees: either something supervenes on something else or it doesn’t, so “weak supervenience” is an oxymoron.

    Coel,

    “In that case the “basic rules of the game” are incomplete. Isn’t the topic of “emergence” about when we do have a complete low-level description?”

    To assume that the low-level description is complete in the reductionist sense is to beg the question, in the context of this discussion. See my response to DM above.

    “To me a “higher-level” law is one that is emergent from and thus supervening upon the lower-level description.”

    That’s one way to interpret higher level laws. Another is to say that they are compatible with, but do not result from, lower level laws.

    “Just as soon as everyone is agreed on the underlying principle of supervenience physicalism!”

    Why? (Again, see my answer to DM.)

    brodix,

    “Is there some platonic realm of laws?”

    No, again see my response to DM above.

    “why not consider the possibility of the opposite position, that laws are an expression of what happens, not the formula dictating what happens.”

    That’s what, for instance, Nancy Cartwright and Ian Hacking have been suggesting for a while: that no laws are “fundamental,” that they are all phenomenological, i.e., approximate descriptions of phenomena. I’m not sure I buy it, but I am strongly sympathetic to it.

    “Reductionism gives us laws, but from what are they formed?”

    Precisely the question that I asked DM (and implicitly Coel) above.

    holmes,

    “The ‘autonomous’ label combined with ‘not reducible’ suggested to me that one level could act in ways that could violate expectations from lower level mechanisms, which I do not believe is possible.”

    As far as I know, nobody sympathetic with strong emergence nowadays suggests that, although something like it has been proposed before (vitalism in biology, for instance). The exception may be dualists about the mind, like Chalmers. But that’s why both authors of this session stayed away from mental causation altogether: it is simply not well understood enough, hence likely to lead one into begging the question.

    “I tend to restrict my analysis to the epistemic. ‘Seems as if there is something new’ is probably the best I can ever get toward an ontological claim on this.”

    That strikes me as right. Now, of course, if we move from ontology to epistemology then it is downright bizarre to deny the existence of strong (epistemic) emergence: pretty much all the special sciences are full of examples of it. Which, by the way, is similar to the reason why antirealists in philosophy of science (next essay!) think of themselves as more empiricist than the realists: they take the empirical data at face value, attempting to reduce what they call inflationary metaphysics. Now, I’m not an antirealist, but I do think that in the case of emergence the reductionist has a lot of work to do to explain the phenomena, and he doesn’t even seem to see it.

    “I wondered if you could expand on the statement in your essay: “The conscious mind, the only example usually brought up in this context is very much under question…””

    That just meant, as I suggested above, that one cannot use the mind as the only case of strong ontological emergence, because we don’t understand the mind well enough to actually make that case. Phenomena from non-fundamental physics to biology to the social sciences are better candidates.

    “Meta-laws suggests there are rules in the system that consider (direct/allow) changes in the system over time. To me (but perhaps I am wrong) emergence involves an evolution of new entities/rules, that are not organic to the original system, from normal interactions when certain select environments/arrangements created by those parts produce novel considerations.”

    You may be correct, and perhaps that’s what Humphreys meant. As I said, his talk was difficult to follow, and I was taking copious notes in preparation for this essay…


  30. On strong emergence:
    whole-part dependence > properties of wholes supervene on properties of parts

    autonomy of the whole > properties of the whole are undefinable by / irreducible to properties of the parts
    “Bedau thinks this is a coherent concept (I don’t, as I see inconsistency between the ways in which whole-part dependence and autonomy of the whole are defined), but he claims that there are no actually well understood real examples of it.”
    I see no problem. Autonomy is defined in terms of nonreducibility. Reducibility is usually defined in terms of derivability. There are plenty of systems in which there is a higher-level cohesive activity that is not derivable (computable) from the motions of the parts. I discuss this in “A Dynamical Account of Emergence” (Cybernetics and Human Knowing 15(3-4), 2008: 75-100).

    [PDF: A Dynamical Account of Emergence]

    That paper gives the dynamical conditions under which this nonderivability occurs. They are fairly common. Flocks are not emergent in this strong sense. Mark used it as an example in a talk he gave in Newcastle, Australia, close to twenty years ago, and both Cliff Hooker and I took exception. Macroscopic flock dynamics are almost certainly reducible in the sense I have given. You need dissipation to get strong emergence.
    I am a bit disappointed that Mark and Paul have not gotten further with the issue. I suppose that is part of the reason I don’t interact with philosophers much anymore. The comments here, by the way, are full of red herrings.


  31. Hi Massimo,

    > But the question is why?

    Parsimony. The ability of complex, surprising behaviour to arise from very simple rule-based systems has been demonstrated repeatedly in simulations, cellular automata, mathematics and so on. It’s a topic I take a great interest in (e.g. I did a MOOC on complex systems run by Melanie Mitchell of the Santa Fe Institute). These findings are very powerfully suggestive and leave me confused as to why anybody would feel there is any reason to believe that high level laws are independent of low level laws.

    > while you guys seem to think that it, ahem, emerges straight out of the science. It doesn’t.

    Well, I agree that it doesn’t, but the science does seem to leave very little reason to believe otherwise. It is logically possible that there are high level laws that only come into play when there are complex systems at work, but I find that to be very implausible because weak emergence appears to be enough to account for everything we see.

    > where did the lower level “laws” come from? What’s their source?

    I agree that’s an important question. I can’t speak for Coel but you ought to know my view, which is that the MUH explains it. All mathematical objects exist, and our universe is just the mathematical object comprised of its physical laws. They didn’t come from anywhere; they exist Platonically, just as do all other possible sets of physical laws. If there are high level laws that only come into play with complex systems, but which don’t simply emerge weakly from low level laws, then in my view they would have been there from the beginning as part of the structure of the universe. Any purely low-level description of the universe would have to be incorrect and incomplete. Such high level laws could not be compatible with low-level laws; they would have to override and supersede low-level laws in situations where they come into play.

    >> “a form of weak supervenience”

    > To be clear: supervenience doesn’t come in degrees

    I’m contrasting weak supervenience with strong emergence. I’m trying to explain that Coel’s views and mine are considerably weaker than Aravis seems to infer.

    In any case, there are usually a large variety of philosophical interpretations of various concepts, and supervenience seems to be no different. Without necessarily implying that this is what I had in mind, take for example http://plato.stanford.edu/entries/supervenience/#4.1 which distinguishes between “weak individual supervenience” and “strong individual supervenience”.
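    For reference, the standard formulations (my gloss, following Kim’s definitions as presented in that SEP entry; A is the supervening family of properties, B the base family) differ only in the placement of a second necessity operator:

    Weak: $\Box\,\forall x\,\forall F \in A\,[Fx \rightarrow \exists G \in B\,(Gx \wedge \forall y\,(Gy \rightarrow Fy))]$

    Strong: $\Box\,\forall x\,\forall F \in A\,[Fx \rightarrow \exists G \in B\,(Gx \wedge \Box\,\forall y\,(Gy \rightarrow Fy))]$

    The inner $\Box$ in the strong version is what forces base-twins in different possible worlds to match in their A-properties; weak supervenience only constrains individuals within a world.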

    So without committing myself to any particular academic account of supervenience (which I lack the expertise to identify off the top of my head), I just wanted to be clear that it doesn’t seem to me like I’m making any particularly strong claims, at least from Aravis’s perspective where his concern is the relation of quarks to poetry, etc.


  32. Hi Massimo,

    Let me ask the question another way: where did the lower level “laws” come from? …

    “Physical laws” means descriptions of how stuff behaves. Thus, “low-level” laws are descriptions of how particles, owing to their natures, move and interact with each other.

    Now, suppose there are “higher level” laws (of biology or economics or whatever) that are “compatible with, but do not result from, lower level laws”. In that case, some animal would be doing something different than as specified by the low-level laws alone. Thus the particles composing that animal would be doing something different than as specified by physics. That means — by definition — that the low-level descriptions of how those particles move and interact would be incomplete.

    This means that you’d need to have physical entities such as particles moving in a way other than as specified by physics, which means some (in-principle detectable) forces that are acting on particles in ways inconsistent with current physics.

    Now, one could argue that if these “extra forces” are only acting in certain complex situations (such as biology) then physics may have overlooked them. But it does mean that if we looked closely enough at a biological system then we’d see physical particles moving in ways incompatible with current physics.

    As far as we can tell, “the laws underlying the physics of everyday life are completely understood” (as emphasized by Sean Carroll).

    you are making a strong, and in my mind still unwarranted, metaphysical leap by taking that position.

    That “metaphysical leap” being essentially Occam’s razor, since there is no evidence for these “strong emergence” extra forces that are in addition to current physics.

    Hi Marko,

    When physicists talk about higher level laws … they talk about laws that apply to systems that are bigger in “size”, i.e. number of particles or complexity of interactions.

    “Complexity of interactions” I agree with. Thus one presumes that the “higher level” behaviour of a complex system is the result of all of the simpler interactions that make up the complex system. The idea of “strong emergence” is not one that has any standing in physics, and as far as I’m aware there is no evidence for it. To take an example, the 2nd law of thermo is higher-level and “emergent” in the sense of being statistical and applying to complex many-particle interactions. But, you do not have to “add in” a 2nd law in order to get a low-level simulation to manifest 2nd-law behaviour, it is “weak emergent”.

    Hi Robin,

    Game of Life is a universal computer
    Game of Life can implement Eratosthenes’ Sieve
    What did fact 2 add to my knowledge about GoL? Nothing.

    Agreed. But now consider (1) the basic specification of GoL, (2) GoL configured such that it is implementing Eratosthenes’ Sieve. The second takes far more information to specify. If you do want a complete low-level specification then you need that extra information.

    If we were to explore every possible behaviour of Rule 110 then quantum physics would be one of them.

    Agreed, but that is not “predicting” QM any more than listing every lottery number is a prediction of next week’s number. A “prediction” of QM needs to emulate that behaviour and nothing else. That is not entailed in your bare-bones Rule 110.


  33. DM, some people would say that the empirically most parsimonious way to read the universe is precisely that a number of systems are characterized by strong emergence. And of course parsimony is only a generic heuristic, which has turned out to be wrong a number of times in the recent history of physics.


  34. We are having a hang up on the issue that the high level systems (economics or politics) are ‘seemingly’ having nothing to do with the low level systems (fundamental physics or number theory).

    I have discussed a “Large Complex System Principle” (LCSP) at this Webzine: there is a set of principles which governs all large complex systems (a number set, a physics set, a life set or a vocabulary set).

    Corollary of LCSP — the laws of a “large complex system x” will have correspondent laws in a “large complex system y.”

    Yet, can the ‘current’ mainstream physics and mathematics produce that ‘set’ of laws and principles which work for the high level systems? The answer is a big No. They are at least ‘one step’ away from it. I will discuss this issue in three steps.

    Step 1, what is emergence? A baby chick ‘breaking’ out of the egg shell is emergence, which consists of three parts {the chick, the cut, the big outside world for them to roam in}. In general, we take the cut and the outside (spacetime) for granted.

    Step 2, if ‘numbers’ (as chicks) are the emergences, what are the ‘cut’ and the ‘outside’? The entire number line is defined by three entities {zero (0), numbers, infinities} with an equation {0 = numbers/infinities}. By taking the ‘hypothesis of the continuum (HC)’ as the third infinity, the number line can be described with a 7-code system {zero (countable), zero (HC), zero (uncountable), 1, C (countable), HC (pseudo uncountable), U (uncountable)}, see https://scientiasalon.wordpress.com/2014/11/24/infinities-in-literature-and-mathematics/comment-page-1/#comment-9782 . With these 7 codes, all members (numbers or else) of a system can be uniquely identified, and this is the ‘base (necessary condition)’ for consciousness. From this, three systems are linked.
    One, number system (7 codes)
    Two, life system {A, G, T, C, M (male), F (female), K (kids)}
    Three, elementary particles {red, yellow, blue, white, G1, G2, G3}

    Now, the high level ‘consciousness’ is linked (not reduced) to the low level systems.

    Step 3, linking to economics. We need to re-write the 7-code law into a symmetry (the total symmetry). If the Standard Model symmetry is a ‘ball’, the SUSY makes another ball as its symmetry partner, forming a dumbbell which in fact reduces the symmetry. For a ball, its symmetry will not be ‘reduced’ only if its symmetry ‘partner’ is a ‘point’ on that ball. So, the SM symmetry is a ‘ball’ with one ‘point’ missing.

    SM + the missing point = total (perfect) symmetry, and this forms the REAL/Ghost symmetry. It is not too difficult to show that {0s, infinities} are residing at the same ‘point’ (the ghost) while the numbers are ‘REAL’.

    With the REAL/GHOST symmetry:
    ‘now’ goes into the ghost point becoming the past,
    ‘future’ pops out from the ghost point becoming ‘now’.

    With this real/ghost symmetry, we can derive all high level systems. I am providing only two links here.

    The economics: http://www.chinese-word-roots.org/econom01.htm
    The politics: http://www.chinese-word-roots.org/cwr016.htm


  35. Hi Coel (and to some extent DM). I have to disagree with the idea that rules for higher order entities are somehow part of/packaged into rules for lower order systems.

    Let me try to explain better why this is…

    A set of rules regarding the behavior of some entities (X) can lead to certain stable aggregations of X (which we can then call Y). Behavior for continued stability of Y, in a new environment filled with Y’s, may involve rules of interaction, but now at the level of Y itself (they key off features of Y).

    Perhaps the minimal rules related to stability for Y can be said to be ‘within’ the rules of X. But in addition to stability there can/will be rules related to other potential interactions between Y’s at that level (growth, complexity). While X rules for stability cannot be violated, Y rules beyond that can be completely novel (meaningless and so unpredictable from lower order rules). More importantly they may only come about when Y has developed one set of particular behaviors versus others (meaning that as time goes on some potential sets of Y rules may get cut off). So to know the full Y rules one has to wait and see how things play out. They are contingent on properties of Y, but not mandatory once Y emerges.

    This goes back to my example of Joe killing his wife. When atomic structure or cellular life emerged it would have been impossible to anticipate/predict the emergence of minds and the ‘elements’ found at that level such as: names, personal relationships, jealousy, manufactured weapons, and murder (not to mention music and songs… like ‘Hey, Joe’ which can hold representative meaning of all those same things).

    I think DM (perhaps Coel) agrees with me with that example, but perhaps not the next point…

    If events happened to roll another way historically at X or Y, minds or specific aspects of minds (i.e. sexual jealousy, weapons) might never become part of the Y rule book. If none of the actual Y rules could be predicted, as not all were mandatory, why should they be considered part of the rules of X?

    It is possible in theory that by running endless simulations of X using X rules you would eventually come across the emergence of Y and the subset of all possible Y rules seen in the particular environment it happened to develop within (at one time). But that makes my point that you would have to wait and see it emerge. It was not there all along.

    In short:

    X rules may ‘have it in them’ to create Y and Y rules, but that does not mean that Y and Y rules are ‘within X rules’.

    To Massimo,
    “Now I’m not an antirealist, but I do think that in the case of emergence the reductionist has a lot of work to do to explain the phenomena, and he doesn’t even seem to see it.”

    Agreed!


  36. Coel,

    As far as we can tell, “the laws underlying the physics of everyday life are completely understood” (as emphasized by Sean Carroll).

    Did you bother to read Sean’s third paragraph? Here is the relevant part (quoting Sean):

    Most blindingly obvious of all, the fact that we know the underlying microphysics doesn’t say anything at all about our knowledge of all the complex collective phenomena of macroscopic reality

    Besides, I had a few words back and forth with Sean about the level of rigor of the statements he made in that post. It’s in the comments. 🙂

    the 2nd law of thermo is higher-level and “emergent” in the sense of being statistical and applying to complex many-particle interactions. But, you do not have to “add in” a 2nd law in order to get a low-level simulation to manifest 2nd-law behaviour, it is “weak emergent”.

    Do you have a proof for this statement? I’m sure a whole bunch of physicists would just love to see it. 🙂

    Ironically, you are quoting one of the most established examples of “strong emergence” in known physics. Let me sketch a counterexample to your statement.

    Take an isolated box split into two halves by a door in the middle. Fill the left part of the box with a gas, while leaving the right part in vacuum. Then open the door, watch the gas fill the whole box, and record the process with a camera.

    If you play the recording, you will find that the trajectory of each atom of the gas obeys, say, Newton’s laws of mechanics. In addition, the gas as a whole obeys the second law of thermodynamics.

    Now play the recording in reverse, all the way to the point where the gas fills only the left part of the box. Again you can find that the trajectory of each atom of the gas still obeys Newton’s laws of mechanics. However, the gas as a whole is now in flat-out contradiction with the second law of thermodynamics.

    Conclude that the second law cannot be a consequence of Newton’s laws, since there are two example systems where Newton’s laws hold in both, while the second law holds in one but is violated in the other. Therefore, the second law is independent of Newton’s laws.

    Finally, note that the only relevant property of Newton’s laws that was used here is time-reversibility. So one can repeat the argument for any other “low-level” laws governing the motion of atoms in a gas, as long as those laws are time-reversible. Say, the Standard Model of elementary particles, or any other — pretty much all “fundamental” laws of physics are time-reversible. The second law of thermodynamics cannot be considered a consequence of any of such laws. And if it isn’t their consequence, then it is independent, fundamental, “strongly emergent”, and so on.

    By the way, I didn’t invent this; it has been known for a long time now — take a look, for example, at the Wikipedia entry on Loschmidt’s paradox as a starting point.

    At some handwaving level, a particular atom cannot “know” what other atoms are doing and which way they are going, so that it could somehow “adjust” its own behavior. This is the statement of time-reversibility of low-level laws. On the other hand, the gas as a collective somehow “does know” what each of its atoms is doing, and never displays behavior like completely retracting into the left half of the box. So each atom in the collective obeys a certain law of motion, but the gas as a collective obeys all those laws taken together, plus one additional law that is “invisible” to each atom individually, and independent of the low-level laws governing the atoms. This is called strong emergence — the whole is more than the sum of its parts.
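    (Marko’s reversal argument is easy to check numerically. Here is a toy Python sketch, my own, under the simplifying assumption of non-interacting particles, which is all the time-reversal point needs:)

        import random

        # Free expansion of an ideal (non-interacting) gas in a 1D box of
        # length L: start everyone in the left half, evolve, then reverse
        # all velocities and evolve for the same time. The reflecting-wall
        # dynamics are time-reversible, so the gas marches straight back
        # into the left half -- the "anti-thermodynamic" reversed movie.
        L, N, T, dt = 1.0, 1000, 5.0, 0.01

        def evolve(xs, vs, t):
            for _ in range(int(t / dt)):
                for i in range(len(xs)):
                    xs[i] += vs[i] * dt
                    if xs[i] < 0:
                        xs[i], vs[i] = -xs[i], -vs[i]          # bounce off left wall
                    elif xs[i] > L:
                        xs[i], vs[i] = 2 * L - xs[i], -vs[i]   # bounce off right wall

        random.seed(0)
        xs = [random.uniform(0, L / 2) for _ in range(N)]  # all in the left half
        vs = [random.uniform(-1, 1) for _ in range(N)]

        evolve(xs, vs, T)
        print(sum(x < L / 2 for x in xs) / N)  # ~0.5: the gas has spread out

        vs = [-v for v in vs]                  # Loschmidt's velocity reversal
        evolve(xs, vs, T)
        print(sum(x < L / 2 for x in xs) / N)  # ~1.0: the gas "un-mixes"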


  37. I address this to John Collier–but am interested in the comments of anyone here:

    The paper you linked to was helpful for me. The discussion, so far, had seemed to me to be based on particular assumptions whose rationale I was unable to understand. I was going to stay out of the conversation, since I am formally trained in neither philosophy nor science and felt that I was perhaps missing something that would eventually become obvious to me. While I lack the prerequisite education for understanding your dynamical account of emergence, I think, maybe, your paper expressed the problems in my own head.

    I was wondering how we could be discussing the relation of parts to ‘wholes’ when it’s not clear to me that any ‘whole’, as I understand it, exists. I was understanding a ‘whole’ as a closed system, or what you call ‘the self-sufficiency of objects on their composition’–to quote:

    “The major physicalist alternative to emergentism is ontological reductionism… Assuming physicalism and determinism, and a modest finitism implying the closure and self-sufficiency of objects on their composition… This sort of reduction requires… an artificial consideration of systems as closed.”

    I see that, as a practical matter, we assume the self-sufficiency of objects insofar as we consider objects to be inherently real, but that objects are nothing more than function. This goes to section 8 of your paper, on Individuation and Autonomy, in which

    “Cohesion is also a property of individuation, because it not only binds together the components, but because the binding must be stronger overall than any binding with other objects… One variety of cohesion is autonomy, which is an organizational closure that maintains the closure so that the autonomy survives… autonomy is functional in that it produces survival, and the components are functional inasmuch as they contribute to autonomy. This is the most basic form of function.”

    But there are, as you say, levels of cohesion and autonomy, which may lead to functional conflicts, as between body/cells, mind/body, etc. So my interest in these sorts of discussions is in how much our experience of these levels as qualitatively distinctive is a constraint of our consciousness on a reality in which, otherwise, all distinction is quantitative. In other words, to use the metaphor of Plato’s injunction to ‘carve nature at its joints’, I wonder if nature is nothing but joints, and whether how we carve it is a function of consciousness.


  38. I want to say that I am glad John Collier posted a comment here. Although I wasn’t able to access the paper he linked to, I was able to find several other papers, and I found them informative and cogently argued.

    I didn’t want to get into the argument, but I found the position argued by Coel and DM unnecessarily reductionist, to the point where they seem not to be talking about emergence at all. Also, they seem to be making a ‘god’s-eye-view’ argument – ‘*if* we could know everything about the fundamental laws’ – well, but we don’t, so the value of such a view is hypothetical at best.

    I confess I would have liked to have seen more discussion concerning emergence in terms of social systems. Aravis actually opened the door to that, but nobody walked through it (the type of social system that grants citizenship is clearly emergent from systems that have less complexity, less diversity, and fewer techniques to articulate social differences). The social phenomenon discussed as a candidate for emergence – mobs – is actually a poor choice. On review, much sociology and social psychology of mobs seems weak to me, because mobs come in different varieties, with various degrees of organization, violence or potential violence, etc., to the point where I question our present definitions of the phenomenon (despite the fact that most of us share enough of a general sense of such phenomena that we can talk about them in meaningful social or political contexts).

    I also would have liked to see more about applications of the principle of emergence in biology. I recently had cause to read about Systems Biology and found much of it, as both explanation and application, intriguing. (Perhaps there can be a discussion of it at a later date?)


  39. Hi dbholmes,

    I have to disagree with the idea that rules for higher order entities are somehow part of/packaged into rules for lower order systems.

    Let’s remind ourselves that “rules” or “laws” are essentially *descriptions* of behaviour. So, if we ask whether the high-level *descriptions* of high-level phenomena are contained within the low-level description, then the answer is clearly no.

    But, that’s different from whether the *behaviour* (as opposed to the *description* of the behaviour) is entailed by the low-level description. That is a difference between ontology and epistemology. The doctrine of supervenience physicalism that I’m defending is about ontology; it is not about epistemology, and clearly supervenience physicalism alone is inadequate for epistemology.

    This goes back to my example of Joe killing his wife.

    Another distinction we need to make is between supervenience and long-term determinism. I would assert that if we had a complete low-level description at time = now of all the molecules making up Joe and his wife, then that entails the death of Joe’s wife. That low-level description would of course have to entail all the specific starting states of every molecule.

    That is not the same thing as asserting that the death of Joe’s wife could have been predicted an eon ago, since over long timescales things are not deterministic.

    Hi Marko,

    Did you bother to read Sean’s third paragraph?: “… doesn’t say anything at all about our *knowledge* of all the complex collective phenomena …” (emphasis added)

    Yes, and I agree, and that comment is about epistemology. Again, at the moment I’m defending supervenience physicalism, which is about ontology.

    Now play the recording in reverse, all the way to the point where the gas fills only the left part of the box.

    We’re dealing with a simulation here, not a time-reversed recording. The difference is important, since, if it were a simulation, what you say would only work under strict determinism and with an infinite number of significant figures for each variable.

    With any small degree of randomisation (presumably from quantum indeterminacy) you no longer get time reversibility, you get weakly emergent second-law behaviour. I’m sticking to my claim that the second law is weakly emergent, though since it is a probabilistic law it does require some departure from infinite-accuracy strict determinism.

    On the other hand, the gas as a collective somehow “does know” what each of its atoms is doing, and never displays behavior like completely retracting into the left half of the box.

    I’d reject any additional “agent” that somehow “knows” about the collective state, and then exerts some additional force on particles telling them how to move. That sort of “strong” emergence seems outlandish and lacking in any evidence. All you need for the second law to be weakly emergent is for each particle to be throwing a quantum dice as it moves.


    While I’m not as informed about all the various aspects of lower level versus higher level laws, and naturally assumed it was about chemistry out of physics, biology out of chemistry, sociology out of biology, etc., one aspect which seems to run through these debates is that thermodynamics is a higher order to what might be called direct linear causality, though I would argue they are really two sides of the same coin. As Newton did say, for every action there is an equal and opposite reaction. What he didn’t add is that the reaction isn’t necessarily, and in fact is not, linear. Those relativistic feedback loops to particular actions are thermodynamics, and so the “lower order” of direct action is always part of the “whole”/higher order, larger relativistic activity. Like nodes and networks.
    This goes to a point I made previously: that higher order actions, such as those of society being comprised of individuals and thus biology, often exhibit less complex behaviors, much like a wave going through a more complex medium. This then goes to why statistics can be so effective, as these thermodynamic feedback loops balance out much of the detail of the lower levels. Such that extremely biologically complex individuals can be motivated by very basic impulses, just as very complex societies can fall into elementary processes, such as how all the activities which lead to wars resemble vortices, where often even efforts to prevent them seem only to add to the energy feeding them.
    This then also goes back to the point I keep making about time, and how it is the effect of future becoming past, and as such is to temperature what frequency is to amplitude.
    Now we think of time as fundamental and temperature as emergent, but entropy is that inherent tendency toward equilibrium and balance, of all the various amplitudes trading energy around to match. Frequency, meanwhile, is a linear function, and in a non-linear context is noise/static. “Chop” to a sailor.
    Which goes to say that if we want to see the big picture, we need to step back and look at it, not simply dig ever further into the details, as details only inform you about details.


  41. Coel,

    Yes, and I agree, and that comment [by Sean Carroll] is about epistemology. Again, at the moment I’m defending supervenience physicalism, which is about ontology.

    When answering Massimo, you quoted Sean’s article and his statement that “the laws underlying the physics of everyday life are completely understood”, which is obviously an epistemological claim. I don’t really understand why you quoted Sean to begin with, if you are not discussing epistemological reductionism.

    Besides, the issue of reducing the second law of thermodynamics to Newton’s laws (or QM, or the Standard Model, or whatever fundamental theory we have available) is also epistemological in nature. The topic of ontological reductionism is a no-go on completely different grounds. This is addressed in my pending article, so I won’t elaborate here, but in a nutshell — Goedel’s first incompleteness theorem forbids the existence of a “theory of everything” which is simultaneously both ontological and parsimonious. That said, I suggest that we postpone the discussion regarding ontological reductionism until after my article, which can (hopefully) provide a convenient framework for such a discussion. 🙂

    what you say would only work under strict determinism and with an infinite number of significant figures for each variable.

    I suggest against mixing determinism (or lack thereof) into the discussion of reductionism. It is bound to introduce confusion in some of the audience, while it will not help you derive the second law. That said…

    It is indeed true that the nondeterministic effective equation of motion (or equivalently, a nondeterministic simulation which iterates such an equation through time) is not time-reversible. However, such an equation can only tell you that your system will become unpredictable after some finite time; it doesn’t tell you anything even near the statement that the system obeys the second law of thermodynamics — which is a requirement that restricts indeterminism (it’s a law that must be obeyed by the system). Simply put, at the level of the gas-in-the-box example: after some finite time such an equation of motion will tell you that you cannot know which volume the gas will occupy. This is not the same as stating that it must occupy both left and right compartments — it says that the uncertainty of the total occupation volume is bigger than the size of the box. This equally “predicts” the second law and its converse. Upon measuring the position of the gas, it says there is equal uncertainty of finding it in the whole box or in the left compartment only.

    All you need for the second law to be weakly emergent is for each particle to be throwing a quantum dice as it moves.

    The quantum mechanical description of a system contains two pieces — the equation of motion (which is not only deterministic but even unitary, and also time-reversible) and the collapse postulate (which is nonunitary, nondeterministic, and time-irreversible). The latter describes the process of “measurement” in QM, which is anything but well understood.

    Given that, you seem to be claiming that one poorly understood phenomenon (the second law of thermodynamics) can be properly understood and accounted for by reducing it to another, even more poorly understood phenomenon (the quantum-mechanical measurement). Even if you can provide some convincing argument that establishes such reductionism (so far all previous attempts at this have failed), this strategy does not prove the weak nature of the emergence of the second law, but rather just reformulates the problem in another language. There are already well-known (and much simpler) attempts to do this — by “reducing” the second law to the low-entropy state of the Universe at the Big Bang, or by fine-tuning the inflaton potential in some inflationary model, etc. Essentially, it is just substituting one axiom for another, no reduction can actually be established.

    Just as a more emphasized analogy, I could say that the statement “God exists” is reducible to the (so far unknown) theory of quantum gravity — we still don’t have a proper epistemological formulation of that theory, but ontologically there is “no problem”. It is obvious that such statements cannot count as proof of anything, let alone ontological reductionism. 🙂


  42. Hi Coel, just to let you know this is my last comment on this thread.

    “…that’s different from whether the *behaviour* (as opposed to the *description* of the behaviour) is entailed by the low-level description. That is a difference between ontology and epistemology…”

    Yes, fair enough. I know my language is epistemology-laden on this subject, as is most of my analysis. I agreed with Massimo that agnosticism is preferable regarding ontology (of emergent properties).

    However, part of my reply was trying to point further to what I see as an ontological issue. I may have undercut myself by returning (repeatedly) to epistemic concerns/language. Two passages were meant to get at not just descriptions, but behaviors entailing other behaviors.

    1) “More importantly they may only come about when Y has developed one set of particular behaviors versus others (meaning that as time goes on some potential sets of Y rules may get cut off). So to know the full Y rules one has to wait and see how things play out. They are contingent on properties of Y, but not mandatory once Y emerges.”

    This suggests that sets of potential behaviors (on which descriptions have to be based) can be cut off from possibility based on events unfolding. The result being that behaviors themselves are contingent and not mandatory, which leads to the epistemic issue of rules only being able to be made after the fact.

    2) “If events happened to roll another way historically at X or Y, minds or specific aspects of minds (i.e. sexual jealousy, weapons) might never become part of the Y rule book.”

    Eh, word count issues hit me on that. The preferred sentence was “…might never have existed and so become part of the Y rule book.”

    When you say “entailed”, I am not so confident it is that strong a connection (regardless of randomness that might interfere).

    I’ll leave with one last analogy that hopefully sticks with the ontological angle. Totipotent stem cells can become any cell type in an organism’s body, as well as placental cells. As development begins, actions take place within the cell, and its potency decreases, limiting its possible phenotypes from then on. The range of real behaviors (whether we can describe or model them accurately or not) diminishes.

    Similarly, as entities of one level aggregate (X becomes Y) their arrangement, and the resulting environment for that arrangement, might chip away at the full set of behaviors for Y that the lower level might allow. The potential range of behaviors for Y diminishes. But like the stem cell, there is no mandatory path and it is up to temporal issues for setting exactly what it will be. Same for Y. And some possible behaviors, especially for complexity, might be ‘entailed’ by the internal logic set up at that level based on how many Y’s you get (and in what form). Sort of like what the body can actually do once all the cells have finished dividing and differentiating.


  43. Hi dbholmes,

    The result being that behaviors themselves are contingent and not mandatory, …

    Yes, I agree. Most higher-level behaviours will be hugely dependent on vast amounts of historical contingency. Thus, the evolution of mammals will have been contingent on the historical accident of a meteorite hitting earth, and all of our politics and economics is contingent on that. Thus, as I’ve emphasized in other comments, a low-level description does need all of the contingent “starting points”, not just the dynamical rules. (I’ve also emphasized that the low-level description needs recent starting points, since in the long term things are not deterministic.)

    Thus, I agree with you entirely that historical contingency narrows down what actually happens from the superset of what could in-principle happen under the dynamical rules alone.

    Hi Marko,

    I don’t really understand why you quoted Sean to begin with, if you are not discussing epistemological reductionism.

    I was trying to emphasize what “strong” ontological emergence must entail. It implies that there are hitherto unknown forces that are telling atoms how to move. These forces are controlled by some agent which in some way “knows about” the high-level properties of the system, and which “chooses to” invoke such forces only in complex situations such as biology and not in simpler situations.

    My point in referring to Sean Carroll was simply that physicists would have to have so far totally overlooked these forces — which must be acting on low-level particles such as atoms and molecules. That’s not totally impossible (if the forces only operate in complex situations) but the burden of proof is very much on those making this outlandish claim.

    … you seem to be claiming that one poorly understood phenomenon (the second law of thermodynamics) …

    I’m surprised that you consider the second law poorly understood! Let’s take your box with all the red particles on the left and all the blue particles on the right. Now let’s evolve it with (1) Newtonian mechanics, plus (2) a random-number generator adding in a small amount of non-determinacy.

    That system will then automatically evolve to the high-entropy state of (roughly) equal numbers of each particle on each side. That follows purely because the number of microstates for that high-entropy macrostate vastly exceeds the number of microstates for the all-on-the-left macrostate. Therefore, given that the effect of the random-number generator is to make each microstate equally probable, the overwhelming probability is that the system will evolve to the high-entropy macrostate. That is more or less a statement of the second law, which is then weakly emergent from the Newtonian mechanics plus some random non-determinacy.

    Are you really disputing that this simulation would work and would manifest weakly-emergent second-law behaviour? (Indeed, a Google search found several such simulations, such as this one.)
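    (For what it’s worth, the textbook toy version of exactly this is the Ehrenfest urn model; a short Python sketch, mine rather than one of the simulations Coel found:)

        import random

        # Ehrenfest urn: N particles in two halves of a box; each step a
        # randomly chosen particle hops to the other side. Started with
        # everything on the left, the occupation drifts to ~50/50 and then
        # just fluctuates there, because vastly more microstates belong to
        # the mixed macrostate -- second-law behaviour with no second law
        # "added in".
        random.seed(1)
        N = 1000
        left = [True] * N  # all particles start in the left half

        for t in range(20001):
            i = random.randrange(N)
            left[i] = not left[i]  # one random particle changes sides
            if t % 4000 == 0:
                print(f"step {t:6d}: fraction on left = {sum(left) / N:.3f}")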

    (PS. I look forward to your article on this; this was my fifth and last.)


  44. Lots of agreement.

    I think the cases of strong emergence I’ve read about are the expression of relations, or of laws and forces, that we are currently not aware of, or have yet to accurately apply.

    But I don’t think we can know it all, or that we can be said to be on our way to full knowledge.

    So meanwhile, when we speak of things like cracks in science, I think it’s because we are relatively blind to the extent of what we do not know.

    In fact, we tend to disregard aspects of complex processes when they are beyond our current understanding: by glossing them over, rounding down their relevance, and over-inflating how much we can still put together in a coherent fashion.

    Overall, I think it is more accurate to view ‘reality’ as something that is mostly unknown, and then as something on which we are charting some sensual and scientific, human perspectives.


  45. Coel,

    My point in referring to Sean Carroll was simply that physicists would have to have so far totally overlooked these forces — which must be acting on low-level particles such as atoms and molecules.

    Oh, I see, ok. 🙂 Btw, just as a side note — such forces are not completely unheard of. They are generally described by nonlocal equations of motion: physical process “here” depends on the physical process “there”, without anything propagating from “there” to “here” to transmit some “signal”. This is commonly called “action at a distance”. Also, there are models involving tachyonic particles and such. Finally, the strength of the force may be dependent on the number of particles in the system, which would make the force too weak to be visible for elementary-particle physics (like gravity is), or something to that effect. It wouldn’t be so impossible to construct such models — the main reason they are not studied as mainstream is that they are mostly nondeterministic, and that we lack math that is powerful enough to extract useful results from such models.

    Let’s take your box with all the red particles on the left and all the blue particles on the right. Now let’s evolve it with (1) Newtonian mechanics, plus (2) a random-number generator adding in a small amount of non-determinacy.

    Oh, but now you are changing the problem! I do agree that the second law could conceivably be reduced to Newton’s laws plus an element of randomness (although this still requires rigorous proof). I do not agree that the second law could be reduced to Newton’s laws alone.

    Note that the added element of randomness is an additional axiom of your low-level theory, one which is essentially equivalent to (or stronger than) the second law. So this approach doesn’t really reduce the second law to Newton’s laws, but only redresses it as fundamental randomness and postulates it as an independent axiom. That is not weak emergence, unless you could give an account of that fundamental randomness from Newton’s laws alone. I have already noted a similar approach — one can “push” the second law into the initial low-entropy condition at the time of the Big Bang. But that is also not derivable from any equations of dynamics, and counts as an independent axiom.

    I look forward to your article on this; this was my fifth and last.

    Agreed. It would be more prudent to postpone the discussion until then. 🙂

