The nature of the past hypothesis

by David Wallace

This is a special video presentation that Scientia Salon is publishing at the request of Barry Loewer, Director of the Rutgers Center for Philosophy and the Sciences [1]. It is part of a series of videos that came out of the 2014 conference on the philosophy of cosmology held in Tenerife, Spain [2].

In the video, the author revisits the role of a “low-entropy” past hypothesis in statistical mechanics, and argues that contrary to an apparently-widespread view, (i) the asymmetry in boundary conditions required for statistical mechanics to be derived is not well understood as an entropy constraint; (ii) it is misleading to see the high level of uniformity of the early Universe as a “low-entropy source” for present thermodynamical non-equilibrium.

_____

David Wallace is a philosopher of physics based at Balliol College, University of Oxford. He is a member of the Philosophy of Physics group at Oxford, one of the largest in the world. Wallace’s original training was in theoretical physics, and his current research interests are mostly in the philosophy of physics. He has been particularly active in developing and defending the Everett interpretation of quantum theory (often called the “Many-Worlds Interpretation”); his book on the topic, “The Emergent Multiverse”, was published in June 2012.

[1] Barry Loewer, Rutgers University.

[2] International Conference on Cosmology, Tenerife, Spain, 12-16 September 2014.



Categories: video


38 replies

  1. I think I have a high level grasp of what he is saying, but that may be a complete illusion. Maybe I will wait for our physicists to comment.


  2. As an experience, it reminded me of trying to read Slavoj Žižek. I understood every word but very few of the sentences.

    Liked by 2 people

  3. dadooq,
    Like the sentences have evolved randomly, reaching some state of equilibrium between past and future …


  4. Beautiful! 🙂

    The only problem I have with all this is that unfortunately I couldn’t attend that conference. 😦

    I agree with everything that David said, the comments by Albert and Rovelli, and David’s answers to the final two questions by Saunders and O’Raifeartaigh (though I didn’t quite understand Saunders’ actual question, and I feel he himself didn’t either 😉 ).

    Regarding David’s main points (i) and (ii), I can only paraphrase Rovelli’s comment “we (physicists) already know all this”.

    In particular, rephrasing the arrow of time problem in terms of the entropy of the early Universe is somewhat of an oversimplification, and it is the way physicists first attacked the issue, already in Boltzmann’s time. Since then, we have become increasingly aware of the shortcomings of the “low entropy did it” approach. So that approach is not nearly so widespread in the physics community, AFAIK. My feeling is that it’s just an effect of philosophers of physics not taking into account all the shortcomings that physicists have discovered in the last 150 or so years (since the time of Boltzmann).

    So I’d say that David is not saying anything “contrary to an apparently-widespread view” of physicists, but maybe only of philosophers of science. Which is just a miscommunication issue between the two communities, IMO. And David does a great job at setting that straight.

    Thanks for the lecture, I enjoyed every bit of it! 🙂

    Liked by 1 person

  5. That was a nice lecture, and I agree with the general thrust that trying to explain time asymmetry in terms of special starting conditions is problematic. I’ve argued this with Marko before, but why not simply put the time asymmetry into the low-level quantum mechanics?

    Indeed, de facto, this is more or less the case already. In order to make quantum mechanics actually work, one needs to have the wavefunction “collapse” or “decohere” or “split into many worlds” (pick your favourite), and all of those are time asymmetric.

    Of course none of those are actually understood, but, as the lecture explained, once we go to “mid level” physical laws we have time asymmetry, so why not have the decoherence introduce it?


  6. David,
    This presents a clear example of a point I keep trying to make, that physics is treating time as a narrative vector, from past to future, on which the present exists as a point and then assumes this point can go either direction. Much as you explain. As you say, in a very specialized context, going from start to finish is simply the exact opposite of going from finish to start and will put you back at the same place.
    What I keep trying to explain is that if we consider time as an effect of action, as it is frequency being measured, then it is the process by which future becomes past. To wit, the earth is not traveling that vector from yesterday to tomorrow, but tomorrow becomes yesterday because the earth turns.
    So rather than the present simply being a point on the vector, it is a process occurring within this state of physical existence. Duration is the state of the present, as the events form and dissolve, thus going from future to past, not a dimension external to it.
    The reason clocks can run at different rates is because they are separate actions. A faster clock burns energy faster and so recedes into the past quicker. The turtle is still plodding along, long after the hare has died.
    What determines the asymmetry of time is inertia. By definition, action goes one direction, not any other. The math is abstracted from this process, not the basis for it.
    As for determination, the past is determined, because it has occurred. While the laws deciding this process are set, the input into any event only arrives with its occurrence and not before, since information travels at a finite rate. So causality yields determination, not the other way around.
    As an effect of action, this makes time similar to temperature. Time is to temperature, what frequency is to amplitude. It’s just that while amplitude en masse is temperature, frequency en masse is static. Which is why measures of time require isolating a distinct frequency. Yet the overall effect of change is that multitude, just as temperature is a multitude of the amplitudes of those waves of energy.
    I’m used to being told I have to study the entire field to appreciate what I’m saying, but as a philosopher of science, you might appreciate that a general approach has its advantages. As it is, it does fit the issues you raise in the presentation.


  7. Very interesting talk and comments. Brodix, you may agree with me that our perception of time, like classical physics and color perception, is part of our Manifest Image, but unlike color we have not explained how our animal brains represent time, which is necessary for movement. The idea of time reversal is perhaps what our brains do naturally and subconsciously, or is the fundamental underlying process by which we perceive time. Or stated otherwise, we are not reversing time but reversing action, which may represent time.


  8. Victor,
    Basically, yes. Even today we see the sun as rising in the east and setting in the west, though we have come to realize it is the earth turning west to east. Much complex and quite accurate math was devised to describe this, in epicycles. The problem was in explaining it physically.
    Our minds function as a sequence of perceptions and so this vector of events is fundamental to our view of reality, just as the planet is the center of our view of the cosmos. As primates, we have close-set eyes, designed for judging distance, an attribute quite necessary for swinging around in trees and then for throwing objects, so our perception of space is biased toward the vector over the volume. Now measures of distance and duration are intertwined; consider the space between the crests of two waves, compared to the rate they pass a certain point. But so too are measures of volume and temperature, or pressure. Yet while we understand the connection, we also understand the differences, so no one tries to argue for “temperaturevolume”.
    This then raises the issue of space and as I’ve argued previously, the equilibrium of the vacuum is implicit in the speed of light as a constant. As we could reverse the premise of time stopping at C and placing clocks around in space, the one which runs the fastest, would be the closest to this absolute equilibrium.
    So we have the vacuum of space, with waves fluctuating in it and the essential forms of these waves are frequency and amplitude, i.e. time and temperature.


  9. Coel’s comment raises the question for me of why people say that low level physics is time reversal invariant in the first place, if physics at a low level is indeterministic.

    On another note, I was interested in the comment (if I heard correctly) that the job of a philosopher of science is to tell scientists what they already know.

    I can cash this out as an analogy of when I go to a customer to design an information system I will first find out how they are dealing with information at the moment and how they wish to be dealing with it.

    I will then model this information and take it back to the customer. In effect I am going back and telling them what they told me – or telling them what they already know.

    But the value I hopefully add is that I am showing it to them in a way they might not have thought of it before. And in fact I rarely do this without immediately identifying redundancies and processes which invite errors.

    So I can see that there can be value in telling people what they already know if by doing so you are presenting it in a way they have not previously thought about it.

    Liked by 1 person

  10. Gents,

    Yes our brains like everything else in nature are made from that first level which David Wallace shows. The philosopher of mind and science deals with how brains get to the next level of reality or how we developed into beings that operate in the volumetric space for ‘swinging around in trees and then for throwing objects’.

    My suspicion is that biological cells are no different than fundamental particles because they are repeatable throughout nature and are composed of more fundamental ‘particles’ which happen to be longer molecular chains (Massimo’s earlier realm of thinking). My idea is that neurons exhibit some type of biological level ‘quantum behavior’. The longer organic molecular chains in cells form slow events within the cell boundary, and neurons make the events even slower by unifying across boundaries. Francis Crick went from studying the longest molecule in nature to studying the containment vessel of consciousness. My theory has a panpsychist flavor but incorporates the other theories once I get into how the brain is actually structured.

    Pretty good theory I think.


  11. Robin Herbert: “Coel’s comment raises the question for me of why people say that low level physics is time reversal invariant in the first place, if physics at a low level is indeterministic.”

    Coel is wrong. When a virgin has her first night, she becomes a woman, and this simple fact has two points.

    P1: virginity is not a false illusion; it is a REAL state, similar to a quantum system before it has its night with other particles.

    P2: her first night is an irreversible process. Coel confuses the virgin state with the process which breaks that virginity.

    David Wallace talked about a very simple issue:

    Equation 1: {the time symmetry (in fundamental physics)} + {The “X”} = {the arrow of time (in nature phenomena and in some mid-level physics laws)}

    What is {The “X”}?

    If anyone knows {The “X”}, he right away knows the way to derive the following equations.

    E1, Alpha equation:
    Beta = 1/Alpha = 64 ( 1 + first order sharing + sum of the higher order sharing)
    = 64 (1 + 1/Cos A(2) + .00065737 + …)
    = 137.0359 …

    A(2) is the sharing angle, A(2) = 28.743 degree

    The sum of the higher order sharing = 2(1/48)[(1/64) + (1/2)(1/64)^2 + …+(1/n)(1/64)^n +…]
    = .00065737 + …

    E2, language to describe quark/lepton:
    String 1 = (V, A, A 1) = {1st, red, 2/3 e, ½ ħ} = red up quark.

    String 2 = (-A, V, V 1) = {1st, red, -1/3 e, ½ ħ} = red down quark.

    E3, Planck data (dark energy = 69.2; dark matter = 25.8; and visible matter = 4.82):
    Calculation: see https://scientiasalon.wordpress.com/2014/10/28/the-varieties-of-denialism/comment-page-1/#comment-9212

    E4, {delta P x delta S > =ħ}: it is DERIVED from the force F (dark energy) = ħ/ (delta S x delta T)

    GR (general relativity) is totally useless for these calculations (the 4-Es), and it must be wrong to play a part in the final physics theory (see http://www.quantumdiaries.org/2015/03/13/einsteins-most-famous-equation/#comment-1909398622 ).

    Wallace did not even remotely mention these 4-Es (the litmus test for correct physics); that is, he does not know the answer for {The “X”}.

    In his talk, {Initial (boundary) conditions,
    Low level to high level mapping,
    Low entropy at earlier time,
    Decoupling of …,
    Mid-level physics laws,
    Huge number of degrees of freedom, etc.},

    No, none of these has anything to do with {The “X”} of Equation 1.

    No, not a single point of Wallace is about addressing the issue of {The “X”}.

    {The “X”} is the key issue discussed in detail in the book “Super Unified Theory” (available at http://inspirehep.net/search?p=find+a+gong,+jeh+tween ).

    There are two arrows in THIS universe:
    A1, Arrow of time (soft arrow), T-arrow (TA).

    A2, Arrow of entropy (hard arrow), E-arrow (EA).

    While TA does make contribution to EA, they two are completely different arrows. More, later.


  12. Hi Massimo,
    Thanks for posting this. The conference was enormously interesting. If your readers want to see the other talks they can be found at https://www.youtube.com/playlist?list=PLV4bq2vDW15-qO3ho8GNnqeZHoNjuBDC-
    As you and some of your readers are probably aware, David Wallace is responding to an account of temporal asymmetries (time’s arrows) and the foundations of thermodynamics that derives from Boltzmann’s seminal work and has been developed by many others, including recently by David Albert and Sean Carroll (two other speakers at the conference). In David Albert’s version, the fundamental laws of the universe include, in addition to the dynamical laws, the Past Hypothesis (PH), a proposition that specifies the macro state of the universe at the time of the big bang as having very low entropy, and a uniform probability distribution over the micro states that realize this macro state. There are arguments that this proposal is able to account for temporal asymmetries, including the second law of thermodynamics, the asymmetry of knowledge (records and memory), the fact that we can influence events in the temporal direction away from the PH (the future) but not in the opposite temporal direction (the past), the fact that causes typically precede their effects, and so on. The arguments are complicated, philosophically subtle, and to an extent controversial (see Albert’s Time and Chance and After Physics and Carroll’s From Eternity to Here). If successful (as I think they are), they solve the puzzle of how temporal asymmetries can be squared with physics whose fundamental dynamical laws are time reversal invariant (TRI). In his talk Wallace is to an extent deflating the importance of the Past Hypothesis and, relatedly, deflating the urgency felt by some physicists, e.g. Penrose and Carroll, of explaining the PH (explaining why the universe occupied such a low entropy condition at the time of the BB). The PH has struck some (especially Penrose) as very puzzling, since such a low entropy condition is very improbable relative to the Boltzmann probability measure.
Wallace in his talk argues that both the problem of squaring time’s arrow with TRI physics and Penrose’s puzzle are not as puzzling as they first seem, since physics at a non-fundamental level includes many theories that involve temporally asymmetric dynamical laws (e.g. the Boltzmann equation, equations of radioactive decay), and these do not involve the PH. These mid-level laws may well play an important role in explaining the higher level temporal asymmetries. But of course this just raises the question of how the mid-level laws are related to the fundamental TRI laws. Wallace’s main point is that the PH is not involved in accounting for the mid-level laws. He points out that the macro states involved in such laws are typically realized by micro states that evolve asymmetrically to the future. So, for example, typical micro states that realize a block of ice in warm water evolve to the future to states in which the ice is more melted. A micro state that evolves to a larger ice cube and warmer water would be one that would strike us as very delicately arranged and untypical. This was Boltzmann’s original insight. In his and the Albert-Carroll account it is captured by the probability distribution, which assigns very small probability to such states. Wallace is right that the PH doesn’t enter the explanation of the temporally asymmetric mid-level physical laws holding toward the future. So there is really no dispute between Wallace and the Albert-Carroll account about this. However, the PH is absolutely essential to establishing that these laws (in particular the second law of thermodynamics) hold throughout the whole history of the universe. Without it the probability distribution would lead to crazy inferences about the past, e.g. that the ice cube was smaller and the water colder in the past, because the dynamics is temporally symmetric. Since the second law holds from the BB, the entropy at that time was very tiny.
Further, the fact that entropy was lower in the past plays a central role (together with the probability distribution) in explaining the asymmetries of records and influence. So I disagree with the suggestion that it is misleading to see the low entropy constraint (the PH) of the early universe as an essential component of the explanation of the present thermodynamical non-equilibrium and of the temporal arrows, including the second law. It is a crucial component of the explanation of the temporal asymmetries.
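
The retrodiction point can be made vivid with a toy model. Below is a minimal sketch (my own illustration, not from the talk or the thread) using the Ehrenfest urn model, whose dynamics obeys detailed balance: conditioning only on finding the system far from equilibrium at some moment, the statistics predict states closer to equilibrium both after and, absurdly for real thermodynamics, before that moment. This is exactly the "ice cube was smaller in the past" inference that the PH is meant to block.

```python
import random

# Ehrenfest urn model: N balls in two urns; at each step a uniformly
# random ball jumps to the other urn. The chain satisfies detailed
# balance, so its statistics are symmetric under time reversal.
random.seed(0)
N = 50
steps = 200_000

x = N // 2          # balls in urn A, starting at equilibrium
traj = [x]
for _ in range(steps):
    if random.random() < x / N:
        x -= 1      # a ball leaves urn A
    else:
        x += 1      # a ball enters urn A
    traj.append(x)

# Interior times where the chain is found far from equilibrium:
lo = 18             # well below the mean N/2 = 25
times = [t for t in range(1, steps) if traj[t] <= lo]

at     = sum(traj[t]     for t in times) / len(times)
before = sum(traj[t - 1] for t in times) / len(times)
after  = sum(traj[t + 1] for t in times) / len(times)

# Conditioned only on being far from equilibrium "now", the model
# predicts states closer to equilibrium in BOTH time directions.
print(f"before: {before:.2f}  at: {at:.2f}  after: {after:.2f}")
```

The same time-symmetric reasoning applied to the real universe would retrodict a higher-entropy past, which is why the PH is added as an extra boundary condition.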

    Liked by 1 person

  13. Hi Robin,

    Coel’s comment raises the question for me of why people say that low level physics is time reversal invariant in the first place, if physics at a low level is indeterministic.

    Interesting question! It’s probably because the time-reversal-invariant aspects are the ones that are understood, whereas the indeterminacy is the part that’s not understood.

    The raw Schroedinger equation is deterministic, and can be run backwards or forwards in time. But, the raw Schroedinger equation won’t predict any measurement on its own, you also need to “collapse” the wavefunction, and that seems to be non-deterministic and non-TRI.
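
The reversibility of the bare Schroedinger evolution is easy to check numerically. A minimal sketch (illustrative only; the Hamiltonian here is just a random Hermitian matrix): evolve a state forward under U = exp(-iHt), then apply U† and recover the initial state.

```python
import numpy as np

rng = np.random.default_rng(42)

# A random Hermitian "Hamiltonian" on a 6-dimensional Hilbert space
# (purely illustrative -- any Hermitian H behaves the same way).
A = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H = (A + A.conj().T) / 2

# Unitary time evolution U = exp(-iHt), built from the eigenbasis of H.
t = 1.7
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Normalized initial state.
psi0 = rng.standard_normal(6) + 1j * rng.standard_normal(6)
psi0 /= np.linalg.norm(psi0)

psi_fwd = U @ psi0                 # evolve forward by t
psi_back = U.conj().T @ psi_fwd    # evolve backward by t (apply U-dagger)

print(np.allclose(psi_back, psi0))   # the state retraces exactly
```

The non-TRI, non-deterministic ingredient only enters with the collapse postulate, which has no analogue in this unitary picture.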

    Hi loewer2013,

    However the PH is absolutely essential to establishing that these laws (in particular the second law of thermodynamics) hold throughout the whole history of the universe.

    Sticking in some indeterministic dice-throwing between the lowest-level laws and the mid-level laws (such as the 2nd law of thermo and the Fokker-Planck equation that Wallace mentioned) would also ensure that those laws held everywhere, regardless of the PH.


  14. There is an old saying that God did not create man. Man created God.

    The same is true for math. Math and laws in general are the more consistent patterns we recognize in nature. In the dichotomy of order and chaos, they are on the ordered side of the spectrum. As such, they are not the essence, the basis of, the seed from which reality springs. They are the hard residue, the skeleton or shell that is left over when all the ambiguous, chaotic and indeterministic aspects have been distilled away.

    Now some mathematical equation might well work forward as well as it does backward, but the presumption that a quantum particle, a billiard ball, or you walking out your door and realizing you forgot something are the same going one way as the other is nonsense.

    For one thing, reality is composed of energy and whether the energy is in the form of the billiard ball, or its momentum, it is still energy. So reversing that ball requires the additional energy to both stop it and then start it in the opposite direction. If we were to reverse all the atomic dynamics making up that ball, we would effectively erase it from the universe and replace it with an exact copy, with all energies perfectly reversed.

    The premise of entropy is that usable energy in a closed system can only decline. Since energy is conserved, unless it is radiated out of that system, it is combined, not lost. The cold and the hot become warm.

    The directional quality of the inertia of energy is more fundamental than the entropy of a particular system. An arrow of time based on the inertia of energy is more fundamental than the dynamics of a system constructed from it.

    When you measure time, you are not measuring entropy. You are measuring the frequency of a particular action. The scalar quantity of that measurement is non-directional, but the action being measured is not and the action is foundational to the measure, not the other way around. The math is abstracted from the reality. It is descriptive, not explanatory.

    I realize I’m beating my head on the wall, but eventually the reset button will be pushed.


  15. Loewer2013,

    So I disagree with the suggestion that it is misleading to see the low entropy constraint (the PH) of the early universe as an essential component of the explanation of the present thermo-dynamical equilibrium and for the temporal arrows including the second law. It is a crucial component of the explanation of the temporal asymmetries.

    The point of the lecture was to argue that low entropy is not the “real cause” of the arrow of time. Low entropy is merely a gross-structure description (a convenient tool, a by-product, an epiphenomenon if you will) of the underlying microscopic state of the Universe immediately after the Big Bang. The actual problem lies in understanding the reasons *why* the macroscopic initial state had low entropy in the first place.

    So in this sense, simply postulating that the Big Bang had low entropy is not really satisfactory as a solution of the arrow of time problem. We want to understand what is going on in terms of microphysics, i.e. we want *reductionism* of the low-entropy postulate to the underlying fundamental theory. And that is what is missing. David spent a large portion of his lecture trying to explain the mechanism and details of reductionism precisely because of this.


  16. Well, is a tomato plant time reversible? No but yes if you take one of its seeds and regrow it. A seed is not a low entropy condition but a very high entropy device. Perhaps the BB was actually a seed or the initial conditions are a chain reaction in the space time fabric. Time does not really exist but is an appearance produced by actions.


  17. No one can truly tell the past, nor can anyone tell the future. As for now, how would science define now? What is now, how long does now last, what is the speed of now, how is now measured, and what time is now? Please do tell us science, measure away, tell us the hypothesis of now. Thanks, =

    Liked by 1 person

  18. A good introduction. As Carlo Rovelli mentions, there’s nothing new in it for physicists, but it’s an excellent attempt to describe a quite subtle situation for an audience of laypeople.

    Just one big and a few small remarks.

    If there’s one thing David Wallace should have stressed more forcefully, it’s that the fundamental physical laws need initial conditions. They don’t tell what the initial conditions are. They only tell what’s going to happen given initial conditions. This means that we don’t know why the initial conditions of the universe were what they were. It seems that the universe started with low entropy. Why? Why didn’t the universe start in a state of (near) equilibrium? Afaik there’s no explanation that convinces everybody. But this is a problem for all physical laws. Even if we couldn’t reduce irreversible mid-level laws to time-reversible microscopic laws, the problem remains. You still can’t explain why the universe did not start in a state of (near) equilibrium.

    My own opinion is that there’s no contradiction between the time-reversibility of fundamental laws and the irreversibility of mid-level laws. Mid-level laws are about observables (mathematical operators) defined on an incredible number of degrees of freedom. It turns out that the overwhelming majority of states in phase space describe microscopic configurations that lead to irreversible behavior of these observables in a realistic time-frame – even if the underlying dynamical laws are time-reversible.

    No need for additional mechanisms or the collapse of the wave function. Compare it to a swimming pool filled with a billion billion white balls and one red ball. A blind person is always going to pick a white ball within a realistic time-frame.
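
The swimming-pool analogy is at bottom a counting fact, and the counting can be checked directly. A small sketch (numbers chosen for illustration): for N two-state "molecules", the fraction of the 2^N equally weighted microstates whose coarse-grained left/right split lies within 10% of 50/50 grows rapidly toward 1 as N grows.

```python
from math import comb

def fraction_near_equilibrium(n, tol=0.1):
    """Fraction of the 2^n equally weighted microstates of n two-state
    'molecules' whose left/right split is within tol of 50/50."""
    lo = int(n * (0.5 - tol))
    hi = int(n * (0.5 + tol))
    near = sum(comb(n, k) for k in range(lo, hi + 1))
    return near / 2**n

# The "near-equilibrium" fraction approaches 1 as n grows:
for n in (10, 100, 1000):
    print(n, fraction_near_equilibrium(n))
```

With realistic particle numbers (~10^23) the fraction differs from 1 by an amount far too small to ever observe, which is the blind swimmer picking a white ball.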

    Now the small remarks.

    Around 09:00 David Wallace gives the example of an eclipse. But that’s a system with few degrees of freedom. The rest of his talk is about systems with many degrees of freedom.

    Around 17:45 he introduces the idea of coarse-graining, but afaik coarse-graining (in the technical sense of the word) is not necessary to explain the second law from time-reversible fundamental laws. I think he should have avoided this expression, although I understand what he is trying to say.

    Between 20:30 and 23:00, when he’s talking about the use of statistics, I feel he suggests a degree of subjectivity that isn’t there. He seems to suggest that we are supposing some sort of behavior by using statistics. That’s not necessarily true. It’s a recognition of the fact that we don’t know the microscopic states of the system.

    Around 28:30 he talks about the fact that not all initial microscopic states lead to the irreversible behavior we observe in mid-level physical laws. But then he says about such states that they need to be carefully specified. That’s a bit loose. These states are states like all the others. The point is that they are sitting in an incredibly tiny corner of phase space.


  19. Patrick,

    It turns out that the overwhelming majority of states in phase space describe microscopic configurations that lead to irreversible behavior of these observables in a realistic time-frame – even if the underlying dynamical laws are time-reversible.
    […]
    Around 28:30 he talks about the fact that not all initial microscopic states lead to the irreversible behavior we observe in mid-level physical laws. But then he says about such states that they need to be carefully specified. That’s a bit loose. These states are states like all the others. The point is that they are sitting in an incredibly tiny corner of phase space.

    No, there’s not an overwhelming majority of irreversible states, and those special carefully specified states don’t sit in an incredibly tiny corner of phase space. In a sense, such states fill up precisely one half of the total phase space of what we call “irreversible” processes. For example, for *every* phase point describing the initial microstate of an egg rolling off the table and breaking into a gazillion pieces down on the floor, there is *exactly one* phase point describing the initial microstate of a goo on the floor picking itself up, assembling itself into a full egg and jumping onto the table.

    As long as equations of motion are deterministic and time-reversible (and all relevant equations of microphysics are such), there is a one-to-one correspondence between the generic initial states of “irreversible” processes and specially crafted initial states which reverse those “irreversible” processes. So in a sense there is an exactly equal number of the generic and the specially crafted states, filling up the phase space.
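
The one-to-one correspondence between generic and specially crafted states can be exhibited in a toy simulation. A minimal sketch (harmonic oscillators with a leapfrog integrator, which is time-reversible up to floating-point error): evolve forward, flip every velocity, evolve again under the same dynamics, and the system retraces its history back to the velocity-reversed initial state.

```python
import numpy as np

def leapfrog(x, v, force, dt, steps):
    """Time-reversible leapfrog (velocity Verlet) integrator."""
    x, v = x.copy(), v.copy()
    a = force(x)
    for _ in range(steps):
        v += 0.5 * dt * a
        x += dt * v
        a = force(x)
        v += 0.5 * dt * a
    return x, v

# Toy system: independent harmonic oscillators, F = -x.
force = lambda x: -x

rng = np.random.default_rng(1)
x0 = rng.standard_normal(5)
v0 = rng.standard_normal(5)

# Forward evolution, then flip every velocity and evolve again:
x1, v1 = leapfrog(x0, v0, force, dt=0.01, steps=2000)
x2, v2 = leapfrog(x1, -v1, force, dt=0.01, steps=2000)

# The "specially crafted" reversed state retraces the whole history.
print(np.allclose(x2, x0), np.allclose(v2, -v0))
```

The velocity flip is the one-to-one map: every forward-evolving phase point has exactly one reversed partner, so the asymmetry cannot come from counting such points.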

    That said, there is an independent issue of how we assign a probability measure (i.e. a “volume element”) across that phase space so that the “generic” half of states have reasonably high probability, while the “specially crafted” half have vanishingly low probability. Any such probability measure will be highly nontrivial and extremely weird-looking. Beyond the second law of thermodynamics (which is merely a high-level coarse-grained macroscopic description) we have absolutely no clue how to deduce that weird probability measure from the laws of microphysics.

    That’s the whole problem. You are not allowed to simply assert that all such specially crafted states are sitting in an incredibly tiny corner of phase space — instead, you are required to provide an explanation *why* they do so, in terms of microphysics. And laws of microphysics, being time-reversal invariant, are completely incompetent and helpless in providing any such explanation. So the second law of thermodynamics so far remains irreducible to the known microphysics.

    One appealing possible way out of this situation is what Coel mentioned above — use the fundamental time-irreversible evolution present in QM to deduce that weird-looking probability measure. That certainly sounds like a good idea, but until we resolve the measurement problem itself, this idea amounts to explaining one mysterious fact of Nature using another, even more mysterious fact of Nature. So it isn’t really an explanation, but merely rephrasing one problem in terms of another (harder) problem. And even this rephrasing hasn’t been done successfully yet, despite some valiant attempts.


  20. My other question might be a bit confused, so I would not blame anyone for not attempting to answer.

    To start with, I can see two reasons (there may be more) for why a deterministic system might not be time reversible.

    Firstly it may be that there are states which can be arrived at by more than one cause (ie a ball sitting in a gully between two slopes)

    Secondly there may be cases where the time reverse transform may imply some sort of prescient action. To illustrate this at the macro level, imagine a basketball hitting a sprung backboard. The board is pushed away by the force of the ball and then springs back into place. But the time reverse of this would require that the backboard moves away as the ball approaches and then snaps back into place just in time to meet the ball and push it into the hands of the player.

    At the micro level, the wave behaviour of the system depends upon facts about the particle behaviour even when it does not decohere, such as the way in which an electron bounces off the side of the slit in the double slit experiment.

    And the particle behaviour has the second kind of time irreversibility described above. For example Richard Feynman describes a variant on the double slit experiment where the slit screen has a certain amount of give so that the electron’s position and momentum can be detected (he is illustrating the connection between uncertainty and decoherence in such a system). If the momentum can be measured accurately enough then the system remains coherent, since the position cannot be measured accurately enough to determine which slit it passed through.

    So, looking at the version of this system that remains coherent, the wave behaviour of the system as a whole depends upon just the kind of time irreversibility in the basketball example.

    So a time reverse transformation of the wave behaviour of this would depend upon just such ‘prescient’ features of the particle behaviour – the screen moving to the left in anticipation of the approach of an electron and snapping back in time to meet the electron.

    So I can’t see how even the coherent behaviour of the system could be considered time reversible.

    Now obviously I haven’t just seen something that physicists have unaccountably missed, so this must represent some misunderstanding I have about quantum physics or time reversibility.

    But I am finding that I can’t really understand the possible solutions offered until I can understand the problem. To me it seems time irreversible at any level in one way or another.


  21. Hi Marko,

    For example, for *every* phase point describing the initial microstate of an egg rolling off the table and breaking into a gazillion pieces down on the floor, there is *exactly one* phase point describing the initial microstate of a goo on the floor picking itself up, assembling itself into a full egg and jumping onto the table.

    However, as I said above, I still don’t see how they are time reversible. It is not so much a problem of a smashed egg reassembling itself, it is more that if we take the position and momentum of the egg just before it hits the ground (let us say straight down and at a certain velocity), couldn’t that have resulted from more than one prior condition?

    So why does it leap onto the table, as opposed to leaping onto another table at an equal distance on the other side, or onto a nearby rubber ball and thence a graceful arc across the room into the hands of a toddler?

    And is there a transform of the equations of motion that would make the ball start to roll towards the approaching egg, and the surface of the rubber ball start to presciently deform in anticipation of the approaching egg and then snap back a little as the egg lands neatly into the waiting deformation?

    Now someone might say that if we take this at a molecular level then there are interactions of air molecules that could only have resulted from a certain prior condition, but let us say that this is happening in a vacuum with the toddler suitably attired in a space suit.

    These appear to be problems even if the underlying physics is deterministic.


  22. Marko,

    Afaik your opinion is not shared by the majority – the overwhelming majority? 🙂 – of people active in statistical mechanics. The idea that you can get irreversible macroscopic phenomena (on a realistic timescale) from time-reversible microscopic laws, for the overwhelming majority of initial conditions, is not controversial. There are simple examples, one of which (the Kac Ring Model) can be found in “Science of Chaos or Chaos in Science?”, an article by Jean Bricmont that can be found easily on the web. It’s an old article (I already recommended it on Scientia Salon) but it’s a good read and an excellent point of departure for people who want to have this discussion. Not only for physicists (there’s nothing new in it for them) but for everyone who knows a minimum of mathematics. The article explains the situation with considerable (even philosophical) subtlety. Tell us where the Kac Ring Model goes wrong. I’m really interested.
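    For anyone who would rather play with the model than just read about it, here is a quick toy simulation (my own sketch of the standard Kac ring, not code from Bricmont’s article): balls on a ring step clockwise and flip colour when they cross a marked edge. The dynamics is exactly reversible, yet a “low-entropy” all-black start relaxes towards a half-black mixture.

    ```python
    import random

    def kac_forward(colors, markers):
        # One Kac ring step: every ball moves one site clockwise and
        # flips colour (0 <-> 1) when it crosses a marked edge.
        n = len(colors)
        return [colors[i - 1] ^ markers[i - 1] for i in range(n)]

    def kac_backward(colors, markers):
        # Exact inverse step: move counterclockwise, flipping at the same edges.
        n = len(colors)
        return [colors[(i + 1) % n] ^ markers[i] for i in range(n)]

    random.seed(0)
    n, steps = 2000, 100
    markers = [1 if random.random() < 0.1 else 0 for _ in range(n)]
    state = [1] * n                     # "low-entropy" start: all balls black

    forward = state
    for _ in range(steps):
        forward = kac_forward(forward, markers)
    print(sum(forward) / n)             # close to 0.5: looks irreversible

    back = forward
    for _ in range(steps):
        back = kac_backward(back, markers)
    print(back == state)                # True: the microdynamics is reversible
    ```

    The point of the toy is exactly Bricmont’s: the relaxation towards 0.5 holds for the overwhelming majority of marker configurations, even though every single trajectory can be run backwards perfectly.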

    > (…) but until we resolve the measurement problem itself, this idea amounts to explaining one mysterious fact of Nature using another, even more mysterious fact of Nature. (…) So it isn’t really an explanation, but merely rephrasing (…) And even this rephrasing hasn’t been done successfully yet, despite some valiant attempts.

    Agree.

    Is David Wallace going to react on this forum?


  23. We human beings are terrible temporal chauvinists — it’s very hard for us not to treat “initial” conditions differently than “final” conditions. But if the laws of physics are truly reversible, these should be on exactly the same footing — a requirement that philosopher Huw Price has dubbed the Double Standard Principle. If a set of initial conditions is purportedly “natural,” the final conditions should be equally natural. Any theory in which the far past is dramatically different from the far future is violating this principle in one way or another. In “bouncing” cosmologies, the past and future can be similar, but there tends to be a special point in the middle where the entropy is inexplicably low.
    http://preposterousuniverse.com/eternitytohere/faq.html

    My own view: Multiple Histories (in which the universe has multiple possible cosmologies, and in which reasoning backwards from the current state of the universe to a quantum superposition of possible cosmic histories makes sense) plays a role in all this, somehow.
    http://en.wikipedia.org/wiki/Multiple_histories


  24. Loewer2013 and all,

    Thank you for your explanatory comment, which went a long way towards bringing David Wallace’s talk within reach of Main Street. This is the essential ingredient of any presentation on Scientia Salon. As the physicists in the audience and on this site have said there was nothing new in the talk for them, I assume its intended audience was other philosophers. It is Massimo’s purpose for this site to take the next step and present this material to a wider audience. I understand that the “raw data”, as it were, should be presented for those who can operate at that level. The rest of us need an interpreter.

    Like many others, I am fascinated by the apparent discontinuity represented by the sudden shift from temporal asymmetry to temporal symmetry in physics. I am also fascinated by another possible symmetry, represented by reports from meditators and mystics (ok, chin up guys, it’s not that painful) that they encounter certain states that involve a shift from an awareness of time to timelessness. Maybe one day we will get our Galileo or Einstein for this range of experiences also, but meanwhile I wonder what it might mean for our view of ourselves and our world if our “manifest image” is topped and tailed by a timeless reality?


  25. Hi Marko,

    One appealing possible way out of this situation is what Coel mentioned above — use the fundamental time-irreversible evolution present in QM to deduce that weird-looking probability measure. That certainly sounds like a good idea, but until we resolve the measurement problem itself, this idea amounts to explaining one mysterious fact of Nature using another, even more mysterious fact of Nature.

    Yes, fair point. But your reply to Patrick and the entire concept of explaining the 2nd law of thermo in terms of the “past hypothesis” of the starting state of Big Bang, all depend on a rigorously deterministic universe.

    Now, I recall a rather nice essay called Farewell to determinism by a certain Marko Vojinovic 🙂 in which the author states that “We all know that quantum mechanics is probabilistic, rather than deterministic”, and later “And this concludes the outline of the argument: we must accept that the laws of Nature are intrinsically nondeterministic”.

    Now, probabilistic non-determinism scrambles information. One cannot have tightly specified starting microstates that would then exhibit anti-second-law behaviour, since, as it evolved, the probabilistic non-determinism would quickly scramble the information in that microstate. Whatever microstate you start in, you’re then going to head for most-probable macrostates (=> 2nd law).

    Thus the PH seems pretty irrelevant to me. The non-TRI in the mid-level laws is entirely and adequately explained by probabilistic non-determinism in the low-level laws. Mid-level laws such as Fokker–Planck and the 2nd law of thermodynamics are statements of probability, arising out of the low-level dynamical laws being probabilistic.

    I hesitate to disagree with someone like Sean Carroll, who understands quantum mechanics way better than I do, but if there’s a flaw in the above argument, that then requires the PH solution, I’d be interested to know what it is.
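    To illustrate the scrambling claim, here is a toy sketch of my own (an Ehrenfest-style urn model, a standard stand-in for stochastic microdynamics, not real quantum evolution): start every particle in one half of a box, let pure chance move them, and the most-probable macrostate takes over regardless of how special the starting microstate was.

    ```python
    import random

    def ehrenfest(n_balls=1000, n_steps=20000, seed=1):
        # Ehrenfest urn model: at each step a uniformly chosen ball hops
        # to the other urn. The microdynamics is stochastic (not TRI),
        # and the occupancy of urn A relaxes toward n_balls / 2 from
        # *any* initial microstate, however tightly specified.
        rng = random.Random(seed)
        in_a = [True] * n_balls         # special start: everything in urn A
        for _ in range(n_steps):
            i = rng.randrange(n_balls)
            in_a[i] = not in_a[i]
        return sum(in_a)

    print(ehrenfest())                  # close to 500: equilibrium wins
    ```

    No past hypothesis appears anywhere in the model; the drift to the most-probable macrostate comes entirely from the probabilistic dynamics, which is the intuition behind my argument above.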


  26. Hi Robin,

    > it is more that if we take the position and momentum of the egg just before it hits the ground (let us say straight down and at a certain velocity),

    I’ve tried to explain this to you before. Let’s try one more time.

    The specific examples you cite are not particularly compelling as each of them will produce a different trajectory and so could be recovered from information about the egg’s momentum alone (which will not be straight down unless it was dropped from directly above). Other scenarios you have cited in other instances of this argument are trickier and need the following response:

    The position and the momentum of the egg just before it hits the ground is not generally enough information. On determinism, to play back the state of the egg 30 seconds with perfect accuracy, you would in fact need complete information about all the particles that are within 30 light-seconds of the egg. Only then will you find out what happened, because each of the different ways the egg might have come to fall with that momentum will have distinct knock-on effects on the world at large (which will propagate outwards at less than the speed of light), and the state resulting from these different effects are what will settle the question of how the egg came to fall.


  27. Hi DM,

    I’ve tried to explain this to you before. Let’s try one more time.

    Well no you are not explaining, you are simply repeating the claim. I already know the claim.

    The issue is that you appear to be under the impression that determinism in itself can guarantee reversibility but it cannot.

    “Exactly one next state” does not imply “Exactly one prior state”. Or to put it another way “exactly one next state” does not imply a one-to-one correspondence between states.

    The laws of motion are time reverse invariant in that the negation of t produces just the same equations.

    I am saying that it seems to me that even this does not guarantee a one-to-one correspondence between states – that TRI is not like, say, an invertible function.

    The specific examples you cite are not particularly compelling as each of them will produce a different trajectory and so could be recovered from information about the egg’s momentum alone

    Well of course they have different trajectories; that is the whole point.

    But, no, you can’t infer prior trajectory from momentum. A given momentum is not associated with exactly one prior trajectory.

    You can’t even infer prior trajectory from momentum, position and the forces acting on the body. Although these things are enough, ceteris paribus, to calculate the next state, they are not enough to calculate a prior state.

    Even if you don’t agree that the three particular events that I described could result in one particular momentum and position, it is nevertheless the case that any given momentum and position can result from any number of trajectories.

    And it is no use talking about every particle within 30 light-seconds of the object because the same thing will apply to them.

    So this is basically what I am asking. If TRI implies a one-to-one correspondence between states then, yes there is a particular state associated with the egg reassembling itself and retracing a particular trajectory.

    But for the reasons given it seems to me that TRI does not imply such a correspondence and that you cannot assume that there is a particular state associated with that egg reassembling itself and jumping onto the table.

    Again, I am not saying “physicists are wrong I am right”. But if I am wrong I would be interested in knowing how what I am saying here is wrong.

    Time reversal invariance just doesn’t seem to give you that “run the film backwards” reversibility that people describe; I would be interested in why people say that it does.


  28. Hi Robin,

    > The issue is that you appear to be under the impression that determinism in itself can guarantee reversibility but it cannot.

    I understand and I agree. I understood us to be discussing determinism in the context of time reversal invariant laws such as those of Newtonian dynamics. Where the laws are time reversal invariant, determinism does indeed imply a one-to-one relationship between earlier states and later states.

    > that TRI is not like, say, an invertible function.

    And yet on determinism it is. For a deterministic law, there is one possible future state at a particular time. If the law is TRI, then what is true of extrapolation to the future is also true of extrapolation to the past, therefore there must be only one past state at a particular time.

    > A given momentum is not associated with exactly one prior trajectory.

    Yes it is, for an isolated system (well, to be precise, you need both the position and the momentum). You can only get different prior trajectories when other bodies have interfered with it, which is why to be precise you need information about all the other bodies in the vicinity.

    Your idea that a given observed state can arise from any number of initial conditions in a closed system is simply incorrect. It may sometimes appear that way because many subtly and undetectably different versions of an observed state can sometimes be produced from wildly different initial states, as in your example of a rock rolling down a valley where it seems it could have rolled down either slope. In either case we seem to have the same final state, but in actual fact there will be tiny differences. There must be, if TRI and determinism are both true.

    > Although these things are enough, ceteris paribus, to calculate the next state, they are not enough to calculate a prior state.

    The two situations are the same. You can only calculate the prior state if you have the information you need about any bodies that might have interfered with it in the past. You can only calculate the future state if you have the information you need about any bodies that might interfere with it in the future.

    > And it is no use talking about every particle within 30 light seconds of the object because the same thing will apply to them.

    Not really. Only if you are interested in the histories of all of those particles also. If your interest is in what happened to the egg 30 seconds ago, you only need to focus on the particles which could possibly have had a causal effect on that. Particles which would have been out of causal contact with the egg 30 seconds ago you can forget about.
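    To make the “run the film backwards” idea concrete, here is a toy sketch of my own (assuming nothing beyond Newtonian mechanics): integrate a harmonic oscillator forward with a time-symmetric scheme, flip the sign of the velocity, integrate forward again, and you land back where you started. That is exactly the one-to-one pairing of past and future states that determinism plus TRI guarantees.

    ```python
    def verlet(x, v, n_steps, dt=0.01, k=1.0):
        # Velocity-Verlet integration of a unit-mass harmonic oscillator
        # (force F = -k x). The update rule is time-symmetric, mirroring
        # the time reversal invariance of the underlying Newtonian law.
        a = -k * x
        for _ in range(n_steps):
            x += v * dt + 0.5 * a * dt * dt
            a_new = -k * x
            v += 0.5 * (a + a_new) * dt
            a = a_new
        return x, v

    x1, v1 = verlet(1.0, 0.0, 10000)    # run the film forwards
    x2, v2 = verlet(x1, -v1, 10000)     # flip velocities, run forwards again
    print(x2, -v2)                      # recovers (1.0, 0.0) to rounding error
    ```

    Note the hedge in the comment: floating-point rounding means the recovery is only to numerical precision, but in exact arithmetic the forward map composed with velocity-flip is its own inverse.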


  29. victorpanzica: “… A seed is not a low entropy condition but a very high entropy device.”

    Based on what? Related to what?
    The absolute entropy in SEED cannot be calculated, but its relative entropy can be easily calculated.

    In the process of {Seed to plant to tomato fruit}, seed has smallest entropy. In ONE sense (one definition of entropy), the entropy increase for the entire process is not very big if the ENVIRONMENT is excluded from the calculation. The major increase of entropy in this process is exported into the environment.

    The problem here is not about the right or wrong but is about many using totally wrong as right all the time.

    loewer2013: “… see Albert’s Time and Chance and After Physics and Carroll’s From Eternity to Here …”

    Do Albert or Carroll have the answer for this {arrow of time arising process}? No, they don’t. Did David Wallace give an answer? No, he did not. Then, what is your point?

    Marko Vojinovic: “So the second law of thermodynamics so far remains irreducible to the known microphysics.”

    Exactly. Reason: 1) all fundamental physics laws are related to ‘time’ (timelessness or timed); 2) the arrow of time (TA) does make a contribution to the entropy-arrow (EA), but EA is totally different from TA.

    Marko Vojinovic: “For example, for *every* phase point describing the initial microstate of an egg …, there is *exactly one* phase point describing the initial microstate of a goo on the floor picking itself up, assembling itself into a full egg and jumping onto the table.”

    So what? As soon as the arrow-of-time arose, all physical processes became irreversible in terms of entropy {although TA is different from EA}.

    Coel: “Thus the PH seems pretty irrelevant to me.”
    Amen!

    Coel: “But your reply to …, all depend on a rigorously deterministic universe.
    Now, I recall a rather nice essay called Farewell to determinism …”

    Amen!
    This is the key issue. More, later.

    The issue is very simple: {timelessness (in fundamental physics laws) + process (***) = arrow of time (TA)}

    Process (***) =*= {64, 48}
    =*= {the Cabibbo and Weinberg angles}
    =*= {Alpha equation; locking three measuring rulers}
    =*= {quark/lepton LANGUAGE: in terms of space-time sheet}
    =*= {Force (dark energy) to derive (delta P x delta S > =ħ)}
    =*= {Planck data (dark energy = 69.2; dark matter = 25.8; and visible matter = 4.82}
    =*= {spin (1/2), see https://tienzengong.wordpress.com/2014/02/16/visualizing-the-quantum-spin/ }
    =*= {GR (general relativity) is wrong to play a role in the final physics}

    Without knowing Process (***), all the above are unanswered and unanswerable issues. As those issues are now answered precisely, the Process (***) is known precisely.

    Michael J Ahles: “No one can truly tell the past, nor can anyone tell the future.”

    Who is the {No one}? Why make one’s ignorance the ignorance of others?


  30. Robin,

    DM’s explanation is correct — if we have equations that deterministically (i.e. uniquely) predict the future from the present, and if those equations are time-reversal invariant, then they must uniquely predict the past from the present as well. Otherwise, they are either not invariant or not deterministic.

    In your case of the ball rolling from one of the two hills — if there is no friction between the ball and the hills, the ball will never come to a rest at the bottom, but will keep climbing back and forth indefinitely. If there is friction, the ball will eventually come to rest because its kinetic energy will be transferred via friction forces to the kinetic energy of the surface molecules of the two hills (heat, in simple terms). After the ball settles, you can identify which hill it came from by inspecting the state of the molecules in the two hills — one side will be slightly hotter than the other, and that is the side from which the ball initially rolled down.

    A similar analysis can be made for the egg example and any other example. The bottomline is always that in a deterministic TRI system, there is a unique past for every given present, just as there is a unique future for it.

    Patrick,

    Regarding the Kac Ring model, I assume you refer to this paper. Sure, it’s pretty easy to point out where the model goes wrong — Boltzmann’s “Stosszahlansatz” assumption (equations (2) on page 39) is not a consequence of the dynamical laws of the system, but rather an ad hoc assumption that there are no correlations between certain events in the system. IOW, it is an encoded assumption that some configurations of the ring are equally probable as others. As I explained in previous posts, this amounts to introducing a particular probability measure into the phase space, for which the dynamical TRI laws give absolutely no input.

    Coel,

    Yes, I have been assuming deterministic physics throughout this thread — simply because others are more comfortable in thinking about Newtonian mechanics than something more serious. 🙂 Of course, this is an idealization, the real world is not deterministic. But this idealization is useful because it simplifies the formulation of the arrow of time problem, and makes our lives easier in discussions. 🙂

    That said… What nondeterminism tells us is that it is impossible (even in principle) to accurately retrodict the initial state of the Universe, or any state near the Big Bang. It states that the initial condition (at or near the BB) did not exist beyond certain precision. But then this non-existent state is claimed to have very low entropy (!!!). I believe it doesn’t take a lot to see a problem with that claim.

    So the presence of nondeterminism, if anything, only amplifies and exacerbates the problem of low entropy initial condition, it doesn’t resolve it at all. And I didn’t want to complicate the discussion even further by invoking nondeterminism.


  31. Marko,

    OK, I was expecting that answer. But personally I feel that Bricmont explains on page 40 why this “Stosszahlansatz” (assumption (2)) is a reasonable physical assumption. He even admits that assumption (2) cannot hold for all times and all configurations, and explains why this is not a problem.

    Like Coel, I get the impression that your position is slightly inconsistent.

    If I remember correctly, you think that we can derive the laws of fluid mechanics from Newtonian mechanics. But to do so, you have to assume that the particles in the fluid exhibit the macroscopic behavior of, well, a fluid. The simplest example of fluid mechanics is the pressure in a container filled with water. It’s perfectly compatible with Newtonian mechanics that the velocity distribution of the water molecules is such that at a certain moment they suddenly all move in the same direction and blow the wall out of the container. Yet, you assume they don’t and they won’t. You make certain assumptions about initial conditions and general macroscopic behavior – assumptions that are not, I repeat not, present in the dynamical laws of Newtonian mechanics.

    If assumptions like these are admissible in the derivation of fluid mechanics, then why aren’t they admissible in the derivation of the second law?


  32. I’m not a physicist, so I ask these questions to inform myself.

    What distinctions are there, if any, between increasing entropy and the arrow of time? From my (admittedly amateur) research so far, I’ve been unable to see a distinction.

    Since entropy always evolves to maximum entropy or equilibrium, and this evolution is spontaneous, then why isn’t the low entropy of the BB (or lower entropy of any previous state of the universe relative to the present) sufficient to explain the arrow of time?

    When Marko says that the low entropy of the BB doesn’t *cause* the arrow of time, does he mean, in other words, that simply saying that very low entropy eventually turns into maximum entropy no more explains why that happens than simply calling it ‘spontaneous’?

    Also, I can understand ‘maximum entropy’ or ‘equilibrium’. That seems to be an absolute state. However, to say the BB had ‘low entropy’ seems to be relative, and so no more meaningful than to say that the universe evolves to higher entropy. I don’t see what saying it adds to the information contained in the second law. The second law precisely says that the universe at any time in the past had lower entropy than it does now.


  33. Hi Marko,

    … nondeterminism tells us is that it is impossible … to accurately retrodict the initial state of the Universe … It states that the initial condition … did not exist beyond certain precision. But then this non-existent state is claimed to have very low entropy (!!!).

    As usual Marko, you let your epistemology dictate your ontology! Yes, we cannot precisely reconstruct historic states from information present in the universe today, but that doesn’t mean they were “non-existent”, it just means that the process from those historic states to the present was not fully deterministic.

    Leaving the Big Bang aside, let’s take the Earth’s atmosphere 100 years ago. I’m betting that, given quantum indeterminacy and deterministic chaos, we could not, even in principle, retrodict the microstate of the Earth’s atmosphere exactly 100 years ago, yet I (for one!) would not conclude that Earth’s atmosphere was then “non-existent”.


  34. Marko, you said:

    “That said… What nondeterminism tells us is that it is impossible (even in principle) to accurately retrodict the initial state of the Universe, or any state near the Big Bang. It states that the initial condition (at or near the BB) did not exist beyond certain precision. But then this non-existent state is claimed to have very low entropy (!!!). I believe it doesn’t take a lot to see a problem with that claim.”

    Forgive me if this comment only serves to justify your decision not to couch your explanation in terms more serious than Newtonian mechanics. If the initial condition, at or near the BB, did not exist beyond certain precision, does this not imply that the initial condition *did* exist within certain precision? And if so, is it not to this existent state that the low entropy is attributed?

    I have invariably found your comments to provide more light than heat in these threads and they are much appreciated. It would be a further help if you could, when simplifying things for the general reader, indicate the area or point of simplification, as in the determinism/indeterminism above. This would inform anyone interested in pursuing the matter further that the current argument is not intended to be definitive.


  35. Marko,
    ” if there is no friction between the ball and the hills, the ball will never come to a rest at the bottom, but will keep climbing back and forth indefinitely. If there is friction, the ball will eventually come to rest because its kinetic energy will be transferred via friction forces to the kinetic energy of the surface molecules of the two hills (heat, in simple terms).”

    What makes that system, and thus the equations derived from it, is the inertia of the energy being expressed. This, fundamentally, goes in one direction, not both. So determinism derives from causality, not the other way around. Yes, we can predict what the system will do in the future, as well as retrodictively explain what it did in the past, but if there were simply a void, with no physical activity causing that process, there would be nothing to predict! Without the causality, there is no determination and therefore no equation to derive from it.


  36. Marko,

    About the ball and hills example above, isn’t it the case that the heat traces left by friction on one of the hills will be obliterated by the action of the 2nd law? If so, any investigator who arrives late to the scene will have no means of finding out which hill the ball rolled down or whether it arrived in its current position by some other means.
    If the 2nd law destroys information in this way, how can any retrodiction over cosmological timescales be achieved? Not only would a vast amount of information have been lost, but even the fact that it ever existed would be mere speculation.


  37. Everyone,

    This is my fifth post, so I’ll have to be brief. Hopefully the author will be willing to join the discussion and continue answering questions.

    Patrick,

    If assumptions like these are admissible in the derivation of fluid mechanics, then why aren’t they admissible in the derivation of the second law?

    Because of circular reasoning. You are not allowed to introduce a time-irreversible assumption into the derivation of the second law, and then use that second law to explain the time-irreversibility of Nature. The only thing such reasoning does is to “explain” time-irreversibility by assuming time-irreversibility.

    Wm. Burgess,

    What distinctions are there, if any, between increasing entropy and the arrow of time?

    There are no (serious) distinctions. One could say that these are two different names for the same thing. And neither can be explained/reduced to known microphysics.

    Coel,

    I’m betting that, given quantum indeterminacy and deterministic chaos, we could not, even in principle, retrodict the microstate of the Earth’s atmosphere exactly 100 years ago, yet I (for one!) would not conclude that Earth’s atmosphere was then “non-existent”.

    Look at it the other way around — suppose that 100 years ago the state of the Earth’s atmosphere existed with some satisfactory precision. Start with it and evolve it forward in time, to the present. Due to quantum indeterminacy and deterministic chaos, we can (arguably) predict from theory that the quantum uncertainty of the present state of the atmosphere (including, say, positions of molecules) is larger than the Earth itself (if not even bigger). What happens then is that we additionally *measure* the state of the atmosphere today, collapsing its wavefunction, and thereby reducing the error-bars to get a (reasonably) well-defined state of the atmosphere today, despite the prediction of the theory. And that last step — the measurement — is the famous spooky, unpredictable, indescribable, wooohooo, handwaving blind spot of QM.

    So, mutatis mutandis and reversing time, if you want to claim that the Universe was in some well-defined state at the time of BB, you need to argue that some observer has performed a measurement of that state back then. And I can only say — good luck with that! 🙂

    Dadooq,

    If the initial condition, at or near the BB, did not exist beyond certain precision, does this not imply that the initial condition *did* exist within certain precision?

    Sure it does. It’s just that this precision will be so low that such a “state” would cover a large portion of the whole phase space, begging the question why it has low entropy.

    About the ball and hills example above, isn’t it the case that the heat traces left by friction on one of the hills will be obliterated by the action of the 2nd law?

    Since in this example we have assumed deterministic TRI microphysics, there cannot be any second law to delete information. In reality, the second law does indeed act, which means that deterministic TRI microphysics is a wrong assumption.


  38. Hi Marko,

    if you want to claim that the Universe was in some well-defined state at the time of BB, you need to argue that some observer has performed a measurement of that state back then.

    But of all the interpretations of quantum mechanics, Copenhagen is by far the silliest (well, maybe not “by far”, given MWI). No-one really thinks the cat is both alive and dead do they?

    Any sensible interpretation of quantum mechanics says that quantum-superposition states are quite fragile, and that they decohere when you bump things together. Thus, by the time you get to the macroscopic scale, the wavefunction has decohered (or “collapsed”).

    Admittedly we can’t give a full account of how that happens, but it’s bonkers to suppose that you need human observers (or any sentient observers) in the process. What you do need is low-level things bumping together. Indeed the tricky thing (from the point of view of building quantum computers, etc) is keeping the states un-collapsed, because they are so fragile.

    So, again, I suggest that you’re letting your epistemology rule your ontology. If we took the present state of Earth’s atmosphere and predicted it forward for 100 years or retrodicted it back 100 years, I fully agree that *our* *prediction* would be highly uncertain. That is not at all the same as asserting that the state then would be highly superposed or undefined. That is an unwarranted leap from what we can know about that state (epistemology) to the nature of that state itself (ontology). (And no, I don’t think this depends at all on the presence of sentient mammals.)

