Lee Smolin and the status of modern physics

by Joe Boswell

[This is the first interview we are publishing here at Scientia Salon, hopefully the beginning of an interesting new trend at the magazine.]

I write a science and philosophy blog called Adam’s Opticks [1], and about a year and a half ago I published an in-depth critique of Lee Smolin’s Time Reborn, a radical reappraisal of the role of “the present moment” in physics [2,3]. My article was certainly critical of the book, but also something of a labor of love, and I’m completely thrilled to say that Lee has now read the piece and would like to respond. What follows is a Q&A, with most of the questions derived from the earlier post [4].

Adam’s Opticks: Hi Lee, central to your thesis as outlined in Time Reborn, and in its recent follow-up The Singular Universe (co-authored with Roberto Mangabeira Unger) [5], is a rejection of the “block universe” interpretation of physics in which timeless laws of nature dictate the history of the universe from beginning to end. Instead, you argue, all that exists is “the present moment” (which is one of a flow of moments). As such, the regularities we observe in nature must emerge from the present state of the universe as opposed to following a mysterious set of laws that exist “out there.” If this is true, you also foresee the possibility that regularities in nature may be open to forms of change and evolution.

My first question is this: Does it make sense to claim that “the present moment is all that exists” if one has to qualify that statement by saying that there is also a “flow of moments?” Does the idea of a flow of time not return us to the block universe? Or at the very least to the idea that the present moment represents the frontier of an ever “growing” or “evolving” block as the cosmologist George Ellis might say?

Lee Smolin: Part of our view is that an aspect of moments, or events, is that they are generative of other moments. A moment is not a static thing; it is an aspect of a process (or vice versa) which generates new moments. The activity of time is a process by which present events bring forth or give rise to the next events.

I studied this idea together with Marina Cortês. We developed a mathematical model of causation from a thick present which we called energetic causal sets [6]. Our thought is that each moment or event may be a parent of future events. A present moment is one that has not yet exhausted or spent its capability to parent new events. There is a thick present of such events. Past events are those that have exhausted their potential and so are no longer involved in the process of producing new events; they play no further role, and therefore there is no reason to regard them as still existing. (So no to Ellis’s growing block universe.)
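Smolin’s description of events that spend a finite capacity to generate successors can be made concrete with a deliberately crude toy model. The sketch below is purely illustrative and is not the actual energetic-causal-set dynamics of [6]; the `capacity` parameter and the retirement rule are invented for illustration.

```python
import random

class Event:
    """Toy event: carries a finite capacity to 'parent' new events."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # remaining ability to generate successors

def step(present, past, counter):
    """One tick of the toy dynamics: an active event spends one unit of
    capacity to bring forth a new event; exhausted events retire to the
    past and play no further part in generating anything."""
    parent = random.choice(present)
    parent.capacity -= 1
    present.append(Event(f"e{counter}", capacity=2))
    for ev in [e for e in present if e.capacity == 0]:
        present.remove(ev)
        past.append(ev)
    return counter + 1

# The "thick present" is the set of events still able to parent successors;
# the past is inert and could just as well be discarded.
present, past, counter = [Event("e0", capacity=2)], [], 1
for _ in range(10):
    counter = step(present, past, counter)
print(f"present: {len(present)} events, past: {len(past)} events")
```

Note that nothing in `step` ever reads from `past`: in this toy picture the past is pure bookkeeping, which is the sense in which exhausted events need not be regarded as still existing.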

AO: Can you help me understand what you mean by a “thick present”? I’m confused because if the present moment is “thick” rather than instantaneous, and may contain events, it seems like you’re defining the present moment as a stretch of time, which looks like a contradiction in terms. Similarly, when you say that the activity of time is a process I’m left thinking that events, activities and processes are all already temporal notions, and so to account for time in those terms seems circular.

LS: I can appreciate your confusion, but look, think about it this way: the world is complex. Whatever it is, it contains many elements in a complicated network of relations. To say that what exists is events in the present does not mean it is one thing. The present is not one simple thing; it is the whole world, and therefore it contains a vast complexity and plurality. Of what? Of processes, which are dual to events.

AO: One of your main objections to the idea of eternal laws comes in the form of what you diagnose as the “Cosmological Fallacy” in physics. Your argument runs that the regularities we identify in small subsystems of the universe — laboratories mainly! — ought never to be scaled up to apply to the universe as a whole. You point out that in general we gain confidence in scientific hypotheses by running experiments again and again, and define our laws in terms of what stays the same over the course of many repetitions. But this is obviously impossible at a cosmological scale because the universe only happens once.

But what’s wrong with the idea of cautiously extrapolating from the laws we derive in the lab, and treating them as working hypotheses at the cosmological scale? If they fit the facts and find logical coherence with other parts of physics then great… if not, then they’re falsified and we can move on. As an avowed Popperian yourself, are you not committed to the idea that this is how science works?

In addition, wouldn’t the very idea of “laws that evolve and change” make science impossible? How could we ever confirm or falsify a hypothesis if, at the back of our minds, we always had to contend with the possibility that nature might be changing up on us? Don’t we achieve as much by postulating fixed laws and revising them on the basis of evidence as we might by speculating about evolving laws that would be impossible to confirm or falsify?

LS: To be clear: the Cosmological Fallacy is to scale up the methodology or paradigm of explanation, not the regularities.

Nevertheless, there are several problems with extrapolating the laws that govern small subsystems to the universe as a whole. They are discussed in great detail in the books, but in brief:

  1. Those laws require initial conditions. Normally we vary the initial conditions to test hypotheses as to the laws. But in cosmology we must test simultaneously hypotheses as to the laws and hypotheses as to the initial conditions. This weakens the adequacy of both tests, and hence weakens the falsifiability of the theory.
  2. There is no possible explanation for the choice of laws, nor for the initial conditions, within the standard framework (which we call the Newtonian paradigm).

Regarding your questions about falsifiability, one way to address them is to study specific hypotheses outlined in the books. Cosmological Natural Selection, for instance, is a hypothesis about how the laws may have changed which implies falsifiable predictions. Take the time to work out how that example works and you will have the answer to your question.

Another way to reconcile evolving laws with falsifiability is by paying attention to large hierarchies of time scales. The evolution of laws can be slow in present conditions, or only occur during extreme conditions which are infrequent. On much shorter time scales and far from extreme conditions, the laws can be assumed to be unchanging.

AO: I’m actually a big fan of Cosmological Natural Selection (which suggests that black holes may give birth to new regions of spacetime, fixing their laws and cosmological constants at the point of inception [7]) — and I can see how that is both falsifiable in itself, and would still allow for falsifiable science on shorter time scales.

Far more radical, however, is your alternative theory which you dub the Principle of Precedence. The suggestion here is that we replace the metaphysical extravagance of universal laws of nature with the more modest notion that “nature repeats itself.” The promise of this idea is that it makes sense of the success of current science whilst leaving open the possibility that truly novel experiments or situations — for which the universe has no precedent — will yield truly novel results.

To my mind, however, this notion raises many more questions than it answers. You claim, for instance, that the Principle of Precedence does away with all needless metaphysics and is itself checkable by experiment. But is it? You suggest setting up quantum experiments of such complexity that they’ve never been done before in the history of the universe and seeing if something truly novel pops out. But how could we ever tell the difference between a spontaneously generated occurrence and one that was always latent in nature and simply unexpected on the basis of our limited knowledge? And once again, as a falsificationist, shouldn’t you count the thwarting of expectations as evidence against individual theories, rather than positive proof of a deeper principle?

LS: My paper on the principle of precedence is a first proposal of a new idea. Of course it raises many questions. Of course there is much work to do. New ideas are always fragile at first.

As to how to tell the difference between a spontaneously generated occurrence and one that was always latent in nature — this is a question for the detailed experimental design. Roughly speaking, the statistics of the fluctuations of the outcomes would be different in the two cases. I fail to see how such an experiment would violate falsificationist principles.

In addition, we believe we know the laws as they apply to complex systems: they are the same laws that apply to elementary particles. To posit new laws which apply only to complex systems, and are not derivative from elementary laws, would be as radical a step as the one I propose.

AO: Can you tell me how the universe is supposed to distinguish between precedented and unprecedented situations? On the face of it, it seems like unprecedented things are happening all the time. You and I have never had this conversation before. Are we establishing a new law of nature right now, and if not, why not?

Another objection: can you tell me where novelty is supposed to come from? If the “present moment” is both the source of all regularity in the universe, and the blank slate upon which formative experiences are recorded — then what could introduce any change? Are you assuming that human consciousness and free will may be sources of genuine novelty?

LS: How nature generates unprecedented events and how precedent may build up are important questions that need to be addressed to develop the idea of precedence in nature. What I published so far is just the beginning of a new idea.

It’s intriguing to speculate about the implications for intentional and free actions on the part of living things. But in my view this is very premature. I am not assuming that consciousness is a source of novelty; I am only making a hypothesis about quantum physics. There is a very long way to go before the implications could be developed for living things.

AO: Nevertheless, it seems readily apparent from your collaborations with the social theorist Roberto Mangabeira Unger, and also the computer scientist Jaron Lanier, that you see many connections between your conception of physics and the prospects of human freedom and human flourishing. It concerns me, however, that in pursuit of a singular — very beautiful — solution to so many problems in science, philosophy, politics and our personal lives, a lot of awkward details may get overlooked.

In philosophy, for instance, you claim to show that the reality of the present moment — conceived in terms of unresolved quantum possibilities — may at last solve the problem of free will. But what of the history of compatibilism in philosophy — from David Hume to Daniel Dennett — that purports to show that our freedom as biological and psychological agents is not only compatible with the regularity of nature, but may in fact depend upon it?

LS: There are certainly common themes and influences in my work and those of Jaron Lanier and Roberto Mangabeira Unger. And I’m happy at times to indulge in some speculation about these influences. But these are very much to be distinguished from the science. The point is that I am happy to do the scientific work I can do now and trust future generations to develop any implications for how we see ourselves in the universe. There is much serious, hard work to be done, and it will take a long time. Especially given the present confusions of actual science with the science fiction fantasies of many worlds and AI (these two ideas are expressions of the same intellectual pathology).

I agree that we have to build a counter view carefully. I don’t claim to show that my work solves the problem of free will. I suggest there may be possibilities worthy of careful development as we learn more. As for compatibilism, I am unconvinced, but I haven’t yet done the hard work needed to develop the alternative. Dan Dennett is a generous, serious and warmhearted thinker who works hard to produce arguments which are crystal clear. But talking with him or reading him, both of which are great pleasures, I sometimes find that at the climax of one of his beautifully constructed arguments, the clarity fades and there is a step which I can’t follow. I hope someday to have the time to do the hard work to convince myself whether the fault is with his reasoning or my understanding.

AO: Since I have you here, let me try to make the compatibilist objection compelling with three more questions, inspired to a great extent by Dennett’s Freedom Evolves [8]:

  1. If we turn to physics (as opposed to biology or psychology) in search of free will, are we not likely to end up granting as much free will to rocks or tables or washing machines — or indeed computers — as we do to human beings? If we are to be able to change and adapt in response to the problems we face, surely the science of free will must be the science of a human plasticity that outstrips the plasticity of nature more generally?
  2. You claim that the openness of physics may enable us to transcend the fatalism inherent in predictions from climate science, for example: in 2080 the average temperature on earth will be six degrees warmer than it is now. But what of those other predictions stemming from climate science such as: a concerted effort to reduce carbon emissions will avert disaster? If the true nature of physics undermines the certainty of the first prediction, does it not also undermine the certainty of the second?
  3. Setting yourself against a long history of thinkers who would write off the sensation of “now” as a psychological quirk incompatible with timeless physics, you go so far as to call it “the deepest clue we have as to the nature of reality.” But I wonder what you make of the innumerable psychological and neuroscientific studies that demonstrate the problematic nature of human perception of time over short intervals? Benjamin Libet’s apparent prediction of conscious decisions from unconscious brain activity seems particularly troubling. Might you be persuaded to push in the direction urged by Dennett and resist such a conclusion by arguing that an instantaneous “you” cannot be contrasted with your slow-moving brain activity, and that the search for free will and consciousness in “the present moment” is fundamentally misguided? Can we not look, instead, to the mechanically-possible processes of decision making, learning and adaptation that take place over seconds, minutes, weeks and years?

LS: I don’t see why grounding human capabilities in an understanding of what we are as natural beings implies that every capability we have is shared with rocks. We have a physical understanding of metabolism, or the immune system, but rocks and tables have neither. My guess is that when we know enough to seriously address these issues, the vocabulary of concepts and principles at our disposal will be greatly enhanced compared to what we have now. Certainly we are aspects of nature and every capability we have is an aspect of the natural world.

Regarding climate change, the first is a prediction of what could happen if we don’t take action to strongly reduce GHG emissions. My point is not that the climate models are completely accurate. My point instead is that the intrinsic uncertainties in their projections are the strongest reason to act to reduce emissions so we can avert disaster however the uncertainties develop. In national defense we prepare for war because the future is uncertain. Climate change is not an environmental issue; it’s a national security issue and should be treated as such.

As for the objections from neuroscience, I completely fail to see the force in this kind of argument. Those studies are fascinating but I don’t think they remotely show what is claimed. Certainly the present moment is thick and the self is not instantaneous. But giving up the instantaneous moment for the thick and active or generative present (as I sketched above) does not imply that consciousness or time or becoming are illusions.

AO: Lee Smolin — thank you!

_____

Joe Boswell is a writer and a musician trying to figure out how to make a living in a world where words and music are free. He has a degree in English literature, but having learned to bluff philosophy by listening to lots of podcasts, he enjoys picking fights with eminent scientists and philosophers on his blog, Adam’s Opticks (https://adamsopticks.wordpress.com). His songs are available on Bandcamp (https://joeboswell.bandcamp.com). He does Twitter too (https://twitter.com/joeboswellmusic).

Lee Smolin is an American theoretical physicist, a faculty member at the Perimeter Institute for Theoretical Physics, an adjunct professor of Physics at the University of Waterloo and a member of the graduate faculty of the Philosophy department at the University of Toronto.

[1] Adam’s Opticks.

[2] Time Reborn: From the Crisis in Physics to the Future of the Universe, by L. Smolin, Houghton Mifflin Harcourt, 2013.

[3] For a good introduction to the basic ideas of Time Reborn, see this video.

[4] On Time Reborn as modern myth: Why Lee Smolin may be right about physics (but probably wrong about free will, consciousness, computers and the limits of knowledge), by Joe Boswell, 5 October 2013

[5] The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy, by R.M. Unger and L. Smolin, Cambridge University Press, 2014.

[6] The Universe as a Process of Unique Events, by M. Cortês and L. Smolin, arXiv.org, 24 July 2013.

[7] For a more critical take on the idea of cosmological natural selection see: Is Cosmological Natural Selection an example of extended Darwinism?, by M. Pigliucci, Rationally Speaking, 7 September 2012.

[8] Freedom Evolves, by D.C. Dennett, Viking Adult, 2003.


74 thoughts on “Lee Smolin and the status of modern physics”

  1. Very nice interview, both questions and answers (though I was perhaps more impressed by the questions!).

    Lee Smolin’s ideas are initially intriguing, but I’m far from convinced they make any sense. In trying to avoid what he sees as the “intellectual patholog[ies]” of Many Worlds, AI and so on, he has introduced an idea which is (as he more or less admits himself) pretty half-baked.

    If the only law is that nature repeats itself, then, as pointed out in the interview, we have not even begun to define the function that determines what happens in an unprecedented situation.

    Choosing randomly is too vague to be viable. Consider how difficult it is to make sense of the rule “choose a number at random”. Perhaps you may choose a number such as 5 or 73. The problem is that in order to do so you need to make a number of implicit assumptions. You need to know what kind of number (integers, rationals, reals, complex?), what range (e.g. between 0 and 100) and what distribution (e.g. uniform or Gaussian). Neither human beings nor any other process can make random choices without such (possibly implicit) parameters. There is no way of getting away from the need for some kind of universal rules at some level, and once this is acknowledged we are essentially right back where we started.
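    To make this concrete: any executable version of “choose a number at random” must have those parameters fixed in advance. A minimal Python sketch using the standard `random` module (the particular choices of type, range and distribution below are, of course, arbitrary):

```python
import random

# "Choose a number at random" is not executable until we fix the kind
# of number, the range, and the distribution:
n_int   = random.randint(0, 100)      # integer, 0..100, uniform
x_real  = random.uniform(0.0, 100.0)  # real, [0, 100], uniform
x_gauss = random.gauss(50.0, 15.0)    # real, mean 50, sd 15, Gaussian

# Three different, equally legitimate readings of the same vague rule:
print(n_int, x_real, x_gauss)
```

    Each line is a different, equally legitimate reading of the same vague instruction, and none of them works without the extra parameters being supplied.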

    So, yes, Smolin is right that the laws of the universe could change and evolve over time, but it seems to me that we are left with the logical necessity of ultimate laws governing how this happens, meaning he falls victim to the same two problems he identified with the Newtonian paradigm.

    Pace Smolin himself, his ideas are unfalsifiable. In order to make a falsifiable statistical prediction, you need to have a hypothetical rule which you expect to describe what will happen. This is exactly what Smolin denies. If his hypothesis is that anything can happen, then anything is consistent with his hypothesis.

    With regard to whether only the present moment exists or whether a growing block of time exists or whether the past, present and future all exist, this strikes me as a meaningless question with no answer. To me, these are not different ways reality could be but different ways of describing reality. Existence in particular is a problematic term which only makes sense with respect to specific definitions, and so the existence of the future or the past only depends on whether you want to use existence in one sense or another.

    Even the suggestion that his idea could resolve the free will problem seems to be sufficient grounds not to take him terribly seriously. Dennett and his ilk have a perfectly adequate account of free will that needs no paradigm shift in physical understanding. There is, quite frankly, no mystery to solve in regard to free will. I see no reason at all to motivate Smolin’s speculations other than an ill-motivated distaste for Many Worlds and functionalism and so on.

    Like

  2. Libet really doesn’t show what is often claimed; I’m surprised to see it still brought up in conversations around free will. It has problems both philosophically and empirically.

    Like

  3. Joe Boswell
    “Another way to reconcile evolving laws with falsifiability is by paying attention to large hierarchies of time scales. The evolution of laws can be slow in present conditions, or only occur during extreme conditions which are infrequent. On much shorter time scales and far from extreme conditions, the laws can be assumed to be unchanging”
    That quote will do for a start. It means that we always use retrospect to check evolution from initial conditions, but the same micro events apply to the macro, its just a collective of micro’s that follow the logical patterns laid by two forces – electromagnetic “charge” & gravitational “mass”, with two supplementary consistent forces (weak for charged orbitals, and strong for massive nuclei). The pattern is obvious if you read here http://1drv.ms/1tnKM6f but it is summarized metaphysically (at absolute root) as cause in the past and effect in the future across a present moment of action-reaction. A present moment is a continual lock. It is all we ever experience (by action-reaction events rather than in “imaginary” present moments), but it has a past behind it and future ahead of it at all times, for continuity. Cause & effect across a present moment do not equate to cause = action and effect = reaction, it is slightly, but not much more complicated than that.
    In fact charge is action as cause, and mass is reaction as cause, with “mutuality” in operation of the two forces at all times. Action is always compression, and reaction is decompression in nature, and it applies to everything, and frames cause by charge as action (compression) – causally oriented to contain as atomic orbitals, while cause by mass is reaction (decompression) – causally oriented to extend as solar system orbits. Mutuality applies for both forces to compress together to their separate orientations as action, then decompress together as reaction, for all events in nature, including a Big Bang, supernova accretions towards solar system formation, and life (the Big Bang was an initial massive reservoir in compression with potential to decompress). Properties of charge and mass are easily divided into inertia and being effected by the inertia of others in action-reaction events – that is a division between cause & effect, and for each force they are “equal” (offset). As the forces lurch by mutual compression-decompression, the patterns they create have a simple logic, and this is a key *Charge concentrates a center within a neutral surface (as a compressed neutron in a void on edge to slip into decompression by decay) * Mass concentrates a surface around a neutral center (Earth’s center is absolutely neutral in gravitational force, which concentrates at the surface). The idea that causes always precede effects across present moments is a lock, but it is a mere “slip tendency”, to the same extent as decay is a mere “slip” into decompression (to use an example). A bright future ahead for “real” physics!

    Like

  4. (continued)

    So, the patterns are set across action-reaction present moments by two forces with “completely opposite” structures. We can try to securely measure because they are points of contact, not immeasurable momentum in void (you may not realize that the Uncertainty Principle merely states that momentum cannot be known at one stationary position and instant, which is obvious by definition, so all is measured at points of contact, action-reaction, either side of the passage of unknowable momentum). The forces contain as charge TOWARDS action-reaction, by being causal action compressing as atoms from decaying neutrons, eventually to a fully compressed Periodic Table by supernova accretion. Mass extends AFTER action-reaction, by being causal reaction decompressing eventually into solar systems by accretion disks after a supernova explosion.
    Charge is causally “deductive” (secure) in its action of absorbing a photon to determine “black or white” (securely) whether to attract or repel. Emission is “inductive” reaction (grey) to an equal “hypothesis” of attracting or repelling. The definitions of charge as causal compression and mass as causal decompression extend to specific definitions of cause as deductive and effect as inductive. Mass, is deductive in a decompressive reaction to gravitational fields (b or w), whilst inductive in absorption of fields compressing (grey) – because mass not only attracts weakly by emitting its field to other mass, it also explodes strongly when those attractive emissions attract too many other masses. Absorptions of weak gravitational fields compress the absorber inductively towards the giver, but this is “grey” because the absorber has its own inertia and they reach a grey compromise between inertia. However, emission of an attractive field is deductive (b or w) because the fields attract continually until the grey build-up of compressions has concentrated enough to literally explode. The black & white aspect of gravitation (comparable to charge being equally black & white attracting & repelling as distinct opposites) is to explode suddenly, equal to its own capacity to attract others by giving fields. It is an equal explosion from its own contribution to a collective of decayed neutrons, for grey accumulations to explode by a massive capacity for momentum in mass under compression.
    (1) There is charge as causal action & mass as causal reaction, (2) there is charge as neutral surface and concentrated center & mass as the opposite, and (3) there is the additional key that mass provides all momentum, whilst charge provides all angles. They are mutual, and combine inextricably, but gravitational mass has potential for momentum exploding by pushing outwards by forward rotating “gravitons” in causal extension, while charge has potential for angles drawing inwards by backward rotating photons to keep massive momentum intact across rest mass & energy extremes, for pivots & orbits. See Circular Polarization illustrations to see backward rotations of photons. Mass & charge always combine by charge containing by angles to keep massive explosive momentum intact (at a Big Bang and also intricately by neutron decay after a Big Bang explosion). Nature partitions between charge and mass “complete opposites” and all patterns are a compromise between those extremes.

    Like

  5. So the previous discussion was about how scientists are different from philosophers, with one major difference being scientists’ commitment to clear definitions. Here we have, in a presumably scientific interview, a philosophical discussion about the nature of time with quite a lot of metaphorical and ambiguous language. So it seems to me.

    Ken Anderson, here is that Smolin interview on YouTube. Maybe your computer can play it in full.

    Like

  6. Joe,
    The problem with time is that as individual beings, we experience change as a sequence of perceptions and so think of this process of time as the point of the present moving from past to future. Physics codifies this by reducing time to measures of duration, from one event to the next.
    The elemental reality though, is that these events, as well as all physical features, are being formed and dissolved. As such, it is they which go from being in the future to being in the past. To wit, the earth doesn’t travel some fourth dimension from yesterday to tomorrow. Tomorrow becomes yesterday because the earth turns.
    Consider how this resolves various issues:
    Free will versus determinism; Potential precedes actual. While the process is necessarily deterministic, thus yielding a determined past, information only travels at a finite speed and so input cannot be fully known prior to the occurrence and if input cannot be known, neither can output. Causation yields determination, not the other way around.
    As to the “thickness” versus “instant” of time; As an effect of action, duration is not some vector or dimension external to some “point” of the present, but is the state of what is present, physically extant, as the events form and dissolve.
    Different clocks can run at different rates, or the same clock will, in different conditions, because these are separate actions. When you measure time, you are measuring frequency and comparing it to other such measures. Obviously every regular action has a frequency, some more regular than others. As per the twins example, a faster clock simply uses its energy quicker and so recedes into the past faster. The tortoise is still plodding along, long after the hare has died.
    As such, time is an effect of action, similar to temperature. Time is to temperature what frequency is to amplitude. It is just that amplitude en masse is experienced as temperature, while frequency en masse is noise/static, because our rational minds function as just such a sequencing device and so multiple such effects disrupt our sense of concentration. Thus to measure time, we isolate a particular action and measure its rate, but the effect of overall change is cumulative, just like temperature is cumulative of amplitudes.
    Much as we still see the sun rising in the east and setting in the west, as the earth spins west to east, so too is our perception a function of the singular experience of sequential events, even though it is a distillation of cumulative input, resulting in the particular, narrative experience as output.
    So while our minds think narratively, the larger reality is much more thermodynamic, with all those feedback loops to our every action serving as the equal and opposite effect. For instance, while we think of causality as linear, such that A leads to B, it is really energy exchange that is causal. For example, yesterday didn’t cause today, any more than one rung on a ladder causes the next. Light shining on a spinning planet causes this effect of days. The linear narrative is only effect.
    Given that energy is conserved, time is like a tapestry being woven from strands pulled out of what had been woven and it is these feedback loops which is how the energy of one event precipitates effects on succeeding events.
    Now the brain is divided into hemispheres and while the left is the linear, sequential, rational side, the right, emotional, intuitional functions as a scalar, a thermostat or pressure gauge. Thus basic emotions are thought of on such scalar terms, hot/cold, pressure/release, while intuitions “rise to the surface.”
    So reality is the dichotomy of energy and form and while energy is dynamic and conserved, form is static and transient. Therefore the arrow of time for energy is from past forms to future ones, while the arrow of time for form is from potential, to actual, to residual. Much like the product of a factory goes from start to finish, while the process goes the other direction, consuming material and expelling finished product.
    Our digestive, respiratory and circulatory systems process energy, while the central nervous system processes form, aka information.
    Cut it off here…


  7. So, I will finish with a key example of the mutual relation between forces. I won’t go into a Big Bang, because that is more speculative, but supernova accretions are better known. Nature’s patterns are a Big Bang, supernovae, solar systems, rocky inner planets with elements in their MANTLE in direct proportion to their creation in supernovae, and life on those planets using those chemicals. They are the only patterns you need to worry about in the mutuality of the two forces, easy! What I can say about a Big Bang, before explaining mutual accretion as a supernova, draws on Weinberg’s The First Three Minutes from 1977, still relevant today. Decaying neutrons have Escape Velocity at a Big Bang, or equal decompression from their collective gravitational capacities for compression. Leaving aside the idea of expanding space-time, and gravitational attraction as magical mathematical curvature with force, for now, I will just look at what happens a while after neutrons decay in a neutralized gravitational field, expanding and forming helium nuclei in the first few minutes. That is a state of charged decay collecting under gravitational compression eventually to supernovae, for a Periodic Table to be created as a supernova decompresses by massive momentum to distribute the Periodic Table as a solar system disk. The patterns are very easy.
    The key to supernovae as mutuality between charge causally structured to concentrate centers within neutral surfaces, and mass causally structured to concentrate surface around neutral centers, is rotation. All atoms eventually accumulate after a Big Bang into massive rotating accretions, where there is a literal spring-shift by charge (as a magnetic line directly across its own electrical line along massive contours). Atoms sort by shifting momentum to their rotations, as mass, from inner to outer, to export momentum to a “surface” for mass, while it concentrates atoms falling inwards. This is consistent with electrical & magnetic lines in atoms themselves keeping them intact while they sort across contours – because atomic orbitals may also tend inwards by an electrical line while sorting by losing photon momentum outwards to drop into a lower orbital, as its causal orientation to be more concentrated (closer) to a center. The same process applies to sling an atom entirely across a contour when accreting for a supernova. Both forces are served, by exporting angular momentum while dropping atoms into compressions to form heavier atoms. The massive rotation from export of Angular Momentum explodes using massive neutrino momentum when the Table has formed from central compression by charge, to disperse a cloud for a solar system, leaving a neutron star in rapid rotation behind. This is a prime example of mutuality of the two forces, serving themselves while they serve each other in dynamic relations that eventually settle into less dynamic ones, for life on Earth-type rocky inner planets. But still they stay mutual, in compounding by charge as chemistry into massive aggregations obeying both charged and massive rules at all times. Physics is dead easy!


  8. Lee Smolin: “Past events were those that exhausted their potential …, they play no further role and therefore there is no reason to regard them as still existing. …
    The present is not one simple thing, it is the whole world, …
    Certainly the present moment is THICK and the self is not instantaneous. …
    We developed a mathematical model of causation from a thick present which we called energetic causal sets.”

    How thick? If it is 14 billion years thick, it is the whole world, indeed.

    “Time” is a 100% physics issue, and it has two parts:

    P1, a measurable reality, defined operationally. Every physicist knows this P1-time perfectly.

    P2, what is the base (or essence) of time?

    What is the EXPRESSION for this P2-time in this universe?

    First, we should find out what this P2-time (if any) is: that is, write its expression out with a clearly defined LANGUAGE.

    Second, we should describe its BASE.

    “TIME” came into being at the same moment when this universe popped out. The emergence and the evolution (EaE) of this universe are totally governed by TIME. Yet, the EaE is totally governed by some measuring RULERs, the nature-constants {C (light speed), e (electric charge), and ħ (Planck constant)}. Yet, these nature-constants are LOCKed by a pure number, the Alpha. Thus, an intelligent guess of the expression of the P2-time is those nature-constants (especially, the Alpha).

    Thus, if we cannot derive Alpha, we will never know what the P2-time is. This becomes the litmus test for all physics theories. Both GR (general relativity) and QFT (quantum field theory) cannot derive Alpha, and thus they are wrong (incomplete). M-string theory and multiverse cannot derive Alpha, and they are wrong too. Can Lee Smolin’s theory derive Alpha? If not, it is wrong.

    Worse yet, Lee’s theory is wrong even without the above argument.

    First, THICK is not clearly defined and is not a physics variable.

    Second, Lee’s understanding of PAST and FUTURE is wrong. Even using just a very small aspect of TIME (human experience) out of its all-encompassing definition (physics), the PRESENT is mainly DETERMINed by the FUTURE. I am buying a plane ticket TODAY in order to attend a meeting next month. Also, the PAST is always in the PRESENT. Buddha’s thinking (2,500 years ago) still governs many people’s actions today. My hole-in-one golf swing is totally based on many years of hard training. Every practice swing in the PAST lives in my muscle-memory.

    No, the PAST is not forever gone. Yes, the PRESENT is filled with the FUTURE. In this particular aspect of TIME (human experience), {past, present, future} cannot truly be distinguished; THICK indeed. Is this just philosophical talk? No, this is the P2-time exactly: the BASE of P2-time is timelessness. And this {timelessness to arrow of time process (mechanism)} is how to derive the Alpha.


  9. Dr. Smolin mentioned some problems with extrapolating from small laboratory systems to the whole universe, and I agree but would like to add another. When studying an isolated subsystem, you’re not always able to study the interactions of that subsystem with the rest of the universe, which can affect the behavior of the subsystem. This can lead to experimental artifacts. Biochemistry has long grappled with artifacts through the use of experimental controls, for example, and more recently through the push toward systems biology, in which many different systems (proteins, genes, RNAs, lipids, tissues, etc.) are studied under the same conditions simultaneously.

    I’ve always wondered about what seems to be an analogous situation in mathematics. If you start with a single set of all the integers and want to compare the number of positive integers to the total number of integers within the context of the single set, mathematicians use a copy of the positive integers that is outside the set and pair these off one to one with all the integers in the set to find that the number of positive integers is the same as the total number of integers. To me, this seems similar to studying a cell nucleus outside a cell and trying to compare its properties to those of a cell nucleus inside a cell in which the nucleus is interacting with the rest of the cell. This can cause misleading results (experimental artifacts). In the infinite set case, it seems like by doing the pairing-off method, you lose the sequential even-odd relationship between the positive integers and the rest of the integers, which leads to the conclusion that the number of positive integers is the same as the total number of integers. I know very few will agree, but it seems like inside the original single set system, there’s a relationship between the positives and all the integers which says that there are one-half as many positives as total integers. Some might argue that thought experiments like this don’t need to follow good experimental technique, but I would say: Why not? You’re performing a manipulation on a system in your mind in order to find out more about that system. Sounds like an experiment to me.

    Anyways, I think Dr. Smolin has a point, and I think his point extends to other areas, like the above, as well. Thought experiments are still experiments and good experimental technique should be used!


  10. Lee Smolin said:

    Dennett is a generous, serious and warmhearted thinker who works hard to produce arguments which are crystal clear. But talking with him or reading him, both of which are great pleasures, I sometimes find that at the climax of one of his beautifully constructed arguments, the clarity fades and there is a step which I can’t follow.

    This.

    This is just exactly how I feel about Dennett. I am going along thinking “Yes, I see that” and suddenly he does a Melbourne right turn and he is off somewhere and I have no idea how he got there.

    I keep thinking that there must be some obvious link between the two ideas he has just presented, but I can’t for the life of me think of it.

    I am glad there is someone else, especially someone as smart as Lee Smolin, who feels the same.


  11. As for the rest of it, I am not sure how seriously to take any of it.

    It seems to me to be a question of semantics. “All that exists is the present moment” and “the block universe” are not inconsistent statements but different ways of looking at the same structure.

    Language constructs like “the present moment” or the present tense refer, however imperfectly, to time or at least to our experience of it.

    The universe is (or is modelled by) a four dimensional manifold and one of those dimensions is time. So ordinary language about time is referring imperfectly to particular parts of that manifold and not others.

    So the statements “the past still exists” or “the future already exists” are no more true statements about the space-time manifold than “Mount Everest exists in Australia” is a true statement about the geography of this world.

    So “All that exists is the present moment” is a statement that is completely consistent with the block universe, in fact it is completely tautological since the present tense of “exists” refers to “the present moment” by definition.

    If someone makes the statement “the past, present and future all exist” then they must either be talking about some kind of “God eye” view of the universe or else the tense in their statement must be referring to some kind of meta time, neither of which can be part of a strictly scientific statement about what we can know of the universe.

    If someone can tell me of a mathematical difference between the views, where the mathematics of one cannot just be transformed into the other (the way a three dimensional structure parameterised by a variable can be transformed into a four dimensional structure) then I might understand the distinction.

    Otherwise it appears to be a pseudo distinction.

    As for the free will part, it seems that Smolin has been dragged rather unwillingly, so to speak, into the free will aspect, and has said that he does not think that there are any free will implications that can be derived from his ideas at the moment.

    So I am not sure why Disagreeable Me is saying that this is an idea to solve the “problem” of free will.

    That said, I agree that there is no problem of free will to solve, although probably for different reasons.

    I don’t think that there is any coherently stated problem of free will that we even need “compatibilism” to solve.


  12. Wow. I put a “like” because I know Smolin, and like him, from way back, when he was still a struggling physicist. But I understood, or agreed with, few things he said in this interview, except at the end about the CO2 catastrophe.

    There he repeated what I have written for many years, and even the Pentagon (!) has been saying for quite a while: the Greenhouse Gas/Fossil Carbon burning catastrophe is a national security issue. It may not look like it, to those who have not considered the situation in depth, but the crisis with Putin is part of it.

    I was surprised Smolin did not mention Quantum entanglement (maybe I read too fast?) That is sure to change the notion of what we mean by “present” in the fullness of time.

    The crisis in physics is mostly due to a lack of imagination. But that may change soon. The truth is emerging: it’s all about waves, baby. But waves don’t compute (yet). Non-Linear wave theory does not really exist (as far as I know).

    The latest evidence is that Einstein was wrong about photons: photons are not points, they are structured. http://www.sciencemag.org/content/early/2015/01/21/science.aaa3035

    Those who want to go fundamental in physics have to dial back 115 years, to when Poincaré deduced E = mc^2.

    After that, some basic philosophical mistakes were made.

    As a result, research, even experimental research pointed the wrong way.

    Yes, light in empty space does not go at c always. If it’s structured, it slows down by .0001%. But the striking, drastic experiment was not run at the world’s most prestigious universities, just the University of Glasgow.

    Why?

    Because most physicists are lost in a doomed paradigm. They have been searching below the wrong lamp post, called “shut up and calculate”. Well, they calculated the wrong things (if you disagree I have one million string Calabi-Yau theories to sell you!)

    Physics will rebound as it always does: through new experiments. Simple experiments, not experiments with 5,000 physicists turning screws for their PhDs, in a giant collaboration. The structured light experiment is a new angle on our lack of understanding of the Double Slit experiment.

    New experiments are coming because it is rather humiliating to be unable to make Quantum Computers work… when biology, deep down inside, is about little else.


  13. Dear Roger Granet,

    while I cannot judge on the merits of the approach in biology that you describe and generally do agree that complicated systems need to be decomposed with care, I can assure you that your assessment of some basic results in set theory is way off.

    It is a bit misleading to say that two infinite sets have the “same number” of elements, given that there exists no finite number that could represent the “amount of elements” of an infinite set. Instead what mathematicians find (and can easily prove) is that two infinite sets can have the same cardinality, even though one is a proper subset of the other. While this might sound counterintuitive, it follows necessarily from a rigorous definition of “cardinality” and the standard axioms of set theory. So, as long as you don’t reject the axioms (and different axiomatic systems can in fact lead you to different conclusions here), you have to come to this result.

    The cardinality of a set cannot, in principle, be established “within the set itself” but only in relation to some other set. But there is no need to appeal to any “external” standard. If you consider the set of all integers and the set of all positive integers, you are considering two sets, one of which is a proper subset of the other. Now, you can (e.g. under the ZFC axioms) readily establish a bijection between the two sets: a one-to-one correspondence in which every element of the first set is paired with exactly one element of the second, and vice versa. By definition these two sets thus have the same cardinality.
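    To make the bijection concrete, here is a small Python sketch (my own illustration, not anything drawn from the axioms themselves) that pairs every integer with a distinct positive integer and checks the pairing on a finite window:

```python
# One standard bijection between the integers Z and the positive
# integers Z+: interleave the non-positives among the positives.
def f(n):
    """Map ..., -2, -1, 0, 1, 2, ... to 5, 3, 1, 2, 4, ..."""
    return 2 * n if n > 0 else 1 - 2 * n

# Check injectivity and surjectivity on a finite window.
window = range(-1000, 1001)           # 2001 integers
images = {f(n) for n in window}
assert len(images) == len(window)     # distinct integers -> distinct images
assert images == set(range(1, 2002))  # images fill 1..2001 with no gaps
```

    Every integer lands on exactly one positive integer and every positive integer gets hit, which is exactly what “same cardinality” means here.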

    Hope that helps!


  14. Joe,
    Add a postscript to my above comment:
    Much like the product of a factory goes from start to finish, while the process goes the other direction, consuming material and expelling finished product. As the individual goes from birth to death, while the species moves onto new generations, shedding the old.
    Our digestive, respiratory and circulatory systems process energy, while the central nervous system processes form, aka information. The most elemental definitions of energy being frequency and amplitude.


  15. Hi Roger et al,

    I feel the need to point out that

    “…to compare the number of positive integers to the total number of integers within the context of the single set, mathematicians use a copy of the positive integers that is outside the set..”

    is not correct in any usual sense of your words. But you possibly have a point about trying to concoct some sense in which one subset is half the size of its containing set – some new sense of the words which would make, in the infinite case, a half-size subset strictly smaller than the whole set in some more subtle way than just being a proper subset of it. In Cantor’s time, most mathematicians not at the very top levels thought he was wrong in something like that sense, and since then, I’m sure many undergrads in pure math initially think along these lines, at least for a while. But finding that new sense seems very unlikely to happen at this stage, maybe 130 years later IIRC.

    The incorrectness above assumes by “outside the set” you mean at least ‘not a subset’, or maybe even ‘no element at all in common’, i.e. disjoint. But in a way it is easier if anything to exhibit this matching when it IS a subset, rather than when it isn’t, as follows.

    We assume given ‘the’ set of all integers with its ordering, adding and multiplying. The element 0 is then easily defined, and the relevant subset here consists of those elements strictly larger than 0. Then the matching can be defined to be simply the set of the following ordered pairs:

    (x+x , x) for each x in the subset; i.e. (2,1) (4,2) (6,3) etc…; and

    (x+x-1 , 1-x) for each x in the subset; i.e. (1,0) (3,-1) (5,-2) (7,-3) etc….

    Then it is not all that difficult to prove, using some elementary given properties of the initially given set, that the leftmost elements consist of each of the positive integers occurring exactly once, and the righthand elements similarly except ‘all integers’ replacing ‘positive integers’.
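    For anyone who wants to see the matching above checked mechanically, here is a short Python sketch (my own illustration) that builds the two families of pairs on a finite window and verifies both coordinates:

```python
# The matching: pairs (x + x, x) and (x + x - 1, 1 - x) for x = 1, 2, 3, ...
# On a finite window, the left coordinates should enumerate the positive
# integers and the right coordinates all integers, each exactly once.
N = 1000
pairs = [(x + x, x) for x in range(1, N + 1)] + \
        [(x + x - 1, 1 - x) for x in range(1, N + 1)]

lefts = sorted(p[0] for p in pairs)
rights = sorted(p[1] for p in pairs)

assert lefts == list(range(1, 2 * N + 1))      # 1, 2, ..., 2N, once each
assert rights == list(range(-(N - 1), N + 1))  # -(N-1), ..., N, once each
```

    The first family supplies the even left coordinates and the second the odd ones, so together they cover the positive integers exactly once, matching the claim in the proof sketch.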

    On the interview of Smolin:

    I am vaguely acquainted with him, since I show up at Perimeter occasionally when there is an interesting talk that I think I might understand to some extent. And his questions after many talks, including ones in the philosophy dept. of the institution where I have an office, seem always very interesting. But I have not read his material at all on the topic of this interview, just a few things about it. So I imagine the very simple observation, that the most basic relativity seems to preclude even the objective existence of any ‘present moment’, probably has some easy response in terms of what he has conjectured. The interview did not seem to raise this, but maybe I read too quickly and missed it.


  16. Disagreeable Me:

    Glad you enjoyed the questions! Thanks for the compliment. I think our priors are exactly aligned.

    Re: “Pace Smolin himself, his ideas are unfalsifiable” – I’d like to point out that Cosmological Natural Selection *does* make falsifiable predictions about the possible masses of neutron stars. I’m more doubtful about the falsifiability of the Principle of Precedence, for reasons we agree on.

    Re: “Smolin is right that the laws of the universe could change and evolve over time, but it seems to me that we are left with the logical necessity of ultimate laws governing how this happens” – Smolin is very aware of this. He calls it the “meta-law dilemma”. Here’s a brief snatch from ‘Time Reborn’:

    “It might look at first like a dead end, but after living with it for several years I have come to believe that it is, instead, a great scientific opportunity, a provocation to invent a new kind of theory that will resolve it. I’m convinced that the meta-laws dilemma is solvable and that how it is solved will be the key to the breakthroughs that will enable cosmology and fundamental physics to progress in this century.”

    Robin Herbert:

    I like your point that saying “the present moment is all that exists” is entirely tautological! That’s very neat. I agree in general that there’s a huge risk of purely semantic deadlock involved in such talk.

    As for demonstrating a hard mathematical difference between “the past, present and future all exist” and “the present moment is all that exists” – perhaps you’d like to check out Smolin’s paper ‘The Universe as a Process of Unique Events’ (http://arxiv.org/abs/1307.6167)?

    I don’t have the maths-smarts to be able to make sense of that. My lazy, philosopher’s hunch is that he may be building mathematical towers on shaky linguistic foundations – but if someone could tell me that Smolin is effectively *grounding* his definition of the present moment in his theory of Energetic Causal Sets then that would give me good reason to change my mind about his thesis.


  17. Hi Adamopticks,

    “As for demonstrating a hard mathematical difference between “the past, present and future all exist” and “the present moment is all that exists” ….. (http://arxiv.org/abs/1307.6167)?”

    A quick run-through on that paper gives me the impression it has rather a different general point than what you say there. Rather than claiming to be a theoretical indication that

    ‘the past and the future do not really exist’ (Is that not what you say just above?),

    it seems more like the related but different

    ‘Time does exist fundamentally, not just as a thing emerging from not looking right down to Planck dimensions’

    which is given a theoretical indication.

    “Time” as just above is certainly in the relativistic sense as basically ‘causal order’, so my earlier remark at the end of mainly a different reply is not applicable to that paper.


  18. Is Smolin’s Principle of Precedence related in any way to Feynman’s Sum over Histories? At “this present moment” all probabilities collapse, and everything becomes the past. Before “this present moment” there are many event (knowable?) probabilities, contributing, in their collapse, to bringing into existence, at “this present moment”, a set of events, some of which are related and which we call the past. All event probabilities are potential, making up the future, until they collapse at “this present moment” and then move into the past. That progression is (or may be), within current and possibly advanced physics understanding, smooth.


  19. Lee Smolin: “(from http://arxiv.org/abs/1307.6167 ) … In this paper we’ll propose the diametrically opposite view. We develop the hypothesis that time is both fundamental [not emergent] and irreversible, as opposed to reversible and emergent. We’ll argue that the irreversible passage of time must be incorporated in fundamental physics to enable progress in our current understanding. The true laws of physics may evolve in time and depend on a distinction between the past, present and future, … The models … posit a fundamental irreversibility of time.”

    This is a wrong approach in physics. Instead of incorporating the ‘arrow of time’ in physics laws, we should try to find out two things.
    One, what is the BASE for this ‘arrow of time’?

    Two, how does this ‘arrow of time” emerge from that BASE?

    I have shown repeatedly that the base of ‘arrow of time’ is timelessness. Thus, the ESSENCE of every moment on the ‘arrow of time’ is timelessness. The issue is all about what the EMERGING mechanism is. I again have repeatedly shown that this mechanism produces two pure numbers {64, 48}, and then these two numbers give rise to the Alpha equation. I will not repeat this here as some details are available at (http://prebabel.blogspot.com/2012/04/axiomatic-physics-final-physics.html and http://prebabel.blogspot.com/2013/10/multiverse-bubbles-are-now-all-burst-by.html#uds-search-results ).

    The ‘arrow of time’ is an emergent phenomenon, and the physics is about its emerging mechanism. No, there is no need to incorporate the phenomenon (consequence of law) into the law itself.

    This {phenomena vs laws} issue is very important to the issue of {empirical data vs theoretical truth}. I have shown three ways of obtaining the truths.

    T1, via empirical data (Epi-telescoping).

    T2, via matching with the anchor-web.

    T3, via a beauty-contest (designing our own universe). See https://scientiasalon.wordpress.com/2015/02/10/physicists-and-philosophers/comment-page-3/#comment-11910 .

    That is, there are two types of knowledge, 1) empirical data (always fallible), 2) theoretical truth (eternal).

    Without physics-gadgets (such as LHC or Planck satellite), philosophers are best equipped to search for theoretical truth.

    Popperianism (falsifiability) has had great success and great achievement, but it is fundamentally wrong, as TRUTH cannot be falsified by definition. Without T2 and T3, physicists had no way of distinguishing the truth from nonsense claims. But now, we can falsify the multiverse by simply showing that 1) all nature-constants can be derived, 2) all those nature-constants are not bubble-dependent, and no empirical proof is needed for this. We can also falsify SUSY by showing that 1) SUSY is not needed in the {string unification, the G-string}, 2) it is not needed in the calculations of nature-constants (see http://www.quantumdiaries.org/2015/01/09/string-theory/#comment-1788717126 ).

    So, Lee’s attempt at incorporating the phenomena into laws is wrong; the ‘arrow of time’ is emergent, while only ‘timelessness’ is fundamental. And I am greatly surprised that there has been no major push toward anti-Popperianism in philosophy thus far.


  20. Maybe I can help a bit in some things here…

    Robin Herbert,

    If someone can tell me of a mathematical difference between the views, where the mathematics of one cannot just be transformed into the other (the way a three dimensional structure parameterised by a variable can be transformed into a four dimensional structure) then I might understand the distinction.

    There are two important things to note. First, the “time-evolution” of a 3D manifold can certainly always be represented as a 4D manifold. However, be aware that the other way around does not always work — not every 4D manifold can be represented as “time-evolution” of a 3D manifold. In short, a generic 4D manifold need not have the “S x R” topology, where “S” is a 3-manifold, and “R” is the set of real numbers, representing the time coordinate. These details are typically very important in general relativity and quantum gravity, so one needs to be careful about the concepts of a “block universe” and its relation to time evolution of 3D space.

    Second, the context of the research Lee is focused on is quantum gravity. This means describing the notion of time and space in a quantum-mechanical fashion, as opposed to a classical, manifold-like description. Quantum gravity is all about guessing some more primitive structure, which can be approximated with a manifold at large distances. So in Lee’s work, the concept of space is replaced by one such more primitive structure (the set of “energetic causal events”), while the concept of time is more complicated (and too tricky to discuss here). But the point is that the model Lee speaks of cannot be interpreted simply as 3D space evolving in time. Rather, the elements of the primitive structure (which we perceive as “space”) are being *created* in discrete steps, and these steps can be interpreted as “time flow” in a certain sense.

    The bottomline is that both the “block-universe” idea and the “space evolving through time” idea are very inadequate descriptions of what is going on in this QG model.

    Joe,

    […] Smolin’s paper ‘The Universe as a Process of Unique Events’ (http://arxiv.org/abs/1307.6167)?

    I don’t have the maths-smarts to be able to make sense of that. My lazy, philosopher’s hunch is that he may be building mathematical towers on shaky linguistic foundations – but if someone could tell me that Smolin is effectively *grounding* his definition of the present moment in his theory of Energetic Causal Sets then that would give me good reason to change my mind about his thesis.

    Yes, that seems to be exactly what he is doing (I skimmed through the paper). The concept of “present” is essentially defined as the set of events which can still generate new events. The set of events which have been “exhausted” and do not generate new events are in the “past” (and as such can be ignored because they do not influence the physics anymore), while the set of yet-to-be-created events is the “future”, something which explicitly does not exist until it becomes created by the “present” events and thus becomes part of that present itself. Therefore, the set of “present” events contains all events that can still create additional events, despite the fact that some of these events have been created in the causal past of the others. It may sound confusing, but the math of the whole thing is pretty clear and straightforward (for a trained QG researcher, that is).

    So there are no shaky linguistic foundations here — only a shaky linguistic intuition interpreting the math, the latter being quite sound. 🙂
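    Purely as a toy of my own construction (not the actual energetic-causal-set formalism of 1307.6167), the verbal picture above can be caricatured in a few lines of Python: events carry a finite generative potential, the “present” is the set of events that can still create successors, and the “past” is the set of exhausted ones:

```python
import itertools

class Event:
    _ids = itertools.count()

    def __init__(self, parents=(), potential=2):
        self.id = next(Event._ids)
        self.parents = parents       # ids of the events that created this one
        self.potential = potential   # how many creations it can still join

def step(events):
    """One generative step: two present events jointly create a new event."""
    present = [e for e in events if e.potential > 0]
    if len(present) >= 2:
        a, b = present[:2]
        a.potential -= 1
        b.potential -= 1
        events.append(Event(parents=(a.id, b.id)))
    return events

universe = [Event(), Event(), Event()]   # three initial events
for _ in range(4):
    universe = step(universe)

past = [e.id for e in universe if e.potential == 0]    # exhausted events
present = [e.id for e in universe if e.potential > 0]  # still generative
```

    The only point of the caricature is that “past”, “present” and “future” fall out of the dynamics (exhausted, still generative, not yet created) rather than being coordinates on a pre-existing manifold.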

    Wesley,

    Is Smolin’s Principle of Precedence related in any way to Feynman’s Sum over Histories?

    Being “related in any way” is a very fuzzy notion here. The principle of precedence and sum over histories are certainly not the same thing, if that is what you are asking. They are both being employed in the model construction in 1307.6167. I also don’t see how either could be a full consequence of the other. Finally, whether one implies some partial aspects of the other or not — I don’t know, that is more fuzzy, and I cannot make any clear-cut statements without a detailed analysis.


  21. tienzengong, I’m not sure what it means to say that falsifiability is wrong because truth cannot be falsified. I’m afraid that sentence betrays a lack of understanding of what Popper said.


  22. Marko, the following is of course rather confusing to me, not least because I’m neither a mathematician nor a QG researcher… obviously. Still, it is intriguing. Could you provide an analogy that might help? I am particularly interested in how one could even access a “past” in terms of the definition of present and future. In other words, how would one even be able to determine this:

    “The set of events which have been “exhausted” and do not generate new events are in the “past” (and as such can be ignored because they do not influence the physics anymore), while the set of yet-to-be-created events is the “future”, something which explicitly does not exist until it becomes created by the “present” events and thus becomes part of that present itself.”

    Why would one even have to posit a past, much less ignore it?


  23. Let me simplify my confusion as a lay person. Suppose I’m outside at night looking into the sky. The light reaching my eyes is from the past but in my present. What does this mean?


  24. Marko – That’s just speculative math that leads to a rather ridiculous holographic universe. You can “believe” that what you see is a hologram of galaxies receding, but what I see is galaxies receding like anything recedes in space over time. Abstract models for reality are obviously not reality.

    tienzengong – I have told you many times that Godel simply offends his own logic with the question of whether a part of a consistent whole can ever be proved – a whole cannot by definition be divided to assess any part of it. That is logic, and you are right that it applies to any “language of logic”, because it is called conformity to definition – that’s logic.

    Robin Herbert – “the present moment is all that exists” is just a bad bit of grammar, nothing to be made of it. What they mean is the words I used – “a present moment is continual; it is all we ever experience”. That might get to the point of the statement, despite its grammar.

    Roger Granet – don’t expect to be able to “verify” everything about any experiment; remember that verification (Goedel) is a fool’s errand, so just “falsify” the bits you can.

  25. Massimo: “… I’m not sure what it means to say that falsifiability is wrong because truth cannot be falsified. I’m afraid that sentence betrays a lack of understanding of what Popper said.”

    “Falsifiability” might be a philosophical issue for philosophers; it is now a life-and-death issue for theoretical physicists. So, even if I lack an understanding of what Popper said, I do know what I have faced in Popperianism over the past 30 years. That is, what Popper actually said does not truly matter. I and many top physicists know what Popperianism (in terms of falsifiability) means in physics.

    There are two types of theoretical physics.

    T1, empirical data BASEd model building: the interplay of theories and verifications.

    T2, philosophical insights BASEd framework construction: can be verified (not falsified) and refuted with anchor-web matching and with beauty-contest.

    I personally do not see that T1 is truly theoretical physics. On the other hand, M-string theory is a T2 work, although it has failed to make contact with any known physics. Today, T2 physics faces two situations.

    S1, its consequence (not prediction) is way beyond current technology to probe.

    S2, the true truth cannot be falsified, by definition. Of course, the non-true T2 claims can be proved WRONG by the ‘theoretical truths’.

    Most of the top physicists who join the anti-Popperianism camp do so because of S1. But my stance comes from S2. Then, what is the ‘theoretical truth’?

    If X is a system which is declared true by some means (such as with T1 physics), then the LANGUAGE which describes X is ‘theoretical truth’, although the language itself is neutral (without any true/false value). Well, without examples, this will just be talking talks. Here are some examples.

    Ex 1, the Standard Model fermions are deemed as true. Then, the LANGUAGE (such as G-string, see http://putnamphil.blogspot.com/2014/06/a-final-post-for-now-on-whether-quine.html?showComment=1403375810880#c249913231636084948 ) which describes them is a ‘theoretical truth’.

    Ex 2, {(1/Alpha) = 137.0359 …} is deemed true. Then, an equation which derives that number is a ‘theoretical truth’; no need for any gadget-testing verification.

    The mission of M-string theory is to get {string unification}, but it has failed. If no one succeeds, the failure itself is not a proof of it being wrong. But we do have one LANGUAGE which successfully describes all SM fermions, and this is enough to declare that M-string is wrong.

    The objective of the multiverse is to show that this fine-tuned universe is just a happenstance among zillions of possibilities. Their key argument for this is that the fine-tuned nature constants of this universe cannot be DERIVED. By SHOWING how to derive them, the multiverse is easily refuted.

    The ‘anti-falsifiability’ issue is clearly defined and understood among physicists, regardless of what Popper said. Yes, ‘falsifiability’ must go in both cases (S1 and S2). This could be a great ‘intellectual’ topic for philosophy, but it is a life-and-death issue for theoretical physicists.

  26. Tienzengong, thanks for your lengthy explanation. The fact remains that your sentence referred to above is inconsistent with an understanding of Popper. It may very well be that physicists don’t need Popper, but they first need to understand him in order to justify rejecting him.

  27. I tend to agree with Thomas here. Indeed, I’m reading Smolin’s latest book and a major part of his and Unger’s argument is that denying the reality of time makes incomprehensible one of the major discoveries of 20th century cosmology: that the universe has an age.

  28. Joe Boswell,

    I enjoyed the interview, and the discussion that has followed. However, I admit that I would have preferred more Smolin and less Boswell. It seems you went into the interview not so much to clarify topics but to argue points. I think that has led to some loss of clarity in the comments that have followed.

    Smolin has a way of making suggestions concerning issues beyond his science research, but he is usually careful to note the need to leave such suggestions as suggestions, rather than positions he’s ready to defend. That may not be a wise strategy to follow; but insisting he defend suggestions as arguments loses sight of what can be gained by letting him clarify the points that he *is* arguing for (which are difficult enough for a non-scientist to follow).

  29. Thomas,

    Could you provide an analogy that might help

    Actually, I can. There is a nice analogy that can illustrate the behaviour of physical “events” in this model. Consider the population dynamics of, say, humans — people can live, mix their genetic material, procreate and die. So you can divide the “total” population of humans into three distinct subsets — those that cannot procreate anymore (dead and sterile people), those that can still procreate (alive fertile people) and those that are yet to be conceived (people who could potentially be born as offspring from the previous group). These three subsets are called (by definition) the “past”, the “present” and the “future”.

    So each human corresponds to a physical “event”. Combining two “present” humans, one can obtain a new human (a fresh new “event”) that becomes part of the “present”. The properties of the new human are partly determined by the genes of the parents (the energies and momenta of the parent events), but not completely, since there is an element of unpredictability in the mixing of genes (the “event generator” is uncomputable). Note that both the still-fertile parents and their offspring are all members of the “present”, despite the fact that the parents are “older” than the offspring in the causal chain (or more precisely, web) of events. Note also that incest-like behaviour is allowed: a parent and an offspring can interact to generate further offspring. A given human becomes a member of the “past” when they die or otherwise become sterile — they cannot have offspring anymore, i.e. they cannot influence the course of the “future becoming the present”.

    The progress of time is described by the evolution of the population of the “present”, since existing members die into the “past” while novel members come into existence from the “future”. The evolution is irreversible, nondeterministic, and the “laws of nature” that describe the regularities among the events in the “present” are dependent on the events themselves. This means that as the events die out and are replaced by new events, the laws describing regularities in new events may be different than the laws describing old events. So laws of physics are in that sense time-dependent. Another way to put it is that laws of physics are emergent properties of the particular set of events, and contingent on it — as the set changes, so may the laws describing it.
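    For readers who like to tinker, the bookkeeping above can be sketched as a tiny toy simulation. To be clear, this is purely an illustration of the population analogy, not the model of 1307.6167; the event “capacity”, the random pairing and every name here are invented for the sketch.

```python
import random

# Toy sketch of the population analogy (an invented illustration, NOT the
# model of arXiv:1307.6167): "present" events pair up to generate new
# events; once an event has exhausted its generative capacity it retires
# into the "past" and never influences the dynamics again.

def evolve(steps, max_offspring=3, seed=0):
    rng = random.Random(seed)
    present = {0: max_offspring, 1: max_offspring}  # event id -> remaining capacity
    past = set()
    next_id = 2
    for _ in range(steps):
        if len(present) < 2:                   # nothing left to combine
            break
        a, b = rng.sample(sorted(present), 2)  # two "parent" events interact
        present[next_id] = max_offspring       # the fresh event joins the present
        next_id += 1
        for parent in (a, b):
            present[parent] -= 1
            if present[parent] == 0:           # exhausted -> moves to the past
                past.add(parent)
                del present[parent]
    return past, set(present)

past, present = evolve(50)
```

    As in the analogy, parents and offspring coexist in the “present”, and the “past” never regains influence. Nothing here is physics; it only mimics the set-membership bookkeeping described above.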

    The issue of uncomputability of the event-generator is what Lee referred to as the “meta-law dilemma” — whether the law that governs the evolution of the laws of physics is computable or not (although I doubt he ever phrased it this way).

    The model presented in 1307.6167 is fairly analogous to the above population dynamics. The only thing that is a bit awkward is the definition of notions of past, present and future, which are somewhat at odds with the usual intuition of the terminology. But if you consider space, time and matter as a collection of discrete “events” (this is the context of causal set theory), the way that notions of past/present/future are defined becomes more natural. In fact, it would actually be quite hard to come up with a better definition.

    HTH! 🙂

    By the way, I feel that the above description of the time flow (the future becoming the present and then the past) will make Brodix very happy! 😉

    Massimo,

    Note that I haven’t read the actual book (I am only moderately familiar with Lee’s research), so I am not sure what problem you are referring to. But let me just say that (in my opinion) the model presented in 1307.6167 is still a far cry from being anywhere near applicable to cosmology. Lee has often repeated that this research direction requires a lot more work to upgrade it from an interesting toy-example idea to a comprehensive theory. Only then could one ask the model about its cosmological predictions. But even intuitively, I don’t quite understand what the trouble with cosmology would be in this setup. Can you phrase the problem more precisely?

  30. Massimo: “The fact remains that your sentence referred to above is inconsistent with an understanding of Popper. It may very well be that physicists don’t need Popper, but they first need to understand him in order to justify rejecting him.”

    Thanks for your comment. Of course, Popper had more achievements than this single ‘falsifiability’ issue (FI). But this FI became the dominant force for doing science. My issue with Popper does not go beyond this FI. While the anti-FI movement arose just a few years ago (after some pet projects failed with the LHC Run I data), my problem with FI is more fundamental.

    If there is ‘theoretical truth’, then FI is wrong for it. There are many things in this universe which cannot be falsified.

    Let me use my “Prequark Chromodynamics” (http://www.prequark.org/ ) as an example. No, it is not a theory, as it does not PREDICT anything. It is only a LANGUAGE which describes the SM fermions. That language uses the term Prequark (Angultron, Vacutron). Are Angultron and Vacutron predictions? No, they are just the alphabet and lexicon of the language. If this language cannot describe ALL SM fermions, then it is not good, or simply wrong. If it does a good job as a language, it is ‘theoretically true’ as a DESCRIPTION.

    Can we turn this Prequark language into a testable theory?

    For a sub-particle composite, the composed particle can be pumped to an excited state. Yet for an iceberg-type composite system, the constituents (a big chunk of ice, a large ocean of water and a huge sky) are zillions of times more massive than the composed system (the visible iceberg). An iceberg composite might not be able to be pumped to an excited state. The namesake of Prequark is the Pre- (not sub-): before quark, but not necessarily smaller (see http://www.quantumdiaries.org/2015/02/04/lhc-run-ii-excited-quarks/#comment-1843911054 ). Thus, a failure to see an excited quark does not falsify the Prequark. The Pre- (not sub-) quark is not a ‘particle’ by any means. Vacutron is just an alphabet for the quantum vacuum. Angultron is just an ‘orientation (as angle)’ in the quantum vacuum. Thus, they are way beyond the technology we have to probe them.

    Again, the equations which calculate all nature constants are just LANGUAGES. One can claim that those are numerologies. Yet for a single scheme to calculate them all, it would require huge luck. Furthermore, the scheme (a timelessness-to-‘arrow of time’ process, an immutability-to-SM-fermion-structure mechanism) is not a PREDICTION. It is just the BASE for the language (the equations).

    We are now able to DESIGN our own universe and enter it into a beauty-contest with Nature’s design. The design starts from arbitrarily choosing a set of axioms. From there, we write axiomatic sentences and theorems, then enter the beauty-contest. No, there is no PREDICTION, just sentences, theorems and a beauty-contest.

    This is how theoretical physics is done now. Indeed, it needs no FI anymore.

  31. Thank you, Marko. As usual in your case, that’s clear and helpful. Funny you mentioned Brodix, because I thought of him when you provided your earlier description. The problem for me in your analogy is that you don’t account for a fourth subset: those who can procreate but choose not to, which would seem by definition to straddle both past and present, and possibly the future. Or would this event be a sort of facsimile of sterility? But I understand that this is merely an analogy. It’s interesting to me because I’ve frequently been bothered by notions of the present. Notions of the present are in some ways more difficult for me to grasp and express than those of the past and future. The present seems a subliminal point needed to bridge past and future.

  32. phoffman56: Thanks for the reply. I’ll make only one comment, so as not to distract from the rest of the Smolin essay and comments. But when you mention that the pairing-off is easier to do within the set of the integers, as shown by:

    ~~~~~~~~
    (x+x , x) for each x in the subset; i.e. (2,1) (4,2) (6,3) etc…; and

    (x+x-1 , 1-x) for each x in the subset; i.e. (1,0) (3,-1) (5,-2) (7,-3) etc….

    Then it is not all that difficult to prove, using some elementary given properties of the initially given set, that the leftmost elements consist of each of the positive integers occurring exactly once, and the righthand elements similarly except ‘all integers’ replacing ‘positive integers’.
    ~~~~~~~~

    the gap between the leftmost positive integer and the rightmost integer in the above pairs increases to infinity as you go to the right. This seems to suggest that, within the context of the single set, the pairing-off method has broken the “natural” one-positive-integer-for-every-two-total-integers relationship that occurs in the set of naturally sequenced integers. It’s that relationship between the positive integers and the total integers, as they march in lockstep towards infinity, that seems most important to me and that is lost in the pairing-off method.
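    Just to make this concrete, the quoted pairing can be checked mechanically for a finite cut-off. The cut-off N and the variable names below are mine, purely for illustration:

```python
# Verify the two pairing families quoted above, truncated at x = 1..N:
#   (2x,   x)    ->  (2,1) (4,2) (6,3) ...
#   (2x-1, 1-x)  ->  (1,0) (3,-1) (5,-2) ...
N = 1000
pairs = [(2 * x, x) for x in range(1, N + 1)] + \
        [(2 * x - 1, 1 - x) for x in range(1, N + 1)]

lefts = sorted(l for l, _ in pairs)
rights = sorted(r for _, r in pairs)

# Left elements: each positive integer 1..2N exactly once.
left_ok = lefts == list(range(1, 2 * N + 1))

# Right elements: each integer from 1-N to N exactly once.
right_ok = rights == list(range(1 - N, N + 1))

# The gap l - r in the second family is 3x - 2: it grows without bound,
# even though every element is still matched exactly once.
gaps = [(2 * x - 1) - (1 - x) for x in range(1, N + 1)]
```

    This is just the finite shadow of the argument: the growing gap is real, but it does not stop the pairing from being a clean one-to-one match.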

    Relating this back to the interview with Smolin, I just wonder whether, in studying lab-scale subsystems, some relationships between a subsystem and the rest of the universe are also being broken. Just speculating.

    Thanks again for the feedback!

  33. Marko,
    It very much does. Though I think there are a lot of other aspects of it which can be considered.
    Considering I had a grandfather who had a child at 70, there are fuzzy lines in your analogy. I would describe what is fully past as those forms which no longer contain any of their constituent energies. Such as yesterday is fully past, while a rock that has been around for billions of years is not. Given there is no energy in the future, but energy is conserved, the future is indeterminate, but certain.
    Which would lead to the issue of determinism, as in causality yields determination, not the other way around, as seems to be an issue when we take that narrative dimension as fundamental.
    While this view does have problems for aspects of current physics, such as making time an effect of action, like temperature and therefore emergent, as well as making it irreversible, since actions, by definition of their inertia, are going one direction and not the other, I’m surprised so few even want to debate the issue.
    The more educated people are, the more they are set on a particular course of thinking, like learning one language and not another.
    Thanks for the tip anyway.
    Regards,
    John

  34. Terrific.
    First prize would be if Lee Smolin and Roberto Unger joined this conversation.

    The heart of Lee Smolin’s thought is contained in the conclusion to his new book, The Singular Universe, on page 501:

    … science is not about what might be the case. There are an infinite number of things that might be true of the universe, but which could never be observed …. Science is only about what can be conclusively established on the basis of rational argument from public evidence … science …must begin with the principle that there is a single causally connected universe that contains all its causes. By all causes, I also mean that the laws themselves are explained in a way that has testable consequences.

    There are two possibilities.
    1. There is a single causally connected universe that contains all its causes. Such a universe is ultimately self-explanatory, because it contains its own laws, which can be derived from the development of the universe. This is an atheist-friendly hypothesis that discards the need for a God hypothesis. This is the possibility that Lee Smolin advocates.

    2. The universe does not contain all its causes. In this case there are boundaries to science that can never be crossed. We can speculate but never know whose speculation is closest to the truth. The seemingly timeless and unchanging nature of the laws of nature points to something that lies outside the universe. That is because they are not time-dependent in a time-dependent universe and so cannot be part of the universe. This is a theist-friendly hypothesis, since a hypothetical God would be ‘external’ to his creation.

    These two possibilities are metaphysical commitments and not science. Science, as a whole, has implicitly, and of necessity, made a metaphysical commitment to (1), ‘a single causally connected universe that contains all its causes’. But science is meeting boundaries, and that seems to support the second possibility: the universe does not contain all its causes. This is a deeply unsettling possibility for atheism, since it opens the door to theism. The ill-considered books by Krauss and Hawking are an attempt to close the door. But unprovable speculation cannot close that door. By indulging in unprovable speculation they have adopted the mantle of the priests they disdain.

    Lee Smolin is right in that the only defensible metaphysical commitment that science can make is to ‘a single causally connected universe that contains all its causes’. As he says, ‘Science is only about what can be conclusively established on the basis of rational argument from public evidence’.

    Leave the rest to the humanities and to the priests.

    Speculation is allowed, indeed desirable, because it opens paths for science to follow. That is only possible if we restrict speculation to what is potentially testable. Most importantly, we should never elevate speculation to dogma, as Krauss and Hawking have done. That is the First Deadly Sin of Science.

  35. Hi Joe Boswell,

    Good points and clarifications.

    > I’d like to point out that Cosmological Natural Selection *does* make falsifiable predictions

    Sure. I was really talking about the Principle of Precedence here.

    > Smolin is very aware of this. He calls it the “meta-law dilemma”.

    Good to know, but it seems to me to really undermine his position. His confidence that the meta-law dilemma will be resolved is no different in my eyes to the confidence of the Newtonian paradigm that there is some explanation for eternal natural law. Any kind of explanation that could account for the meta-laws could be used to account for the laws themselves. He has posited a redundant cosmic middle-man. He could be right, but there is very little reason to motivate any confidence at all that he is. Occam’s razor seems to be against him.

    Hi Massimo,

    > a major part of his and Unger’s argument is that denying the reality of time makes incomprehensible one of the major discoveries of 20th century cosmology: that the universe has an age.

    Correct me if I’m wrong here, but nobody is arguing that time is not real. There are simply different interpretations of the nature of time. In the B-theory of time, for instance, where past, present and future all exist as one construct outside of time, time exists within the universe as one of its dimensions. To say the universe has an age is far from incomprehensible: it is just to say that the present moment is about 14 billion years along the time axis from the origin.

  36. Marko, I’d rather wait until I finish at least the first half of the new book. I plan on publishing two reviews of it here at SciSal, one per half.

  37. DM, what Unger and Smolin mean when they say that modern fundamental physics (as opposed to cosmology) doesn’t take time to be real is that the field equations of general relativity have no place for a universal time (all reference frames are relative), and that they are interpreted as implying a unified “spacetime.” The authors instead think that this is a limitation of GR, and that time is fundamental, possibly with space being emergent. Similar issues apply to quantum mechanics, whose equations are also time-symmetric. As I said, though, I’d rather stay out of further involvement until I’ve had time to read (and digest) at least half of their book, on which I will write two (half) reviews.

  38. There’s an interview with Smolin earlier this year in SA where you’ll find comments like this:

    “As Roberto Mangabeira Unger and I argue in our new book The Singular Universe, the most important discovery cosmologists have made is that the universe has a history. We argue this has to be extended to the laws themselves. Biology became science when the question switched from listing the species to the dynamical question of how species evolve. Fundamental physics and cosmology have to transform themselves from a search for timeless laws and symmetries to the investigation of hypotheses about how laws evolve.”

    . . . . .

    “There is no use wondering what symmetry unifies the elementary particles and forces if Penrose and Leibniz are right that the more fundamentally we understand nature the less symmetry the laws will have. Nor is it fruitful to look for principles to frame timeless laws when the real story is that the laws evolved, and so are to be explained historically.”

    http://blogs.scientificamerican.com/cross-check/2015/01/04/troublemaker-lee-smolin-questions-if-physics-laws-are-timeless/

  39. Hi Massimo,

    It may very well be that physicists don’t need Popper, but the first need to understand him in order to justify rejecting him.

    Physicists understand Popperian falsifiability well enough (after all, it was the way physics already worked before Popper produced his commentary). Nor is there really any move among physicists to abandon Popperian falsifiability. Even Sean Carroll’s Edge post on falsifiability is really just asking that it not be interpreted too simplistically, and philosophers of science have already accepted that.

    Hi labnut,

    The universe does not contain all its causes. In this case there are boundaries to science that can never be crossed.

    Science is a process of trying to trace causes all the way back. There is no arbitrary stopping point where we declare “boundary of universe”, where everything beyond that stops being science. Any “cause” that has effects in the observable universe is, by definition, part of the universe and is fair game for science (and is amenable to study owing to those effects).

    Science, as a whole, has implicitly, and of necessity, made a metaphysical commitment to (1), ‘a single causally connected universe that contains all its causes‘.

    Not true. What sorts of causes there are, and their nature and their own origin, is not any sort of “metaphysical commitment” that science must make. Rather, enquiring into the answers to such questions is exactly what science does.

  40. Universal Laws?
    The only laws of Nature are the ones we unnaturally create. The Universe is united and free unless you were taught to measure and divide it, and boxed or govern it All in. Haven’t we all been taught science and religion? Do you believe or have faith in what you were taught? How many laws have we created, can they even be counted, and what do they all mean? To me: science is science with its measurements and divisions, religion religion with the same inequities, the measure and division of the Gods and the Godless, and then there is truth, singularity, the absolute, freedom at last. I believe in that or this, =

  41. Coel,

    Any “cause” that has effects in the observable universe is, by definition, part of the universe and is fair game for science (and is amenable to study owing to those effects).

    I don’t want to come across as disagreeing with you here, or as being a nitpick, but just for the sake of precision — we should really say “Any cause that has reproducible effects in the observable universe…”. There are things that cannot be reproducibly studied, and we (physicists, scientists…) must always be acutely aware of the difference between observation and experiment. For example, astrophysics and cosmology are playing on the knife’s edge in this respect (successfully, but still…), as I’m sure you are aware. 🙂

    This nitpicking is just a note for non-scientists — in case they are not aware of the level of rigour necessary for something to be called “science”. 😉

  42. Without solid examples, the beauty-contest will just be talking talks. I will now show solid examples of the beauty pageant, with rounds of contest and with matching contestants.

    Dp: discovered physics, representing the Nature’s design

    Mdu: My designed universe, built by choosing a single SCHEME. Yet this scheme will not be discussed here; only the contestants will be presented.

    Wp1: winning point one

    Round 1: some nature constants
    Cabibbo angle (Dp) = 13.04 degrees
    θc (Mdu) = 13.5211574853

    Weinberg angle (Dp) = 28 to 30 degrees
    θW (Mdu) = 28.75 degrees

    Beta (1/Alpha) (Dp) = 137.0359 …
    Beta (Mdu), with Alpha equation = 137.0359 …

    Wp1: In Dp, these three are free parameters and cannot be derived. Yet in Mdu, they are derived values.

    Wp2: in Dp, they are unrelated. In Mdu, they are linked in a chain of derivations. (See http://prebabel.blogspot.com/2012/04/alpha-fine-structure-constant-mystery.html and http://prebabel.blogspot.com/2011/10/theoretical-calculation-of-cabibbo-and.html ).

    Wp3: Dp puts no constraint on the Multiverse. Mdu refutes the Multiverse by yanking out its objective (the claim that those constants cannot be derived).
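    To keep Round 1 purely arithmetical, the quoted numbers and their relative differences can be tabulated as below. The Dp Weinberg value is quoted as a 28 to 30 degree range, so the midpoint there is an assumption of this sketch; nothing here validates the scheme itself, it only measures numerical proximity.

```python
# (Dp, Mdu) pairs exactly as quoted in Round 1.
# Weinberg Dp is quoted as a 28-30 degree range; the midpoint 29.0 is
# an assumption made for this tabulation only.
quoted = {
    "Cabibbo angle (deg)": (13.04, 13.5211574853),
    "Weinberg angle (deg)": (29.0, 28.75),
    "1/alpha": (137.0359, 137.0359),
}

# Relative difference of each pair: numerical proximity, nothing more.
rel_diff = {name: abs(dp - mdu) / dp for name, (dp, mdu) in quoted.items()}
```

    On the quoted figures, the 1/alpha values agree to the digits shown, while the two angles differ at the level of a few percent or less.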

    Round 2: SM fermions
    Dp: quarks and leptons are discovered particles.

    Mdu: quarks and leptons are described with a LANGUAGE (see http://www.prequark.org/ ).

    Wp1: Dp does not provide a framework to calculate how many generations of quarks there are. The 4th-generation quarks and the sterile neutrino are speculated. But the grammar of the Mdu language allows only three generations (no more, no less). The latest CMB data shows that Neff = 3.04.

    Wp2: Dp does not prohibit SUSY. But the EDM ruled out any SUSY below 30 TeV, and the LHCb rules it out below 100 TeV. Again, the grammar of the Mdu language does not allow any SUSY. Furthermore, SUSY plays no role in the nature-constant calculations.

    Wp3: although human intelligence is an empirical fact, Dp cannot provide any hint about its emergence. In Mdu, a Turing computer is embedded in both the proton and the neutron (see http://www.prequark.org/Biolife.htm ).

    Wp4: in Mdu, the fermions (quarks and leptons) are iceberg-type composite particles. By a similarity transformation, the Cosmo (the entire universe) is also an iceberg-type composite system. So the Planck data (dark energy = 69.2%; dark matter = 25.8%; and visible matter = 4.82%) can be calculated, see https://scientiasalon.wordpress.com/2014/10/28/the-varieties-of-denialism/comment-page-1/#comment-9212 .

    Round 3: UP {delta P x delta S >= ħ}
    Dp: this is an empirical fact, cannot be derived.

    Mdu: the governing force F (Cosmo) = ħ/(delta S x delta T), where S is space and T is time. That is, the UP is derived from this F (Cosmo).

    Wp1: UP is derived in Mdu.

    Wp2: UP (Dp) does not give any hint about dark energy. In Mdu, the dark energy is the consequence of this F (Cosmo), see http://prebabel.blogspot.com/2013/11/why-does-dark-energy-make-universe.html .

    Are these Wps predictions? No. They are just the beauties of the language.
    Can they be falsified? No. Falsifiability is PRACTICALLY useless in this beauty-contest.
