The evidence crisis

by Jim Baggott

Thanks to a kind invitation from the Simons and John Templeton Foundations and the World Science Festival, last Friday (30 May) I participated in a public discussion on ‘Evidence in the Natural Sciences’ with Professors Brian Greene and Peter Galison.

This discussion was the final act in a one-day symposium of the same name, held at the Simons Foundation’s Gerald D. Fischbach Auditorium on 5th Avenue, in New York City. These were comfortable, well-appointed surroundings. But the overwhelming message from the symposium was actually quite discomfiting. In its 300-year maturity, it seems that science is confronted with nothing less than a crisis of evidence.

The crisis takes many forms. I learned that mathematicians are increasingly resorting to computer-based proofs that signal a loss of certainty and the ‘end of conviction.’ Efforts are underway to develop computer-based algorithms that will soon provide the only way to review such proofs, leading one audience member to wonder how long it will take to eliminate mathematicians entirely from the process.

Eliminating humans, and their biases and general lack of self-criticism, appears to be the only workable solution to a crisis of evidence in the bio-medical sciences as well. This is a field in which John P. Ioannidis (now at the Stanford School of Medicine) famously declared in 2005 that ‘most published research findings are false’ [1]. This was real sit-up-and-take-note stuff. The research findings in question are of the kind that can lead eventually to clinical trials of new drugs.

I’d been invited to address yet another type of evidence crisis. Last year I published a book called Farewell to Reality, which challenges some of the prevailing opinions about the kind of contemporary theoretical physics that addresses our ‘big questions’ concerning the nature of the physical universe. In it I argue that some theorists have crossed a line. They are suffering a ‘grand delusion,’ a belief that they can describe physical reality using mathematics alone, with no foundation in scientific evidence. I call the result ‘fairy-tale’ physics.

My role in our public discussion was that of interlocutor and facilitator. Greene is of course widely known for his Pulitzer-shortlisted The Elegant Universe and follow-ups The Fabric of the Cosmos and The Hidden Reality, his many radio and TV appearances and his growing role as a popular science educator (he is co-founder of the World Science Festival with his wife, former ABC News producer Tracy Day). Galison is a Harvard science historian with a flair for popularization, author of Einstein’s Clocks, Poincaré’s Maps and Objectivity. He has developed a couple of TV documentaries, about the H-bomb and about national secrecy and democracy, and is currently working on a film about the long-term storage of nuclear waste.

And then there was me, sitting in the middle. An interlocutor with an agenda. What follows is not a transcript of our discussion (I’m hoping that the Simons Foundation will post a video of this online), but rather a summary of my position.

So What’s the Problem?

Wind the clock back. On 4 July 2012, I watched a live video feed from the CERN laboratory near Geneva, and celebrated the announcement that a particle that looked a lot like the Higgs boson had finally been discovered.

This was a triumph for a theoretical structure called the standard model of particle physics. This is the theory that describes physical reality at the level of elementary particles and the forces between them and which helps us to understand the nature of material substance.

But our joy at the discovery of the Higgs was tempered by concern. We know that the standard model can’t be the whole story. There are lots of things it can’t explain, such as the elementary particle masses and the nature of dark matter. And it is not a ‘theory of everything’: it takes no account of the force of gravity.

We build scientific theories in an attempt to describe and hopefully explain empirical data based on observations and measurements of the physical universe around us. But in the twenty-first century we’ve run into a major obstacle. We have evidence that tells us our theories are inadequate. But we have no data that provide meaningful clues about how our theories might be improved. Theorists have therefore been obliged to speculate.

But in their vaulting ambition to develop a ‘theory of everything,’ some theorists have crossed a line without any real concern for how they might get back. The resulting theories, invoking superstrings, hidden dimensions and a ‘multiverse,’ among other things, are not grounded in empirical evidence and produce no real predictions, so they can’t be tested. Arguably, they are not science.

Albert Einstein once warned [2]: “Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally by pure thought without any empirical foundations — in short, by metaphysics.” What did Einstein mean? Quite simply, there can be no science without evidence or at least the promise of evidence to come.

How Should We Interpret ‘Reality’?

I believe that the root of the problem lies in the way we seek to interpret the word ‘reality.’ Pick up any text on philosophy and you’ll find discussions of reality under the general heading ‘metaphysics.’ How come? Physical reality seems really rather tangible and logical. It confronts us every morning when we wake up. Surely, despite what the philosophers might say, we can be pretty confident that reality continues to exist when there’s nobody looking. The science fiction writer Philip K. Dick once declared: “Reality is that which, when you stop believing in it, doesn’t go away.” [3]

But reality is curiously schizophrenic. There is an ‘empirical reality’ of things as we observe or measure them. This is the reality that scientists try to address. The purpose of science is to seek rational explanations and ultimately an understanding of empirical reality by establishing a correspondence between the predictions of scientific theories and the results of observations and measurements. Such a correspondence gives us grounds for believing that the theory may be ‘true.’

Here’s an example. In 1964, Peter Higgs, Francois Englert and Robert Brout speculated that there must exist a special kind of quantum field — which became known as the Higgs field — responsible for giving mass to elementary particles. In 1967 Steven Weinberg used this field to predict the masses of some exotic particles called W and Z bosons, which we can think of as ‘heavy photons.’ These particles were discovered at CERN in 1983, with more-or-less the masses that Weinberg had predicted. Consequently, the Higgs field was incorporated into the standard model of particle physics.

But there could in principle have been other possible explanations for the masses of the W and Z particles. If the Higgs field really exists, then it should produce a tell-tale field quantum — the Higgs boson. In 2012, establishing a correspondence between the empirical data produced at CERN and theoretical predictions for the behavior of the Higgs boson gave us grounds to believe that the Higgs field really does exist and that the standard model is ‘true’ within its domain of applicability.

We would perhaps not hesitate to declare that lying beneath this empirical reality must be an independent reality of things-in-themselves, a reality of things as they really are. But such an independent reality is entirely metaphysical. Kind of by definition, we cannot observe or measure a reality that exists independently of observation or measurement. We can only speculate about what it might be like. As Werner Heisenberg once said: “We have to remember that what we observe is not nature in itself, but nature exposed to our method of questioning.” [4]

It is this independent reality that philosophers try to address, which is why their speculations appear under the heading of ‘metaphysics.’ Now, philosophers are not scientists. They don’t need evidence to establish a correspondence between their interpretation of an independent reality and our empirical world of observation and measurement. They’re more than satisfied if their interpretation is rationally and logically structured and coherent. There is truth here, but of a subtly different kind.

Crossing the Line

Contemporary theorists find themselves caught in a bind. Without any clues from empirical data to guide theory development, and ever eager for answers to the ‘big questions’ of human existence, it seems that theorists have had no choice but to cross the line from physics to metaphysics.

There’s nothing wrong with this. Theorists have been doing this for hundreds of years. But, as scientists rather than philosophers, they have speculated about the nature of an independent reality of things-in-themselves with the aim of getting back across the line as quickly as possible. Einstein’s special and general theories of relativity were founded in arguably metaphysical speculations about the nature of space and time. But Einstein was at pains to get back across the line and show how this interpretation of space and time might manifest itself in our empirical reality of observation and measurement. The rest, as they say, is history.

Contemporary theorists have simply stopped trying to find their way back. Worse, they have built a structure so complex and convoluted and riddled with assumptions that it’s virtually impossible to get back.

What do I mean? As they have explored the metaphysical landscape of a mathematically-defined independent reality, the theorists have misappropriated and abused the word ‘discovery.’ So, they ‘discovered’ that elementary particles are strings or membranes. They ‘discovered’ that there must be a supersymmetry between different types of particle. They ‘discovered’ that the theory demands six extra spatial dimensions which must be compactified into a space so small we can never experience them. They ‘discovered’ that the five different types of superstring theory are subsumed in an over-arching structure called ‘M-theory.’ Then, because they ‘discovered’ that there are 10-to-the-power-500 different ways of compactifying the extra dimensions, each of these must describe a different type of universe in a multiverse of possibilities. Finally, they ‘discovered’ that the universe is the way it is because this is one of the few universes in the landscape of 10-to-the-power-500 different kinds that is compatible with our existence.

I want you to be clear that these are not discoveries, at least in the sense of scientific discoveries. They are assumptions or conclusions that logically arise from the mathematics but for which there is absolutely no empirical evidence. It’s not really so surprising that the theory struggles to make any testable predictions. There is simply no way back to empirical reality from here.

Don’t be blinded by all the abstract mathematics, all the ‘dualities’ which connect one kind of mathematical description with another. These help to establish ‘coherence truths,’ of the kind X = Y. But when neither X nor Y corresponds to anything in the empirical world that even hints at the possibility of an observation or a measurement, then we can be clear that this all remains firmly metaphysical.

Alarm Bells

We have a problem. The theorists are stuck on the wrong side of the line, and most believe there is no viable alternative. As Nobel laureate Steven Weinberg remarked to me a little while ago [5]:

“String theory still looks promising enough to be worth further effort. I wouldn’t say this if there were a more promising alternative available, but there isn’t. We are in the position of a gambler who is warned not to get into a poker game because it appears to be crooked; he explains that he has no choice, because it is the only game in town.”

Obviously, we sympathize. But what if, instead of being obliged to attend remedial therapy, those addicted to gambling were able somehow to influence the rules, to make gambling an acceptable pastime? No scientist likes to be stigmatized, to be accused of pseudo-science. This is why some in the theoretical physics community are seeking to change the way we think about science itself.

For example, string-theorist-turned philosopher Richard Dawid recently argued [6]: “final theory claims introduce the new conception of a scientific process that is characterized by intra-theoretical progress instead of theory succession … The status of a merely theoretically confirmed theory will always differ from the status of an empirically well-tested one. However, in the light of the arguments presented, this difference in status should not be seen as a wide rigid chasm, but rather as a gap of variable and reducible width depending on the quality of the web of theoretical arguments.”

The problem with this is that as soon as we accept the notions of ‘intra-theoretical progress’ and ‘theoretically confirmed theory’ we risk completely disconnecting from any sense of real scientific progress. We risk losing respect for evidence, deepening the crisis, unplugging from empirical reality and training — how many? one, two? — generations of theorists to believe that this is all okay, that this is science fit for our modern, post-empirical age. We ensure they inherit an addiction to gambling.

Some are already talking of these theorists as ‘lost generations’ [7]: “It is easy to estimate the total number of active high-energy theorists. Every day hep-th and hep-ph bring us about thirty new papers. Assuming that on average an active theorist publishes 3-4 papers per year, we get 2500 to 3000 theorists. The majority of them are young theorists in their thirties or early forties. During their careers many of them never worked on any issues beyond supersymmetry-based phenomenology or string theory. Given the crises (or, at least, huge question marks) in these two areas we currently face, there seems to be a serious problem in the community. Usually such times of uncertainty as to the direction of future research offer wide opportunities to young people, in the prime of their careers. To grab these opportunities a certain reorientation and re-education are apparently needed. Will this happen?”
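
To make the arithmetic behind Shifman’s estimate explicit, here is a minimal back-of-envelope sketch in Python. The daily posting rate and the papers-per-theorist figures come from the quote above; the assumption that the arXiv posts new papers on roughly 250 days a year is mine, added only to turn the daily rate into an annual one.

```python
# Back-of-envelope estimate of the size of the high-energy theory community,
# following the reasoning quoted above. Figures from the quote: ~30 new hep-th
# and hep-ph papers per day, and 3-4 papers per active theorist per year.
# Assumed here (not in the quote): ~250 arXiv posting days per year.

papers_per_day = 30
posting_days_per_year = 250          # assumption: weekday postings only
papers_per_year = papers_per_day * posting_days_per_year

for papers_per_theorist in (3, 4):
    theorists = papers_per_year / papers_per_theorist
    print(f"{papers_per_theorist} papers per theorist per year -> "
          f"~{theorists:,.0f} active theorists")

# With these inputs the estimate comes out at roughly 1,900-2,500 theorists,
# the same order of magnitude as Shifman's quoted 2,500-3,000; the exact range
# depends on how many posting days and how much co-authorship one assumes.
```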

Maybe it’s already too late. In a more recent assessment, Dawid writes [8]: “Many physicists may wish back the golden old days of physics when fundamental theories could (more often than not) be tested empirically within a reasonable period of time and a clear-cut empirical verdict in due time rendered irrelevant all tedious theoretical considerations concerning a theory’s viability. Empirical science, however, must answer to the situation it actually faces and make the best of it. A sober look at the current situation in fundamental physics suggests that the old paradigm of theory assessment has lost much of its power and new strategies are already stepping in.”

There’s more. Scientists have a duty of care to a public that has developed an unprecedented appetite for popular science. This is an appetite that was greatly enhanced by the success of Stephen Hawking’s A Brief History of Time and has been fed by some excellent science writing, not least from Greene himself.

I haven’t done the research, but I very much suspect that if you were to ask a randomly selected group of scientifically literate readers about the theories we use to describe and understand the universe, many of these readers would likely tell you something about superstrings, hidden dimensions and the multiverse.

In truth, today these theories describe nothing and add nothing to our understanding, because this is metaphysics, not science. These theories do not form part of the accepted body of tried-and-tested scientific theory used routinely to describe our physical world, the kind used at CERN in the hunt for the Higgs boson. As Nobel laureate Tini Veltman claimed, paraphrasing Wolfgang Pauli, these theories are ‘not even wrong.’ [9]

Now readers of popular science might just want to be entertained with the latest ‘Oh wow!’ revelations from contemporary theoretical physics. But surely they also deserve to know the truth about the scientific status of these theories. I think Danish science historian Helge Kragh hit the nail squarely on its head when he observed, in a review of John Barrow and Frank Tipler’s The Anthropic Cosmological Principle [10]:

“Under cover of the authority of science and hundreds of references Barrow and Tipler, in parts of their work, contribute to a questionable, though fashionable mysticism of the social and spiritual consequences of modern science. This kind of escapist physics, also cultivated by authors like Wheeler, Sagan and Dyson, appeals to the religious instinct of man in a scientific age. Whatever its merits it should not be accepted uncritically or because of the scientific brilliancy of its proponents.” Amen.

In the end

I believe that contemporary theoretical physics has lost its way. It has retreated into its own small, self-referential world. In search of a final ‘theory of everything,’ theorists have been obliged to speculate, to cross the line from physics to metaphysics. No doubt this was done initially with the best of intentions, the purpose being to get back across the line carrying some new insight about the way the universe works that would provide an empirical test. Instead, the theorists have become mired in a metaphysics from which they can’t escape.

We might ask if there’s any real harm done. I personally think there’s a risk of lasting damage to the nature of the scientific enterprise. Admitting ‘evidence’ based on ‘theoretically confirmed theory’ is a very slippery slope, one that risks undermining the very basis of science. In the meantime, the status of this fairy-tale physics has been mis-sold to the wider public. We’re in crisis, and we need a time-out.

_____

I’d like to acknowledge a debt to Columbia University mathematical physicist Peter Woit, and especially his book Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics, Vintage, London, 2007.

Jim Baggott completed his doctorate in physical chemistry at the University of Oxford and his postgraduate research at Stanford University. He is the author of The Quantum Story, The First War of Physics, and A Beginner’s Guide To Reality. Most recently he published Farewell to Reality: How Modern Physics Has Betrayed the Search for Scientific Truth.

[1] J.P.A. Ioannidis, ‘Why Most Published Research Findings are False,’ PLoS Medicine, 2(8), e124, August 2005.

[2] Albert Einstein, ‘On the Generalised Theory of Gravitation,’ Scientific American, April 1950, p. 182.

[3] Philip K. Dick, from the 1978 essay ‘How to Build a Universe that Doesn’t Fall Apart Two Days Later,’ included in the anthology I Hope I Shall Arrive Soon, edited by Mark Hurst and Paul Williams, Grafton Books, London, 1988. This quote appears on p. 10.

[4] Werner Heisenberg, Physics and Philosophy: The Revolution in Modern Science, Penguin, London, 1989 (first published 1958), p. 46.

[5] Steven Weinberg, personal note to the author, 13 January 2013.

[6] Richard Dawid, ‘Underdetermination and Theory Succession from the Perspective of String Theory,’ Philosophy of Science, 73/3, 2007, pp. 298-332.

[7] M. Shifman, ‘Frontiers Beyond the Standard Model: Reflections and Impressionistic Portrait at the Conference’, arXiv:1211.0004v2, 14 November 2012.

[8] Richard Dawid, ‘Theory Assessment and Final Theory Claim in String Theory,’ Foundations of Physics, 43/1, 2013, pp. 81-100.

[9] Martinus Veltman, Facts and Mysteries in Elementary Particle Physics, World Scientific, London, 2003, p. 308.

[10] Helge Kragh, Centaurus, 39, 1987, pp. 191-194. This quote is reproduced in Helge Kragh, Higher Speculations: Grand Theories and Failed Revolutions in Physics and Cosmology, Oxford University Press, 2011, p. 249.





196 replies

  1. “According to both theory and observation, overall space appears close to flat. This means that what is expanding intergalactically is balanced by what is collapsing intragalactically.”

    Sorry, but no it does not mean that.


  2. I do not quite understand what “limited epistemic agents like us” means.

    Without referring to the theory of quantum mechanics or incompleteness/undecidability theorems, what is a reason for accepting this as a given? What, exactly, is absolutely given to exist and be beyond this limit?

    I might put it this way: “We do not know if there is data that is beyond the detection of any instrument we may ever build.”


  3. Philip, it very simply means that we do not have unmediated access to reality. It means we have to rely on our evolved senses, limited reasoning ability and so forth. To deny that we are limited (as opposed to omniscient) epistemic agents seems to me very strange.


  4. Massimo – I don’t want to discuss this in any detail in your comment section since the system is rather confusing. I would just want to note that for some of us it would be incorrect to say that we do not have unmediated contact with reality. It is the entire claim of mysticism. We can assume otherwise, but we cannot rigorously state that the world is otherwise. We would need to prove it. If I have misread you my apologies. Just registering another view.


  5. Hi Philip, Massimo,

    You’re both right.

    Massimo’s right. We are not omniscient. There may be much that will be forever beyond our grasp.

    Philip’s right. We have no evidence that the universe will ultimately prove to be incomprehensible to us.

    So you don’t actually disagree with each other. It seems to me that the appropriate attitude with respect to the comprehensibility of the cosmos to humans is agnosticism.


  6. Hi Jim,

    Read through Duff’s comment again and ask yourself how an average Guardian reader, with no formal training in science or philosophy, is likely to interpret it.

    There seem to be two distinct issues here: (1) whether theoretical physicists themselves have lost their heads over empirical verification and falsification, and (2) whether the wider public is confused or arriving at wrong interpretations. I’m not convinced on the former, though the latter may be the case in some instances (though of course at least half the public are going to assume that anything labelled a “theory” is speculative and unproven anyhow).


  7. Coel,
    “It’s just that your interpretation of empiricism and falsification is too narrow.”
    Instead of making broad accusations you should make concrete arguments; anything else is not useful (we’ve been over this ground before). So to help you along, let’s clarify the question.

    Tell us what is the correct interpretation of empiricism and falsification, so that we can compare it with my putative interpretation. I presume you have an authoritative source and that it is not merely your opinion?
    Then show how that differs from my interpretation, presuming you do actually know what my interpretation is. Please quote my exact words and stick to my clear intent so that there can be no misunderstanding or misrepresentation.

    “No it does not. Sean Carroll is entirely sensible on these issues.”
    Calling for falsification to be retired is sensible? Would you care to make a persuasive argument instead of a bald, unsubstantiated assertion?

    Your last sentence baffles me. Surely the ultimate test of scientific endeavour is that it delivers “useful, testable predictions that survive scientific scrutiny”. Is that not a reasonable position to take? If a whole field fails to deliver on this promise we should closely question its utility.

    “Science has to be allowed to develop theories where the testability of those theories is unclear.”
    Yes, and if you look through my comments you will see I have consistently argued for the necessity of speculative hypotheses as a prelude to good science. You would then also see that I argue we should clearly label them as speculative and unproven until we have empirical validation.

    This is the very heart of the debate. We should be clearly and unambiguously labelling speculative and unproven hypotheses as being just that. We should not be promoting these hypotheses in high profile books because that creates the clear implication that it is good science when it is hardly more than fairy-tale physics.

    Have you read Jim Baggott’s book? I urge you to get it.


  8. Or Lee Smolin’s The Trouble with Physics; or Peter Woit’s Not Even Wrong.


  9. Hi labnut,

    “Tell us what is the correct interpretation of empiricism and falsification, …”

    I have done this extensively and repeatedly on the previous two threads. But since you ask again:

    In science, evidence validates *theories* about how things work (rather than entities). If a theory predicts A, B, C and D, and if we validate the theory by empirically verifying A, B and C, then we have good evidence (indirect evidence, but still good evidence) for accepting D. It doesn’t matter whether D can be falsified so long as A, B and C can be.

    Specific case 1: Suppose we use models of planetary motion to predict solar eclipses, and validate these models to a high standard using empirical data. A solar eclipse within human history could be recorded in folk memory, cave paintings or written records, but a solar eclipse before humans existed would leave no possible trace today. A prediction of such an eclipse is thus unfalsifiable and not empirically verifiable. However, it is still scientific by the above principle, and if the model is validated sufficiently then we can regard the statement “There was an eclipse at such a time and place” as factual, even lacking empirical evidence for that particular eclipse, and lacking any way of directly falsifying the statement (though one could falsify the model that predicted it).

    Specific case 2: Given the finite speed at which information can be transmitted, there is an observable horizon from beyond which we cannot obtain information. Yet, if we can validate models of how cosmology works sufficiently well, using empirical data from within the observable horizon, then we can have sufficient evidence to accept statements about beyond the observable horizon as both scientific and factual, just as in the above case of the past eclipse. (Please note that I have not stated that any particular model is so verified.) This holds despite the impossibility of obtaining information from those regions.

    “Then show how that differs from my interpretation. …”

    My disagreement is with anyone who takes an over-simplistic attitude of “we cannot obtain information from beyond the observable horizon and therefore any statement about any such region is unfalsifiable and not scientific”.

    “Calling for falsification to be retired is sensible?”

    Carroll did not call for the concept to be entirely rejected, just to be interpreted better than *some* people do. To quote him: “The falsifiability criterion gestures toward something true and important about science, but it is a blunt instrument in a situation that calls for subtlety and precision.” As I read Carroll his Edge piece is in line with my stance above.

    “We should not be promoting these hypotheses in high profile books because that creates the clear implication that it is good science when it is hardly more than fairy-tale physics.”

    Question for you: In the 50-year gap between the prediction of the Higgs boson and its discovery, would you have regarded it as wrong to discuss the Higgs and its role in the standard model in high-profile popular books? Would you have regarded it as “hardly more than fairy-tale physics” that the public should not be told about? (By the way, I do agree with you that all such expositions should be clear about what has and has not been proven, but that’s not what my question is about.)


  10. Coel,
    I would be more than happy to hear what it does mean. In the basic world I live in, 1+(-1)=0.
    As far back as reading Hawking’s A Brief History of Time, when it came out in ’89, that is what was described, that gravity and universal expansion are inversely proportional.
    Yes, everyone continues to see the overall universe as expanding, yet Einstein did describe gravity as the contraction/inward curvature of space and proposed a cosmological constant to balance it. Then it is discovered that the space between galaxies expands.
    So it seemed to me that if what expands between galaxies is equal to what falls into them, then there would be no extra expansion.
    Sorry to be so simple minded, but I’m just not seeing how it is these two effects are not canceling each other out.
    If redshift is an optical effect, it would logically compound and so there would be that curve upward, with distance. Big Bang theory seemed to assume the cooling from the singularity was even and so was surprised to find it dropped off, then flattened out to a residual expansion, which has now been attributed to dark matter. Now this has been called similar to a cosmological constant.
    It’s not like I make my living at this, but I like to try and make sense of the reality I inhabit and with multiverses, it seems like the train must have left the station awhile back.
    Regards,
    John


  11. Hi guymax

    The problem is that the concept of metaphysics is used in at least two senses, and is also wielded as a dialectical weapon against rivals in the debate. Metaphysics in the sense used by Hegel and Kant leads to an open and non-dogmatic discussion that has a place in the philosophy of science. But we have to remember that metaphysics is a borderline exercise that should conform to the framework of the scientific method and to the elementary principles of logic. A chain of metaphysical reasoning related to science should not be a fairy tale; philosophers of science who aim to defend theorems and theories have to respect the Popperian criterion that tells us how to refute scientific theories, partially or totally. The second sense of the concept refers to what is known as pseudoscience, and is used to discredit those who defend arguments that do not conform to the scientific method or that display a disproportionate fantasy.

    Not all the quantum pioneers were sensible about metaphysics and philosophy; only a few cared to acquire a philosophical background. Einstein was one of them, and so was L. Boltzmann. There is an anecdote about Einstein in this regard. In 1913 he was nominated for membership of the prestigious Academy of Sciences in Berlin; those who nominated him praised his brilliance but also noted that he might have missed the target with the speculations implicit in his quantum theory of light, which describes the quantum of action, or photon. It seems to me that this was a polite way of saying that his photon theory belonged to the domain of metaphysics and was out of touch with reality. At the time, the fashionable aspects of Einstein’s physics were considered radical, a word that carries a hint of displeasure at a phenomenon regarded as para-scientific, pre-scientific, ambiguous, metaphysical or even freakish.

    Planck himself was embarrassed by these entities and struggled hard throughout his life to understand them. I think it is difficult for cutting-edge physicists to devote part of their time to studying philosophy, or to spend years discerning the subtleties that separate physics from metaphysics. Some of them would think it a waste of energy, or even an insane project. And vice versa: others would see some aspects of string theory and M-theory as fanciful and unreal. I wonder whether the problematic inquiries of theoretical physics are evidence of great intellectual work or the hint of a delusion. The belief in the geocentric model was a great delusion, but it proved fertile for the advancement of science and the collection of empirical data. The same happens with some contemporary scientific conjectures: there is no problem in holding a sceptical position about them, whether the issue is the big bang theory, string theory or the value obtained for the radius of the proton.

    What exceeds my knowledge is the role played by the emotional side of the scientist’s mind as it glides across the waters of metaphysics and fantasy; the stamp that emotion leaves on the birth and development of certain scientific conjectures is evident. In this sense it would be nice to set up an epistemic framework that tells us, as far as possible, where the points of contact and non-contact between physics and metaphysics, reality and fantasy, lie, and how to understand the emotions from the perspective of the philosophy of science. If, as you say, something went wrong in the past, it’s our responsibility to review the situation in the present. I agree with Jim Baggott when he writes that physics needs a time-out, though I see his idea as a metaphorical picture that somehow portrays the ideological and sociological aspects of science.

    Of course, many physicists have strong convictions about their work; quite often they shield themselves from criticism, and the critical discourse inherent in the philosophy of science is not always welcome.


  12. I actually agree that we don’t seem to be making progress, but I rather thought it was because physicists are hampered by the limits of their instruments and experimental techniques. I think many of the theorists are having trouble finding their way back to experiments because current experiments are not yet powerful enough to provide new data. Black hole event horizons will not provoke the same kind of theoretical controversies when we can do experiments with artificial black holes. If science is stalled by a failure of technique, that is depressing but not the fault of a wrong conception of reality. (Confession: I have been strongly influenced by J.D. Bernal’s emphasis on the role of new scientific technology in the progress of science.) Further, although I don’t think you can deem theories with a bunch of parameters lacking extensive experimental measurements as intrinsically unscientific on grounds of falsifiability, simple pragmatic utility seems to offer all the grounds for criticism you might otherwise wish.

    “I believe that the root of the problem lies in the way we seek to interpret the word ‘reality.’” I believe the relevance of your anti-realism is the way it provides a touchstone for what counts as empirical evidence. (References to Copenhagen incidentally are to the shared philosophical program. Also, they came naturally in this context to a reader of The Meaning of Quantum Theory. I agree that there’s no need to revisit the Copenhagen interpretation itself.) This approach forbids asking questions that are not narrowly experimental, even to the point where model-building is disallowed. I find it doubtful that it can be reconciled with science in general. I’m not convinced that Mario Bunge’s hypothetico-deductive interpretation of science is the final word, but it is far more compelling than the half-assed Popperism that (depressingly) seems always to be the sub-text, when it isn’t the plain text.

    “There is an ‘empirical reality’ of things as we observe or measure them. This is the reality that scientists try to address. The purpose of science is to seek rational explanations and ultimately an understanding of empirical reality by establishing a correspondence between the predictions of scientific theories and the results of observations and measurements. Such a correspondence gives us grounds for believing that the theory may be ‘true.’” I believe your anti-realist approach, coupled with the predictivism, requires basically that any acceptable theory has to be formulated pretty much as laboratory experiments. Just as the Copenhagenists forbade asking how classically-acting measuring instruments could interact with quantum phenomena (not even how they came into existence), the anti-realist approach forbids asking how timeless point particles could interact with the Einsteinian spacetime of everyday life. The thing I’m not agreeing with is that you can simply define away mundane reality as proper empirical evidence.

    “Now, philosophers are not scientists. They don’t need evidence to establish a correspondence between their interpretation of an independent reality and our empirical world of observation and measurement. They’re more than satisfied if their interpretation is rationally and logically structured and coherent. There is truth here, but of a subtly different kind.” Personally, I don’t think it helps to call the philosopher’s notions of truth, truth. (I think they could incorporate a correspondence notion of truth and still do philosophy, although it would be very, very different from what it is now.) But we also disagree that the superstring theorists et al. aren’t using a correspondence theory of truth as well. They just use a different concept of what counts as correspondence. (No, I’m not an official spokesman, but this is what I really do believe they say between the lines.) Although swamped in a mass of parameters currently unamenable to experimental confirmation, all these theorists so far as I can tell, attempt to correspond to reality by incorporating QM and GR into their models. And if they don’t, I believe they deem the models as illustrative or heuristic simplifications, not genuine theories aimed at corresponding to reality.
    I do believe that if technology progresses to provide more empirical data their work will change, and refuted propositions will more or less be summarily dropped (even if only when the creators drop dead.)

    I’m still not sure how an anti-realist can have a coherent notion of truth as correspondence to anti-reality. By the way, the remarks on mathematics are very confusing. And the difficulties in scientific medicine do not seem to me to have any logical relationship to the difficulties in physics. I think they’re a different set of problems entirely.


  13. Coel,
    I realize a lot of these pieces seem to fit together quite neatly, but there are some large gaps which seem to get swept under the rug after a few decades and then everyone assumes the problem is solved and moves on. Specifically I’m referring back to my original point about time. To restate it: “We experience change as a linear sequence of events and so think of time as the point of the present moving from past to future, which physics then distills to measures of duration between events, to use in models and experiments. This being the basis of the geometry of spacetime, given clock rates vary in different conditions.
    The basic reality though, is that the changing configuration of what is, turns future into past. To wit, tomorrow becomes yesterday because the earth turns, rather than it traveling a meta-dimension from yesterday to tomorrow.
    This makes time an effect of action. Which makes it much more like temperature, than space.
    Basically time is to temperature, what frequency is to amplitude. It is just that with temperature, we experience the cumulative effect of lots of individual velocities/amplitudes and so think of it as an effect, while with time, we personally experience the individual sequence and assume there must be a universal rate, yet only experience the cumulative effect. Just like temperature is a cumulative effect.
    A faster clock only burns quicker and so falls into the past faster. The hare is long dead, yet the tortoise plods along.”
    As it is, the current inclusion of time into this four dimensional geometry of spacetime not only creates large issues, such as blocktime and not being able to explain why it is asymmetric, but it is another factor where it disagrees with QM, which uses a single external clock.
    If we simply view time as a measure of action, ie, frequency and not as some meta-dimension, then there is no blocktime. Its passage is asymmetric due to simple inertia, ie. the planet is not going to stop spinning and go the other direction. The future is not deterministic, nor does the past remain probabilistic ie. multiworlds, since probability precedes actuality. To wit, there are ten potential winners before a race and one actual winner after it.
    Above all, it makes time the dynamic effect of creation and dissolution we all experience, not a static dimension of events.
    The problem for cosmology is that this means spacetime is only a mathematical model, like epicycles and the ‘fabric of spacetime’ is no more physically real than those giant cosmic gear wheels.
    So it leaves no conceptual basis for an expanding universe. Space is simply the void. That vacuum across which light travels at C. Which gets to another point I made, that no one addressed: “the expansion is proposed as the expansion of space itself, rather than simply an expansion in space, to explain why we appear at the center of this expansion. Yet the argument then goes that these distant galaxies will eventually disappear, as they recede faster than the speed of light. This effectively proposes two concepts of space: that measured by a stable speed of light and that which is expanding, as measured by the redshift of this very light! Now the realm of quantum math may not have much respect for basic mathematical principles, but if you are using a stable unit to define a variable quantity, then the stable unit is your denominator and the true measure of the substance in question. It would seem there is some underlying dimension of space, as defined by lightspeed, which is simply taken for granted.”
    So why, if space is expanding, wouldn’t the speed of light increase proportionally, in order for it to remain constant to this expanded space? Of course, if this was so, we wouldn’t be able to detect that expansion, as the increased rate of propagation would cancel the redshift.
    Now it’s easy to just tell me it isn’t so and leave it at that, but eventually people are not going to keep buying these patches needed to sustain the current model.
    Regards,
    John


  14. “which has now been attributed to dark matter. ”
    Make that attributable to ‘dark energy.’


  15. This article is as nicely-written as it is biased to the point of defeatism. So much so in fact, that its main lines of thought (“It’s all gone to math”; “Ah, the math with it!”; “Math rises!”; “Heaven v. Math”, etc.) can be dismantled using a single quote by the very guy the article glorifies for his ability to come back after crossing into the “metaphysics realm”: Albert Einstein and his famous “A mathematical equation stands forever”.


  16. My last comment seems to have gotten eaten by WordPress, so here goes again.

    From the standpoint of someone in this field, there seem to me to be two incompletely addressed issues at play here.

    (1) Experiment has slowed to a crawl.

    We are only just now — decades after it was proposed — confirming the existence of a Higgs-like particle at the LHC. And so far it appears to be a vanilla Higgs doublet, with no hint or evidence of exotic physics whatsoever. No signatures of dark matter. No hint about neutrino masses. No signs of supersymmetry, or extra dimensions, or low-energy effects from quantum gravity.
    This isn’t just Nature being stingy, by the way. We should have seen the Higgs twenty years ago — a whole generation of physicists ago — had the SSC not been canceled. The SSC would have been more powerful than the LHC and maybe seen exotic physics beyond the Higgs, but, in no small part due to public infighting between high-energy and low-energy physicists over funding dollars, the project was cancelled by Congress in the early 1990s. Starving high-energy physics of data badly damaged a whole generation of high-energy physicists, and it wasn’t their fault or the fault of hiring committees or of fads or of psychological or sociological factors.
    What do you do when there’s no reliable data coming in for so many years?
    The fact is that there are lots of young, bright people who are interested in high-energy theory. What do you recommend that they do? Become condensed-matter physicists? Please tell me, concretely, what you would suggest they do.
    It’s easy for people outside the field to say that they should just go back to the drawing board and come up with new ideas that could lead to alternative tests that bypass big accelerators, but you try doing it!
    A big part of the difficulty is…

    (2) The field has calcified due to a minefield of known constraints, previous empirical results and data, no-go theorems, and other experimental and mathematical obstructions.

    There’s a popular image that high-energy theorists just come up with whatever they want and it becomes a new fad. That’s not quite right. From years of gathering data and probing the deep structure of the models and frameworks we know and trust — quantum mechanics, quantum field theory, general relativity, semiclassical quantum gravity, cosmology — we already have a huge number of constraints on what one can propose. Almost any idea one could propose gets cut down almost instantly because it runs into one of these obstructions. (It’s often fun watching this happen in seminars, for example.)
    So telling young physicists just to go home and come up with new ideas from scratch sounds great in words, but it’s a fatal suggestion, and not because of a cabal of tenured professors who will kill their career. It’s because you’ll never get anywhere with that approach. It just doesn’t work. Years will go by with nothing to show for it.
    Lots of people in the field spend part of their time working with alternative ideas on the side, and they never go anywhere, because they can’t make it through that minefield of constraints. They violate something — flavor-violation constraints, gauge invariance, locality, Lorentz invariance, unitarity, renormalizability, cancellation of gauge anomalies, black-hole physics, big-bang nucleosynthesis, accommodating chiral gauge groups, the litany of no-go theorems like Coleman-Mandula — and that’s just the tip of the iceberg!
    What’s amazing is that we have any ideas at all that have managed to slip past all these constraints, and basically the only one is string theory and ideas that have been spun out of string theory. String theory is the one model we have in which physics is self-consistent all the way to the Planck scale, and incorporates gravity into a quantum theory in a consistent way without violating all the many constraints we know about.
    I know lots of people in the community, and nobody says string theory is an accurate description of the world in the absence of some kind of experimental data. And lots of people are looking for lucky accidents that might make it into an experimental science. Indeed, Einstein was very lucky that our solar system has two very helpful features — a moon whose apparent diameter in the sky is the same as the sun (which made that Eddington eclipse experiment possible) and the planet Mercury (which was close enough to the sun that we can detect its orbital anomalies). Without those lucky accidents, general relativity might not have been an experimental science for decades. It would have been beautiful mathematics, but little else.
    So what string theorists are hoping for is a lucky accident. Maybe some signature in the CMB, for example. If the BICEP2 results hold up (an uncertain prospect at this point), then that would at least get in the ballpark of where quantum gravity should become noticeable.
    In the meantime, a lot of people use string theory like a crystal ball or a “theory-generating function” that spins off other ideas that, due to their origin in string theory, likewise survive all those constraints. Supersymmetric models, extra-dimensional models, various dark-matter models, etc., owe much of their origins to string theory, but are conceptually independent of string theory and could be true even if string theory turns out not to be.
    And there are lots of formal techniques that came out of string theory — holographic techniques for studying quark-gluon physics or condensed-matter systems, scattering-amplitude methods for computing cross sections to study LHC data — that have been useful as well.
    That’s why people have stuck with string theory, looking for new ideas to spin off or studying dualities and the internal structure of the theory in the meantime. It’s not primarily because it’s a fad or because of sociology, but because string theory is the only thing we have that can generate this stuff without running into that minefield of constraints. People continue to try (and fail) to find alternatives, but it hasn’t been done yet, and might end up being impossible. (For example, if you think loop quantum gravity hasn’t gotten a fair shake, then you really need to study those constraints better.)
    So I ask you — what do you expect high-energy theory people to do? Please tell me concretely. Perhaps some of the armchair-physicist bloggers would be willing to put down their blogs for a few months and try it out for themselves rather than yelling criticisms from the sidelines.
    I agree that this discussion might mean that high-energy physics has a serious problem. I would just be more careful about pointing the blame at the people themselves.


  17. Mr. Baggott,

    I don’t think you’ve responded to any of my main points here. In your original post, you were claiming that string theory makes no predictions and is untestable. I pointed out that such statements are meaningless because the term “string theory” is too broad. It doesn’t refer to a specific model of fundamental physics but a vast theoretical framework.

    Do you disagree with this statement? If string theory is not a specific model of fundamental physics, then how are we to understand your claim that it is untestable? What does it mean to test string theory?

    I also pointed out that a particular model of string phenomenology may be perfectly predictive and falsifiable. Indeed, there are lots of stringy models of particle physics and cosmology that have either been supported or ruled out by experiments.

    Do you disagree? You say that “the structure is inherently incapable of reconnecting in a way that doesn’t involve so many ad hoc auxiliary assumptions as to render any prediction meaningless.” I’m not sure what you mean here. Can you give me an example of a string theory model that makes too many ad hoc assumptions to produce meaningful predictions?

    Finally, I pointed out that string theory has already been used to understand observed phenomena. You say this is not a view that is embraced by experts in heavy ion and condensed matter physics, but I can think of some very high profile scientists in these fields (such as Dam Son and Subir Sachdev) who have become strong advocates for the use of string theory in these areas. Do you disagree with any of the actual results in this subject?


  18. “I believe that contemporary theoretical physics has lost its way” Can some care be put into making statements like this? What is being criticised in this article is a relatively small subset of all of theoretical physics yet the criticism sounds like it is being applied more widely than that without justification.

    Putting Frank “Omega Point” Tipler’s book into the same camp as string theory also seems to be unfair, particularly considering that the Final Anthropic Principle is presumably what is being referred to (I can’t access the original quote for context), which Tipler uses to justify a strange variant of Christianity. To paraphrase Asimov: “But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”


  19. MathPhysPhD,
    Perhaps one reason Jim hasn’t responded to your “QFT just as bad as string theory” argument is that you already made it at the top of this comment section, and I responded with the argument against it. Again:
    http://www.math.columbia.edu/~woit/wordpress/?wp_super_faq=isnt-string-theory-just-as-predictive-as-quantum-field-theory
    You just completely ignored this.

    As far as string theory and heavy-ion physics goes, perhaps you recall that one expert called for a “Pinocchio award” to Brian Greene when he claimed string theory as a good way to understand heavy ion physics. As far as I can tell, after an immense amount of hype, that idea is now pretty much dead: I don’t see anyone analyzing LHC heavy ion data using string theory based models. AdS/CMT is more promising, since there’s an infinity of condensed matter systems and thus more likely to be some where duality arguments are useful. I doubt you’re any more an expert about these than I am though, so no more able to evaluate this. I am an expert in string theory hype though, and these latest claims are often made by exactly the same people, in exactly the same way, as past ones (eg. heavy ions) that didn’t work out. Oh, and of course this has nothing at all to do with what Jim is discussing, speculative theories of everything based on string theory, not the application of qft dualities to condensed matter systems.


  20. ns12345,
    I mostly agree with your diagnosis of the situation: we have no experimental guidance, and no good ideas about how to go beyond the SM, as well as a lot of experience with ideas that don’t work and the general principles why.

    This is a very tough situation for the field to be in. I think you’re ignoring though the argument that some leaders of the field have chosen to deal with this not by acknowledging the situation, but by trying to make the problem go away by abandoning the conventional constraints of science. It’s easy to find prominent theorists going on publicly about how these are wonderful times, great progress is being made, the anthropic string theory landscape explains it all, etc. Someone should be providing a reality-check to this nonsense, and I don’t think you should be complaining when they do it unless you’re someone who has done your part publicly yourself.

    I also disagree with the idea that the string theory TOE idea is the one left that hasn’t hit a no-go theorem, so that’s a reason to keep doing it. The string theory landscape shows that this idea has failed, just like others, and there is no more reason to pursue it than other failed ideas. Yes, if you keep at an idea despite it not seeming to work, you may someday find a way around the problem, and in an environment of only ideas that don’t work, you’re stuck doing this. The problem is string theory TOE is the one failed idea that you can make a successful professional career pursuing, with lots of people to work together with. It’s the one that has a get out of jail free card, and we’d be better off if that was revoked.

    As a more positive comment, I think we do have some experimental guidance: the SM is a lot better than anyone ever thought. The discovery and properties of the Higgs seem to indicate it’s a valid theory up to absurdly high scales. It would seem to me that highest priority should go to trying to understand those aspects of it that we don’t now really understand. A couple examples are confinement and the non-perturbative behavior of the electroweak sector. Yes these are extremely hard problems, but we need to find ways to make it professionally legitimate for young theorists to work on them (and not just by themselves, in their spare time, but as their full-time research, with a sizable group to work with).


  21. That’s a very sensible and insightful comment. One thing to add is that the data-starvation is almost inevitable as physics “completes” its account of the everyday world and progresses to the extremes. In Faraday’s time one guy could do this stuff in his garden shed, but now it takes many leading nations to band together to build the LHC. So you’re right that perhaps the real problem here is not string theory but that limit on experimental access to high energies. No science has ever got far without lots of guidance from nature.


  22. This all seems reasonable to me Mario. The different opinions on what metaphysics actually is do seem to be the problem, although I would characterize it as a failure of scholarship rather than a confusion over the meaning of words.

    Philosophy of science is not my interest, and to be honest it does not seem an important area of philosophy to me. My interest is in fundamental theories, and thus in metaphysics.

    Erwin Schrödinger is always my leading example of a physicist with important things to say about philosophy. Now there was a thinker. Far ahead of Einstein as a philosopher. Eddington I find interesting, and Heisenberg also, and a few others from around that time. But yes, it certainly was not all of them. Feynman is hopeless.

  23. You write: “This is a very tough situation for the field to be in.” I agree.

    You also write: “I think you’re ignoring though the argument that some leaders of the field have chosen to deal with this not by acknowledging the situation, but by trying to make the problem go away by abandoning the conventional constraints of science. It’s easy to find prominent theorists going on publicly about how these are wonderful times, great progress is being made, the anthropic string theory landscape explains it all, etc.”

    I agree with this, but what I don’t agree with is the often implicit conclusion that the way to “fix” the problems in the field is to prevent leaders from behaving this way. It is true that people shouldn’t be pretending everything is just great and selling tall tales, but even if we get them all to stop (and it’s actually not the majority of them — most people are really low key), that doesn’t change the “tough situation” at all. And it’s sometimes frustrating when it seems like critics think that if we can just keep people from being bombastic then we’ve made real progress. That’s treating a symptom, but not the cause.

    Again, however annoying some people in the field may or may not be, that’s not the ultimate cause of these problems. That’s why I am dismayed when I hear the blame get placed on the human beings here, as if they’re responsible for the *real* problems in the field today.

    I don’t agree with your next statement: “I also disagree with the idea that the string theory TOE idea is the one left that hasn’t hit a no-go theorem, so that’s a reason to keep doing it. The string theory landscape shows that this idea has failed, just like others, and there is no more reason to pursue it than other failed ideas.”

    There is absolutely no no-go theorem (no^2-theorem?) that prevents us from finding a vacuum whose low-energy physics matches on to the Standard Model. Indeed, lots of people are currently looking for one! It’s just a hard problem to find such a vacuum, because the landscape is so huge.

    That means string theory has a huge leg up on every other idea that we’ve tried. The others clearly break. That’s one reason people use string theory as a “theory-generating function,” because the ideas it spits out don’t crash fatally into no-go theorems and all the other constraints. Like I said, starting from scratch and trying to navigate around the minefield of constraints is unimaginably hard, and hasn’t led to any progress in many decades, despite the fact that many people have certainly been trying them. Indeed, in the few cases where it looked like there might be progress (e.g., 11d supergravity), the result turned out to be a part of the string theory landscape!

    You write: “It’s the one that has a get out of jail free card, and we’d be better off if that was revoked.” I disagree. Without string theory and ideas it has generated (and other related ideas that you don’t like), we have essentially nothing to go on for physics beyond the standard model. Years of trying other approaches has yielded nothing that survives all the constraints. That’s why so many people try to spin ideas out of string theory, because at the very least we know we’ll avoid running into all those constraints.

    Like I said, it’s easy to declare that people should just start from scratch, but then what do we actually do, concretely? I get so bothered especially when I hear this from people totally outside of high-energy physics, as if they have any idea what they’re talking about. (Try it yourself, I always say!)

    You write: “The discovery and properties of the Higgs seem to indicate it’s a valid theory up to absurdly high scales. It would seem to me that highest priority should go to trying to understand those aspects of it that we don’t now really understand. A couple examples are confinement and the non-perturbative behavior of the electroweak sector.”

    Those are indeed valid areas of inquiry. It’s just that people have looked at them both for many years and they haven’t borne much fruit. They haven’t pointed the way, or revealed anything about clear questions like dark matter or neutrino masses, let alone more exotic questions. Maybe we need to work on them more! Or maybe they’re dead ends. What’s interesting to note is that supersymmetry has been the most promising road to a lot of this stuff: N=2 SYM has taught us a lot qualitatively about confinement, for example.

  24. Higgs’s theory too was confirmed “in theory”, meaning you can call it confirmed if you like, but then you don’t have to. Theories are normally confirmed neither “in theory” nor using synthetic data alone. The Higgs fable is far more ridiculous than strings and multiverse theories could ever be. Why? As we all remember it vividly (or do you think we all suffered amnesia?), at first the official story went in the direction of “the Higgs boson that gave mass to everything”. But then, when the community (and finally the general public, which means you’re really busted!) started asking The Question, “What then gave mass to the Higgs boson?”, the High Committee of Wizards of Oz took a few months’ break and came back with the Directive that it’s actually “Higgs’s field that gave mass to everything”. Oh boy, what a U-turn: a particle that’s one of the rarest things in the known universe could actually form a field?! And then, this field could actually give mass to anything, let alone everything?? Please, give us more fables of the SM format; how about some involving “soups” and other tools from the Kitchen, where a stove isn’t a stove and a temperature isn’t a temperature, and there are about ten million parameters to tune at will so that in the end everyone gets lost in the Wonderland of Confusion. So believe whatever you want, but stop confusing people into disbelieving everything. If you’re so unsure of something, grab a book or borrow a thought from those who knew far more than you do. How about this for starters: “A mathematical equation stands forever” – A. Einstein. And related to it: http://www.mynewsdesk.com/ba/pressreleases/as-big-bang-gets-downgraded-to-a-bang-the-first-scientific-proof-of-the-multiverse-claimed-975493

  25. Yes, there are two issues here, but (1) makes it possible to position untested (and possibly untestable) theories as ‘progressive’ and (2) is the consequence.

  26. Mr. Woit,

    In my last comment, I made a couple of very simple, uncontroversial points, and your link doesn’t really address any of them. It basically just says that you like the standard model better than string theory because it’s simpler.

    I don’t really care if the standard model is a particularly simple quantum field theory because that’s not the point I was making. The point of my comment was to draw an analogy between string theory and quantum field theory and point out that it’s unfair to ask for generic predictions of string theory since one doesn’t have such things in quantum field theory either.

    As you know, there are a gazillion different quantum field theories–in fact, there are infinite families of quantum field theories parametrized by various mathematical data–and these theories have all sorts of applications. Some of them, like quantum chromodynamics or quantum electrodynamics, describe particles that we observe in high energy physics experiments. Others are used in condensed matter physics where experimentalists can engineer systems that exhibit all sorts of exotic behaviors. There are still others that don’t describe any real world system but have nice features that allow theorists to study the general formalism in a simpler setting. Some of these theories even have applications in pure mathematics.

    In string theory, the situation is very similar. You have a theory in ten or eleven dimensions, and you can study all sorts of different limits and compactifications of this theory. One popular approach is to compactify the theory on a Calabi-Yau threefold to get a model roughly like the standard model of particle physics, but there are many different ways of doing this. Alternatively, you can compactify the theory on some other compact manifold like a five-sphere, and you end up with a model that describes something roughly like a system of quarks and gluons. By taking other limits of string and M-theory, you get all sorts of other interesting theories, like models of condensed matter physics, noncommutative field theories, topological quantum field theories, and more. These things all have important applications in theoretical physics and pure mathematics.

    Since string theory has so many different applications, it doesn’t make sense to talk about predictions in general. The theories that string theorists study exhibit a range of properties and describe a range of different physical scenarios. The point of my previous post was that the same is true in quantum field theory. You can’t talk about “testing quantum field theory” because the term refers to an enormous class of theories with completely different physics.

  27. I must salute both you (Woit) and Baggott for your courage in fighting against the huge established institutions. But I do disagree with your comment, “Woit: … and no good ideas about how to go beyond the SM, as well as a lot of experience with ideas that don’t work and the general principles why.”

    This is in fact what the other side says: “…that this [M-theory and multiverse hypothesis] is the ‘only game in town’, combined with a seemingly inexhaustible optimism that ‘well, it still might be true’.”

    At ‘this’ webzine, we did discuss a few points on these ‘going beyond the SM’ and ‘refuting the multiverse’ issues.
    One, ‘string-unification’, see https://scientiasalon.wordpress.com/2014/05/22/my-philosophy-so-far-part-ii/comment-page-1/#comment-2432

    Two, the multiverse hypothesis can be refuted with three points:
    1. Showing that ‘this’ universe is not bubble dependent: everything (constants of nature and laws) in this universe arose in this universe.
    2. Showing that the ‘multiverse hypothesis’ cannot give rise to anything (constants of nature and laws) of ‘this’ universe.
    3. Showing that the region beyond the event horizon of this universe is still a part of this universe.
    See, https://scientiasalon.wordpress.com/2014/06/05/the-multiverse-as-a-scientific-concept-part-ii/comment-page-1/#comment-3158

  28. Mr. Woit,

    I have now posted a response to your comment at the top of the page.

    If you have any specific comments about the actual science involved in applications of AdS/CFT, I would be happy to discuss with you. Saying that these results don’t look useful doesn’t change the fact that they are applications of string theory to observable phenomena.

  29. You can call me Jim (and, by the way, I also have a PhD).
    Confession. I don’t know what you mean by ‘stringy models of particle physics and cosmology’. Are these phenomenological models based on string theory that predict all the things that the standard model predicts? Better still, do they predict things that the standard model doesn’t predict? Apologies if I seem sceptical, because if they do I honestly believe I would have heard about these (the whole world would have heard about them) and we wouldn’t be having this debate.

  30. Coel,

    A while back you said “In science, evidence validates *theories* about how things work (rather than entities). If a theory predicts A, B, C and D, and if we validate the theory by empirically verifying A, B and C then we have good evidence (indirect evidence but still good evidence) for accepting D. It doesn’t matter whether D can be falsified so long as A, B and C can be.”
    But that’s theory T1. If another theory T2 predicts A, B and C but *not* D, that “proof” does not work.

    So let T1 = string theory/M-theory and T2 = a GUT theory plus loop quantum gravity. Both predict all of standard physics plus the standard hot big bang, inflation, etc. Then T1 with a further ad hoc addition (a mechanism to realise different vacua in different inflationary bubbles) sometimes (i.e. in those inflationary models that are chaotic) predicts a multiverse that will have different constants in different bubbles, and so (if there is indeed a string theory vacuum that gives the standard model of particle physics: yet another unproven hypothesis) T1 can potentially solve the anthropic issue. But T2 won’t do so: physics is the same in all bubbles in that case.

    The deduction fails unless you can prove in a testable way that T1 holds rather than T2.

  31. Interesting story;
    http://phys.org/news/2014-06-universe-dwarf-galaxies-dont-standard.html
    “Pawlowski and 13 co-authors from six different countries examined three recent papers by different international teams that concluded the planar distributions of galaxies fit the standard model.
    “When we compared simulations using their data to what is observed by astronomers, we found a very substantial mismatch,” Pawlowski said.
    With computers, the researchers simulated mock observations of thousands of Milky Ways using the same data as the three previous papers. They found just one of a few thousand simulations matched what astronomers actually observe around the Milky Way.
    “But we also have Andromeda,” Pawlowski said. “The chance to have two galaxies with such huge disks of satellite galaxies is less than one in 100,000.”
    When the researchers corrected for flaws they say they found in the three studies, they could not reproduce the findings made in the respective papers.”

    At some point, they are going to have to go back and review a lot of what is currently considered settled.

  32. Ah, the ‘you’re not qualified to hold an opinion’ argument. So the only people qualified to criticise a discipline are its own practitioners? Sorry, I don’t buy it. Especially when there’s a growing sense that some of these same practitioners are trying to bend the rules to lend validity to what they’re doing.
    I think the point Peter is making is absolutely right. When the LHC comes back on stream next year with a collision energy of 13 TeV we’ll hopefully have an opportunity to explore the Higgs sector in great detail. Perhaps this is also an opportunity to calm our over-eagerness and rein in our ambitions. Let’s see what we can learn by sharpening the theoretical underpinnings of our understanding of physics at this energy, and forget for a while about dark matter, dark energy and quantum gravity. I suspect that this would be a lot more productive in the short run although, of course, it wouldn’t make quite such spectacular headlines.

  33. Cathal,
    You’re right. This is a relatively small group of theorists but it’s one that has a disproportionate impact on the wider perception of science because they tend to deal with the ‘big questions’. It’s certainly not my intention to criticise all theoretical physicists and I’d hope the context is clear from the main body of my article (and my book).
    I’d included Helge Kragh’s comments because I felt they were particularly relevant. I wouldn’t otherwise have mentioned Barrow and Tipler’s book, which is actually a very comprehensive (but quite uncritical) review of all forms of anthropic argument, including varieties of weak and strong principles.

  34. Hi Coel and DM,

    You accuse me of quote mining but it is not quote mining if that is how I understood it.

    I had already considered and rejected the idea that “In this view” was meant as a signal that this was speculative; I rejected that reading based on the context.

    I have read it again and it still doesn’t read that way to me. He does not appear to be saying that “this view” is highly speculative; on the contrary, in the context of everything else they say about the “sum over histories” approach, it sounds like they are saying that “this view” is the correct view.

    This is reinforced by the use of the words “In fact”. It would not detract from the readability if they had said something like “If this were the case then there would be many universes with many sets of physical laws”. On the contrary, it would have clarified their meaning, if that was their meaning.

    Moreover, he says in the next sentence that “this idea” (i.e. “many universes exist with many different sets of physical laws”) is just a different expression of the Feynman sum over histories.

    He has spent a lot of time pointing out the solidity of the “sum over histories” approach.

    Now I am not sure how you break on the idea that many universes with many different sets of laws is an expression of Feynman’s sum over histories.

    But to say that something is “just an expression of” solid science sounds as though they are implying that it is also solid science.

  35. You’re just ignoring my argument. No, it’s not “I like the SM better because it’s simpler”.

  36. This comment relates to this thread but actually refers to the ongoing discussion about the 50-year period that has elapsed between the ‘invention’ of the Higgs field and the discovery of the Higgs boson (Coel’s comment below). It really helps here to be mindful of the history. Yes, this has taken 50 years, but it’s quite wrong to think that there was no progress in this time. Here’s a rough outline of what actually happened (apologies to any working science historians reading this).

    The idea of the Higgs field was introduced in a series of papers published in 1964 by Higgs, Brout and Englert and Hagen, Guralnik and Kibble.
    The ‘Higgs mechanism’ (it wasn’t called that then) was used by Weinberg and Salam in 1967 as a device to break the electro-weak symmetry. Weinberg used the mechanism to predict the masses of the W and Z bosons (the tree-level relations behind that prediction are sketched just after this outline). Note the word ‘predict’…
    This was largely ignored because it wasn’t clear that the resulting Higgs-modified quantum field theory could be renormalized. ’t Hooft and Veltman showed that it could be renormalized in 1971.
    Perhaps rather astonishingly, Weinberg, Salam and Glashow were awarded the 1979 Nobel prize in physics for their work on electro-weak unification, even though the theory at this stage couldn’t be considered to be proven.
    The W and Z bosons were eventually discovered at CERN in 1983, with masses very close to Weinberg’s original 1967 prediction.
    It’s fair to say (as I do in my article) that the only unambiguous evidence for the existence of the Higgs field is its tell-tale field particle, the Higgs boson. I honestly doubt that the SSC project would have gotten off the ground at all if physicists hadn’t been pretty confident that they would find it (or, at least, find something). If the project hadn’t been cancelled in 1993 we would have discovered the Higgs boson a lot sooner.
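
    For reference, here is a rough sketch of the tree-level relations behind that prediction (a textbook summary, not a quotation from Weinberg’s paper):

    $$ m_W = \tfrac{1}{2}\, g\, v, \qquad m_Z = \frac{m_W}{\cos\theta_W}, \qquad v = \bigl(\sqrt{2}\, G_F\bigr)^{-1/2} \simeq 246\ \mathrm{GeV}. $$

    With the weak mixing angle measured in neutral-current experiments ($\sin^2\theta_W \approx 0.23$), these give roughly 80 GeV for the W and 90 GeV for the Z at tree level, close to the masses found at CERN in 1983.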

    So, it was certainly *not* wrong to talk about the Higgs in high-profile books published since the early 1980s (and many great popular science books did just this). There was nothing ‘fairy-tale’ about this. It was a truly ‘progressive’ development and, although the relationship between theory and experiment was a bit creaky in the late 1960s, the 1970s and 1980s are considered by many to be a ‘golden age’ in high-energy particle physics.

    This is science as it should be done.

  37. To expand on my comment, since I realized there is a simple way to explain the basic point: it’s not that I “like” simple theories, it’s that those are the ones that are testable, where the theory has explanatory power. Yes, you can make qfts arbitrarily complicated and untestable if you want, but the simple, testable ones work (spectacularly well). In string theory, the simple, testable models don’t work, and all people are doing is looking for ones complicated enough to evade testability.

    I suspect that this is an issue well known among philosophers of science; perhaps they could perform a public service by explaining this to string theory ideologues, who are extremely fond of the “QFT is just as bad as string theory” argument.

  38. I’m never sure how to take this sort of argument about a lack of data to challenge the SM. There are huge gaps between the standard beliefs about what the SM predicts about protons and what is experimentally observed. Either the purported derivations of what should be observed are wrong, in which case fixing them would seem like Job 0 for theorists, or the theory itself has a flaw, which would seem (to a layman) like a big hint from nature about Job 1. Maybe protons seem too “composite” and so not fundamental enough to be interesting, but the relegation of these issues to the sidelines of discussions seems peculiar from the outside.

  39. Jim,

    String phenomenology is the subfield of string theory that attempts to create realistic models of fundamental physics using string theory. If you want to learn more about this subject, take a look at the following webpage, which includes a number of links to survey articles:

    http://ncatlab.org/nlab/show/string+phenomenology

    The basic idea here is to specify a shape for the extra dimensions so as to recover the standard model of particle physics coupled to gravity. Traditionally, this is accomplished by considering heterotic string theory with the six extra dimensions shaped like a six-dimensional Calabi-Yau manifold — but that’s not the only way of doing it, and there are probably lots of ways that we haven’t even thought of yet.

    You asked: “Are these phenomenological models based on string theory that predict all the things that the standard model predicts? Better still, do they predict things that the standard model doesn’t predict? Apologies if I seem sceptical, because if they do I honestly believe I would have heard about these”

    The theories that you get by compactifying string theory on a Calabi-Yau manifold reproduce the general features of the standard model (its particle content, gauge symmetries, etc.) and consistently include gravity. The fact that string theory can do this is precisely why you’ve heard of it, and it’s why there was such a surge of interest in string theory back in the 1980s. In addition to the general features of the standard model, string compactifications give theorists the freedom to incorporate new features into their models, like supersymmetry, dark matter particles, and dark energy.

    In this way, theorists have developed lots of different models of cosmology and particle physics. These models are testable in principle, though in practice it may be impossible to test some of them because of the extremely high energies associated with quantum gravity. To give just one example of how these theories can be tested, let me point out that if the BICEP2 result is true, it will wipe out a whole swath of inflationary models based on string theory, while possibly supporting certain other models. And how could this possibly happen if string theory is a completely non-predictive idea as you suggest?

    The point is that predictions of string theory are model-dependent, and it makes no sense to ask about the predictions of string theory without specifying a particular phenomenological model. I should also point out that most string theorists don’t even work on phenomenology; string theory has tons of other applications to quantum field theory, quantum gravity, and pure mathematics. This means that you have to be even more careful when talking about the predictions or testability of string theory, as some of its applications have nothing to do with developing a theory of everything.

  40. None of this has anything to do with the point I’m making. I’m not talking about whether these theories are simple or complicated. All I’m saying here is that the statements

    String theory is untestable

    and

    Quantum field theory is untestable

    are equally meaningless because the terms “string theory” and “quantum field theory” each refer to enormous classes of distinct ideas about fundamental physics. It’s a very simple point that I’m making, and I doubt that you actually disagree with it…

  41. If what you are saying is that string “theory” is actually a (large) family of theories, nobody disagrees. But if all members of this family are untestable the problem worsens, significantly, depending on how many members the family has.

  42. To be precise, “string theory” is not a family of theories. It is a unique mathematical structure (whose existence is still partly conjectural but consistent with many, many consistency checks).

    What I am saying is that one can obtain a large family of theories by looking at different compactifications and limiting cases of this one unique theory. Some of these theories describe the real world, at least in some approximation, but it is absolutely incorrect to say that they are all untestable. Indeed, some of these theories have already been ruled out experimentally. It all depends on what model you’re talking about.

  43. Massimo,
    The issue isn’t really the testability of particular string theory models, which gets complicated. String theory models that have any hope of reproducing the Standard Model are ferociously complicated (for example, the Calabi-Yau spaces needed are 6d manifolds for which we don’t know an explicit form of the metric, making any calculation with them very hard). For generic values of the various data needed to define a “string theory vacuum”, and thus a string theory model to be compared to experiment, calculating the low energy effective field theory parameters of the theory is way beyond any current technology (which, for a lot of people, justifies working on improving the technology). The state of the art is that you can at best reliably compute certain discrete data (like the number of generations). All evidence I’ve seen is that you can get any values you want for this discrete data by changing your choice of “string theory vacuum”.

    To the extent people are able to calculate more than something like the number of generations, or the low energy gauge group, it is essentially in a toy model, taking the Calabi-Yau to be a torus or some such. You can also imagine doing calculations with parameters that are not generic, but are very small, making some approximation valid.

    Back in 1985 the hope was that string theory vacua were a small set parametrizing things simple enough to calculate with. What has happened over the last 30 years is that people have found more and more ways to make such vacua (M-theory drastically increased the possibilities), while finding that none of the cases simple enough to analyze and extract predictions from gives the SM at low energies. With our limited understanding of what generic examples will give at low energy, all the evidence is that there’s no reason you can’t get the SM this way, but also no evidence you can’t get essentially any QFT at low energy this way.

    Claims that “given any string theory model, you can just calculate its predictions and compare to experiment” are pretty misleading, not mentioning that no one has any idea what the full space of string theory models looks like, or how to do such a calculation except in very special cases.

    I’d describe this as a theory with essentially zero explanatory power. You can get anything you want out of it, and it has shown zero ability to explain anything about the Standard Model.

    It’s true that, in principle, if you consider all qfts, you can possibly also get anything you want by taking complicated enough examples. But, the set of consistent (renormalizable) qfts is much simpler and better understood than the set of string models, and we have a lot of calculational control over large parts of this set. The history of QFT is that from the earliest days, the simplest examples of QFTs did a remarkable job of agreeing with experiment, and we’ve ended up with a rather simple example of a QFT (the SM) that does a spectacular job of total and complete agreement with experiment. If simple QFTs didn’t work, and somebody claimed that it was the really complicated ones that you couldn’t calculate with that were needed, the subject would have never gotten off the ground and long ago been abandoned.

    The basic difference between the two cases is that in the case of a QFT, you specify a small amount of info, and get out a complicated structure of predictions that you can compare to experiment in a detailed way, and this works beautifully. The explanatory power of the structure is huge and unparalleled anywhere else in science. The case of string theory is at the opposite extreme: its explanatory power is zero, since you need to put in more info than you get out. Remarkably, MathPhysPhD and others want to claim that these cases are the same, because in both of them in principle you can get more or less anything by using complicated enough examples.
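
    One standard illustration, to make that contrast concrete: from essentially two inputs, the fine-structure constant and the electron mass, QED predicts the electron’s anomalous magnetic moment,

    $$ a_e = \frac{g-2}{2} = \frac{\alpha}{2\pi} + O(\alpha^2) \approx 0.00116, $$

    and the full calculation agrees with the measured value of about 0.00115965 to extraordinary precision. Nothing remotely comparable has ever been extracted from a string theory vacuum.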

    I’m guessing philosophers of science will recognize some well-known issues here about theories and their confrontation with experiment; I wonder if this sounds familiar.

  44. You write: “So the only people qualified to criticise a discipline are its own practitioners? Sorry, I don’t buy it.” My apologies, but you haven’t constructed an argument here. Saying you don’t buy it isn’t an actual argument.
    Look, it would be nice if people without expertise in a field were always capable of providing helpful criticism. But it is a plain fact that some subjects simply require a lot of technical knowledge and expertise in order to understand why the practitioners are doing the things they are doing and why they’re stuck.
    Some of your criticisms of the field sound fine — the lack of experimental data being a huge issue, for example — but others of your criticisms speak to a lack of awareness of what is actually going on right now in theoretical physics and what people internal to the field are actually talking about and what their motivations and difficulties are.
    I can speak to this from personal experience, as someone who entered the field not a very long time ago. My own impressions changed a lot between not really knowing the tools of the trade (but reading a lot of things written by folks such as yourself) and then later learning those tools and getting to hear what was actually going on among the people working in the field.
    So when you also write “Especially when there’s a growing sense that some of these same practitioners are trying to bend the rules to lend validity to what they’re doing.”, again, the reason this sounds like a comment from someone who doesn’t really understand what’s going on in the community of high-energy theorists is that while it’s definitely a *symptom* of a deeper problem, it’s simply not the *cause* — see points (1) and (2) of my earlier post.
    Getting people to keep quiet with all the flamboyant rhetoric, while making everyone’s lives less annoying and giving news reporters less exciting stuff to write about, will do absolutely nothing to fix either points (1) or (2), whereas if (1) and (2) were solved, then the flamboyant rhetoric wouldn’t be an issue anymore.
    Now, if you or anyone else inside or outside the high-energy community has concrete proposals that would truly solve either of these two fundamental problems — (1) the crushing lack of experimental data and the slow rate of experiments, and (2) creating new ideas that can successfully navigate the maze of known constraints to go beyond what we already know in the Standard Model and perhaps explain many of the remaining mysteries and even suggest new experiments to be performed — then people will be all ears. But just telling some of the louder members of the community to keep quiet isn’t going to be very well-received by folks in high-energy, and for good reason, because it doesn’t actually solve the basic problems.
    You also write: “When the LHC comes back on stream next year with a collision energy of 13 TeV we’ll hopefully have an opportunity to explore the Higgs sector in great detail.” Hope springs eternal! Nothing would make people in the high-energy community more excited than new data that points a clear path to physics beyond the Standard Model. If we do, then that would partially solve problem (1) all by itself, bombastic rhetoric or not.
    But so far we’ve found nothing of this kind, and there’s a fair chance that we won’t even after the LHC goes up in energy. So then (1) stays unresolved, and what do high-energy physicists do? Perhaps your answer is along the lines of your other comments: “Perhaps this is also an opportunity to calm our over-eagerness and rein in our ambitions. Let’s see what we can learn by sharpening the theoretical underpinnings of our understanding of physics at this energy, and forget for a while about dark matter, dark energy and quantum gravity. I suspect that this would be a lot more productive in the short run although, of course, it wouldn’t make quite such spectacular headlines.” Actually, this is already going on. People are working on non-exotic physics. And they are learning some stuff. There are a lot of people working on jets, collider physics, soft-collinear effective theory, etc. But so far it’s not providing any clear signals of physics beyond the Standard Model.
    If the answer is just for high-energy physicists to give up and permanently lower their ambitions, well, you know that isn’t going to work. Human beings want to work on interesting problems that point to truly new phenomena, and if your answer to them is to rein in their ambitions, then you clearly don’t understand why a lot of people become physicists, or even scientists more generally. And it’s easy for you to tell them to do that, seeing as you’re not the one who has a job doing physics all day.
    Again, this is one reason why criticism by people outside the field is often taken less seriously — it’s not your day’s work we’re talking about here.
    And your comment “I suspect that this would be a lot more productive in the short run” is also hard to take very seriously. How would surgeons or aeronautical engineers or mathematicians feel if someone who wasn’t familiar with the tools of their field told them that? How do you know what would make the field more productive? This is a serious question here — I don’t mean this as a rhetorical jab. What expertise allows you to predict that “lowering ambitions” would make the field more productive?
    And what does “lowering ambitions” mean concretely? String theory is out, presumably. What about supersymmetry? Dark matter? Neutrino masses? Early-universe cosmology? CMB observations? Who gets to draw that line?
    But like I said, paper and ink are cheap. If you have concrete proposals for addressing the actual problems of the field, like (1) or (2), please have at it! Write it up! (And this goes for everyone inside the field as well as outside.)

  45. Peter, thanks for the clarification. The move being attempted by MathPhys is precisely the same one that Greene tried at one point during the debate with Jim. Unsuccessfully, from my point of view as an external observer. Then again, I’m a philosopher of biology, not physics.

  46. ns, this comment almost didn’t make it through moderation, as I find its tone to be borderline acceptable. However, in the end it comes down to what one means by “outside the field.” Jim has qualifications in physics; Peter is clearly within the field; and so certainly is Lee Smolin. A number of philosophers of physics also know enough about the actual physics to contribute to the discussion. Sure, a layperson with no training in physics has little interesting to say about this issue, but that’s not who’s contributing here.

    As for the question of what’s the alternative, I’m always puzzled by this “it’s the only game in town” type of argument. Start hiring people who work on different approaches, or make available grants that specifically call for novel theorizing about fundamental physics, and I’m sure you’ll get lots of applicants.

  47. Hi Coel,

    You say,

    In science, evidence validates *theories* about how things work (rather than entities). If a theory predicts A, B, C and D, and if we validate the theory by empirically verifying A, B and C then we have good evidence (indirect evidence but still good evidence) for accepting D. It doesn’t matter whether D can be falsified so long as A, B and C can be,

    I take strong exception to the claim in the last sentence. If your theory predicts A, B, C and D but we have evidence only for A, B, and C, the strength of the evidence for D is entirely dependent on how unique your theory is in predicting A, B and C. There may be a multitude of other theories that account for A, B and C but predict E (or F or …) instead of D; if none of E,F,… have been observed, then A, B and C provide no empirical evidence at all for D — D remains an unsupported conjecture. (Of course, you could claim that your theory is more “beautiful” than alternatives that predict E,F,…, but beauty is in the eye of the beholder, to use a trite phrase, and the purported beauty may even be based on a false premise.)
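
    To make that dependence explicit, here is a schematic Bayesian way of putting it (my shorthand, not Coel’s), writing $E = A \wedge B \wedge C$ for the verified predictions and $T_1, T_2, \ldots$ for the candidate theories that account for $E$:

    $$ P(D \mid E) = \sum_i P(D \mid T_i, E)\, P(T_i \mid E), $$

    so the verified predictions support $D$ only insofar as the theories that imply $D$ carry most of the posterior weight $P(T_i \mid E)$. If a rival $T_2$ fits $A$, $B$ and $C$ just as well but predicts not-$D$, the case for $D$ collapses to whatever prior preference one had for $T_1$ over $T_2$.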

    My understanding is that you are looking at inflation as being highly successful in quantitatively predicting A, B and C, and it seems to be the “only game in town” that does so. Thus, we should not only take seriously its implications of “eternal-ness” (prediction “D”), we should regard A, B and C as (indirect) evidence for it and its multiverse offspring. While I agree we should take eternal inflation seriously as a consequence of the inflation idea, I disagree that A, B and C provide evidence that eternal inflation is actually “true,” for the above reasons. And I really wish you would stop suggesting otherwise.

    We have no evidence at all that the standard inflation scenario is the unique explanation for the large-scale homogeneity, isotropy, nearly-scale-invariant CMB power spectrum, etc. (By “standard inflation scenario” I mean a large vacuum energy due to a scalar field rolling down a potential, which according to general relativity drives spacetime to expand exponentially for a long enough time to do the job it was invented to do.) In no sense does the current lack of alternatives to inflation imply that no successful alternative will be developed in the future, and I expect you will agree. In fact, inflation has serious conceptual problems of which you may be well aware. Steinhardt has given a popular account of some of these (see “The Inflation Debate: Is the theory at the heart of modern cosmology deeply flawed?” in Scientific American, April 2011). Roger Penrose has given a compelling argument that inflation is extremely improbable, that it is far more likely that the observable universe with its homogeneity and isotropy is the result of a random assembly of particles than something due to inflation (see, e.g. “Difficulties with inflationary cosmology,” Annals of the NY Academy of Science, 1989: v.571, p.249). The existence of serious conceptual problems with the foundations of inflationary theory should give us pause — there is very possibly an alternative picture which could, for example, also be “inflation like” in important respects (exponential expansion of spacetime at very early times, etc.) that doesn’t rely on a scalar field potential to drive the expansion. If one drops the assumption of a preexistent spacetime manifold and requires that it and the metric emerge during cosmogenesis, the problems outlined by Penrose and Steinhardt go away. (In fact, I worked on the foundations of such a scenario for my thesis work.)

  48. Massimo,

    I really don’t mean any offense here. When I wrote about people outside the field, I didn’t mean that in a pejorative or judgmental sense at all. What I’m trying to do is explain at a descriptive level why people in the high-energy community don’t take some criticisms from people on the outside very seriously. Maybe they’re right to do that, and maybe they’re wrong. But it’s important to understand what their reasoning is if your goal is to change minds and get them to listen. People are more willing to listen if they feel that their motivations are properly understood.

    Indeed, nothing gets a person listening better than showing them that you fully understand and appreciate what their arguments and motivations are.

    I can also be more specific about what I mean when I talk about people outside the field. I mean that if a person hasn’t computed a one-loop scattering amplitude, and hasn’t computed a path integral, and hasn’t computed an effective potential or a gauge anomaly or used a sum rule, and certainly if a person hasn’t tried to construct a new model that goes beyond Standard-Model physics that attempts to navigate the many constraints at issue here and run that idea by people to see if it really does get around those constraints, then one doesn’t truly understand what people are up against. (I’m sure Peter has done some of these things, for example, but I don’t know how much time he has personally spent on the last one. Perhaps he could correct me there!)

    Training in physical chemistry or philosophy of science, while wonderful and very much worth everyone’s general respect, isn’t enough to be an expert in this required sense. But although this knowledge barrier is a difficult problem (http://abstrusegoose.com/272), it’s an eminently fixable problem! Pick up some textbooks, learn the tools, do some of the hard calculations, go to seminars and workshops for a couple of years, and talk to lots of the “ordinary” people in the field about what they’re working on and why, not merely the flashy people with big names who give the news interviews. Throw your own constructive ideas and proposal at them and see how they respond. I guarantee you’ll get a different picture of what’s going on and why, and maybe some humility about how easy it really is to solve the problems that are ultimately at issue.

    Supersymmetry, string theory, etc., all sound pretty crazy in the abstract, especially when one’s knowledge comes from reading popular accounts. (10 or 11 dimensions? NS5-branes? Are they just saying anything that pops into their heads?) But when you look carefully at the constraints that you have to navigate and start piecing together what you need to get around them, you keep coming back to these sorts of ideas. It’s hard to explain unless you look carefully at those constraints.

    This doesn’t mean the ideas are correct, but it gives a reason why in the absence of experimental data, people are working on them (and using them to generate spin-off ideas that are independent of them) other than because the ideas are just part of a fad or because they’re the imposition of a regime of old tenured professors. It also gives a reason people haven’t succeeded by other approaches.

    Let me address your other comment, which is an important one. You write: “Start hiring people who work on different approaches, or make available grants that specifically call for novel theorizing about fundamental physics, and I’m sure you’ll get lots of applicants.”

    In the abstract, that sounds like a great idea. But none of the approaches to going beyond the Standard Model that currently exist and haven’t been dashed by constraints pass muster with critics, apparently. (Supersymmetry, string theory, etc.) So you’d have to look for someone working on something that’s not (a) one of these ideas that are not acceptable, nor (b) an idea that clearly doesn’t work or that has been definitely ruled out.

    So what’s left? As someone in the field, I don’t know the answer, but I’d love to know it.

    So if we can’t find someone working on a promising idea that avoids both (a) and (b), that is, one that critics find acceptable (so not string theory, for example) and that isn’t clearly inconsistent with known constraints, then our alternative is to go with (i.e., hire or give a grant to) someone who simply promises to do something new that doesn’t run into violations of constraints once they’re hired or get the grant. Say, we set aside a small amount of funding for a small population of people who are just told to go do something different, but nobody knows what that is, because all the ideas so far fall into categories (a) or (b).

    So how do we pick these people? Who’s a candidate? How does the hiring/grant process work? How is the candidate to be evaluated as a likely prospect for success, if we can’t use the person’s previous research, which apparently falls into (a) or (b)? Again, these are not rhetorical questions — this could be a very constructive dialogue, and people in the field could clearly benefit from some concrete suggestions beyond merely telling them to just do it somehow, because it’s not obvious.

  49. ns12345,
    I can’t speak for Jim, but my own goal has never been to quiet the bombasts of the HEP community. That’s hopeless and you’re right, not a solution to the problems of the field. To me what’s at issue are not people, but ideas that deserve to be challenged, whether they’re promulgated loudly by bombasts, or quietly by thoughtful people. For example, Witten is the opposite of a bombast, as well as much more talented and harder working than me, and much more deeply knowledgeable about most topics. I’ve learned a lot from his work and what he has to say, and it wouldn’t occur to me to tell him to shut up about string theory. At the same time, I don’t think he’s always right about everything and some of what he has to say deserves to be challenged.

    It seems to me very much worth challenging the constellation of speculative ideas (strings, SUSY, GUTs) that have dominated the field for many years. The public deserves a more honest account of what is going on than they’re getting, and people in the field seem to need to be reminded of what is solid and what is flimsy conjecture. The “why are you doing this, it would be better if you did something positive” argument is a serious one, but it’s just a fact that a counter-weight of solid argument is needed to push back against some of the highly dubious ideas getting a lot of attention and being pushed hard by some with an agenda. There’s a public marketplace of ideas here, and it deserves to have all sides represented.

    Then there’s the “multiverse”, which I think is just a major intellectual scandal and a looming disaster threatening the subject and its credibility with the public. I don’t see how anyone can think that the problem there is the complaints about this rather than the behavior being complained about.

  50. There are also two significant issues as regards the inflationary theory that is supposed to underlie the multiverse, in addition to the probability issues highlighted by Penrose and Steinhardt, which are related to the fact that there is no satisfactory measure that determines probabilities for the theory.

    First, no one knows what the inflaton field is – in virtually all the hundred or so versions of inflationary theory, an arbitrary potential function is written down with no solid link to established physics. The one and only exception is if the inflaton is a non-minimally coupled Higgs particle – which might possibly be the case. If so, the version of inflation that occurs is not chaotic.

    Second, the success of the theory depends on the supposed initial quantum fluctuations of the theory somehow becoming classical. There is no adequate theory of how this happens – a huge lacuna in the theory. Some claim that decoherence will solve this, but it does not – one needs a theory of how individual classical events occur, not just an ensemble.

    These issues do not show that inflation is wrong – but they do demonstrate that its theoretical foundations are rickety. It’s not a cut-and-dried theory. And the occurrence of inflation is not dependent on string theory/M-theory – the great success of inflation in explaining cosmological observations does not validate that version of quantum gravity. They are often sold as a unified package – but that is not necessarily the case.
