Farewell to determinism

by Marko Vojinovic

Introduction

Ever since the formulation of Newton’s laws of motion (and maybe even before that), one of the most popular philosophical ways of looking at the world has been determinism, as captured by the so-called “Clockwork Universe” metaphor [1]. This has sparked countless debates in philosophy about free will, fate, religion, responsibility, morality, and so on. However, with the advent of modern science, especially quantum mechanics, determinism fell out of favor as a scientifically valid point of view. This is nicely captured by the famous urban legend of the Einstein-Bohr dialogue:

Einstein: “God does not play dice.”

Bohr: “Stop telling God what to do with his dice.”

Despite all the developments of modern science in the last century, a surprising number of laypeople (i.e., those not familiar with the inner workings of quantum mechanics) still appear to favor determinism over indeterminism. The point of this article is to address this issue and argue (as the title suggests) that determinism is false. Almost.

Preliminaries

Let us begin by making some more precise definitions. By “determinism” I will refer to the statement that can be loosely formulated as follows: given the state of the Universe at some moment, one can calculate a unique state of the Universe at any other moment (both into the future and into the past). This goes along the lines of Laplace’s demon [2] and physical determinism [3], with some caveats about terminology that I will discuss below. Of course, there are various other definitions of the term “determinism” (see [4] for a review) which are not equivalent to the one above. However, the definition that will concern us here appears to be the only one which can be operationally discussed from the point of view of science (physics in particular), as a property that Nature may or may not possess, so I will not pursue any other definition in this article.
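
Schematically, the claim is that there exists a unique evolution map taking the state at one moment to the state at any other (this is only a compact restatement of the definition above, not a formula borrowed from any particular theory):

    S(t) = U(t, t_0)\,\bigl[ S(t_0) \bigr] \qquad \text{for every moment } t,

where S(t_0) denotes the state of the Universe at the moment t_0, and U(t, t_0) is a single-valued map producing the state at any other moment t, into the future as well as into the past.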

There are various caveats that should be noted regarding this definition of determinism. First and foremost, regarding the terms “Universe,” “moment,” “past” and “future,” I will appeal to the reader’s intuitive understanding of the concepts of space, time and matter. While each of these can be defined more rigorously in mathematical physics (deploying concepts like isolated physical systems, foliated spacetime topologies, etc.), hopefully these details will not be relevant for the main point of the article.

Second, I will deploy the concept of “calculating” in a very broad sense, in line with Laplace’s demon — assume that we have a computer which can evaluate algorithms arbitrarily fast, with unlimited memory, etc. In other words, I will assume that this computer can do whatever can “in principle” be algorithmically calculated using math, without any regard to practical restrictions on how to construct such a machine. I will again appeal to the reader’s intuition regarding what can be “calculated in principle” versus “calculated in practice,” and I will not be limited by the latter.

Finally, and crucially, the concept of the “state” of a physical system needs to be formulated more precisely. To begin with, by “state” I do not mean the quantum-mechanical state vector (commonly known as the wavefunction), because I do not want to rely on the formalism of quantum mechanics. Instead, for the purposes of this article, “state” will mean a set of particular values of all independent observables that can be measured in a given physical system (a “phase space point” in technical terms). This includes (but is not limited to) the positions, momenta, spins, etc. of all elementary particles in the Universe. In addition, it should include any additional observables that we may be unaware of — collectively called hidden variables, whatever they may be.
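
Purely as an illustration of what such a “state” is supposed to contain (the names and structure below are invented for this sketch and are not part of any actual physical theory), one can picture it as a plain record of values:

    # A schematic picture of a "state" as a phase-space point: one value for
    # every independent observable, plus a slot for hypothetical hidden variables.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    @dataclass
    class ParticleState:
        position: Tuple[float, float, float]
        momentum: Tuple[float, float, float]
        spin: float
        hidden: Dict[str, float] = field(default_factory=dict)  # whatever they may be

    @dataclass
    class UniverseState:
        time: float
        particles: List[ParticleState]  # one entry per particle in the Universe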

We all know that quantum mechanics is probabilistic, rather than deterministic. It describes physical systems using the wavefunction, which represents a probability amplitude for obtaining some result when measuring an observable. The evolution of the wavefunction has two parts — unitary and nonunitary — corresponding to deterministic and nondeterministic evolution, respectively. Therefore, if determinism is to be true in Nature, we have to assume that quantum mechanics is not a fundamental theory, but rather that there is some more fundamental deterministic theory which describes processes in nature, and that quantum mechanics is just a statistical approximation of that fundamental theory. Thus the concept of “state” described in the previous paragraph is defined in terms of that more fundamental theory, and the wavefunction can be extracted from it by averaging the state over the hidden variables. Consequently, in this setup the “state” is more general than the wavefunction. This is also illuminated by the fact that in principle one cannot simultaneously measure both the position and the momentum of a particle, while in the definition above I have not assumed any such restriction for our alleged fundamental deterministic theory.
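
In standard notation, the unitary part of the evolution is governed by the Schrödinger equation, while the relation between quantum probabilities and the hypothetical hidden variables is usually written as an average over those variables (this is the generic form assumed in derivations of Bell-type inequalities, not a commitment to any specific model):

    i\hbar\,\frac{\partial \psi}{\partial t} = \hat{H}\,\psi, \qquad
    P(\text{outcome}) = \int d\lambda\; \rho(\lambda)\, P(\text{outcome} \mid \lambda),

where \rho(\lambda) is the probability distribution encoding our ignorance of the hidden variables \lambda.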

As a final point of these preliminaries, note that the concept of the “state” can be defined rigorously for every deterministic theory in physics, despite the vagueness of the definition I gave above. The definition of the state always stems from the specific properties of the equations of motion in a given theory, but I have resorted to a handwaving approach in order to avoid the technical clutter necessary for a rigorous definition. In the remainder of this article, some math and physics talk will necessarily slip in here and there, but hopefully it will not interfere with the readability of the text.

Main insights

Given any fundamental theory (deterministic or otherwise), one can always rewrite it as a set of equations for the state — i.e., equations for the set of all independent observables that can be measured in the Universe. These equations are called effective equations of motion, and they are typically (although not necessarily) partial differential equations. This sets the stage for the introduction of our four main players:

  • Bell inequalities [5],
  • Heisenberg inequalities [6],
  • Cauchy problem [7], and
  • chaos theory [8],

which will team up to provide a proof that the effective equations of motion of any deterministic theory cannot be compatible with experimental data.

Let us first examine the main consequence of the experimental violation of Bell inequalities. Simply put, the violation implies that local realism is false, i.e., that any theory which assumes both locality and realism is in contradiction with experiment. In order to better understand what this means for our effective equations of motion, let me explain what locality and realism actually mean in this context. Locality is the assumption that the interaction between two pieces of a physical system can be nonzero only if the pieces are in close proximity to each other, i.e., both are within some finite region of spacetime. The region is most commonly taken to be infinitesimal, so that the effective equations of motion of our deterministic theory are local partial differential equations. Such equations depend on only one point in spacetime (and its infinitesimal neighborhood), as opposed to nonlocal partial differential equations, which depend on more than one spacetime point. The point of this is to convince you that locality is a very precise mathematical concept, and that it may or may not be a property of the effective equations of motion.

Realism is the assumption that the state of a physical system (as I defined it above) actually exists in reality, with infinite precision. While we may not be able to measure the state with infinite precision (for whatever reason), it does exist, in the sense that the physical system always is, in fact, in some exact, well-defined state. While such an assumption may appear obvious, trivial or natural at first glance, it will become crucial in what follows, because, from the experimental point of view, it might not be true.
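
As a toy illustration of the locality distinction (the equations below are generic examples, not the equations of any proposed fundamental theory), compare a local diffusion-type equation with a nonlocal, integral one:

    \partial_t \phi(x,t) = D\,\partial_x^2 \phi(x,t)
    \qquad \text{versus} \qquad
    \partial_t \phi(x,t) = \int dy\, K(x,y)\, \phi(y,t).

The first couples each point x only to its infinitesimal neighborhood; the second couples it to arbitrarily distant points y. As for the Bell inequalities themselves, the most commonly tested (CHSH) form states that any local realistic theory must satisfy

    \bigl| E(a,b) + E(a,b') + E(a',b) - E(a',b') \bigr| \le 2,

where E denotes the correlation between measurement outcomes for detector settings a, a', b, b'; experiments observe values up to the quantum bound of 2\sqrt{2}.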

The next ingredient is the experimental validity of the Heisenberg inequalities. These inequalities essentially state that there are pairs of observables in nature which cannot both be measured with infinite precision for the same state. This holds not merely in practice but in principle, regardless of the technological proficiency one may have at one’s disposal. The most celebrated example is the uncertainty relation between the position and momentum of a particle: measuring the position places a finite bound on the precision with which the momentum can be measured, and vice versa. Given that every state contains the positions and momenta of all particles in the Universe, the Heisenberg inequalities prohibit us from experimentally specifying (i.e., measuring) a single exact state of our physical system.
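
The canonical example can be stated compactly:

    \Delta x \, \Delta p \ge \frac{\hbar}{2},

where \Delta x and \Delta p are the uncertainties in the position and momentum of the same particle in the same state, and \hbar is the reduced Planck constant. No state, and no measurement technique, can make both uncertainties arbitrarily small at once.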

The third ingredient is a lesson in math — the Cauchy problem. A given set of partial differential equations typically has infinitely many solutions. The Cauchy problem is the following question: how much additional data does one need to specify in order to uniquely single out one particular solution out of the infinite set of all solutions? These additional data are usually called “boundary” or “initial” conditions. The answer to the Cauchy problem, loosely formulated, is the following: for local partial differential equations, it is enough to specify the state of the system at one moment in time as the initial data. In contrast — and this is an important and often underappreciated detail — for nonlocal equations of motion this does not hold: the amount of data needed to single out one particular solution is much larger than that needed to specify the state of the system at any given moment, and is usually so large that it is generically equivalent to specifying the solution itself. In other words, given some nonlocal equations, in order to find a single solution of the equations, one needs to specify the whole solution in advance.
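
A minimal computational sketch of the local case (using the harmonic oscillator purely as a toy example, together with standard numerical tools): giving the state at a single moment singles out one and only one solution, which can then be followed into the future or the past.

    # Sketch of the Cauchy problem for a local equation of motion: the state
    # (x, p) at one moment uniquely fixes the whole trajectory.  The harmonic
    # oscillator is used here only as a toy example.
    import numpy as np
    from scipy.integrate import solve_ivp

    def oscillator(t, state, omega=1.0):
        x, p = state
        return [p, -omega**2 * x]   # dx/dt = p, dp/dt = -omega^2 x (unit mass)

    cauchy_data = [1.0, 0.0]        # x(0) and p(0): the "state at one moment"
    times = np.linspace(0.0, 10.0, 101)
    sol = solve_ivp(oscillator, (0.0, 10.0), cauchy_data, t_eval=times)
    print(sol.y[0, -1])             # x(10), uniquely determined by the Cauchy data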

The final ingredient is another lesson in math — chaos theory. It is essentially the study of the solutions of nonlinear partial differential equations (usually restricted to local equations, so that the Cauchy problem has a solution — this is called “deterministic chaos”). Chaos theory asks the following question: if one chooses a slightly different state as initial data for the given system of equations, what will happen to the solution? The answer (again, loosely formulated) is the following: for linear equations the solution will also be only slightly different from the old one, while for nonlinear equations the solution will soon become very different from the old one. In other words, nonlinear equations of motion tend (over time) to amplify the error with which initial conditions are specified. This is colloquially known as the butterfly effect [9].
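
A minimal sketch of this sensitivity (using the standard Lorenz system only as a stand-in for some nonlinear effective equations of motion): two initial states differing by one part in a billion soon evolve into trajectories that have nothing to do with each other.

    # Sketch of the butterfly effect: two almost identical initial states of the
    # Lorenz system (used here only as a generic nonlinear example) diverge until
    # their separation is as large as the attractor itself.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    times = np.linspace(0.0, 40.0, 4001)
    run_a = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0], t_eval=times).y
    run_b = solve_ivp(lorenz, (0.0, 40.0), [1.0 + 1e-9, 1.0, 1.0], t_eval=times).y

    # Tiny at t = 0, comparable to the size of the attractor by t = 40.
    separation = np.linalg.norm(run_a - run_b, axis=0)
    print(separation[0], separation[-1])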

Analysis

Now we are ready to put it all together, and demonstrate that a deterministic description of nature does not exist. Start by imagining that we have formulated some fundamental theory of nature, and have specified all possible observables that can be, well, observed. Then we ask the question “Can this theory be deterministic?”, given the definition of determinism provided at the outset. As a first step in answering that question, we formulate the effective equations of motion. Analysis of the Cauchy problem of the effective equations (whatever they may look like) tells us the following. If the equations are nonlocal, specifying the state of the system at one moment is not enough to obtain a unique solution of the equations, i.e., one cannot predict the state of the system either for future or for past moments. This is a good moment to stress the word “unique” in the definition of determinism — if the initial state of the system produces multiple possible solutions for the past and the future, it is pretty meaningless to say that the future is “determined” by the present. So, in order to save determinism, we are forced to assume locality of the effective equations of motion.

Enter Bell inequalities — we cannot have both locality and realism. And since we need locality to preserve determinism, we are forced to give up realism. But denial of realism means that the state describing the present moment (our initial data) does not exist with infinite precision! As I discussed above, this actually means that Nature does not exist in any one particular state. The best one can do in such a situation is to try to measure the initial state as precisely as (theoretically) possible, thereby specifying the initial state with at least some finite precision.

Enter Heisenberg inequalities — there is a bound on the precision with which we can measure the initial state of the system, and, in the absence of realism, a bound on the precision with which the initial state can even be said to exist. But okay, one could say, so what? Every physicist knows that one always needs to keep track of error bars, so what is the problem? The problem is that the solution of the Cauchy problem assumes that the initial condition is provided with infinite precision. If the initial condition does not exist with infinite precision, the best one can do is to provide a family of solutions of the equations of motion, as opposed to a single, unique solution. This defeats determinism.

But wait, we can calculate the whole family of solutions, and just keep track of the error bars. If they remain reasonably small in the future and in the past (and by “reasonably small” we can mean “of the same order of magnitude as the errors in the initial data”), we can simply claim that this whole family of solutions represents one deterministic solution. Just like the initial state existed with only finite precision, so do all other states in the past and the future. Why can’t this be called “deterministic”?

Enter chaos theory — if the effective equations of motion are anything but linear (and they actually must be nonlinear, since we can observe interactions among particles in experiments), the error bars from the initial state will grow exponentially as time progresses. After enough time, the errors will grow so large that they will always encompass multiple very different futures of the system. Such a situation cannot be called “a single state” by any useful definition. If we wait long enough, everything will eventually happen. This is not determinism in any possible (even generalized) sense, but rather lack thereof.
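
Schematically, if \lambda denotes the largest (positive) Lyapunov exponent of the effective equations of motion, an initial uncertainty grows roughly as

    \delta(t) \sim \delta_0\, e^{\lambda t},

where \delta_0 is the uncertainty at the initial moment; so even the small, theoretically unavoidable uncertainty imposed by the Heisenberg inequalities is eventually amplified until it spans macroscopically different histories.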

So it turns out that we are out of options — if the effective equations of motion are nonlocal, determinism is killed by the absence of a solution to the Cauchy problem. If the equations are local, an infinitely precise initial condition does not even exist, due to the lack of realism. If we try to redefine the state of the system to include error bars, the Heisenberg inequalities will place a theoretical bound on those error bars, and chaos theory guarantees that they will grow out of control for future and past states, defeating the redefined concept of “state,” and therefore determinism.

And this concludes the outline of the argument: we must accept that the laws of Nature are intrinsically nondeterministic.

Some additional comments

At this point, two remarks are in order. The first is about the apparently deterministic behavior of everyday stuff around us, the experience which led us to the idea of determinism in the first place. After all, part of the point of physics, starting from Newton, was to be able to predict the future, one way or another. So if Nature is not deterministic, how come our deterministic theories (like Newton’s laws of motion, or any generalization thereof) actually work so well in practice? If there is no determinism, how come we do not see complete chaos all around us? The answer is rather simple — in some cases chaos theory takes a long time to kick in. More precisely, if we consider a small enough physical system, which interacts with its surroundings weakly enough, is located in a small enough region of space, and whose behavior we are trying to predict for only a short enough future, and if our measurements of the state of the system are crude enough to begin with — we might just get lucky, so that the error bars of our system’s state do not increase drastically before we stop looking. In other words, the apparent determinism of the everyday world is an approximation, a mirage, an illusion that can last for a while, before the effects of chaos theory become too big to ignore. There is a parameter in chaos theory that quantifies how much time can pass before the errors of the initial state become substantially large — it is called the Lyapunov time [10]. The pertinent Wikipedia article has a nice table of Lyapunov times for various physical systems, which should further illuminate why we consider some of our everyday physics to be “deterministic.”
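
In the same rough notation as above, the Lyapunov time is the inverse of the largest Lyapunov exponent, and the time over which predictions remain meaningful grows only logarithmically with the precision of the initial data:

    t_{\text{predict}} \sim \frac{1}{\lambda}\,\ln\!\frac{\Delta}{\delta_0},

where \delta_0 is the uncertainty of the initial state and \Delta is the error we are willing to tolerate in the prediction. Improving the initial measurement by a factor of a thousand therefore buys only a handful of extra Lyapunov times of predictability.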

The second remark is about the concept of superdeterminism [11]. This is a logically consistent method to defeat the experimental violation of Bell inequalities, which was crucial for our argument above. Simply put, superdeterminism states that if the Universe is deterministic, we have no reason to trust the results of experiments. Namely, an assumption of a deterministic Universe implies that our own behavior is predetermined as well, and that we can only perform those experiments which we were predetermined to perform, specified by the initial conditions of the Universe (say, at the time of the Big Bang or some such). These predetermined experiments cannot explore the whole parameter space, but only a predetermined set of parameters, and thus may present biased outcomes. Moreover, one has trouble even defining the concepts of “experiment” and “outcome” in a superdeterministic setup, because the experimenter lacks the ability to make choices about the experimental setup itself. In other words, superdeterminism basically says that Nature is allowed to lie to us when we do experiments.

In order to understand this more clearly, I usually like to think about the following example. Consider an ant walking around a 2-dimensional piece of paper. The ant is free to move all over the paper: it can go straight or turn left and right. There are no laws of physics preventing the ant from doing so. The natural conclusion is that the ant lives in a 2-dimensional world. But — if we assume a superdeterministic scenario — we can conceive of initial conditions for the ant which are such that it never ever thinks (or wishes, or gets any impulse or urge, or whatever) to go anywhere but forward. Such an ant would (falsely) conclude that it lives in a 1-dimensional world, simply because it is predetermined to never look sideways. So the ant’s experience of the world is crucially incomplete, and leads it to formulate wrong laws of physics to account for the world it lives in. This is exactly the way superdeterminism defeats the violation of Bell inequalities — the experimenter is predetermined to perform the experiment and to gather data from it, but he is also predetermined to bias the data while gathering it, and to (falsely) conclude that the inequalities are violated. Another experimenter on the other side of the globe is also predetermined to bias the data, in exactly the same way as the first one, and to reach the identical false conclusion. And so are the third, fourth, etc. experimenters. All of them are predetermined to bias their data in the same way because the initial conditions at the Big Bang, 14 billion years ago, were such as to make them do so.

This kind of explanation, while logically allowed, is anything but reasonable, and rightly deserves the name of a superconspiracy theory of the Universe. It is also a prime example of what is nowadays called cognitive instability [12]. If we are predetermined to skew the results of our own experiments on Bell inequalities, it is reasonable to expect that other experimental results are also skewed. This would force us to renounce experimentally obtained knowledge altogether, and to ask why we should even bother trying to learn anything about Nature at all. Anton Zeilinger has phrased the same issue as follows [13]:

“[W]e always implicitly assume the freedom of the experimentalist … This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature.”

Final remarks

Let me summarize. The analysis presented in the article suggests that we have only two choices: (1) accept that Nature is not deterministic, or (2) accept superdeterminism and renounce all knowledge of physics. To each his own, but apparently I happen to be predetermined to choose nondeterminism.

It is a fantastic achievement of human knowledge when it becomes apparent that a set of experiments can conclusively resolve an ontological question, all the more so when the resolution turns out to be in sharp contrast to the intuition of most people. Outside of superconspiracy theories and “brain in a vat”-like scenarios (which can be dismissed as cognitively unstable), experimental results tell us that the world around us is not deterministic. Such a conclusion, in addition to being fascinating in itself, has a multitude of consequences. For one, it answers the question “Is the whole Universe just one big computer?” with a definite “no.” Also, it opens the door for the compatibility between the laws of physics on one side, and a whole plethora of concepts like free will, strong emergence, qualia, even religion — on the other. But these are all topics for some other articles.

Finally, here is a helpful flowchart which summarizes the main lines of argument of the article:

[Flowchart: summary of the main argument of the article]

_____

Marko Vojinovic is a theoretical physicist, doing research in quantum gravity at the University of Lisbon. His other areas of interest include the foundational questions of physics, mathematical logic, philosophy, knowledge in general, and the origins of language and intuition.

[1] Wikipedia on Clockwork Universe.

[2] Wikipedia on Laplace’s demon.

[3] Wikipedia on physical determinism.

[4] Wikipedia on determinism in general.

[5] Wikipedia on Bell inequalities.

[6] Wikipedia on Heisenberg inequalities.

[7] Wikipedia on Cauchy problem.

[8] Wikipedia on chaos theory.

[9] Wikipedia on the butterfly effect.

[10] Wikipedia on Lyapunov time.

[11] Wikipedia on superdeterminism.

[12] I first encountered the term “cognitive instability” as used by Sean Carroll, though I am not sure if he coined it originally.

[13] A. Zeilinger, Dance of the Photons. Farrar, Straus and Giroux, New York, 2010, p. 266.


251 thoughts on “Farewell to determinism”

  1. Hi labnut,

    To that I reply:
    1) I knowingly exercise the freedom to direct my thoughts where I choose.

    What you “know about” is a very small proportion of what is going on in your brain. There are something like a hundred million million synapse connections in your brain, and that hugely complex neural network is the thing that contains your personality and makes your decisions. Further, there are something like ten thousand million million information-processing events (synapse firings) every second in your brain. How can you assert that you “know” that your thoughts and decisions are independent of the physical causation in that network?

    2) What laws of physics determine the contents of my thoughts?

    It’s mostly the laws of quantum electrodynamics. Plus a heck of a lot of contingent history (though there is nothing in that contingent history that is not a consequence of physical laws playing out).

    3) Why on earth would evolution go to such great lengths to endow me with the convincing illusion that I possess free will?

    The “illusion” of free will is really just the fact that you’re not aware of all the low-level gubbins of the decision-making machinery. It’s not a question of evolution “going to great lengths” to prevent you knowing about all of that, rather that it would have had to go to exceptionally great lengths to give your consciousness monitoring access to all of that, and there is simply no need for it to have done so.

    4) Why on earth would evolution go to even greater and very costly lengths to create consciousness if that consciousness cannot be used to exercise free will?

    Your question assumes that consciousness is very costly and can only be achieved by going to great lengths. Perhaps consciousness is simply a necessary consequence of the decision-making process. A decision-making brain has to have some knowledge of its own state, where relevant.

    Thus, I don’t think that any of your questions argue against a compatibilist conception of “free will”, “decision making” and “volition”.

    Additionally, Robin makes a very strong argument about the impossibility of all present and future knowledge being compressed into the distant past, only to emerge now.

    I don’t think that “distant past” determinism is a necessary part of the argument against dualistic free will, or of anything Coyne is saying. Coyne is merely saying that the decision is a product of the physical state of the system.

    The timescale for chaos and quantum indeterminacy to produce divergent outcomes is interesting, but isn’t important for that argument. Partly that is because the brain and its decision making *has* to be deterministic on long-enough timescales in order to be useful, and thus to have evolved.

    Second, since chaos and/or quantum indeterminacy do not give you dualism or anything other than the playing out of the physical laws, it doesn’t affect the concept of “moral responsibility”, which is what Coyne is really on about.

    Finally:

    Second, throwing around the accusation ‘incoherent’ is seldom a useful thing to do. […] The way you use the term, it indicates your attitude(pejoratively), but, to plain about it, your attitude is not interesting.

    You yesterday:

    This is why the arguments of Coyne are so incoherent.

    Just saying.


  2. Hi steven,

    This is only a “relaxation” if Popperian falsificationism is the correct standard for scientific method. I don’t think it ever was. For one thing, it a priori forbids any historical science (which seems to me to be what it was formulated to accomplish.) Cosmology being an historical science, any real progress means ignoring Popperian falsificationism.

    Falsification is about falsifying predictions, and predictions are about things that you don’t already know, and thus are not restricted to being about “the future”. Thus falsification still applies to historical sciences.

    For example, if you don’t already know some feature of the Cosmic Microwave Background, you can make a prediction of that feature, and then falsify the model that generated that prediction. It is irrelevant that the properties of the CMB were set 14 billion years ago.


  3. Very interesting comments, and to me Schopenhauer is always one philosopher who tries to bridge the gaps of philosophic history. His nineteenth-century point of view still has clarity because it precedes the muddle of follow-on psychology, neuroscience and cognitive science.

    The Brains Blog discussion of consciousness http://philosophyofbrains.com/ is insightful and Ian McGilchrist’s thesis that the left brain craves closure and determinacy while the right craves new and open experience may reveal the limitations embedded in the theory and these discussions.

    I’m more of an optimist because cognitive science does make progress, and I’d like to see more understanding and delineation of the time domains in our own cognitive machinery. Namely, touch, feel and language plus meaning are more direct, while the visual system allows us to feel indirectly with translatable qualia. It is these built-in translation functions of the visual system which we seem to adapt to mathematical reasoning, and perhaps a better understanding of these processes will lead us further.


  4. If anything is out of date, it is the OP’s definition of determinism. It is uncertain whether anybody other than Calvinists who’ve stuck with double predestination adheres to the OP’s insistence on predeterminism and fatalism.


  5. But clearly, this is a philosophical position, not a scientific one. And reading the article, it is an argument for a certain philosophy of law, which, frankly, ignores centuries of philosophy of law debate from which existing laws derive. Further, Coyne assumes a position that he can judge laws we have, without any suggestion that he is willing to engage in the political issues concerning how we change the laws.
    Finally: “For a determinist, punishment has three rationales: deterrence of others, rehabilitation of the criminal, and protection of society from the criminal.” I have no problems with these rationales; but one doesn’t need determinism to get any of them. E.g., one can easily imagine Christian or Kantian arguments for these rationales; but Coyne, eliding the history of philosophy of law, can’t address the significance of this.
    (I like Coyne’s site, and even post there occasionally; but I’m growing really suspicious of some scientists over-reaching into philosophy without admitting that’s what they’re doing, and ignoring the history of the debates they are getting into.)


  6. Hi ejwinner,

    There seems to be a common theme here of taking rather uncharitable interpretations of Coyne’s posts. E.g.;

    I have no problems with these rationales; but one doesn’t need determinism to get any of them. […] Coyne, eliding the history of philosophy of law, can’t address the significance of this.

    Of course you don’t need determinism to get those three! (Where did he say that you did?) The significance of that list of three is what is *not* there: there is no “retributive vengeance” as a rationale for punishment. The religious notion of hell as a punishment is purely retributive (it is not a good deterrent, since no-one sees the punishment, and it isn’t rehabilitation, nor is it removal of the criminal from society). Coyne thinks that notions of retributive punishment (inspired by religious and dualistic attitudes) are too prevalent in US society and make the justice system inhumane.

    But clearly, this is a philosophical position, not a scientific one. […] I’m growing really suspicious of some scientists over-reaching into philosophy without admitting that’s what they’re doing, …

    It is entirely obvious that that is what the post is doing! Only a small fraction of the posts on Jerry’s site are strictly about science, many of them are social advocacy. Further, I think it is wrong to see either science or philosophy as decoupled from wider concerns of society.

    Coyne assumes a position that he can judge laws we have, without any suggestion that he is willing to engage in the political issues concerning how we change the laws.

    Advocating views on a widely read blog *is* a part of political activity!


  7. Coel, I’m sorry, you are in error. For one thing, historical models cannot as a rule be falsified in this fashion. Initial conditions, boundaries, all sorts of supplementary assumptions are necessary. Such complexity is why it’s not always possible even to generate precise predictions, much less unique predictions. Popper condemns ad hoc assumptions to save models from falsification, and he is confident in his ability to spot a degenerating research program, which apparently resembles pornography in being both undefinable and obvious. Unfortunately for the practice of science, his philosophical powers are useless. Further, the logic of historical models relies heavily on induction. Popper of course had a grave problem with induction. This alone should be a red flag, but here we are.

    And in a larger sense it verges on the absurd. Unlike Popper, I believe science is not just a unique method (or even a set of methods). It is also the entire catalog of statements about the material world (which by the way includes human behavior!) which can be reasonably treated as true, corresponding to reality. In other words, facts. The goal of science is first of all to find these facts, not to create theorems that generate the facts. In cosmology, for instance, the map of the CMBR is as much science as a model for its formation. Multiple models that produce the CMBR but do not generate unique and therefore falsifiable predictions are nonetheless science because they adhere to the facts.


  8. But, philosophy of law derives from centuries of existing law, not the other way round!

    The position that any notion of the human mind which is able to make decisions in spite of the state of the human body is a scientific position because it is based on scientific facts and well-supported theories. Maybe it’s also a philosophical position, which is nice for those holding that tenet since it turns out to be the truth, but that doesn’t make it philosophy. Kant formulated the nebular hypothesis, which is used in cosmology. This doesn’t make cosmology Kantianism.

    Philosophy is also committed to the proposition that notions of the human mind that are independent of the body. Religion also has such notions, although they tend to speak of souls. Does that make philosophy religion? I think uncritically accepted disguised religious concepts should be viewed as a problem for philosophy instead of a valuable tradition. Uncritically accepted religious and philosophical concepts by contrast are commonly held to be as problematic in science as uncritically accepted common sense.


  9. It seems to me that your extremely narrow definition of determinism foreclosed practically any discussion on the topic itself.


  10. Maximus: “Secondly, it is assumed that if the universe is deterministic there must be a way to mathematically predict the future from the present perfectly. Here there is two problems: how do we know this is the case? – maybe there is some area where it breaks down. The second problem: Yes, maybe it is the case, but maybe not with the current state of the art mathematics – which all arguments presented relies on.”

    Amen!


  11. Coel,

    There seems to be a common theme here of taking rather uncharitable interpretations of Coyne’s posts.

    That is to be expected when one sees Coyne’s crude summary of justice. Coyne comes along with his simple, shallow view so he amply deserves the uncharitable interpretations. He should have done some background reading and made sure he understands the subject.

    notions of retributive punishment (inspired by religious and dualistic attitudes)

    Your statement, like that of Coyne, shows ignorance of the motivation and principles underlying retributive justice. To help you along I recommend you study the relevant SEP article: http://plato.stanford.edu/entries/justice-retributive/. But this is only the tip of the iceberg; there is a great deal more reading for you to do. There is really a large amount of deep legal scholarship devoted to the issue.

    I wish Aravis was back.


  12. Steven, are you the one with an upcoming PBS TV show? Is your philosophy of science creeping into your show? If so, maybe you can convince SciSal to let you do a guest post plugging your show.


  13. Huh. Well, this view might have already been voiced someplace else in the comments, but I have not read all the comments. I read the blog post.

    I diverge pretty early in the argument since the possibility that Nature itself is at core probabilistic was jettisoned, and that’s what I view her as, both as a student of Science, and what I feel. Yet, to me, that’s just another kind of Determinism, even if it isn’t a spin-up-the-wheels-on-the-ultracomputer-and-let’s-predict-the-next-15-seconds-of-the-Universe-perfectly kind. In that, there remains no Bohr-like space for souls and that sort of rubbish (pardon if that offends). It’s just the rules of calculating the future are different, and futures are governed by probability distributions, not cause-and-effect.

    I’m kind of amused that “cause and effect” remains part of modern parlance in philosophy of Science. Sure, logic is a great guide, but there are phenomena and systems to which, I urge, it just isn’t applicable. I’d say any system which is quantitatively described as a set of coupled differential equations has escaped the realm where “cause and effect” are applicable. Which phenomena are the causes? which the effects? Feedbacks break that idea.

    Things get a LOT simpler if Nature’s probabilistic roots are embraced. I tend to use “probabilistic” since “non-determinism” is ambiguous. It could mean “probabilistic” or it could mean that, at each collapse of an N-way quantum state, the Universe forks into branches with one branch following each of the possibilities. I say the latter speculation is, in Peter Woit’s terms, “not even wrong”, because it cannot be falsified.


  14. mogguy: “Scientifically, intrinsically (all) Laws of Nature are Non-Determined.”

    A law of a system can make the states or the outcomes of that system to be non-determined. But, if a law of a system itself is non-determined, it is not a law by definition. Thus, the above statement is totally wrong and thanks for your argument to show its wrongness.

    While both Maximus and mogguy have pointed out that Marko’s view is very much unsound, I would like to second their points with three simple examples.

    Example one: death of a life is ‘predetermined’. During the lifetime, the monkey can dance any which way (predictable or unpredictable) as he pleases, but no dance of any kind (absolute no) can dance his way out of that predetermined box. By the way, the ‘entire’ biological mechanisms are determined by the biological laws. One example is ‘existential introduction’.

    Example two: in every star, it is a big ‘quantum’ dancing inferno which by all means is chaotic with zillions of undetermined microstates. But all that chaos and undetermined ‘microstates’ are boxed by a weakering (the gravity). Furthermore, its ‘death’ is also predetermined, although there are a few (about 3 to 5) different pathways. A few choices are by all means not non-determination. In this example, the determined ‘box’ (the star) and the predetermined fate are great example of determinism. Two example is ‘existential generation’.

    Example three: at LHC, many trillions of protons have collided. Yet, the collision debris forms only a few hundred patterns. Is a choice of N (N is finite) the definition of ‘un-determinism’? Yes, it is undetermined among N but is totally determined of not being able to go out of the confinement of N. Thus far, all those N patterns show ‘No’ SUSY, ‘NO’ extra dimensions, ‘NO’ dark matter, ‘NO’…. All those ‘NOs’ could be ‘predetermined’.

    Choices among a ‘fixed’ N is by all means not undeterminism. A true undeterminism must have choices among a ‘non-fixed’ N (unbounded, if not outright infinite). Even for an open system, if it has some predetermined ‘NOs’, it is not an undetermined system.


  15. Marko,

    You state in your article:

    “Enter chaos theory — if the effective equations of motion are anything but linear (and they actually must be nonlinear, since we can observe interactions among particles in experiments), the error bars from the initial state will grow exponentially as time progresses.”

    My point is just that you don’t need chaotic behavior for this to be true! Even for nonchaotic ordinary or partial differential equations, the error bars can grow exponentially as time progresses. For chaotic systems, the errors tend to grow even faster than that, and of course they exhibit other behaviors like bifurcation of solutions and dense orbits.

    I gave an example in one dimension just to illustrate this fact–you can easily extend this to a three dimensional system of ODE’s if you want something more physical.

    “My point here is that the above statement of chaos theory has nothing whatsoever to do with any numerical algorithms or approximations used or otherwise. It is a statement about the properties of exact solutions of DEs. The point of the article is that the “correctness” of the initial condition doesn’t really exist, and thus one can never specify a unique solution. No numerical algorithms or approximations ever entered the argument.”

    And my point is that you don’t even need to bring in chaos theory for this to be true. Your statement about the exact solution of DE’s relying on the correctness of the initial condition holds not only for the system describing the doubly jointed pendulum (commonly considered chaotic) but also for the single pendulum (commonly considered non-chaotic). If the first three parts of your arguments hold, there is no need to have chaos theory to reach your conclusion. Just having the definition that predictability means that the world follows systems of partial or ordinary differential equations is enough mathematically.


  16. You’re right, Disagreeable Me, I was just considering the movement of the wheel itself. When the ball is included, the total system (the roulette wheel and ball together) may very well be chaotic. A better example, then, would just be a Wheel of Chance (aka Wheel of Fortune) where the wheel is the only thing moving.


  17. Robin,

    Lorenz is pointing out that chaotic systems are difficult to predict. This does not mean that any system that is difficult to predict is chaotic. Chaotic systems also have other properties such as bifurcation (nonuniqueness) of solution to the differential equations describing them, and when you map out the movement of the object, it tends to fill the entire space. A system can be difficult to predict, yet still have neither of these properties.

    My point above was not that the error (or deviation–essentially the same in this context) for chaotic systems was linear, it was that the error for nonchaotic systems like the example I gave above can grow exponentially.


  18. Great article! But…

    Also, it opens the door for the compatibility between the laws of physics on one side, and a whole plethora of concepts like free will, strong emergence, qualia, even religion — on the other. But these are all topics for some other articles.

    This, right at the end, is where you lost me.

    Compatibilist free will is, well, compatible with determinism, and any other type of free will is incoherent. Either there is determinism or there is randomness (or a combination of both); neither is what people mean with libertarian or supernatural free will. They wouldn’t be happy to believe that decisions like whether to murder someone are based on die casts. What they want to have is a deterministic behaviour of their mind (e.g. they are nice, so they won’t ever murder) that is somehow (?) not determined by the past but comes entirely from within themselves. But to the negligible degree that that makes sense it is again determinism.

    Admittedly I have no idea what strong emergence is as opposed to simply emergence, and the latter would appear to be compatible with determinism. Same for qualia, which are just a fancy word for the observation that our brains differentiate between various kinds of sensory input.

    And sorry, but what has religion to do with anything? “There is randomness in the universe, therefore it is not unreasonable to believe in magic?” I can only assume I am missing a step of reasoning here.

    I once read a post by Sean Carroll. Of course as a non-physicist I sometimes have trouble following him, but in that case he pointed out that to the best of our understanding all processes are time-reversible, and that ultimately all laws of nature may be merely probabilistic. For example, entropy: it is not impossible that the entropy of a closed system decreases, merely vanishingly unlikely. So that may have primed me to find the above post convincing. The point is that a system of particles behaving probabilistically does not provide us with magical free will or magical creator gods either.


  19. “But, philosophy of law derives from centuries of existing law, not the other way round!”
    Actually we’re looking at a long discussion between philosophy and the politics that gets us law. It’s really not the ‘either/or’ you’re presenting here.
    “Philosophy is also committed to the proposition that notions of the human mind that are independent of the body.” As a Pragmatist, I guarantee you that this is false; and I really don’t know where you picked up this notion.


  20. Coel,
    “‘ Coyne, eliding the history of philosophy of law, can’t address the significance of this.’
    Of course you don’t need determinism to get those three!”
    The significance of this is that one needs politically to engage with those who don’t necessarily share your other values; Coyne seems (I could be wrong) unwilling to go that far.
    “The religious notion of hell as a punishment is purely retributive” – but not all religious thinkers go this far. I have no tolerance for any supernaturalism, but I am able to discern the complexity of religious thought on social matters.
    “Coyne thinks that notions of retributive punishment (inspired by religious and dualistic attitudes) are too prevalent in US society and make the justice system inhumane.” And I agree with him; but that doesn’t lead me to buy his determinism. (The whole point of my previous paragraph.)
    “Further, I think it is wrong to see either science or philosophy as decoupled from wider concerns of society.” Agreed again; I simply suggest that these issues be brought to the fore, even if only in off-hand remarks, so we know what we are actually getting when we discuss such issues.
    “Advocating views on a widely read blog *is* a part of political activity” – absolutely; which is why Coyne needs to address that openly.
    It is just more fair to say, “philosophically, I hold X,” rather than “I’m a biological scientist, I hold X,” which is misleading. (And I again suggest reading Cashmore, whom Coyne defers to as explanation of determinist justice theory, because the latter is precisely Cashmore’s claim.)
    You didn’t address: “I’m growing really suspicious of some scientists over-reaching into philosophy without admitting that’s what they’re doing” – but that’s the core problem here.
    (BTW, off topic, at Massimo’s suggestion I read some John Dupre, specifically “Darwin’s Legacy.” I found it quite cogent to this problem, and recommend it. Reductionism may not be all it’s cracked up to be.)
    (Meanwhile, I’ve addressed two of our previous issues on my blog; as to the problem of Darwin’s “Descent of Man,” I believe this deserves a longer, close reading, which may take some time.)
    Again, I think Coyne’s site delightful; but he does on occasion over-reach into philosophy without warrant.
    (Frankly, I’m growing persuaded that determinism is a position reached deductively, a priori; if this is true, then the notion that it is derived from scientific analysis is not only frivolous, it is false.)


  21. “… Tegmark’s level IV multiverse (the Mathematical Universe Hypothesis). This entails something equivalent to the MWI …”

    That sounds interesting, DM. Can you give a ref., or explain it a bit more?


  22. I can’t proofread at all. “Philosophy is also committed to the proposition that notions of the human mind that are independent of the body are viable propositions.”

    “And reading the article, it is an argument for a certain philosophy of law, which, frankly, ignores centuries of philosophy of law debate from which existing laws derive.” Maybe my version can be contested on the grounds that rhetoric, useful for Greek courts, was the origination of philosophy. But I’m afraid this one is just wrong. As to where I picked up the notion? Strong emergence of mind, qualia, the way many distinguish mind and body, sensations treated as non-constitutive of the mind but some sort of events that impinge upon it, etc.


  23. Coel: First, as I said above, Coyne is simply wrong about alleged incompatibility of determinism and religion.

    No, I take that back. Again, per Pauli, Coyne is not even wrong on this issue. Again, until he can actually understand religion, consciousness, or logic (determinism and religion are not logically incompatible, as well as the actual quasi-deterministic religions I mentioned), he ought to find some golden silence on this subject. IMO, he makes himself look more clueless every time he talks on this subject.

    As for his idea that ideas like Labnut’s are dangerous to society? If anything, I’d say Coyne’s ideas are dangerous. And the idea that they might undercut moral responsibility isn’t the main reason I say that. Rather, it’s because of that “s-word,” namely, “scientism.”

    ===

    Compatibilist free will, especially as understood in your way, is also why I reject the whole idea; there’s nothing with which some idea of free will, or my “something like free will,” needs to make itself compatible in the first place.


  24. stevenjohnson,
    That there are philosophers (well in the minority) “committed to the proposition that notions of the human mind that are independent of the body are viable propositions,” I don’t deny. But this is really atypical of philosophy at present, as is empirically verifiable in the literature.
    “But I’m afraid this one is just wrong” -Even Coel doesn’t deny this, it is obvious from the given language of Coyne’s article. Are we to interpret Coyne’s meaning separate from the words that he gives us?
    “Strong emergence of mind, qualia, the way many distinguish mind and body, sensations treated as non-constitutive of the mind but some sort of events that impinge upon it, etc.”
    I have no idea what you’re talking about here, you seem to be interpreting philosophical discourse just as you please.


  25. I’m with this and other commenters. Again, I don’t think any of us is being uncharitable in our interpretations of Coyne.

    That said, per both retributive and distributive justice, I reject the likes of John Rawls. Why?

    I strongly recommend Walter Kaufmann’s “Without Guilt and Justice” to explain that better than I can. But, my summary is that there is no such “view from nowhere,” and so, in dealing with individuals as individuals, justice of either stripe isn’t possible.


  26. There are tests to certify randomness, and evidence that quantum hardware makes a difference.

    No. There are pseudorandom generators that pass those same tests. The quantum hardware is a marketing gimmick. Even in the cited article, the authors “go to some length to point out that it is impossible to prove absolute randomness.”

    Compatibilist free will is, well, compatible with determinism, and any other type of free will is incoherent. Either there is determinism or there is randomness (or a combination of both); neither is what people mean with libertarian or supernatural free will.

    No, this is nonsense. Sticking to the topic, the OP claim is that modern physics demonstrated against determinism. Whether that is best explained by randomness or free will or something else, no one here has demonstrated.


  27. Hi ejwinner,

    I’m rather baffled at the problem here. It is blatantly obvious that many of Coyne’s posts are social campaigning; only a minority are narrowly about science. He often addresses social issues, the effect of religion on society, philosophical issues, cowboy boots, cats, what he ate on his holidays, and much else.

    I don’t see that philosophy is some weird domain where one may only make remarks if you first hoist a warning flag saying “hey everyone, I am now doing philosophy”. Coyne just writes about whatever he feels like. Where is the problem? Are any readers really confused by this?

    On determinism: As I’ve said, Coyne is not wedded to determinism, he accepts the role of quantum indeterminacy and has said so several times. As I’ve also said, he is a bit sloppy in not always mentioning that. That’s because it is not actually relevant to his target, which is religious-style dualistic free-will. Coyne is using “determinism” as a shorthand for “absence of dualistic free will and decisions being made by material stuff acting in accord with the laws of physics”. Whether or not there is quantum dice-throwing in there is pretty irrelevant to his main point, which is about social campaigning.

    Lastly, I’m not sure I understand your complaint about Coyne being “unwilling to engage politically”. He has a day job in addition to his blog, and has been writing a second book. He seems far more politically engaged than your average citizen.


  28. Hi Philip,

    From that technology review page:

    Significantly, they leave unanswered the question of how convincing this evidence is that they’ve gathered and instead go to some length to point out that it is impossible to prove absolute randomness.

    So, there’s good reason to believe QM is truly random (assuming there is only one universe, at least). My point is that we can never be sure that it is, and even the authors of that paper seem to agree.


  29. Hi Labnut,

    Try not to get too upset. To say that I think libertarian free will is incoherent is not to insult you but simply to describe why I reject it. Saying you are wrong is not the same as calling you a fool.

    When we use that term we usually mean something along the lines of, the argument clearly lacks a rational basis or contains obvious contradictions.

    Indeed. That is exactly what the word means. No more and no less. So why say a long sentence when a short phrase communicates the exact same idea?

    1) I knowingly exercise the freedom to direct my thoughts where I choose.

    I agree, as long as we interpret freedom along compatibilist grounds.

    2) What laws of physics determine the contents of my thoughts?

    All of them I suppose, but that question doesn’t really make sense. What laws of physics determine which continents go where? Unless you believe that continents have free will, the question doesn’t really help your case much.

    3) Why on earth would evolution go to such great lengths to endow me with the convincing illusion that I possess free will?

    It hasn’t. You do have (compatibilist) free will. As Coel says, any impression you have that this couldn’t arise in a deterministic universe is not so much a costly evolved illusion as a result of your ignorance combined with a failure of imagination (I don’t mean these terms pejoratively. We are all ignorant of the detailed workings of the brain and we all struggle to imagine how biological processes can give rise to human cognition).

    4) Why on earth would evolution go to even greater and very costly lengths to create consciousness if that consciousness cannot be used to exercise free will?

    On my view, consciousness is just what it is like to be a system with the functions the human brain has. What evolved were those functions. Consciousness is just an inevitable consequence of them.

    I don’t find your “humorous” assertion that I am a robot unlike you with your free will to be particularly amusing. It’s getting old, it’s patronising and I wish you’d drop it.


  30. Hi Robin,

    I do think your view and Dennett’s are similar, even if you don’t see it yourself. You don’t seem to be talking about libertarianism to me. Yes, there are processes which mix deterministic and random elements, but this is not what libertarianism seems to require. It seems to require elementary interactions which are neither deterministic nor random. A rough sketch of what this might look like is if the random events of QM are not actually random at all but influenced by something spiritual.

    Labnut would not describe it in those terms, as he likes to think of himself as a naturalist, but that is the most reasonable gloss I can put on libertarianism. Labnut’s approach seems to be to assert libertarianism but not offer any explanation for how it is supposed to make sense apart from that there is still much we don’t know.

    By the way, I didn’t say all religious people are libertarians. I said most libertarians are religious.


    It seems not to be in that particular blog, having searched under “III”, under “Everett”, and under “Quantum”. I did then read it all, and was interested to learn that you had discovered Tegmark’s idea independently. Perhaps it is a different blog where you explain how his Level III, the Everett multiverse, is in some sense a consequence of his Level IV, the mathematical multiverse.

    Sorry, Massimo, for raising this topic again, but I’m only asking for info on a single special topic that seems mysterious.


    I’m sorry I wasn’t clear. The referent for “this one” was the claim that laws were derived from debate in philosophy of law, not Coyne’s article. Maybe you can argue philosophy began as the art of trial rhetoric in ancient Athens. The proposition is a little strong but not crazy. But the laws came first, and philosophical reflection came later.

    I strongly disagreed with Coel about his Popperism but I haven’t commented on Coyne at all. Personally I read Coyne’s remarks as genuine preliminary thoughts about the coherence of inflicting stronger penalties on the basis of premeditation. I doubt he took the time to be careful about phrasing the background assumptions to that particular question. Perhaps he should have, but maybe it’s impossible to foolproof any argument against a hostile reading. And I say this despite strongly disapproving of Coyne for other reasons. What I vaguely remember from the past entries in his blog on the subject (I too like to look at the cat pictures) suggests to me he favors a determinism more similar to what I suggested way up thread. The OP of course had dismissed that unargued.

    Since you believe me to misrepresent the philosophical discourse, it would be best just to cite a few quotations as a reminder to the others, and be done with it. My personal feeling is that not having access to technical journals (and limited access to even classic literature) doesn’t forbid commenting on what I have read and digested to the best of my ability.


  33. Sorry, I misunderstood what you were looking for an explanation of.

    It’s pretty trivial really. On the MUH, all mathematically consistent universes exist. That includes all the different universes of the MWI. So, the MUH more or less entails the MWI, but not in a way which rules out other interpretations.


  34. Coel,
    ” It is blatantly obvious that many of Coyne’s posts are social campaigning” – Of course; also I would hope Coyne does engage in some political activism beyond his professional obligations, as many concerned professionals do, even if it amounts to only donating to causes or writing letters and emails on issues of concern, as I believe he has mentioned in the past.
    The immediate post in question had to do with his saying he didn’t understand how there could be degrees of difference between acts of fatal violence according to the law, which thus necessitate (according to law and legal theory) differences in degree of sentencing. He said that since all such acts (whether acts of passion or premeditated) are predetermined, the fault on the part of the perpetrator is precisely the same – none – suggesting that sentencing should therefore be the same in every case, its function being explained by the three rationales mentioned.
    Now given this, it would seem that he wants to argue legal theory here, and that he wants to propose a major change in law. I would have liked that to be remarked on.
    Perhaps I have not read enough of Coyne to know how wedded he is to determinism; but going by the words of his that I have read, he does claim a “hard incompatibilism” and “strict determinism” (his words). That doesn’t mean he doesn’t allow the role of a social environment to change minds, and he has said he does; but it does suggest his target is just free will per se, and not any one ‘religious dualistic’ version of it.
    Finally, one doesn’t need a hard incompatibilist strict determinism to argue against religious ethics, if that is indeed the intent.


  35. Hi DM,

    “I don’t understand the pointer basis problem”

    The pointer basis problem is not easy to formulate without math, unfortunately. The essence of the problem is the fact that the total wavefunction can be split into “worlds” in (infinitely many) nonunique ways. This is due to basic linear algebra — all bases in a vector space are equally good.
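
    For concreteness, here is the standard two-qubit illustration (a textbook identity, in the usual computational and ± bases): one and the same entangled state splits into “worlds” differently depending on which basis you pick,

    \[
    \frac{1}{\sqrt{2}}\Bigl( |0\rangle_A |0\rangle_B + |1\rangle_A |1\rangle_B \Bigr)
    = \frac{1}{\sqrt{2}}\Bigl( |+\rangle_A |+\rangle_B + |-\rangle_A |-\rangle_B \Bigr),
    \qquad
    |\pm\rangle = \frac{1}{\sqrt{2}}\bigl( |0\rangle \pm |1\rangle \bigr).
    \]

    A split into branches labelled by 0/1 is mathematically no more privileged than a split labelled by +/−, and nothing in the formalism alone tells you which decomposition corresponds to the “worlds”.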

    “‘That said, the random events in QM are not pseudorandom.’ So we assume. I maintain it is impossible to know.”

    Oh but no, that’s precisely the point of Bell inequalities! It *is* possible to distinguish pseudorandom from intrinsically random. The pseudorandom generator has a hidden regularity (the algorithm) which makes the simulation obey the Bell inequalities. That’s the concept of realism (as I described it in the article). If you want your simulation to reproduce violations of Bell inequalities, you either have to produce intrinsically random numbers, or you have to give up locality. So (if locality is maintained) Bell inequalities are essentially a criterion for experimental testing of whether randomness is intrinsic or pseudo.
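
    To make this concrete, here is a minimal numerical sketch (my own toy model; the detector angles and the sign-of-cosine outcome rule are arbitrary illustrative choices, not taken from any particular experiment). A local model driven by a pseudorandom hidden variable cannot push the CHSH combination above 2, while the quantum singlet correlation reaches 2√2 at the very same settings:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000

    def lhv_correlation(a, b):
        # Local "pseudorandom" model: a shared hidden variable lam, and each
        # detector outputs sign(cos(lam - its own angle)) -- no communication.
        lam = rng.uniform(0.0, 2.0 * np.pi, N)
        A = np.sign(np.cos(lam - a))
        B = np.sign(np.cos(lam - b))
        return np.mean(A * B)

    def qm_correlation(a, b):
        # Quantum singlet prediction for spin measurements along angles a, b.
        return -np.cos(a - b)

    def chsh(E, a1, a2, b1, b2):
        return E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

    a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    print("local pseudorandom model |S| =", abs(chsh(lhv_correlation, a1, a2, b1, b2)))  # about 2.0
    print("quantum singlet          |S| =", abs(chsh(qm_correlation, a1, a2, b1, b2)))   # about 2.83

    This particular local recipe lands exactly on the classical bound of 2, and no local recipe can do better; in the toy model the only way to reach the quantum value is to let the code for one detector use the other detector’s setting, i.e. to give up locality.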

    “Computers are not bound by locality. They can jump around computing things here and there as they like.”

    Please do not confuse nonlocality of equations of motion with nonlocal execution of an algorithm. The issue with nonlocality of EOM is that there is insufficient data to uniquely fix a solution. No algorithm can work around that, regardless of the order of steps it may take. “Guessing” of the missing data is impossible for a computer — a pseudorandom generator is just another equation to solve, it does not really help in any way (one can even formulate a mathematical proof of this, but it would be too technical for a blog comment).

    “If you can make predictions, you can make a simulation which corresponds to those predictions. Both involve understanding the phenomena well enough to have a step-by-step procedure (an algorithm) to calculate what will happen.”

    In QM, we can make predictions of probability distributions, not actual outcomes. So we cannot predict “what will happen”, but only the probability of it happening. This holds both for pen-and-paper predictions and for predictions made by a computer algorithm.

    A computer algorithm can solve equations of QM, i.e. evolve the wavefunction (deterministically). But the collapse of the wavefunction (or choosing a basis to split the wf into “worlds” in MWI) requires introduction (one could also say “creation”) of a new piece of information, new piece of initial data — something that simply was not present in the algorithm up to that point. No algorithm can create new data on its own — it can only manipulate data that already exist.
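
    A minimal sketch of where that line sits in practice (a generic two-level toy system with an arbitrary Hamiltonian, nothing specific to any real experiment): everything up to the probabilities is a closed deterministic computation, and the final “which outcome actually occurred” step consumes a number that the equations themselves never supply. In a simulation one plugs in a pseudorandom number there, which is of course just more pre-existing deterministic data in disguise.

    import numpy as np

    # Toy two-level system; the Hamiltonian and evolution time are arbitrary.
    H = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    psi0 = np.array([1.0, 0.0], dtype=complex)
    t = 0.7

    # Deterministic part: the Schrodinger equation fixes psi(t) uniquely.
    w, v = np.linalg.eigh(H)
    U = v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T
    psi_t = U @ psi0

    # Still deterministic: the Born-rule probabilities follow from psi(t).
    probs = np.abs(psi_t) ** 2
    probs = probs / probs.sum()          # guard against floating-point drift
    print("P(0), P(1) =", probs)

    # The non-deterministic step: nothing in psi_t picks the actual outcome.
    # Here a pseudorandom generator "makes the choice", i.e. supplies data
    # that was not present in the algorithm above.
    rng = np.random.default_rng()
    print("simulated outcome:", rng.choice([0, 1], p=probs))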


  36. Sorry, Marko, I don’t buy it, though I admit I am not competent to discuss it at a technical level. I’ll try to explain why I don’t buy it though.

    So (if locality is maintained) Bell inequalities are essentially a criterion for experimental testing of whether randomness is intrinsic or pseudo.

    Right. But I’m not assuming locality is maintained. I’m assuming computers can ignore locality.

    Please do not confuse nonlocality of equations of motion with nonlocal execution of an algorithm.

    I think the two are equivalent. Bell’s theorem is consistent with hidden variables if we allow non-local (superluminal) interaction. In the case of a simulation of a pair of entangled particles, whichever measurement is simulated first can influence the simulation of the second, no matter how distant the two are from each other in virtual space. In this way, we can guarantee measurements which are consistent with Bell’s theorem.
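
    Here is a rough sketch of the kind of bookkeeping I mean (the conditional probability is the standard singlet formula; the “first measurement wins” ordering is my own simplification). The second outcome is computed from the first outcome and *both* detector settings, however far apart the two detectors sit in virtual space:

    import numpy as np

    rng = np.random.default_rng()

    def measure_singlet_pair(a, b):
        # Simulate whichever measurement happens "first" with a fair coin,
        # then choose the partner outcome using the first result and both
        # settings -- a step a genuinely local system could not perform.
        A = rng.choice([+1, -1])
        p_opposite = np.cos((a - b) / 2) ** 2     # standard singlet conditional
        B = -A if rng.random() < p_opposite else A
        return A, B

    def correlation(a, b, n=200_000):
        outcomes = [measure_singlet_pair(a, b) for _ in range(n)]
        return np.mean([A * B for A, B in outcomes])

    # Reproduces E(a, b) = -cos(a - b), the correlation that violates the
    # CHSH bound of 2 for suitably chosen angles.
    print(correlation(0.0, np.pi / 4))            # about -0.71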

    The issue with nonlocality of EOM is that there is insufficient data to uniquely fix a solution.

    I maintain we don’t need to uniquely fix a solution. If we have a family of solutions, a pseudorandom number generator allows a computer to select one possible future from that family arbitrarily. And again, Bell doesn’t rule out hidden variables if we allow non-locality.

    In QM, we can make predictions of probability distributions, not actual outcomes.

    I know! Which is why I said that you can make a “story” which is consistent with Bell’s theorem and why I said you can make statistical predictions. You can’t predict what will actually happen, but you can, using a pen and paper, simulate a series of events which behaves in a manner which is characteristic of a real quantum system. And a computer can do the same, and for all we know that could be what drives the universe we observe.

    But the collapse of the wavefunction (or choosing a basis to split the wf into “worlds” in MWI) requires introduction (one could also say “creation”) of a new piece of information, new piece of initial data — something that simply was not present in the algorithm up to that point

    OK, but there’s no way to know what that data should be, and if that’s the case there’s no way to know when it’s wrong. In that case we can just make something up, and that’s what pseudo-random number generation allows us to do.


  37. stevenjohnson,
    certainly law per se came before philosophy per se; but the history since has been a long interchange between theories and practices, reflection and public debate. They cannot be partitioned precisely as a means of exclusion.
    I am not remarking on Coyne’s position with hostility, but skeptically. Indeed, I was predisposed to it when I first encountered it; but Coyne in one post referenced Andrew Cashmore’s argument for a blame-free justice system, and that article stuck in my craw and caused me to rethink the matter. See Cashmore, http://www.pnas.org/content/107/10/4499.full, and also Henrik Anckarsäter’s reply to it, http://www.pnas.org/content/107/28/E114.full, which is brief but to the point. Biology simply does not yet have an account of human behavior convincingly detailed enough to make any claims on “moral responsibility and penal law” (as Anckarsäter puts it).
    Coyne has been considering these issues for a while, and I hope he continues to do so, and to continue discussing them on his site. But when he asks how differing degrees of transgression can be determined for the sake of differing degrees of sentencing, good heavens! Politicians, lawyers, judges, and legal theorists have been discussing these issues for years; the laws don’t just pop into place based on simple biases. There is a record of these discussions that can (and, to me, should) be queried before raising such a question as though it’s never been asked before. Doing so actually takes the matter into a space of idealized abstraction – “what degrees of difference can be allowed given that determinism eliminates personal responsibility in such matters?” That, I suggest, is a questionable strategy.
    As to the issue of what texts or positions in philosophy do/do not depend on theorizing a ‘disembodied mind:’ The library of literature assuming an embodied mind, deriving from questions concerning the development of an organism with its environment, is quite extensive, so I will only mention a few: one can begin with the empiricism of Hume and Mill, or with the psychology of William James, or the naturalism of Santayana. The language theories of the later Wittgenstein or Austin also place themselves in embodied minds in a real world of social context. The kind of extreme idealism you fret about is really quite exhausted, surviving only in the margins (e.g., committed Husserlians who rarely publish outside of their own specialized journals).
    I am not as caught up with recent philosophy as I would like; but your mention of the problem of qualia indicates that you may have a misunderstanding here as well. The discussion of qualia cannot occur without assuming an embodied mind, since qualia problematics are precisely about sensory experiences and their interpretation. (See: http://plato.stanford.edu/entries/qualia/.)
    Finally: “not having access to technical journals (and limited access to even classic literature) doesn’t forbid commenting on what I have read and digested to the best of my ability.” I absolutely agree, and engage the same practice myself on many matters. However, my concern is that we should not prejudge philosophical positions without inquiry. (Although, to be fair, we all do at some point.)


  38. Mark,

    “My point is just that you don’t need chaotic behavior for this to be true! Even for nonchaotic ordinary or partial differential equations, the error bars can grow exponentially as time progresses.”

    Oh, now I see what you mean! Sure, you’re right about that statement: the error-bars in the solutions of PDEs can grow exponentially even if they are not chaotic. I agree.

    However, that is not enough for my argument. Namely, the errors in the solutions of PDEs do not *necessarily* grow exponentially (you gave one example where they do, and I can give another example where they do not). There are systems of PDEs for which the errors grow slower than exponentially. The problem here is that we do not know what these PDEs (for a fundamental theory) look like, so we cannot make any strong statements about exponential growth.

    However, if I know that the PDEs are nonlinear and generic or complicated enough, I know that they will *certainly* show chaotic behavior, which will *certainly* lead to at least exponential growth of error bars. So chaos theory is there just to plug this hole in the proof — namely, that we don’t really know what our PDEs actually look like, and therefore don’t know whether the errors will grow exponentially on their own or not. But chaos theory ensures that they always will, so we don’t need to inspect the exact form of the solutions, etc.
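
    For a picture of what the chaotic case does to the error bars, a one-line toy model is enough (the logistic map at r = 4, standing in for “generic nonlinear dynamics”, not the actual equations of any fundamental theory). Two initial conditions differing by one part in 10^12 disagree completely after a few dozen steps:

    # Sensitive dependence on initial conditions in the chaotic logistic map:
    # the separation between two nearby trajectories grows roughly like 2^n
    # until it saturates at the size of the whole interval.
    x, y = 0.4, 0.4 + 1e-12
    r = 4.0
    for step in range(1, 61):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        if step % 10 == 0:
            print(f"step {step:2d}: separation = {abs(x - y):.3e}")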

    I hope this is more clear now. 🙂


  39. Hi ejwinner,

    … his [Coyne’s] saying he didn’t understand how there could be degrees of difference between acts of fatal violence according to the law, which thus necessitate (according to law and legal theory) differences in degree of sentencing. […]

    What you suggest that Coyne said is fairly different from what he did say (a good tactic when criticising someone directly is to actually quote them).

    What Coyne did say was: “I need to think about whether premeditation makes such a huge difference” followed by some suggestions as to why it might make a difference, and then he concludes with: “These are just some tentative thoughts, and I welcome ideas from readers”.

    Now given this, it would seem that he wants to argue legal theory here, and that he wants to propose a major change in law. I would have liked that to be remarked on.

    You really are immensely picky, and amazingly critical of anyone writing something even slightly differently from how you would have! What Coyne finishes with is: “If you know the law, perhaps you can explain the legal rationale for charging someone with a more serious crime when the same act is shown to be premeditated.”

    The fact that he wants to “argue legal theory” is blatantly obvious! Further, he is clearly thinking aloud and wants to discuss things further before making a stronger statement. You seem to want his posts to come with labels “In this post I am doing philosophy” and “in this post I argue legal theory”. Isn’t it obvious enough without the labels?

    Finally, one doesn’t need a hard incompatibilist strict determinism to argue against religious ethics, if that is indeed the intent.

    Did anyone say that one does?


    Tegmark asserts, as “Level IV”, the ‘physical existence’ of ‘all mathematical structures’, whatever the insides of the single quotes might come to mean when we get a better grip on them. In particular, if there were any such structure which ‘encompassed’ Everett’s Level III, it would indeed be trivial that IV → (a feeble non-Tegmarkian form of III).

    But, expressed in Tegmark’s terminology, Everett/Tegmark III asserts a huge, non-trivial amount more: that the particular mathematical structure which is our universe (multiverse, omnium, …pick your word) has that Everettian ‘quantum interpretation’, not merely that some mathematical structure does.


  41. To Marko: (13Sept 14.43)

    Marko: “Thank you for noticing! 🙂 Unfortunately, it seems that there are very few comments that are discussing the topic of the article itself. Maybe I shouldn’t have ever mentioned free will etc…”

    I’m afraid it would have happened anyway!
    Raise ‘Cause > non-Determined Effect’ and ‘Free will/Supernature’ will also raise their heads, mostly as views recoiling from the ‘non-responsibility’ of behaviour (particularly moral behaviour) entailed by rigid Determinism unless diluted by Compatibilism.

    To Labnut: (13Sept 14.43)
    “Coyne maintains that the contents of my thoughts are rigidly determined by the laws of physics. To that I reply:
    1) I knowingly exercise the freedom to direct my thoughts where I choose.
    2) What laws of physics determine the contents of my thoughts?
    3) Why on earth would evolution go to such great lengths to endow me with the convincing illusion that I possess free will?
    4) Why on earth would evolution go to even greater and very costly lengths to create consciousness if that consciousness cannot be used to exercise free will?”

    (1) Do you really do this? I can’t doubt another’s conviction but I don’t think I knowingly exercise the freedom to direct my thoughts entirely where I choose.
    (2) Physical electro-chemical events.
    (3) See (1) above.
    (4) What if Consciousness and the exercise of Freewill are one and the same “illusion”, precisely the same brain event?

    To Coel: (13Sept 17.15) “The religious notion of hell as a punishment is purely retributive (it is not a good deterrent, since no-one sees the punishment, and it isn’t rehabilitation, nor is it removal of the criminal from society).”

    It doesn’t work as a “notion” but it sure as Hell works as a deterrent if you have religious *conviction*, a total belief, that supernatural Hell awaits in a next (endless) life. It has done so for millennia of human history and still does. Maybe not perfectly, but certainly effectively. It may not prevent crimes of passion but it has prevented much deliberate petty theft and adultery, say.

    To SciSal: Is it possible to annotate the Comments/Replies somehow for easier reference? E.g. (Comments) 1,2,3 etc and (Replies to Comments) as 3.1,3.2,3.3 etc.


  42. Hi phoffman56,

    I take it as an axiom that when two mathematical structures are perfectly isomorphic (they are identical), they are the same structure. The set of all even numbers and the set of all prime numbers are not the same set, but both contain the number two, and in my view this is the same number two.

    The set of all Everettian level III universes contains one which is isomorphic to this universe. According to the above axiom, it actually is this universe. That doesn’t necessarily mean that the other interpretations are false though. Perhaps the wavefunction really does collapse, but in that case it would still imply that we are in a multiverse consisting of a different parallel universe for each way it could collapse. This may not quite be EQM (which assumes the wavefunction does not collapse) but it’s pretty damn close for all practical purposes. Since we have no reason to believe the wavefunction does actually collapse, I think EQM is the most reasonable, parsimonious interpretation.

    But actually, I think all interpretations could be right. If it turns out that there are no empirical differences between the different interpretations, then they all yield a high-level description of the universe which is isomorphic. If this is the case, then all those emergent high-level descriptions are the same mathematical object, and so if there is a version of this universe which works according to the de Broglie/Bohm interpretation and one which works according to some version of Copenhagen, but both behave identically, I don’t think that’s a meaningful difference and I think they are both the same universe.


  43. Briefly replying to DM’s last (lost its “reply button” on mine anyway!):

    Correct about isomorphic structures being the same of course, but I think that is irrelevant to whether Tegmark’s IV implies III.

    When you say
    “The set of all Everettian level III universes contains one which is isomorphic to this universe. ”
    you are already assuming Everett’s interpretation holds. I assume “this” means our physical universe.

    So I think IV could hold with III failing.

    With some reservations, I accept both at present, so no need to try to convince me.

    Sean Carroll has a recent article with a potentially falsifiable claim which follows from Everett, but not from Copenhagen. If so, they are more than interpretations.

    David Deutsch, who proved the existence of the universal quantum computer as an abstraction, has a very readable article, easy to get from his webpage, in the form of a book review, which argues that Everett’s is virtually the only one which makes any sense, quite independently of testing it empirically.


  44. Hi DM,

    I do think your view and Dennett’s are similar, even if you don’t see it yourself. You don’t seem to be talking about libertarianism to me.

    Well it is not really my view I am talking about, only my intuitions about my volition and what I see as the intuitions the majority have about their volitions, although it is only based on an informal, if fairly wide-ranging, gathering of information.

    What the facts of the matter are, I don’t know – I think that we don’t know enough about the universe or the nature of consciousness to make a decision about it.

    But it does seem to me that, where there is no reason to do otherwise, I should just go with my intuitions about my mental processes. Some seem to suggest that I should believe my intuitions are wrong until I have evidence that they are correct.

    Incidentally I agree with Coyne on the basis for a system of justice, but not his reasoning behind it.

    I think that there are perfectly good reasons for going with a system based on deterrence, harm minimisation and rehabilitation without tying it to a dubious metaphysical belief.

    Moreover I think it is very dangerous to start tying it to some particular metaphysical belief system, because that will become the reason people give for rejecting the idea.

