*Introduction*

Ever since the formulation of Newton’s laws of motion (and maybe even before that), one of the most popular philosophical ways of looking at the world has been determinism, as captured by the so-called “Clockwork Universe” metaphor [1]. This has sparked countless debates about various concepts in philosophy regarding free will, fate, religion, responsibility, morality, and so on. However, with the advent of modern science, especially quantum mechanics, determinism fell out of favor as a scientifically valid point of view. This is nicely phrased in the famous urban legend of the Einstein-Bohr dialogue:

Einstein: “God does not play dice.”

Bohr: “Stop telling God what to do with his dice.”

Despite all the developments of modern science in the last century, a surprising number of laypeople (i.e., those who are not familiar with the inner workings of quantum mechanics) still appear to favor determinism over indeterminism. The point of this article is to address this issue, and to argue (as the title suggests) that determinism is false. Almost.

*Preliminaries*

Let us begin by making some more precise definitions. By “determinism” I will refer to the statement which can be loosely formulated as follows: *given the state of the Universe at some moment, one can calculate a unique state of the Universe at any other moment* (both into the future and into the past). This goes along the lines of Laplace’s demon [2] and physical determinism [3], with some caveats about terminology that I will discuss below. Of course, there are various other definitions of the term “determinism” (see [4] for a review) which are not equivalent to the one above. However, the definition that will concern us here appears to be the only one which can be operationally discussed from the point of view of science (physics in particular) as a property that Nature may or may not possess, so I will not pursue any other definition in this article.

There are various caveats that should be noted regarding the definition of determinism. First and foremost, regarding the terms “Universe,” “moment,” “past” and “future,” I will appeal to the reader’s intuitive understanding of the concepts of space, time and matter. While each of these can be defined more rigorously in mathematical physics (deploying concepts like isolated physical systems, foliated spacetime topologies, etc.), hopefully this will not be relevant for the main point of the article.

Second, I will deploy the concept of “calculating” in a very broad sense, in line with Laplace’s demon — assume that we have a computer which can evaluate algorithms arbitrarily fast, with unlimited memory, and so on. In other words, I will assume that this computer can do whatever can “in principle” be algorithmically calculated using math, without any regard to practical restrictions on how to construct such a machine. I will again appeal to the reader’s intuition regarding what can be “calculated in principle” versus “calculated in practice,” and I will not be limited by the latter.

Finally, and crucially, the concept of the “state” of a physical system needs to be formulated more precisely. To begin with, by “state” I do not mean the quantum-mechanical state vector (commonly known as the wavefunction), because I do not want to rely on the formalism of quantum mechanics. Instead, for the purposes of this article, “state” will mean *any set of particular values of all independent observables that can be measured in a given physical system* (a “phase space point” in technical terms). This includes (but is not limited to) positions, momenta, spins, etc. of all elementary particles in the Universe. In addition, it should include any potential additional observables which we are unaware of — collectively called *hidden variables*, whatever they may be.

We all know that quantum mechanics is probabilistic, rather than deterministic. It describes physical systems using the wavefunction, which represents a probability amplitude for obtaining some result when measuring an observable. The evolution of the wavefunction has two parts — unitary and nonunitary — corresponding respectively to deterministic and nondeterministic evolution. Therefore, if determinism is to be true in Nature, we have to assume that quantum mechanics is not a fundamental theory, but rather that there is some more fundamental deterministic theory which describes processes in nature, and that quantum mechanics is just a statistical approximation of that fundamental theory. Thus the concept of “state” described in the previous paragraph is defined in terms of that more fundamental theory, and the wavefunction can be extracted from it by averaging the state over the hidden variables. Consequently, in this setup the “state” is more general than the wavefunction. This is also illuminated by the fact that in principle one cannot simultaneously measure both the position and the momentum of a particle, while in the definition above I have not assumed any such restriction on our alleged fundamental deterministic theory.

As a final point of these preliminaries, note that the concept of the “state” can be defined rigorously for every deterministic theory in physics, despite the vagueness of the definition I gave above. The definition of state always stems from specific properties of equations of motion in a given theory, but I resorted to the handwaving approach in order to avoid the technical clutter necessary for the rigorous definition. In the remainder of this article, some math and physics talk will necessarily slip in here and there, but hopefully it will not interfere with the readability of the text.

*Main insights*

Given any fundamental theory (deterministic or otherwise), one can always rewrite it as a set of equations for the state — i.e., equations for the set of all independent observables that can be measured in the Universe. These equations are called *effective equations of motion*, and they are typically (although not necessarily) partial differential equations. This sets the stage for the introduction of our four main players:

- Bell inequalities [5],
- Heisenberg inequalities [6],
- the Cauchy problem [7], and
- chaos theory [8],

which will team up to provide a proof that the effective equations of motion of any deterministic theory cannot be compatible with experimental data.

Let us first examine the main consequence of the experimental violation of Bell inequalities. Simply put, the violation implies that *local realism is false*, i.e., that *any theory which assumes both locality and realism is in contradiction with experiment*. In order to better understand this as it regards our effective equations of motion, let me explain what locality and realism actually mean in this context. *Locality* is an assumption that the interaction between two pieces of a physical system can be nonzero only if the pieces are in close proximity to each other, i.e., both are within some finite region of spacetime. The region is most commonly considered to be infinitesimal, such that the effective equations of motion for our deterministic theory are local partial differential equations. Such equations depend on only one point in spacetime (and its infinitesimal neighborhood), as opposed to *nonlocal* partial differential equations, which depend on more than one spacetime point. The point of this is to convince you that locality is a very precise mathematical concept, and that it may or may not be a property of the effective equations of motion. *Realism* is an assumption that the *state* of a physical system (as I defined it above) actually exists in reality, with infinite precision. While we may not be able to *measure* the state with infinite precision (for whatever reasons), it *does exist* in the sense that the physical system always *is in fact in some exact well-defined state*. While such an assumption may appear obvious, trivial or natural at first glance, it will become crucial in what follows, because it might not be true, from the experimental point of view.

The next ingredient is the experimental validity of Heisenberg inequalities. These inequalities essentially state that there are observables in nature which cannot be measured with infinite precision for the same state. And this means not even in principle, despite any technological proficiency that one may have at one’s disposal. The most celebrated example of this is the uncertainty relation between the position and momentum of a particle. Measuring the position places a finite boundary on measuring the momentum, and vice versa. Given that every state contains the positions and momenta of all particles in the Universe, Heisenberg inequalities prohibit us from experimentally specifying (i.e., measuring) a single state of our physical system.

The third ingredient is a lesson in math — the Cauchy problem. Given a set of partial differential equations, they typically have infinitely many solutions. The Cauchy problem is the following question: how much additional data does one need to specify in order to uniquely single out one particular solution from the infinite set of all solutions? These additional data are usually called “boundary” or “initial” conditions. The answer to the Cauchy problem, loosely formulated, is the following: for local partial differential equations, it is enough to specify the state of the system at one moment in time as the initial data. In contrast — and this is an important and often underappreciated detail — for nonlocal equations of motion this does not hold: the amount of data needed to single out one particular solution is much larger than that needed to specify the state of the system at any given moment, and is usually so large that it is generically equivalent to specifying the solution itself. In other words, given some nonlocal equations, in order to find a single solution one needs to specify the whole solution in advance.
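The local case can be illustrated with a toy numerical sketch (my example, not part of the original argument, with all numbers chosen arbitrarily): for a local equation such as dy/dt = −y, the single number y(0) is a complete set of Cauchy data, and any standard integrator recovers the unique solution from it.

```python
import math

def euler_solve(f, y0, t_end, steps):
    """Integrate dy/dt = f(y) forward from y(0) = y0 using Euler steps."""
    dt = t_end / steps
    y = y0
    for _ in range(steps):
        y += dt * f(y)
    return y

# For the local equation dy/dt = -y, the single value y(0) = 1 is a
# complete Cauchy datum: it singles out the unique solution y(t) = e^{-t}.
approx = euler_solve(lambda y: -y, y0=1.0, t_end=2.0, steps=100_000)
exact = math.exp(-2.0)
print(approx, exact)  # the two agree to several decimal places
```

The point of the sketch is only that one moment's worth of data suffices here; for a nonlocal equation no such finite initial datum would pin down the trajectory.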

The final ingredient is another lesson in math — chaos theory. It is essentially the study of solutions of nonlinear partial differential equations (usually restricted to local equations, so that the Cauchy problem has a solution — this is called “deterministic chaos”). Chaos theory asks the following question: if one chooses a slightly different state as initial data for a given system of equations, what will happen to the solution? The answer (again, loosely formulated) is the following: for linear equations the solution will also be only slightly different from the old one, while for nonlinear equations the solution will soon become *very* different from the old one. In other words, nonlinear equations of motion tend (over time) to amplify the error with which the initial conditions are specified. This is colloquially known as *the butterfly effect* [9].
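A minimal numerical sketch of the butterfly effect (my toy example; the logistic map stands in here for a genuinely chaotic system): two trajectories started from initial conditions differing by one part in ten billion soon disagree completely.

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started from
# initial conditions that differ by only 1e-10.
x, y = 0.3, 0.3 + 1e-10
max_gap = 0.0
for step in range(100):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))
print(max_gap)  # the initial error of 1e-10 has been amplified to order one
```

Roughly speaking, the error doubles on every iteration, so within a few dozen steps it saturates at the full size of the system's range of motion.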

*Analysis*

Now we are ready to put it all together, and demonstrate that a deterministic description of nature does not exist. Start by imagining that we have formulated some fundamental theory of nature, and have specified all possible observables that can be, well, observed. Then we ask the question “Can this theory be deterministic?” given the definition of determinism provided at the outset. As a first step in answering that question, we formulate the effective equations of motion. Analysis of the Cauchy problem for the effective equations (whatever they may look like) tells us the following. If the equations are nonlocal, specifying the state of the system at one moment is not enough to obtain a unique solution of the equations, i.e., one cannot predict the state of the system for either future or past moments. This is a good moment to stress the word “unique” in the definition of determinism — if the initial state of the system produces multiple possible solutions for the past and the future, it is pretty meaningless to say that the future is “determined” by the present. So, in order to save determinism, we are forced to assume locality of the effective equations of motion.

Enter Bell inequalities — we cannot have both locality and realism. And since we need locality to preserve determinism, we are forced to give up realism. But denial of realism means that the state describing the present moment (our initial data) does not exist with infinite precision! As I discussed above, this actually means that Nature *does not exist in any one particular state*. The best one can do in such a situation is to try to *measure* the initial state as precisely as (theoretically) possible, thereby specifying the initial state with at least some finite precision.

Enter Heisenberg inequalities — there is a boundary on the precision with which we can measure the initial state of the system, and in the absence of realism, there is a boundary on the precision with which the initial state can be said to *actually exist*. But okay, one could say, so what? Every physicist knows that one always needs to keep track of error bars, so what is the problem? The problem is that the solution of the Cauchy problem assumes that the initial condition is provided with infinite precision. If the initial condition *does not exist* with infinite precision, the best one can do is to provide a *family* of solutions for the equations of motion, as opposed to a single, unique solution. This defeats determinism.

But wait, we can calculate the whole family of solutions, and just keep track of the error bars. If these remain reasonably small into the future and the past (and by “reasonably small” we can mean “of the same order of magnitude as the errors in the initial data”), we can simply claim that this whole family of solutions represents one deterministic solution. Just as the initial state existed with only finite precision, so do all other states in the past and the future. Why can this not be called “deterministic”?

Enter chaos theory — if the effective equations of motion are anything but linear (and they actually must be nonlinear, since we can observe interactions among particles in experiments), the error bars from the initial state will grow exponentially as time progresses. After enough time, the errors will grow so large that they will always encompass multiple *very different futures* of the system. Such a situation cannot be called “a single state” by any useful definition. If we wait long enough, everything will eventually happen. This is not determinism in any possible (even generalized) sense, but rather lack thereof.

So it turns out that we are out of options — if the effective equations of motion are nonlocal, determinism is killed by the absence of a solution to the Cauchy problem. If the equations are local, the initial condition cannot exist due to lack of realism. If we try to redefine the state of the system to include error bars, the Heisenberg inequalities will place a theoretical boundary on those error bars, and chaos theory guarantees that they will grow out of control for future and past states, defeating the redefined concept of “state,” and therefore determinism.

And this concludes the outline of the argument: we must accept that the laws of Nature are intrinsically nondeterministic.

*Some additional comments*

At this point, two remarks are in order. The first is about the apparently deterministic behavior of the everyday stuff around us, the experience which led us to the idea of determinism in the first place. After all, part of the point of physics, starting from Newton, was to be able to *predict the future*, one way or another. So if Nature is not deterministic, how come our deterministic theories (like Newton’s laws of motion, or any generalization thereof) actually work so well in practice? If there is no determinism, how come we do not see complete chaos all around us? The answer is rather simple — in some cases chaos theory takes a long time to kick in. More precisely, if we consider a small enough physical system, which interacts with its surroundings weakly enough, and is located in a small enough region of space, and we are trying to predict its behavior for a short enough stretch of the future, and our measurements of the state of the system are crude enough to begin with — we might just get lucky, so that the error bars of our system’s state do not increase drastically before we stop looking. In other words, the apparent determinism of the everyday world is an approximation, a mirage, an illusion that can last for a while, before the effects of chaos theory become too big to ignore. There is a parameter in chaos theory that quantifies how much time can pass before the errors of the initial state become substantially large — it is called the Lyapunov time [10]. The pertinent Wikipedia article has a nice table of Lyapunov times for various physical systems, which should further illuminate why we consider some of our everyday physics “deterministic.”
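For a concrete toy system (my sketch, not from the article), the Lyapunov exponent — and hence the Lyapunov time — of the logistic map can be estimated by averaging the logarithm of the local stretching factor along a trajectory; the exact value for this map is known to be ln 2 per iteration.

```python
import math

# Estimate the Lyapunov exponent of the logistic map x -> 4x(1-x) by
# averaging log|f'(x)| = log|4 - 8x| along a long trajectory.  The exact
# value for this map is ln 2 ~ 0.693 per iteration.
x = 0.3
n = 10_000
total = 0.0
for _ in range(n):
    total += math.log(abs(4 - 8 * x))
    x = 4 * x * (1 - x)
lyapunov = total / n
lyapunov_time = 1 / lyapunov  # iterations for an initial error to grow e-fold
print(lyapunov, lyapunov_time)
```

A system is "everyday-deterministic" exactly when we stop looking well before a few Lyapunov times have elapsed; for planetary orbits that is millions of years, for weather a matter of days.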

The second remark is about the concept of *superdeterminism* [11]. This is a logically consistent way to defeat the experimental violation of Bell inequalities, which was crucial for our argumentation above. Simply put, superdeterminism states that if the Universe is deterministic, we have no reason to trust the results of experiments. Namely, the assumption of a deterministic Universe implies that our own behavior is predetermined as well, and that we can only perform those experiments which we were predetermined to perform, specified by the initial conditions of the Universe (say, at the time of the Big Bang or some such). These predetermined experiments cannot explore the whole parameter space, but only a predetermined set of parameters, and thus may present biased outcomes. Moreover, one has trouble even defining the concepts of “experiment” and “outcome” in a superdeterministic setup, because the experimenter lacks the ability to make choices about the experimental setup itself. In other words, superdeterminism basically says that Nature is allowed to lie to us when we do experiments.

In order to understand this more clearly, I usually like to think about the following example. Consider an ant walking around a 2-dimensional piece of paper. The ant is free to move all over the paper; it can go straight or turn left and right. There are no laws of physics preventing it from doing so. A natural conclusion is that the ant lives in a 2-dimensional world. But — if we assume a superdeterministic scenario — we can conceive of initial conditions for the ant such that it never ever thinks (or wishes, or gets any impulse or urge, or whatever) to go anywhere but forward. Such an ant would (falsely) conclude that it lives in a 1-dimensional world, simply because it is predetermined never to look sideways. So the ant’s experience of the world is crucially incomplete, and leads it to formulate wrong laws of physics to account for the world it lives in. This is exactly the way superdeterminism defeats the violation of Bell inequalities — the experimenter is predetermined to perform the experiment and to gather data from it, but he is also predetermined to bias the data while gathering it, and to (falsely) conclude that the inequalities are violated. Another experimenter on the other side of the globe is also predetermined to bias the data, *in exactly the same way as the first one*, and to reach the identical false conclusion. And so are the third, fourth, etc. experimenters. All of them are predetermined to bias their data in the same way because the initial conditions at the Big Bang, 14 billion years ago, were such as to make them do so.

This kind of explanation, while logically allowed, is anything but reasonable, and rightly deserves the name of a superconspiracy theory of the Universe. It is also a prime example of what is nowadays called *cognitive instability* [12]. If we are predetermined to skew the results of our own tests of Bell inequalities, it is reasonable to expect that other experimental results are also skewed. This would force us to renounce experimentally obtained knowledge altogether, and to question why we should even bother trying to learn anything about Nature at all. Anton Zeilinger has phrased the same issue as follows [13]:

“[W]e always implicitly assume the freedom of the experimentalist … This fundamental assumption is essential to doing science. If this were not true, then, I suggest, it would make no sense at all to ask nature questions in an experiment, since then nature could determine what our questions are, and that could guide our questions such that we arrive at a false picture of nature.”

*Final remarks*

Let me summarize. The analysis presented in the article suggests that we have only two choices: (1) accept that Nature is not deterministic, or (2) accept superdeterminism and renounce all knowledge of physics. To each his own, but apparently I happen to be predetermined to choose nondeterminism.

It is a fantastic achievement of human knowledge when it becomes apparent that a set of experiments can conclusively resolve an ontological question, and moreover that the resolution turns out to be in sharp contrast to the intuition of most people. Outside of superconspiracy theories and “brain in a vat”-like scenarios (which can be dismissed as cognitively unstable), experimental results tell us that the world around us is not deterministic. Such a conclusion, in addition to being fascinating in itself, has a multitude of consequences. For one, it answers the question “Is the whole Universe just one big computer?” with a definite “no.” It also opens the door to compatibility between the laws of physics on one side, and a whole plethora of concepts like free will, strong emergence, qualia, even religion on the other. But these are all topics for other articles.

At the end, here is a helpful flowchart which summarizes the main lines of argument of the article:

_____

Marko Vojinovic is a theoretical physicist, doing research in quantum gravity at the University of Lisbon. His other areas of interest include the foundational questions of physics, mathematical logic, philosophy, knowledge in general, and the origins of language and intuition.

[1] Wikipedia on Clockwork Universe.

[2] Wikipedia on Laplace’s demon.

[3] Wikipedia on physical determinism.

[4] Wikipedia on determinism in general.

[5] Wikipedia on Bell inequalities.

[6] Wikipedia on Heisenberg inequalities.

[7] Wikipedia on Cauchy problem.

[8] Wikipedia on chaos theory.

[9] Wikipedia on the butterfly effect.

[10] Wikipedia on Lyapunov time.

[11] Wikipedia on superdeterminism.

[12] I first encountered the term “cognitive instability” as used by Sean Carroll, though I am not sure if he coined it originally.

[13] A. Zeilinger, *Dance of the Photons*. Farrar, Straus and Giroux, New York, 2010, p. 266.

Coel,

When I express opinions it is always clear they are my private opinions, unconnected with my corporate role. I do not use my corporate role to give any authority to my private opinions.

Coyne has every right to express his private opinions but he is doing it from his platform as a scientist. He is using the implied authority of his post to give implicit backing to his ideology. It is evident that his science is infected by his ideology.

Then there is a large gap between what he thinks and the truth. When I examine my own behaviour carefully I cannot see any evidence that my belief in dualistic free will is harmful. In fact I would argue the opposite: that it is beneficial. If you disagree, you are welcome to explain how a belief in dualistic free will is harming me. I suspect you will have to resort to some very imaginative reasoning.


Speak of the devil… as it happens Coyne just posted an article wondering about the difference between premeditated and nonpremeditated murder given that we have no free will.

“As someone who doesn’t think that Pistorius, or any other criminal, had any choice about their actions, and that the nature of any punishment should be take that determinism into account, I need to think about whether premeditation makes such a huge difference. As I see it (and I know others will disagree), the laws of physics had already determined that Pistorius was going to murder his girlfriend that night.”

His main point, as per usual, is how this affects the application of legal penalties for crimes…

“For a determinist, punishment has three rationales: deterrence of others, rehabilitation of the criminal, and protection of society from the criminal. How would each of these be more serious under premeditation?”

Notice that retribution is removed. If the devil made you do it, there is no reason to punish you. That argument is not valid of course, but it’s his schtick.

That said I believe he agrees with using the term volition.


johsh,

I think you are talking about teleology. The world exhibits determinism and non-determinism but we exhibit teleology. Because teleology is such a loaded word, some want to insist that we are only deterministic. Nobody has shown that to be the case. All they can do is argue that ‘it must be so’. The problem with ‘it must be so’ arguments is they contain an assumption that present day knowledge is sufficient for that conclusion and an implied argument that an alternative solution cannot be imagined.

It fails on both counts. Present day knowledge is not sufficient to reach a conclusion and future science may well reveal a solution we can’t imagine today. That is often the case with science.


Marko Vojinovic: “The point of this article is to address this issue, and argue (as the title suggests) that determinism is false. Almost. … By “determinism” I will refer to the statement which can be loosely formulated as follows: given the state of the Universe at some moment, one can calculate a unique state of the Universe at any other moment (both into the future and into the past). … so I will not pursue any other definition in this article. … which will team up to provide a proof that the effective equations of motion of any deterministic theory cannot be compatible with experimental data. … Then we ask the question “Can this theory be deterministic?” given the definition of determinism provided at the outset.”

Nice strategy, but sorry, I must disagree with you for two reasons.

One, it is total nonsense.

Two, it is totally wrong.

First, it is total nonsense:

You have used many predetermined definitions to prove those predetermined views, both in your article and in your subsequent replies to critics. The following are a few examples.

Marko Vojinovic: “… denial of realism means that the state describing the present moment (our initial data) does not exist with infinite precision! … Realism is an assumption that the state of a physical system (as I defined it above) actually exists in reality, with infinite precision.”

I do not want to debate {what is realism?}. I just want to ask one simple question. As I cannot measure and describe you (Vojinovic, a physical state and object) with infinite precision, do you really exist? Before reading this article, I had made zero measurements of you. But your non-existence in my mind was only the result of my ignorance. Right now, I still know very little about you (far from an infinite-precision measurement). Can I declare your nonexistence? When I find a fox-tail, I know that there is (or was) a fox. The ontological reality is totally independent of the epistemic capability. Even if the epistemic limitation is intrinsic, it has no right to kill any ontological reality, and I will discuss this more in due time. By all means, you have mixed up your (or the entire physics community’s) ignorance as the only reality. Your above statement is total nonsense.

Marko Vojinovic: “Note that metaphysics is not an observable thing. … different interpretations are not different theories, …”

What the heck is Occam’s razor for, then? If two different theories produce two different observables, they can be weeded out by comparing those observables, and there is no need for Occam’s razor.

Indeed, the metaphysics (A) of physics (A) in domain (A) is not observable by ‘definition’. But metaphysics (A) can become physics (B) in domain (B). There are zillions of such examples; for instance, the metaphysics of Newtonian gravity (instantaneity) will definitely become a new physics. In fact, ‘all’ metaphysics are observable (though perhaps not via a gadget), including the metaphysics of ‘creation’ (the creation law). Most observables are not directly gadget-observable but are reached via inference and telescoping. By all means, the neutrino is not directly gadget-observable. It is observable only via a theoretical inference (the momentum conservation law applied to the missing energy). Then, using that inferred observation as a telescope, one can probe other non-gadget-observables. With this ‘inferring and telescoping’ tool, all metaphysics are observable once two claims are proved.

A. Every metaphysics (A) is always a physics (B), including the metaphysics of ‘creation’.

B. Every physics (B) is observable with the ‘inferring/telescoping’.

Those ‘interpretations’ of QM (physics A) have not become a physics (B); so they are interpretations, not theories. When one of the interpretations (metaphysics (A)) becomes a physics (B), …

With the above, it is fair to say that your statement is more total nonsense.

Marko Vojinovic: “After enough time, the errors will grow so large that they will always encompass multiple very different futures of the system. Such a situation cannot be called “a single state” by any useful definition. If we wait long enough, everything will eventually happen. This is not determinism in any possible (even generalized) sense, but rather lack thereof. … If there is no determinism, how come we do not see complete chaos all around us? The answer is rather simple — in some cases chaos theory takes a long time to kick in.”

Come on! If 14 billion years is not long enough for this indeterminate devil to destroy this orderly universe, I will very much discount it. More total nonsense.

Second, it is totally wrong:

In the ‘Graham Priest’ article, I showed a litmus test for Buddhism, and it consists of opening four locks.

Lock-one: Cabibbo angle (13.5 degrees), Weinberg angle (28.75 degrees), [(1/Alpha) = 137.0359 …]

Lock-two: Planck data (dark energy = 69.2; dark matter = 25.8; and visible matter = 4.82)

Lock-three: the pegs-lock which can only be opened by the exact pegs when they are inserted into the peg-key-holes. There are 48 peg-key-holes in this physical universe, and every peg is distinguished with a set of ‘name-codes’. The 48 matter particles form this pegs-lock.

Lock-four: {delta P x delta S > ħ} lock.

If a thing {Christianity, Buddhism or modern physics (multiverse, SUSY, M-string theory, Higgs mechanism, etc.)} is able to unlock (derive the keys for) those locks, it will definitely be ‘correct’. You are obviously trying to describe a particular view in physics. If you are correct, you can ‘derive’ (produce) the keys for those locks. If you cannot, your claims are simply wrong or useless.

There are two facts here.

One, if you can produce one key to unlock one lock, you can unlock all locks.

Two, with these locks, this universe is totally ‘confined’. Although the monkeys in this locked cage can do all types of monkey dancing with unpredictable moves, they can never (never, …, never, …) go beyond this confinement. That is, determinism (no way out) runs supreme while the monkey chaos is just a baby clown (see http://prebabel.blogspot.com/2013/01/welcome-to-camp-of-truth-nobel-laureate.html ).


This discussion has strayed from the original claim: that 20th-century physics shows that determinism in our universe is false.

Questions of free will, randomness, scientism, and determinism in hypothetical parallel universes are left open for another day.

He says:

But what does it matter what he thought if the laws of physics had determined the outcome anyway? Presumably the laws of physics had already determined what he thought and what he claimed. They also determined exactly what Coyne believed and how Coyne would respond.

This is why the arguments of Coyne are so incoherent. He cannot talk about the subject without smuggling in volition. Without volition there is nothing to be said. Say something and you are smuggling in volition.

Interesting article, but it repeats a common confusion about chaos theory. Chaos theory is not actually needed here at all to obtain the final piece of the argument!

Consider the simple ordinary differential equation dy/dt = k y for some constant k. Then the solution to this equation is y(t) = y_0 e^{k t}. If k has been measured with some error, then the solution y(t) will differ from the correct solution at a rate that grows exponentially. No chaos theory needed!

[The mistake here is that the author is confusing numerically stable algorithms for approximating differential equations with the solutions to the differential equations themselves. In a numerically stable algorithm for a DE, the error grows linearly as time progresses if the simulation is started with the correct initial conditions; but if the initial conditions themselves are specified incorrectly, then the solution to the incorrect problem can diverge from the true solution at an exponential rate.]
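The commenter's exponential-growth point is easy to check numerically. A minimal sketch (function name and numbers are mine, chosen only for illustration): two exact solutions of dy/dt = k y, one with a slightly mismeasured k, drift apart at an exponential rate with no numerics and no chaos involved.

```python
import math

# Exact solutions of dy/dt = k*y for two slightly different values of k.
# The gap between the two exact solutions grows exponentially in time,
# with no numerical approximation (and no chaos) involved.
def y(t, k, y0=1.0):
    return y0 * math.exp(k * t)

k_true, k_measured = 1.0, 1.001   # a 0.1% measurement error in k
for t in (1, 10, 50):
    gap = abs(y(t, k_measured) - y(t, k_true))
    print(f"t={t:3d}  gap={gap:.3e}")  # the gap explodes as t grows
```

By t = 50 the mismeasured solution is astronomically far from the true one: exponential divergence needs only an exponential solution, not chaos.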

Note that the roulette wheel exhibits what is typically considered random behavior (as do the dice mentioned throughout), but neither system is chaotic in any way.

I agree that a simple exponential exhibits sensitivity to initial conditions but no chaos: the divergence is everywhere of the same degree and not particularly difficult to quantify.

But a roulette wheel? I think that counts as chaotic. Or if not then what do you think distinguishes it from, say, a chaotically moving double pendulum?

The roulette wheel follows a predictable pattern; in a double pendulum, the second joint doesn’t have a predictable pattern.

http://video.mit.edu/watch/double-pendulum-6392/

Agree with Schlafly, we should broadly stick to determinism, QM, and anything else that might directly relate.

One thing I always like wondering about is what the discoverer of a monumental theorem or experimental result actually thinks the philosophical repercussions are. Oddly enough, many of the people who cite John Bell’s results in refutation of realism or determinism fail to note that he never denied either himself, and in fact defended the realist (and deterministic) pilot wave theory for his entire life.

An excerpt from an article (about the same fluid-dynamics experiments and connections to Bohmian mechanics that I posted in a previous comment) that just came out on Phys.org today sums it up nicely:

John Bell, the Irish physicist whose famous theorem is often mistakenly taken to repudiate all “hidden-variable” accounts of quantum mechanics, was, in fact, himself a proponent of pilot-wave theory. “It is a great mystery to me that it was so soundly ignored,” he said.

Read more at: http://phys.org/news/2014-09-fluid-mechanics-alternative-quantum-orthodoxy.html#jCp

For those who doubt that he is a determinist, Jerry Coyne posts this today:

So yes, whether the laws of physics are deterministic is very important to how some scholars look at the world. And his physics is about a century out of date.

What’s predictable about the roulette wheel? The ball bounces around rather haphazardly.

@labnut Am I not a result of my past actions/habits/thoughts? Is a person’s character totally random? Can a person not shape his character/habits? Can a person not *deterministically* optimize his well-being?

Maximus: “Tell me, how does a function defined on an uncountably infinite set perfectly describe a discrete system (that is, a system with a finite amount of elements)?”

Dominik Miketa: “Easy: emergence and induction. … We have QFT and the Standard Model, QM, GR, SR, fluid dynamics etc. All of these have followed mathematical laws that have been in many cases later derived from the equations governed by their more fundamental constituents. … Call it an inductive argument if you will.”

Maximus’ question is the ‘key’ issue for the final reality. The answer for that question is the ‘key’ for the creation.

Dominik Miketa, your reply is just talking. Why satisfy yourself and mislead others with such a meaningless answer? It is not time to discuss this issue here as yet. But I can give a hint here.

There are two infinities (countable and uncountable) and both of them must be ‘represented’ as ‘concrete’ objects in the ‘physical’ universe. With these two ‘representations’, Maximus will be answered. And,

Nature physics = Nature math

If the author is wrong about chaos theory, as you say, then Edward Lorenz is wrong about chaos theory in “The Essence of Chaos”, because the way Marko is describing it is just what Lorenz says.

In general, Lorenz applies chaos to any dynamical system which is sensitively dependent upon initial conditions.

And the growth of the error is anything but linear.

What do you find extraordinary about randomness? Is it just an intuition? My objection to determinism is on the basis of the miraculous information compression that it would involve.

Incidentally if Newtonian physics were the case it would be, in a sense, random. So would the EQM. In any system described by a smooth function there would be no such thing as the exact state of a system at a given time.

I would point out that for the last one hundred years probability has been used to describe microscopic systems, so you can hardly say that determinism is in keeping with empirical observation.

Any physical system will appear magic if you consider it in particular ways. As I point out elsewhere the EQM would still be random in a sense, as would Newtonian physics if it were the case.

But strangely no one appears to be addressing the reason I find determinism extraordinary. As I say, if determinism is true then my choice for breakfast this morning was somehow encoded into the state of the universe when the first slightly imperfect replicators were appearing on Earth, or when the nebulae were forming, or at the Big Bang.

If you say that you think that my choice for breakfast was encoded in the state of the universe when the first nebulae were forming, how exactly do you think this was encoded?

How was the Brandenburg Concerto or the shape of a grasshopper encoded in the Big Bang?

As I say, a miraculous compression ratio.

I am talking about the beliefs that most people have about their volitional processes. If you consider that fits your definition of compatibilist free will then most people are probably compatibilist, although this would have to be tested with some sort of research (and not the kind of sloppy research that people like Coyne depend on for this).

If you are saying that a kind of free will, which very few people believe in, does not exist then – what is the point of even saying it?

What Coyne, Dennett and Harris, among others, are claiming is that the beliefs that the majority have about their own volitional processes are wrong. I question that. They show no evidence and, unlike the scientismists, I need evidence.

Incidentally, even for libertarian free will, can you define it reasonably precisely and show explicitly why it is ruled out by physics?

Carroll called for dropping Popperian falsificationism. This is only a “relaxation” if Popperian falsificationism is the correct standard for scientific method. I don’t think it ever was. For one thing, it a priori forbids any historical science (which seems to me to be what it was formulated to accomplish.) Cosmology being an historical science, any real progress means ignoring Popperian falsificationism. It seems to me candor about this is a good thing. Being an historical science, cosmology is concerned with investigating the real history of the universe. In this pursuit, general relativity’s incompatibility with quantum mechanics is not just a foundational issue that can be dismissed with Copenhagen’s antirealism. That’s why your objection that the Everettian cosmologists are just a “small cohort” is irrelevant.

I suppose you can simply rephrase the question: is there a deterministic model of reality (one that “saves the phenomena”) ? If the universe cannot be described mathematically then it is not deterministic either.

The only context in QM where I’ve heard nonlocality being discussed in connection with the Cauchy problem is the dBB interpretation: the equations of motion there are explicitly nonlocal, and consequently the initial-value problem is ill-defined. Most other interpretations of QM prefer to preserve locality of the effective equations of motion, at the expense of giving up realism, factual definiteness, or something similar.

Now, most of the people who do research on dBB usually consider only the nonrelativistic version of the theory, which is nonlocal in space but local in time, and thus actually deterministic (in time evolution). So they avoid facing the seriousness of the Cauchy problem, but they work with an experimentally wrong theory (any nonrelativistic theory is in contradiction with the Michelson-Morley experiment, for example). Any attempt to make dBB fully relativistic will mix space with time, inducing time nonlocality from space nonlocality, and thus defeating deterministic time evolution.

The fact that the Cauchy problem has no solution in the relativistic dBB case is probably one of the reasons why it is not a very popular topic to discuss or publish papers on. Banging your head against a brick wall is not a good strategy for earning a salary… 😉

So I’m not surprised that the topic of the Cauchy problem is hard to find in QM literature. Nevertheless, most physicists and mathematicians doing research on QM are always acutely aware of the Cauchy issues, and if anyone dares to forget about them, there will always be someone in the audience, or a referee, or a mentor, or a colleague, to remind them. 🙂

As a final remark, note that specifying both the present state and all past states of the system is still not enough data to uniquely solve a nonlocal equation (generically, at least — there could be some special nonlocal equations where it is enough, but these are not relevant for physics).

Ha! Sorry, I really didn’t see that! But is that chaotic?

I’ve been wondering about the differences in ‘behavior’ between random, indeterminate, and now chaotic too

Yes, into 37 (European) or 38 fixed slots. That doesn’t sound like chaos to me any more than 36 possible combinations on a pair of dice. Otherwise, how would one set true odds?

The same way all the infinitely fractal intricacy of the Mandelbrot set is encoded in z_(n+1)=(z_n)^2 + c. A lot of surprising and complex stuff can come from applying a pretty simple set of rules.

Or, the same way all works of English literature are encoded in the phrase “all sequences of characters less than 10 million characters long”. When nothing is ruled out, then specifying the whole set is trivial. If the universe is infinite (as I suspect) or if MWI is true (as I also suspect), then everything that can happen does happen.
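The Mandelbrot claim can be made concrete in a few lines: the entire set is "encoded" in the one-line iteration rule plus an escape test. A minimal sketch (the function name and cutoffs are my own choices for illustration):

```python
# Membership test for the Mandelbrot set: iterate z -> z^2 + c from z = 0
# and check whether the orbit escapes to infinity. All of the set's
# fractal intricacy is generated by this one-line rule.
def in_mandelbrot(c, max_iter=100, escape_radius=2.0):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > escape_radius:
            return False   # orbit escaped: c is outside the set
    return True            # orbit stayed bounded (up to max_iter)

print(in_mandelbrot(0j))      # the origin is in the set
print(in_mandelbrot(1 + 0j))  # c = 1 escapes after a few steps
```

A description of the set that is vastly shorter than any picture of it, which is exactly the compression being discussed.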

If you’re siding with Dennett then you’re talking about compatibilism.

Plenty of people (mostly religious) believe in libertarian free will. Defining it is not so easy, because it is in my view incoherent, but it has something to do with the idea that there is something about human decision-making which is not reducible to mathematical laws. Compatibilist free will, the kind you’re talking about, works even on determinism, whereas libertarian free will demands not only that determinism is false, but that some events are neither deterministic nor random but something else. As I say, it’s incoherent.

You might want to ask a libertarian such as our very own Labnut for clarification. I’m sure he would take issue with some of my characterisation of it.

Marko’s argument does not rely on an imagined possibility of describing the universe perfectly by mathematics, but on the properties of existing mathematical theories. So no, I cannot rephrase the question in that way. It may be that the universe is describable perfectly well by mathematics, but by mathematics for which Marko’s arguments do not hold.

In any case I have several complaints about the mathematical arguments, even if the current mathematical theories are applicable. See my reply to Dominik above.

Hi Thomas,

I don’t think the fact that it ends up in one of a discrete number of states has much bearing on whether the physical system of a bouncing ball and a spinning wheel is chaotic, according to the mathematical definition of chaos.

I guess. I’ve always found the idea of true randomness to be troubling, perhaps because, by definition, there is no way to choose truly randomly with an algorithm, so it’s hard for me to accept that nature can do what appears to me to be impossible. I could be wrong.

What do you find miraculous about the compression ratio? Is it just an intuition?

Personally, I don’t think there’s anything miraculous about the compression ratio. If you set any deterministic system going, then its entire future is mapped out from its initial state. This kind of stuff is trivially possible in mathematics and there’s no reason apart from your intuitive disbelief (and perhaps QM) to doubt it is true of this universe. You can see this in cellular automata such as Conway’s Game of Life, or even in the procedural generation of content for video games.

For instance, there’s a game in development called No Man’s Sky which deterministically generates millions of star systems with planets, unique flora and fauna, cave systems etc, which you can explore. The whole thing is generated from a seed taken from one of the game developer’s phone numbers. Quite a compression ratio!
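The principle behind that kind of procedural generation is easy to sketch. The following is a toy illustration, not the game's actual algorithm; every name and range here is invented. The point is only that a deterministic generator reproducibly expands one small seed into a large structured "world."

```python
import random

# A whole "galaxy" of star systems is reproducibly expanded from one
# small integer seed. This is a toy sketch of the principle, not the
# game's actual algorithm; all names and ranges here are invented.
def generate_galaxy(seed, n_systems=3):
    rng = random.Random(seed)   # deterministic PRNG: same seed, same galaxy
    galaxy = []
    for _ in range(n_systems):
        galaxy.append({
            "star_class": rng.choice("OBAFGKM"),
            "planet_masses": [round(rng.uniform(0.1, 10.0), 2)
                              for _ in range(rng.randint(1, 8))],
        })
    return galaxy

# The same seed always regenerates exactly the same content:
print(generate_galaxy(5551234) == generate_galaxy(5551234))  # True
```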

Hi DM,

First of all, I must say that I am very flattered by the praise of the people regarding the quality of the article. I really didn’t expect that. Thanks! 🙂

As for MWI — as I said in one of the early comments, I went out of my way to write the arguments of the article such that they are independent of any interpretation of QM, including MWI. I did this precisely to avoid the kind of discussion that others have started. I am fairly familiar with MWI (and also fairly critical of it), but one thing that amazes me is the level of hype, misunderstandings and half-truths that are being buzzed around the web regarding it. For example, nobody seems to care about the fact that the whole MWI rests on a successful solution of the pointer basis problem, while this solution is nowhere to be found. The half-knowledgeable proponents of MWI are simply not aware that this problem even exists. The truly knowledgeable proponents of MWI (like Sean Carroll) acknowledge that the pointer basis problem exists, but avoid discussing it in public and for a non-expert audience (I can only guess why they do so).

So I just wanted to state the argument in a QM-independent way, precisely to avoid the subsequent discussion degenerating into half-educated bashing one or the other interpretation. Apparently the discussion degenerated anyway, despite my efforts to keep the article “clean of interpretation choices”. 😦

Regarding other definitions of determinism — you are welcome to adopt another definition and discuss it. My interest rests with the definition I used, and the reason for it is its applicability to free will, emergence, religion, etc.

As for free will, God, etc. — I didn’t actually state any arguments, but just hinted that this is something that can be argued for in the absence of determinism (as I defined it). I did not participate in subsequent discussions on those topics, because I am yet to state my terms and arguments. Hopefully in another article.

As for your main question,

“You mean, for example, that Nick Bostrom’s simulation argument is a non-starter? That it is logically impossible that the world we see could be the product of a computer simulation?”

In short, yes. Note, however, that this is not a logical impossibility, but rather an experimentally falsified possibility.

“all a computer has to do is pick one future that is consistent with the differential equations, and it can do so pseudorandomly”

The word “pseudorandomly” is at the crux of the matter here. First of all, a word of warning — people often throw around statements about randomness very carelessly. The concept of randomness is very tricky to define (ask any mathematician), and should not be used lightly.

That said, the random events in QM are not pseudorandom. Pseudorandom numbers follow some algorithm, which could be represented as another law of physics (one which we just didn’t discover yet). This apparent randomness could thus be encoded in the deterministic evolution of the so-called “hidden variables” according to this unknown law of physics. The argument of the article allows for the existence of hidden variables (see the definition of the “state”), and their existence does not offer a way out back into determinism. Btw, this is not just my statement — hidden variables are something that has been discussed inside-out since Bohr and Einstein, and there is a copious amount of literature on them.

Long story short, a pseudorandom generator will not do — you need a proper, “intrinsically random” generator, something that cannot be a product of an algorithm (an uncomputable function, in math-speak). So computationalism is, as you put it in the question, a “non-starter”.

“Chaos and Cauchy only show that there are multiple futures consistent with the approximate laws and approximate state of the universe we have managed to discover — it doesn’t show that there are not hidden deterministic processes going on which determine which state is realised.”

I would agree with this if you consider chaos and Cauchy alone, but the ability to account for any possible hidden variables is precisely the power of the Bell and Heisenberg inequalities. That was the whole point of all the groundbreaking work done by Heisenberg, Bohr, Einstein(-Podolsky-Rosen), Bell, Aspect, and numerous others.

Finally, the difference between epistemic and ontological limitations that you and others have raised is something I really failed to address in the article, and now I feel sorry about that. But shortly put — I am not really interested in ontology that cannot be grasped epistemically. That is really the domain of religion and philosophy, but not a domain of science and (I dare say) philosophy of physics. If you claim that the world is deterministic on a level that is unknowable, untestable and unobservable, that is something no one can argue against. But it is also not a very useful concept if you are trying to make an argument for a clockwork universe, or for free will, or other things that can (arguably) have observable consequences in the world. So in my view, ontology without support from epistemology can exist, but is useless for constructive discussions. Of course, you are welcome to disagree with this — I recognize that it is a matter of personal choice. 🙂

I don’t think so, Mark. Roulette and dice (craps) are not chaotic. You would have to randomize the number of slots and colors (sometimes blue or green, sometimes 50 slots, sometimes 5, etc.) into which the ball would fall on each spin, or randomly change the number of sides (or their values) on the dice on each throw, to achieve chaos. Further, there would be no method for the house to determine odds. Roulette and dice are not chaotic.

Just a short comment — the fact that Bell’s theorem implies what it implies has nothing whatsoever to do with what Bell believed in, let alone what others knew about his beliefs. 🙂

That said, in the physics community it is usually well known that Bell’s motivation for deriving his results was precisely to disprove quantum mechanics, loosely speaking. It was an ironic twist that the experimental results of Aspect (and others after him) actually used Bell’s own theorem to prove his motivation wrong. 🙂

Mark,

“Consider the simple ordinary differential equation”

Chaos theory needs at least three phase-space degrees of freedom, so an ordinary differential equation in a single variable will never display chaotic behavior. That said, in the physical world there are obviously more than three degrees of freedom, so chaos theory results are important.

“The mistake here is that the author is confusing numerically stable algorithms for approximating differential equations with the solution to the differential equations themselves.”

No, I am not confusing approximation with an exact result. In the article I am always talking about exact solutions of differential equations, never about approximations and numerical algorithms.

“In a numerically stable algorithm for a DE as time progresses the error grows linearly if the simulation is started with the correct initial conditions, but if the initial conditions themselves are specified incorrectly, then the solution to the incorrect problem can diverge from the true solution at an exponential rate.”

The statement of chaos theory can simply be phrased as follows. Pick an initial condition and find the unique exact solution to the DEs. Then pick another, nearby initial condition and find the other unique exact solution. Compare the two solutions away from the initial point — the solutions will be very different, much more so than the initial conditions were.

My point here is that the above statement of chaos theory has nothing whatsoever to do with any numerical algorithms or approximations used or otherwise. It is a statement about the properties of exact solutions of DEs. The point of the article is that the “correctness” of the initial condition doesn’t really exist, and thus one can never specify a unique solution. No numerical algorithms or approximations ever entered the argument.
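Marko's two-nearby-solutions formulation can be illustrated with the logistic map x → 4x(1−x), a standard chaotic system. This is only a sketch: floating-point iteration is itself approximate, but the divergence it displays is a property of the exact map, which is the point being made above.

```python
# Sensitive dependence in the logistic map x -> 4x(1-x), a standard
# chaotic system. Two initial conditions differing by 1e-10 are iterated
# in parallel; within a few dozen steps the trajectories reach an
# order-one separation, vastly larger than the initial gap.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10   # two nearby initial conditions
max_gap = 0.0
for step in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # grows from 1e-10 to order one
```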

A perfect example of the triviality of the argument as prosecuted by Coyne.

I have yet to check, but I doubt that he specifies exactly how long ago (when the Earth was cooling? when the first nebulae formed? at the Big Bang?) that the laws of physics determined that he would murder his girlfriend, or give the reasoning/calculation that he is basing this on.

I misspoke, I should say the growth of the deviation, rather than the error.

Robin –

Your assertion in the first sentence (and I assume you’re obviously referring to QM itself) needs far more analysis than you’ve given it. Yet again, the Schrödinger equation evolves completely, totally deterministically. Please read this article for more details: http://plato.stanford.edu/entries/determinism-causal/#QuaMec

And just to touch on it again, I think I can see pretty clearly how a causally deterministic system evolves. I think virtually everyone else can. The onus is on you to make me understand where this fundamental, mysterious randomness comes from. What inherently drives this ontological chaos (not in the chaos-theory sense, but in the full-blown no-order, no-structure, no-law sense)? What kind of void of crazy does it emerge from?

Yes, I think that things that happen in the universe were going to happen because the laws of physics and the initial conditions only went down one way. It might be hard to accept, but I’ve gotten over it a long time ago.

Marko –

I understand that. I don’t think I said anything even remotely linked to Bell’s thinking somehow influencing the mathematics of the theorem; that would be a pretty stunning claim.

What I’m saying is, one of the most brilliant theoretical physicists this world has ever known obviously analyzed his own result, and he didn’t at all take it as implying indeterminism (irrespective of his motivation to prove QM wrong). He was always committed to realism and determinism (as I am). I think I’ll take his vantage point over anyone else’s any day of the week.

Hi DM,

A fractal is not complex, it is merely intricate. And a fractal is merely a repetition.

So you are saying that there is an algorithm which will reliably generate the Commedia, for example, from that phrase?

For deterministic systems, everything that is not explicitly ruled in is ruled out.

There are plenty of mathematical patterns which, even given an infinite amount of time, will not generate so much as “Mary had a little Lamb”.

Specifying “everything” is far from trivial; of all possible evolving mathematical patterns there are vanishingly few that will give you “everything” from a computational perspective.

Really, the only way to get more out than you put in is to have local randomness. With local randomness, everything is not just a function of the initial conditions.

Hi DM,

All from a phone number you say?

Oh, and a program that can use it as a seed to generate the required distribution of numbers, and a program to exploit that distribution to generate the star systems etc., and the machine that could run that program, and the intelligence to design such a machine and to understand the mathematics (which took only about, say, 4 billion years to evolve), and the process whereby a physical environment where that is even possible came about.

No, not such an impressive compression ratio.

Nevertheless I take your point that true randomness is not necessarily required in order to generate the diversity, and that a process that can generate information with the correct distribution will do.

Still, not I would say, a trivial starting point.

Unless you think there could have been an algorithm and only an algorithm (and presumably something to run it on) at the dawn of time I don’t think it is particularly relevant that it is something an algorithm cannot do.

In one case there is exactly one next state. In the other there is more than one next state and no fact of the matter as to which it would be. I can’t see the impossibility in the latter.

DM,

First, labels are misleading things, often smuggling in all sorts of unwanted meanings. So I am not going to use labels and I advise you not to apply a label to my beliefs, for the same reason.

Second, throwing around the accusation ‘incoherent’ is seldom a useful thing to do. When we use that term we usually mean something along the lines of: the argument clearly lacks a rational basis or contains obvious contradictions. But if that is what you mean, it is better to say so and point out why you think it is so. The way you use the term, it indicates your attitude (pejoratively), but, to be plain about it, your attitude is not interesting.

My belief in free will is this. Firstly, I am free to direct my thoughts anywhere over the domain of my knowledge, at any time I choose and in any way I choose. Secondly, I am free to envisage any future I choose, plausible, rational, irrational, unlikely or not. Thirdly, I am free to construct new thoughts, imaginatively and creatively. In other words my mind is free to roam over the past, the present, the imagined future, the possible and the creative. That is true free will. Any resultant choices I make will be heavily constrained by practical considerations(obviously).

That does not mean my mind is totally free. It is constrained by the limits of my knowledge, my intelligence, my emotions and the powers of my imagination. From time to time it is further constrained by the glass of red wine I consumed. So I am talking about the freedom of my mind to roam over an internal terrain, within certain practical limits. This does not mean I lack free will, only that it acts within certain boundaries.

Coyne maintains that the contents of my thoughts are rigidly determined by the laws of physics. To that I reply:

1) I knowingly exercise the freedom to direct my thoughts where I choose.

2) What laws of physics determine the contents of my thoughts?

3) Why on earth would evolution go to such great lengths to endow me with the convincing illusion that I possess free will?

4) Why on earth would evolution go to even greater and very costly lengths to create consciousness if that consciousness cannot be used to exercise free will?

Coyne cannot show that (1) is false.

Coyne cannot answer questions (2), (3) and (4).

So Coyne would seem to have a very weak case.

Additionally, Robin makes a very strong argument about the impossibility of all present and future knowledge being compressed into the distant past, only to emerge now. I think Robin’s argument clinches the case for free will. Your reply to Robin missed the point quite disastrously. If you want to take this up, I will explain it to you.

I am quite happy to accept your assertion that you lack free will. It would go a long way to explain some of the conversation. But I know I exercise free will and I am very sorry that you so evidently lack this ability. I really don’t know what we can do to restore your free will. I will speak to my sangoma and ask him for advice.

I look forward to your automated, robotic reply.

DM,

“Since there is no difficulty in producing apparent randomness from underlying determinism, and since there is no way to tell the difference, the possibility always remains open that the universe is deterministic.”

There are tests to certify randomness, and evidence that quantum hardware makes a difference.

http://www.nature.com/srep/2013/130409/srep01627/full/srep01627.html

http://www.technologyreview.com/view/418445/first-evidence-that-quantum-processes-generate-truly-random-numbers/

As a practical matter, if you were writing software that required bytes of true (vs. pseudo) randomness, you could make a call to HotBits*, or, better, install your own hardware.

* https://www.fourmilab.ch/cgi-bin/Hotbits?nbytes=128&fmt=c
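For a flavor of what such tests look like, here is the simplest one, a "monobit" frequency check. This is a sketch of my own, not the certification scheme in the linked papers, and it illustrates the key caveat: a statistical test can reject a grossly biased source, but passing it cannot by itself certify intrinsic randomness, since a seeded pseudorandom generator passes just as easily.

```python
import random

# A "monobit" frequency check: count the ones in a bitstream and flag
# gross bias. Passing this test does NOT certify intrinsic randomness:
# a seeded (deterministic) PRNG passes just as easily as quantum
# hardware would. Real certification schemes are far more involved.
def monobit_ok(bits, tolerance=3.0):
    n = len(bits)
    ones = sum(bits)
    # Under the hypothesis of fair bits, ones ~ N(n/2, n/4); flag
    # anything more than `tolerance` standard deviations from n/2.
    return abs(ones - n / 2) <= tolerance * (n / 4) ** 0.5

rng = random.Random(42)
pseudo = [rng.randint(0, 1) for _ in range(10_000)]
print(monobit_ok(pseudo))        # a deterministic PRNG almost certainly passes
print(monobit_ok([1] * 10_000))  # an all-ones stream fails
```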

Schlafly,

“This discussion has strayed from the original claim: that 20th century physics shows that determinism in our universe is false.”

Thank you for noticing! 🙂 Unfortunately, it seems that there are very few comments discussing the topic of the article itself. Maybe I shouldn’t have mentioned free will etc. to begin with; we would probably have read more comments on the topic of the article.

Admired the essay! But…

Marko: “we must accept that the laws of Nature are intrinsically nondeterministic.”

(Another “This statement is not true”?)

“Intrinsically” is a revealing qualifier. Implying *not extrinsically*? Would it not be better to assume that the Laws of Nature are both non-Deterministic AND Deterministic (at different levels), with (at least) some/many Laws remaining of which we are in complete ignorance?

Scientifically, (all) Laws of Nature are intrinsically Non-Determined. Yet many Laws that we use confidently (operating at the level of our sensory perception, say), the ones that usually affect human lives and behaviour, we can safely regard as Determined. Although sometimes, even if an unbroken ’cause>>effect’ seems to be operating, we cannot ‘definitely’ forecast effects-to-come in the Future, at best only hypothesising/evaluating these as being very probable/possible/highly uncertain.

Has any bridge failed without a ’cause’ being found explaining the ‘effect’ after the event? Is not the speed of light a Determinable ‘constant’?

Interestingly, if “both nondeterministic AND deterministic(at different levels)”, at which level(s) for FreeWill -if any?

Hi Irqvy, while it is certainly true that neuroscientists/biologists are largely basing their opinion on the immediate mechanisms (biological processes), some really do take that determinism to go all the way down to the underlying physics. Check out the posts I and another commenter make later in the thread on Jerry Coyne’s latest foray into free will. There he explicitly links it to the physics.


Hi Marko,

It was a great article, you deserve the praise.

As a quick aside, I don’t know that much quantum mechanics really. More than most, but that’s not saying much. I don’t understand the pointer basis problem (in fact the most illuminating thing I could find on it in Google is in a comment you made on Sean Carroll’s blog). I find the MWI to be plausible because, for reasons I won’t get into here, I believe in Tegmark’s level IV multiverse (the Mathematical Universe Hypothesis). This entails something equivalent to the MWI even if other interpretations turn out to be true. There would always be a universe which differs from this one only in that certain quantum measurements turn out differently. I know that issues such as the measure problem arise for all these multiverse scenarios, but I don’t think these are sufficient reason to doubt them.

The reason the MWI is of particular relevance to your article is that it would defeat any attempt to apply non-determinism to free will, emergence, religion etc. If the universe is only apparently non-deterministic to observers because everything happens somewhere, there is no room for God or libertarian free will to intervene.

So we assume. I maintain it is impossible to know.

That is so only if we assume locality. Computers are not bound by locality. They can jump around computing things here and there as they like. They’re not even bound by time. If they find that they have computed something which is inconsistent, they can rewind the simulation and make another choice. Much of the rest of your response seems to rest on this mistake.

If a universe-simulation were such a non-starter, we would be unable to make even statistical predictions about quantum mechanics. If you can make predictions, you can make a simulation which corresponds to those predictions. Both involve understanding the phenomena well enough to have a step-by-step procedure (an algorithm) to calculate what will happen. If you can, with a pencil and paper, work out a story of two remote observers making measurements so as to conform to and illustrate Bell’s theorem, you are effectively producing a simulation and a computer could do the same.
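To illustrate this point about statistical predictions being simulatable, here is a minimal Monte Carlo sketch (in Python, my own hypothetical example, not anything from the article). It samples measurement outcomes for a pair of spins in the singlet state directly from the quantum joint probabilities, then estimates the CHSH combination at the angles that maximize the quantum violation. The step-by-step sampling procedure is exactly the kind of algorithm described above:

```python
import math
import random

def sample_singlet(theta_a, theta_b, rng):
    """Sample one pair of outcomes (+1/-1) for two spins in the singlet
    state, measured along angles theta_a and theta_b.  The joint
    probabilities follow the quantum prediction:
        P(same outcome) = sin^2((a-b)/2),  P(opposite) = cos^2((a-b)/2)
    which gives the correlation E(a,b) = -cos(a-b)."""
    delta = theta_a - theta_b
    p_same = math.sin(delta / 2) ** 2
    a = 1 if rng.random() < 0.5 else -1     # each side alone is a fair coin
    b = a if rng.random() < p_same else -a  # correlate B with A per the quantum statistics
    return a, b

def correlation(theta_a, theta_b, n=200_000, seed=1):
    """Monte Carlo estimate of E(theta_a, theta_b)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        a, b = sample_singlet(theta_a, theta_b, rng)
        total += a * b
    return total / n

# CHSH: S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), at the optimal angles.
a0, a1 = 0.0, math.pi / 2
b0, b1 = math.pi / 4, -math.pi / 4
S = (correlation(a0, b0) + correlation(a0, b1)
     + correlation(a1, b0) - correlation(a1, b1))
print(abs(S))  # close to 2*sqrt(2) ~ 2.83, beyond the classical bound of 2
```

Note that the sampler is explicitly nonlocal (B’s outcome is computed using A’s), which is fine for a simulation run on a single computer; it reproduces the statistics without claiming to be a local hidden-variable model.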

I am making no such claim. I only disagree that your argument rules that out.

But epistemic non-determinism is surely uncontroversially the case, so if that’s all your argument proves then it’s not really making a terribly interesting point. As long as the possibility remains open that the world is fundamentally deterministic, then it seems to me we’re back where we started.


Marko, it’s a common occurrence, not your fault! But that’s okay, people show where their interests lie, I suppose!


DM,

“The reason the MWI is of particular relevance to your article is that it would defeat any attempt to apply non-determinism to free will, emergence, religion etc.”

Ah, so you are clearly admitting your ideological motivation. Well, we knew it all along but it is nice to see it out in the open.


I don’t agree with this interpretation of what I said.
