The formal Darwinism project

[From time to time the Editors at Scientia Salon select an interesting paper from the primary scientific or philosophical literature to highlight for a broader public. These posts simply include the abstract of the paper and a few choice quotations, occasionally accompanied by brief editorial comments. The idea is to develop an appreciation for what front line scholarship looks like, and hopefully why it matters.]

Our new pick for a “notable” paper is “The formal darwinism project in outline” by Alan Grafen, published in Biology and Philosophy on 25 January 2014. (Unfortunately, the full paper is behind a paywall, but you can request a copy from the author here.)

Here is the abstract:

The formal darwinism project aims to provide a mathematical framework within which important fundamental ideas in large parts of biology can be articulated, including Darwin’s central argument in The Origin (that mechanical processes of inheritance and reproduction can give rise to the appearance of design), modern extensions of evolutionary theory including ESS theory and inclusive fitness, and Dawkins’ synthesis of them into a single structure. A new kind of argument is required to link equations of motion on the one hand to optimisation programs on the other, and a major point is that the biologist’s concept of fitness maximisation is not represented by concepts from dynamical systems such as Lyapunov functions and gradient functions. The progress of the project so far is reviewed, though with only a brief glance at the rather complicated mathematics itself, and the centrality of fitness maximisation ideas to many areas of biology is emphasised.

Basically, what Grafen has been trying to do — for years now — is to merge population genetics (the mathematical theory that describes changes in gene frequencies in natural populations) with optimization theory (the mathematical theory that describes adaptation). The same issue of Biology and Philosophy includes a number of commentaries on the main paper.
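
To make the contrast concrete, here is a minimal sketch (in Python, and not taken from Grafen’s paper) of the two descriptions the project tries to connect, in the simplest possible case of one locus and two alleles in a haploid population. The select function and the fitness values 1.10 and 1.00 are purely illustrative assumptions: the recursion is the population-genetic “equation of motion,” while the rising mean fitness is the optimisation-style reading of the same process.

    def select(p, wA=1.10, wa=1.00):
        """One generation of haploid selection on allele A (frequency p)."""
        w_bar = p * wA + (1 - p) * wa      # mean fitness of the population
        return p * wA / w_bar, w_bar       # new frequency, mean fitness

    p = 0.01                               # start with A rare
    for generation in range(120):
        p, w_bar = select(p)
    print(round(p, 4), round(w_bar, 4))    # p approaches 1, mean fitness approaches wA

In this toy case the dynamics and the optimisation view agree. Grafen’s point is that with realistic genetic architecture (dominance, epistasis, many loci, frequency dependence) they need not, and the project tries to say precisely when and in what sense they still do.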

Here are some choice bits from the paper:

The goal of the formal darwinism project is to construct a mathematical bridge between two of the many ways of studying natural selection. One approach is population genetics, in which models are constructed that trace the change over time of the frequencies of some defined set of genotypes. … The other approach is based on the expectation of finding good design in nature: this stretches back at least to natural theology in the eighteenth century, and was invigorated and reinforced as a scientific approach by Darwin (1859) and later Fisher (1930).

Today, as molecular biologists choose to call some of their discoveries ‘mechanisms’, and ascribe ‘functions’ to enzymes, they use purposive language and so they also adopt the design approach. It is arguably impossible to undertake work in many areas of biology without doing so: purpose in explanations has great power, and attempts to do without it in ethology (for example, Kennedy 1992, reviews his earlier campaign in ethology as well as bringing in further subjects), have long ago been abandoned as unworkable.

A fly in this ointment is that there are serious reasons to doubt that fitness is in fact maximised. The central assumption of the approach has been known to be untrue in general for decades, and it is here that the other of the two approaches to studying natural selection becomes relevant.

In 1984, I coined the term ‘Phenotypic Gambit’ for the research strategy of studying organisms in ignorance of the actual genetic architecture of the trait in question … The Phenotypic Gambit articulates the assumption that is usually made implicitly in this work, and the formal darwinism project aims to understand better why and how the gambit works when it does, and also to identify and understand those cases in which the gambit fails.

Population genetic models (Ewens 2004) are examples of dynamical systems, either differential or difference equations, and are usually highly specific about the genetic architecture. … To represent the design approach, it is natural to take optimisation programs, as used in microeconomics, operations research and game theory, as the formal structure. The project then became about constructing formal links between the mathematics of motion and the mathematics of optimisation.

Because population genetics is fundamental, the first task is to construct the optimisation program from the contents of the population genetic assumptions.

One major purpose of the formal darwinism project is to assist in showing that the exact genetic architecture may reasonably be ignored in many circumstances when the form of a trait is the focus of biological study.

If you exclude simple Mendelian traits like coat colour in Pocket Mice, horn type in Soay Sheep and colour polymorphisms in Gouldian Finches and Snow Geese, then we know very little about the molecular genetic basis of most traits. The majority seem to be so polygenic and the effect of each locus so small that identified polymorphisms usually explain a tiny fraction of the genetic variance.

36 thoughts on “The formal Darwinism project”

  1. Grafen says that purposive language and the design approach cannot be abandoned. Interesting. I wonder how Jerry Fodor would feel about this.

    “there are serious reasons to doubt that fitness is in fact maximized.”

    I’m sure the ID crowd will find this a fruitful article to quote-mine.

  2. I don’t know anything about current mathematical models of evolution, but I certainly welcome the attempts to construct them. The ability to explain various properties of plants and animals that we see around us by rigorous mathematical reduction to dynamics of individual genes or DNA molecules would be a triumph for the idea of evolution (if it fits the observational data well) or its tombstone (if it doesn’t fit the observational data). Either way we learn something.

    There are a lot of questions one could ask/compute in such a framework. For example, given the basic genetic makeup of some elementary life form like bacteria from the ancient past, and given its reproduction rate, and given the rate of random mutations of its DNA/RNA molecule during reproduction, and given the rate of survival in some specific environmental conditions, how much time does it take for the bacteria to mutate enough to develop into a multicellular organism which survives longer and reproduces faster in the same environment? The ability to calculate an order of magnitude for such a time would be a great thing, as it could then be compared to the time it actually took for life on Earth to develop and become dominated by multicellular organisms.

    Or as another example, given the genetic makeup of, say, a rabbit, its rate of reproduction, random mutation rate, etc., how many reproduction cycles would it take for a rabbit to mutate enough to develop a third eye on the back of its neck? Naively speaking, a third eye which looks behind one’s back would be an obvious advantage against predators, and would improve the rabbit’s survival rate. How much time would it take for this to happen through random mutations?

    Being able to compute answers to such questions would be awesome, as long as it gives the correct ballpark figure for the answers. On the other hand, if it gives answers that are generally too fast (anyone remember the “Evolution” movie from 2001?) or too slow (since life on Earth had at most a couple of billion years to evolve to its present level of versatility), we would have to abandon the random-mutations-selected-by-environment idea, and start looking elsewhere (intelligent design, panspermia, or otherwise…). So having a computable model would open the door to some great science.

    As I often like to say — quantitative arguments are called “science”, qualitative arguments are called “handwaving”. 😉 It would be a welcome change to move the idea of evolution from the latter to the former. 🙂

  3. “A new kind of argument is required to link equations of motion on the one hand to optimisation programs on the other …”

    I would like to point readers to Chiara Marletto’s “Constructor Theory of Life” (http://arxiv.org/pdf/1407.0681v2.pdf). This paper describes how Constructor Theory (which approaches physics not through “equations of motion”, but through descriptions of what’s possible and impossible) can be used to explain how the appearance of design and function is possible without a “designer”.

    James

  4. Marko,
    Naively speaking, a third eye which looks behind one’s back would be an obvious advantage against predators, and improve rabbit’s survival rate

    There is always a cost; for example, a third eye requires a larger brain, and this might be why we don’t see ‘obvious’ solutions. The model must factor in the costs/unintended consequences and this is the really hard part.

    since life on Earth had at most a couple billions of years to evolve to its present level of versatility

    Maybe less than half that time if the snowball Earth hypothesis is true (http://bit.ly/1fdjAVw), though some maintain it accelerated evolution.

    Ross L Stein has a radical alternative view based on the process philosophy of Alfred Whitehead:
    http://www.hyle.org/journal/issues/10-1/stein.htm.
    He says
    Abstract: Molecular change is central to chemistry and has traditionally been interpreted within a metaphysical framework that places emphasis on things and substance. This paper seeks an alternative view based on process metaphysics. The core doctrines of process thought, which give ontological priority to becoming over being, cohere well with modern chemical thinking and support a view of molecules as dynamic systems whose identities endure through time as patterns of stability. Molecular change is then seen as excursions to new stability patterns. Finally, when molecular change is viewed as foundational to emergent complexity, process metaphysics allows evolution to be seen as creative molecular advance.

  5. By the same labnut-mooted fellow, a few years earlier, which might throw light on where his later ‘intellectual contribution to the philosophy of chemistry’ came from:

    Theodicy for a World in Process: God and the Existence of Evil in an Evolving Universe
    Author: Ross L. Stein
    Quodlibet Journal: Volume 2 Number 4, Fall 2000
    ISSN: 1526-6575

    … including, for example, the following deep philosophical scheme, which I cannot reproduce in its marvellous 2-dimensional pattern:

    ” ….

    Development of an Evolutionary Theodicy

    Epistemology | Ontology

    Affirmation of the existence of the natural world and humanity through the senses. | Affirmation of the existence of God through experience.

    Science as the systematic account of the natural world. | Theology as the systematic account of our experience of God.

    Theories of the evolutionary origins of the universe and humanity. | Theologies of process thought and liberal Christianity.

    Evolutionary Theology | Evolutionary Theodicy

    …”

    Is the author of the paper summarized here by Massimo also from the coterie of those uncomfortable with present-day biologists’ dominant evolutionary theory?

    It might also be mentioned that James’ suggested paper comes from the (I think small) group generated by David Deutsch, whose ideas are worth very serious study by aspiring philosophers IMHO!

  6. I’m a big fan of attempts to mathematize biology. Ever since Wigner’s article on the unreasonable effectiveness of mathematics in the natural sciences, many have argued about its “ineffectiveness” in areas like biology, psychology, and sociology.

    Great to see more progress on the mathematical front in this field and others that haven’t seen as much penetration until recently. It never ceases to amaze me how universal mathematics truly is.

  7. This is from Grafen’s paper above: “purpose in explanations (in biology) has great power, and attempts to do without it in ethology … have long ago been abandoned as unworkable.”

    Our need for an understandable narrative no doubt underlies the above observation. A “phenotypic gambit” in complex organisms presumably can be evaluated by asking what the possible advantages and disadvantages of observed phenotypic features might be in the adaptive process. However, presumably the earliest life forms were ‘simple’ bacteria with relatively few genes. What would the gambit have been in that situation? What could have driven the urge to survive? Were pure physical forces driving the process?

    Any suggestions?

  8. I strongly doubt that something like this can be successful; evolution is perhaps too large and messy and complex and multi-layered to be “connected” into one mathematical system, except perhaps in the most abstracted and simplifying manner.

    Also, how does this idea sit vis-a-vis the boogeyman of reductionism? Isn’t that just the problem: trying to force one approach onto several different levels of detail and several different levels of emergence when distinct approaches for each level have long proved to be more fruitful?

  9. Hi PH,
    By the same labnut-mooted fellow, a few years earlier, which might throw light on where his later ‘intellectual contribution to the philosophy of chemistry’ came from

    Yes, Ross Stein does certainly have unusual ideas. I should explain the spirit in which I linked his ideas. I don’t know what to make of them; they are challenging, unusual and interesting. For all these reasons I found them worth reading about and I have come away mildly sceptical but better informed. I put them here as a kind of intellectual excursion from the central theme, but you might think of it as a pointless diversion or dead-end. That’s OK. It may help to read this straightforward introduction to process philosophy from IEP, the Internet Encyclopedia of Philosophy.

    A while ago Massimo posted the essay Structural realism and the nature of structure. At its most fundamental level it addresses the question of why there is pattern in existence rather than just randomness. Their answer was that structure is ontologically basic. That is appealing but does not explain time and change. Whitehead argued instead that becoming, change, was ontologically basic and structure is the outcome. From his perspective, time, and therefore change, is most basic and therefore becoming is the nature of reality. Our default, everyday assumption is that objects are real and basic. This is the third primary perspective. To summarise, we may give ontological priority to being, becoming or structure.

    They all address the underlying question – why is existence patterned rather than random? The paper is relevant to this question where it says “The other approach is based on the expectation of finding good design in nature”. It seems to be looking for a mathematical description of how design could emerge. That sounds to me very much like a kind of structural realism.

    Here is another, accessible, paper by Alan Grafen which helps to clarify his ideas – Formalizing Darwinism and inclusive fitness theory. If you cannot read the paper cited by Massimo this is a good substitute.

    He “say[s] that the mathematical framework is designed to represent one central argument of the Origin, namely, that the mechanical processes of inheritance and reproduction can give rise to the appearance of design.”

    If that is possible then I think it is a good argument for structural realism.

  10. “A fly in this ointment is that there are serious reasons to doubt that fitness is in fact maximised. The central assumption of the approach has been known to be untrue in general for decades, and it is here that the other of the two approaches to studying natural selection becomes relevant.”

    The first sentence seems ambiguous. “Maximise” could refer to movement in the direction of a maximum. Or it could refer to the achievement of a maximum (a peak of a mathematical function). On the first interpretation, to doubt that such a process occurs is to doubt adaptation altogether. On the second interpretation, it seems pretty uncontroversial to doubt that a strict maximum is reached. In fact, I’m doubtful whether there is even a precisely definable (overall) fitness function to be maximised. It’s important to distinguish between messy reality and idealised, simplified mathematical models.

    What “approach” is the author referring to here? Apparently an approach “based on the expectation of finding good design in nature”. That approach doesn’t require the achievement of a maximum. Good design doesn’t have to mean maximally good design. The human eye is a pretty good design, despite the blind spot.

    It seems to me the author is mistaking a conceptual question (about the propriety of using “design” language) for a question of fact about maximisation.

  11. P.S. I recognise that Grafen is trying to merge population genetics with optimization theory, not trying to do away with the latter. The passage I quoted above seems something of an aberration, and perhaps shouldn’t be taken too seriously.

  12. Aaron,

    “Grafen says that purposive language and the design approach cannot be abandoned. Interesting. I wonder how Jerry Fodor would feel about this.”

    Grafen is simply stating a fact: that’s how biologists talk and write. There is no implication of actual intelligent design, which means:

    “I’m sure the ID crowd will find this a fruitful article to quote-mine.”

    They are welcome to it. They have amply demonstrated their intellectual dishonesty by now, so I truly don’t give a damn what they say.

    Marko,

    “The ability to explain various properties of plants and animals that we see around us by rigorous mathematical reduction to dynamics of individual genes or DNA molecules would be a triumph for the idea of evolution”

    I don’t think that’s the idea. Indeed, I don’t think that’s possible. Rather, the idea of the formal Darwinism project is to articulate a connection between two so far largely separate branches of evolutionary theory: population genetics, which deals with changes in gene frequency regardless of whether they are adaptive or not; and optimization theory, which deals with adaptation regardless of its genetic basis. I don’t know whether this particular project will succeed or not, but such a bridge is sorely needed to provide a more complete theory of evolution.

    “how much time does it take for the bacteria to mutate enough to develop into a multicellular organism which survives longer and reproduces faster in the same environment?”

    Again, I think this is way beyond the scope of the current project, and I’m skeptical it can be done at all.

    “quantitative arguments are called “science”, qualitative arguments are called “handwaving””

    Hum, no. By that standard Darwin’s original book would be handwaving. A conclusion that would stun the hell out of every biologist or philosopher of science.

    Liam,

    “presumably the earliest life forms were ‘simple’ bacteria with relatively few genes. What would the gambit have been in that situation, what could have driven the urge to survive? Were pure physical forces driving the process?”

    I’m not sure I understand the question, but my hunch of an answer is: the very same process that drives adaptive evolution in general, natural selection. The bacteria that didn’t have the “urge” to survive were simply statistically less likely to do so, which is why they didn’t leave descendants.

    Alex,

    “how does this idea sit vis-a-vis the boogeyman of reductionism? Isn’t that just the problem: trying to force one approach onto several different levels of detail and several different levels of emergence when distinct approaches for each level have long proved to be more fruitful?”

    But don’t we need bridge principles between levels? Do we want to give up the idea of connecting different levels of analysis when it is possible and fruitful?

    richard,

    “On the first interpretation, to doubt that such a process occurs is to doubt adaptation altogether. On the second interpretation, it seems pretty uncontroversial to doubt that a strict maximum is reached.”

    Not exactly. It has been known for some time in population genetic theory that there are plenty of instances (e.g., when there are strong epistatic interactions between loci) where fitness maximization is not possible even in theory: natural selection then becomes a “satisficing” rather than a maximizing process.
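
    To see why, here is a minimal sketch using the textbook two-locus haploid model with recombination; the fitness values, starting frequencies and recombination fraction below are purely illustrative. With this kind of epistasis (AB and ab fit, Ab and aB unfit), mean fitness starts at its maximum and then declines, so the dynamics do not maximise it.

        r = 0.2                                            # recombination fraction
        w = {"AB": 1.0, "Ab": 0.5, "aB": 0.5, "ab": 1.0}   # epistatic fitnesses
        x = {"AB": 0.5, "Ab": 0.0, "aB": 0.0, "ab": 0.5}   # gamete frequencies

        for gen in range(5):
            w_bar = sum(x[g] * w[g] for g in x)            # mean fitness
            print(gen, round(w_bar, 4))                    # 1.0, 0.95, 0.9289, ...
            xs = {g: x[g] * w[g] / w_bar for g in x}       # selection
            D = xs["AB"] * xs["ab"] - xs["Ab"] * xs["aB"]  # linkage disequilibrium
            x = {"AB": xs["AB"] - r * D, "Ab": xs["Ab"] + r * D,
                 "aB": xs["aB"] + r * D, "ab": xs["ab"] - r * D}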

    “Good design doesn’t have to mean maximally good design. The human eye is a pretty good design, despite the blind spot.”

    I don’t think Grafen would disagree with you.

    “It seems to me the author is mistaking a conceptual question (about the propriety of using “design” language) for a question of fact about maximisation.”

    I didn’t get that impression.

  13. Hi Liam,

    Yes I’d like to give your question above (https://scientiasalon.wordpress.com/2015/06/15/the-formal-darwinism-project/comment-page-1/#comment-14829) a response. It seems to me that complexity will have absolutely NO bearing upon the “drive,” as you put it, for positive traits to be selected evolutionarily. Regardless of any phenotypic simplicity, traits which find a niche should do a bit better and thus survive, while those that do not should perish. It’s important for us to distinguish between our own epistemic need for “purpose in explanation” and nature’s perfect lack of such requirements. Evolution presumably doesn’t need to figure anything out at all, but is rather just a process of circumstances which occur.

    As for your question, “Were pure physical forces driving the process?”, this is certainly what I would say, though you may also find my definitions to be a bit tautological in this regard. (Fortunately I at least still find them useful!) I consider “physical” as that which occurs through cause/effect. Here there is foundation, or reason for events to happen as they do. Thus the non-physical contains no such foundation, and so is consistent with our term “magic.” Because I believe that all of reality does happen through a cause/effect dynamic, I presume that purely physical forces drive all processes.

    I should also mention that the “design” we see in evolution should simply be a natural anthropomorphic handicap of ours. The human of course must figure things out, and therefore when we look at what evolution seems to have created, it’s natural for us to see this in terms of something that was “designed.” (Of course our gods also generally display anthropomorphic features.) But when we consider evolution, I do think it wise to avoid such tendencies.

  14. Alex,

    how does this idea sit vis-a-vis the boogeyman of reductionism?

    It doesn’t. For reductionism to be discussable, we need to have an effective theory (high-level) and the structure theory (low-level), so that one can attempt to reduce the former to the latter. But the effective theory must be *constructed* first, which is the topic of the article. Only after that construction is complete can one attempt to reduce it to, say, biochemistry or whatever structure theory one may have.

    Massimo,

    the idea of the formal Darwinism project is to articulate a connection between two so far largely separate branches of evolutionary theory: population genetics […] and optimization theory […]. I don’t know whether this particular project will succeed or not, but such a bridge is sorely needed to provide a more complete theory of evolution.

    Well, that’s what I meant also, but maybe I phrased it in a clumsy way. I fully agree that it is needed for a more complete theory.

    By that standard Darwin’s original book would be handwaving. A conclusion that would stun the hell out of every biologist or philosopher of science.

    I think it was rather obvious that my statement was somewhat of a hyperbole. 🙂 But regardless, I think you would agree that there is a world of difference between a fully quantitative theory and a qualitative-only theory. If you recall my essay on reductionism, there was that example of the Solar neutrino problem — the mismatch by a factor of two between theory and experiment can lead to substantial and fundamental modifications of our understanding of nature. So as long as the theory is merely qualitative, it runs a grave risk of missing some very important insight, and lulls the scientists into thinking they have the description of the phenomenon “under control” with the present theory. This can be extremely misleading, more often than not. There is also the famous historic example of “two small clouds at the clear horizon of physics” at the end of the 19th century, etc.

    One must not get fooled by qualitative-only correctness. See also my reply to Eric, below.

    Philosopher Eric,

    “Were pure physical forces driving the process?”, this is certainly what I would say, though you may also find my definitions to be a bit tautological in this regard.

    I think one should rephrase Liam‘s question more precisely, as: “Were only the known physical forces driving the process?”. This is a bit more well-defined, and until you manage to show with quantitative precision that the answer is “yes”, the only honest scientific answer is “we don’t know”.

  15. Hi Marko,

    If we take Liam’s question as, “Were ONLY THE KNOWN physical forces driving the process?” then the answer is quite clear to me. Of course there is more happening than what we know about! In the end, we’re just idiot humans. But then if we take it as “Were ONLY physical forces driving the process,” as I did, then excluding the possibility of “magic,” this is true by the definitions that I’ve presented. Still I do find this way of thinking quite useful. In fact from here I even term the physicist’s “natural uncertainty” as “unnatural,” or “magic,” if it is taken ontologically. As I think I’ve mentioned before, I am ultimately a determinist. If you are referring to a scientific answer however, then we’re referring to separate planes — yours being epistemic, and mine being ontological.

  16. Philosopher Eric,

    If you want to talk ontology, you better have it backed up by some seriously established epistemology first. Otherwise, I dare to propose the following scenario. We know that at the scale of a human head consciousness appears. There is no reason to deny the possibility of the same phenomenon appearing on larger scales, say of a planet’s biosphere (you would be hard-pressed to argue against this, given that it is present in the brain of every human, and even in some other animals). So ontologically, I could argue that the planet is self-conscious, and it could purposefully modify its own hardware to evolve in a particular direction, as per its intentions (dare I say “design”?). Just like a human can modify their brain hardware into rewiring itself while intentionally learning, say, to play a piano. In such a scenario, assuming that the planet became self-aware at some point in its history, it could very well “drive” the evolution of more recent species toward a particular purpose, “intelligently designing” mammals, humans, dolphins, and other higher-level organisms. So evolution as we know it (random mutations with environmental selection) may be vastly accelerated by the “intelligent design” of the planet’s consciousness. No, you say? Short of having a quantitative model of evolution that can compute the time necessary for a prehistoric single-cell organism to evolve into a human, how can you be sure that the planet had enough time to evolve human species solely by random mutations? Imagine you had a sophisticated model that can predict this, and the result of its computation turns out to be time longer than the current lifetime of the Solar system. What would you conclude, as a rational, scientific person?

    Unless you manage to reduce (epistemologically) all the known effects of evolution to, say, biochemistry laws that we already know about, and by doing that prove that the above self-conscious-planet scenario doesn’t fit the data, that scenario (as ridiculous as it may sound to some people) is an ontologically allowed alternative to normal evolution. You know, some people even try to communicate with the planet’s consciousness (and call it “praying to a God Creator”). In your face, strong atheists! 😀

    On a more serious note, you may want to read the second part of my essay about reductionism (here and here), especially the “anything goes” property of an ontological “theory of everything”.

    My point is that ontology cannot tell you much, unless it is backed up by epistemology. And if your epistemology is shaky and qualitative-only, you’re swimming in deep water without a life vest. Anything goes.

    And don’t even get me started on determinism, or lack thereof. 😉 That’s where the already well established epistemology goes into straight contradiction with your deterministic ontology. 🙂 You can read about that here.

    As I said, lacking a detailed quantitative model of evolution, which fits the data well, the only honest scientific answer to the question which forces govern evolution is “we don’t know”. Ontology or otherwise.

  17. It would certainly seem reasonable to expect that only physical forces were responsible for the origin of life. There would be no reason to expect otherwise since it appears that simple organic molecules are common in the solar system, that there were abundant energy sources available on the early earth, plenty of time for chemical reactions to take place, and many sheltered environments that may have protected fragile molecules from decay. It seems less clear whether this process could be described by a simple mathematical formula.
    http://darwinskidneys.blogspot.com/

  18. Formalization and application of qualitative reasoning (QR) is an active subject of study. It would be interesting to see how it could be applied to biological evolution.
    * Qualitative knowledge representation & reasoning in physical, biological and social sciences.
    * Formalization, axiomatization, and mathematical foundations of qualitative reasoning.
    http://qr15.sift.net/ (QR2015)

  19. The formal Darwinism project: a mid-term report (2007)

    http://onlinelibrary.wiley.com/doi/10.1111/j.1420-9101.2007.01321.x/full

    “The project is pragmatically useful for a number of reasons: here are some of them. Biologists often complain that although fitness is a central concept in biology, and it is agreed that fitness is in simple cases just the number of surviving offspring, it is hard to define more widely: the project proposes that we choose to define fitness in relation to an optimization programme, which raises the next issue. Biologists and mathematicians have for many years agreed to ignore each other on the question of whether selection leads to optimization of fitness. Biologists, while recognizing the existence of cases like sickle cell, are prepared to base whole research programmes on the hypothesis that selection does in substance lead to fitness optimization. Mathematicians rejected this view decades ago, and when they do discuss it today, maintain the line that the optimization view is all too simple, indeed hopelessly naïve. The design-making power of natural selection was the central point of Darwin’s argument, and a formal representation of it would resolve this damaging split. Finally, systematizing the theory of natural selection is bound to turn up all kinds of details that make sense in retrospect, and resolve long-standing issues: some are discussed later in the paper.”

  20. Marko’s points are well taken.

    Some optimistic possible protometabolic scenarios are reviewed a little here: (http://phys.org/news/2015-03-chemists-riddle-life-began-earth.html). However, some very smart physicists say that the forces of nature, as we understand them, would not be able to explain the evolution of life, since life has many features of design (http://arxiv.org/pdf/1407.0681v2.pdf). They propose a Constructor Theory of Information that dispenses with the need for intrinsic design in nature, in order to explain the actual presence of design in life. This (http://www.scientificamerican.com/article/a-meta-law-to-rule-them-all-physicists-devise-a-theory-of-everything/) seems preliminary and beyond me.

  21. Hi Marko,

    Actually the silly scenario that you’ve shown me above (https://scientiasalon.wordpress.com/2015/06/15/the-formal-darwinism-project/comment-page-1/#comment-14842) isn’t something that I dispute ontologically — and I’d even take gods and the supernatural here! Apparently all I can really be sure about is that “I think,” since the rest may be perfectly fabricated.

    Nevertheless I do hope to show you that I have something other than epistemology to reasonably back my displayed ideas regarding ontology. My point was actually true by definition (and therefore tautological, yes). I do find these definitions useful however, and wish Einstein had given them a try (though Coel might once again straighten me out about my hero/legend!). I’ll now begin:

    Ontologically speaking, if “physical” is that which occurs through cause/effect, then here there shall be foundation, or reason for such events to happen just as they do. To fully conceptualize this, imagine a very simple Newtonian world, perhaps of blocks, that held no atoms or anything else tricky/complex. Here if past and future events were not perfectly based upon its simple structure at any given moment, then this would mean that something other than cause/effect had indeed transpired. In fact, physics in our world would be virtually impossible to explore without the overwhelming presence of cause/effect. Nevertheless it has generally been reasoned that “complexity” shouldn’t be sufficient for indeterminism ultimately. Instead observations associated with Heisenberg’s Uncertainty Principle seem to be what places the general physics community in your camp.

    The thing is however, by theorizing such epistemological observations to demonstrate an ultimately indeterminate reality (ontology), we might indeed be fooling ourselves. For example, hasn’t it become relatively accepted in the physics community that there are more dimensions to existence than the four that we can readily measure? Before we claim that our observed uncertainty in very sensitive quantum measurements mandates an ultimately indeterminate existence, shouldn’t we acknowledge the great magnitude of our ignorance? Albert Einstein, at least, was able to display such caution.

    I do not know if Heisenberg’s Uncertainty Principle ultimately demonstrates something which occurs beyond cause/effect, but if it does, here is my point: Through my definitions this can effectively be referred to as ordinary “magic.” In an ontological sense it seems far more likely to me that discrepancies in sensitive quantum measurements do not demonstrate such voids in cause/effect (or standard magic), but rather human ignorance.

    Marko, I do realize that my position makes me “the outsider,” while you have nearly an entire physics community supporting your position. Nevertheless as I mentioned to you weeks ago (https://scientiasalon.wordpress.com/2015/05/14/freedom-regained/comment-page-2/#comment-14285), this stuff has bugged me since college. I do suspect that I’ll never accept ontological indeterminacy (though I still don’t quite know that you do?). I feel quite fortunate to speak with you about this however. Hopefully at some point you’ll also consider some of my most prized ideas!

  22. Warren Ewens’ commentary on the “Outline” was good value, and I found Grafen’s response to the commentaries more useful than the target article.

    In the spirit of some of the comments in the thread:

    http://www.molbiolcell.org/content/25/22/3441.full

    Informal models have always been used in biology to guide thinking and devise experiments. In recent years, formal mathematical models have also been widely introduced. It is sometimes suggested that formal models are inherently superior to informal ones and that biology should develop along the lines of physics or economics by replacing the latter with the former. Here I suggest to the contrary that progress in biology requires a better integration of the formal with the informal.

  23. Hi Marko,

    We know that at the scale of a human head consciousness appears. There is no reason to deny the possibility of the same phenomenon appearing on larger scales, say of a planet’s biosphere …

    The only situations that we are confident exhibit consciousness are brains that are products of a vast number of iterations of natural selection, and are clearly devices that absorb information, process and reflect on it, and then use it to make decisions in order to pursue goals, where those goals again derive from evolutionary programming.

    To extend that to the planet’s biosphere is a wild extrapolation based on little but a poor metaphor. The Earth’s biosphere simply doesn’t do natural selection, in that the biosphere does not produce child biospheres which are then selected between — and as a result there is no reason to believe that the biosphere has “goals” or “thinks” or is “conscious”.

    Imagine you had a sophisticated model that can predict this, and the result of its computation turns out to be time longer than the current lifetime of the Solar system. What would you conclude, as a rational, scientific person?

    I’d conclude that the model was not sophisticated enough and was wrong (barring some astonishingly good reason to believe in self-awareness and intelligence at the biosphere level).

    As I said, lacking a detailed quantitative model of evolution, which fits the data well, the only honest scientific answer to the question which forces govern evolution is “we don’t know”.

    We have a pretty good understanding of the basics of what governs and drives evolution, a basic understanding that seems entirely adequate for the task (though of course there are always details to study further and deeper understanding to develop).

  24. Hi davidlduffy,

    The perspective piece by Gunawardena that you cite beautifully illustrates the problem being addressed: formal v. informal models in biology. (Might this be an example of Gödel’s theorem regarding formal systems? Maybe not.)

    Hi Coel,

    You say that we “have a pretty good understanding of the basics of what governs and drives evolution”, but many recognize that there are major gaps in the narrative (informal model).

    We can now imagine how a complex molecule like RNA or DNA came about – there are a number of proposed scenarios. But what ‘drove’ it to proliferate incessantly, presumably thus producing scarcity of raw materials? The molecules then found relief by seeking out different niches. When a niche became crowded, that is when competition probably reared its ugly head. Bacteria are supposedly ‘simple’ creatures but recent studies indicate that their existence is one of continuous war and battle for survival, using very ‘sophisticated’ weapons against their own kind and others. In fact, the biosphere essentially is a system that necessarily feeds on itself – autophagic. It could be looked at as a single intelligent system, as Marko suggested, in theory. Like everything, it all depends on perspective.

  25. Hi Coel,

    In your comment above to Marko (https://scientiasalon.wordpress.com/2015/06/15/the-formal-darwinism-project/comment-page-1/#comment-14847), I think you need to fully appreciate that he was simply teasing me about the possibilities which do reside in ontology. Furthermore it seems to me that you were recently teasing everyone in exactly this same manner about how your iPhone happened to be conscious! I certainly did take his point as a valid one however, and so wouldn’t trivialize it through the epistemological concerns that you’ve mentioned. But now to narcissistically take the discussion back over to me, I did not merely write my last comment (https://scientiasalon.wordpress.com/2015/06/15/the-formal-darwinism-project/comment-page-1/#comment-14847) with him in mind, but certainly you as well. I really would like to know if you get my point here, and perhaps even see some value to it?

    If we define “physical” as a cause/effect process (bolstered I think by the Unger and Smolin book that Massimo has been mentioning: https://scientiasalon.wordpress.com/2015/04/21/smolin-on-mathematics/), then that which doesn’t happen in this manner may quite logically be referred to as “magic.” I can’t say that such magic does not occur, but I can say that by definition, if ontological reality is not perfectly predetermined by means of cause/effect, then the discrepancy here shall ontologically be the same concept that we commonly know as “magic,” or that which has no foundation from which to occur.

    Thanks guy!

  26. Hi Liam Ubert,

    But what ‘drove’ [RNA or DNA] to proliferate incessantly, presumably thus producing scarcity of raw materials?

    I agree that we don’t know much about how the first self-replicating molecule came about — though that is abiogenesis rather than evolution — but given that it did come into existence we don’t need any more explanation for what ‘drove it to proliferate’ than the fact that it self-replicates. As Darwin said, the rest of the process then happens “naturally” (= automatically).

    the biosphere … could be looked at as a single intelligent system,

    Only as a horribly bad and misleading metaphor! (For reasons as in my previous comment.)

    Hi Eric,

    Marko … was simply teasing me …

    Well I did actually wonder that, so apologies to Marko if I misunderstood his intent.

    … you were recently teasing everyone in exactly this same manner about how your iPhone happened to b conscious!

    No, at no point did I say that my iPhone was conscious. And I explicitly stated that I was not saying that.

    What I said was that my iPhone could do *meaning*. Namely that it “understood” the “meaning” of some languages but not others. (Explicitly, that it could link certain language sounds to entries in its database, but not others, and such linkages are effectively what “meaning” is.)

    I also explicitly stated that I was *not* bundling the concept of “meaning” up with all sorts of other things like consciousness, but was rather dealing with the much more limited concept of “meaning”. I also said that bundling such concepts together just makes the issues unsolvable, and that teasing apart the various concepts involved in what a brain is doing is the way to make progress.

    Since you ask me about what is “physical” or “natural”, my response is that it is a pretty uninteresting question. Defining “physical” or “natural” would only be interesting if there were a viable and serious concept of something “non physical” or “super natural” to contrast them with, but until then there is nothing much to be said on the topic.

  27. Philosopher Eric,

    Ontologically speaking, if “physical” is that which occurs through cause/effect […]

    Ok, here’s the short rundown of the argument (you might want to look up the flowchart at the end of my determinism essay, for reference): physicists have found that identically prepared physical systems display different behavior (i.e. same cause, different effects). People doing experiments in QM face this situation on an everyday basis, and the double-slit experiment is a textbook example. There are two ways you can approach this. One way is to give up causality. The other way is to claim that the initial systems were not really prepared identically, i.e. there are some “hidden variables” that make them different, and we are merely ignorant of those. In that case, the experimental violation of Bell inequalities mandates that these hidden variables must obey nonlocal equations of motion. And for such equations, the initial-value problem is ill-defined, which means that the future depends on more than just past and present. And again one has to give up causality. So the experimental data is in straight contradiction with causality. Moreover, any ontology you may imagine must fit the above data (otherwise it is just wrong), so determinism won’t work for any choice of ontology, except for superdeterminism, which is cognitively unstable (like solipsism, Boltzmann brains, the Matrix, etc).

    Regarding this ultimate lack of causality — if you wish, you are welcome to call it “magic”, but that’s just a matter of terminology. Personally, I wouldn’t call it that.

    hasn’t it become relatively accepted in the physics community that there are more dimensions to existence than the four that we can readily measure?

    No, that’s just hype. String theory features extra dimensions, and keeps jumping through incredible hoops in desperate attempts to get rid of them (they’re called “compactification scenarios”). Most other theories work in four dimensions from the get-go.

    Nevertheless as I mentioned to you weeks ago […]

    Yes, I remember, but I ran out of replies in that thread so I couldn’t respond, sorry. 😦

    Coel,

    We have a pretty good understanding of the basics of what governs and drives evolution […]

    I think Massimo has already disagreed with that statement. And I believe many other biologists would also. The need for a better understanding is the very topic of the essay of this thread.

    Well I did actually wonder that, so apologies to Marko if I misunderstood his intent.

    Don’t worry, no harm done! 🙂 Although, some time ago I did play a devil’s advocate for the “conscious planet” scenario in some conversation, and I was actually surprised how far one can push the idea. My opponents needed to assume a vast amount of various metaphysical “-isms”, just to keep that scenario off the table. So it is a surprisingly robust idea, IMO. But I won’t seriously defend it, of course. 🙂

  28. Thinking informally about the difference between formal and informal models suggests that in the former relationships are quantitatively defined in a closed system and can be internally tested to high levels of rigor. Resolution of discrepancies may require a revised model and/or additional information.

    Informal models are imaginative systems with very incomplete understanding and ill-defined boundaries. Deductions can only be checked by doing additional testing to try to clarify questions.

    In physics formal models predominate; in biology informal models do. This differential approach may explain some of the controversies surrounding attitudes regarding science, scientism and metaphysics: how much imagination will be tolerated? Some prefer to hew closely to the data while others are brave enough to speculate beyond the evidence. Some even believe that we should free ourselves from this hegemony of evidence, so as to be truly free to create ourselves in our own self-image. I believe that freedom to speculate and wonder is fundamental, as long as diligent efforts are made to understand the evidence.

    Understanding the strengths and weaknesses of the evidence may be the most important aspect of our efforts. Bringing together as much information as possible, from as many sources as possible, would provide a better perspective and improve outcomes. This roughly corresponds to the medical model of diagnosis and management, which I recommend to everyone. 😉

  29. Re Marko’s comments above re causality – all the causal models I deal with involve injection of randomness (it doesn’t matter – except metaphysically 😉 – if this is quantum or classical). Just consider the transmission of a gene from a diploid parent. There can be an iron law that the distribution of outcomes follows even if an individual event can’t be predicted.
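
    A minimal sketch of that diploid-transmission point (the sample size and seed are purely illustrative): each transmission from a heterozygous Aa parent is an unpredictable coin flip, yet the proportion of A gametes obeys the iron law and settles at one half.

        import random

        random.seed(0)
        gametes = [random.choice("Aa") for _ in range(100_000)]  # one allele per transmission
        print(gametes[:10])                          # individual events: unpredictable
        print(gametes.count("A") / len(gametes))     # the distribution: close to 0.5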

    Hi Liam U – my own sympathy lies with formal models as far as empirical data can currently support them. Big chunks of genetics are among the more mathematical areas, e.g. population and evolutionary genetics, so I think the prospects for a formal Darwinism of some kind are good, and there are lots of contributions from statistical physics as well as game theory. For example, I couldn’t resist this paper from Physical Review Letters that uses “Glauber’s formula and self-consistent equations of the Schwinger-Dyson type” to solve for probabilities in recombinant inbred lines.

    Nevertheless, we usually end up with some kind of linear approximation to truth at the moment in biology overall. This is true even where the model may not be a good statistical fit to data (because the sample is large), but in terms of understanding might be the right complexity. A slightly different phenomenon is where a physically accurate model might be a “worse” fit in likelihood terms than a linear model that includes higher order (polynomial) terms added in that “soak up” noisy features of data. You may be aware of some to-and-fro about this in climate modelling, for example.

    There are a few recent papers that show the stochastic optimization that evolution does can get stuck in local maxima (well we knew that) or even in between. One I came upon discusses one tiny domain, RNA sequence and secondary structure, where for 20-base RNA molecules there are 1e12 possible genotypes but only 11219 possible phenotypes. In this situation you get “survival of the flattest”, where a less advantageous phenotype that is produced by a wider range of genotypes will be found more easily, and then fixed. This paper gives a model in which a deleterious mutation will often go to fixation if the landscape is relatively flat. The point is that the shape of the fitness landscape is set by the constraints of physics, chemistry (eg the “choice” of nucleotides RNA has ended up with – contingent or inevitable?), and then the ecology (!)

  30. I don’t have much to say about the Formal Darwinism project itself – it is still in its infancy, and has produced limited results so far. I do have a general remark to make, which I’ve hesitated to post, because of its speculative nature. But I wonder if the search for a ‘unified theory of evolution’ isn’t doomed to failure. Evolution may not be a single process following a long chain of dependent causes. ‘Evolution’ may simply be an umbrella term encompassing a wide variety of processes, some of which complement and contribute to others, some of which may be accidentally intersecting with these without much complement or contribution. If so, then many of the principles assumed to be discoverable in each of these processes may simply be poorly grounded.

    One moment of this I wonder about: What if ‘fitness’ is purely a post-hoc observation? What if there is nothing in either genes or species directed toward any such goal or quality?

    If I throw a six-sided die, the number that results is deemed ‘fit’ in relation to the game I’ve chosen to play. But if I’m not playing any game, it’s just the number itself – I can’t even say whether its appearance is ‘lucky’ or not.

    It’s not that the research into the possibility of a unified theory of evolution doesn’t produce interesting knowledge and insight, it has and it does. But the hope may be in vain; and one wonders if greater insight might not be developed by abandoning it.

  31. My fifth, and so final comment for this one…

    Coel you told me:

    Since you ask me about what is “physical” or “natural”, my response is that it is a pretty uninteresting question. Defining “physical” or “natural” would only be interesting if there were a viable and serious concept of something “non physical” or “super natural” to contrast them with, but until then there is nothing much to be said on the topic.

    I think that you might have “put the cart in front of the horse” a bit here. We first must take a given definition for the term “physical” (which could actually be anything conceivable) and then check to see what does and doesn’t seem to fit. So under some definitions for the term there obviously will be both varieties of existence. Furthermore from the cause/effect “physical” that I’ve presented, Heisenberg’s Uncertainty Principle does seem to present a supernatural dynamic (ontologically that is, not epistemologically). If a beautiful naked woman were to materialize in front of me by means of such “uncertainty,” for example, then from the presented definition the event may be termed “noncausal,” “supernatural,” “magic” and so on. I suspect that most physicists would instead give this a more palatable “natural uncertainty” classification however (after emphasizing its ridiculousness of course!). If this were to occur however, I’m able to concede such uncertainty in an epistemological sense, though ontologically I’d presume that the event was quite predestined, and this is given my belief that the past ultimately founds the future. Otherwise this, or even the observed uncertainty in those double-slit experiments, has been defined as “supernatural” through a void in causality.

    Hi Marko,

    Apparently I falsely presumed from your “Farewell to Determinism” paper that you meant it in an ontological sense. For this reason I did originally have problems with your position, and unfortunately this may have tainted my perception of you as well. Sorry about that! You’ve straightened me out by now however, and I’m certainly happy to find you as a well spoken and learned man of epistemology.

    Though there is obviously tremendous practical need for people like yourself, I’ll surely never be considered an “epistemologist.” I am a stereotypical, “head in the clouds,” “daydreaming,” and yes “ultimate,” adherent to ontological speculation. When something happens that’s based perfectly upon physical causes, here there is also complete foundation for the event to happen in this specific manner. Everything will thus be accounted for. But if things break down such that there is no complete premise for all that happens, in an ontological sense the symmetry, or perhaps unity, becomes destroyed. It is this ontology, I suspect, which prevented Einstein from ever conceding to the mainstream. It’s not so much that he had disrespect for the work that was occurring, I think, but more that he had tremendous respect for the wonders of the ultimate natural world.

  32. Hi Massimo, I don’t have much time to write these days, so I’m keeping my comment short. I don’t see a problem with people trying this. However, I am not sure I agree that intentionality is as important in biological discussions as suggested here (the last part of the discussion in the full article almost reads like something from ID). More importantly, I have serious doubts about how successful it could be.

    Outside of trying to build an interactive model between two “levels” of emergent properties (which some of this would require, and which would likely be exceedingly difficult), I am uncertain how it would deal with non-genetic evolutionary components (symbiosis) and evolutionary capacitance. That last point in particular would severely hit a gene-to-phenotypic-trait-to-fitness/selection calculation.

  33. Massimo,

    I agree. My “interesting” was sincere not sarcastic. If the design approach cannot be abandoned, doesn’t that give some succor to Fodor’s argument that there is no distinction between selection-of and selection-for?

    I mean, the response to Fodor’s argument, if I’m understanding it, should be “The apparent non-distinction between selection-of and selection-for is just an artifact of purposive language, which is just a shorthand way of talking, a crutch of our design inclined psyches.” But if the purposive language is unavoidable, even in terms of the math, then Fodor might have a point.

    The article is beyond my ability to grasp, though. So I hope that I’m missing what he means by “purposive language cannot be abandoned.”
