Appendix 2:

 

The Probability Bound and the Resource Exhaustion Argument

(Crunching the Numbers)

 

 

Whatever is a Probability Bound?

The probability bound, or probability boundary, is a threshold of improbability that mathematicians say rules out chance or accidental processes altogether, a boundary beyond which chance cannot go. Mathematicians vary in their opinion of what level of improbability should be used for this boundary. Professor William Dembski tells us that proponents of this concept through history, such as the famous French mathematician Emile Borel, or the secret code breakers of the National Security Agency, have suggested different magnitudes for this limit based upon their own perspectives and technical purposes.

 

While the language of some writers on the subject suggests that this boundary is a purely mathematical and theoretical limit that absolutely precludes an event occurring by chance, I think we must consider the probability bound a practical, not a theoretical, limit, because the probability computations themselves still report a pure mathematical possibility for the event. This holds as long as the mathematically computed probability remains even the smallest fraction above zero. In other words, what the mathematicians seem to be saying is that, for all practical intents and purposes, an improbability that exceeds the probability bound may be treated as a physical impossibility for accident to produce in our universe, even though some minute theoretical chance remains in purely mathematical terms.

 

This is the only way I can make sense of the concept: as a practical, not a theoretical, limit. However, not being a mathematician, I cannot rule out that some of the mathematicians who have addressed the topic mean to say, “Yes, the probability bound is a genuinely theoretical limit, not just a practical one.” William Dembski’s discussion seems to suggest this. To me that position contradicts the probability computations themselves, which say the probability is still somewhat greater than zero, so I personally think the purely theoretical version of the probability bound is a flawed concept. But, for our purposes here, the practical version at least is relevant to computing the plausibility of accidental evolution, even if the theoretical version has problems.

 

Regardless of whether one calls it a practical boundary or a theoretical one, acknowledgement that such a boundary exists seems to be implicit even in the work of famous evolutionists. In his classic book, This View of Life, George Gaylord Simpson cites Julian Huxley (page 202): "to produce such adapted types by chance recombination...would require a total assemblage of organisms that would more than fill the universe, and overrun astronomical time."

 

One might call G. G. Simpson’s way of formulating the problem the “resource exhaustion argument.” Modern mathematician William Dembski has his own version of the resource exhaustion argument, which combines standard probability theory with the known physical and time limits of our universe. Dembski argues that any event with a probability less than 10^-150 cannot rationally be expected to be the result of chance. He calls this limit the “universal probability bound.”

 

Dembski goes on to say that 10^-150 is the most generous limit ever proposed in the scientific literature. Emile Borel proposed 10^-50 as a universal probability bound below which chance could definitely be precluded. Some scientists use 10^-94; still others 10^-120; but no professional involved in probability-based endeavors believes a probability less than 10^-150 can occur by chance.

 

Obviously, my current estimate of the improbability of neo-Darwinian evolution here in Evo-Smevo goes thousands of orders of magnitude beyond this threshold, at less than 1 chance in 10^6,545,300. An event that exceeds the probability bound, let alone by such an astounding margin, clearly should not be expected to be the result of chance. Theories carrying such a low probability cannot be considered scientifically credible.

 

Dembski’s argument keys on the maximum number of physical events available in the history of the known universe. To determine what that number is, he multiplies the estimated number of physical particles in our universe, 10^80, times the number of actions a single particle can perform per second, 10^45, times an estimate of the number of seconds available in the history of the universe, 10^25. Even if all the particles stay constantly “busy,” there is a maximum of roughly 10^150 particle events available since the Big Bang with which to accomplish the work of evolution (and absolutely everything else). In other words, even if evolution were to exclusively use the smallest and shortest event type available in the physical world (this is very generous as evolutionary events are obviously not that small), there are at most 10^150 separate events possible in the history of the universe to date with which evolution could get the job done.
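For readers who want the arithmetic spelled out, here is a minimal sketch in Python using only the three figures cited above (the particle count, the per-particle event rate, and the seconds of cosmic history); the variable names are mine, not Dembski’s.

```python
# A minimal check of the multiplication described above; the three inputs are
# simply the estimates quoted in the text.
particles_in_universe = 10 ** 80   # estimated elementary particles in the universe
events_per_second     = 10 ** 45   # maximum state changes one particle can undergo per second
seconds_of_history    = 10 ** 25   # rough number of seconds since the Big Bang

max_particle_events = particles_in_universe * events_per_second * seconds_of_history

# Python handles the full integer exactly; report it as a power of ten.
print(f"Maximum particle events since the Big Bang: 10^{len(str(max_particle_events)) - 1}")
```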

 

Standard probability theory says that a trial-and-error search must work through half of the available alternatives before the occurrence of the target event from random causes becomes at least 50% probable. Probability theory is not something esoteric that I pulled in from left field here to make my case; it is a foundational tool of science, very much requisite to doing science at all. The entirety of science is a probabilistic endeavor.
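To illustrate the “half the alternatives” point: if there is one target among N equally likely alternatives and guesses are made at random without repeating a guess, the chance of hitting the target within k guesses is k/N, so k must reach N/2 before the chance reaches 50%. The small sketch below is my own illustration, with an arbitrary N.

```python
from fractions import Fraction

def chance_of_success(guesses, alternatives):
    """Probability of finding the single target within `guesses` distinct,
    randomly chosen tries among `alternatives` equally likely possibilities."""
    return Fraction(guesses, alternatives)

N = 1_000_000                              # arbitrary number of alternatives for illustration
print(chance_of_success(N // 2, N))        # -> 1/2: trying half the alternatives gives a 50% chance
print(float(chance_of_success(1_000, N)))  # -> 0.001: a small sample leaves success very unlikely
```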

 

Professional gamblers will quickly tell you that standard probability theory holds true in the long run over a large number of trials. Thus, to say with Doolittle, Dawkins, Strickberger and the neo-Darwinists that anything at all can happen in three or four billion years simply because it is a “large” amount of time, regardless of how steep the improbabilities become, is mathematically invalid and scientifically incorrect. By definition, science must always go with the probabilities, not against them.

 

The larger the sampling base, the more closely results will tend to match the predictions of probability theory. Therefore, the large expanse of time in evolution tells us that we can very confidently expect standard probability assumptions to hold true—not that we can disregard them. Because the total event process of evolution of all life forms on earth is very large, we can safely assume a probability outcome near the standard expectation predicted by probability theory.
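The following toy simulation (my own illustration; the event and its probability are arbitrary choices) shows the point: as the number of trials grows, the observed frequency of a rare event settles ever closer to the probability that theory predicts.

```python
import random

random.seed(0)
p = 0.001  # arbitrary probability of a "rare" event on any single trial

# Observed frequency converges toward the theoretical probability as trials increase.
for n_trials in (1_000, 100_000, 10_000_000):
    hits = sum(random.random() < p for _ in range(n_trials))
    print(f"{n_trials:>10,} trials: observed {hits / n_trials:.5f}  (theory predicts {p})")
```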

 

Given our estimate above of the probability of accidental evolution, half of the available alternatives equals 5 X 10^6,545,299. There are only 10^150 physical events available in the history of our universe (and those are extremely small events), so obviously evolution won’t be able to try half the alternatives. In fact, the time and physical resources of our universe would be exhausted before trying even a trillionth of a trillionth of a trillionth…(continued to 500,000 occurrences of “trillionth”) of the alternatives!
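Because the exponents here are far too large for ordinary arithmetic, the comparison is easiest in logarithms. The sketch below redoes it that way, taking the estimate above as given (half of the 10^6,545,300 alternatives is 5 X 10^6,545,299).

```python
import math

# Compare the trials needed against the events available, in log10 terms.
log10_half_alternatives = math.log10(5) + 6_545_299   # half the alternatives: 5 X 10^6,545,299
log10_available_events  = 150                         # at most 10^150 particle events exist

shortfall = log10_half_alternatives - log10_available_events
print(f"The universe falls short by a factor of roughly 10^{shortfall:,.0f}")

# One "trillionth" is a factor of 10^-12, so the shortfall amounts to about
# 545,000 successive factors of one trillionth -- the text's figure of
# 500,000 occurrences is therefore a conservative understatement.
print(f"~{shortfall / 12:,.0f} successive factors of one trillionth")
```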

 

An occasional fluke might occur, but because the event process of evolution is so large, we can proceed on the safe working assumption that, on average, the sophisticated biological designs of life should not be achieved by chance until approximately half the alternatives (both workable and unworkable) have been tried first. Once we make this bridging assumption, based upon the high confidence that standard probability theory holds true for extremely large samples, it is then possible to compute a minimum resource requirement for the universe to have produced half the possible alternatives before hitting upon our tree of life by accident.

 

We have seen that the number of alternatives involved in random chance-based evolution is so large that one needn’t proceed further to see that the universe doesn’t have sufficient resources to get the job done via the neo-Darwinian process. Obviously, our universe cannot afford to deal out half the cards in the game of random chance-based evolution. It can’t even get close. Therefore, science is not entitled to affirm accidental evolution as more likely than a non-accidental process.

 

While this mathematical proof of resource exhaustion fully defeats the credibility of the theory of accidental evolution, it is easy to get lost in the abstractions of enormous numbers. To make the point somewhat more concrete, I will now propose my own common sense application of the resource exhaustion-probability concept.

 

My form of the resource exhaustion argument hinges on the fact that accidental processes tend to be very wasteful, making a big mess while exploring so many failed alternatives. It is based upon the same mathematical principles, but focuses on the trillions and trillions of tons of biotic waste materials an accidental evolutionary process would necessarily produce.

 

One Big Mess—the Cost of Doing Business with an Accident

Empirical observations confirm what probability theory tells us: when we look at random processes in biology, we see that they just don’t seem to be contributing anything useful to what evolution needs to get the job done. Thousands of mutagenesis experiments have been performed, and no viable mutations significant enough to evidence progressive macroevolution have been observed (generally no viable mutations at all). Under such prolonged heavy bombardment by mutagens, some laboratory specimens, the fruit fly, for example (which is the most common laboratory subject), should have evinced some macroevolutionary potential.

 

The force of the evidence is so compelling as to suggest that some natural law or other requires this result: a law that forbids random processes from frequently producing nontrivially ordered results and forbids the random production of functional order of the magnitude we see in living systems. Maybe there is actually a natural law that says “Don’t hire accident as your architect for life,” just as there is a common sense rule derived from standard probability theory that says “Don’t hire monkeys to type your literary masterpiece (or your prenuptial agreements).”

 

Seeing no such law established in our current theoretical base other than the definition of entropy in thermodynamic law (which is a close relative) and in standard probability theory itself (which is not a law of physics, but of math), I will now propose one. This proposal does not originate with me in concept, although the specific formulation is mine. Professor William Dembski has previously made a similar proposal under the moniker “Law of Conservation of Information (LCI).” Dembski credits Peter Medawar as having previously originated the LCI concept in a somewhat weaker form.

 

I have attempted this reformulation because I assume that most readers prefer a common sense discussion to technical math. I occasionally use a little basic math, but not much. Whether it has really turned out to be less technical than Dembski I will have to leave to the reader to judge. I simply followed where the logic of the formula led me.

 

Don’t let my use of a strange acronym scare you off; the underlying concepts are simple. I call my version of Dembski’s LCI, “ORLEF-B” (Ordered Result Limitations for Entropic Forces—Biology). ORLEF-B constrains the increase in order that can arise as the result of the application of an accidental, random, or disordered force to a biological system.

 

To be clear, what I am doing here is not merely a restatement of the 2nd Law of Thermodynamics. The 2nd Law says something different. It says that entropy (disorder) tends to increase in a closed system and never decreases. In its strict form the 2nd Law applies to the universe taken as one complete, closed system that receives no energy transactions from outside. The 2nd Law does allow for local increases in order, but only when they are offset by decreases in order occurring elsewhere in the universe. The 2nd Law stipulates that no increase in the total order of the universe is possible.

 

ORLEF-B is different. ORLEF-B is not concerned with the overall tendency to disorder in the closed system of the entire universe. It is concerned with the destructive effects that disordered event processes (random mutations) are seen to have on highly ordered biological machines (the rule is assumed to hold for other kinds of machines as well). ORLEF-B proposes three things in general terms: (1) it is both a logically and mathematically justifiable assumption that that which consistently tends to destroy biological machines (random mutations) will not build any biological machine of substantial complexity without first wasting an enormous amount of resources; (2) random mutations will not be able to advance the functional complexity of existing biological machines without also wasting an enormous amount of resources; and (3) the physical laws that govern our world entail that the costs in matter, energy, and time required for random forces to create and evolve our known tree of life would far exceed the resources that have been available in our universe through its history.

 

This can (in theory) be computed with confidence and empirically confirmed. ORLEF-B is a hypothesis that can be tested extensively to gain greater and greater corroboration for the rule and more precision in the math. Tests involve observing what random forces are seen to do in the universe on different event scales and noting how many (if any) biological machines are spontaneously produced or enhanced by random forces; noting the change in magnitude of functional complexity produced by random mutations; and calculating the resource cost to the universe for the entire process.

 

One goal of ORLEF-B research would be to quantify a typical random mutation transaction and to develop a reliable average value for a constant reflecting the ratio of cost in expended resources to units of functional biological complexity achieved. Should such research ever be undertaken, conclusive evidence for ORLEF-B should not be long in coming.
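Purely as an illustration of the bookkeeping such research might involve, here is a minimal sketch; the function, its name, and the counts fed to it are hypothetical placeholders of my own, not data from any actual study.

```python
def orlef_b_ratio(disordered_outcomes, ordered_functional_outcomes):
    """Hypothetical tally: ratio of resource-wasting (disordered) outcomes to
    functionally ordered outcomes observed across a batch of random-mutation trials."""
    if ordered_functional_outcomes == 0:
        return float("inf")  # no ordered outcomes observed at all
    return disordered_outcomes / ordered_functional_outcomes

# Invented example numbers: ten billion mutational events, two judged functionally ordered.
print(orlef_b_ratio(10_000_000_000, 2))   # -> 5,000,000,000.0
```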

 

It is possible that no scientist will dispute ORLEF-B even now, apart from the further philosophical conclusions I have drawn from it. The current evidence suggests that accidental mutations fail to produce viable evolutionary form change in living creatures by whatever route they are proposed. Incremental accumulations of single-nucleotide changes initially do less harm than larger transfers, but this process alone is too slow to meet the evolutionary timetable.

 

Noted evolutionary biologist Douglas Futuyma confirms that mutations having large effects are indeed a problem: “Mutations with large effects are often deleterious, but some evolutionary biologists believe that such mutations have sometimes been important in evolution.” I take him to mean that, although the entire evolutionary process could not have been substantially grounded in large mutations, on exceptional occasions a mutation with large effect occurred that fit into an evolutionary scheme and aided an advancement. Futuyma implies that the evolutionary process is otherwise predominantly grounded in smaller mutations (this may or may not be the opinion in evolutionary biology prevailing when you read this book).

 

ORLEF-B implies that, if beneficial large mutations did occasionally occur, more complete information concerning the circumstances would reveal them not to be fully accidental. ORLEF-B codifies into natural law the age-old wisdom of common sense: “Accidents don’t make machines, they break machines.”

 

Due to the subtlety, complexity, and variety of biological subsystems, it will take a lot of future research to hone the ORLEF-B constant to precision (and the value of the constant will vary for different kinds of systems). Nonetheless, it is easy to demonstrate that the ORLEF-B constant must fall within a certain range. That range guarantees that the cost of generating Earth’s tree of life with a random process would be so exorbitant as to far exceed the time and physical resources available in the history of our universe.

 

On a common sense level ORLEF-B merely confirms what we already know intuitively: random forces do more harm than good. They especially harm the fragile, complex designs of sophisticated living machines. ORLEF-B says that, when the complexity of a machine goes beyond a critical threshold (and all of life is past that threshold), random forces will on average not advance the system but rather degrade it, and that so much “effort” would be required to get past this natural tendency that the resource cost would be exorbitant.

 

In simplest form, ORLEF-B can be stated as follows: “The aggregate total mass of disordered results produced by random mutations to biological systems will always vastly outweigh any ordered results produced by those same mutations (with statistically insignificant numbers of exceptions).”

 

Being a close relative of Dembski’s LCI, it is possible that ORLEF-B can be derived from it. I leave that question to professional mathematicians. I think ORLEF-B is also at least partially implicit in the definition of entropy itself and will turn out to be derivable from thermodynamic law combined with the molecular/elemental transactional formulae of chemistry and physics.

 

Translated to terms directly relevant to evolutionary science, ORLEF-B says that each and every random mutational transaction can be expected to fail to produce viable biological form change modules that will eventually combine to produce macroevolution. In a sense, then, ORLEF-B is the direct opposite of neo-Darwinian theory.

 

Neo-Darwinian evolution says that in a purely random system mutations will produce any level of complex design given sufficient time, and that our world is sufficiently random that random mutations aided by natural selection could have produced the tree of life in approximately 4 billion years. ORLEF-B says no, that will never happen: our world is not sufficiently random to allow most mutations to be called “random”; the modules of evolutionary change are too complex for a random mutation to produce them in real evolutionary time at all; and (now we get to the key testable element of ORLEF-B) if sufficient random mutations were in fact employed to produce the tree of life, it would entail the production of a quantity of bio-refuse many times the mass of our universe.

 

In LCI Dembski says something very similar: you can’t get more complex specified information (CSI) out of a physical process than goes into it. Dumping more and more random mutations into a system doesn’t produce increased order; it only tends to break the ordered systems already present. What this suggests for making our evolutionary language more honest is that when mixed forces (partially ordered and partially disordered) are introduced into a system, we should attribute any net increase in order produced to the effects of the ordered components, not the disordered components.

 

I recognize that there are exceptions, but these are statistically so rare that they do not affect the rule. Our language in evolutionary science should reflect the rule stated by ORLEF-B, not the statistically insignificant exceptions.

 

In addition to correcting misleading language, what I wish to do with this ORLEF-B argument is to add to evolutionary discussions a mathematical constant with a definite value (or range of values) that can be used in answering concrete questions concerning resource exhaustion. In other words, I want to put this endless and futile politicized debate in evolutionary biology to bed once and for all by installing into the accepted theoretical base a recognized scientific axiom that authoritatively answers the question, “Could an accident have done this or not?”

 

So, concerning ORLEF-B, you may want to ask, “Minus extensive additional research to refine it, how can it be of present use in answering the question of whether accidental evolution is possible?” While ORLEF-B cannot presently quantify the biological effects of random forces in specific contexts, it can answer the question by computing an indisputable underestimate of total resource costs.

 

ORLEF-B provides an indirect test for the theory of accidental evolution in terms of the presence or absence of dysfunctional design attempts in the fossil record. While evidence of many of these attempts might plausibly be expected to have disappeared due to conditions not conducive to making fossils, ORLEF-B predicts that the total biomass of such dysfunctional designs would have been so enormous as to exceed the Earth’s biomass by an astronomical margin. Thus, accidental evolution did not occur.

 

The known sensitivity of developmental systems, in addition to biological complexity generally, makes the scenario of aborted design failures (and absence of fossil records for failed designs) fully plausible. But what value shall we initially set for the ORLEF-B constant just to get a feel for how such a study would work and to generate an initial ballpark estimate? What ratio of entropic unusable waste byproducts to ordered structures and systems does an accidental process typically produce?

 

I propose what I consider a safe and conservative starting value of a ratio of 10^116 disordered units to every ordered biologically viable unit produced by accident. This is a composite of Meyer’s/Axe’s probability determination for randomly synthesizing a single new protein from nonliving chemicals (10^-125) and the corresponding probability value for randomly synthesizing a single new protein inside a living organism (10^-77). Both of these obstacles (and many others) have to be overcome by a random process to generate the tree of life.

 

Keying on protein synthesis is proper because all living things are composed of proteins and proteins are key components of most life-critical processes. The degree of efficiency of random processes generating life forms, as opposed to, say, generating much simpler inert substances like coal or water, should be similar to that for random protein synthesis. However, this is still only a rationally guided guess. The proper ratio for the ORLEF-B constant could turn out to be somewhat smaller than what I have proposed, or perhaps somewhat larger. But let’s run the math using my proposed initial value to get a concrete feel for how the process of evaluating accidental evolution using the ORLEF-B constant actually works.

 

The current biomass of Earth is estimated at 1,850,000,000,000 tons. Arbitrarily assuming living systems are at least 90% ordered yields a well-ordered biomass of more than 1,665,000,000,000 tons, or approximately 1.6 X 10^12 tons. At 2,000 pounds per ton that is 3.2 X 10^15 pounds of well-ordered biomass. Using 10^116 as the value for the ORLEF-B constant, producing that well-ordered biomass of 3.2 X 10^15 pounds requires a large mess of disordered byproducts equal to 3.2 X 10^131 pounds. This exceeds the total mass of the universe by trillions and trillions of times.

 

Reasonable estimates of the mass of the universe have been made in the range of 3 X 10^55 grams on the low end to as much as 1.6 X 10^60 kilograms. A kilogram equals 2.2046 pounds. Converted, the high-end estimate of the mass of the universe is somewhat less than 3.53 X 10^60 pounds. So, initial estimates for the ORLEF-B constant show accidental evolution on earth to have been impossible in this universe.
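To make the arithmetic in the last two paragraphs easy to check, here is a short sketch that reruns it. Every input (the biomass figure, the 90% ordered share, the 2,000-pound ton, the proposed 10^116 constant, and the high-end universe-mass estimate) is simply the value quoted above; keeping full precision gives about 3.3 X 10^15 pounds of ordered biomass rather than the rounded 3.2 X 10^15 used in the text, which changes nothing in the conclusion.

```python
# Rerun of the biomass calculation above, using the values quoted in the text.
earth_biomass_tons   = 1_850_000_000_000          # estimated current biomass of Earth
ordered_fraction     = 0.90                        # assumed "well-ordered" share of living systems
orlef_b_constant     = 10 ** 116                   # proposed ratio of disordered to ordered units
universe_mass_pounds = 1.6e60 * 2.2046             # high-end mass estimate, converted from kilograms

ordered_biomass_pounds = earth_biomass_tons * ordered_fraction * 2_000   # 2,000 pounds per ton
waste_pounds           = ordered_biomass_pounds * orlef_b_constant

print(f"Well-ordered biomass:          {ordered_biomass_pounds:.2e} lb")   # ~3.3e15 lb
print(f"Implied disordered byproducts: {waste_pounds:.2e} lb")             # ~3.3e131 lb
print(f"Mass of the universe (high):   {universe_mass_pounds:.2e} lb")     # ~3.5e60 lb
print(f"Byproducts / universe mass:    {waste_pounds / universe_mass_pounds:.2e}")
```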

 

Is ORLEF-B really a serious candidate for a natural law (or at least a biological axiom derivable from biological complexity and mathematics)? There are three good reasons to assume ORLEF-B holds true:

 

(1) Meyer cites peer-reviewed random protein synthesis studies to ground his estimate that random processes will produce biological junk (or poison) 10^77 times for every time they create a useful protein, even within a living system with strict controls in place to protect the host creature. That number grows to 10^125 for random mutations outside a living system.

(2) Thousands of mutagenesis studies show that no biologically viable ordered results have come from billions of random mutations.

(3) Thermodynamic science’s definition of entropy says that truly random energy can never be reclaimed to do any useful work, in biology or anywhere else. Genuinely random mutations would be close relatives of entropic forces, seldom having a positive effect on an ordered biological system.

 

How many more studies must reveal exactly the same results before standard scientific (inductive) logic justifies the assumption that future accidental mutations will not be beneficial, and that infrequent neutral or mildly beneficial mutations will never link to form the large and highly complex combinations of multiple gene sets, gene expression markers, and the corresponding microtubule alterations needed to cause macroevolution? Good science does not permit the maintenance of a purely theoretical assumption in the face of universally contradictory empirical data.