


Evolution – Smevolution!


An In-Depth Analysis of the Evolution Debate

Copyright 2005 Rick Harrison




Part 1


Section 2


Last modified 16 August 2017




More on Punctuated Equilibrium & Gaps in the Fossil Record

Frequent large jumps or gaps in the fossil record, commonly (and erroneously) referred to as punctuated equilibrium,[1] tell us that at times whole sets of complex features appear to have been proposed and achieved in a single step. If, with C.R.C. Paul,[2] we assume the fossil record is at least good enough to reveal the general patterns of evolutionary change (that is, if we take the current lessons of the fossil record literally) and combine these with what is now known of the irreducible complexity of biological systems, then we must acknowledge that the evolutionary process made even more frequent and larger jumps than neo-Darwinian theory has yet admitted, most likely changing entire sets of features at the same time.


This feature of the fossil record is very much in accord with both Professor Michael Behe’s irreducible complexity thesis and with our present knowledge of genetics. Both tell us that several genes (often many), plus additional integration work, are required for viable biological form change.


Accidental dynamics are a highly improbable cause of accelerated evolution events, with or without gaps in gradualism. In most cases there would be insufficient time to propose changes by accident and test them by natural selection. There would be insufficient time to arrange the unlikely coincidence of a biological change that is presently beneficial to its host creature and that also just happens to be a usable component of a future structure or function in some very different kind of creature on the evolutionary tree of life. It is therefore unlikely that a full set of intermediates representing the very small changes neo-Darwinian theory requires would appear in the fossil record for these periods of accelerated evolution.


One can respond by saying, well perhaps natural selection had not yet acted, but was invoked later after the accelerated evolutionary phase was over. No doubt that occurred to some extent, but the problem is that later is too late to save neo-Darwinian theory. If we wait until after the evolutionary bursts of punctuated equilibrium are over to invoke natural selection we are saying that biological machines were designed and created in less time than the classic version of neo-Darwinian theory describes and that natural selection wasn’t there to contribute to the building process.


Natural selection is the neo-Darwinian wild card for explaining how an accident can create a complex machine. Natural selection is supposed to help evolution by locking each sequential step into future designs, assuming only that the step is advantageous to the initial host organism and to the others along the way to an event of macroevolution that can build upon the newly evolved feature. Otherwise, even neo-Darwinists agree that complex life can’t be evolved without either intelligent design or at least a dumb form of orthogenesis. Everyone agrees that accident needs lots of help to build a machine, be that machine living or otherwise.


No doubt some of the gaps will be narrowed by additional fossil discoveries, but the accelerated periods of punctuated equilibrium will always remain, and they will always remain a problem for neo-Darwinian theory. The bottom line is that the historical path evolution is seen to have taken as represented by the fossil record, including gaps and indications of punctuated equilibrium, does not match the predictions or expectations of neo-Darwinian evolutionary theory.


Even some of the top neo-Darwinists are backing away from gradualism. “Thus, more often than not, the gradual origins (if indeed there were gradual origins) of species and higher taxa have not been documented.”[3] As the famous evolutionist Stephen Jay Gould has written, the gradualist scenario of classic Darwinian theory is no longer defensible.


What Causes the Accelerated Periods of Punctuated Equilibrium?

While I think the most honest answer to that question is, “We don’t know,” here is what the classic evolutionary texts say. Frequent rapid jumps in the fossil record are postulated by punctuationists to arise from the geographical isolation of small- to moderate-sized populations. The combination of geographic isolation and small population size is thought to make it more likely that a radically new feature could emerge (due to loss of “genomic cohesion” factors). In small isolated communities, newly evolved features could avoid being lost through re-immersion into the gene pool of the former, much larger population. Without geographic isolation, future interbreeding with the larger community would tend to remove the new feature before enough individuals carrying it were produced to support its continuance.


While isolation of a small subpopulation does give novelty a better chance to survive if it occurs, biological form change novelties of the size and complexity represented by the gaps in the fossil record, occurring at the speed represented by the punctuated equilibrium periods of accelerated evolution, are simply not going to be proposed by accident in the available time in the classically proposed neo-Darwinian way: random mutations occurring at the standard rate of 10^-9 per site per generation. Honest math and science forbid it.
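The order-of-magnitude arithmetic behind “insufficient time” claims of this kind can be sketched in a few lines. This is an illustrative calculation only, not the author’s: the 10^-9 per-site rate is the figure cited above, while the population size and generation count are hypothetical placeholders.

```python
# Back-of-envelope sketch of the waiting-time argument.
# Only the 1e-9 per-site, per-generation mutation rate comes from
# the text; POPULATION and GENERATIONS are illustrative assumptions.
MUTATION_RATE = 1e-9   # per site, per generation (cited in the text)
POPULATION = 1e6       # hypothetical breeding population size
GENERATIONS = 1e7      # hypothetical number of generations

def expected_joint_hits(k_sites):
    """Expected number of individuals, over the whole run, in which
    k specific sites all mutate in the same generation (treating
    the mutations as independent events)."""
    return (MUTATION_RATE ** k_sites) * POPULATION * GENERATIONS

# A single specific site mutates many times over deep time:
print(expected_joint_hits(1))   # ~1e4 occurrences

# Five specific sites mutating together: effectively never.
print(expected_joint_hits(5))   # ~1e-32 occurrences
```

Varying the assumed population and generation figures by several orders of magnitude does not change the qualitative picture, since each additional required site multiplies the odds by another factor of 10^-9.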


Therefore, geographic isolation combined with the classic neo-Darwinian model is not going to be nearly enough to generate the rapid evolutionary events known as punctuated equilibrium. A more rapid engine for biological form change generation is required. That engine must also be non-random if it is to match the fossil record, which does not show the huge number of deformed species design proposals an accidental mutational engine would have produced. Evolution also needs some kind of non-random help with integrating new genes into complex biological systems. New research shows that millions of years are required for the integration of one new gene.


Since the time of Mayr’s genetic cohesion factor hypothesis we have learned that transposable elements in the genome tend to have periodic bursts of intense activity that have yet to be explained. This, in addition to the similarly intense activity of morphogenetic fields in the developmental genome, is now hypothesized to account for much of the radical form change of evolution. These events clearly must take us beyond the classically presumed standard mutation rate. The fact that so much genetic change is being accomplished so fast without constant serious injury to the gene line shows the process to be nonrandom.


If and how these bursts will be tied to geographical isolation-triggered cohesive factors and to the technical aspects of classic neo-Darwinian population studies remains an open question, but the classic neo-Darwinian model seems clearly to have failed to explain both punctuated equilibrium and macroevolution generally.[4] The evolutionary model we see implicitly emerging from recent research shows bursts of intense genetic transpositional activity, not gradual changes from minor mutations, and those bursts have produced results that show a strong mathematical bias toward viable biological system design, far outside the parameters of random chance.


Loss of genome cohesion factors, such as Mayr postulates, and other causes would make new features tend to emerge more frequently from isolated communities, just as readily under intelligently designed genome evolution as under accidental evolution. Competitive dog breeders may achieve the perfect purebred Dalmatian, but if it jumps the fence and gets lost among city mongrels its purebred gene line will eventually disappear through re-immersion into the larger mongrel gene pool. Logic alone says that there will be a strong correlation between the emergence of new biological features and geographic isolation, regardless of whether random mutation or intelligent design was the source of the form change proposal.


My impression of Ernst Mayr is that, in order to preserve the integrity of science, he would rather admit the new evidence and integrate it into the theory of evolution than go into denial for political reasons. As an example, Mayr apparently did recently admit that the constancy of the morphological form of species through history is another problem for the Darwinian model.[5] A truly random process would have shown less constancy in form and more variation. Eventually natural selection would rein in the process of variation; variable species designs would eventually move back to showing a strong preference for the most biologically fit, but not before much more variation had taken place and been preserved than we have seen in the fossil record.


Yes, neo-Darwinian evolutionists have given us an impressive toolkit for developing insight into how Mother Nature manages the preservation and relatively minor sculpting of major biological form variations once they occur. However, they have given us nothing conceptually coherent or mathematically valid with regard to either the initial creation of the requisite vast new increments of biological information or the hugely complex control systems necessary to translate that information into the fantastic machines of life.


Astronomical and irreducible complexity in living systems, astronomical improbability, insufficient time and resources, large jumps, periods of acceleration, no biomechanic for macroevolution, no demonstration of how random change can get the job done, hypersensitivity of amino acids and proteins to random mutation, no explanation for the creation of first life from dust, no explanation for the creation of the first genomes—the accidental dynamic just doesn’t explain the construction of the tree of life.


The conceptual mistake the neo-Darwinists ask us to make is analogous to a dog breeder saying that her management approach, which consistently produces best of breed and best of show winners, is all that is necessary to explain the creation of puppies. Explaining the shepherding of genomes is not the same thing as explaining the construction of genomes. This is the major flaw in the neo-Darwinian argument. It is a fatal flaw, in light of all the evidence. Neo-Darwinian theory must be dismissed as a fully inadequate explanation of life.


Let’s briefly consider one additional factor that argues against accident: the need for redundancy. We can then sum up the improbabilities for all the factors discussed in a chart that distills a ballpark improbability estimate for the accidental theory of evolution.


The Shanks and Joplin Challenge: “I feel so redundant!”

Some biologists, notably Niall Shanks and Karl H. Joplin, have claimed that observed redundancy in biological systems refutes Michael Behe’s irreducible complexity thesis. Redundant systems could, in theory, allow a mutating creature to absorb the destructive impact of random mutation in one offline subsystem while another subsystem carries on operational functions.


In his reply to Shanks and Joplin in Philosophy of Science, Professor Behe dismisses these objections for several good reasons.[6] Foremost among those reasons is the fact that many of the examples Shanks and Joplin give do not qualify as meeting the level and type of complexity Behe’s irreducibly complex systems exhibit. Indeed, some don’t involve living systems at all; they are merely the catalytic cycles of raw chemicals. The processes cited are simple and the parts are not well-matched to each other. Thus they can hardly be considered representative of living organisms.


To give credit where credit is due, yes, there is quite a bit of functional system redundancy (as opposed to gene allele redundancy, which is very common) in biological systems—much of that only very recently discovered. Shanks and Joplin point out that there are some clear instances where cells and systems do the same thing in more than one way. Shapiro and von Sternberg also indicate that some repetitive active DNA segments provide functional backup, though not all.[7]


However, in other instances repetitive segments are required to work in concert with their duplicates to accomplish a specific task. Mere repetitiveness does not ensure full functional system redundancy, and the mere presence of an allele to back up each gene doesn’t ensure functional system redundancy either, as alleles frequently vary significantly from the primary gene they back up. Furthermore, finding a few genetic locations offering some true redundancy does not open the door to risk-free random tinkering substantial enough to change a functional system, absent a full system analysis, something we can rarely produce for any living system at this point in research.


Genes cannot be replaced with just any randomly modified substitute with impunity, even in redundant areas. Single genes have been found to be managing more than one biological process. A gene may be redundant in having a backup for one of several of the processes it regulates or for one of the proteins it codes for, but it may have no redundancy available for other critical processes or proteins it manages. The randomly modified gene that replaces it may not do all the things the original gene was responsible for doing, including those things for which there is no redundant backup.


For example, random mutations to the p53 gene in the mouse may allow embryonic development to continue more or less unaffected, because another gene directs the same developmental processes (or most of them) in the meantime, but the mouse without an intact p53 becomes susceptible to cancer. One of p53’s critical functions is redundant, but another is not. Further research may find redundant coverage for all of p53’s functions, but nothing critical hangs on this. It is only an example used to illustrate that laying out a list of redundant features in biological systems can be misleading, both in terms of the irreplaceability of many multitasking genes and in terms of the generally inadequate total level of protection against destructive mutations that biological redundancy offers the organism as a whole.


To show that there are some examples of truly redundant systems in biology is not to show that accidental evolution of the entire tree of life could have occurred. The latter requires full redundancy of all critical components, and there are no indications that living systems have that much redundancy. If they did, Shanks and Joplin’s examples would have been much stronger, more directly relevant, and more numerous.


Have neo-Darwinists shown all critical biological systems to be redundant for all organisms? No. Shanks and Joplin offer a few weak examples of partial redundancy, some in nonliving systems. To give accidental evolution even a ghost of a chance, it must be shown that all complex biological systems presently have, or previously had, true and full functional system redundancy. That will be a tough order to fill. There are many examples of complex living systems that do not appear to have true functional system redundancy (practically all of them).


Even if there were such redundancy, the accidental theory of evolution would still fail. To demonstrate the feasibility of accidental evolution one must demonstrate not just a defense against destructive mutations via redundancy, but show how an accidental process could generate, coordinate, and integrate so many complex alterations in real time.


If we can make inferences about the past from the present, as Shanks and Joplin implicitly do in pointing to present redundancies in biological systems as representative of past evolutionary events, then the present absence of redundancy in many key biological systems justifies the assumption that that redundancy was absent in the past as well. This means that, on Shanks and Joplin’s own logic, Professor Behe’s irreducible complexity thesis holds for those key systems.


Charles Darwin allowed that it only took the existence of one biological system or feature that could not be produced by accumulated small random mutations to refute his theory.[8] Following Professor Behe’s lead, ID theory proponents point out that there are very many critical systems in biology that are both irreducibly complex and non-redundant.


Even if science could show full redundancy, for that redundancy to contribute to the rebuttal of Behe’s irreducible complexity thesis, science would also have to show that living systems manage that redundancy in the correct way so that living systems are protected against critical design information damage. When critical errors are made by random mutations to the primary genes of a complex functional system the genetic management system would have to switch control to a backup system. To my knowledge this has not been shown to be the standard procedure in living systems, though it apparently does occur in some cases.


What usually happens is that DNA error-checking and repair systems fix the errors, and there is no need to switch to a backup system. In other words, what we know of how biological systems actually work indicates that genetic redundancy is not fully universal, and where it partially exists it is not usually managed as a backup system. Most small random genetic mutations don’t have a chance to cause any biological form change at all in major biological systems, because the DNA error-checking and repair system quickly reverts the change back to the original form.


Another biological process that the redundancy-helps-accidental-evolution theorists must show exists doesn’t seem to exist either. Suppose a random mutation alters a biological system as a hypothesized first step in evolution, and a backup system kicks in to cover the same territory and prevent damage to system function. For this to produce an event of evolution, the alteration must then find its way from the copy of the genome in the non-reproductive cells (the somatic cell line), where it occurred, into a reproductive cell (the germ cell line) in order to be transmitted to future offspring. There is no indication that a biological process doing genetic transfers from somatic cells to reproductive cells has existed in many species across the full spectrum of the phyla.


If the mutations occur directly to a reproductive cell the transfer problem goes away, but then the issue of redundancy is completely irrelevant, because random mutations to reproductive cells have no destructive effect on the operational genome. No backup gene is required for changes made directly to reproductive cells that affect only the genetic material to be transmitted to offspring (as opposed to mutations to the controlling genes within the operational genome that affect the health and function of the reproductive cell itself).


Contrary to Shanks and Joplin’s claim that the existence of redundant biological systems refutes Michael Behe’s irreducible complexity thesis, until some process is found that routinely transfers DNA from the somatic cell line to the germline across a broad spectrum of species, genetic redundancy is completely irrelevant to the accidental evolution vs. intelligent design debate. The known inability of most creatures (other than plants and sponges, which do have such transfers) to make this kind of transfer is called the Weismann barrier, and it is a well-accepted dogma of science. So it is not likely that the redundancy argument against Behe’s irreducible complexity thesis, which supports intelligent design theory, will ever have relevance to the debate.


Does Chalkboard Space (Junk DNA) Help Accidental Evolution?

In addition to a few operational biological systems that have other functions that effectively back them up, there are three other aspects of living systems that in some cases might partially buffer irreducibly complex systems against the destructive effects of accidental tinkering. DNA sequences and genes are frequently duplicated into inactive redundant forms. There is an impressive amount of information storage redundancy, and also some “chalkboard space,” if you will, for evolution to tinker around with in the form of “junk DNA.” If the theory of the late Professor Susumu Ohno and many others is correct, there is a significant amount of “junk DNA” in genomes that serves no direct and immediately useful purpose. And, of course, every gene has an inactive twin, an allele that can perhaps be harmlessly modified. I say “perhaps” because the allele may at times be called to play an active part in managing biological systems. And the allele is not always identical to its primary gene partner.


So, yes, duplicate genes/DNA segments, alleles, and junk DNA could function in a sense as blank “chalkboard space” for accidental evolution. Random mutation could, at least at times, “work” harmlessly in that space while the active genes take care of the cell’s daily genetic functions, the active genes themselves being assisted in warding off destructive random mutations by DNA error detection and repair systems.


A few decades back, estimates of the amount of “junk DNA” began at something over half for humans (though that has been rapidly and steadily coming down) and around 5-10% for bacteria. Most other creatures range somewhere in between.


The chalkboard space idea sounds good on the surface, but it vastly oversimplifies the evolutionary task. Given the irreducible complexity of many biological systems, it is not just one gene that must be changed at a time. Once produced, an otherwise viable change involving at least several genes (usually many) must be integrated into the larger operational system in real time in a manner that does not seriously impede performance. This task is a showstopper for accidental evolution.


The only way junk DNA space would permit the building and implementation of systems advancements without destructively impinging upon actively coding or regulating regions of DNA is if an entire integrated subsystem were assembled and then launched as a complete working unit, along with a variety of DNA snippets that must be patched into just the right places in perhaps hundreds of locations around the entire genome to ensure trouble-free integration of the change. Here we can see that the availability of masses of simple “chalkboard” space does not solve all critical problems for the theory of accidental evolution. The two central problems of system integration and simultaneous complementary changes to many gene sequences remain unsolved. Irreducible complexity is real. It is a fact of biology, not a political slogan.


If evolution were to do both of these things in inactive DNA space before launching the full set of changes into the active genome it would mean that natural selection would not have a chance to quality control the building process. This means that the only way junk DNA chalkboard space could succeed in producing viable life form evolution is by use of a process that is not neo-Darwinian. Natural selection will not “see” the new developing set of integrated changes until it is fully done and placed into the active genome, until it alters the form and function of the organism.


Given irreducible complexity, natural selection cannot help build the new design one small step at a time as neo-Darwinian theory requires. The only explanatory power the neo-Darwinian theory of evolution has ever had derived from the contribution of natural selection to preserving each small step of design modification. Take away natural selection and neo-Darwinian theory explains nothing.


The genetic “chalkboard,” for whatever good it might actually do evolution, continues to shrink before our eyes. The “junk” is turning out to have a purpose after all. In his Internet article on “Uncovering the Hidden Meanings of the Genome,” Dr. Paul Nelson discusses the work of John W. Bodnar, et al., which reveals a hidden language resident in junk DNA.[9] Recent developments in genome research suggest that one of the purposes of “junk DNA” may be precisely to facilitate evolutionary development. This may be done through feeding the thought-to-be-junk repeats and transpositional segments back into the primary genome, not in a truly random way, but rather in a manner closely managed by the genome itself.[10]


An amazing amount of the mystery of junk DNA has already given way to intensive genome research. Scientists like James A. Shapiro and Richard von Sternberg have adopted the working assumption that ‘junk DNA’ is not junk at all. “Indeed, we may come one day to regard erstwhile ‘junk DNA’ as an integral part of cellular control regimes that can truly be called ‘expert’.” This techno-prophecy has already been substantially realized since Shapiro and von Sternberg’s 2005 article, with the discovery that the heterochromatin section of the genome, once thought to be nonfunctional, contains essential genes and assists with several important genome functions.[11]


Having such a substantial change in the estimate occur within a very few years suggests that the presumed junk portion of the genome may soon be revealed to be effectively nil. As of 2013, comments to that effect are already starting to appear in the scientific literature. In addition to the already mentioned discoveries about the important part genomic proteins called “ubiquitin” play in managing gene activation, a recent article in the journal Molecular Biology and Evolution confirms the trend of junk DNA going away and suggests that we have been hasty in drawing conclusions about what is and is not essential genomic material.[12]


Reduction in the “junk” portion of DNA increases the improbability of accidental evolution in two ways: it increases the complexity of the organism’s functional design, and it leaves less room for accidental tinkering, that is, less blank chalkboard space, possibly none.


Bottom Line on Redundancy

There are four “bottom lines” to this discussion of redundancy and chalkboard space: 1) functional backups for very few critical biological functions have been found, at a time when we have sufficient vision into biochemical systems to have discovered most such redundant functions had they been present; 2) even if all living systems contained 100% functional redundancy, such that they were genuinely protected against accidental tinkering by random mutation, this would not solve the time, complexity, and improbability problems for accidental evolution (it would not turn a monkey typist into Hemingway); 3) experimental studies show that, whatever degree of biological redundancy there may be, it is not enough to allow destructive random mutations to make a major constructive input to the process of life’s evolution; and 4) redundant systems are more expensive in time and physical resources to build by accident; Mother Nature would have run out of both before she could scratch the surface of even a non-redundant design for life.


The third “bottom line,” perhaps, requires some justification for readers who have not delved into the biological journal articles. The results of mutagenesis studies belie the significance of the few examples of partially redundant biological systems cited in Shanks and Joplin’s article “Redundant Complexity: A Critical Analysis of Intelligent Design in Biochemistry.”[13] All indications from the laboratory are that redundancy in biological systems is seldom true or full redundancy. Random mutations produce exclusively destructive or neutral results regardless of any redundancy that may be present. Nothing substantive in terms of viable biological form changes that could combine to form the steps of macroevolution of complex creatures has come from these endless laboratory-contrived random mutations.


The empirical evidence of multitudinous genetic studies demonstrates that random mutations create almost exclusively destructive effects irrespective of any redundancy that may be present. This means that either redundancy is an insufficient aid to accidental evolution or that the type and magnitude of redundancy necessary to facilitate accidental evolution is simply not there. There is every good reason to affirm that both are true.


Over thirty years ago, Eric Wieschaus and Christiane Nüsslein-Volhard performed a classic large-scale mutagenesis experiment on the fruit fly, aimed at saturating its genome for mutations affecting embryonic development.[14] The effects of a strong mutagenic chemical on millions of flies were observed, and the study was successful enough to win them the Nobel Prize. Better than 95% confidence in the full saturation of this genome for developmental effects is confirmed in Genetics, September 2004.[15] However, Dr. Paul Nelson reports that at a 1982 meeting of the American Association for the Advancement of Science (AAAS), Eric Wieschaus noted no viable mutations among the results.


Shapiro and von Sternberg report that the darling of the laboratory, the fruit fly Drosophila melanogaster, used in the Wieschaus/Nüsslein-Volhard study, has from 33.7% to 57% repetitive DNA. That’s a lot of potential “redundancy,” yet still no viable results were seen at 95% saturation. These results are not surprising. Remember, random mutations are indiscriminate and imprecise. They are not intelligently guided to affect only one of two or more repetitive DNA segments. An exposure to mutagens sufficient to affect one copy of a duplicated segment may also mutate its backup.


It takes only one nucleotide change to alter a DNA triplet that codes for an amino acid. As few as two substitutions in the amino acid sequence of a protein can, and usually do, destroy the protein’s proper function.[16] Destroying a protein’s biological function via accidental genetic mutation is thus a much simpler achievement than causing cancer, which requires up to 20 genetic pathway exposure events,[17] and cancer is an enormously common event.


Compared to producing macroevolution, accidentally causing cancer is a walk in the park. Remember, typically hundreds of nucleotide changes to many genes are required, in most cases nearly simultaneously and while leaving most other nucleotides intact, to generate a new biological feature. An accidental evolutionary process, then, will always break a living machine before it advances it, or so nearly always that accidental evolution is deprived of any rationally defensible statistical engine for producing evolutionary change in real time.


It is worth noting for the record that there has never been a demonstration of accidental mutations generating any new biological system of even minimal complexity. The phenomenon of cancer is perhaps the most complex thing that we have seen an accidental mutation dynamic achieve, and it is going to be hard to build a tree of life out of that. In addition to a few trivial changes to genomes affecting size or color, all we have seen produced by accidental mutation is outright breakage of biological systems and regression in the function and structure of genes.


The dynamic of accidental mutations is basically a shotgun approach to design. The fact that shotguns don’t work to build or fix watches is hardly surprising. Redundancy, then, cannot rationally be expected to solve all of accidental evolution’s problems, and redundancy is far from being a universal aspect of critical biological systems.


“We have a winner!”

Having survived the Shanks/Joplin redundancy challenge, Professor Michael Behe's irreducible complexity thesis remains strong. Professor Behe reminds us that as the evidence against neo-Darwinian evolution inexorably mounts we must at some point concede that the threshold of sufficiency has been crossed.[18] The following table summarizes that mounting evidence in the form of factors affecting the probability of an accidental origin of all life forms.


Table 1: Improbability of Neo-Darwinian Evolution


Factor Description


One chance in

1. The improbability of getting the first independent life form from a chemical soup would roughly equate to the improbability of creating one gene/protein outside a living system, multiplied by itself once for each gene required, thought to be around 1,500 genes.


10^-125 × 10^-125 × 10^-125 × 10^-125 × 10^-125 … repeated to 1,500 occurrences, or 10^-187,500



Note: this number is actually generous towards the accidental theory of evolution; it incorporates some prior help that Mother Nature has given the process of protein construction beyond what would be available to a purely accidental process. It also equates the probability of producing one gene by accident with the probability of producing one protein by accident. Genes are frequently more complex in that they direct more actions than just the creation of a single protein. The stricter calculation for pure accident to create one standard-size gene at 1,000 bp is 1 chance in 10^601. Producing the requisite 1,500 genes by pure accident is then improbable to the tune of 10^-901,500. 10^601 is an approximate conversion of 4^1,000, which represents a 1,000 base pair sequence in the four-letter language of DNA.

2. The improbability of accidentally differentiating the first single-celled organism into the 200 different cell types used by the human body.



3. The improbability of randomly generating enough useable proteins to create 100 million species (an underestimate). Current estimates of the actual protein inventory are not presumed to be complete, but they suggest that at least 85,000 different proteins are involved for the human species alone, and probably two or three hundred thousand for the complete tree of life. This number is thus a vast underestimate, but the later steps require most if not all of the additional proteins not computed here.



(As a stand-alone factor, strictly computed, this should be 10^-77 raised to the power of 85,000, or 10^-6,545,000. However, adjusting down for the work already done in creating the first independent life form at step 1, the origin of life, it would be something on the order of 10^-6,357,500.)

4. Systems proteomics, the complexity of protein-protein interactions, appears far beyond what an accidental process could achieve (This will be an immense number as our best computers cannot model this process due to its complexity.)



5. Accidentally creating one cellular machine: a ribosome. Much of this cost will have already been accounted for in steps 1-4, but not all of it.


6. Accidentally creating a 2nd cellular machine: a cellulosome. Much of this cost will have already been accounted for in steps 1-5, but not all.


7. Accidentally creating many thousands of cellular machines. Much of this will have been covered in step 3 under protein construction, but presumably not all.


8. Accidentally achieving the genetic mechanisms for DNA transcription, replication, and regulatory functions, including the alternative reading frames/splicing process and trans-positional mechanisms of the more complex organisms. The regulation of cell functions is another enormous increase in complexity.



9. Accidentally developing complex epigenetic systems such as methylated DNA gene activation marking systems and RNA mediated gene silencing systems add yet another level of complexity that must be coordinated and integrated by accident to achieve advancement in biological function (proper gene expression).[19]



10. Reaccomplishing the jump from single-celled to multi-cellular organisms multiple times, i.e., at the beginning of each phylum



11. The Cambrian Explosion of most animal forms in a brief 5-10 million year period radically reduces the time available for a random process to work.



12. Time bottlenecks from events of punctuated equilibrium. Periods of accelerated evolutionary change increase the improbability of an accidental process producing the events.



13. The total biological complexity of all of an organism’s hierarchical systems embedded ten levels deep taken together vastly increases the improbability that a random process could construct them. A single organism, the human body, has trillions of cells each with potentially thousands of parts (certain proteins are made on demand, that is, they are situation specific) doing millions of things per minute. 



14. The complexity of the human immune system, known to be capable of producing as many as 10^12 antibody-neutralizing immune cell configurations.


15. The complexity of the human brain and nervous system is beyond our current comprehension. (100,000,000,000 brain cells, each with thousands of connecting fibers)


16. The complexity of the DNA regulatory, damage recognition and repair systems



17. The added improbability of accidentally developing several critically related complementary traits at the same time



18. Convergence. The independent achievement of the same design feature over and over again in separate evolutionary routes multiplies the workload and improbability of the accidental evolutionary process significantly.



19. Fine art in nature. Neo-Darwinian theory cannot explain the origin of designs of this type, or explain how they can be preserved in a randomly mutating environment where natural selection has no concern for artistic aspects.



20. The absence of a common ancestor and of links between the branches of the tree of life makes it more improbable that an accidental process was the driving force of evolution.



21. The near-absence of dysfunctional designs in the fossil record shows that few random attempts, if any, were made prior to hitting upon the correct design. This is supremely improbable for a random process.



22. Mass extinctions and ice ages caused the loss of significant evolutionary progress at least seven times in Earth's history.



23. The complexity and fragility of developmental processes preclude the possibility of accidental tinkering generating anything except tragic birth defects.



24. Coordinating complex changes in DNA and cell membrane microtubule control structures increases the difficulty level and improbability that an accidental process could arrange events of macroevolution. Since we don’t know how this could be done, the improbability that an accident could arrange such a complex and closely orchestrated event in real time during a momentary event of cell reproduction might be truly enormous.



25. Genetic changes to areas of the genome governing somatic behavioral routines for such things as mating behavior must be synchronized with other genetic changes, thus further complicating the evolutionary process and reducing the chance of accidental success.


26. Standard probability theory applied to the complete library of life. It is impossible to improve one masterpiece by accident, let alone millions in succession. The library of life analogy is simply a nonmathematical expression of the rules of probability theory. These rules dictate exponentially increasing improbability for an accidental achievement of a series of specific results in sequence. The magnitude of improbability increases faster where the situation is complex and many alternatives are available. Standard probability theory, a basic and accepted tool of mathematical science, is the reason why the factors in this table are multiplied in the formula below instead of added.



27. The irreducible complexity/system integration problem. The close interaction of many parts of cellular and bodily systems with many others requires simultaneous implementation of corresponding changes to multiple parts, systems and genes—another exponential increase in the total improbability of accidental evolution derived from sequential or simultaneous achievement of multiple events, each having high improbability.



28. The lack of a neo-Darwinian biomechanic for change, a biochemical pathway for the accidental evolution of life, after so much dedicated investigation and so much biomechanical visibility adds to the improbability that there is an accidental pathway to be found. Read Granville Sewell's paper "A Mathematician's View of Evolution" in The Mathematical Intelligencer, vol. 22, no. 4 (2000): 5-7.



29. Major reversals of initial progress in multiple events of mass extinction; solubility of proteins in water defeating early accidental protein construction; fragility of early life forms; frequent early deaths of creatures hosting new gene mutations, etc.



30. Universality of the DNA dictionary: the improbability that an accidental process would not have tried other DNA translation schemes beyond the minimal variations that have been seen in tightly circumscribed exceptions.



31. Time management programs, that is, biological clocks that synchronize independent events, order the sequences of critical multiple-step processes, and establish critical time windows for process completion, add yet another layer of complexity and improbability that forbids construction of living machines via an accidental process.


32. The probability of our universe being formed in the manner it was, that is, producing gases instead of black holes and thus laying the foundation for the possibility of life, as computed by Sir Roger Penrose.




Total Improbability of Accidental Evolution:           > 10^-6,545,300


The improbability estimate for the accidental origin and evolution of life is therefore at least 1 chance in 10^6,545,300 (this includes Penrose's number for the life-friendly universe plus the accidental production of at least 85,000 proteins). Accidental life is actually much harder than that, because there are thousands more proteins to build, plus the additive difficulty of accidentally achieving the portions of the 30 additional unquantified factors not already covered by the creation of the proteins involved in those steps. The number used for step 1, the creation of the first independent life, is also an enormous underestimate (see the note at step 1).
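For readers who want to check the chart's bottom line, the arithmetic reduces to adding base-10 exponents, since independent probabilities multiply. A minimal sketch in Python, using only the chart's own figures (it is a check of the chart's arithmetic, not an independent estimate):

```python
# Sanity check of the exponent arithmetic in the chart above.
# Independent probabilities multiply, so their base-10 exponents add.

step1 = -125 * 1500              # step 1: 1,500 genes/proteins at 10^-125 each
assert step1 == -187_500         # chart: 10^-187,500

step3 = -77 * 85_000             # step 3, stand-alone: 85,000 proteins at 10^-77
assert step3 == -6_545_000       # chart: 10^-6,545,000

step3_adj = step3 - step1        # step 3 adjusted for work already done in step 1
assert step3_adj == -6_357_500   # chart: 10^-6,357,500

total = step3 - 300              # stand-alone step 3 plus Penrose's 10^-300
print(f"chart total: 10^{total}")   # prints: chart total: 10^-6545300
```

Because the factors multiply, every additional step can only push the total exponent further into the negative; none of the unquantified factors can pull it back.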







Have Questions on the Chart? Of course you do. Some readers may wonder why I gave the Meyer/Axe protein synthesis numbers deference over seemingly much larger improbability estimates for accidental gene production. If a single gene is improbable to the tune of 4^-1,000, then the human genome is improbable to the magnitude of 4^-1,000 multiplied by itself some 20,000 times, yielding an improbability of 4^-20,000,000. Such numbers are sufficiently enormous to make a strong case against accidental evolution. I might also have offered the much smaller but seemingly more authoritative number for key biological protein generation offered by the famous mathematician Fred Hoyle and the renowned physicist Paul Davies, 10^-40,000, as the base figure that starts the probability estimate in step 1.


The math for computing the improbability of random gene production is based upon powers of 4, representing the 4-character "alphabet" of life formed by the 4 primary nucleotide options. Powers of 4 are hard, at least for most of us, to translate into the normal base-10 number system, and hard to mentally follow in a discussion for that reason. So, for the purpose of quick visual scanning in the chart, I went with the Meyer/Axe number, which is already formulated as a power of 10.
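The conversion between the two bases is a one-line use of logarithms: 4^n = 10^(n · log10 4), with log10 4 ≈ 0.602. A small sketch (the function name is mine, introduced only for illustration):

```python
import math

# Convert an exponent in base 4 to the equivalent exponent in base 10:
# 4^n = 10^(n * log10(4)).
def pow4_as_pow10(n: float) -> float:
    return n * math.log10(4)

# A 1,000 bp sequence has 4^1,000 possible spellings, i.e. roughly 10^602
# (the note to step 1 uses the slightly rounded figure 10^601):
print(round(pow4_as_pow10(1_000)))        # prints: 602

# A 20,000-gene genome at 4^-1,000 per gene, i.e. 4^-20,000,000, converts
# to an exponent of roughly 12 million in base 10:
print(round(pow4_as_pow10(20_000_000)))   # prints: 12041200
```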


Although authoritative for its own purpose, the smaller number used by Hoyle and Davies, 10^-40,000, fails to capture all the aspects relevant to the debate over accidental evolution. It only covers the production of 2,000 of the key enzymes necessary to generate the first living cell. It could be that Hoyle and Davies believe these 2,000 enzymes, once present, can produce all the other required proteins, or that the source mechanism that produces the 2,000 can go on, with few additional alterations, to produce the other 200,000 proteins at minimal cost in additional probability.


Be that as it may, as Hoyle well knew, this partial estimate fully refutes the theory of the accidental creation of life, even at a glance. Sir Fred Hoyle was a world-class mathematician, personally in the Penrose-Hawking class, though as an astronomer he was not focused on physics. Hoyle understood that we had incomplete physical and mathematical data. He basically did us a favor in his article, "The Universe: Past and Present Reflections" (Engineering and Science, November 1981), by keeping the discussion simple. The reader can probably appreciate his gesture at this point.


Hoyle knew there was a lot more to it, but he didn't want to lose readers in endless numbers. In writing a brief article for a popular science magazine, Hoyle gave us enough to see the problem at a glance and left it at that. I am sure there are many readers who wish I would have done the same thing. :-)


But my purpose here is different from Hoyle's, or even from that of most ID theorist writers. Hoyle, as an atheist, was making a technical point about the impotence of accident as an engineer of complex systems. ID theorists, typically, are just trying to get the door open for science to consider ID theory as a plausible contender to neo-Darwinian evolutionary theory. But I am not trying merely to open the door to ID theory; I am trying to close the door on the ludicrous theory of accidental evolution with absolute finality. It's a lie and an insult to scientific integrity—a materialist, Communist propaganda fiction.


Hoyle was a proponent of the theory of panspermia (transplantation to Earth of life or its key precursor elements from outer space). Hoyle’s purpose was to show that we would need to include the use of all the universe’s physical and time resources to even get close to a plausible method for life to have been created by any means that was even partially random—not just the resources of Earth. Hoyle felt that one or more transplantation events from space, events that brought biological elements, biological information, and/or biological organisms to Earth, were necessary to get a theory of life’s origin past the threshold of improbability where one would simply have to dismiss it out of hand. Francis Crick, co-discoverer of the structure of DNA, also advocated the theory of panspermia.


Given the improbability of an accident creating life using only Earth’s scarce resources in time and physical particles, this is a very logical conclusion to draw. The evolutionary process needs all the help it can get. However, even transplantation of key elements of life from space is not enough to solve all problems for the theory of accidental evolution. The figures in the chart above essentially rule out accidental life from any source in our known universe.


The initial highly ordered state of matter and energy at the Big Bang, once properly understood, will almost certainly come to be seen as having contributed something significant to the precursor events that incrementally built the blueprint of life, so that will have to be taken into account as well as the possibility of panspermia. Might that initial order have somehow transferred in from a prior emanation of our universe?


Sir Roger Penrose is now telling us (as of 2014) that there may have been many prior “universes” that preceded ours (really just prior emanations of the same universe). Once physics advances sufficiently to get a better mathematical grasp on this concept, there may be influences from those prior emanations (pre-Big Bang) to consider as well. Penrose gives an excellent 27 minute audio talk on the Science Friday website on his recent thoughts on physics and cosmology that is worth a listen.


How far did Sir Roger Penrose's number, 10^-300, go towards taking all of this into account? The most correct answer is "I don't know." Sir Roger Penrose, although a fascinating guy to read and to listen to, tends to quickly lose me in his discussions because he is ultra-smart and in the habit of addressing students having a solid background in advanced physics and mathematics—a background I don't have. (In the Science Friday talk on the web he doesn't do this, fortunately; he keeps his remarks very clear and nontechnical for the general public.) What we can confidently assume is that Penrose did as thorough a job as was possible for humans to do, given the physics and mathematics available at the time his number was offered.


My best take on Penrose’s number is that it covers the move from a hypothetical fully accidental world to the one we have that is governed by the strongly life-favoring natural laws and constants of physics and chemistry, and may make some allowance, as far as the current state of the science of physics allows, to estimate the contribution of the initial ordered state of matter and energy at the Big Bang.


What Penrose’s number presumably doesn’t include is a consideration of the additional form-determining factors that biological complexity suggests must be there, but which have yet to be elucidated by physics. As we see in the chart above, a step by step computation of the improbability of life produces a much higher improbability figure than the one Penrose offers. It therefore seems reasonable to presume that Penrose hadn’t intended to account for the total arrangement of all physical particles into our well-structured world.


Penrose was, in a sense, moving forward from the hypothetical chaos of a purely random world to the known natural laws and constants of the physics of our world that were somehow mysteriously engendered at the Big Bang. In contrast, what I do here is to move backward from observed biological complexity to infer a higher improbability number by means of standard probability theory (without being able to trace the physical steps Mother Nature used in the construction of the complex physical and biological structures of our world from the Big Bang forward).


Is my approach valid? Yes. And, of course, it is not my approach; it is merely an application of standard probability theory to the observed characteristics of physical structures and systems. I have done nothing outside the bounds of traditional science.


What these probability computations tell us is that our world, despite what the neo-Darwinists would have us believe, is not accidental at all, not in the sense of being fully chaotic with no strong bias for life. The chart above gives us a rough feel for the enormous size of that bias.


Dr. Stephen Meyer has observed that the information content of the natural laws currently known to science is insufficient to produce the kind of biological information structures we see in life. We are left with a huge gap in accounting between the magnitude and specific forms of ordered structures that natural laws and physical constants alone can account for and the much greater amount of order we find in ultra-complex living systems. To me this suggests that intelligent design is more probably the source of life than a dumb form of orthogenesis, which would plausibly be more fully accounted for in natural law.


While these ballpark estimates don’t yield fine precision, they do give us a feel for just how strong the case against accidental evolution really is. If you only have a $300,000 loan with which to buy a house, the fact that the real estate agent can’t tell you if the house you are presently looking at is one that goes for $850,000 or $10 million is irrelevant to resolving the question of purchase: you’re in the wrong neighborhood.


PS on the Numbers Game Vis-à-vis Multiverse Theory

No doubt some fast-thinking readers are wriggling in their chairs, saying, “Wait a minute! Penrose’s theoretical model that has countless emanations of our universe occurring before the Big Bang solves the probability problem for accidental evolution by providing sufficient additional resources to overcome the resource exhaustion argument.” It does appear that way at a glance, but it’s not quite that easy.


First of all, an enormous number of prior "universes" would be required, or a small number of enormously old and/or enormously large ones. This is not an insurmountable problem in concept, but it is a thesis that is practically impossible for science to confirm.


To solve an established scientific problem, a theoretical solution doesn’t just have to sound good and conceptually fit the problem; it has to be accessible to scientific verification and demonstrated to be true. The problem with all theories of multiple universes, or prior emanations of one universe, is testability.


If the larger parent world is random/accidental at its foundations as many neo-Darwinists, atheists, and Marxists suggest, a great number of those other universes would have radically different natural laws. If the natural laws in the other/prior universes are very different from our own our science could not probe them and confirm the nature of their physical processes directly. Our scientific instruments could not function reliably enough in those worlds to tell us what if anything those worlds had contributed to the formation of life on Earth. Total mass or energy transfers, perhaps, could be determined in some cases, but input of biological information would either be impossible or, at least in most cases, impossible to determine.


If, on the other hand, the natural laws of the other/prior universes are essentially the same as our own, nothing is gained for the accidental theory of the creation and evolution of life. Our thermodynamic laws in physics and chemistry say that nothing ordered ever arises from pure chaos, pure disorder, or pure entropy, whichever term you prefer. The possibility of life and the other ordered structures of our universe coming from a completely random, chaotic beginning of things in our world, assisted by all the other worlds with laws like our own, is therefore straightforwardly ruled out.


If the Marxist, atheist, neo-Darwinian accidental worldview is correct, the parent world’s being primarily driven by pure randomness means that chaotic secondary worlds (worlds with no physical laws and constants at all) should vastly predominate within the set of prior universes. There can be no scientific probing and tracking of worlds without natural laws and physical constants. After that, worlds with natural laws vastly different from our own would vastly predominate over the very few worlds that would have laws similar to ours (assuming a random spread of the forms these universes took on). In these unlike worlds too, no scientific probing and tracking, or precious little, would be possible to Earth’s science.


The bottom line on multiple or prior universes being used as refutations of the intelligent design probability and resource exhaustion arguments is that those universes have not even been shown to exist, let alone shown to exist in the right numbers, sizes, and scientifically tractable conformations. The probability and resource exhaustion arguments of intelligent design theory must be considered as having presumptively refuted the accidental theory of evolution until such time (millions of years hence, if ever) as sufficient evidence shows that massive numbers of other universes, or massively old universes, transferred into our world the larger part of the biological design information necessary to guide physical processes in the direction of our tree of life.


This information transfer would apparently have to have been in the form of subtly arranged electromagnetic energy of some kind (DNA molecules could not survive the intense heat of the Big Bang), and thus confirmation of such an event will not be quick and easy. In the meantime, while Penrose’s prior universe theory is being further refined and investigated, if science is to retain epistemological integrity, our default theory of evolutionary science should include the element of intelligent design, with accidental evolution being considered vastly less probable.


If the existence of other or prior universes that generated massive resource transfers into our world could be proved, it would not rule out God or intelligent design. Our side of the process is still nonrandom, showing an enormous bias for life. Even if an argument could eventually be made millions of years hence that the bias for life arose from a much larger multi-universe's primordial chaos, it would not rule out God or intelligent design. It would merely require the hypothesis of a creator with plenty of time on his or her hands, a creator who happens to prefer stirring soup to welding heavy machinery.


The two competing theories would then have to face off in a detailed analysis of which best explains all that we know about our world. In Part 2 of this book I argue that the Judeo-Christian-Islamic theory of God, a form of intelligent design theory, best explains what we know of our world, all things considered. Stir on, dude!





Is the Probability Argument Even Valid in Concept?

Defenders of neo-Darwinian evolution have strangely attempted to tell us that probability is irrelevant to scientific explanation in evolution, while it remains the very essence of explanation everywhere else in science. Nowhere else do scientists or engineers ignore standard probability theory when important questions hang in the balance.


Neo-Darwinists suggest we shouldn’t put credence in well-supported probability estimates, when we are known to do exactly that everywhere else in science, industry and life. At times they go so far as to imply that there is no standard and reliable way to compute probabilities, and that anyone’s view of probability is as good as anyone else’s. This is all complete nonsense. The computational rules of probability theory are both well-defined and invariable.


For those unfamiliar with such things, the computational rules for probabilities are laid out clearly in most math textbooks and many reference books, including the fifth edition of James' Mathematics Dictionary and Armstrong's Elements of Mathematics. The rules for the standard calculation of probabilities have been known since they were originated by the French mathematician Blaise Pascal (together with Pierre de Fermat) in the seventeenth century![20]
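The core rule invoked throughout this section is the multiplication rule for independent events: the probability that a series of independent outcomes all occur is the product of their individual probabilities. A minimal illustration (the dice example is my own, not drawn from the cited texts):

```python
from fractions import Fraction

# Multiplication rule for independent events:
# P(A and B and C ...) = P(A) * P(B) * P(C) * ...
def prob_all(*probs: Fraction) -> Fraction:
    result = Fraction(1)
    for p in probs:
        result *= p
    return result

# Probability of rolling three sixes in a row with a fair die:
p = prob_all(Fraction(1, 6), Fraction(1, 6), Fraction(1, 6))
print(p)   # prints: 1/216
```

This product form is why the factors in the chart above multiply rather than add, and why the combined improbability grows exponentially with each additional required event.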


Since the advent of quantum mechanics, it has been indisputable that the very foundations of science, even the natural laws, are probabilistic.[21] Absolutely all of science’s arguments have now taken the form of a probability argument. In Scientific Laws, Principles, and Theories, Robert E. Krebs gives a candid and succinct statement of the probabilistic nature of science:


Historically, all effects or events were assumed to have a cause, or possibly several causes, or to be co-events. We now know that many natural events are described and predicted by statistical probabilities, not mathematical certainties. This is true of very large events in the universe, as well as the very small events as related to subatomic particles and energy. These very small events led to quantum theory and indeterminacy (uncertainty principle), resulting in some problems with the cause-and-effect concept for accepted physical laws.


Scientists do not think in terms of possible or could, or impossible or couldn’t, but rather in terms similar to likely or credible (probable), or unlikely or incredible (improbable). This of course, makes the use of statistical methods such as probability theory a powerful tool.[22]


Physicists at least understand that all of science is a probabilistic endeavor, and that probability theory is a valid enterprise. John Gribbin informs us in Q is for Quantum: An Encyclopedia of Particle Physics, that quantum mechanics, especially, integrates probability theory into its understanding of all that occurs at the level of subatomic particles: “Alternatively, you can think of it as another manifestation of the probabilistic nature of the quantum world—where everything is governed by the rules of probability, nothing is certain.”[23]


A more “down to earth” case in point is the meteor threat. We are reasonably comfortable that a 1 in 6,000 chance of a catastrophic meteor strike in the next century will not happen.[24] On the other hand, in proposing the ludicrous theory of accidental evolution, science asserts that a possibility representing much less than one chance in 10,000,000,000,000,000,000,000,000,000,000,000,000,000...continued to well over 6.5 million zeros, comprises our best explanation for the origin of life? Is this logically consistent? No.


In theory, such a dismissible chance could constitute the “best” explanation, but that is only if we actually have no minimally qualifying scientific explanations at all. In other words, it is the “best” among a group that are all failures. In this case the “best” is not good enough. In such situations we should resort to humility and honestly say that we presently just don’t know.


A probability as small as the one accruing to accidental evolution is not minimally qualifying as even scientifically credible, let alone best theory. No matter how bad the presently available alternatives are, accidental evolution is still not good enough for science to take seriously, let alone ballyhoo as the pinnacle of human scientific achievement.


NASA wouldn’t spend valuable research dollars on a mission with such an abysmal chance of success; nor would any competent research scientist who didn’t have an ulterior political agenda. Nowhere else in science will a scientist affirm a hypothesis with such an infinitesimal probability—unless of course the materialists, Marxists, or Communists need it to support their political theory.


Why then do we allow our entire scientific research community and educational system to be tethered to this absurdly improbable assumption of accident as the basis of life’s origin? The answer is twofold: we fell for a professionally orchestrated scam and we didn’t do our own homework.


We let Marxist public relations experts turn Charles Darwin into a God. Darwin was an excellent scientific thinker for his time, but the goo is gone and so is the time of Charles Darwin. We can now “see” into cells and genomes and accidents are not what we are finding there. Here’s some more homework.


Robert Shapiro reports in a recent article in Scientific American that Nobel Laureate Christian de Duve has called for a rejection of immense improbabilities as essentially equivalent to the miraculous and therefore outside of science.[25] In a similar vein, in footnote 17 to chapter 3 of The Fifth Miracle, theoretical physicist and noted science author Paul Davies confirms that anywhere else in science, when the improbability of a theory or hypothesis becomes so great, scientists reject it out of hand.


An explanation that relies on freaky circumstances, although not impossible, is inherently implausible. We may take the odds against those circumstances as a quantitative measure of our disbelief, or lack of confidence, in the fluke theory.


Shapiro, in arguing for a metabolism-first as opposed to an RNA-first origin of life, uses the probability form of argument to support his own thesis, arguing that the improbability of RNA being accidentally formed in the prebiotic environment of Earth suggests a metabolism-first origin of life.


Neither of these theories is definitely ahead of the other in my opinion. I think proteins came first and, in the process of folding and/or unfolding, released a library of information by being chemically back-translated into a master genome for life. Proteins are more durable than RNA, and the laws of physics and chemistry facilitate amino acids folding into biologically useful forms. Having all or most of a master genome encoded into protein structures gets past the irreducible complexity hurdle by producing a genome that builds all the required parts at the same time, and it explains the presence of so much apparently useless "left over" DNA in the genomes.


But the point here is that science depends upon and always uses probability to support, confirm, challenge, and refute its theories.


The science of weather is built around probability. Gamblers depend upon it to make decisions. Who would bet real money on a roulette wheel with 10^6,545,300 number/color options? Birth control methods are based upon probabilities. Probability is key to winning sports strategies. Even rules of engagement in life and death confrontations of armed combat are derived from probabilities.


Neo-Darwinian evolutionists are being inconsistent with the general rules for the practice of science in dismissing the probability arguments against the accidental evolution of life. Noted evolutionists, such as Mark Ridley, use estimates of genetic probabilities in their own arguments where it is helpful to their position. One of the primary tools the neo-Darwinists use to construct the hypothetical tree of life, maximum likelihood, is a straightforward application of statistical probability theory.[26] Thus, the neo-Darwinists’ attack on probability theory in debates about accidental evolution versus intelligent design theory constitutes a desperate rhetorical tactic that has no epistemological grounding in science.


Evolutionary scientists have been inconsistent with normal scientific method and standards in another way. When encountering assertions of the highly improbable, scientists traditionally ask for a concrete and detailed explanation of the mechanics of the process in order to first establish that such an implausible thing is even physically possible within the context in which it occurred. This is a sanity check neo-Darwinian evolution has never had, and one it couldn’t pass. The next step, requisite to even marginally legitimizing a fluke theory, is to seek a causal explanation of what brought such an unlikely sequence of events into being in the first place.


In the case of accidental evolution, neo-Darwinian theorists have failed on both counts. We don’t know the biomechanical processes of evolution. (How many of you readers knew that? Raise your hand.) And we don’t know the causal chain of events that produced first life and got the evolutionary ball rolling.


Darwinists do not provide a process description that links minor changes into major body form innovations. Nor do they give a causal explanation that can account for the highly improbable origin of complex biological information from accidental processes (or the origin of first life). Every time we look at the genetic and molecular structures and processes in order to try to discover the biomechanics of evolution we find a new and seemingly insurmountable obstacle to an accidental process having originated life or generated major biological form change. 


G. G. Simpson was one of the fathers of the new evolutionary synthesis of the 1950s. He acknowledged that a chance process could not achieve such adaptations as have been seen in the history of evolution without exhausting all the resources of the universe first. In doing this he implicitly admitted not only the validity of the probability form of argument, but that it is sufficient to unequivocally establish purpose in evolution. Simpson did not call the purpose he saw in nature “cosmic purpose,” or “divine purpose,” or even “intelligent design,” he merely called it “purpose,” dismissing true accident as an impossibility. He apparently believed that science could go no further in its conclusions than to rule out accident.


Simpson believed that, by accomplishing a series of improbability-reducing steps one step at a time, the evolutionary process could achieve what is admittedly astronomically improbable taken as a whole. This is basically what Richard Dawkins and modern neo-Darwinists call “cumulative selection.” My advice to the reader is beware “cumulative selection.” There is no way to cheat probability theory.


It would be a mathematical contradiction to say that, once the complexity of a system output has been accurately described and the number of alternative ways of achieving it by accident identified, the overall probability could be reduced by intermediate physical systems that build in an increasing bias for life. Darwinists simply ignore the probability costs of producing the incremental advancements in physical and biological systems that build the overall funnel in nature towards life. They say, “Look, the advent of this process, and this process, and this other process make life’s creation much less difficult, so the improbabilities of getting life started or of a macroevolutionary event in our world are not all that great.” But they don’t honestly compute the improbabilities involved in having those processes arise step by step in an accidental world. They start counting probabilities only after the hard part is done and nature has already become a life-building machine.


When a probability computation is accurately computed for a random event process (in this case a hypothetical truly accidental evolutionary process) based upon a complete and correct system output description (the enormously complex organisms of the tree of life), that probability cannot change no matter what intermediate events occur. If the achievement of the intermediate systems that facilitate building the tree of life via cumulative selection is accomplished by complete accident there is a probability cost for each step.


Once the improbability of randomly getting a complex result from a system at point ‘Z’ is computed based upon the structural and functional complexity of the result, it is irrelevant which path was taken to get there. The total minimum improbability remains the same. Only the manner in which that improbability is divided up among the intermediate steps ‘A’ to ‘Z’ changes.
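The multiplication claim in this paragraph is, as far as it goes, standard probability arithmetic: for a fixed target reached through independent steps, the step probabilities multiply to the same total however the path is divided. A minimal sketch, using arbitrary illustrative numbers (not biological estimates):

```python
from math import isclose, prod

# Arbitrary illustration: an overall chance of 1 in 10^12 for some outcome.
p_total = 1e-12

# Two different ways of dividing the same outcome into independent steps.
path_abc = [1e-3, 1e-4, 1e-5]   # three intermediate steps
path_xy = [1e-6, 1e-6]          # two intermediate steps

# However the improbability is divided up, the product is unchanged.
assert isclose(prod(path_abc), p_total, rel_tol=1e-9)
assert isclose(prod(path_xy), p_total, rel_tol=1e-9)
```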


Positing a ridiculous, roundabout and inefficient pathway to the result that is not obligatory could increase the improbability, but the minimum improbability cannot be reduced beyond what a physical description of the structural and functional complexity of the resultant system entails. The overall improbability of producing the tree of life by accident isn’t reduced by cumulative selection; it is merely paid for in incremental payments instead of a lump sum.


Once nature’s life-building machinery is complete, subsequent production of life is easier than before the machinery was built, yes. But the probability cost for achievement of each step in building the machinery must be included in the total improbability estimate for the accidental evolution of life. This is how neo-Darwinian evolutionists try to cheat probability theory, by not beginning to compute probabilities until after nature has built a system that exhibits a huge bias for life.


Neo-Darwinian evolutionists don’t start counting improbabilities until after life exists and has progressed to the point of having self-transformational genomes. At that point, of course, natural selection will preserve any improvements in biological form and function, regardless of whether they are produced within a system that is predesigned to find the correct result “randomly” from that point, or predesigned to find the correct result by some form of non-random partially guided transpositional process. But those systems are predesigned and accidental processes cannot produce those systems. Accident cannot get life to the point where it can produce acceptable proposals for species design for natural selection’s approval. The cumulative selection concept in neo-Darwinian theory cheats correct computations of probability for accidental evolution by only starting to record probabilities after the hard part of building the life-building machinery is done.


Because of the untold trillions of “cards” in the deck of life, the probability costs are very high for randomly producing the cumulative selection system. That system is nothing more than the series of steps nature took in manifesting its huge bias for the creation of life in living creatures having transformational genomes. As mathematician William Dembski teaches us, “cumulative selection” is a way to try to get a free lunch in terms of probability costs by starting to count probabilities only after a huge bias for the production of the tree of life has already been manifested in nature. We have to count the probability cost of nature moving from a purely unbiased state to the heavily biased-for-life state by means of fully random or accidental processes alone.


This system of cumulative selection is itself a hugely complex life-building machine-building system. The improbability estimate for nature building such a system by accident is extremely high. It is not any easier to get an automobile manufacturing plant by accident than it is to get an automobile by accident.


“Cumulative selection” sounds awfully good, if you don’t look at it too closely. However, it is really just magic dressed up in pseudo-scientific jargon.


Yes, it is true that at some point after certain living systems are achieved it becomes less and less difficult to achieve further evolution of advanced life forms (though it is never easy), but we have to pay the probability costs for achieving each of those key steps in their first occurrence. For example, the achievement of the first living organism improves the odds of producing a new biologically viable protein from roughly 1 in 10^125 to 1 in 10^77. That’s a big reduction in improbability, but we had to first pay the additional cost of achieving the first living organism by accident.
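Taking the two odds figures in this paragraph at face value, the size of the claimed reduction is simple exponent arithmetic (a sketch using only the quoted numbers):

```python
# Exponents of the odds-against figures quoted in the text.
odds_before_life = 125   # 1 chance in 10^125 before the first organism
odds_after_life = 77     # 1 chance in 10^77 once a living cell exists

# Dividing the two odds figures subtracts the exponents.
improvement_exponent = odds_before_life - odds_after_life
assert improvement_exponent == 48   # the odds improve by a factor of 10^48
```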


Having a transpositional genome that periodically moves preformed sequences of DNA nucleotides around within the protected environment of a living cell/creature further increases the chance of mutational success by many orders of magnitude. But we have to first pay the probability cost of achieving the protected living environment and those fantastically complex genomic systems by accident.


Accidental evolution, even with cumulative selection, does not turn out to be easy as pie, as Dawkins, Strickberger, Doolittle, and company would have us believe. The neo-Darwinian concept of cumulative selection only describes the fact that evolution was achieved by a system that became increasingly efficient as it went along; it doesn’t explain how an accident could produce that kind of system. Cumulative selection does not bring the total improbability down within the scientific threshold of credibility; it doesn’t change the probability at all.


In the final analysis, the improbabilities established for neo-Darwinian evolution by modern intelligent design theorists come from a complete and scientifically rigorous (an “honest” one) description of the physical structures and systems of life. They are based upon peer-reviewed studies and uncontested facts of biology. Table 1 above gives us a rough idea of what those improbabilities are, and it is an intentionally gross underestimate.


The probability argument of Stephen Meyer, William Dembski, and the intelligent design team is valid, and it is in no way affected by the concept of “cumulative selection.” Taken as an argument for accidental evolution, as opposed to a simple objective process description, cumulative selection must be viewed as a verbal sleight of hand: it asks us to substitute the small probability cost of spontaneously getting a car from a fully automated robotic automobile factory (one that is somewhat hit-and-miss but eventually gets the job done) for the enormous probability cost of producing that robotic car factory completely by accident.


Stacking the Deck 

Scientists such as W. Ford Doolittle have offered criticisms of some of the early forms of the probability argument:


To switch gaming metaphors, wonder in the face of the improbability of a horse is like bemusement over receiving any particular hand in a game of bridge. Since there are 4 × 10^21 possible hands, any single hand is incredibly unlikely, and one would be foolish to anticipate receiving it—but no hand is any more unlikely than any other…[27]


But, of course, this is a bad analogy as regards the origins and development of life. Certainly broken, incomplete or otherwise flawed machines are much more likely to be dealt out of an accidental process than elegant, sophisticated, and efficiently functional mechanisms. We may not be justified in wondering at getting a horse as opposed to an elephant; but, if the process is truly random/accidental, we are justified in wondering that we got a biologically viable form of either instead of a disabled mutant. We are also justified in wondering at getting something alive versus dead, or sophisticated versus simple.


To be plausible even on the surface Doolittle’s argument must assume a standard deck where all cards occur with more or less equal frequency. Once again the neo-Darwinists are not describing the system before assigning the probability. In this case they are ascribing the description of another system to that of life, that of a specific card game, contract bridge, which uses a standard deck where each denomination occurs with equal frequency. Is the system description of life the same as the game of bridge? No; not even close, and yes the difference is mathematically significant—it is in fact startling.


We have just seen that in the biology of life the odds against randomly generating a single new biologically viable protein by accident are approximately 10^77 to one. And that is after life has already been created. Now that’s a tough card game to win! We shouldn’t be any more surprised at getting one of the massive number of biologically nonviable possibilities from an attempt to randomly create proteins than at getting any of the other nonviable possibilities, but we should be surprised at getting a viable protein—very surprised.


Doolittle has us asking the wrong question. The evaluation of the scientific credibility of a thesis does not hinge on the question of whether we will be psychologically surprised. Scientific credibility hinges upon hard math, the application of standard probability theory rigorously applied after the features of the correct physical system involved have been carefully described.


Unlike the card game of bridge, the living systems of biology contain some cards that are fantastically rare. Biologically viable proteins are one of the rare cards and functional genes are another. So, let’s play the game of life for a moment instead of the game of bridge.


DNA sequences that do anything useful for life have been found to be vanishingly rare among all the possibilities in comparison with those that are deleterious or neutral. Clearly, if you play blackjack with a multi-trillion-card deck having 4 of every denomination out of every 52 cards except the ace, of which there is only one in the entire deck, not every hand is going to be as probable as any other. In the game of life on Earth, a biologically viable protein is that ace. The deck of life is very nonstandard compared to bridge. And it is not just a multi-trillion card deck in the sense of a few trillion cards. The game of trying to get just one biologically viable protein involves more than a trillion, trillion, trillion, trillion, trillion, trillion cards.
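The biased-deck point can be made concrete with a toy comparison; the deck size below is purely illustrative, not a biochemical count:

```python
# A standard deck: 4 aces in 52 cards.
p_standard = 4 / 52

# A hypothetical "multi-trillion-card" deck containing a single ace,
# as in the author's analogy; the size here is illustrative only.
deck_size = 10**12
p_biased = 1 / deck_size

# In the biased deck, drawing the ace is tens of billions of times rarer,
# so hands are very far from being equally likely.
ratio = p_standard / p_biased
assert ratio > 7e10
```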


One has to adjust statistical expectations in accordance with the known content of the deck of the actual game one is playing. This is the first and most elementary principle of probability theory: describe the system accurately first, then compute the probability afterward. Doolittle invites us to make the mistake of failing to first accurately describe the deck of cards being used before calculating probabilities. But much more than that, he has asked us to accept the deck of bridge as a functional description of human physiology and the rules of bridge as analogous to the process of evolution! Gross error and mathematical oversimplifications are what one tends to get from the neo-Darwinists. It’s all easy as pie, you know.


Well, get ready for the weight-watchers regimen from Hell, ’cause it’s an awfully big pie. A human body employs approximately 85,000 different proteins. The probability of achieving a single viable protein by accident is 10^-77, and the probability of achieving them all in sequence by accident is the product of 85,000 factors of 10^-77, that is, 10^-6,545,000! This ain’t bridge, baby! Even if we didn’t care which of our millions of species were produced, the probability of getting any one of them, if they were to have 85,000 biologically viable proteins in their structure, would still be 10^-6,545,000.
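Using only the figures quoted above (85,000 proteins at odds of 1 in 10^77 each, both the author's estimates), the combined exponent is a single multiplication:

```python
# Figures as quoted in the text; both are the author's estimates.
exponent_per_protein = 77    # one viable protein: odds of 1 in 10^77
protein_count = 85_000       # proteins attributed to a human body

# Multiplying 85,000 independent probabilities of 10^-77 each
# adds the exponents: 77 * 85,000.
combined_exponent = exponent_per_protein * protein_count
assert combined_exponent == 6_545_000   # i.e. odds of 1 in 10^6,545,000
```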


The neo-Darwinian argument is not good science. It is not really science at all. They just say, “Well, improbable things can happen, and 4 billion years is a lot of time. Therefore, accidental life is possible within scientific thresholds of credibility.” Sounds good so it must be true, right? No. The second statement is false. When you do science and math instead of rhetorical word games you see that the thesis of accidental life falls far outside the thresholds of scientific credibility.


Finally, as the late and much beloved Andy Rooney might say, there is another thing I don’t like about all of this. Criticisms of the probability argument have been far too casual. Richard Dawkins claims to have explained the whole problem away by starting his explanation, in Climbing Mount Improbable, in the middle of the process after life forms and genomes have already been built. But, as we have just seen, most of the probability cost occurs precisely in those two steps: abiogenesis, and building the self-transformational transpositional genome. Dawkins skips the hard parts then declares neo-Darwinian theory a glorious success. Evolutionary science has to be able to do better than that.


Dawkins has not climbed Mount Improbable, in the sense of the real process of life’s creation. He has ignored that process, and substituted an oversimplified “straw man” fictional mountain in its place just as Doolittle substituted the game of bridge for the real description of living systems.


Neo-Darwinist theory has not advanced since the time Darwin issued it in 1859! (Also the time of Marx.) It remains a protoplasm-era theory in terms of the degree of complexity it ascribes to the processes of life and evolution, invoking the same “magic” of dogmatic philosophical concepts ungrounded in hard data from genetics and microbiology.[28] While neo-Darwinists admit that the goo is gone (protoplasm), they still describe life as if it were as simple as Darwin viewed it in 1859.


Despite the fact that accidental evolution has been definitively refuted, neo-Darwinian theory retains an overall credibility in the minds of scientists and the public rivaling that of the theories of gravity and relativity! This artificially contrived and politically motivated continuance of the outdated theory of the accidental evolution of life is the biggest scandal in the history of science and philosophy.


Monkey Business

But what about the famous mathematician, Emile Borel? Doesn’t Borel’s monkey typist theorem disprove the probability argument? Since monkeys somehow have the ability to type themselves out of a probability bind (although human writers certainly don’t), how big of a problem can the accidental evolution of life really be?


Once again, lack of a true system description is the problem. Borel’s demonstration is only valid given the artificial rules and assumptions he sets for his proof. Those rules and assumptions are the rules of pure mathematical theory; they are not compatible with the realities of the physical, biological world. Like the game of bridge, the monkey typist theorem is another broken analogy that doesn’t reflect the truths of physics and biology.


Borel says that, given infinite time, monkeys typing randomly would eventually recreate Shakespeare’s Hamlet and in fact all of the great works of history. Should such an imaginary example give us good reason to deny irreducible complexity in biological systems, the concrete results of mutagenesis studies, protein synthesis studies, genomic and cellular complexities, the fossil record, and the inability to originate life or reproduce evolution in the laboratories of our own non-theoretical world? No.


Did evolution occur over super-enormous amounts of time approaching the infinite? No. Were the conditions purely random? No. Our universe is not only demonstrably, but dramatically not a fully random system, and according to our current scientific view of cosmology (Big Bang theory) it has not been around for even the smallest fraction of infinite time, or for the smallest fraction of the time required to make accidental evolution credible. Even if the “book of life” had only one volume consisting of only a million characters using a 26-character alphabet (it has whole libraries), attempting to type such a book by accident would exhaust the time and physical resources of our universe trillions upon trillions of times over before the probability of completing it would approach the scientifically credible.
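The scale of the one-volume example can be checked directly. Typing a specific million-character text at random from a 26-letter alphabet carries odds against of 26^1,000,000; a short sketch converts that figure to a power of ten:

```python
from math import log10

# Typing a specific 1,000,000-character text from a 26-letter alphabet
# purely at random: odds of 1 in 26^1,000,000.
chars = 1_000_000
alphabet = 26

# Convert 26^1,000,000 to a power of ten: exponent = 1,000,000 * log10(26).
log10_odds_against = chars * log10(alphabet)
print(round(log10_odds_against))   # about 1,414,973: odds of 1 in 10^1,414,973
```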


In addition to what we know of our world’s actual history, there is a strong, purely theoretical refutation of the applicability of the monkey typist theorem to our world inherent in thermodynamic law. If the monkeys are typing in our world, the real one, then the natural laws of this world apply. The definition of entropy in thermodynamic law states that entropic energy can never assume a significantly ordered state again; it is completely and permanently useless. In theory, complete randomness in the physical world, in any form, is 100% entropy. But, due to the laws of chemistry and physics, any event involving physical objects held together and controlled by those laws is never completely random, in the sense that the atoms and molecules are largely organized and their interactions governed by laws that preclude fully random behavior. Even air molecules are organized. In positing full randomness, Borel is postulating an imaginary condition that can never be realized in the physical world.


Still, in the realm of pure theory, if we are to be consistent with science’s own laws of thermodynamics, we must affirm that no ordered result can ever accrue from a fully random state of anything in our world in any amount of time. To apply Borel’s monkey typist theorem to the physics of our world therefore creates a contradiction. Borel says ordered systems, classic books, would eventually be produced from purely random events, but our laws of thermodynamics in physics say they would never be produced if the source were purely random.


You may object that “Surely, Borel means nearly random but not fully random, for nothing in our physical world other than waste energy ever becomes fully random.” You would have to ask Borel what he meant, but if he allows for partially random and not fully random he encounters problems guaranteeing the result. The counterexample is where a slight bias for repetitive characters or strings of gibberish of any of thousands of different kinds takes the typing process off in the wrong direction from which it cannot return. The classic novels are never written, except with very major flaws. The only way to solve this problem is to specify that the slight bias is in favor of the novels, and this of course defeats the proposition that true accident has created them.


If science could trace the history of our universe back to a chaotic time when no natural laws existed and show that pure random chaos transitioned into our orderly universe by a sheer statistical fluke, then it would be appropriate to apply Borel’s monkey typist theorem to the real physical world. But this is not the history of our universe provided by the physicists and cosmologists. Our universe has always been governed by the same thermodynamic laws that say that no ordered system can come from a purely random state of matter and energy. Perhaps the Big Bang qualifies, but then again perhaps it doesn’t—we just don’t know, and we presently have no way to find out.


We don’t, of course, live in a fully random or accidental world. Many of the physical constants that comprise the directional bias that drives physical processes toward life have now been described by Roger Penrose, Hugh Ross, Simon Conway Morris, D. D. Axe, Stephen Meyer, Michael Denton, William Dembski, and others. The rapid achievement of complex life forms on Earth implies that a strong bias for life is inherent in the laws and processes of nature.[29] Our world is, therefore, not the purely random system assumed by Borel in his proof. The monkey typist theorem, therefore, cannot be applied to evolutionary biology. The monkeys will have to find work elsewhere in pure mathematics.


Some neo-Darwinists will say these kinds of critiques of Borel as applied to evolution are beating a dead horse, that they have admitted the world to be nonaccidental for decades. The problem is that while professional neo-Darwinists admit that evolution is non-accidental to maintain scientific credibility within their scientific disciplines, the atheist and materialist lay commentators in the public sector, and occasionally a degreed scientist as well, resurrect accidental evolution and Borel’s monkey typist theorem along with it as if it were an unanswerable argument against intelligent design. One moment the world is not accidental, when a materialist needs to keep his or her scientific reputation and credentials intact; the next moment it is accidental again, when the Marxist, Communist, materialist, and atheist commentators want to score points with the public.


Every time scientists, philosophers, and theologians turn their back, the dead horse of accidental evolution rides again in the public press and on the Internet. It mysteriously rises and gallops through the countryside with the ghost of Emile Borel riding high in the stirrups with a monkey banging away on a typewriter on his back. Trust me: that horse ain’t dead in neo-Darwinian rhetoric; it remains one of their primary workhorses.


As a notable example, the public-minded citizens at the National Center for Science Education remain certain that Borel’s monkey theorem refutes intelligent design definitively and proves that all of us who support ID are blithering idiots who can do neither math nor science. They present on their Web site an article entitled “Creationism and Pseudomathematics,” an excerpt from a book by Thomas Robson. Robson begins with the scientifically unsupportable assumption of an infinite universe and infinite time, contrary to Big Bang theory, which is the currently accepted cosmological theory in science.


He invites us to confuse the dismissibly small possibility of a universe with our time and physical resources having hit upon life by accident within its present lifespan with a universe doing it after having had a nearly infinite amount of time to try and a nearly infinite amount of resources to work with. This is a simple mistake of substitution, the fallacy of equivocation. Perhaps an example will help.


Let’s say we set up 30 playing cards as targets for a shooter with a shotgun. We put the cards within the widest dispersal width for pellets of a typical single round from the shotgun at 15 yards range, then do a long term test to see how often the random pattern of pellets will be just right to knock all the cards down with one shot. Let’s say that the long term test shows that once in every 100 shots all 30 playing cards will be hit by a single shotgun blast.


Here, although all the cards might be knocked down on the very first shot in a series, the odds are only 1 chance in 100 that they will be. As the shooting continues, the chance of having succeeded at least once grows with every round: it is still under 40 percent after 50 rounds, it first passes even odds at about 69 rounds, and it becomes a near certainty (better than 95 percent) only after roughly 300 rounds.
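The cumulative chance in this example follows the standard formula 1 − (1 − p)^n for n independent shots; a short sketch with p = 1/100:

```python
def chance_of_success_by_round(p: float, n: int) -> float:
    """Probability of at least one success in n independent trials."""
    return 1.0 - (1.0 - p) ** n

p = 0.01  # one chance in 100 per shot

print(chance_of_success_by_round(p, 50))    # about 0.39
print(chance_of_success_by_round(p, 69))    # about 0.50 (first passes even odds)
print(chance_of_success_by_round(p, 300))   # about 0.95
```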


The fallacy in the NCSE argument is to equate the much greater probability that will occur later, after the process has gone well beyond the time and physical resources that have actually been available to the process of evolution in Earth’s history, with the dismissibly small probability that exists at the beginning within the actual resource limits evolution had to work with. Yes, accidental evolution is theoretically possible somewhere over infinite time using infinite physical particles (so is everything else possible under those conditions), but it is not probable here on Earth given Earth’s more limited resources.


The description of the “shooting gallery” of accidental evolution we have produced above shows that it takes on average 10^6,545,300 shots to “take down the cards.” That is trillions upon trillions upon trillions…of tries. Our universe has only had the time and physical resources to produce 10^150 nearly instantaneous microscopically small events. That equates to being on the first round of shooting for a shooting trick that requires on average not 100 but 10^6,545,150 attempts! Given infinite time and infinite “ammunition” it will eventually happen, but it is not rational or scientific to expect it to have occurred on the first try. That is the scientific credibility quotient of the theory of accidental evolution.
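Taking the text's two exponents at face value, the comparison reduces to a subtraction of logarithms; a sketch using only the quoted figures:

```python
# Exponents as quoted in the text (taken at face value for illustration).
log10_events_available = 150       # ~10^150 elementary events in the universe
log10_shots_needed = 6_545_300     # ~10^6,545,300 shots needed on average

# Expected successes = trials * (1 / shots_needed); in log10 terms
# that is a subtraction of exponents.
log10_expected = log10_events_available - log10_shots_needed
assert log10_expected == -6_545_150   # expect ~10^-6,545,150 successes
```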


We have two, and only two, competing theories to choose from: accidental evolution and intelligent design. The only other alternative is to say “Nature is just that way.” That is not a theory. A theory is a scientific construct produced to explain physical events and enhance prediction. To say “Nature is just that way” is not to give, or even to seek, an explanation; it is to forsake explanation. While we have to allow that the position that “Nature is just that way” could be a scientific fact, it is not a scientific theory. And it is not a fact we can ever demonstrate to be true, because a yet-to-be-discovered intelligent designer always remains theoretically possible.


Yes, accidental evolution is possible, but for every chance of it being true there are a trillion, trillion, trillion, trillion…chances that either intelligent design occurred or that nature is just that way, designed for life, but having no designer. Using inductive logic, which is the basis of science, all of our experience tells us that things that are designed for a purpose in fact do have a designer. So, of the latter two options, intelligent design theory is by far the more probable.


Why are we locked into voting for accidental evolution just because it can’t be proved fully impossible? Intelligent design can’t be proved impossible, and it is much more likely to have occurred.


The author of the NCSE article says our numbers are all wrong for the probabilities of producing the proteins and other structures of life by accident, and that he can’t show us his scientific support for that statement in the present article for the same reason that technical scientific support isn’t given in other public presentations (presumably limitations of space, or that we, the public, wouldn’t follow the technical jargon, or that we are just too dumb to get it). That is the classic logical fallacy of the argument from authority, “Trust me, I’m an expert,” or the “I left my homework in the car” fallacy—trust me, the proof is somewhere else. How hard can it be to link to another file that has the proof in it?


If an issue in science has ever been politicized in history this one has. We should not trust anyone who does not present their supporting case.


I have shown you the reasons the numbers in Table 1 above are legitimate as a ballpark estimate. The Robson article at NCSE hasn’t shown you anything. All it does is pick a few people out of the crowd of intelligent design authors who have said something wrong and use those isolated statements as straw men to knock down, pretending to have defeated the entire extensive set of legitimate arguments for intelligent design theory. That is another logical fallacy, called “the straw man fallacy.” It is like saying I caught Gerald Ford or Dan Quayle in a misstatement, so all the candidate’s arguments on all topics are wrong—yet another fallacy, “hasty generalization.”


The NCSE article’s score on logic is terrible. Everything there is a straw man, a false representation of the ID argument. For example, what the cited ID authors have been quoted as saying is not representative of the ID position as posed by the primary authors, Dembski, Behe, and Meyer. I have been researching ID theory for fifteen years now and have not come across either of the authors Robson mentions.


Robson makes the following statement which is also very misleading about the true nature of the ID argument. “Anti-evolutionists, of course, will continue to employ their probability arguments against the natural formation of proteins, cells, and the like, despite everything said in this article.” (my emphasis) I have emphasized the word “natural” because mainstream ID authors do not make probability arguments against the natural formation of proteins (some of the religious formulation ID authors may do this). The mainstream ID authors make their probability arguments against a random or truly accidental formation of proteins. Here Robson commits the logical fallacy of equivocation, falsely identifying intelligent design theory with Creation Science, and only the weakest sector within Creation Science.


Mainstream ID authors are saying nature is biased to facilitate the building of proteins, if only over long periods of time via an imperfect process, biased to statistical expectations that a truly accidental process could not achieve. They are not saying the formation of proteins is supernatural as opposed to natural, but they are saying that natural does not equate to accidental. This is because our world is designed to be life-friendly. The argument that Robson claims to have defeated is not the argument of mainstream intelligent design theory proponents or my version of the ID argument presented in this book.


Robson may be arguing against Creation Scientists here instead of intelligent design authors, but the problem is that he clumps them all together, as does the NCSE. Intelligent design theory and Creation Science are saying very different kinds of things. Intelligent design theorists, at least the mainstream ID theorists, are not even “anti-evolutionists;” they allow that evolution occurred, but assert that built-in design information must have guided the process, else the tree of life could not have been achieved in the available time.


(Note: See Appendix 6 for a careful discussion of the differences between Creation Science (6-day biblical creation) and intelligent design theory posed with all the religious elements removed.)


If, as Robson claims, intelligent design authors are the ones not interested in doing good science and math, why are we voting for the high probability theory, while he and NCSE are voting for the incredibly low probability theory? If they are interested in practicing good science, why do they forbid intelligent design theory to be amended to remove all religious elements? The theory of evolution itself started out with some versions having religious elements. These versions have since been jettisoned. But the NCSE won’t allow intelligent design theory to do the same thing, to divest itself of prior versions having connections to religion. Yes, if the secular intelligent design theories having no religious elements were admitted into science there would still be some religious proponents of ID theory who held different versions. It would be a bit confusing, but no more than the variations on theories of evolution, physics, or cosmology are confusing. Where does it say that there can be only one version of a type of theory?


Intelligent design theorists allow that ET could be the designer of life, but the NCSE and like-minded proponents of neo-Darwinian evolution insist that all possible and future formulations of intelligent design theory must be considered religion and not science—no matter what the theory actually says! That’s not good science; it violates the charter and traditional practice of science that allows theories to be freely amended so they may be advanced in the face of legitimate criticism and contrary observations.


It is also reminiscent of fascist information control. People who take such an undemocratic and unconstitutional approach to freedom of speech, thought, and scientific discussion are not the people you want to believe when they say “Trust me, I’m an expert.” Insist on seeing the so-called full supporting technical scientific argument. I will guarantee you they won’t have one; you’ll just get more circular wordplay.


Neither Emile Borel’s monkey typists nor the NCSE’s “trust me, I left the supporting argument at the office” approach can save the failed theory of accidental evolution. Nature is neither truly random nor is the time available to evolution infinite.


The real questions of evolutionary science are tied not to what is possible in infinite time with no rules of natural law and no highly ordered beginnings at the Big Bang, but to what is likely in the limited time and actual conditions of Earth’s history. And, yes, science does dismiss theories having probabilities greater than zero—every day, every time scientists must choose between two hypotheses, neither of which is fully certain or fully impossible, but which possess significantly different levels of probability.


As we discuss at length in this book, the events of cosmic and evolutionary history (combined with natural law) have strongly funneled event outcomes in the direction of life’s creation and evolution. This is not a random model. Therefore what monkeys may or may not be able to do in infinite time typing randomly is just irrelevant to the question of evolution of life on Earth. The monkeys did not have an opportunity to type Hamlet here by accident because Earth’s physical history as a plain matter of fact is not random—and it is very brief compared to the size of the task. Professor James W. Valentine reveals in chapter 3 of On the Origin of Phyla that it would, on average, take a monkey 10^180 years just to type the first sentence of Darwin's Origin of Species.


This is a trillion multiplied by itself fifteen times (10^180) years, and the Earth has existed less than 5 billion years—our universe less than 14 billion years. The genomes of the tree of life are the information equivalent of many libraries, not just a single book, and certainly not just a single line.
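The order of magnitude of the monkey-typist estimate can be sketched with a few lines of arithmetic. The alphabet size, target length, and typing speed below are illustrative assumptions, not Valentine's exact parameters; the point is only that the exponent lands in the same vast neighborhood.

```python
# Sketch of the monkey-typist arithmetic; all parameters are
# illustrative assumptions, not Valentine's published figures.
import math

def expected_log10_years(alphabet_size: int, target_length: int,
                         keystrokes_per_second: float) -> float:
    """Return log10 of the expected years to randomly type the target once."""
    # Expected attempts = alphabet_size ** target_length; work in log10
    # throughout to avoid astronomically large intermediate numbers.
    log10_attempts = target_length * math.log10(alphabet_size)
    seconds_per_year = 365.25 * 24 * 3600
    return (log10_attempts + math.log10(target_length)
            - math.log10(keystrokes_per_second)
            - math.log10(seconds_per_year))

# A 26-letter alphabet and a 130-character sentence at 10 keystrokes per
# second already yields an expected wait of well over 10^170 years.
print(expected_log10_years(26, 130, 10.0))
```

Even generous changes to the assumed typing speed or alphabet barely dent an exponent of this size, which is the substance of the comparison with Earth's few-billion-year history.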


The NCSE claims intelligent design theorists are not interested in good science, but the NCSE is not doing the math at all in terms of computing the actual probabilities in relation to the time available in evolutionary history or the size of the genomes. This amounts to selective omission of evidence (if only subconsciously), which is not science but politics.


On the other hand, William Dembski’s resource exhaustion argument does the math. It shows accidental evolution to be an untenable scientific theory. Dembski is doing strict science, mathematics and physics, and he does not ignore relevant data. What has the NCSE offered as its strict scientific argument? “Anything might happen over a long period of time” and “I left the proof in the office.”
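The core of Dembski's resource-exhaustion reasoning, as it is commonly summarized, multiplies three cosmic-scale factors to bound the total number of elementary events available in the universe's history. The figures below are the round numbers usually quoted in that summary; this is a sketch of the arithmetic, not a full treatment of his argument.

```python
# Sketch of the "universal probability bound" arithmetic commonly
# attributed to Dembski; each factor is a rounded, commonly quoted figure.
particles_in_universe = 10**80   # estimated elementary particles
transitions_per_second = 10**45  # roughly the inverse of Planck time
seconds_available = 10**25       # generous upper bound on cosmic duration

# Upper bound on elementary events the universe could ever have hosted.
max_events = particles_in_universe * transitions_per_second * seconds_available
print(max_events == 10**150)  # prints True
```

Under this framing, any specified outcome whose probability falls below about 1 in 10^150 is argued to be beyond the reach of chance given the universe's total probabilistic resources.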


In ignoring the math, using terrible logic, and voting for the improbable over the probable, who is it that is not really interested in good science, intelligent design theorists or the NCSE? And how can hearing both sides of the issue ever be a threat to science students in a democratic society? Banning any coherent theory from discussion, whether it is ultimately proved mistaken or not, is not a method of science, but politics. If lack of substantial evidence were cause for censorship, accidental evolution should never have been discussed at all!


ID authors are not asking to force ID theory down our students’ throats, only to give them a chance to hear both sides of the story. Information control such as NCSE proposes in this article has no place in a democracy. It is a totalitarian approach, not a democratic one. Our students deserve to hear both sides of this debate. 


While the intelligent design scientists at Discovery Institute do not invoke religion in any way in their formulations of intelligent design theory, other scientists have invoked religion (Creation Science proponents), but they too have scientific arguments that need to be heard minus the religious add-ons. It is only Marxists, Nazis, and Fascists that need fear a full hearing of the issues, not citizens of a free democratic country.


Our conclusion: The monkey typist example is fully disconnected from the facts of biology, physics, and cosmology. It can in no way be defended as representative of the actual biomechanics of evolution. It is another purely rhetorical trick, relevant only to the math of infinities, offering no scientific wisdom for the real world, which is nonrandom, natural law-driven, and has a relatively brief history compared to the vast problem of originating life by accident.


Marxists most likely originated this propaganda trick of misapplying Borel’s monkey typist theorem to evolutionary biology. From there it has been unwittingly adopted by scientists who were either inept at logical reasoning, who sensed that materialist politics were dominant in science and decided not to rock the political boat that employed them, or who had a conscious or unconscious bias in favor of the atheistic-materialistic worldview. Some scientists, even outside Communist nations, have adopted the atheist-materialist worldview as their personal equivalent of a religion. People with religious faith are not the only human beings that can bias the pure application of science with personal prejudices or political stratagems. Marxists, materialists, and atheists do it too.


OK, OK. I Give Up. How Can Accidental Evolution Be Demonstrated?

The whole point of the intelligent design argument, of course, is to argue that no such proof can be expected. From what we now know of microbiology and genetics, “accidental evolution” has become an oxymoron, a self-contradiction. Truly accidental mutations carry no information and therefore can, themselves, produce none.


True, accidental mutations might unlock information already present by breaking symmetry or destabilizing atoms, or by breaking open molecules in proteins or other biotic/biomolecular structures that carry information. But in this latter case the situation is like a classic novel being sent to someone in the mail. Accident did not produce the novel, but it might contribute to the event of the envelope that contains the novel being torn in order to reveal it. The huge error in neo-Darwinian theory is to confuse tearing open the envelope, which is the best accidental mutations can ever do, with writing the masterpiece.


Clearly there is no longer a way to rationally defend accidental evolution. But, since the neo-Darwinists will never give up trying, we will have to remain wary of them combining manipulated research and manipulated press to try to preserve the Marxist-materialist-atheist worldview in a world full of contrary evidence. Perhaps (with tongue in cheek) one of their future propaganda thrusts might go something like this:


Evolutionary News You Can Use

Quick sellout at playoffs. Airport gridlock inexplicably linked to genetically evolved gum.


MP (Misappropriated Press)

1 April 2030



Government Chewing Gum Scientists Announce Startling Breakthrough on Evolution!


Undisclosed National Laboratory:

Evolutionary scientists today announced the long-awaited results of a tri-decade, multibillion-dollar “totally unbiased” study on evolution jointly sponsored by the National Center for Science Education, American Atheists, and CSICOP. In this study genetic material was garnered from a host of clearly beneficial accidental mutations, then massively copied via bacterial plasmid “factories” and distilled into a line of pleasant-tasting chewing gums intended for public consumption. In this way anyone who doubts accidental evolution can demonstrate for themselves the enormous capacity for beneficial change that is resident in an accidental process simply by imposing a selected modification on his or her own genome. A few of the more popular flavors are:


Embellish-Mint – Said to be the most popular flavor among mainstream evolutionary scientists, this genetic alteration impairs the rational ability to evaluate evidence in such a way that minimal data is exorbitantly overstated. Further study of evolution is then deemed unnecessary. This is clearly beneficial because it allows for a politically expedient proof of the fiction of accidental evolution with a minimal cost to the taxpayer, freeing up funds for further tax breaks to Marxist-front nonprofit science education groups.


Imped-O-Mint – These accidental mutations interfere with brain function such that one cannot properly recognize a blueprint, machine assembly programmed instructions, or other intelligently designed mechanisms as such. The saving grace for evolution in this genetic line is that the gum itself is genetically modified. When chewed while dating, the gum lodges between the teeth, producing a dark discoloration with an odor so unpleasant as to cause the relationship to be broken off entirely. This reduces population stress and isolates small groups of persons using the gum, thus removing genetic cohesion factors, which optimizes the accidental evolutionary process.


Gov-O-Mint – An unpleasant mix of random flavors that never reach agreement. This alteration achieves no productive result, but is marketed at ten times its value. It clearly demonstrates that neo-Darwinian theory can be successfully applied to social processes as well as biology. This mutation is said to prove once and for all that inefficient processes, wasteful of both time and resources, are absolutely common in nature (at least human nature). Young consumers will be so affected that they inevitably gravitate to careers in Congress.


Pep-O-Mint – Perhaps the flavor with greatest long-term sales potential, this mutation randomly tinkers with the sex hormone regulation of the elderly and is augmented with a dose of Viagra and caffeine so intense as to guarantee that the wife will either visit relatives for the entire three weeks of the playoffs or suffer the consequences. Recommended dosage, four times a year, or as needed depending upon sports championship scheduling.


Many of the elder statesmen of evolutionary biology could not be reached for comment on this new development, and, in an odd coincidence, were all observed shuttling from Ticket-Master to the airport.


-----------(MP International)



Summary of Other Stories in the Evolutionary News:


Neo-Darwinists announce discovery of accidentally evolved airplane!


“It’s only a phylogenetic inference for now, but there is no doubt intermediate fossils will be found.”


Local pawnshop owner stymies police detective with cutting edge science.


“These watches were not stolen; they evolved spontaneously in the back lot,” the shop owner asserts. “Everyone knows William Paley has been refuted by Richard Dawkins. I just walked out back and picked them up like fishing worms.”


After several evolutionary scientists testified, the Federal Appeals Court was forced to agree and tossed the pawnshop conviction. Judge Smith opined: “My hands were tied by Kitzmiller. The detective’s case invoked the concept of intelligent design, assuming those watches could not have spontaneously arisen in the back lot of the pawn shop by sheer accident. It was not properly scientific. Counsel for the defense produced expert witnesses from the best universities in the country. They all agreed that anything can happen given sufficient time. Therefore, reasonable doubt remained.” Lead attorney for the defense, Melvin Wellae cinched the case with his usual brilliant closing argument: “Well, I…I mean, shit happens. Just look at the Congress. If chaos can come from intelligence, intelligence can come from chaos.” The case was dismissed.


What to watch for in your neighborhood: Everyone is winning at Monroe Strickberger’s Lottery!


+Bonus Story:

Monkey typists do it again: three consecutive best sellers!





Of the opinion that things could never get that bad? Think that propagandized exaggeration would never supplant rigorous science? Perhaps you’d better read this: Dr. Michael Behe on the Theory of Irreducible Complexity. What Professor Behe tells us, in essence, is that politicization of science is not confined to fiction; it has actually occurred.


A case in point is related in the April 7, 2006, issue of Science, the nation’s leading scientific publication. The politicization in the Bridgham article is not necessarily conscious and intentional. The same problem occurs throughout the neo-Darwinian rhetoric of several decades and at this point may be absorbed as an assumed element of the modern scientific culture. But my point is that by reflecting such a strong bias for accidental evolution against the clear tide of evidence now going the other way, the term “politicization” is not a misnomer. The original dynamics that generated the bias for the accidental worldview in modern science were “political” in the sense of being a personally preferred materialist or atheist worldview. This bias was later intentionally cultivated and entrenched into academia by supporters of Marxist/Communist philosophies doing overt propaganda, in some cases almost certainly including Communist government-employed intelligence agents.


Along the same lines, read the classic novels Atlas Shrugged by Ayn Rand, and That Hideous Strength, by C. S. Lewis to see that the co-opting of science by politics is a real danger. 


Behe’s analysis of the 2006 Bridgham article in Science reveals the fact that fully insignificant data is being hyped as proof of the evolutionary potential of random mutations. In this case, a mutation has done nothing more than reduce the biological efficiency of a protein. Because the ancient and modern proteins compared in the article are so similar, inheritance is confidently assumed, and the claim is then made that random mutations caused the “evolution” of the less efficient from the more efficient.


One wants to say, “Whoop de do! Well yes, I suppose it did, but this is not progression; it is regression.” And that is precisely what random mutations always do: degrade efficiency! There might be a rare exception, but the exceptions are statistically insignificant compared to the overall enormous task of producing the tree of life.


Instances of regression cannot prove progressive evolution by accident. The Bridgham study represents flagrantly invalid evolutionary logic. While it is true that not all steps in evolution were progressive, many of them must be progressive or the tree of life can’t grow to include more complex creatures from the simple ones. There is nothing wrong with studying and describing the situation with the devolved protein, but interpreting that protein as proof of the potential of accidental evolution is a transparently invalid move.


The fact that this insignificant modification is touted in the nation’s premier science journal as substantial evidence for neo-Darwinian theory tells us that this is truly the best neo-Darwinists can do. They do not have bona fide examples of accidental mutations that generate progressive complex biological designs.


There is no direct evidence for such degradations playing any major role in evolution. We have yet to find evidence of a master genome or other information bank that might have been present early on and possibly unfolded via something like symmetry breaking of a very information-rich RNA master library. While it is as likely as not that such a master genome did exist, the concept equates to having a blueprint for life’s design more perfect than the one we presently have appear at the beginning of life’s history via non-Darwinian means, and then slowly degrade through destructive mutations. But having the blueprint of life as a given at the beginning certainly suggests intelligent design, and it is not compatible with the accidental neo-Darwinian evolutionary model.


None of the versions of evolutionary theory considered by science to date, including neo-Darwinian theory, describes the degrading of superior genes into inferior ones as a major evolutionary dynamic, though obviously it has occasionally occurred. So, touting this devolved protein as being somehow a significant corroboration of the neo-Darwinian theory of accidental evolution is a pretty desperate move. It reveals how truly weak the neo-Darwinian position is with regard to hard data-based evidentiary support.


Professor Susumu Ohno’s master genome hypothesis could, in theory, reconcile the better parts of the two concepts: progressive evolution and symmetry breaking and/or mutational degradation-based devolution. The presence of a more perfect set of design information within a master genome available at the beginning of the Cambrian that then unfolded under the pressure of entropy/mutation (much as the structures of the cosmos are hypothesized to have unfolded under similar pressures after the Big Bang in so-called “symmetry breaking” events) would be an elegant and very explanatory model, if we could find the master genome. Such a model fits the real capacity of accidental mutations. They can’t build complex machines, but they might easily be able to break open “sealed” physical or biological packets of information, releasing that information into the evolutionary event process.


We must take care about language use in the evolutionary debate. On purely linguistic grounds it is proper to call a degenerative event an event of “evolution.” But it is not proper to call it “design construction by accident” or “proof of neo-Darwinian theory.” There are known instances of degenerative evolution, and studying those cases is valid as an exposition of part of the total process of life’s development. However, instances of degeneration do not show us how to build the tree of life.


There is nothing wrong with the scientific data of the Bridgham article, only the conclusions that some may try to draw from it. The data does reveal a relationship between an ancient and a modern protein. But we cannot permit transparently invalid logic to masquerade as the best science can do. This sends all the wrong signals to our science students. It is, in effect, a message that it is OK to subvert science to politics. Unfortunately, it is a practice we have too long grown used to in the arena of evolutionary thought.


Completing the Steps in the Chain of Large-Scale (Macro) Evolution

Very few studies even look at completing the bridge between microevolution and macroevolution. Perhaps that is because it would reveal just how precious few of the biomechanical steps have been described in the event history of evolution. Forget about adding up a complete inventory of the hypothesized small steps that explain each of the big evolutionary changes; we just don’t have that information.


Until a biomechanical bridge between the radically different body types on the tree of life can be established with confidence, the sparse evidence we have of small changes within a few species is not properly construable as evidence of macroevolution from a random or accidental event dynamic. It is only evidence of microevolutions of the specific types actually seen. And it is not evidence that such changes have occurred absolutely everywhere else across the full spectrum of living creatures, especially in contexts involving greater complexity and interdependence. We know a lot about living systems, but we don’t know the biomechanical pathways of evolution.


Niles Eldredge, one of our leading evolutionary theorists, informs us that even now little work is being offered in the direction of completing the bridge between micro and macroevolution. “Little work is geared to bridging the conceptual gap between microevolution and macroevolution, the latter taken simply as large-scale, long-term accrual of adaptive change.”[30] Even in this admission, Eldredge’s remarks camouflage the significance of the problem. The gap is not merely a conceptual one; it is a physical, biochemical gap that is both quantifiable and enormous. Through the history of evolutionary science Darwinian scientists have evinced the noncritical habit of simply assuming microevolution will add up to macroevolution. There is no direct support for this assumption.


Identical footsteps leading up to one side of the Grand Canyon and away from the other side are insufficient in and of themselves to prove a man leaped across the canyon, known limitations being so clearly to the contrary. The appearance of the occasional treetop at random intervals along the route of the canyon is insufficient to provide a genuine path. Yet this is comparable to the amount of the biochemical chain of events of the hypothesized event of accidental evolution that has actually been demonstrated. 


Yes, there are undeniable similarities in the genomes across the tree of life. And, yes, they do argue heavily for inheritance. But they can only do that if we grant the validity of the probability form of argument. The argument for inheritance (since we haven’t actually seen the ancestral links and cannot demonstrate them with certainty from the historical record) is that it is too improbable that such extreme similarity would occur without it having been inherited—although, in theory, as the neo-Darwinists love to say, anything can happen by accident.
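The improbability intuition behind the inheritance argument above can be made concrete with a toy calculation. The numbers here are illustrative assumptions (a 1,000-base stretch of DNA and a uniform four-letter alphabet), a deliberately simplified model rather than a real genomic analysis.

```python
# Toy version of the similarity-implies-inheritance arithmetic.
# Assumed parameters: 1,000 bases, uniform random 4-letter alphabet.
import math

n_bases = 1000
p_same_base = 1 / 4  # chance two independently random bases agree

# Probability that two unrelated random sequences match at every position,
# expressed as a base-10 exponent to keep the number readable.
log10_p_identical = n_bases * math.log10(p_same_base)
print(round(log10_p_identical))  # prints -602, i.e. odds of ~1 in 10^602
```

Real sequence comparison is far more sophisticated than this (alignments, unequal base frequencies, partial matches), but the toy model shows why near-identity across long sequences is taken as overwhelming evidence against independent chance origin.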


Once we grant the probability form of argument (as science certainly does grant everywhere else), accidental evolution loses scientific credibility. We are then obliged to acknowledge the plausibility of intelligent design. The neo-Darwinists have trapped themselves in trying to disavow the use of the probability argument when it supports intelligent design because throwing out the probability argument means there is no reason to grant inheritance across the tree of life. If we throw out inheritance we don’t even have evidence for the basic theory of evolution.


Fossils may have given us the gross superstructure of the tree of life (once again only within certain, admittedly substantial probabilities), but they have not given us the biomechanics of an accidental evolutionary process. To date we have nothing of substance of the biomechanical pathway of accidental evolution other than a few small microevolutions.[31] The biomechanics of the full path to the primary macroevolutions remains unknown.


But Doesn’t Homology Indirectly Prove the Tree of Life Largely Correct?

Yes, I will agree that homology establishes the relationships among creatures on the tree of life to high ranges of probability. This doesn’t give us a gap-less tree or a biomechanical pathway amenable to accidental events, however. Homology, in the most general sense, is similarity. This might be similarity of structural patterns, similar (not necessarily identical) genetic sequences, common cellular processes, common developmental processes, common anything, really.[32] Inheritance is strongly established by these homologies (if we grant the probability argument), and the tree of life sketched out by tracing homologous relationships can confidently be assumed to usually be close to the historical event of evolution.


But, perhaps surprisingly, even the seemingly safe minimal claim of inheritance is not established with full certainty by nested similarities or homologies alone—certainly an accidental process is not established. A similarly impressive chart of functional, structural, and compositional similarities can be drawn between human-manufactured equipment and machines: an army tank, a bulldozer, a forklift, a farm tractor, a dump truck, and a school bus. They are all structurally and functionally related, but neither accidental mutation nor physical inheritance is the origin of their similarities.


The similarities in those vehicles derive from having intelligent human designers in common, or designers who borrowed among themselves. Although borrowing and intellectual inheritance often occurred, sometimes they didn’t. Despite the somewhat different purposes of the vehicles, substantial similarity was unavoidable. The designers were constrained by the physical necessity of the vehicles all having to do similar tasks on the same planet, so the nature of the task drove similar solutions in multiple design applications. Even where optional engineering approaches were available, why reinvent the “wheel” when a proven solution is ready to hand?


Should we demand that intelligent designers be devoid of practical wisdom? The strong appearance of inheritance therefore does not disprove intelligent design. Even proved inheritance is compatible with intelligent design in evolutionary biology. If you are the designer, why not take advantage of a natural cascade of event dynamics that does a large part of the work for you?


The various hypothetical trees of life that Darwinists have constructed by tracing structural, functional, and genetic similarities are, I confess, a good bet to be the actual path evolution took. They are the best bet we have. So, yes, phylogenetic inference construction based upon genetic sequence comparisons is a rigorous science,[33] and perhaps a fine art and a fascinating craft as well. But the exact version of the family tree of life that process produces depends in part upon starting assumptions. And it depends upon the specific inferential method employed; the methods of phylogenetic inference vary widely among researchers and are far from fully proved. And there are loose ends, contradictions, unanswered questions, and, above all else, less than full certainty.


Two questions remain presently unanswered that, when answered, could radically affect our view of the reliability of a particular approach to sequence analysis, which in turn affects phylogenetic inferences: 1) the question of the extent of convergence in evolutionary history—the extent to which convergence upon the same or similar genetic sequences was achieved through different physical-historical routes; and 2) the possibility of the presence of Susumu Ohno's "pananimalia" genome (or the functional equivalent of such a genome subtly hidden in a plethora of locations across natural systems perhaps reaching to the subatomic level).


So, a certain amount of humility is called for in asserting a given phylogenetic tree of life. At a minimum, having these trees full of largely reliable educated guesses does not argue one whit for an accidental process because the gaps between the taxonomic entries on the trees are frequently large and nongradual. While these phylogenetic inferences are well-supported generally, they do not tell us how the steps between creatures were biomechanically achieved.


The bottom line here is that the presence of homology in the tree of life is fully compatible with intelligent design. For example, the classic case of homology, the pentadactyl (five-digit) limb, provides a beneficial, if not optimal, skeletal structure for hands, feet, wings and fins. It is therefore a logical feature for an intelligent designer to reuse across a large taxonomic span of bioengineering applications where organisms must perform similar tasks in similar environs. It is also a logical thing for natural selection to preserve, regardless of intelligent or non-intelligent origins, due to its enormous survival fitness advantage.


Neo-Darwinists feel that a living biological inheritance is necessary to explain the substantial similarities observed among very disparate creatures.[34] In other words, they feel it would be too improbable that such complex similarities could be achieved through independent multiple routes by accident. Obviously, this could well be true, but the reader should by now recognize that the neo-Darwinist reasoning relies upon the validity of the probability form of argument, which argument also establishes intelligent design as a more likely explanation of life than accident. Therefore, when neo-Darwinists dismiss the ID probability argument out of hand they are contradicting their own argument for basic evolution! They can’t have it both ways.


What About Progression from Simple to Complex—and the Exceptions?

The order of the main groups in the fossil record does reflect a rough and irregular progression from the simple to the complex, although it is not a universal theme. This hardly refutes intelligent design. The logical sequence of building algae and simple plants first in order to produce the oxygen-rich atmosphere needed for complex animal life makes as much sense for intelligence as for an accident. Preparing food sources before introducing the creatures that eat them makes perfect sense for both approaches as well. Food is simpler than the creature that eats it, so accident as well as an intelligent designer would stumble upon the food first in most cases, with a few exceptions like tigers eating humans.


Most machines that humans put together (by intelligent design) are built the same way: logical steps, often using modular design increments, moving from the simple to the complex. Complex machine construction can rarely be done any other way. Merely moving from simple to complex therefore tells us nothing about whether a process is driven by intelligence or accident. And the fact that not all evolutionary lineages moved toward more complex forms doesn’t rule out intelligence as the source of the tree of life. As Kurt Vonnegut once said of literary methods, “If it works, it works.”


While much of what we see in the history of Earth is compatible with both intelligent design and an accidental process, the absence of indications of a haphazard approach in the fossil record is more telling. It strongly argues that accident was not the engine of evolution.


What indications could there be of a haphazard approach? Massive waste in the form of dysfunctional organisms—broken designs. Appendix 2 to this book argues in terms of numbers a principle we already know from direct experience: an accidental process is wasteful.


To illustrate this, let the reader suppose for a moment that he or she is an art student. We have all been there, if only in kindergarten, with mixed success. So we saunter into class, slide into our desks, and find out that today’s assignment is either to paint or to do origami (Japanese paper folding). We choose origami and are told to fold a simple triangle. Fine, there it is, another job well done. (We’ll just slip the scraps from our failures into our textbook, like that…and…no one will see them.) Well, it’s going to be a good day after all. What’s next?


The instructor is writing on the board again—not a good sign. “Now,” the teacher announces, “let’s all make ten hierarchically embedded levels of design in our art, including the origami. The first level must have 3,000,000,000 attributes that are each functionally related to at least 5 others (corresponding to the human genome), and the other levels several million each, also with five functional relations. The ten levels correspond to the minimum vertically integrated levels of complexity of a living animal’s biological system: gene; gene regulation system; cellular subsystems; cells; tissues; organs; inter-organ communication/regulation systems; systems with body-wide functions such as the respiratory, glandular and circulatory systems; functional body parts; and the total body plan.”
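A rough tally shows the scale of the assignment just described. The figures below come from the text, except the 2,000,000-per-level value, which is an assumption standing in for the text's "several million each":

```python
# Back-of-envelope tally of the art assignment's scale, using the figures in
# the text. The 2,000,000-per-level value is an assumed stand-in for the
# text's "several million each".

first_level_attributes = 3_000_000_000  # "corresponding to the human genome"
other_level_attributes = 2_000_000      # assumption for "several million each"
other_levels = 9                        # ten levels total, minus the first
relations_per_attribute = 5             # each attribute relates to at least 5 others

total_attributes = first_level_attributes + other_levels * other_level_attributes
total_relations = total_attributes * relations_per_attribute  # counted per endpoint

print(f"{total_attributes:,} attributes")           # 3,018,000,000 attributes
print(f"{total_relations:,} functional relations")  # 15,090,000,000 relations
```

Even under this conservative reading, the student faces on the order of three billion attributes and fifteen billion functional relations to satisfy.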


My point about wasteful indications in the fossil record is this: how many sheets of paper will go into the trashcan before you meet the instructor’s specifications this time? Given infinite time and resources, you might do it. Well, maybe, maybe not. Given finite time and finite resources? The question then becomes, “Well, how much time and how many resources?” What if the instructor wants the project turned in before class lets out? And this is with intelligence on your side.


Have the instructor tell you to do the same project blindfolded, without aid of memory, and using no preconceived design concept, and what are your chances? Nil, for practical purposes. Is a pure accident going to get this job done? No. And will there be clear physical indications left behind if accident should try such a thing for millions of years? Yes, and lots of them.


Here you see the basic gist of Professor William Dembski’s resource exhaustion argument, based upon the “probability bound.” There aren’t enough hours and “sheets of paper” available in the history of the universe to give you a trillionth of a trillionth of a trillionth of a trillionth of one percent of probability of success. In this situation the teacher flunks, not the student. The assumption that all the biochemical steps in the enormous evolutionary chain can be accomplished by accident is unreasonable and unscientific. It conflicts with the fossil record and runs afoul of honest math.
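The arithmetic behind Dembski's bound can be sketched in a few lines. The three factors below are Dembski's published estimates (they do not appear in this chapter): roughly 10^80 elementary particles in the observable universe, at most 10^45 state changes per particle per second (the Planck-time limit), and a generous 10^25 seconds of elapsed time.

```python
# A sketch of the arithmetic behind Dembski's "universal probability bound".
# The three factors are his published estimates, not figures from this chapter.

particles_in_universe = 10**80   # estimated elementary particles
transitions_per_second = 10**45  # Planck-time limit on state changes per particle
seconds_available = 10**25       # generous upper bound on elapsed time

# Maximum number of elementary trial-and-error events the universe could host.
total_trials = particles_in_universe * transitions_per_second * seconds_available
print(total_trials == 10**150)   # True

# On this argument, any specified event with probability below 1/total_trials
# is beyond the reach of blind trial-and-error anywhere in cosmic history.
universal_probability_bound = 1 / 10**150
```

The argument then runs: if assembling a given biological structure by chance has a probability below about one in 10^150, the universe simply lacks the "sheets of paper" to make success plausible.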



Main Menu


[1] Richard Dawkins takes care to point out that the true theory of punctuated equilibrium as originated by Gould and Eldredge does not assert that there are large non-gradual gaps in the steps in evolution, but, rather, that at times evolution speeds up while still maintaining its gradual character. For my part, I concede only that this is the definition of the theory of punctuated equilibrium, not that it represents what actually happened in historical evolution. The fossil record does in fact show large gaps that are bridged over relatively brief periods of time. In the absence of a known, definite biomechanical pathway employed by nature to generate these evolutionary segments, there is nothing to rule out a non-gradual process having been used, and now with what we know of genetics and microbiology, much to suggest a non-gradual process.

[2] C. R. C. Paul, “The Adequacy of the Fossil Record,” in Problems of Phylogenetic Reconstruction, edited by K. A. Joysey and A. E. Friday (London: Academic, 1982).


[3] Futuyma, Evolutionary Biology, 131.

[4] Bruce Alberts, et al., Molecular Biology of the Cell, 3rd ed. (New York: Garland Publishing, Inc., 1994), 394. W. E. Lonnig and H. Saedler, “Chromosome Rearrangements and Transposable Elements,” Annual Review of Genetics, vol. 36 (2002): 389-410. Scott F. Gilbert, John M. Opitz, and Rudolf A. Raff, “Resynthesizing Evolutionary and Developmental Biology,” Developmental Biology, vol. 173, no. 2 (1996): 357-372. Gilbert et al. say that the emerging synthesis in evolutionary biology is that the classic Darwinian model for macroevolution that says population dynamics can explain macroevolutionary events has failed. They cite segments of the developmental genome, called “morphogenetic fields,” as the primary source of macroevolution.

[5] W. E. Lonnig, “Dynamic Genomes, Morphological Stasis, and the Origin of Irreducible Complexity,” in Dynamical Genetics, edited by Valerio Parisi, Valeria De Fonzo and Filippo Aluffi-Pentini (Kerala, India: Research Signpost, 2004).

[6] Michael J. Behe, “Self-Organization and Irreducibly Complex Systems: A Reply to Shanks and Joplin,” Philosophy of Science, vol. 67, no. 1 (2000): 133-155.

[7] Shapiro, “Repetitive DNA,” 228.

[8] Darwin, Origin, 210.

[9] Paul Nelson, “Uncovering the Hidden Meanings of the Genome,” Access Research Network, Literature Survey 19:1, published to the Internet; Svetlana A. Shabalina and Nikolay A. Spiridonov, “The Mammalian Transcriptome and the Function of Non-coding DNA Sequences,” Genome Biology, vol. 5, no. 4 (2004): 4.

[10] Margaret G. Kidwell and Damon R. Lisch, “Transposable Elements and Host Genome Evolution,” Trends in Ecology & Evolution, vol. 15, no. 3 (2000): 95-99.

[11] James A. Shapiro and Richard von Sternberg, “Why Repetitive DNA is Essential to Genome Function,” Biological Reviews, vol. 80, no. 2 (2005): 243; Christopher D. Smith, Sheng Qiang Shu, Christopher J. Mungall, and Gary H. Karpen, "The Release 5.1 Annotation of Drosophila melanogaster Heterochromatin," Science, vol. 316, no. 5831 (2007): 1586-1591.

[12] Gang Fang, Eduardo Rocha, and Antoine Danchin, “How Essential Are Nonessential Genes?” Molecular Biology and Evolution, vol. 22, no. 11 (2005): 2147.

[13] Niall Shanks and Karl H. Joplin, “Redundant Complexity: A Critical Analysis of Intelligent Design in Biochemistry,” Philosophy of Science, vol. 66, no. 2 (1999): 268-282.

[14] Christiane Nüsslein-Volhard and Eric Wieschaus, “Mutations Affecting Segment Number and Polarity in Drosophila,” Nature, vol. 287, no. 5785 (1980): 795-801.

[15] David D. Pollock and John C. Larkin, “Estimating the Degree of Saturation in Mutant Screens,” Genetics, vol. 168, no. 1 (2004): 489-502.

[16] Meyer, “Biological Information,” 219; Douglas D. Axe, “Extreme Functional Sensitivity to Conservative Amino Acid Changes on Enzyme Exteriors,” Journal of Molecular Biology, vol. 301, no. 3 (2000): 585-595.

[17] Rensberger, Life Itself, chap. 12.

[18] Michael Behe, “Darwin’s Breakdown: Irreducible Complexity and Design at the Foundation of Life” in Dembski, Intelligence, 94.

[19] Cristina Tufarelli, “The Silence RNA Keeps: cis Mechanisms of RNA Mediated Epigenetic Silencing in Mammals,” Philosophical Transactions of the Royal Society, vol. 361 (2006): 67-79.

[20] Robert C. James and Glen James et al., Mathematics Dictionary, 5th ed. (New York: Van Nostrand Reinhold, 1992), 330-331; James W. Armstrong, Elements of Mathematics, 2nd ed. (New York: Macmillan Publishing Co., Inc., 1976), chap. 9; Amir D. Aczel, Probability 1 (New York: Harcourt Brace & Company, 1998), chap. 11.

[21] Schroeder, Science of God, 26.

[22] Robert E. Krebs, Scientific Laws, Principles and Theories (Westport, Connecticut: Greenwood Press, 2001), 2, 4, 6, 10-11.

[23] John Gribbin, Quantum, 417.

[24] David Chandler, “Too Close for Comfort,” New Scientist, vol. 186, no. 2505 (2005): 37. Also see the current impact risks at NASA’s Near Earth Object Program website.

[25] Robert Shapiro, “A Simpler Origin for Life,” Scientific American, vol. 296, no. 6 (2007): 50.

[26] Mark Ridley, The Cooperative Gene (New York: The Free Press, 2001); Michael J. Sanderson, "Reconstructing the History of Evolutionary Processes Using Maximum Likelihood" in Douglas M. Fambrough, ed., Molecular Evolution of Physiological Processes (New York: The Rockefeller University Press, 1994), 13-26.

[27] W. Ford Doolittle, “Evolutionary Creativity and Complex Adaptations: A Molecular Biologist’s Perspective,” in Creative Evolution, edited by John H. Campbell and J. William Schopf (Boston: Jones and Bartlett Publishers, 1994), 52.

[28] For a concise historical synopsis of the antiquated and fully incorrect protoplasm theory see Stephen C. Meyer, Signature in the Cell (New York: HarperCollins Publishers, 2009), 44-50.

[29] See Fazale Rana and Hugh Ross, Origins of Life; Michael Denton, Nature’s Destiny, and Stephen Meyer, “Biological Information.”

[30] Niles Eldredge, Macroevolutionary Dynamics: Species, Niches, and Adaptive Peaks (New York: McGraw-Hill, 1989), 59.

[31] Granville Sewell, “A Mathematician’s View of Evolution,” The Mathematical Intelligencer, vol. 22, no. 4 (2000): 5-7.

[32] Brian K. Hall, ed., Homology: The Hierarchical Basis of Comparative Biology (San Diego: Academic Press, 1994). The scientific definition of homology is technically more complex than I have represented here. In fact, there are many and varied definitions. Substantial professional disagreements remain concerning homology.

[33] Gonzalo Giribet, “Stability in Phylogenetic Formulations and Its Relationship to Nodal Support,” Systematic Biology, vol. 52, no. 4 (2003), 554.

[34] W. Ford Doolittle, “The Nature of the Universal Ancestor and the Evolution of the Proteome,” Current Opinion in Structural Biology, vol. 10, no. 3 (2000): 355-358.