
LATEST DEVELOPMENTS

January 2018: Fisher’s Famous Theorem Has Been “Flipped”

 

The 2014 edition of Genetic Entropy stated that a publication was in preparation that would disprove the historically pivotal “Fundamental Theorem of Natural Selection”, developed by Ronald Fisher. This key new paper has finally been published.

 

Ronald Fisher was one of the great scientists of the last century. His theorem, published in 1930, was the foundational work that gave rise to neo-Darwinian theory and the field of population genetics. This new paper shows that Fisher’s mathematical formulation and his conclusion were wrong. Furthermore, the new paper corrects Fisher’s work — thus reversing Fisher’s thesis and establishing a new theorem. Fisher had claimed that his theorem was a mathematical proof of evolution — making the continuous increase in fitness a universal and mathematically certain natural law. The corrected theorem shows that just the opposite is true — fitness must very consistently degenerate — making macroevolution impossible. The new paper, by Basener and Sanford, is in the Journal of Mathematical Biology (available here).

 

Fisher described his theorem as “fundamental” because he believed he had discovered a mathematical proof for Darwinian evolution. He described his theorem as equivalent to a universal natural law — on the same level as the second law of thermodynamics. Fisher’s self-proclaimed new law of nature was that populations will always increase in fitness — without limit, as long as there is any genetic variation in the population. Therefore evolution is like gravity — a simple mathematical certainty. Over the years, a vast number of students of biology have been taught this mantra — that Fisher’s Theorem proves that evolution is a mathematical certainty.

 

The authors of the new paper describe the fundamental problems with Fisher’s theorem. They then use Fisher’s own first principles to reformulate and correct the theorem. They have named the corrected theorem The Fundamental Theorem of Natural Selection with Mutations. The correction of the theorem is not a trivial change — it literally flips the theorem on its head. The resulting conclusions are clearly in direct opposition to what Fisher had originally intended to prove.

 

In the early 1900s, Darwinian theory was in trouble scientifically. Darwin’s writings were primarily conceptual in nature, containing a great deal of philosophy and a great deal of speculation. Beyond simple observations of nature, Darwin’s books generally lacked genuine science (experimentation, data analysis, the formulation of testable hypotheses). Darwin had no understanding of genetics, and so he had no conception of how traits might be passed from one generation to the next. He only had a very vague notion of what natural selection might actually be acting upon. He simply pictured life as being inherently plastic and malleable, so evolution was inherently fluid and continuous (think Claymation). When Mendel’s genetic discoveries were eventually brought out of the closet, it could be seen that inheritance was largely based upon discrete and stable packets of information. That indicated that life and inheritance were not like Claymation, and that biological change over time was not based upon unlimited plasticity or fluidity. Mendel’s discrete units of information (later called genes), were clearly specific and finite, and so they only enabled specific and limited changes. At that time it was being said: “Mendelism has killed Darwinism”.

 

Fisher was the first to reconcile the apparent conflict between the ideas of Darwin and the experimental observations of Mendel. Fisher accomplished this by showing mathematically how natural selection could improve fitness by selecting for desirable genetic units (beneficial alleles) while simultaneously selecting against undesirable genetic units (deleterious alleles). He showed that, given zero new mutations, the more good and bad alleles there are in a population, the more natural selection can improve the fitness of that population. This is the essence of Fisher’s Theorem. It was foundational for neo-Darwinian theory — which now reigns supreme in modern academia.
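For readers who want the formula, the theorem is usually summarized in something like the following form (a standard textbook paraphrase, not Fisher’s exact notation):

$$\frac{d\bar{w}}{dt} \;=\; \sigma^{2}_{A}(w)$$

where $\bar{w}$ is the population’s mean fitness and $\sigma^{2}_{A}(w)$ is the additive genetic variance in fitness. Since a variance can never be negative, the theorem taken by itself says that mean fitness can only rise, or stand still once the variance is exhausted.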

 

Remarkably, Fisher’s theorem by itself describes a self-limiting process — once all the bad alleles are eliminated, and once all the individuals carry only good alleles, there is nothing left to select, and so selective progress must stop. The end result is that the population improves slightly and then becomes locked in stasis (no further change). It is astounding that Fisher’s Theorem does not explicitly address this profound problem! Newly arising mutations are not even part of Fisher’s mathematical formulation. Instead, Fisher simply added an informal corollary (which was never proven), extrapolating from his simple proof. He assumed that a continuous flow of new mutations would continuously replenish the population’s genetic variability, thereby allowing continuous and unlimited fitness increase.

 

The authors of the new paper realized that one of Fisher’s pivotal assumptions was clearly false, and in fact was falsified many decades ago. In his informal corollary, Fisher essentially assumed that new mutations arise with a nearly normal (symmetrical) distribution of effects – with an equal proportion of good and bad mutations (so that new mutations would have a net fitness effect of zero). We now know that the vast majority of mutations in the functional genome are harmful, and that beneficial mutations are vanishingly rare. The simple fact that Fisher’s premise was wrong falsifies Fisher’s corollary. Without the corollary, Fisher’s Theorem proves only that selection improves a population’s fitness until selection exhausts the initial genetic variation, at which point selective progress ceases. Apart from his corollary, Fisher’s Theorem shows only that within an initial population of variant alleles there is limited selective progress followed by terminal stasis.

 

Since we now know that the vast majority of mutations are deleterious, we can no longer assume that mutation plus natural selection will lead to increasing fitness. For example, if all mutations were deleterious, it should be obvious that fitness would always decline, and that the rate of decline would be proportional to the rate and severity of the deleterious mutations.
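As a back-of-envelope illustration of that proportionality (not a result from the new paper): if each individual receives on average $U$ new deleterious mutations per generation, each costing an average fraction $\bar{s}_d$ of fitness, and selection removes none of them, then mean fitness falls by roughly

$$\Delta\bar{w} \;\approx\; -\,U\,\bar{s}_d\,\bar{w}$$

per generation. Doubling either the mutation rate or the average severity doubles the rate of decline.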

 

To correct Fisher’s Theorem, the authors of the new paper needed to reformulate Fisher’s mathematical model. The problems with Fisher’s theorem were that: 1) it was initially formulated in a way that did not allow for any type of dynamical analysis; 2) it did not account for new mutations; and 3) it consequently did not consider the net fitness effect of new mutations. The newly formulated version of Fisher’s theorem has now been mathematically proven. It is shown to yield results identical to the original formulation when the original formulation’s assumption of zero mutations is used. The new theorem incorporates two competing factors: a) the effect of natural selection, which consistently drives fitness upward; and b) the effect of new mutations, which consistently drives fitness downward. It is shown that the actual efficiency of natural selection and the actual rate and distribution of new mutations determine whether a population’s fitness will increase or decrease over time. Further analysis indicates that realistic rates and distributions of mutations make sustained fitness gain extremely problematic, while fitness decline becomes more probable. The authors observe that the more realistic the parameters, the more likely fitness decline becomes. The new paper seems to have turned Fisher’s Theorem upside down, and with it, the entire neo-Darwinian paradigm.
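The following is a minimal toy sketch of the two competing factors described above. It is not the Basener-Sanford model itself (which tracks a full distribution of fitness classes); the parameter names and values are illustrative assumptions chosen only to show how the balance of the two terms decides whether mean fitness rises or falls.

```python
# Toy sketch (illustrative only): mean fitness changes each generation by a
# selection term (proportional to the fitness variance, echoing Fisher's theorem)
# plus a mutation term (mutation rate times the net average mutational effect).
# All numbers below are assumed for illustration, not taken from the paper.

def simulate_mean_fitness(generations=1000,
                          fitness_variance=1e-4,          # assumed variance in fitness (held constant for simplicity)
                          mutations_per_generation=1.0,   # assumed new mutations per individual per generation
                          mean_mutation_effect=-2e-4):    # assumed net fitness effect per mutation (mostly harmful)
    w = 1.0  # starting mean fitness
    for _ in range(generations):
        selection_gain = fitness_variance                                  # pushes mean fitness up
        mutation_shift = mutations_per_generation * mean_mutation_effect   # pushes mean fitness down
        w += selection_gain + mutation_shift
    return w

if __name__ == "__main__":
    # With these assumed numbers the downward mutational pressure outweighs the
    # upward selective pressure, so mean fitness declines from 1.0 to about 0.9.
    print(f"Mean fitness after 1000 generations: {simulate_mean_fitness():.3f}")
```

Changing the assumed mutation rate or average mutational effect relative to the variance flips the outcome, which is the point of the corrected theorem: the direction of fitness change depends on those parameters and is not guaranteed by selection alone.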

 

Supplemental Information — Fisher’s informal corollary (really just a thought experiment) was convoluted. The essence of Fisher’s corollary was that the effects of good and bad mutations should be more or less equal — so their net effect should be more or less neutral. However, the actual evidence available to Fisher at that time already indicated that mutations were overwhelmingly deleterious. Fisher acknowledged that most observed mutations were clearly deleterious — but he imagined that this special class of highly deleterious mutations would easily be selected away, and so could be ignored. He reasoned that this might leave behind a different class of invisible mutations that all had a very low impact on fitness — and which would have a nearly equal chance of being either good or bad. This line of reasoning was entirely speculative and is contrary to what we now know. Ironically, such “nearly-neutral” mutations are now known to also be nearly invisible to natural selection — precluding their role in any possible fitness increase. Moreover, mutations are overwhelmingly deleterious — even the low-impact mutations. This means that the net effect of such “nearly-neutral” mutations, which are all invisible to selection, must be negative, and must contribute significantly to genetic decline. Furthermore, it is now known that the mutations that contribute most to genetic decline are the deleterious mutations that are intermediate in effect — not easily selected away, yet impactful enough to cause serious decline.

 

January 2018: More Real World Evidence of Genetic Degeneration

 

The newest edition of Genetic Entropy (2014) shows that genetic degeneration is not just a theoretical concern, but is observed in numerous real-life situations. Genetic Entropy reviews research showing: a) the ubiquitous genetic degeneration of the somatic cells of all human beings; and b) the genetic germline degeneration of the whole human population. Likewise, Genetic Entropy reviews research showing rapid genetic degeneration in the H1N1 influenza virus. Genetic Entropy also documents “evolution in reverse” in the famous LTEE bacterial experiment (article available here).

 

  • A new paper (Lynch, 2016), written by a leading population geneticist, shows that human genetic degeneration is a very serious problem. He affirms that the human germline mutation rate is roughly 100 new mutations per person per generation, while the somatic mutation rate is roughly 3 new mutations per cell division. Lynch estimates that human fitness is declining by 1-5% per generation, and he adds: “most mutations have minor effects, very few have lethal consequences, and even fewer are beneficial.”
     

  • Our new book “Contested Bones” (available at ContestedBones.org) cites evidence showing that the early human population referred to as Neanderthal (Homo neanderthalensis) was highly inbred, and had a very high genetic load (40% less fit than modern humans) (Harris and Nielsen, 2016; Roebroeks and Soressi, 2016). See pages 315-316. This severe genetic degeneration probably contributed to the disappearance of that population (Prüfer et al., 2014; Sankararaman et al., 2014).
     

  • Similarly, the new book Contested Bones (pages 86-89), cites evidence that the early human population referred to as “Hobbit” (Homo floresiensis), was also inbred and apparently suffered from a special type of genetic degeneration called “reductive evolution” (insular dwarfing) (Berger et al., 2008; Morwood et al., 2004). This results in reduced body size, reduced brain volume, and various pathologies (Henneberg et al., 2014).
     

  • Contested Bones (pages 179-210) also cites evidence that the early human population referred to as Naledi (Homo naledi), was likewise inbred and suffered from “reductive evolution”, again resulting in reduced body size, reduced brain volume, and various pathologies.
     

  • Contested Bones (pages 53-75) also cites evidence that many other early human populations, broadly referred to as Erectus (Homo erectus), were inbred and suffered from “reductive evolution” (Anton, 2003). However, it seems the genetic degeneration of Erectus was less advanced—generally resulting in more moderate reductions in body size, brain size, and pathologies. Indeed, many paleoanthropologists would fold both Hobbit and Naledi into the more diverse Erectus category.
     

  • An important but overlooked paper, written by leading population geneticists (Keightley et al., 2005), reported that the two hypothetical populations that gave rise to modern man and modern chimpanzee must both have experienced continuous genetic degeneration during the last 6 million years. The problems associated with this claim should be obvious. Their title is: Evidence for Widespread Degradation of Gene Control Regions in Hominid Genomes, and they state that there has been the “accumulation of a large number of deleterious mutations in sequences containing gene control elements and hence a widespread degradation of the genome during the evolution of humans and chimpanzees” (emphasis added).
     

  • A new paper (Graur, 2017) shows that if a substantial fraction of the human genome is functional (i.e., is not junk DNA), then the evolution of man would not be possible (due to genetic degeneration). Graur states that human evolution would be very problematic even if the genome were 10% functional, but would be completely impossible if 25% or more were functional. Yet the ENCODE project shows that at least 60% of the genome is functional.
     

  • A new paper (Rogers and Slatkin, 2017), shows that mammoth populations were highly inbred and carried an elevated genetic load (likely contributing to their extinction due to “mutational meltdown”).  
     

  • A paper (Kumar and Subramanian, 2002) shows that mutation rates are similar for all mammals, when based on mutation rate per year (not per generation). This means that mammals (both mice and men) should degenerate similarly in the same amount of time. This suggests that the major mutation mechanisms are not tightly correlated to cell divisions.
     

  • A new paper (Ramu et al., 2017) shows that the tropical crop cassava has been accumulating many deleterious mutations, resulting in a serious and increasing genetic load, and a distinct decline in fitness.
     

  • Another paper (Mattila et al. 2012), shows high genetic load in an old isolated butterfly population. “This population exemplifies the increasingly common situation in fragmented landscapes, in which small and completely isolated populations are vulnerable to extinction due to high genetic load.”
     

  • Another paper (Holmes, E. C. 2003) shows that all RNA viruses must be young—less than 50,000 years old. This is consistent with our H1N1 influenza study, which shows that RNA virus strains degenerate very rapidly.

 

References:

 

Anton, S.C. 2003. Natural history of Homo erectus. Yearbook of Physical Anthropology 46:126-169.

Berger, L.R., et al. 2008. Small-bodied humans from Palau, Micronesia. PLOS ONE 3(3):e1780.

Graur, D. 2017. An upper limit on the functional fraction of the human genome. Genome Biology and Evolution 9(7):1880-1885. doi:10.1093/gbe/evx121

Harris, K. and Nielsen, R. 2016. The genetic cost of Neanderthal introgression. Genetics 203:881-891.

Henneberg, M., et al. 2014. Evolved developmental homeostasis disturbed in LB1 from Flores, Indonesia denotes Down syndrome and not diagnostic traits of the invalid species Homo floresiensis. PNAS 111(33):11967-11972.

Holmes, E.C. 2003. Molecular clocks and the puzzle of RNA virus origins. Journal of Virology, April 2003, pp. 3893-3897.

Keightley, P.D., Lercher, M.J., and Eyre-Walker, A. 2005. Evidence for widespread degradation of gene control regions in hominid genomes. PLoS Biology 3(2):e42.

Kumar, S. and Subramanian, S. 2002. Mutation rates in mammalian genomes. PNAS 99(2):803-808.

Lynch, M. 2016. Mutation and human exceptionalism: our future genetic load. Genetics 202:869-875. http://www.genetics.org/content/202/3/869

Mattila, A., et al. 2012. High genetic load in an old isolated butterfly population. PNAS, published online August 20, 2012. doi:10.1073/pnas.1205789109

Morwood, M.J., et al. 2004. Archaeology and age of a new hominin from Flores in eastern Indonesia. Nature 431:1087-1091.

Prüfer, K., et al. 2014. A complete genome sequence of a Neanderthal from the Altai Mountains. Nature 505(7481):43-49.

Ramu, P., et al. 2017. Cassava haplotype map highlights fixation of deleterious mutations during clonal propagation. Nature Genetics 49(6):959-965.

Roebroeks, W. and Soressi, M. 2016. Neandertals revised. PNAS 113(23):6372-6379.

Rogers, R. and Slatkin, M. 2017. Excess of genomic defects in a woolly mammoth on Wrangel island. PLOS Genetics, March 2, 2017. doi:10.1371/journal.pgen.1006601

Rupe, C. and Sanford, J. 2017. Contested Bones. FMS Publications, Waterloo, NY.

Sankararaman, S., et al. 2014. The genomic landscape of Neanderthal ancestry in present-day humans. Nature 507(7492):354-357.

Real-life Examples of GE

DECEMBER 2015: Sanford et al. have recently published a peer-reviewed paper that validates a major claim made in the book Genetic Entropy. The paper was published September 17, 2015, in the journal Theoretical Biology and Medical Modeling (“The waiting time problem in a model hominin population”).

The book Genetic Entropy asserts that there is a profound waiting time problem (see 2014 edition, chapter 9, pages 133-136). This assertion strongly supports the previous work by Behe and others. Stated most succinctly, the waiting time problem is simply this: there is not enough time for evolution to establish even the most trivial amount of new information. For example, in a typical mammalian population there is not enough time to establish within the population's genome as much information as would be equivalent to a specific "word" (string of nucleotides) in a specific context. This waiting time problem was illustrated in the book Genetic Entropy using some simple calculations. The calculations were based upon the known human mutation rate (per nucleotide per generation), a generation time of 20 years, a population size of 10,000 individuals, and a 1% fitness benefit for all individuals that carry the newly created target "word" (i.e., a specific string of nucleotides) within a specific genomic location (context). These calculations showed that for such a population to establish even a single-letter word required, on average, about 18 million years. It was argued that as the word size increases linearly, the waiting time would increase exponentially.
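The following is a rough back-of-envelope sketch of this kind of waiting-time estimate. It uses a standard "origin-fixation" approximation (the rate at which copies of the exact target mutation arise, times the chance that a new beneficial mutation fixes), not the book's own derivation or the paper's numerical simulations, and the specific parameter values are assumptions stated in the comments.

```python
# Back-of-envelope waiting-time sketch (illustrative only, not the book's or
# the paper's actual calculation). Parameters follow the scenario described
# above; the per-site mutation rate is an assumed round number.

per_site_mutation_rate = 1e-8   # assumed human mutation rate per nucleotide per generation
population_size = 10_000        # population size in the scenario
selection_benefit = 0.01        # 1% fitness benefit for carriers of the target "word"
generation_time_years = 20      # assumed generation time
fraction_correct_base = 1 / 3   # a site can mutate to 3 other bases; only 1 is the target letter

# New copies of the exact target mutation appearing per generation (2N gene copies).
new_target_mutations = 2 * population_size * per_site_mutation_rate * fraction_correct_base

# Classic approximation: a new beneficial mutation fixes with probability ~2s.
fixation_probability = 2 * selection_benefit

expected_generations = 1 / (new_target_mutations * fixation_probability)
expected_years = expected_generations * generation_time_years

print(f"Approximate waiting time for a one-letter word: {expected_years / 1e6:.0f} million years")
# This crude estimate lands in the same ballpark as the figures quoted above
# (on the order of ten to twenty million years); the book's approximation and
# the paper's simulations use their own assumptions and give somewhat
# different numbers.
```

Because the waiting time scales roughly as 1/s in this approximation, lowering the assumed benefit from 10% to 1% lengthens the wait about tenfold, the same pattern noted further below when the book's 18-million-year figure is compared with the paper's 1.5-million-year result.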

 

The authors of the recently published paper tested these claims by employing state-of-the-art numerical simulation experiments to realistically enact the establishment of short genetic words within a model hominin population. These authors rigorously demonstrate that the waiting time problem is very real, validating the author's earlier mathematical approximations, and showing that as word size is lengthened, waiting times increase exponentially (Table 2 and Figure 2). This new paper shows that the waiting time problem is overwhelming – clearly showing that classic neo-Darwinian theory has failed in a profound way. Even given best-case scenarios (using parameter settings that are grossly over-generous), waiting times are consistently prohibitive, even for the shortest possible words. Establishment of a two-letter word requires at least 84 million years. A three-letter word requires at least 376 million years. A six-letter word requires over 4 billion years. An eight-letter word requires over 18 billion years.

 

This waiting time problem is so profound that even given the most generously feasible timeframes, evolution fails. The mutation/selection process completely fails to reproducibly and systematically create meaningful strings of genetic letters (comparable to simple English words). While this problem is universal, it is most clearly demonstrated for small mammalian populations, such as the hypothetical hominin (ape-man) population that is thought to have given rise to modern man.

 

While most authors who have published on the waiting time problem have acknowledged its reality, as discussed in the recent paper, some authors have tried to dismiss the problem (see table below). In every such case the dismissive authors have first shown that the waiting time problem is as serious as (or more serious than) the recent paper shows, but have then invoked special atypical conditions to try to reduce waiting times as much as possible. When these "special conditions" are carefully examined, in every case they are far-fetched and ad hoc, and amount to grasping at straws.

Some of you will note that in Genetic Entropy, calculations suggest a waiting time of 18 million years for a one-letter word, while in the recent publication a one-letter word only took 1.5 million years. This is because in the recent paper the 1.5 million years reflected a scenario where the fitness benefit of the genetic word was 10%, while in the book the assumed fitness benefit was 1%. When we did the same experiment but with the lower 1% fitness benefit (see discussion section of paper), we got 15.4 million years waiting time (very close to the 18 million years derived by mathematical approximation in the book Genetic Entropy).

 

Given that genomes must constantly be accumulating deleterious mutations at significant rates, given that beneficial mutations are vanishingly rare, and given that evolution cannot create meaningful genetic words even given deep time – it seems that neo-Darwinian theory has come undone on every level.

 

How will the evolutionary community deal with the waiting time problem and the numerous related problems? Given the reality of the waiting time problem, it seems clear that established evolutionary theory has quite literally run out of time. At this point, shouldn't all honest scientists begin to acknowledge that there really are serious problems with the current ruling paradigm?

JULY 2015: Overall, the reader reviews on Amazon.com are exceptional (with 75% of readers rating it 5 out of 5 stars). However, occasionally a heckler will post comments that may sound authoritative and convincing on the surface, but that do not properly address the issues. Amazon reviewer Gerard Jellision (who is admittedly untrained in genetics) is one such person. Dr. John C. Sanford (author of Genetic Entropy) was asked by supporters to respond to his misinformed and hostile review. Here is how Sanford responded to his criticisms:
 

 

Hi Gerald – I try not to get caught up in the blogosphere, but several people have asked me to respond to your hostile review of Genetic Entropy. I encourage you to read the new edition of this book (2014, go here), which is greatly improved, and which better addresses many of the issues you are raising. Thanks for acknowledging that I have better academic credentials than you do, especially in the area of biology and genetics.

 

You claim: “Sanford has never contributed to basic research in population genetics. I studied over a hundred scientific papers in this field while researching this review, and I found no references to anything by Sanford”. 

 

As a physicist, did you really study 100 population genetics papers? Certainly you did not study my 100+ publications listed here. In addition to authoring the book Genetic Entropy, and being the primary editor of the book Biological Information – New Perspectives, I have published 21 papers since I entered the niche of theoretical genetics 15 years ago (go here). Most significantly, I have led a team of research scientists in the development of a powerful new tool for studying population genetics, employing comprehensive numerical simulation. Our program, Mendel’s Accountant, is the first comprehensive and biologically realistic simulator of the entire mutation/selection process (mendelsaccountant.info). This new program is the state of the art in the field. This work has led to 9 recent scientific publications (go here). Yet you are correct that almost none of my work is being cited by my peers. Like yourself, most of my peers are very committed ideologically, and simply do not wish to open the door to dialog regarding the very real problems associated with neo-Darwinian theory. So regarding their refusal to cite my work and publications – does that reflect poorly on me or on them? As you know, in science it is considered highly unethical to deliberately suppress relevant peer-reviewed literature.

 

You suggested that perhaps I was unable to earn full professorship. Actually when it came time for my department to review me for promotion to Full Professor I was in the glory of my university career, and my promotion was beyond doubt. But by then I was starting my own company and was planning to step down as a salaried professor, so in good faith I did not feel I should ask my colleagues to go through all the paperwork required for promotion.

 

You suggested that in writing this book I was out of my field. I have been a full-time research geneticist for 35 years – and during my first 20 years I was heavily involved in plant breeding, real-world genetic selection, and genetic engineering. Contrary to your assertions, many of my 100+ publications (go here) are high-level theoretical papers. For the last 15 years I have focused almost exclusively on theoretical genetics. I think there are very few scientists who can match my qualifications to address the topics of selection limits and genetic degeneration. I strongly suspect I have studied the theoretical problems associated with genetic degeneration as closely as any other scientist (except perhaps Michael Lynch, who did a great deal of work in this area, and whom I often quote because he acknowledges most of the problems I discuss, including mutational meltdown).

 

You mock the idea that I had anything to fear when I began to challenge the current mutation/selection paradigm. What you do not know is that, in fact, I was expelled for being a Darwin Doubter, and had to fight to be re-instated. Do you deny that “doubting Darwin” is widely recognized as academic suicide? I personally know many excellent scientists who have lost their jobs and careers merely for acknowledging the evidence that life is designed (see “Slaughter of the Dissidents” here). Consider a young untenured professor who is open to the idea of intelligent design and is considering the possible role of a designer in biology. Examine your own heart – would you be one of the first to cast a rock at him or her? Intense hatred toward Christians who hold a design or creation view is quite common in academia – but you have to experience it to believe it.

 

Your technical objections are mostly drawn from blog sites like Panda’s Thumb, and you are generally shooting from the hip – by your own admission you know very little about the field. I do not have time or space to debate all the details, but the new 2014 edition (here) answers most of the standard technical objections. Very quickly:

 

1. I show a larger “zone of no-selection” than does Kimura because Kimura’s analysis of the problem was over-simplified, and I have actually examined the problem in much greater depth than he did (go here and here).
 

2. Most people in the field think the human genome is clearly degenerating, but they dismiss this as merely arising due to relaxed selection. But those who have examined it most closely realize that even with intense selection there is still a profound problem (see many quotes within Appendix A of Genetic Entropy).
 

3. Please plot the biblical lifespans for yourself, and then tell me that the biblical data is not remarkable. In the older edition, I plotted the ages-of-death of key Biblical figures, including Christ. Naturally, Jesus did not die of old age, he was crucified – but his age of death (33 years) was similar to the average age of death during the Roman Empire (45 years – compared to life spans of hundreds of years in the earliest generations). The new 2014 edition has a much better plot, and all the data for the entire analysis are available (here).
 

4. About 100 years ago, Fisher imagined that half of all mutations might be beneficial – because he knew almost nothing of modern biology. To him, genes were “beads on a string”. The essential elements of “Fisher’s Theorem” can now be rigorously falsified (paper in preparation). We now know that a gene essentially operates like executable computer code. In an executable computer program (or in the text of an instruction manual), random changes to any of the zeros and ones (or text letters) will obviously be almost universally deleterious. By far the most extensive analysis of mutation accumulation is the Long Term Evolution Experiment by Lenski et al. That work shows that the rate of beneficial mutations is less than 1 per million. Furthermore, that experiment shows that most of the documented beneficials were loss-of-function mutations, reflecting the widely understood phenomenon called reductive evolution.
 

5. Synergistic epistasis happens, but it completely fails to stop genome-wide accumulation of deleterious mutations. We have shown this rigorously in numerical simulations (here).
 

6. The evidence for real-world genetic entropy is not just seen in the Biblical data. It is seen in the past human genome (here), in the present human genome (Lynch, M. 2010. Rate, molecular spectrum, and consequences of human mutation. PNAS 107(3):961-968), in virus populations (H1N1, here), in the endangered cheetah population, and in bacteria (mBio 5(5):e01377-14, September/October 2014, mbio.asm.org; and Koskiniemi et al., Selection-Driven Gene Loss in Bacteria, PLOS Genetics, 2012).
 

Although I have been asked to defend my work and my character against your reckless accusations, I feel no ill will toward you, and wish you well.

 

Sincerely – John Sanford

MAY 2015:  The latest version of Genetic Entropy has just been translated into Chinese and is available at no cost as a free PDF download – a small enough file to be distributed at will as an email attachment. It is our hope that Genetic Entropy will be widely distributed to Chinese-speaking persons throughout the globe. To download the Chinese e-book version of Genetic Entropy, click the following link:

 

SEPTEMBER 2014: A very recent paper (Rands et al., 2014 – see abstract below) claims that 8.2% of the human genome is evolutionarily “constrained” (i.e., has remained largely unchanged since the earliest mammals evolved). This is based upon the observed similarity of human genomes to the genomes of other mammals. This is not remarkable. However, evolutionists are now using these calculations to claim that this proves that the rest of the genome (91.8%) must therefore be “junk DNA”. This is a very irrational conclusion. It is not surprising that parts of the human genome would be shared by most mammals – such genomic regions must encode functions which most mammals have in common (e.g., mammary glands, common biochemical functions, etc.). However, it should be obvious that other parts of the genome must encode those functions that make each mammal unique. A human, a whale, a bat – each has unique capabilities. Large parts of mammal genomes are different because they prescribe functional information that allows a human to do science, a whale to dive a mile deep, and a bat to fly and echolocate.

 
