Another “before and after” text from a new science writer

Jon Paul Hildahl, a postdoctoral researcher at the University of Oslo, wanted to try some popular science writing and produced this text. We worked on it a bit together and then he revised it. Here are the “before” and “after” versions. I’ll provide some commentary in the next post. The main issues were editing – removing redundant or unnecessary language, unraveling a bit of the science, and providing illuminating explanations for a more general audience. Jon currently works in the group of Gareth Griffiths at the University of Oslo, where I conduct a week-long course every December for master’s students – always one of the highlights of my year. Thanks, Jon, for providing this piece and letting me post it with your name; you truly have a future as a science writer (alongside your own excellent research, of course).

BEFORE:

Regulation of immunity and disease resistance by commensal microbes and chromatin modifications during zebrafish development
Jorge Galindo-Villegas et al 2012 PNAS

It is obvious that we are not alone in this world, but it is becoming increasingly clear that we are not even alone in our own bodies. We are covered inside and out by small critters called microbes that include many helpful bacteria, archaea and fungi, collectively called commensals. The resident population that we carry throughout much of our life is called our microbiome. It is very abundant; our body contains 10 times as many microbial cells as human cells. It is not surprising then that these cohabitants play an important role in human health. Indeed their effect on animal health is an area of active research. In particular, it is becoming clear that a little dirt is good for you, especially in your early formative years. It has been shown in multiple animal models that microbes in the environment during early development can help establish the immune system and protect the host from attack by disease causing bacteria. This study helps to clarify the mechanisms by which this initial microbial exposure is controlled at the cellular and genetic level using a powerful fish model.

These researchers have used two powerful models to delineate the role of commensal bacteria during development of the immune system: 1.) germ free condition in 2.) the zebrafish model. Many studies of the role of environmental bacteria use germ free models. This provides a reference to what would happen in the absence of resident microbes. This can then be compared to the natural situation of exposure and colonization by commensals. Fish are exposed to a rich microbial ecosystem in their aquatic environment, which suggests that they have evolved ways to deal with environmental microbes both good and bad. You might ask, however, what a fish can tell us about human biology? Luckily, many if not most developmental processes are conserved among distantly related animal groups. Additionally, the zebrafish have many advantages as a research animal since they develop quickly in transparent eggs that can be easily followed and manipulated. They also have a well-characterized genome and sophisticated genetic tools that allow researchers to add or subtract gene products and measure the level of gene expression. It is known that the initial and fast acting (also called innate) immune system develops within days and before hatching for zebrafish.

In this study, the authors were able to follow the immune response of zebrafish from the time they hatch, at around two days after fertilization, and for the first days of exposure to the external environment when commensal colonization is believed to occur. They showed that zebrafish have a rapid and punctuated innate immune response after hatching, peaking after one day and then decreasing. This initial activity improves the response of early immune cells, providing a better protection against pathogenic bacterial infection and tissue damage compared to fish reared in germ free conditions. The researchers were also able to show that innate immune cells respond by a conserved mechanism, involving a intracellular response pathway by the myeloid differentiation primary response protein 88, MyD88. Another important finding from this study is that epigenetic regulation, which modifies the ability of genes to be expressed, modifies the immune response such that a robust emergency response is in place in case of infection or injury, while reducing the risk of adverse immune effects due to excessive inflammation by providing initial responders (antimicrobial effector proteins) that are not limited by epigenetic regulation.

AFTER:

The yin and yang of germ warfare

None of us go through life alone – not even within our own bodies. We are covered inside and out by microbes that include many helpful bacteria, archaea and fungi, collectively called commensals. The resident population that we carry throughout much of our life is called our microbiome. It is very abundant; each body contains 10 times as many microbial cells as human cells. It is not surprising then that these cohabitants play an important role in human health, an area of active research. One of the results is to show that a little “dirt” is good for you, especially in your early formative years. Studies using several animal models show that during early development, environmental microbes help establish the immune system and protect the host from disease-causing bacteria. A recent paper entitled, “Regulation of immunity and disease resistance by commensal microbes and chromatin modifications during zebrafish development” uses a powerful fish model to provide new insights into the mechanisms by which this early microbial exposure mediates cellular and genetic responses.

Jorge Galindo-Villegas and colleagues at the University of Murcia in Spain have compared zebrafish in two settings to clarify the role of commensal bacteria during immune system development: fish raised in a normal environment, and those raised in germ-free conditions. Germ-free models are commonly used to simulate what might happen in the absence of resident microbes, compared to the natural situation of exposure and colonization by commensals. Fish are normally exposed to a rich microbial ecosystem in their aquatic environment, which suggests that they have evolved ways to deal with environmental microbes that have both good and bad effects.

What, you may ask, can a fish tell us about human biology? Luckily, most significant developmental processes are conserved among distantly related animal groups. And zebrafish have many advantages as a research animal: They develop quickly in transparent eggs that can be easily observed and manipulated. Their well-characterized genome and sophisticated genetic tools allow researchers to add or subtract molecules and measure how genes – including the components of the immune system – respond. Another advantage is that the initial, fast-acting (“innate”) part of the immune system develops within days – even before zebrafish hatch.

The authors of this study followed the immune response of zebrafish from the time they hatch (at around two days after fertilization) through the first days of exposure to the external environment, when most commensal colonization is believed to occur. They showed that zebrafish have a rapid and punctuated innate immune response after hatching, which peaks after one day and then decreases. This initial activity improves the response of early immune cells, providing better protection against later pathogenic bacterial infections and tissue damage than is observed in fish reared in germ-free conditions. The researchers also showed that innate immune cells respond to these early infections using a mechanism that is found in many other animals, including humans. The response activates a biochemical signaling pathway in cells involving the myeloid differentiation primary response protein 88, or MyD88, which helps recognize microbes and initiate an immune response.

Another important finding from the study is that during early development, factors that influence the way DNA is packaged alter the patterns by which genes typically respond to stimuli. While fish that are exposed possess the same genes as fish that are not, early infections and environmental conditions cause their cells to establish patterns in which certain genes become active and others remain silent. The effect of this type of “epigenetic” regulation is to provide an extra level of control, giving cells the ability to mount a robust emergency response in case of infection or injury, but without the adverse immune effects – which can happen when inflammation reaches a serious level. Even fish raised under germ-free conditions mounted a slight immune response by this means. In contrast, antimicrobial effector proteins, which provide the fish with a fast-acting initial response system, have sustained high expression that is not limited by epigenetic regulation. Altogether, this study nicely demonstrates how commensal bacteria are closely intertwined with the development of their host’s immune system.

Author: Jon Paul Hildahl
Link to the free full text of the original article

Twang science 2: Communication (Fake paper 2)

Dear editor,

I am writing with regard to the recent publication in your journal concerning the acquisition, maintenance, and loss of a type of speech called a twang. Terris et al. make only cursory mention of – and thus fail to do justice to – a hypothesis that speaking with a twang might be associated with a retrovirus or another pathogen. Our lab has been pursuing this question for over 20 years and I would like to clarify the current status of the debate.

Our search for a pathogen involved in language perception and speech began with a series of observations on the phenotype: in many ways, its spread resembles an epidemic that is tied to particular regions. For example, Valley Fever, or coccidioidomycosis, is caused by a fungus found in dry areas of the Southwestern United States. The fungus forms spores that are spread by winds, particularly when the soil has been disturbed by storms, construction, agriculture, four-wheel drive offroading, motorbiking, or other sports activities. Inhaling the spores leads to an infection in some people.

It is estimated that about two-thirds of the population of some regions of the Southwest will test positive for the fungus Coccidioides spp. at some point in their lives. Only a fraction develop flu-like symptoms. In severe cases, nodules form on the lungs. Their onset and their severity vary from person to person, likely for genetic reasons, which also play a role in whether the pathogen affects organs beyond the lungs. A weakened immune system greatly increases susceptibility. Symptoms may disappear and reappear over the course of a lifetime.

In many ways the spread of the twang resembles such diseases, which are caused by a pathogen restricted to a particular geophysical niche. There are “hotspots”, particularly in the Midwest, where penetrance reaches nearly 100 percent, surrounded by zones of variable penetrance. Geographical barriers may play a role in limiting its spread. The Rocky Mountains, for example, divide an eastern region of pronounced twang from western areas where it is hardly found at all. There is some evidence that following the Dust Bowl, which saw massive migrations from Oklahoma to California, the pathogen was transported to the western coast, where it was responsible for the rise of “Valley Girl” speech. It has been estimated that in their clothing and shoes, immigrants brought approximately two tons of Oklahoma dust to California. The pathogen may have come along for the ride.

Infants seem particularly susceptible; virtually every child born in a hotspot will acquire the twang, independent of his or her genetic background. Some studies indicate that the degree of penetrance is associated with socioeconomic factors. This, too, is common for pathogens associated with dirt or a lack of sanitary infrastructure. An intriguing observation comes from recent epidemiological work that links the severity of a family’s twang to the number of open beer bottles and pizza boxes lying around the house. Another correlation is the number of rusty cars parked behind the house. In each case, the higher the number, the more severe the twang.

Those exposed during early childhood typically suffer from the twang to some degree their entire lives. Interestingly, those who leave a hotspot for many years – usually decades – may lose many of its features. However, if a person returns home, for example during Thanksgiving, he or she experiences a dramatic but temporary increase in twang speech patterns. This likewise reflects the behavior of some pathogens: removed from their ideal environment, they reproduce only slowly or enter a phase of latency. Conversely, someone who moves to a hotspot later in life may at some point begin to show symptoms, but only after prolonged exposure.

The hypothetical pathogen does not seem to be transmitted from person to person. Children raised by twang-positive parents in a twang-negative environment do not typically show symptoms. Weaker phenotypes that are occasionally observed might be explained by transmission through contact with fomites such as dust-ridden clothing, furniture, or beer bottles that have accompanied the family without being properly cleaned before a move.

The findings of Terris et al. are intriguing but do not in any way contradict the pathogen hypothesis. A range of infectious agents are known to affect CpG methylation patterns and the expression of genes. Tumors in particular regions of the brain that affect speech patterns may cause symptoms by disturbing neural networks, but they may also be accompanied by changes in the epigenetic regulation of genes.

Validating the twang-pathogen hypothesis will require studies of the microbiome of those affected compared to controls. We have recently carried out such studies using a cohort similar to the patients and controls described in the paper by Terris et al. Our preliminary work, which is currently being revised for publication, has identified three potential candidates: the strongest correlation involves a retrovirus which bears some similarity to the feline leukemia virus, and there is a somewhat weaker association to two species of fungi whose spatial distribution closely matches that of the twang. At the moment we cannot rule out combinatorial effects caused by multiple pathogens, whose lifecycles depend on a delicate balance between body homeostasis and external factors in the environment.

Sincerely,

Bob Luser

The future will come sooner than you think: A manifesto for science communication in biomedical research

Note: This is the first of two parts. The second, which I will publish next week, discusses strategic and practical measures which will be necessary to address the issues it raises. I hope that the two pieces will trigger a very wide debate in the science research, communication, and teaching communities, and I will use this site to integrate comments and feedback along the way.

I.

For biomedical researchers, learning to communicate with the public is more than a way to acquire useful skills – it’s a social responsibility. Today’s scientific work will have profound effects on society that may come sooner than we think. Researchers need to help prepare for change, and they need to start now.

For years, biomedical scientists have spoken of a revolution in which findings from basic research will lead to new forms of diagnosis, treatment and prevention for major diseases that affect mankind. The pace of discovery and development has surpassed the most optimistic predictions of researchers from even just a few years ago. The public may have a different impression: Research operates on a different timescale than daily life. Scientists know that it may take decades for “potential drug targets” or “new therapeutic approaches” to affect a broad group of patients. The road from the laboratory to the clinic has more stages than the Tour de France, and it takes a lot longer to reach the finish line. Yet records are continually being broken all along the route – in terms of time, costs, automation and efficiency. There is no speed limit on biomedical progress; it is zooming down the fast lane at a pace that threatens to leave political, economic and social structures lagging far behind. It’s impossible to predict when and where the next leap forward in biomedicine will occur – breakthroughs often appear in the places you would least expect. Take the case of the biotech company that was using genetic engineering to try to create petunias with a more vivid purple color. In the process they discovered small interfering RNAs – which have become immensely important tools for research and the basis of numerous experimental therapies.

Cumulatively, progress arising from across the spectrum of research is starting to have significant effects on society. This impact will surely increase, and it will happen even if progress comes in small steps rather than some single, magnificent cure for a major disease. My children can surely expect to live a decade or two longer than I – and this is probably a conservative estimate. They will have to support an elderly population that lives longer and longer, will likely have to deal with the political fallout of an increasing health gap between industrialized countries and the rest of the world, and will face other serious consequences. Something similar happened over the course of the 20th century: vaccines, antibiotics, modern sanitation, and the development of modern surgical techniques added decades onto people’s life expectancy, but this happened at a time of rising birthrates in the developed world.  Today’s situation is different, and unless we plan for these situations well in advance, society will face dramatic and difficult adjustments. Coping with the biomedical revolution will require intensive interactions between scientists, physicians, politicians, economists, lawmakers, insurance companies, sociologists, and many others. Currently these groups receive almost no training in talking to each other and have little experience working together.

I think this has two important implications for scientists. First, they should accept a greater degree of social responsibility for the consequences of their work. This means doing everything they can to ensure that society is prepared to integrate their discoveries in the healthiest way possible; it also requires high standards of ethical behavior. This suggests the second point: Researchers must become much more engaged in public education and communication and will require new kinds of training to become involved. Scientists and clinicians will be the first to have a sense of the pace of change, and should serve a central role as both multipliers and a kind of early-warning system for the public. Professional science communicators will have an important role in this process – for example, by helping researchers develop their communication and teaching skills – but the task is too important to leave entirely to them.

We urgently need to start a very wide, public debate that engages all future stakeholders (i.e., everyone). It should draw on creative new modes of reaching school children, who are the scientists, decision-makers, patients, and workers of tomorrow and will directly experience the effects of the biomedical revolution. Society is already feeling the first symptoms; we can’t wait any longer. People need to learn to communicate across disciplinary boundaries at an early age and keep talking to each other as they advance along different educational paths and careers. This will require that they develop new skills, but that should happen anyway: The ability to communicate clearly and effectively pays off at every stage of a career in science and nearly every other field. Sadly, most European schools and universities lack a system to accomplish this – a point addressed in part 2.

Adequately addressing these issues will require the cooperation of partners at many levels: individuals, schools, institutes, and state and federal governments. The next section of this paper presents some specific ideas for short- and long-term actions that would be helpful and need to be undertaken soon. The most urgent point is to help teachers, scientists, and other groups of potential “multipliers” develop new skills and new, creative ways of engaging their pupils and the public. These groups will need to work closely together to prepare society to cope with the effects of biomedical research – which may be quite dramatic, and may come much sooner than we think. That can only happen if they first learn to talk to each other, are motivated, and are given many opportunities to do so.

The Kansas Creationists vs. the Evolutionary Atheists

Leaving Flatland and its flawed debate

Note: This article is being published under the same title in the current edition of the magazine Occulto. Hodge, Russ. “The Kansas Creationists vs. the Evolutionary Atheists.” Occulto Issue e, Summer 2013, Berlin. Edited by Alice Cannava. ISSN 2196-5781. pp. 64-85. You can obtain a printed copy of the journal at http://www.occultomagazine.com

My daughter was leaving Germany for a year to explore the American half of her genome. Rather than the liberal Kansas town where I went to school, she was headed for the southern half of the state, colored deep red on political maps. “You’ll be fine if you don’t discuss politics, religion, or guns,” I advised her. “Or Charles Darwin.” His name alone provokes a strong reaction in my home state, as I found out after writing a book on evolution.[1] Everyone has an opinion, and you don’t have to pass a test before you jump into a scientific debate, giving it the character of a barroom brawl. The topic leaves few Kansans sitting on the fence. Maybe because we use a lot of barbed wire.

Barbed wire was patented in 1867, nine years after Darwin and Wallace foisted evolution on the world. Out on the prairie, farmers began fencing up their lands, threatening the culture of cowboys and cattle drives. In 19th-century Kansas, barbed wire caused a far greater ruckus than evolution, although the debates didn’t drag on long because the two sides were well armed.[2] In Europe the theory caused more consternation, but discussions were fought with hot air rather than hot lead. Nor did Bishop Wilberforce run a cattle stampede through Thomas Huxley’s garden. You could destroy a farm that way, but it didn’t work with intellectual property.

Barbed-wire fences broke up the prairie and metaphorically divided the population over deeper issues:  Would all the unsettled land be sold? Who had the right to use it? There seemed to be two clear sides, but only by leaving Native Americans out of the discussion. Tribes had diverse views of the relationship between people and land that would have added more dimensions to the debate.

Spatial metaphors are a type of trope – a wide range of rhetorical devices whereby words are used in unusual ways, often to describe one thing in terms of something else.[3] They are fundamental to the way we think, learn, and communicate. Tropes do not simply rename things, but rather combine complex networks of associations that correspond at some points and diverge at others. They often remain hidden as we communicate, causing misunderstandings that are hard to figure out. They have a powerful influence on the way we think, especially when we don’t realize they are there. Some are so basic, stylized and routine that we limit our imagination and the ability to see things in other ways. People often transfer the wrong properties of a trope to its target, expecting two systems to behave the same way and missing the differences.

Some tropes are obvious in everyday language, making them fairly easy to detect and analyze – take, for example, the old adage, “Every debate has two sides.” It reduces many issues – whether over barbed-wire fences, science, or “red-blue” divisions on a political spectrum – to the shape of a coin, implying that you have to choose. But most topics are far more complex. Why not think of a shape with more sides – perhaps six, like a die, or a ball that can come to rest on any point and is easy to nudge to another?

But the two-sided model completely dominates the way most people think of debates about evolution: as if the world is firmly divided into two camps, science and religion, entrenched and fighting a war. The real situation is more interesting: Most religious denominations accept evolution, and many scientists have religious beliefs. But things got off on the wrong foot in the very first public forum in 1860, where religious fundamentalists saw the issue as a battle between universal truth and everything else, and they have controlled the form of the debate ever since. It’s too bad: fundamentalists have discovered no new facts to support their position in all of that time, while evolutionary science has made extraordinary progress. The theory is a scientific idea and should be discussed that way, rather than being hijacked and carried off to the foreign land of theology.

Even if it’s a bad metaphor, scientists could take more advantage of the coin. You could print competing hypotheses on its two sides: “Species arose through a long process of evolution,” versus “Species were created over a six-day period about 6,000 years ago.” Every day this coin is flipped by geneticists, chemists, physicists, doctors, geologists, paleontologists, mathematicians, informaticians, and researchers from other disciplines. They find new ways to test it all the time. There ought to be plenty of evidence for a sudden burst of creation 6,000 years ago, or at least evidence to debunk evolutionary theory, but the coin lands with Darwin’s head pointing up every time. Even the strongest beliefs haven’t flipped it over. That doesn’t stop people from hoping it will land, just once, on the other side. But prayers can’t make evolution go away, or even improve the health of the royal family in Britain.[4]

The two-sided debate has become such a social institution that people forget it’s a trope, just one of many ways of looking at things, and take it to represent something real. When that happens tropes move into a cognitive underground where they powerfully influence our thoughts, discussions, and perceptions of many things, and they become devilishly hard to get rid of. It’s hard to imagine that these stereotyped collisions between religious fundamentalists and scientists will go away.

Even so, I think the debate is about to change. The cause won’t be a miraculous conversion of the entire planet to some form of religious fundamentalism, or a mass exodus into atheism. Instead, I believe that science is on the verge of a conceptual revolution that will completely discredit simplistic debates. For a long time now words like “species”, “genes” and “natural selection” have been tossed back and forth, as if we are talking about the same things. I am not sure how fundamentalists think of these scientific concepts, but scientists have been steadily changing the sophisticated tropes and models that underlie them. A common vocabulary has masked a much deeper conflict; we are not at all talking about the same things.

This revolution is changing the basic tropes by which we think of life, and the new view may render the old sort of debate completely meaningless. The two-sided metaphor has always been a poor one. Discussions about evolution should finally escape this sort of intellectual Flatland and enter more profound dimensions.[5]

* * * *

Both religious and scientific explanations for the world depend on tropes and models. Scientists make specific observations and try to extract general principles that can be tested and improved. An experiment might confirm a model, or discredit it, and the results aren’t known in advance. Fundamentalists claim that some questions about life are answered in Biblical stories and others are mysteries that can’t be solved. There is no need to do experiments – which would either confirm what is already known, or the results would be ignored.

Developing large scientific models such as evolution or restricted concepts such as species begins with a lot of specific observations. Each doesn’t mean much on its own; the aim is to classify many into groups that exhibit similar general patterns. This resembles a trope called synecdoche, in which the features of individuals are transferred to the whole group. The next step is to test the pattern by applying it to new objects or situations. This creates a continual dialogue in which new observations force scientists to revise their general models. I’ll use a spatial metaphor and call this dual process “upward and downward” reasoning, which we use in everyday thinking as well. It’s the basis of learning, communication, and all sorts of judgments that people make.

Scientists recognize that errors can be made when reasoning in both directions. Upward reasoning can suffer from the exception fallacy: if the examples you start with are unusual, you may arrive at the wrong general principles. If you then apply the principles too widely to the wrong things, you commit an error in the downward direction: the ecological fallacy. Upward-downward thinking in our daily lives can suffer from the same errors and lead to problems such as racist stereotypes. So scientists continually check their assumptions and conclusions, revising models that aren’t confirmed by experiments. Fundamentalists deny that these types of fallacies exist in their own thinking, but are perfectly willing to look for them in science.

Understanding a scientific model requires understanding both parts of the process. To talk about a species, for example, you need to know how researchers assemble individual organisms into a group, make decisions about its common features, and apply them to new examples. I don’t know what the meaning of “species” is for a fundamentalist – if you deny the validity of the reasoning process by which scientists made up the term, you can’t be talking about the same thing.

Researchers make their models available to the world to allow them to be widely tested and ensure that they aren’t distorted by a scientist’s subjective beliefs. At some point a model has been put to so many tests in different situations that people begin to treat it as a sort of “law”. Even then we know that it is a product of human thinking. Evolution is so interesting because its view of life exposes both the power of tropological thinking and its limitations, when the subject is an open-ended biological system that will always produce surprises.

Understanding this problem may affect the way we construct models in science and other systems. It will not discount the ability of current models to predict the function of a human gene by studying a related molecule in another species, or to manipulate organisms through genetic engineering. At some point, however, progress may be held back by mental constraints that must be understood before they can be overcome. Science already recognizes that the problem exists: Double-blind experiments are necessary because expectations and models have an unpredictable influence not only on our interpretation of data, but on perception itself.

* * * *

When evolutionary theory appeared, it moved into a neighborhood of older concepts shaped by tropes and other mental models. The theory was communicated in common words and metaphors that were strongly associated with other things. It should have caused people to reevaluate a much wider set of assumptions, and it finally has – but the process has taken 155 years. At the time, the opposite happened, and the theory was forced into a network of very old beliefs.

For example, proposing that complex organisms could arise from simpler forms sounded like “progress”: a huge political and social theme during the Industrial Revolution. Many readers immediately tried to use evolution as a metaphor for race or class relations within human society, or to confirm the old, dearly-held view of man’s dominion over nature. Both efforts were doomed to failure: social models were tropes themselves, based on old notions about nature that had now become outdated. Social issues became a metaphorical battleground between old models of life based on religion and the new theory. No one realized that the real fight was happening at a meta-level of tropes. It was as if two people were playing a game, using the same board and pieces, but following completely different rules. It’s no wonder that you could never bring the game to a satisfactory end.

Now I think biology is in the process of toppling one of its central metaphors, in a way that may also have wider social effects. This is happening partly because of advances in technology that provide a much clearer view of living organisms and the complexity of their interactions with the environment. One result is to provide a sharper view of evolution, and how it differs from some of the cultural metaphors that have been holding it back. The change is appearing in bits and pieces and its full nature hasn’t been clearly articulated or even widely perceived. It will affect the way we understand humans, nature, and society. But this time we shouldn’t make the same mistake by applying the change inappropriately to other areas.

To make the case I will first provide a very brief sketch of evolutionary theory; secondly, point out a few issues that are central to it but are hard to deal with using current models; and finally, try to link what is happening to more general processes that underlie our construction of cognitive models.

In a text of this length it is impossible to properly ground all the philosophical, linguistic, cognitive and biological concepts that support its view of the role of tropes in cognition and science. Those arguments derive from a much larger conceptual framework that I will articulate in a future project. Here I will provide an application of the method to a debate that is currently, almost universally, carried out at a level that is much more superficial and naïve.

* * * *

“Evolution is so simple, almost anyone can misunderstand it,” said philosopher David Hull.[6] Darwin and Wallace drew on straightforward observations that can be made anywhere, and interpreted them in a way that is closely linked to everyday, “common-sense” ways of thinking. The complexity of the theory lies in the way they abstracted a model from these observations, then extended it far into the past to show how a few basic principles suffice to produce new species.

The outline here covers four basic principles. The most general is common to all natural sciences and distinguishes them from religion and other styles of thought. Researchers make a fundamental assumption: “We should understand states of the world that we cannot directly observe on the basis of what we can observe.” This can be seen as a derivative of Occam’s razor, which in its original form has been translated as, “Plurality must never be posited without necessity.”[7]

The razor doesn’t mean that the universe is inherently simple; instead, it recognizes that views of the natural world are the product of philosophical and methodological choices, and one shouldn’t make up more hypotheses than are necessary. If a single, global force (gravity) can account for falling apples and the motion of the planets, we shouldn’t make more assumptions and suppose that each object is being pushed around by its own personal force, without evidence. By definition this approach discounts miracles such as the idea that the universe was created 6,000 years ago, in six days, which presupposes a suspension of the current forces we observe at work.

A model may posit forces that can’t be observed (such as gravity), but which have predictable effects that can be tested in observations or experiments. If galaxies are racing away from each other, their trajectories can be projected backwards in time to produce the notion of the Big Bang, or forward to produce a vision of the future of the universe. The same rationale yields an explanation for geological formations and a likely age of the Earth. Evolution is the biological equivalent: it starts from observations of current life and abstracts rule-governed processes that explain the origin of diverse species.

To conceive evolution, Darwin and Wallace wove three basic observations into a system that respects this fundamental principle of science. First: species constantly undergo variation. Offspring are not identical to their parents or each other (unless they are twins or clones). Variation can be directly observed in every species and is rarely an issue in popular, dualistic debates about evolution. The theory partly hinges on the rate at which it happens, which can only be determined using scientific methods; the results have been consistent with evolutionary predictions.

Most variation arises because of natural imperfections in biochemical systems. DNA undergoes many types of changes: through “spelling errors” (mutations), or when sequences break off longer molecules during the creation of egg and sperm cells. Cells can repair the damage, but material can move from one chromosome to another in a process called recombination. Other errors include duplications of DNA sequences, whole chromosomes, and in some cases an entire genome. Genetic material can also be lost. Any of these alterations can result in measurable physiological or behavioral changes in the organism as a whole – its phenotype. Such changes happen to some degree in every child; we are all X-Men.

The second observation was that some variations are passed down to an organism’s offspring through a process of heredity. The main reason is the conservation of specific DNA sequences from parents to their offspring, but some other types of biochemical changes are passed along as well. Heredity is not a deterministic system because first, each of us inherits a unique genome – we are all experiments, venturing into a landscape that has not yet been explored by evolution – and secondly, most types of behavior and many aspects of a body’s development are shaped in a dialogue with the environment.

The third factor in evolution, natural selection, is usually wildly misunderstood. Right from the start it was labeled with a misleading trope – “survival of the fittest” – that scientists have been trying to peel off ever since. It was coined by Darwin’s contemporary Herbert Spencer, a philosopher with the social status of a movie star. One of Spencer’s main interests was social progress, and he hoped that the new theory would shed light on cultural development. Religious and political conservatives seized on his words and applied their own tropes in interpreting “fittest” any way they liked – to keep humans at the top of nature, near God, or the wealthy or powerful at the top of society. They used it to justify racism and its nastiest form: eugenics movements that sought to “improve” humanity by sterilizing or killing the ill, the handicapped, prisoners, “promiscuous women,” Jews, and anyone else that those in power didn’t care for.

Darwin never liked “survival of the fittest” because he recognized that biological concepts could only be applied to culture in a metaphorical way that mangled what he meant. Finally, grudgingly, he used the phrase – probably out of the wish to appear conciliatory – but only after redefining it and stripping it of moral and social connotations. The translation in strictly Darwinian terms sounds circular and almost silly: “survival of the survivors,” or “survival of the reproducers.” In other words, current species are the descendants of animals that managed to reproduce more than others. If you couldn’t pass along your genes, a lot of your hereditary material would disappear in favor of those that could. And if you didn’t reproduce as much as your neighbors, and nor did your descendants, and this happened over vast periods of time, then eventually your genomic contribution to the future of your species would dwindle and perhaps even disappear.

Darwin had noticed that many factors could give an animal a reproductive edge over other members of its species: differences in fertility, an organism’s ability to survive long enough to reproduce, preference for certain mates, etc. Events that struck a population equally, like random accidents, wouldn’t have much effect: The diversity of a species would undergo slow, random changes in a process called genetic drift. That itself can produce different species. If two subpopulations are isolated from each other long enough, drift may eventually change their genomes to an extent that they can no longer mate to produce fertile offspring.

So selection begins with any trait that gives an organism a reproductive edge, increasing its frequency, compared to other variants, in the next generation. If offspring with the trait also produce more children, and the bias continues over many generations, the result may be natural selection. It always occurs as a function of a dialogue between the features of an organism and its environment; identical animals don’t always do equally well in different environments. If you could measure the frequency of particular variants of genes in a species before selection happened and then again afterwards, most would exhibit random drift. But variants in an animal that had undergone “positive” selection would show a statistical increase, while forms that lower an organism’s reproduction would become rare or even disappear.

Today the signature of these events can only be detected by studying the frequency of particular DNA sequences over time. And here is also the signature of a trope by which the process is usually oversimplified in our imagination: “fitness”, or selection, isn’t something that happens to a single individual, or even a single couple, or a single generation. Instead, it is a population effect that may require thousands of generations, or however long it takes to create a new species. The change usually takes place in multiple family lines. What happens to an individual organism plays a role, but the impact on evolution is a statistical one, spread out over vast periods of time. One can observe individual advantages in reproduction, then postulate their extension into the past and future as an “upward” style of thinking. But one can’t reason back “downward” to make predictions for specific individuals, which might die in accidents or suffer from other random events. It’s also important to note that a reproductive advantage passes along an organism’s entire genome, including factors that may support the “edge”, but also all of the other characteristics it passes down.
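The statistical nature of selection is easy to see in a toy simulation. This is only an illustrative sketch, not a realistic population-genetics model: the population size, the number of generations, and the 5% reproductive “edge” are all invented for the example.

```python
import random

def next_generation(freq, pop_size, advantage=0.0):
    """One round of reproduction: each of pop_size offspring inherits the
    variant with a probability weighted by the variant's reproductive edge."""
    weighted = freq * (1 + advantage)
    p = weighted / (weighted + (1 - freq))
    carriers = sum(random.random() < p for _ in range(pop_size))
    return carriers / pop_size

def final_frequency(generations=500, pop_size=200, advantage=0.0):
    """Track a variant's frequency across many generations."""
    freq = 0.5  # the variant starts in half the population
    for _ in range(generations):
        freq = next_generation(freq, pop_size, advantage)
        if freq in (0.0, 1.0):  # variant lost or fixed; nothing more changes
            break
    return freq

random.seed(1)
drift_only = [final_frequency() for _ in range(20)]
selected = [final_frequency(advantage=0.05) for _ in range(20)]
print("no edge, final frequencies:", drift_only)
print("5% edge, final frequencies:", selected)
```

Run repeatedly, the drift-only populations wander to different outcomes at random, while even a small consistent edge reliably pushes the variant toward fixation – but only as a statistical effect across a whole population over many generations, never as a prediction about any individual.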

An organism’s reproductive ability can be influenced at every level – from single letters of the genetic code and the behavior of molecules within its cells to the function of its organs, its thinking, and its overall interactions with the environment. It comes into play at every phase of a lifetime – from its origins as a single cell, through its development in an egg or the womb, its infancy, childhood, and adulthood, up to the end of its fertile phase. Usually selection stops there, but it might continue in cases where organisms contribute substantially to the survival of their “grandchildren”. Any difference that affects an organism’s phenotype can influence selection, given a permissive environment.

Variation, heredity, and reproductive differences are directly observable and – along with the more general assumptions of science – form the basis of evolutionary theory. The first two factors are rarely called into question; selection is more contentious, but mostly because the debaters are using different tropes.

* * * *

The power of evolutionary theory lies in the way it has spawned millions of hypotheses that continue to be tested in countless ways. Even this hasn’t been convincing to “Young Earth” fundamentalists, who have discarded the basic scientific premise of a continuity of natural forces in favor of a miraculous act of Creation that took place about 6,000 years ago. Their rationale is based on a faith in what they call a “literal” reading of the book of Genesis, but each fundamentalist decides what should be read literally and what not, in response to other cultural influences, which makes today’s fundamentalism much different from forms practiced in the past. The written record of languages – easy to discover through a trip to any library – makes it easy to discard the Bible’s story of language creation (the “Tower of Babel”) as a fable. But the creation of species, recorded in fossils, and recounted in the same book, is regarded differently – why?

Other challenges to evolutionary theory are grouped under the popular label “intelligent design.” This is indistinguishable from a religious philosophy known as Natural Theology,[8] which dominated thinking about life until the development of evolutionary theory. Its major argument holds that living systems appear so complex and well-structured – usually by analogy to a machine such as a clock – that they must have been created by some sort of supernatural intelligence.[9]

Darwin grew up in this tradition, but several major conceptual flaws convinced him to reject it in favor of evolution. It “cherry-picks” from empirical observations of life: Anything that can’t yet be explained is assigned to the domain of miracles, including biochemical processes discovered through strictly scientific methods. Once scientists provide a reasonable account of the origins of these processes, or demonstrate that some fossil species didn’t arise spontaneously, the intelligent design community shifts its focus to the next unsolved problem. Michael Behe, a biochemist who has become an advocate for the philosophy of intelligent design, has consistently taken this strategy.[10]

Another flaw is the difficulty of distinguishing between “designs” and the structures or patterns that arise due to physical and chemical laws. The spiral forms of snail shells and the tornado-like pattern of water as it moves into a drain might look like supreme achievements of an intelligent architect, but both can be explained by applying models of biological or physical components and the forces acting on them. The body of every human child is an amazing structure that arises from a single cell. Usually this process is explained by reference to biological events, rather than constant, supernatural interventions – so why not the origins of species?

Finally, even if scientists were to stumble upon some unmistakable “signatures of a designer,” how many such designers are there? Each molecule, cellular structure, organism, or species might have its own. Claiming to see the hand of a single designer in different natural phenomena is the clear sign of a particular religious agenda, and today it is usually the attempt to thrust a Judeo-Christian deity into the science classroom.

* * * *

Evolutionary theory is not yet complete because some aspects of living systems have been impossible to explore. Some of these problems represent a lack of technology; others, I think, are inevitable when human minds construct a model and try to apply it almost universally to the world.

The first area of incompleteness has to do with evolution’s portrayal of the environment. Darwin was the first ecologist: He demonstrated that the fates and forms of species were thoroughly intertwined with each other and external factors; that each species exerts an influence on others, and that overpopulation and a competition for resources play a role in natural selection. Organisms don’t change due to purely internal factors; they arise and are shaped through a complex, fluid dialogue with everything around them. This includes every other species they interact with and other aspects of the environment such as temperature, the amount of precipitation, sunlight, seasonal changes, etc. It also includes interactions at the microscopic scale. Recently, for example, scientists have caught the first glimpse of the microbiome:[11] the extraordinarily complex, dynamic populations of bacteria and viruses that inhabit our bodies and the environment. This opens the door, for the first time, on understanding their influence on our evolution (and vice versa) and human health.

Single molecules can promote or hinder an organism’s survival and reproductive capacity, so they, too, contribute to natural selection as they carry out functions in cells. Here they will serve as an example of a gap that remains in our understanding of the interplay between organisms and their environments.

Nearly every biological process depends on cells detecting and responding to change. One mechanism involves signaling cascades that typically start when a molecule binds to a receptor protein on the surface of the cell. The receptor undergoes a structural and chemical change that causes it to bind to other proteins, subsequently changing their structure and behavior. This effect is transferred from one type of molecule to the next, often ending with the transport of a protein to the cell nucleus. There it helps change the overall pattern of active and silent genes in the cell, altering the population of molecules it contains, its biochemistry, and its responsiveness to other signals.

A particular signaling cascade requires certain molecules to be present or quickly produced in response to a stimulus. They need to be located in the right regions of the cell: microenvironments that must also be properly configured to respond to the signal. Signal molecules have to be present in sufficient quantities, and they are usually bound to complexes (sometimes consisting of dozens of other molecules), whose components also need to be present in sufficient quantities. Some protein complexes are “prefabricated” and localized in particular microenvironments, where they can be “switched on” through the addition of a single component.

Passing a signal requires that a protein’s atoms have a particular physical architecture. This requires the help of still more molecules that help it fold, or “decorate” it with complex sugars, or bind it to a membrane with a particular composition of fats and other molecules, etc. This takes place against the background of multiple signals that may carry conflicting “instructions” and compete to push the cell in different directions. By adopting different conformations, or docking on to different complexes, a single molecule can act as a “switching station” to route different signals in various directions.

The quantities and states of all the other molecules in a microenvironment influence whether a protein receives a signal and how the “information” is passed along. Those populations determine whether the protein will bind to its proper partner; too many copies of another protein may change its preferences (affinities) for other molecules. If everything works and the protein does transmit the signal, the contingencies must also be met by the next molecule, in a neighboring microenvironment, so that it can be passed farther.

Microenvironments both constitute the cell and are shaped by it. They are dynamic, constantly requiring the production, refinement, and delivery of new molecules. Events within them move beyond to activate new genes, silence others, and cause changes across the entire system in intricate feedback loops. Molecules, microenvironments, and entire cells continually undergo fluid transitions – rather than adopting a clearly definable state – in which adjustments are constantly being carried out. At any given time, some proteins have achieved the form necessary to receive and pass along a signal; others are being processed; still others are being translated from RNA molecules; RNAs are being transcribed from genes at a particular frequency, etc. Every protein in a signaling cascade is undergoing similar transitions in terms of its chemistry, form, and quantities. So the success of a signal depends on the attainment of tipping points: changes from various conditions under which a microenvironment is not yet ready to receive a signal, to conditions which permit it.
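The idea of tipping points in a cascade can be sketched with a deliberately crude toy model. Everything here is invented for illustration – the threshold, the number of steps, and the fluctuating molecule counts stand in for the far messier contingencies a real microenvironment imposes.

```python
import random

random.seed(42)

THRESHOLD = 50  # copies of a signal-ready molecule needed for a step to relay the signal
STEPS = 5       # steps in the cascade, each in its own microenvironment

def ready_copies(mean_supply):
    """Fluctuating count of signal-ready molecules at one step."""
    return max(0, int(random.gauss(mean_supply, 15)))

def signal_reaches_nucleus(mean_supply):
    """The signal arrives only if every step is past its tipping point."""
    return all(ready_copies(mean_supply) >= THRESHOLD for _ in range(STEPS))

# How often the signal gets through, at different average supplies
rates = {}
for mean_supply in (40, 50, 60, 80):
    successes = sum(signal_reaches_nucleus(mean_supply) for _ in range(2000))
    rates[mean_supply] = successes / 2000
    print(f"mean supply {mean_supply}: signal succeeds {rates[mean_supply]:.0%} of the time")
```

Because every step must be past its threshold at once, the success rate doesn’t rise gently with supply; it jumps from nearly never to nearly always over a narrow range – a tipping point emerging from conditions that are individually only “mostly ready”.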

Until very recently it has been impossible to capture a remotely adequate census of microenvironments or the dynamic nature of their components. As a result, proteins have generally been described as metaphorical actors – like telling the history of a war only from the perspective of generals. Some do have powerful roles, as clarified through experiments that change or remove them, but such experiments usually involve hundreds, thousands, or millions of copies of a particular molecule in highly standardized microenvironments. What is really being described is collective behavior, averaged out in a statistical way to make a model that is then applied to single molecules, in microenvironments where the major contingencies have been met.

Such descriptions aren’t perfect; they rarely describe the behavior of any single molecule, and they don’t have to. This inexactitude isn’t just a by-product of gaps in technology. Evolution predicts that it must be an inherent feature of cells. Life is constantly subject to variation and unpredictable events, so cells and their microenvironments have to have a certain tolerance for them. Most of these systems exhibit a robustness by which one molecule can step in for another, or some other “backup” system comes into play – evolution has favored them. At the same time, cells can’t tolerate everything. So far it has been impossible to define precise boundaries of permissiveness and intolerance in their microenvironments.

The same principles that govern proteins and their surroundings apply to all scales of biological organization. Simply by living – using resources and producing waste products – a cell changes the environment for itself and everything around it. In a complex organism, cells build higher levels of structure and tissue to create a body that is likewise in a fluid state of change, constantly adjusting to internal and external changes. There is an upward-moving causal chain whose restrictions are most evident in diseases where events triggered by specific molecules – in the context of their microenvironments – disrupt the body as a whole. Such upward causality participates in every aspect of growth, activity, and physiological processes such as digestion.

This is dramatically different than the common concept of environments as large external spaces in which organisms interact with each other, and where causal forces work mainly downward. That concept is also appropriate: temperature and other external factors (such as the availability of specific types of food) reorganize biological structures down to the level of molecules. But a better definition of the evolutionary environment is to imagine a succession of fields of all scales in which biological activity has causal, fluid effects in both directions, upward and downward.

One fascinating “downward” causal chain can be found in the process of thinking, which may create a new biological environment that can affect all lower levels of biological structure. Suppose I interpret a phrase of music on a bowed instrument. That interpretation is a personal construct developed from years of experience, learning, and aesthetic tastes that constantly move back and forth between mental and physical domains. My conception of it somehow triggers specific types of motor activity across the body: muscles in the hand holding the bow do something very different than my fingerings on the string, while remaining highly coordinated. Playing music produces new cellular signals and the activation of new genes. At the same time I remain highly responsive to external feedback: feeling an irregularity in the surface of the string, noticing the expression on a listener’s face, or hearing the behavior of my fellow musicians. Thoughts, intentions, and social interactions create and constantly reshape environments for biological activity at every scale.

* * * *

This much more fluid, multi-scalar view of biology shakes up some central metaphors by which we have described living systems and the models we use to understand them: a fusion of materialism and mechanism. Their breakdown will significantly alter the way we think about issues like genetic determinism, states of health and disease, and large models such as evolution.

Materialism is probably easiest to understand in contrast to another philosophical tradition called vitalism. Until the 19th century and even later, many scientists (and all theologians) postulated a qualitative difference between living things and inorganic substances. Evolution might be fine to describe everything that had happened since the appearance of the first cell, but how did that organism arise? Vitalists believed that some “spark”, energy, or force must have been necessary to create life from the inorganic world. Theologians ascribed this to a supernatural being, but it didn’t have to be; it might simply be a type of measurable energy that hadn’t yet been detected in physical or chemical experiments. The idea attracted droves of physicists to the life sciences.

What they discovered ultimately led to the abandonment of vitalism in the life sciences. In 1828, Friedrich Wöhler demonstrated that a biological molecule (urea) could be synthesized using purely inorganic substances. In the 1950s, Watson and Crick drew on physics experiments to propose a model of DNA whereby a molecule could reproduce itself by purely biochemical means. Experiments at about the same time carried out by Stanley Miller showed that complex organic molecules such as amino acids could spontaneously arise in sterile conditions, even in outer space.[12] Miller never managed to build something as complex as RNA or DNA in the lab, but he didn’t have the time or virtually infinite resources of the early Earth. Every single molecule on the planet could be considered a chemical workbench, carrying out experiments over a billion years.

So biology chose materialism, at a time of rapid industrialization, which made it easy to choose machines as the guiding metaphor for understanding cells and organisms. The components of machines interact based on their physical composition and structures. Obviously organisms were very complex machines, but technology was becoming more complex as well. New machines provided a richer source of metaphors. With the advent of computers, people began discussing biology in terms of systems, as intricate networks of feedback loops and self-regulatory mechanisms somehow analogous to electronic circuitry.

Even with such fabulous machines on hand, the metaphor has reached its limits and, strictly speaking, can no longer be applied. One limitation should have been clear from the outset: Machines couldn’t reproduce themselves. And not even the most complex machines come close to possessing the complex, interlinked, fluid microenvironments described above. We usually design machines with rigid parts that have single, repetitive functions; if they break down, they can be fixed by changing a single part. Their components aren’t continually, fluidly, rebuilt at every level; they haven’t been tested and redesigned to adapt to any contingency. Human machines are rigid and designed to operate as stably as possible under specific conditions foreseen by engineers, rather than in continually changing environments whose variations know few bounds. Applying the machine metaphor to life leads to concepts of genetic diseases, for example, in which solutions are sometimes seen as machine-like exchanges of new parts for defective ones. Sometimes that might work, but it may not – the metaphor doesn’t really apply.

Another blow to the metaphor is the fact that by nature, no two organisms are alike; variation is an inherent quality of every species, and a tolerance for unpredictability is essential to its long-term survival. That is much less true of machines, particularly in the age of mass production, where variation in a particular model is usually regarded as an accident. This will be explored in more detail in the next section.

By abandoning the metaphor of the machine, we also abandon a naïve style of hard deterministic thinking that has arisen around notions of genes and organisms. (“My genes made me do it; my genome dictates my life.”) Determinism might be appropriate in a system that works completely from the bottom up, where rigid components dictate the behavior of a system, then the next higher scale of structure and so on. But what if the causal chain flows both upward and down, in which every component is responsive to unpredictable environmental events, contains immeasurable amounts of variation, and where human behavior creates new environments that shape biological activity? Causality itself is a model, usually based on the idea that one state naturally transforms to another after the application of some (model) force. It can only strictly be applied if it’s possible to define states – will it work in the context of ultimately fluid causal systems?

How could it be achieved, for example, in the case of music? To start you would have to fully describe both the material and mechanical basis by which aesthetic experience is physiologically “recorded” in the brain and nervous system. You would have to assume that internal physical structures not only underpin but cause particular thoughts. The system would have to be responsive to unpredictable effects, like an expression of pleasure or distaste on the face of someone in the audience. It’s safer to postulate a system in which unpredictable external stimuli from the environment exert a shaping influence on physical structure that works downward as well. Thoughts themselves – and their content – change the physiological substrate that permits them. Experiments in neurobiology have demonstrated that this is the case.[13]

* * * *

To survive, organisms can’t have some of the features we normally associate with machines. Every existing life form encodes at least a billion years of compromise that creates various degrees of tolerance for variation at every scale of biological organization. There are boundaries, of course: Some variants are so disruptive that they are fatal. But just as deadly is any failure of the mechanisms that tolerate variation and change.

The field of biology has had a hard time fully grasping the extent – possibly even the concept – of this variation, and this is the last “gap” in evolutionary science I will discuss. It causes a fundamental problem in defining biological objects – whether single molecules or species. I think it can be dealt with, but this will probably require a new type of model-building. That may be difficult because the problem is closely linked to more general issues of human cognition.

The link is probably easiest to grasp through a metaphor, something much simpler than a molecule or a species – let’s take the concept of a “chair”. As a child I perceive individual chairs in various contexts, do various things with them, and hear people talk about them. There is no real consensus among cognitive psychologists about what happens next, but at some point a child creates conceptual models of “things called chairs” and begins using the models to name things she hasn’t seen before. At that point other people may correct her. She has to understand that different objects can have the same name while remaining distinct from objects with another name. In doing so she integrates features such as shapes, colors, textures, functions, parts, and different materials. Other features include a lifetime trajectory that involves being built, undergoing changes, and falling apart or being destroyed.

Children don’t come pre-programmed with a concept of a “chair”; each of us builds our own in an individual, constructive process based on encounters with specific chairs. The process is highly flexible, permitting us to recognize things that don’t fit any “classical definition” of a chair – such as something with a leg broken off, or a chair in a dollhouse, or a two-dimensional stick-drawing of a chair. All of these acts are based on tropes.

Building a model for a biological entity – such as a protein, or a species – requires a similar process. After specific objects are studied, an abstraction is made to define a “class model” that includes, as far as possible, everything that belongs and excludes everything that does not. From the beginning the model is intended for refinement: We haven’t yet encountered every object that can potentially belong to the class, so it is difficult to describe the boundary conditions. And since this process is based on experience, it is inherently statistical and subjective; the model it proposes can be expanded or restricted as it is applied to new objects.

Experimentation allows science to escape the corsets of an inappropriate model. For a long time it might have been fine to think of atoms as tiny planetary systems, made of small, solid objects. But experiments forced the development of quantum mechanics, which suddenly said that objects on the human scale aren’t good metaphors for the subcomponents of atoms. Photons or electrons can’t be snagged like footballs and held onto; they may seem to disappear as they move from one place to another, temporarily converted to energy; they are always in transition.

* * * *

Let’s see where this type of thinking gets us in biology by considering one of the most fundamental components of organic life: a protein. The usual biological account of the features of proteins goes something like this: Proteins are strings of amino acids (a metaphor: they share some features of human-scale “strings” but not others). They have sequences: the list of amino acids in their order in the string (a complex metaphor with a time, spatial, and behavioral component: you imagine traveling down a text in a certain direction and reading letters as they appear). Proteins have a complex, three-dimensional structure or architecture (which doesn’t behave like most objects on our scale, unless you’re thinking of something like jello, because proteins are constantly in motion and often reshape themselves).

They have life histories that play a crucial role in their current behavior: Sequences in genes are transcribed into an RNA molecule, which is used as a template for proteins. This simple account skips many steps of processing, each of which may change the molecule’s final form, so the history becomes encoded in its final location, structure, and functions. Proteins have functions whose names are usually metaphorical (receptors, signal transducers, inhibitors, promoters, etc.). Such names originally convey an impression of their activities, but the terms are ultimately grounded in specific chemical reactions. In describing features and functions we use letters, texts, mathematical symbols, sequences, and other tropes.

Every feature of a protein naturally appears in extensive variations that can’t be fully measured or catalogued. For example, proteins never have a static, completely immovable structure, although we depict them in two- or three-dimensional pictures that give this impression. These are symbols for a type of archetype that probably never exists, at least for any length of time.

Once the features of a specific protein have been defined, it is given a “class” name that can be applied species-wide (“human beta-catenin”). This class is further extended to other species through the concept of homology. There is a compelling evolutionary reason to do so: human and mouse versions of beta-catenin evolved from the same gene in an ancestral species. This is established by detecting extensive overlap in their sequences, and it usually allows researchers to draw parallels between a protein’s structure and function in different species.

The central problem in this type of model is that it does not (in fact, cannot) capture a full view of variation along any parameter. It’s impossible within one species, often within one organism, and sometimes even within a single cell. There are two reasons. The first is technological: until very recently, we didn’t have instruments that could identify a single aberrant molecule against the background noise of alternative forms, whether in sequence, structure, or function. A single copy may have experienced some sort of accident in which a bit is cut off. Or it might have been improperly folded, or undergone some other processing error.

The second problem lies with the impossibility of defining a consensus sequence within a species. Random mutations continually occur and produce new versions of the molecule; there is no way to predict all possible variations that may occur and yet remain functional. It is possible to predict that specific changes will eliminate the production of a molecule, but not other parameters of variation. This problem is magnified when trying to cross species boundaries.

If we can’t define the sequence of a single gene, how can we define a species? Once again, naming species is a convention – an example of reasoning from specific examples up to a general model, then down again to new examples. This doesn’t create an objectively applicable definition because there is no “consensus genome” (or any other single feature) that can be definitively attributed to a species. Even if you could carry out some sort of census of every living individual, each birth produces a unique genome with variations that might break the rules.

Instead, scientists rely on statistical definitions of objects and parameters that loosely define boundaries of inclusion and exclusion. Suppose that someone discovers a bit of tissue in the woods and asks a lab to identify the species – “Did it come from a human? A gorilla? Or Bigfoot?” A sample is sent to the lab, which produces a DNA sequence. Most likely this exact sequence has never been seen before. It doesn’t matter: It can be attributed to an existing species if the amount of variation doesn’t exceed certain statistical parameters. If it falls substantially outside a norm for humans, gorillas, or other known species, it is deemed to be a new one. Even then, the statistical values permit researchers to assign it a place on the evolutionary tree (it’s from a new species of bear or hominid).
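The logic of that assignment can be sketched in a toy program. Everything here is invented for illustration – the reference fragments, the species labels, and the 10% tolerance threshold; real identification uses far longer sequences and more sophisticated statistics – but the shape of the reasoning is the same: measure the distance to known classes, and accept membership only within a statistical norm.

```python
# Toy sketch of statistical species assignment. All sequences and the
# threshold are invented; this only illustrates the reasoning, not a
# real bioinformatics pipeline.

def hamming_fraction(a, b):
    """Fraction of positions at which two equal-length sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

REFERENCES = {          # invented "consensus" fragments, not real genomes
    "human":   "ACGTACGTACGTACGTACGT",
    "gorilla": "ACGTACGAACGTACTTACGT",
}
THRESHOLD = 0.10        # invented tolerance: <= 10% divergence counts as a match

def assign_species(sample):
    # Find the closest known species, then check the statistical boundary.
    best = min(REFERENCES, key=lambda sp: hamming_fraction(sample, REFERENCES[sp]))
    if hamming_fraction(sample, REFERENCES[best]) <= THRESHOLD:
        return best     # within the statistical norm for a known species
    return "unknown (possibly new species)"

print(assign_species("ACGTACGTACGTACGTACGA"))  # one difference from "human"
```

A never-before-seen sequence is still classified as “human” as long as its divergence stays inside the tolerance; a sequence far from every reference falls out of all the classes at once.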

By necessity, biological models of objects ranging from proteins to species fall into the domain of a more basic cognitive issue. We construct models individually in a complex process that involves metaphors and other tropes, a process limited by experience, unable to account for all existing and permissible variations, and yet applicable to new objects in a fluid way that is, for lack of a better word, statistical in nature. Like living systems, our mental models are simultaneously individual, robust and flexible. They arise in specific contexts (the way an organism is born into specific genomic and environmental conditions) including physical laws, human beings, and other ideas, and then venture into new territory.

* * * *

What does all of this say about the future of evolutionary debates? In a sense, it shifts the focus from specific questions about biology to more fundamental discussions of scientific practices and “everything else.” It draws a closer link between scientific thinking and everyday cases in which we construct and apply models of the world – including religious systems and the learning of language. It demonstrates that there is something fundamentally flawed about applying bottom-up/top-down reasoning to open-ended systems – at least if we expect the result to be a comprehensive definition that will always work.

Models of species themselves play a central role in popular debates on evolutionary theory. Bitter fights are waged over the question of whether evolution produced new species or whether they all appeared on Earth “as they now are” in an instant of Creation. The second perspective is just wrong – if for no other reason than the fact that the human genome has changed immensely even over the past 6,000 years, simply by adding several billion members to the population. Modern studies of organisms show that it has to be wrong. The notion of a species itself comes from science and bears no relationship to the number of names we have for animals (or organisms) in a particular language. So any time the concept of species comes up in these discussions, people are discussing wildly different things. And they rarely mention that within science, the models are being revised to encompass a more fluid notion of variation and populations that exhibit it in wide, unpredictable amounts.

I believe that what I have called “upward and downward thinking” – reasoning from specific examples to abstract models that are then applied to new examples – is a component of the acquisition of virtually every human concept, and that the act of acquiring it is individual and constructive. This process usually involves tropes that help individuals learn things in a multi-dimensional way, but whose application is not very well controlled. Individuals are usually left to decide on their own what features of a network of relations should be transferred from a known object to a new one. The development of a model is therefore inherently subjective, although it seems to become more objective after it has been shared, its predictions and boundaries have been tested by many people in a wide range of contexts, and it has become a currency for social agreement. This process entails an inherent cognitive flaw, at least in open-ended systems like cells or the attempt to design a new type of chair, that I will explore more fully in later work.

But this account can already shift some of the rhetoric of evolutionary debates because it discounts certain metaphors that are clearly inappropriate. Natural selection itself is an upwards-downwards concept. It can’t be considered some sort of external force – like a heat wave that scorches a population and leaves only one individual with a unique form of a gene standing. Seeing it as a statistical event that happens within a subpopulation rather than to individuals, and something that only happens over many generations, is a large shift from the “survival of the fittest” mentality.
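That statistical, multi-generation picture can be made concrete with a minimal simulation. This is a textbook-style Wright-Fisher sketch, not any model discussed above, and the numbers – population size, fitness advantage, generation count – are invented: an allele with a small edge merely shifts the odds of being inherited, and its frequency changes gradually across the whole population rather than in one decisive event.

```python
# Minimal Wright-Fisher-style sketch: selection as a statistical bias acting
# on a population over many generations. All parameters are invented for
# illustration.

import random

def next_generation(freq, pop_size, advantage):
    """Sample next generation's allele frequency; selection only biases odds."""
    p = freq * (1 + advantage) / (freq * (1 + advantage) + (1 - freq))
    count = sum(random.random() < p for _ in range(pop_size))
    return count / pop_size

random.seed(1)                   # fixed seed so the run is reproducible
freq = 0.10                      # the allele starts out rare
for gen in range(200):
    freq = next_generation(freq, pop_size=500, advantage=0.05)
print(f"frequency after 200 generations: {freq:.2f}")
```

In any single generation the favored allele can lose ground by chance; only across hundreds of generations does the small bias reliably tell – which is the sense in which selection is a population-level, statistical process rather than a force acting on individuals.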

I think this view of life also rings the death-knell for the concept of a “selfish gene” (or “selfish allele”). A particular form of a molecule is only successful if it operates within a microenvironment that is permissive (and possibly encouraging) to its activity. This means that many molecules must be attuned to each other to create functional environments. When selection favors a gene, it simultaneously favors all the contingencies that allow it to succeed. These are not established in advance but arise through dialogue. At the moment, we are unable to survey all of the forms of a particular gene that are found in a population, or the variants of other genes that collaborate with it, or establish the mutual constraints on their behavior. So while we know that genes are “social” rather than selfish, at least theoretically, the extent of these mutual contingencies can’t yet be measured.

Evolutionary theory has proven tremendously valuable when it comes to assigning new facts a place in a model; its direct applications have also been incredibly powerful in manipulating organisms and biological systems. This has led to accusations that scientists are “playing God” by taking “artificial control” of “natural processes.” The metaphor only makes sense if you accept its religious premise; additionally, it is merely a way of dressing up the old debate between vitalism and materialism in new clothes. The same charge of “playing God” can be leveled at the inventor of a new type of chair, or anything else, unless you believe that there is some qualitative difference between manipulating living systems and “inorganic” objects (like wood, which is still organic, just no longer attached to a tree).

Genetic engineering and other activities certainly might affect human evolution by altering the environments in which we live, and might do so rapidly by releasing organisms that reproduce quickly under particular environmental conditions. On the other hand, changes are inevitably happening anyway as we change the environment in other ways, deliberately or not. Our planet now hosts seven billion humans who continue to produce new babies and waste products, who continually create new technologies, and who spread both diseases and cures at a faster rate than ever before. Our own existence and behavior are integral components of the environments of the future.

The more profound issue that underlies many of these debates, I think, is fear – fear of certain types of change, especially if they seem to threaten something of value. Evolution offers no guarantee that humans will survive (nor does the notion of a “Rapture”); it also allows for changes that we personally wouldn’t care for. We can only be glad that ancient hominids didn’t regard themselves as the pinnacle of Creation and somehow nip future evolution in the bud. They could never have succeeded, nor could the eugenicists, because there is no way to prevent random biological variation and gain long-term control over the fate of our species.

The alternative to a fluid, evolving view of life is a static model that is the gateway to a mechanistic view and thus a deterministic one. If the central metaphor in understanding life is a man-made machine, it is easy to overlook all of the aspects that are non-machine-like, particularly in the interconnectedness of every level of every biological system. To think otherwise is to continue to debate evolution in an intellectual Flatland that the theory has already escaped.

I don’t think a deterministic system can survive within a much greater model that is fluid, individually constructed, open-ended, tolerant of variation, engaged in a multidimensional conversation with its environment – in other words, organic. The metaphor of a watch – or of any other machine – is far too mechanistic to describe any living system. The amazing complexity of life is not evidence of deliberate creation or intelligent design; in fact, its unpredictability is the best evidence for an ongoing process of evolution.

– Russ Hodge, April 2013


[1] Russ Hodge. Evolution: the History of Life on Earth. New York: Facts on File, 2009.

[2] Richard Rodgers and Oscar Hammerstein. “The Farmer and the Cowboy should be Friends” (song). Oklahoma (musical). 1943.

[3] For a fairly complete list of tropes, see “Figure of speech,” http://en.wikipedia.org/wiki/Figure_of_speech

[4] In 1872 Francis Galton, a cousin of Charles Darwin, studied the health of the British Royal family. So many people prayed for their health, he reasoned, that if “third-party” prayer were effective, they ought to have exceptional health. But it appeared to have no effects on their longevity.

[5] Edwin A. Abbott. Flatland: A Romance of Many Dimensions. Dover Publications, 1992.

[6] Hull’s comment from a book review is widely quoted; I have not yet found the original source.

[7] “Ockham’s razor”. Encyclopædia Britannica. Encyclopædia Britannica Online. 2010. Retrieved 1 July 2011.

[8] William Paley. Natural Theology. (Originally published in 1802). DeWard Publishing, 2010.

[9] Intelligent design in court. See, for example, “Judge rules against ‘intelligent design.’” http://www.nbcnews.com/id/10545387/ns/technology_and_science-science/t/judge-rules-against-intelligent-design/. Last accessed on April 5, 2013.

[10] Behe, Michael. Darwin’s Black Box: the Biochemical Challenge to Evolution. Tenth Anniversary Edition. New York: Free Press, 2006.

[11] See, for example, the “Human Microbiome Project.” http://commonfund.nih.gov/hmp/ Accessed April 15, 2013.

[12] Miller, SL. A production of amino acids under possible primitive earth conditions. Science. 1953 May 15;117(3046):528-9.

[13] See, for example, Hubel, D.H.; Wiesel, T.N. (February 1, 1970). “The period of susceptibility to the physiological effects of unilateral eye closure in kittens”. The Journal of Physiology 206 (2): 419–436.