Tips for reducing talk anxiety (part 1)

This is part of a series of articles on the blog (a few already published, more in the works) devoted to didactics and the communication of science (and other things). I am currently working on a handbook that includes ideas such as these and explores in depth the myriad problems of presenting content. More pieces to come on that.

The tips given here are related to performance anxiety and represent just a sample of things I’ve learned from my own excellent teachers, from my experience in training lots of scientists and other types of speakers, from my own experiences in public speaking, and from the process by which I completely eliminated my own stage fright when performing as a musician (yes, it’s possible – and that’s when the fun and the real music begin!). In the courses I give we always find a way to adapt these principles to individuals and their problems.

Please help me by contributing your own experiences and tips, so we can build a useful, very practical resource that will help as many students and teachers as possible! I will add your points to the list and mention their sources!

The first step in learning is to identify any barriers that exist – to define the problem as clearly as possible. So it’s crucial to carry out some self-exploration: you need to carefully study your own body in situations of fear, anxiety and stress.

These mental and physical techniques require practice, and they work best if you imagine yourself as concretely as possible in the environment you will face when giving a talk. Visualise the room – ideally, visit it ahead of time, and maybe go to another talk there. Sit toward the back and listen. If you can’t visit the room, then imagine various scenarios: a large classroom, an intimate seminar room, a packed auditorium, an almost empty auditorium.

Next close your eyes and imagine the moment before you are invited to speak. Imagine someone getting up and introducing you; you’re sitting there and will be headed onstage in 30 seconds. Find out, if possible, whether you will be standing or sitting; imagine the size of the audience you will be facing; mentally prepare for a moment when the projector doesn’t work and needs to be fiddled with, or the microphone suddenly cuts out, etc. Have some strategy for “vamping” the time, with a joke or some other device that engages the audience. (“While we’re waiting, I’d like to conduct an informal survey about a question of tremendous scientific relevance: Where does that stuff in your belly button come from, anyway?” There’s actually a very interesting study out about this…)

  1. Nervousness is usually accompanied by various physiological and mental symptoms, and here the goal is to deal with common and specific symptoms such as stress and tension, a nervous voice, a shaky pointer, and blackouts. By removing these symptoms you can trick your body into thinking it’s comfortable, and the cognitive issues often fade along with them. But there are clear strategies for dealing with blackouts, too.
  2. The first step is to try to replicate the condition of your body when you’re nervous, by imagining you’re in the situation, or remembering the feelings you had the last time.
  3. Anxiety is usually marked by muscle tension in very specific parts of your body. The first goal is to become aware of where that tension sits and consciously relax it. My own technique is very simple: I totally relax my ankles, letting go of all tension in my ankles and then my feet. When I do this – and it’s true for most other people as well – it is very hard to maintain tension anywhere else: in my back, my vocal cords, etc. Try it – totally relax your ankles, and while doing so try to tense a muscle in your back or your arms. If it’s difficult, you can use this approach as well. If not, you need to find some other part of your body that you can deliberately relax and thus force the stressed muscles to relax too. Stand up and relax your ankles. This should be the first thing you do once you’re standing at the lectern or wherever, and you’ll have to practice remembering to do it.
  4. Remember that the first 30 seconds or so of a talk are less about the content than about the audience learning to listen to your voice and style. If you realize that, then you realize that it’s also a time that you can use to get comfortable. First of all, BREATHE. Then speak SLOWLY and CLEARLY and have a clear strategy prepared to invite your listeners to engage with you right from the beginning. This is something you have to practice as well – people are usually most nervous at the beginning of a talk, and that’s when they usually talk the fastest. Additionally, for predictable reasons, they tend to say the highly technical terms they are most familiar with the fastest – and these are just the words that need to be spoken the most clearly and distinctly. Practice the beginning of your talk with a metronome or by slowly pacing around in a way that forces you to slow the rate of syllables as you speak. You’ll have to practice this a lot of times until you instinctively start slowly rather than with the rush of nervousness.
  5. Engagement #1: try to engage the listeners at the very beginning. Before you speak, look around at some of their faces and smile. If you’re not fixed to a podium or a position at the front, move toward them, as if you’re in a more informal setting.
  6. Engagement #2: if possible, start off with a real question that interests you and has motivated the work – ideally one general enough to be grasped by the entire audience. Why? If you’re lucky, they’ll actually try to come up with an answer in their own minds, or at least focus on the question. This immediately draws the audience into the content, rather than focusing them on you and your behavior. At that point you’ve engaged them in the subject matter. If they really try to answer the question, they’ll think something like, “Oh, that’s interesting; I would have tried to do it this way…” and you’ll immediately have set up a dialogue that will continue throughout the talk and provide plenty of good feedback at the end.
  7. Engagement #3: Rhetorically speaking, most data slides are also shown to answer specific questions. (“Does protein A interact with protein B?” Well, to find out, here’s what we did. You see the results here, which provide the following answer…) Unfortunately, most speakers don’t realize that this is what’s happening. They use the ANSWER to the question as the title of the slide, and often start trying to explain the answer before clearly presenting the question, the methodology, or the results. This confuses the rather simple story-line inherent in the slide. It can also disrupt the talk as a whole, because an answer (end of slide) usually stimulates the next question (beginning of next slide). You don’t have to make all the titles of your slides questions, but you should realize this is what is going on (and actually, why not do it?). It has the benefit of gluing separate slides together into a smooth story. It can also prevent a big problem that occurs when the order of information on a slide differs from the order you follow while speaking. When that happens, people try to read and listen at the same time, get different information from those two channels, and probably won’t remember anything.
  8. Boiling a talk down into a big question and many sub-questions can have a huge effect on anxiety when you’re worried about content blackouts. All you need to remember (or have on tiny cards in your hand) are the questions. You know the answers – that’s what you’ve been doing for the past 100 years. The question-answer method serves two purposes: it creates a real dialogue that engages the audience, and it gives you an outline of your talk.
  9. Practice other specific performance problems that you are aware of. The first step in finding a cure is to identify what has been disrupted at the right level (it’s just like practicing music that way). A while back I had a student who was having what looked like blackouts during a talk. Later he explained that they weren’t blackouts – instead, every idea was bombarding his brain at once, and he couldn’t figure out where to start. I suggested a method by which he put up a slide and practiced fixing his eyes precisely on the thing he would talk about first, then moving them to the next thing, and so on. The very next day he gave a talk in front of 400 people without a single glitch or “brain freeze.”
  10. Shaky voice. If your voice quavers or trembles while you speak, the problem may be tension in some part of your body (see number 3 above). Often there is another problem, especially (but not only) if you are speaking in a foreign language. You may be pitching your voice too high or too low, which puts tension on your vocal cords that extends into your face, throat and shoulders and then the rest of your body – and then you’re doomed! This often happens in a foreign language, where people sometimes choose a “base pitch” (the tone – in a musical sense – at which you would speak if you were talking in a monotone) that is in the wrong part of the spectrum. It is especially likely if you subjectively consider your voice too high or too low (to be “sexy”) and try to place it differently. How do you know the right base pitch for your voice? A friend who has become a well-known speech pathologist gave me this tip. Go to a piano and find the highest and lowest keys that you can comfortably match with your voice. The appropriate ground tone for your voice should lie between a third and half of the way up from the bottom of this range. If you try to speak at a pitch that’s too low, you’ll experience the “creaky voice” phenomenon. If your voice is too high, in general, you’ll strain your vocal cords and eventually get hoarse or lose your voice. If either of these things happens to you regularly anyway, you may be pitching your everyday voice too high or too low. Also try different volumes. You may arrive in a big room with no microphone, and you’ll have to project. Aim your voice at the person in the back, without shouting at the people in the front row. Your diaphragm and vocal cords have the potential to make all the air in the room vibrate and communicate your message. Singing teachers know the secrets of projection. I don’t, but it has a lot to do with breathing deeply and comfortably, and not tightening your throat or larynx.
  11. Shaky pointer syndrome. A pointer shakes because of tension in the muscles that control your arm and hand. The solution is to let your shoulder hang, without any muscular activity from the back or upper arm, and imagine that all the weight is on your elbow, and that it’s resting on a table. Now use only the muscles you need to raise your forearm (preserving the feeling of all the weight in your elbow) and aim the pointer at a spot on the wall. Let it remain on the same point for a while. If it shakes, there’s probably still some tension in your upper arm (it’s really hard to make the forearm tense if your upper arm and shoulder are relaxed). Once you can hold the point relatively still, try moving it back and forth in a horizontal line. Here, too, imagine that your elbow is resting on the table, taking all the weight off your shoulder, and you’re just sliding your forearm back and forth.
  12. Those nerdy, highly technical slides… Although most scientists tell me that nowadays, most of the talks they give are to mixed, non-specialist audiences, you’re bound to have a few slides that are complex or obscure and you won’t have time to teach people “how to read them.” Example: I’m working with scientists who are developing mathematical models of biological processes, and at some point in their talks they want to show the real deal – math and formulas. They know a lot of people will be intimidated by this, but they still need to show the real work. On the other hand, they don’t want people to “tune out” and give up on understanding the rest of the presentation. At this point what I recommend is to say something like, “Now my next slide is specially made for you math nerds out there; the rest of you can take a short mental vacation and I’ll pick you up in just a minute on the other side.”
  13. Imagine the “personality” you’ll project when you become the leading expert in your field. Pretend you’ve given the talk a hundred times to enormous success, and now you’re on the lecture circuit, giving it to audiences that think you’re the Greatest and are eager to provide input and their own ideas. How will you look up there? What kind of voice will you have? What types of rhetorical devices can you use to project “modest authority”? When a musician has practiced and practiced a piece for months and gets stuck, sometimes all it takes to make the next big step is to imagine what the piece will sound like a year from now. If you can imagine that, as concretely as possible, usually the next time you play it will be much closer to that vision. The same goes for giving talks.
  14. Criteria for success… If I give someone directions to a party, there’s a simple test that reveals whether I’ve done a good job or not – whether they arrive on time, on the right day… What’s the equivalent for a talk? (Pause while you think about it a minute…) The best answer I’ve heard is this: Imagine you leave the room and there’s somebody waiting outside who says, “Damn! I really wanted to hear that talk; what did he/she say?” At that point a member of the audience should be able to give the person a short summary, and it should fit two criteria: 1) the speaker would agree with it, and 2) most members of the audience should give very similar answers. As a speaker, how do you ensure that this happens? Well, the most obvious way – which few people really ever consider – is to close your talk by saying this: “Now imagine when you leave the room, there’s somebody standing outside who tells you, ‘Damn, I really wanted to hear that talk; what did he/she say?’ Well, here’s what you should tell them…” And then sum it up in a nice little package that’s tight enough to be remembered, with a clean, predictable story line. Remember you’re not trying to simply communicate single facts! You’re trying to answer a question – which you have to be able to articulate very precisely – and you need to explain the meaning of that question in terms of models and concepts that you share with the audience. You need to put information into a structure that can be grasped and remembered, in a way that holds the attention of the audience and engages their intelligence. This means you have to provide information in a relational, coherent structure – and if they don’t share your background and models, you’ll have to provide it. If you do that, you’ll get the kind of smart questions and feedback you’d like, the kind that will help you improve your thinking and your research.
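For readers who like to tinker, the base-pitch rule in point 10 boils down to simple arithmetic: given the lowest and highest notes you can comfortably match with your voice, the recommended ground tone lies between a third and half of the way up from the bottom of that range. Here is a playful back-of-the-envelope sketch; the MIDI note numbers (middle C = 60) and the example range are just illustrative conventions, not part of the speech pathologist’s tip:

```python
# The "base pitch" rule from point 10: your comfortable speaking pitch
# should sit between 1/3 and 1/2 of the way up from the bottom of your
# comfortable vocal range. Pitches are MIDI note numbers (middle C = 60).

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(midi: int) -> str:
    """Convert a MIDI note number to a name like 'E2' (C4 = middle C)."""
    return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

def base_pitch_range(lowest: int, highest: int) -> tuple[int, int]:
    """Return (low, high) MIDI notes bounding the recommended base pitch."""
    span = highest - lowest
    return lowest + round(span / 3), lowest + round(span / 2)

# Example: a two-octave range from E2 (MIDI 40) up to E4 (MIDI 64).
lo, hi = base_pitch_range(40, 64)
print(note_name(lo), note_name(hi))  # → C3 E3
```

Nobody needs code at the piano, of course – the point is just to make the rule concrete: for a two-octave range like E2–E4, the target ground tone falls roughly between C3 and E3, noticeably above your lowest comfortable note.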

The last points relate to content, which will be the subject of more articles very soon.

ALL of these points require practice – numerous repetitions while mentally imagining the real-life situation as it will feel, as closely as possible. You may always feel anxious before or during a talk; it may never go away. But most people can deal with the symptoms, using strategies like these, and that makes all the difference.

Two final points: First, remember what it’s like to be in the audience when a speaker is really nervous. Everybody roots for a nervous speaker – the audience is on your side! Take comfort in that and try to engage people in the sense that “you’re all in this together”: you’re inviting them to think about an interesting question with you, rather than waiting for them to throw rocks (or shoes) at you.

Secondly, you’ve got to be engaged in the content. Even when you think your story isn’t that great or sexy, or leaves lots of questions up in the air – well, that’s what most science is like, folks! Remember that you’re presenting something that has an inherent interest to a lot of scientists. And negative results are useful as well, because they can save your colleagues a lot of time: they will keep people from following the same old leads, time and time again, without realizing that other labs have tried, failed, and been unable to publish their results. Closing off blind alleys is a great service to scientists everywhere – it’s a key step toward progress because it forces people to rethink and revise the basic models they are using.

These are some of the very basics I’ve learned through experience in many performance situations of my own, as well as from working with a lot of students with different problems over the years. I have learned a lot from the fantastic teachers I have had the privilege of studying with (and continue to study with, in the life-long process of learning). I also absorbed a lot from a fantastic book about performance anxiety – its focus is music, but every bit of it applies to public speaking – which I highly recommend here:

The Inner Game of Music
Overcome obstacles, improve concentration and reduce nervousness to reach a new level of musical performance
Barry Green with Timothy Gallwey (author of The Inner Game of Tennis)
London: Pan Books, 2015.
ISBN: 978-1-4472-9172-5

At Amazon, also available on Kindle

For other articles on science communication teaching, click here.

An animal that runs on hybrid fuel

Research highlight from the MDC – a great story from Gary Lewin’s group in the current issue of Science

 

When oxygen gets scarce, the naked mole-rat throws a metabolic switch to draw energy from fructose rather than glucose

The naked mole-rat, a rodent native to Africa, can survive with little or no oxygen far longer than other mammals. The secret lies in its metabolism: in addition to the basic system by which animals generate energy from glucose, naked mole-rats have a backup system based on fructose. This discovery comes from Gary Lewin’s lab at the Max Delbrück Center (MDC), in a collaboration with the groups of Michael Gotthardt (MDC), Stefan Kempa (MDC and BIH), and Thomas Park (University of Illinois in Chicago), as well as scientists from several other countries. The work is published in the April 21 edition of the journal Science.

Oxygen is so essential to life that even a very short deprivation is fatal to animals. Their cells need a constant supply to drive the chemical reactions that produce energy from food. Very early in evolution, cells developed a form of metabolism that used the sugar glucose as a source of fuel and the high reactivity of oxygen atoms to extract its energy. This process was so efficient that glucose-based metabolism could fuel the bodies of humans and even larger animals, and it has been maintained over the course of evolution.

But life in a harsh environment can alter even very basic aspects of an animal’s biology. Long ago, something drove the ancestors of the naked mole-rat underground. There the rodent’s biology and behavior began an evolutionary dialogue with the extreme conditions it encountered. This led to some highly unusual adaptations. Naked mole-rats are insensitive to some forms of pain, and have lifespans that exceed 32 years – ten times the norm for most other rodents. Only one or two cases of cancer have ever been detected in the species. And now MDC scientists have discovered that the animal can go with little or no oxygen for extraordinary lengths of time.

Such characteristics have attracted the interest of scientists around the globe – including neurobiologist Gary Lewin. Over several years, his laboratory has gained deep insights into the biology of pain by comparing the nervous system of the naked mole-rat to that of mice and humans. Upon learning that the naked mole-rat could cope with little or no oxygen, he was immediately intrigued – and his lab was well prepared to pursue the biology behind this unique attribute.

Linking oxygen deprivation to a unique metabolic system

Oxygen deprivation was clearly connected to the animal’s biology, lifestyle and environment. “Naked mole-rats huddle in huge, underground colonies of up to 280 individuals,” Lewin says. “This means that they continually experience sharp declines in levels of oxygen and dramatic increases in carbon dioxide. Without adaptations, this would be just as deadly to the naked mole-rat as it is to other animals.”

Most organisms on Earth are suited to the surface atmosphere, composed of about 21% oxygen and only tiny amounts of carbon dioxide (about 0.04%). Reducing oxygen to about 5% is fatal for a mouse within about 15 minutes; total deprivation causes fatal damage within about a minute. The naked mole-rat, however, can cope with as little as 5% oxygen and high levels of carbon dioxide for hours on end with no apparent distress or ill effects. And amazingly, it can survive at least 18 minutes without any oxygen at all.

“Under these conditions the animal enters a sort of suspended animation,” says Jane Reznick, a postdoc in Lewin’s group and a lead author on the current paper. “It falls asleep and its heartbeat slows to about a quarter of the normal rate. When oxygen is restored the heart rate rises, and the animal quickly wakes up and goes about its normal behavior.”

This hinted that some backup system was protecting its heart and brain – two organs that are highly sensitive to oxygen deprivation in other species. Without oxygen, their cells cannot produce energy and rapidly suffer fatal damage.

Hitting the stop button on an assembly line

There had to be some fundamental difference in the naked mole-rat’s metabolism. To find it, the scientists enlisted help from the MDC’s Metabolomics Unit, headed by Stefan Kempa. His team uses advanced technology to capture global and quantitative snapshots of cellular metabolism. Their methods reveal the presence of tiny metabolites: molecules that are created through the processing of fuels like glucose. Networks of enzymes break glucose down into small products that move through the metabolic pipeline, generating energy along the way.

“These experiments are a bit like hitting the ‘stop’ button on an assembly line,” Kempa says. “If you were to do that in a factory, then look at partially assembled pieces and the bits that were tossed out, you’d get an idea of what was being built, and how it was constructed.” Further experiments traced the remnants of the sugars as they flowed through an alternative metabolic route that generated energy without consuming oxygen.

Comparing mouse and naked mole-rat tissues under conditions with and without oxygen revealed some curious differences. In naked mole-rats, oxygen deprivation triggered a shutdown of cellular energy factories called mitochondria. In the mouse they continued to operate but quickly malfunctioned – mitochondria need oxygen to run.

But the most startling finding had to do with the sugar molecules found in the animals’ blood and tissues. Overall, naked mole-rats had a lot less glucose than mice, which hinted that other sugars might be providing an alternative source of energy. During oxygen deprivation, there was a significant rise in levels of other sugars. Naked mole-rats had more sucrose – and the amount of fructose was truly stunning: it had skyrocketed.

Can tissues run solely on fructose fuel?

Could the naked mole-rat be using fructose rather than glucose as a source of energy? The two sugars weren’t that different – even our own bodies make use of fructose-based metabolism, although this only happens in the kidney and liver. These organs have an enzyme called ketohexokinase, or KHK, which can trim fructose into a form that can be plugged into the energy production line. From that point on the modified fructose, called F1P, is handled like glucose. Since the subsequent stages of processing don’t require oxygen, a metabolic system based on fructose could run even when oxygen is scarce.

“In humans, fructose metabolism occurs only in the kidney and liver because they’re the only tissues that contain KHK,” Lewin says. “We found that brain tissue from the naked mole-rat contained high levels of F1P – suggesting that KHK was at work – but only under oxygen deprivation. This told us two things: that their brains might really be using fructose as a source of energy, and that the switch only happened when oxygen grew scarce.”

The evidence for fructose metabolism was accumulating, but so far it was all indirect; the next step would be to determine whether the animals were actually using the alternative source of fuel. First the scientists performed experiments using brain tissue to test whether neurons could function if they were deprived of glucose and fed exclusively on fructose. While an hour of this treatment severely damaged the cells of mice, naked mole-rat neurons continued to show activity. Experiments carried out in Michael Gotthardt’s group showed even more dramatic results for the naked mole-rat heart, which could beat just as well when supplied with fructose as it could using glucose.

“This was proof that fructose can replace glucose as an energy source in the naked mole-rat brain and heart,” Reznick says. “It helps explain how these organs – and the animal as a whole – can recover from long periods of oxygen deprivation.”

A two-part system for switching to alternative fuel

Cells can only use fructose as an energy source if they can absorb it from their surroundings. This requires a protein called GLUT5, which snatches fructose and draws it into the cell. In mice and humans, GLUT5 appears in kidney and liver cells, but other tissues have almost none. It’s another factor that restricts fructose metabolism to the kidney and liver in humans and prevents it from serving vital organs such as the brain and heart. In the naked mole-rat those tissues – and most other cells – have at least ten times as much GLUT5.

“This gives the naked mole-rat a two-part system that allows it to survive long periods of oxygen deprivation,” Lewin says. “Throughout its body you find both the GLUT5 transporter and the KHK enzyme that converts fructose into a usable energy source.”

Fructose metabolism has been implicated in human diseases including malignant cancer, metabolic syndrome, and heart failure. This hints that there might be some link between the naked mole-rat’s metabolism, its resistance to cancer, and possibly even its extraordinary lifespan. Only further research will tell – but the current study provides an interesting new handle on such questions.

“It’s important to understand how these unusual animals make the metabolic switch without any obvious long-term damage to their tissues,” Lewin says. “We might learn something about how our own cells attempt to cope with situations in which they are deprived of oxygen, such as strokes or heart attacks. Our work raises questions about the biology of fructose metabolism that will ‘fuel’ research for years to come.”

 

Russ Hodge

Thanks to Jana Schlütter and Martin Ballaschk for comments on an earlier draft.

Reference:

Thomas J. Park1, Jane Reznick2, Bethany L. Peterson1, Gregory Blass1, Damir Omerbašić2, Nigel C. Bennett3, P. Henning J.L. Kuich4, Christin Zasada4, Brigitte M. Browe1, Wiebke Hamann5, Daniel T. Applegate1, Michael H. Radke5,10, Tetiana Kosten2, Heike Lutermann3, Victoria Gavaghan1, Ole Eigenbrod2, Valérie Bégay2, Vince G. Amoroso1, Vidya Govind1, Richard D. Minshall7, Ewan St. J. Smith8, John Larson9, Michael Gotthardt5,10, Stefan Kempa4, Gary R. Lewin2,11 (2017): “Fructose-driven glycolysis supports anoxia resistance in the naked mole-rat.” Science, doi:10.1126/science.aab3896

1Laboratory of Integrative Neuroscience, Department of Biological Sciences, University of Illinois at Chicago, Chicago, Illinois, United States of America; 2 Molecular Physiology of Somatic Sensation, Max Delbrück Center for Molecular Medicine, Berlin, Germany; 3Department of Zoology and Entomology, University of Pretoria, Pretoria, Republic of South Africa; 4Integrative Proteomics and Metabolomics, Berlin Institute for Medical Systems Biology, Max Delbrück Center for Molecular Medicine, Berlin, Germany; 5Neuromuscular and Cardiovascular Cell Biology, Max Delbrück Center for Molecular Medicine, Berlin, Germany; 7Departments of Anesthesiology and Pharmacology, University of Illinois at Chicago, Chicago, Illinois, United States of America; 8Department of Pharmacology, University of Cambridge, Cambridge, United Kingdom; 9Department of Psychiatry, University of Illinois at Chicago, Chicago, Illinois, United States of America; 10DZHK partner site Berlin, Germany; 11Excellence cluster Neurocure, Charité Universitätsmedizin Berlin, Germany

Breaking the temperature barrier

With an advanced ERC grant, Thoralf Niendorf’s group will aim ultrahigh-field MRI at a critical, yet largely unexplored dimension of life

 

Temperature is one of the most rigidly controlled aspects of life, as seen in the very narrow range maintained in the tissues of warm-blooded animals. Body temperature briefly rises during fevers and inflammation as part of immune responses to infections. But there has been a major obstacle to exploring this crucial dimension of life: scientists have not had a method to alter temperatures within living tissues.

Soon that may change thanks to an advanced ERC grant just awarded to Thoralf Niendorf and his team, who work at the high end of magnetic resonance imaging (MRI) technology. “Every time a doctor takes an image using MRI, there’s a generation of heat,” Niendorf says. “The unknown impact of this has led to strict regulations governing the amount that can reach patient tissues. We’re hoping to take this side effect and turn it into a tool for research, new forms of diagnosis, and hopefully even therapies.”

That will require an instrument which can focus exact amounts of energy on precise, microscopic targets inside animal bodies. The group has found a way to build it: start with a new ultrahigh-field MRI instrument, then add a custom-designed array of radiofrequency transmitters to shape and focus its powerful magnetic field. The scientists have already worked out the theory and tested designs; now, with the new grant, they can build the machine.

At that point they will enter uncharted scientific territory. The first projects will involve thermal phenotyping studies – a term coined by the group – carried out in collaborations with scientists working on a range of systems. The goal is to determine whether various tissues have unique thermal properties that can be detected by MRI and might have diagnostic value. The next step will be to observe how tissues respond to highly focused increases in temperature. Disease-related processes may be susceptible in ways that could usher in new MRI-based therapies. A unique feature of this strategy would be the ability to deliver a treatment and monitor its effects simultaneously, using the same instrument.

Another part of the project will involve an ongoing collaboration with scientists in Sydney, Australia and Berlin who are building temperature-responsive polymers to deliver drugs or other molecules. These “nano-vehicles” can be introduced into the body, where they remain inactive until heated. They can be loaded with several substances which are released at different temperatures upon activation through MRI. The interest for research is that scientists could alter tissues in a step-wise manner, to control complex processes over time. And the same strategy could be used to strike a disease with successive blows, targeting different weaknesses.

“Planning this project has already drawn together a group of people with diverse expertise,” Niendorf says. “We’re excited about exploring this dimension of life in a truly interdisciplinary way. We can’t predict what we’ll find. But the fact that organisms keep temperature under such tight control hints at vitally important functions across the body.”

 

The original version of this article was published on the MDC website and can be seen here.

 

Juggling molecules while balancing the brain

Research highlight from the MDC
(visit www.mdc-berlin.de to see more highlights from MDC research)

People with a mental illness are sometimes described as being “unbalanced” or “having a screw loose.” These expressions may not be very polite, but they capture two important aspects of mental and physical health. First, organs such as the brain need to maintain an overall balance as we experience stress and engage in various types of activity. Ultimately this state depends on the functions of fundamental components in our cells – not screws, of course, but proteins and other molecules. Frenetic activity at this vastly smaller scale is required to ensure the stability of cells and tissues. While it is often extremely difficult to connect these levels of biological structure, the lab of Jochen Meier has established a new link. In a recent study in the Journal of Clinical Investigation, the group connects a molecule called the glycine receptor (GlyR) to the operation of networks of neurons – and the way they are disrupted in epilepsy.

Jochen and his colleagues had already found an association between GlyR and brain disorders. They had carried out a molecular analysis of brain tissue from epilepsy patients. This disease is caused by an overexcitation of certain neurons, particularly in a region of the brain called the hippocampus. “We found that hippocampal cells produce unusually high proportions of a specific form of GlyR,” Jochen says. “The current project aimed to show how this molecule contributes to higher brain functions and eventually causes symptoms related to the disease.”

GlyR has one function that is clearly related to signal transmission between brain cells: it acts as a receptor for a neurotransmitter called glycine. Neurons release neurotransmitters into synapses, tiny gaps that separate them from their neighbors. These small molecules typically dock onto receptor proteins on the receiving (postsynaptic) cell, or onto presynaptic receptors on the cell that released them. Depending on the type of receptor and the type of neuron, this either inhibits or promotes the signal.

The GlyR can be composed of two different proteins called alpha and beta subunits. Our genome encodes only one beta protein, but cells pick and choose between different genes for the alpha subunit. It may be combined with the beta subunit to create the GlyR; however, single cells sometimes produce GlyRs composed of alpha subunits only.

Like all proteins, the GlyR alpha3 subunit (GlyR-a3) is produced when the information in its gene is transcribed into an RNA molecule. Later the RNA is translated into protein. Along the way bits and pieces of the RNA may be removed in a process called splicing, creating proteins of different lengths, containing different functional modules – a bit like adding or removing wagons from a train.

GlyR-a3 RNA sometimes undergoes yet another change that affects its chemistry and functions. During a process called RNA editing, one letter of the molecule is swapped for another. This causes a corresponding change in the chemistry of the GlyR-a3 protein and makes it work more efficiently. What Jochen’s team had discovered in epilepsy patients was an unusually high proportion of “long” spliced forms of the RNA, in which one letter of the chemical alphabet had also been swapped.
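
The splicing-and-editing logic is easy to sketch in code. Everything below is invented for illustration – the exon sequences, the edited position, even the letters – but the two operations match the description above: choose which “wagons” stay in the RNA, then swap a single letter.

```python
# Toy model of one gene yielding several protein variants:
# splicing decides which "wagons" (exons) stay in the RNA,
# and editing swaps a single letter afterwards. The exon
# sequences and the edited position are invented.

EXONS = {"e1": "AUGGCU", "e2": "GGUUAC", "e3": "CCAUGA"}  # hypothetical exons

def splice(exon_names):
    """Join the chosen exons into one mature RNA."""
    return "".join(EXONS[name] for name in exon_names)

def edit(rna, position, new_letter):
    """Swap a single letter of the RNA (site-specific editing)."""
    return rna[:position] + new_letter + rna[position + 1:]

short_form = splice(["e1", "e3"])        # exon 2 spliced out
long_form = splice(["e1", "e2", "e3"])   # all exons kept
edited_long = edit(long_form, 4, "U")    # one letter swapped

print(short_form)   # AUGGCUCCAUGA
print(long_form)    # AUGGCUGGUUACCCAUGA
print(edited_long)  # AUGGUUGGUUACCCAUGA
```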

GlyR-a3 is known to inhibit the firing of neurons in the spinal cord, which can block the transmission of signals related to pain. This might mean that the form of GlyR-a3 found by Jochen’s team (the long spliced form, changed by RNA editing) was tuning down the excitability of neural networks in epileptic patients. To find out, the lab needed to observe the behavior of the altered molecule in an animal’s brain. Aline Winkelmann and other members of Jochen’s lab developed a strain of mouse in which particular cells in the hippocampus – called glutamatergic excitatory neurons – produce high amounts of this version of GlyR-a3.

They then measured how the change affected the animals: whether it altered the structure of neurons, the excitability of neural networks, cognition, memory, and mood-related behavior. Unexpectedly, they discovered that the altered form of GlyR-a3 caused an overexcitation of the system – and they uncovered an important reason why.

“The long spliced form of GlyR-a3 is packed up with presynaptic vesicles,” Jochen says. “These are bubble-like packages that neurotransmitters are placed into before cells release them. Put this association together with an increased sensitivity to the neurotransmitter – and even some spontaneous activity due to the change in the receptor’s chemistry – and the neurons were prone to release more neurotransmitters. This had measurable effects on behavior: it disturbed the animals’ cognitive functions and some forms of memory.”

The study yielded another extremely interesting and wholly unexpected finding. The scientists discovered that in another type of cell, parvalbumin-positive inhibitory interneurons, higher amounts of the molecule had completely different effects on network excitability and behavior.

“Here the result was reduced network excitability, because it was enhancing the functions of this type of neuron,” Jochen says. “The change triggered anxiety-related behavior in the animals. But it did not cause any changes in cognitive function.”

A close scrutiny of the animals’ neurons and hippocampus didn’t reveal any significant changes in overall structure. In other words, higher amounts of this form of the GlyR-a3 molecule weren’t “rewiring” the animals’ brain network. Instead, they were persistently changing the overall balance of neural networks by enhancing the neuronal output.

“What we’ve done is to identify a mechanism at the level of molecules that is linked to the release of neurotransmitters and identifies two critical types of neurons that can cause an imbalance in the brain,” Jochen says. “We think this helps explain both changes in excitability of the brain network in epilepsy and the neuropsychiatric symptoms of some types of anxiety that are often associated with the disease.”

– Russ Hodge

Reference:

Winkelmann A, Maggio N, Eller J, Caliskan G, Semtner M, Häussler U, Jüttner R, Dugladze T, Smolinsky B, Kowalczyk S, Chronowska E, Schwarz G, Rathjen FG, Rechavi G, Haas CA, Kulik A, Gloveli T, Heinemann U, Meier JC. Changes in neural network homeostasis trigger neuropsychiatric symptoms. J Clin Invest. 2014 Feb 3;124(2):696-711.

Free full text of the article

MicroRNAs micromanage the pancreatic β-cell

The Poy lab shows that a complex microRNA pathway governs the body’s response to insulin resistance

 

Our daily lives are marked by cycles – wakefulness and sleep, activity and rest, eating and fasting – through which most biological activity must continue in a balanced way. We don’t have to eat all the time because our cells can store nutrients for later use. Eating causes a quick rise in glucose, one of the body’s main sources of energy, but too much sugar in the bloodstream is toxic. When levels surpass a certain point, cells need to absorb the excess. They are told to do so by the hormone insulin, which is produced by specialized beta cells in the pancreas. But in type 2 diabetes, cells become resistant to insulin stimulation and don’t respond properly. The body tries to compensate by creating more beta cells, which then secrete more insulin. It’s as if cells have become deaf, and the body raises the volume of the signal in hopes that the message will get through.
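
The compensation can be pictured with a toy calculation (all numbers invented): if cells become four times less sensitive to insulin, the beta cells must secrete four times as much of it to clear the same amount of glucose.

```python
# Toy numbers for the "deaf cells, louder signal" compensation.
# Sensitivity 1.0 stands for a healthy response; 0.25 for cells
# that respond only a quarter as strongly. All values invented.

def glucose_cleared(insulin: float, sensitivity: float) -> float:
    """Glucose absorbed by cells for a given insulin signal."""
    return insulin * sensitivity

def insulin_needed(target_clearance: float, sensitivity: float) -> float:
    """Insulin the beta cells must secrete to reach the same clearance."""
    return target_clearance / sensitivity

normal = insulin_needed(target_clearance=10.0, sensitivity=1.0)
resistant = insulin_needed(target_clearance=10.0, sensitivity=0.25)

print(normal)     # 10.0
print(resistant)  # 40.0 -> four times the insulin for the same effect
```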

How the body senses insulin resistance and stimulates the production of more beta cells has been unclear. Matthew Poy’s lab at the MDC has now solved a crucial part of the puzzle. In a recent article in Cell Metabolism, the scientists unravel several layers of regulation by which cells control the production of specific proteins and respond to insulin resistance.

The study demonstrates that beta cells require a protein called Ago2 to begin this type of proliferation. Normally the production of Ago2 is braked by a small RNA molecule (miR-184). During insulin resistance, however, beta cells stop creating miR-184. As a result they release the brake on Ago2, which stimulates their proliferation and the secretion of more insulin.

Understanding this process required that the lab unravel the details of an intricate, switch-back route by which the information in genes leads to the production of proteins (or not). Proteins such as Ago2 are encoded in genes, which can be transcribed into messenger RNA molecules and then translated into proteins. But our genome also encodes at least 2,000 short microRNA molecules (miRNAs) which can block this process. MiRNAs have sequences that cause them to dock onto messenger RNAs and trigger their destruction before they can be translated into proteins.

In recent years scientists have discovered that miRNAs target many – if not most – human messengers and thus play a crucial role in fine-tuning the amounts of proteins produced by cells. MiR-184 apparently docks onto Ago2 messenger RNA and limits its production in this way.

Matthew and his lab have been studying the influence of miRNAs on beta cells for several years. “Many technologies are now available, including small RNA sequencing techniques, that can be used to detect changes in miRNAs in beta cells and measure the amounts of these molecules produced in disease models,” Matthew says. “A few years ago we discovered that healthy beta cells turn out large amounts of one such molecule, miR-375.”

This is where the story becomes a complicated affair of regulators regulating the regulators of regulators. (If you don’t like brain teasers, skip this paragraph and the next.) miR-375 normally docks onto the messenger of a protein called Cadm1. Cadm1 suppresses beta-cell proliferation. In other words, the production of more beta cells depends on eliminating Cadm1. Achieving that requires more miR-375.

Sudhir Tattikota, Thomas Rathjen, and other members of Matthew’s lab established this connection and figured out how Ago2 contributes to the process. When it’s around, Ago2 helps miR-375 establish contact with the Cadm1 messenger. So put together, the whole tortuous chain looks like this: miR-184 blocks the production of Ago2. As a result, Ago2 doesn’t help miR-375 find and block its target. That means the beta cells produce more Cadm1, don’t reproduce, and don’t produce more insulin.

Put more simply: LESS miR-184 means MORE Ago2 and MORE miR-375 activity, which means LESS Cadm1 and MORE beta cells. To simplify further, consider just the input and output: less miR-184 leads to more beta cells and more insulin. (And vice-versa.) Matthew and his colleagues have clarified the links in this pathway by revealing the roles of Ago2 and Cadm1.
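
For readers who like to see the logic laid out explicitly, here is a toy model of the chain. The quantities and the linear relationships are invented; only the signs of the interactions – which molecule brakes or boosts which – come from the study.

```python
# A toy, qualitative model of the regulatory chain. The numbers and
# linear relations are invented; only the signs (brake vs. boost)
# come from the study.

def beta_cell_response(mir184: float) -> dict:
    """miR-184 -| Ago2 -> miR-375 activity -| Cadm1 -| proliferation."""
    ago2 = 1.0 - mir184             # miR-184 brakes Ago2 production
    mir375_activity = ago2          # Ago2 helps miR-375 reach its target
    cadm1 = 1.0 - mir375_activity   # miR-375 eliminates the Cadm1 messenger
    proliferation = 1.0 - cadm1     # Cadm1 suppresses beta-cell proliferation
    return {"Ago2": ago2, "Cadm1": cadm1, "proliferation": proliferation}

healthy = beta_cell_response(mir184=0.9)            # plenty of miR-184
insulin_resistant = beta_cell_response(mir184=0.1)  # miR-184 drops

# less miR-184 -> more Ago2 -> less Cadm1 -> more beta cells
assert insulin_resistant["proliferation"] > healthy["proliferation"]
```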

The take-home message? “Insulin resistance is a symptom of the growing epidemic of diabetes type 2,” Matthew says. “The body compensates by stimulating the growth of new beta cells and increasing production of the insulin signal. We’ve shown for the first time how several layers of the miRNA pathway work together to stimulate the growth of the insulin-producing cells.”

The scientists used a mouse model in which insulin resistance could be tuned up and down. When they restored the animals’ sensitivity to the hormone, beta cells produced more miR-184 and didn’t proliferate. This demonstrates that the microRNA acts as a crucial part of the mechanism that detects insulin resistance.

The study revealed another aspect of insulin sensitivity which may open new possibilities for treating type 2 diabetes. When people reduce their intake of carbohydrates, which are the main dietary source of glucose, the liver begins converting fat into substances called ketone bodies, an alternative source of energy. This type of diet has been found effective in treating some forms of epilepsy, likely because it alters the biochemistry of nerve cells.

“The literature reports that this ketogenic diet also improves insulin sensitivity and affects glucose levels,” Matthew says. “If our mouse model is put on a ketogenic diet, we also see a rise in miR-184 levels. This may indicate that our dietary intake may influence pancreatic beta cells in ways that are still unclear. That offers new opportunities to investigate both the mechanisms of insulin resistance and potential therapies.”

 

– Russ Hodge

Reference:

Tattikota SG, Rathjen T, McAnulty SJ, Wessels HH, Akerman I, van de Bunt M, Hausser J, Esguerra JL, Musahl A, Pandey AK, You X, Chen W, Herrera PL, Johnson PR, O’Carroll D, Eliasson L, Zavolan M, Gloyn AL, Ferrer J, Shalom-Feuerstein R, Aberdam D, Poy MN. Argonaute2 Mediates Compensatory Expansion of the Pancreatic β Cell. Cell Metab. 2014 Jan 7;19(1):122-34. doi: 10.1016/j.cmet.2013.11.015. Epub 2013 Dec 19.

 

Link to the original paper:

http://www.ncbi.nlm.nih.gov/pubmed/24361012

 

Home page of the Poy lab:

https://www.mdc-berlin.de/14669454/en/research/research_teams/microrna_and_molecular_mechanisms_of_metabolic_diseases

 

Aiming for immortality…

Death is a disease that Google can cure? Come on…

I’m all for Google’s recent decision to cure death; in fact, once they post the on-line registration form for the treatment, I plan to be first in line to sign up. Provided, of course, that they can guarantee I won’t spend eternity suffering from Alzheimer’s disease, or have to undergo permanent chemotherapy. And hopefully a lab somewhere will be growing replacement parts from my stem cells. It will be hard to find an organ donor among immortals; they’ll painstakingly avoid accidents and anything else that risks their chance at eternal life.

I’d also like to know where they plan to store all of us immortals – hopefully it won’t be in a drawer, or one of those shoebox-like hotels you find in Japan. But let’s not overthink this, or get fussy about the details. By the time the cure for death is found, I’m sure the big brains at Google will have solved much simpler problems like time travel, or instantaneous teleportation to the stars, or downloading my consciousness onto the Internet.

To take a more sober look at all of this: Google is putting the cart way before the horse. If you think of death as hitting the ground after a long leap, most medical research aims to raise the height of the diving board and to ensure that you’re as happy as possible until the moment of collision. Google’s approach is more like saying, “Jump into this hole; we don’t know what’s down there but don’t worry, you’ll never hit the bottom.”

Unfortunately, the hole always has a bottom. Most people used to die from diseases or infections caused by viruses or bacteria. Many still do, but the development of vaccines, antibiotics, and pesticides, and the introduction of modern sanitation, largely removed those obstacles. New drugs and organ transplantations had a huge impact as well, meaning that 20th-century medicine lengthened average life expectancy by a couple of decades. It made for a longer fall, but it exposed a deeper layer of things to crash onto: cancer, cardiovascular disease, and Alzheimer’s. These conditions weren’t as prevalent in earlier times because they typically strike in old age, and people didn’t live long enough to experience them.

The first step in achieving Google’s great dream will have to be to cure those diseases – which, incidentally, is already the aim of a vast amount of biomedical research. As far as I know, the company has no secret plan that will cause this work to leap ahead in some dramatic spurt of progress. If they do, I’m eager to hear it. Of course, a huge injection of money into biomedical research is, on its own, a good thing; it could fund new labs, or help existing groups acquire equipment they can’t currently afford. It may help keep talented young researchers in the field; frustrated by heavy competition for scarce positions, many end up leaving the lab. It might also shift priorities by putting even more effort into fields such as stem cell research, regenerative medicine, and the other siblings of the science of aging.

A jump in funding, the creation of a new institute, and other measures along these lines are always welcome, but they won’t cause a revolution in biomedicine. Scientists solve huge problems by breaking them down into tiny parts. Even when they have a definitive goal in mind, they can’t predict the outcome of experiments in advance. The best road to progress is to follow results wherever they lead, which is often someplace completely unexpected. It’s the reason that science funding agencies have discovered that investing in basic research is usually much more productive and profitable than supporting narrowly defined work in pursuit of a particular application.

Suppose all those who have been doing this work so long, and so well – now with support from Google – succeed in curing most cases of cancer, cardiovascular disease, and neurodegenerative conditions like Alzheimer’s. We can expect that to happen – if not within my lifetime, then surely within that of my children. But immortality will remain a distant dream. Just as major infectious diseases had to be cured before the demographics of disease shifted to these next barriers, once the current challenges have been met, we’ll crash against the next thing. We do not know what health problems typically strike people who are 120 or 130 years old, but we’re about to find out. Likely candidates are prion diseases such as kuru or Creutzfeldt-Jakob Disease (CJD, a cousin of Mad Cow Disease). Very few people currently suffer from these conditions, probably because they follow a period of incubation that is longer than the normal lifespan. Most victims of kuru were cannibals who ate brain material from other people, in which the incubation had already reached an advanced stage.

Currently there’s no cure for prion diseases, and we don’t know what other syndromes will strike people in their second century of life. Once we recognize them, which will take a while, we’ll surely develop treatments as well. Then we’ll be able to move on to the diseases that strike 200-year-olds, and so on. The only hope of immortality is to find cures as fast as new diseases are discovered. Even then, each challenge will expose a new one. Eventually we may run up against some fundamental physical barrier – a sort of biomedical “speed of light” – which dictates that the human body, at some point, will degrade back to the molecules that compose it.

So as far as I can tell, Google has no fabulous secret plan, and promises nothing new – still, maybe there’s a virtue in putting the label of “immortality” on a new campaign in biomedicine. It seemed to work out pretty well for physics; calling the Higgs Boson the “God particle” was surely effective in collecting the billions of Euros needed to build the Large Hadron Collider. I merely hope that before people become immortals, we’ve ensured that they’ll have a world to live in. First it would be nice to get a handle on overpopulation, pollution, and political strife.

Google may be planning to tackle those annoying little problems as well. Or maybe they intend to export immortals to a better place, using the interstellar starship they’ve begun building in a basement somewhere. You’d think we’d go to Mars before the Andromeda galaxy, just like we’d improve current health and social problems across the globe – for the developing world as well as wealthy countries – before aiming for immortality. But those aims may be a bit too pedestrian for the Google business plan.

Hearing a small and quiet chorus in a vast and raucous crowd

(Another new science article that I wrote for the homepage of my institute, the MDC. See the archive there for more stories from MDC research.)

The labs of Young-Ae Lee and Norbert Hübner help identify new genetic risk factors for atopic dermatitis

If you suffer from atopic dermatitis, you surely know it. Infants develop rashes and a susceptibility to allergies that usually persists throughout their lives. For years researchers have known that most cases have a hereditary basis, and a few culprit sites have been identified in the human genome. But others have been very difficult to detect. The laboratories of Young-Ae Lee and Norbert Hübner at the MDC, collaborating with other groups from Germany, the U.S., Ireland, China, and Japan, have now identified four new sites associated with atopic dermatitis. Some of these loci are involved in other diseases as well. The work draws on recently developed methods of correlating genetic factors to disease risks, and on huge cohorts of patients, nonaffected family members, and controls from several countries. The study appears in the July edition of Nature Genetics.

Finding the genetic causes of diseases like atopic dermatitis, which may involve multiple, subtle defects in DNA as well as environmental factors, has been one of the greatest challenges in disease research. Whether someone is affected or not might depend on a combination of single “letters” in the 3 billion nucleotides that make up a person’s DNA, and even very close relatives exhibit many “spelling” differences. Adding more distant relatives and others introduces a great deal more “noise”, making it extremely difficult to detect a specific site related to a disease.

A second hurdle has been the need to involve very large groups of patients, family members, and control individuals – preferably several such cohorts from different countries. That has to be done to distinguish sequences that cause a disease from those that originated in common ancestors, long ago, and have spread through any intermingling population. Young-Ae and her colleagues could draw on years of work in Germany and other countries to assemble cohorts of families affected by atopic dermatitis and a number of other diseases, and that work continues.

This type of research is considerably easier if a disease can be linked to changes in a single gene or DNA sequence; here, several sites in the genome are involved, and the environment seems to play a role as well. These factors combine to make the task incredibly complex. Imagine recording every conversation on Earth for a year, in hopes of finding a few people who are somehow similar, saying the same thing at the same time – without knowing in advance what topic or type of person you’re looking for. Multiply that problem by a factor of a few million and you get the idea. Even the best eavesdropping software would grind to a halt.

Over the past few years researchers have developed a methodological solution called genome-wide association studies (GWAS) that hammers at the problem statistically, using sophisticated analytical algorithms. The method is applied to the genome of every individual in the study. It establishes the statistical likelihood that certain regions of DNA are involved in a particular disease such as atopic dermatitis. Such studies give researchers “hot spots” for further investigation, in hopes of identifying exactly what sorts of genetic variants bring along increased risk, and why.

“If you compare the genetic code of different individuals, you’ll find differences such as single nucleotide polymorphisms, or SNPs, where the ‘spelling’ of a single nucleotide is swapped for another,” Young-Ae says. “Some of these SNPs confer a higher disease risk for individuals. In the current study we looked at every region of the genome that was somehow linked to processes of chronic inflammation, because past work has shown a link between these factors and atopic dermatitis.”

Young-Ae and her colleagues used a “DNA chip” containing every SNP so far found in these regions of the human genome. They scanned the DNA of 2,425 German individuals with atopic dermatitis and 5,449 controls, looking for single letters of the code that conferred higher risk. This produced a list of “hits” that seemed to be significant; then the group expanded the study to 7,196 patients and 15,480 controls from Germany, Ireland, Japan and China, hoping to replicate the findings.
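
The statistical core of such a scan can be sketched with a toy example. For a single SNP, a 2x2 chi-square test compares how often the risk letter appears in patients versus controls; a real genome-wide study repeats this, with stringent corrections for multiple testing, across every SNP on the chip. All counts below are invented.

```python
# Toy sketch of a single-SNP case-control association test.
# All counts are invented; a real GWAS applies this idea (with
# heavy multiple-testing corrections) to hundreds of thousands
# of SNPs at once.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

#                      risk allele, other allele
cases = (900, 3950)      # hypothetical allele counts in patients
controls = (1500, 9398)  # hypothetical allele counts in controls

stat = chi_square_2x2(cases[0], cases[1], controls[0], controls[1])

# 3.84 is the 5% critical value for one degree of freedom, so a
# larger statistic suggests the SNP is associated with the disease
print(f"chi-square = {stat:.1f}, associated: {stat > 3.84}")
```

Note that a genome-wide scan must demand a far stricter threshold than 3.84: testing hundreds of thousands of SNPs at the usual 5% level would produce thousands of false hits by chance alone.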

Young-Ae and her team confirmed earlier reports by other groups linking several SNPs to the disease; more importantly, they found four new ones. These sequences also corresponded to a person’s likelihood of having other chronic inflammatory conditions. Atopic dermatitis is associated with defects in the differentiation of skin cells called keratinocytes as well as problems with the immune system. Following up the loci shared with other conditions might show that multiple diseases can be traced back to a common biological mechanism.

For example, researchers have known that immune cells called T helper type 2 cells, which cluster at the skin during inflammations, are somehow involved. One of the genes confirmed in the study is used to produce a protein called DcR3 that is found in abnormally high amounts during the inflammation associated with atopic dermatitis. New genes identified in the study include IL2-IL21, PRR5L, CLEC16A-DEXI, and ZNF652. CLEC16A, which is highly expressed in immune system cells such as B-lymphocytes, seems a particularly interesting candidate for further investigation, Young-Ae says.

Combining the new findings with those made previously now brings the total number of culprits to 11. “We estimate that this now accounts for about 14.4 percent of the hereditary factors involved in atopic dermatitis,” Young-Ae says. “Increasing that number will probably require expanding the study to new and larger cohorts, as well as developing new methods to find even more subtle associations between DNA, the disease, and environmental factors that might play a role.”

–       Russ Hodge

Highlight Reference:

Ellinghaus D, Baurecht H, Esparza-Gordillo J, Rodríguez E, Matanovic A, Marenholz I, Hübner N, Schaarschmidt H, Novak N, Michel S, Maintz L, Werfel T, Meyer-Hoffert U, Hotze M, Prokisch H, Heim K, Herder C, Hirota T, Tamari M, Kubo M, Takahashi A, Nakamura Y, Tsoi LC, Stuart P, Elder JT, Sun L, Zuo X, Yang S, Zhang X, Hoffmann P, Nöthen MM, Fölster-Holst R, Winkelmann J, Illig T, Boehm BO, Duerr RH, Büning C, Brand S, Glas J, McAleer MA, Fahy CM, Kabesch M, Brown S, McLean WH, Irvine AD, Schreiber S, Lee YA, Franke A, Weidinger S. High-density genotyping study identifies four new susceptibility loci for atopic dermatitis. Nat Genet. 2013 Jul;45(7):808-12

The consummate scientist

July 8 marked the 70th birthday of Walter Birchmeier, former Scientific Director of the MDC

A few years ago, upon submitting an article to Nature Reviews: Cancer, Walter Birchmeier was rewarded with the following comment from a referee:

“This is a fine review that nicely covers the long history of Wnt signaling and I cannot think of a better person than Walter Birchmeier to contribute such an article. I say this not only because he is so old, but because he has personally witnessed or directly contributed to most of the significant developments in the Wnt field.” (Italics added here.)

To set the record straight: At the time, Walter was a mere youngster of 65. The comment about his age sounds like a joke, but referees are a grim, humorless species. Instead, I think the writer was searching for a term to describe a scientist at the top of his game, someone who has continually made unique, seminal contributions to a field. Chess has a name for such figures – they’re called Grand Masters – but science lacks a similar title. You’re either a “big shot,” a “guru”, or just an “old guy,” and if you’re really lucky, they call you a Nobel laureate.

It’s hard to imagine Wnt without Walter, or Walter without Wnt, or to believe that the Birchmeier genome could produce anything other than a scientist. But phenotypes sometimes take a while to emerge. Walter first earned a diploma in church music, then financed his later studies by teaching a class of 49 unruly fifth-to-eighth graders in a Swiss middle school. Not many institutional directors have those items on their CVs. Maybe they should – you learn some useful skills.

Scientists are an unruly bunch, too, that sometimes need a firm hand to herd them along. And playing the organ requires both hands and both feet. If you have to deal with scientists, physicians, state and federal governments, and the changing landscape of health care, it’s good to be able to do four things at the same time.

A unique route to the MDC

Walter would want this article to be devoted to science, and so it shall be – but first a bit of context. His papers from the 1970s and 80s form a trail from Zürich to the U.S., then on to Tübingen and Essen – like getting the most out of a scientific Inter-rail pass. Then came a call from Berlin-Buch, where a new institute was taking shape on the site of the former Academy of Sciences of the GDR. Walter was offered a lab and a position as Coordinator, then Deputy Director; it was time to set down some roots.

“Right away he was recognized as someone who pursued scientific work of the highest quality and expected the same from his colleagues,” says MDC founding Director Detlev Ganten. “He developed an excellent rapport with all the former staff – from the directorate to the technical personnel. Being Swiss probably helped; he could stand aloof as the East and West settled their affairs. We had immense mutual respect and complemented each other very well.”

In 2004, Detlev was invited to head the Charité, and Walter became Scientific Director at the MDC. There was a lot to do: BIMSB needed to hit the ground running, and the partnership between the MDC and the Charité needed a work-over. With many colleagues, Walter planned a joint Experimental and Clinical Research Center. The project turned out to be the perfect preparation for a new grand scheme: to create the Berlin Institute of Health. That task falls to Walter Rosenthal, who became Scientific Director of the MDC in 2009.

Walter Birchmeier’s administration placed an enormous emphasis on the quality of MDC science, from which all good things would follow. It was the key to attracting excellent new group leaders and students and securing funding. And studies had shown that the best strategy for turning scientific discoveries into biomedical applications was to make strong investments in basic research. Once again, Walter held his own group to the same standards. Most days he slipped away to make at least a brief appearance in his lab, to the delight of the scientists and the consternation of his administrative assistants.

His leadership of the institute has paid off in many ways. The marks for MDC groups have steadily risen in external reviews. And the institute’s international reputation has soared; a 2010 study by Thomson Reuters ranked the MDC 14th in the world in the fields of molecular biology and genetics, making it the only German institute in the top 20. This was a great achievement by any standards – especially for an institute that was not yet 20 years old. Walter’s lab, and many groups established under his tenure, helped put it there. But passionate scientists don’t rest on their laurels; the minute Walter handed over the reins of the MDC to his successor (likely even 5 minutes before that), it was straight back to the lab.

“Retire?” he says, looking scandalized. “How can I retire? Klaus Rajewsky is still putting out high-impact papers, isn’t he? And he’s five years older than me!” (Sorry, Klaus… Readers, please don’t do the math.)

In pursuit of a molecular pathway

Trying to summarize Walter’s work in a short text is as hopeless as trying to see his native Switzerland from the window of a bus in a single day, but it would be a shame to miss the highest peaks. PubMed lists him as author on 195 papers; 33 of the articles are reviews, the best place to hear his stories straight from the horse’s mouth. Here we’ll introduce a few topics that appear again and again, like the recurring theme of a Bach fugue.

Walter has always been interested in factors that help arrange cells into tissues and organs and hold them there. During embryonic development – and cancer – cells sometimes free themselves to embark on migrations. This shift is managed by complex biochemical signals that also affect how cells specialize. A handful of basic signaling pathways – including, of course, Wnt – govern these processes in different ways in different tissues. Their activity and effects change during cancer and other diseases; understanding how that happens can help explain how the diseases arise in the first place and sometimes yield potential therapeutic targets. The group’s work has helped identify the complex sets of molecules involved in passing signals along, how they interact with each other, the genes they activate, and their ultimate biological effects.

More than a decade before his arrival at the MDC, Walter had begun taking a look at the behavior of cells called fibroblasts. These types of cells exhibit migratory behavior, for example during wound healing, but their chief function is to create factors that bind cells into larger structures and tissues. They contain “stress fibers” that expand and contract, helping with the cells’ crawling behavior as well as their structural functions. Until 1980, the composition of these fibers was unknown. That year Walter’s group at the ETH Zürich used fluorescent dyes to show that they were probably composed of actin fibers and contracted through interactions of actin and a “motor” protein called myosin. The work was published in Cell.

Three years later Cell accepted another paper from the group, now located at the Friedrich Miescher Laboratory of the Max Planck Society in Tübingen. This time the topic was cell-cell adhesion. The lab showed that a particular monoclonal antibody, which recognized a protein called E-cadherin (at the time known under the name uvomorulin) on the surface of epithelial cells, could disrupt and loosen the adherens junctions that cement different cells to each other. The work established a new method to identify proteins within cell-cell junctions.

In 1989 the group showed that the antibody, which binds to uvomorulin, caused epithelial cells to leave the tissue and migrate, invading foreign tissues including, at least in the experiments, heart tissue. In the same paper, published in the Journal of Cell Biology, the group showed that epithelial cells that have been infected by sarcoma viruses become migratory. During this transformation, the cells stopped producing uvomorulin on their surfaces. Losing their adhesive properties seemed to be a key step along the road to invasive cancer.

In 1991, now at the Institute for Cell Biology of the University Medical School in Essen, Walter and his colleagues proved that a protein known as scatter factor, which strongly promoted cell motility and was secreted by cells called fibroblasts, also caused invasive behavior by epithelial cells – in fact, it was the same molecule as hepatocyte growth factor (HGF). Its gene was located on chromosome 7, in an area rich with genes involved in cell division, development, and cancer. The discovery hinted at the intricate connections between mechanisms in healthy organisms and disruptions that lead to a number of serious diseases. It was just the sort of theme that would fit in well at the new MDC.

By 1996 Walter’s lab was well established at the MDC and was digging deeply into the signaling pathways activated by Wnt and HGF. Such signals activate proteins in their target cells, often changing the activity of genes and thus cell structure and behavior. That year, the journal Nature published a landmark paper from the group on Wnt. This signal molecule usually activates a pathway that arrives at a protein called beta-catenin, which is kept locked up in a complex of proteins outside the cell nucleus until the signal arrives. Then beta-catenin is released, travels to the nucleus, interacts with transcription factors of the Lef/TCF family and activates genes. Normally cells control the molecule by blocking the signal before it arrives, or by breaking down beta-catenin before it reaches its targets. But tumors often harbor a form of beta-catenin that is too active; it has undergone mutations that block its breakdown, and it accumulates in the nucleus and other regions of the cell. Walter’s group also discovered a new protein they named conductin/Axin2; it receives Wnt signals from a molecule called APC and then binds to beta-catenin, marking it for degradation. Without this interaction, beta-catenin isn’t destroyed.

HGF activates a receptor called Met, lodged in the plasma membrane, but no one knew what happened next. In a paper in Nature, also in 1996, the lab discovered that Met binds to a particular region of a protein called Gab1, which accumulates at sites responsible for cell adhesion. Activating Gab1 with Met or by artificial means caused the cells to separate and become more mobile. In the process, they began extending tube-like structures in a pattern that resembled the formation of epithelial tissues in embryos. The work proved that Gab1 receives developmental information from c-Met and triggers a program of epithelial specialization.

By 2001 the lab had developed mice with conditional mutations. This was a new genetic engineering technique developed by Klaus Rajewsky and his colleagues at the University of Cologne; it allowed the removal of a molecule like beta-catenin in specific cells and tissues at precise times. Like many signaling molecules, beta-catenin has many important functions across the body; conditional mutagenesis permitted studying its activity in very specific contexts. Walter’s group used the method to deplete beta-catenin just in the skin and hair follicles as these tissues formed in the embryo. In another Cell paper, the lab determined that cells were no longer differentiating into the structures required to produce hair follicles. Without beta-catenin, cells weren’t getting the necessary developmental signals; instead of forming follicles, they became surface skin.

In a 2007 paper in PNAS, Walter’s group reported on more functions of the Wnt/beta-catenin pathway – this time in the formation of specific regions of the heart. This organ begins as a tube-like structure and is guided through a series of transformations that make it asymmetrical, with a larger left side. The lab discovered that signaling through Wnt and beta-catenin needed to be active in particular regions for this to take place. Another pathway, triggered by a molecule called Bmp, seemed to be active in other regions. Producing heart structures with the proper form and shape required that different signals be received at precise times and places, in a highly coordinated way. In another paper the same year, published in the Journal of Cell Biology, the group showed that the HGF receptor Met was essential during the process of healing skin wounds.

Walter’s group continues to study the interactions of these pathways in other tissues and contexts, including defects in signaling that support the development of tumors. Cancer can arise when stem cells don’t follow their normal path of differentiation but are diverted along another route. The most aggressive tumor cells resemble stem cells and take advantage of signaling pathways to survive, reproduce at a high rate, and develop in unusual ways. In a paper published in the EMBO Journal this year, Walter and his colleagues showed that tumor cells in the salivary gland exhibit high Wnt and beta-catenin signaling, combined with low Bmp signaling. The Wnt signals activate a molecule called MLL. This protein remodels the knotted structure of DNA in the nucleus and switches on a number of genes associated with cancer.

An affair of the heart

These papers – and nearly 200 more – mark significant milestones in a career worth stepping back from for a bit of perspective. Walter’s work reflects decisions made early on: to focus on a central biological mechanism and follow it wherever it might lead, into a range of tissues and disease processes. Only then does the true biological meaning of something like the Wnt signaling pathway become clear, showing us how a process that evolved long ago in ancient cells has been tweaked in many different ways to guide the development of diverse organs and processes in complex animals. The lab continues to explore this system in new contexts; stay tuned for more discoveries about the functions of Wnt and Met signaling in development and disease.

In retrospect it’s a straight and logical route, but along the way some interesting side-roads have appeared. Walter has never hesitated to make small detours to see where they might lead. He admits that some things never panned out, but in 2004 one of those side-trips turned out to have an immediate medical impact, saving lives and becoming a great example of the MDC’s approach to molecular medicine. The story appeared in Nature Medicine that year and was widely covered in the popular press.

Walter’s abiding interest in cell adhesion had led the group to knock out molecules that help link neighboring cells. The lab produced a strain of mouse without one of these molecules, called plakophilin 2, a relative of beta-catenin, and made a surprising discovery: the animals died mid-way through embryonic development due to heart defects. Ludwig Thierfelder, a clinician and researcher working on the heart, had a lab right down the hall. Walter paid a visit and posed a simple question: Do any human patients with heart defects exhibit mutations in plakophilin-2?

It turns out that they do: About 30 percent of people who suffer from hereditary forms of arrhythmogenic right ventricular cardiomyopathy (ARVC) have such mutations. People with the condition experience rhythmic disturbances in their heartbeats and have a high risk of sudden death. There is a solution – implanting a defibrillator – but until 2004 it was difficult to diagnose the disease. The discovery by Walter’s and Ludwig’s labs made it possible to screen family members at risk and identify those with mutations in plakophilin-2. They could be given defibrillators, and this intervention has saved many lives.

How to address a Guru

I haven’t mentioned one paper – all right, maybe it’s one of those urban myths of science – about the migration of a colony of microbes through a musty organ pipe (low B-flat) in a Swiss church. You can ask Walter Birchmeier about it the next time you spot him in the lab, or steering his bike across the campus. Also be sure to ask about the last concert he attended, or the book that he’s currently reading. The answer will always be interesting. And then take a minute to imagine what the MDC – and science – would be like if he had remained in his organist’s loft or become stranded in a middle-school classroom.

If you aren’t quite sure how to address him, here are some choices: Grand Master or Guru, or perhaps the Lord of Wnt. If you prefer a literary reference, “Oh Captain, my Captain” would certainly be appropriate. Maybe we can get him knighted, in which case he’ll be “Sir Walter.” Until then, just “Walter” will do.

– Russ Hodge

(with thanks to Daniel Besser for his considerable help)

The Kansas Creationists vs. the Evolutionary Atheists

Leaving Flatland and its flawed debate

Note: This article is being published under the same title in the current edition of the magazine Occulto. Hodge, Russ. “The Kansas Creationists vs. the Evolutionary Atheists.” Occulto Issue e, Summer 2013, Berlin. Edited by Alice Cannava. ISSN 2196-5781. pp. 64-85. You can obtain a printed copy of the journal at http://www.occultomagazine.com

My daughter was leaving Germany for a year to explore the American half of her genome. Rather than the liberal Kansas town where I went to school, she was headed for the southern half of the state, colored deep red on political maps. “You’ll be fine if you don’t discuss politics, religion, or guns,” I advised her. “Or Charles Darwin.” His name alone provokes a strong reaction in my home state, as I found out after writing a book on evolution.[1] Everyone has an opinion, and you don’t have to pass a test before you jump into a scientific debate, which gives it the character of a barroom brawl. The topic leaves few Kansans sitting on the fence. Maybe because we use a lot of barbed wire.

Barbed wire was patented in 1867, nine years after Darwin and Wallace foisted evolution on the world. Out on the prairie, farmers began fencing off their lands, threatening the culture of cowboys and cattle drives. In 19th-century Kansas, barbed wire caused a far greater ruckus than evolution, although the debates didn’t drag on long because the two sides were well armed.[2] In Europe the theory caused more consternation, but discussions were fought with hot air rather than hot lead. Nor did Bishop Wilberforce run a cattle stampede through Thomas Huxley’s garden. You could destroy a farm that way, but it didn’t work with intellectual property.

Barbed-wire fences broke up the prairie and metaphorically divided the population over deeper issues: Would all the unsettled land be sold? Who had the right to use it? There seemed to be two clear sides, but only by leaving Native Americans out of the discussion. Tribes had diverse views of the relationship between people and land that would have added more dimensions to the debate.

Spatial metaphors are a type of trope – a wide range of rhetorical devices whereby words are used in unusual ways, often to describe one thing in terms of something else.[3] They are fundamental to the way we think, learn, and communicate. Tropes do not simply rename things, but rather combine complex networks of associations that correspond at some points and diverge at others. They often remain hidden as we communicate, causing misunderstandings that are hard to figure out. They have a powerful influence on the way we think, especially when we don’t realize they are there. Some are so basic, stylized and routine that they limit our imagination and our ability to see things in other ways. People often transfer the wrong properties of a trope to its target, expecting two systems to behave the same way and missing the differences.

Some tropes are obvious in everyday language, making them fairly easy to detect and analyze – take, for example, the old adage, “Every debate has two sides.” It reduces many issues – whether over barbed-wire fences, science, or “red-blue” divisions on a political spectrum – to the shape of a coin, implying that you have to choose. But most topics are far more complex. Why not think of a shape with more sides – perhaps six, like a die, or a ball that can come to rest on any point and is easily nudged to another?

But the two-sided model completely dominates the way most people think of debates about evolution: as if the world is firmly divided into two camps, science and religion, entrenched and fighting a war. The real situation is more interesting: Most religious denominations accept evolution, and many scientists have religious beliefs. But things got off on the wrong foot in the very first public forum in 1860, where religious fundamentalists saw the issue as a battle between universal truth and everything else, and they have controlled the form of the debate ever since. It’s too bad: fundamentalists have discovered no new facts to support their position in all of that time, while evolutionary science has made extraordinary progress. The theory is a scientific idea and should be discussed that way, rather than being hijacked and carried off to the foreign land of theology.

Even if it’s a bad metaphor, scientists could take more advantage of the coin. You could print competing hypotheses on its two sides: “Species arose through a long process of evolution,” versus “Species were created over a six-day period about 6,000 years ago.” Every day this coin is flipped by geneticists, chemists, physicists, doctors, geologists, paleontologists, mathematicians, informaticians, and researchers from other disciplines. They find new ways to test it all the time. There ought to be plenty of evidence for a sudden burst of creation 6,000 years ago, or at least evidence to debunk evolutionary theory, but the coin lands with Darwin’s head pointing up every time. Even the strongest beliefs haven’t flipped it over. That doesn’t stop people from hoping it will land, just once, on the other side. But prayers can’t make evolution go away, or even improve the health of the royal family in Britain.[4]

The two-sided debate has become such a social institution that people forget it’s a trope, just one of many ways of looking at things, and take it to represent something real. When that happens tropes move into a cognitive underground where they powerfully influence our thoughts, discussions, and perceptions of many things, and they become devilishly hard to get rid of. It’s hard to imagine that these stereotyped collisions between religious fundamentalists and scientists will go away.

Even so, I think the debate is about to change. The cause won’t be a miraculous conversion of the entire planet to some form of religious fundamentalism, or a mass exodus into atheism. Instead, I believe that science is on the verge of a conceptual revolution that will completely discredit simplistic debates. For a long time now words like “species”, “genes” and “natural selection” have been tossed back and forth, as if we are talking about the same things. I am not sure how fundamentalists think of these scientific concepts, but scientists have been steadily changing the sophisticated tropes and models that underlie them. A common vocabulary has masked a much deeper conflict; we are not at all talking about the same things.

Now, I believe, science is on the verge of a conceptual revolution that is changing the basic tropes by which we think of life; this new view may render the old sort of debate completely meaningless. The two-sided metaphor has always been a poor one. Discussions about evolution should finally escape this sort of intellectual Flatland and enter more profound dimensions.[5]

* * * *

Both religious and scientific explanations for the world depend on tropes and models. Scientists make specific observations and try to extract general principles that can be tested and improved. An experiment might confirm a model, or discredit it, and the results aren’t known in advance. Fundamentalists claim that some questions about life are answered in Biblical stories and others are mysteries that can’t be solved. There is no need to do experiments – which would either confirm what is already known, or the results would be ignored.

Developing large scientific models such as evolution or restricted concepts such as species begins with a lot of specific observations. Each observation doesn’t mean much on its own; the aim is to classify many into groups that exhibit similar general patterns. This resembles a trope called synecdoche, in which the features of individuals are transferred to the whole group. The next step is to test the pattern by applying it to new objects or situations. This creates a continual dialogue in which new observations force scientists to revise their general models. I’ll use a spatial metaphor and call this dual process “upward and downward” reasoning, which we use in everyday thinking as well. It’s the basis of learning, communication, and all sorts of judgments that people make.

Scientists recognize that errors can be made when reasoning in both directions. Upward reasoning can suffer from the exception fallacy: if the examples you start with are unusual, you may arrive at the wrong general principles. If you then apply the principles too widely to the wrong things, you commit an error in the downward direction: the ecological fallacy. Upward-downward thinking in our daily lives can suffer from the same errors and lead to problems such as racist stereotypes. So scientists continually check their assumptions and conclusions, revising models that experiments fail to confirm. Fundamentalists deny that these types of fallacies exist in their own thinking, but are perfectly willing to look for them in science.

Understanding a scientific model requires understanding both parts of the process. To talk about a species, for example, you need to know how researchers assemble individual organisms into a group, make decisions about its common features, and apply them to new examples. I don’t know what the meaning of “species” is for a fundamentalist – if you deny the validity of the reasoning process by which scientists made up the term, you can’t be talking about the same thing.

Researchers make their models available to the world to allow them to be widely tested and to ensure that they aren’t distorted by a scientist’s subjective beliefs. At some point a model has been put to so many tests in different situations that people begin to treat it as a sort of “law”. Even then we know that it is a product of human thinking. Evolution is so interesting because its view of life exposes both the power of tropological thinking and its limitations, when the subject is an open-ended biological system that will always produce surprises.

Understanding this problem may affect the way we construct models in science and other systems. It will not discount the ability of current models to predict the function of a human gene by studying a related molecule in another species, or to manipulate organisms through genetic engineering. At some point, however, progress may be held back by mental constraints that must be understood before they can be overcome. Science already recognizes that the problem exists: Double-blind experiments are necessary because expectations and models have an unpredictable influence not only on our interpretation of data, but on perception itself.

* * * *

When evolutionary theory appeared, it moved into a neighborhood of older concepts shaped by tropes and other mental models. The theory was communicated in common words and metaphors that were strongly associated with other things. It should have caused people to reevaluate a much wider set of assumptions, and it finally has – but the process has taken 155 years. At the time, the opposite happened, and the theory was forced into a network of very old beliefs.

For example, proposing that complex organisms could arise from simpler forms sounded like “progress”: a huge political and social theme during the Industrial Revolution. Many readers immediately tried to use evolution as a metaphor for race or class relations within human society, or to confirm the old, dearly-held view of man’s dominion over nature. Both efforts were doomed to failure: social models were tropes themselves, based on old notions about nature that had now become outdated. Social issues became a metaphorical battleground between old models of life based on religion and the new theory. No one realized that the real fight was happening at a meta-level of tropes. It was as if two people were playing a game, using the same board and pieces, but following completely different rules. It’s no wonder that you could never bring the game to a satisfactory end.

Now I think biology is in the process of toppling one of its central metaphors, in a way that may also have wider social effects. This is happening partly because of advances in technology that provide a much clearer view of living organisms and the complexity of their interactions with the environment. One result is to provide a sharper view of evolution, and how it differs from some of the cultural metaphors that have been holding it down. The change is appearing in bits and pieces and its full nature hasn’t been clearly articulated or even widely perceived. It will affect the way we understand humans, nature, and society. But this time we shouldn’t make the same mistake by applying the change inappropriately to other areas.

To make the case I will first provide a very brief sketch of evolutionary theory; secondly, point out a few issues that are central to it but are hard to deal with using current models; and finally, try to link what is happening to more general processes that underlie our construction of cognitive models.

In a text of this length it is impossible to properly ground all the philosophical, linguistic, cognitive and biological concepts that support its view of the role of tropes in cognition and science. Those arguments derive from a much larger conceptual framework that I will articulate in a future project. Here I will provide an application of the method to a debate that is currently, almost universally, carried out at a level that is much more superficial and naïve.

* * * *

“Evolution is so simple, almost anyone can misunderstand it,” said philosopher David Hull.[6] Darwin and Wallace drew on straightforward observations that can be made anywhere, and interpreted them in a way that is closely linked to everyday, “common-sense” ways of thinking. The complexity of the theory lies in the way they abstracted a model from these observations, then extended it far into the past to show how a few basic principles suffice to produce new species.

The outline here covers four basic principles. The most general is common to all natural sciences and distinguishes them from religion and other styles of thought. Researchers make a fundamental assumption: “We should understand states of the world that we cannot directly observe on the basis of what we can observe.” This can be seen as a derivative of Occam’s razor, which in its original form has been translated as, “Plurality must never be posited without necessity.”[7]

The razor doesn’t mean that the universe is inherently simple; instead, it recognizes that views of the natural world are the product of philosophical and methodological choices, and one shouldn’t make up more hypotheses than are necessary. If a single, global force (gravity) can account for falling apples and the motion of the planets, we shouldn’t make more assumptions and suppose that each object is being pushed around by its own personal force, without evidence. By definition this approach discounts miracles such as the idea that the universe was created 6,000 years ago, in six days, which presupposes a suspension of the current forces we observe at work.

A model may posit forces that can’t be observed (such as gravity), but which have predictable effects that can be tested in observations or experiments. If galaxies are racing away from each other, their trajectories can be projected backwards in time to produce the notion of the Big Bang, or forward to produce a vision of the future of the universe. The same rationale yields an explanation for geological formations and a likely age of the Earth. Evolution is the biological equivalent: it starts from observations of current life and abstracts rule-governed processes that explain the origin of diverse species.

To conceive evolution, Darwin and Wallace wove three basic observations into a system that respects this fundamental principle of science. First: species constantly undergo variation. Offspring are not identical to their parents or each other (unless they are twins or clones). Variation can be directly observed in every species and is rarely an issue in popular, dualistic debates about evolution. The theory partly hinges on the rate at which it happens, which can only be determined using scientific methods; the results have been consistent with evolutionary predictions.

Most variation arises because of natural imperfections in biochemical systems. DNA undergoes many types of changes: through “spelling errors” (mutations), or when sequences break off longer molecules during the creation of egg and sperm cells. Cells can repair the damage, but material can move from one chromosome to another in a process called recombination. Other errors include duplications of DNA sequences, whole chromosomes, and in some cases an entire genome. Genetic material can also be lost. Any of these alterations can result in measurable physiological or behavioral changes in the organism as a whole – its phenotype. Such changes happen to some degree in every child; we are all X-Men.

The second observation was that some variations are passed down to an organism’s offspring through a process of heredity. The main reason is the conservation of specific DNA sequences from parents to their offspring, but some other types of biochemical changes are passed along as well. Heredity is not a deterministic system because first, each of us inherits a unique genome – we are all experiments, venturing into a landscape that has not yet been explored by evolution – and secondly, most types of behavior and many aspects of a body’s development are shaped in a dialogue with the environment.

The third factor in evolution, natural selection, is usually wildly misunderstood. Right from the start it was labeled with a misleading trope – “survival of the fittest” – that scientists have been trying to peel off ever since. It was coined by Darwin’s contemporary Herbert Spencer, a philosopher with the social status of a movie star. One of Spencer’s main interests was social progress, and he hoped that the new theory would shed light on cultural development. Religious and political conservatives seized on his words and applied their own tropes in interpreting “fittest” any way they liked – to keep humans at the top of nature, near God, or the wealthy or powerful at the top of society. They used it to justify racism and its nastiest form: eugenics movements that sought to “improve” humanity by sterilizing or killing the ill, the handicapped, prisoners, “promiscuous women,” Jews, and anyone else that those in power didn’t care for.

Darwin never liked “survival of the fittest” because he recognized that biological concepts could only be applied to culture in a metaphorical way that mangled what he meant. Finally, grudgingly, he used the phrase – probably out of a wish to appear conciliatory – but only after redefining it and stripping it of its moral and social connotations. The translation in strictly Darwinian terms sounds circular and almost silly: “survival of the survivors,” or “survival of the reproducers.” In other words, current species are the descendants of animals that managed to reproduce more than others. If you couldn’t pass along your genes, a lot of your hereditary material would disappear in favor of those who could. And if you didn’t reproduce as much as your neighbors, and nor did your descendants, and this happened over vast periods of time, then eventually your genomic contribution to the future of your species would dwindle and perhaps even disappear.

Darwin had noticed that many factors could give an animal a reproductive edge over other members of its species: differences in fertility, an organism’s ability to survive long enough to reproduce, preference for certain mates, etc. Events that struck a population equally, like random accidents, wouldn’t have much effect: The diversity of a species would undergo slow, random changes in a process called genetic drift. That itself can produce different species. If two subpopulations are isolated from each other long enough, drift may eventually change their genomes to an extent that they can no longer mate to produce fertile offspring.

So selection begins with any trait that gives an organism a reproductive edge, increasing its frequency, compared to other variants, in the next generation. If offspring with the trait also produce more children, and the bias continues over many generations, the result may be natural selection. It always occurs as a function of a dialogue between the features of an organism and its environment; identical animals don’t always do equally well in different environments. If you could measure the frequency of particular variants of genes in a species before selection happened and then again afterwards, most would exhibit random drift. But variants in an animal that had undergone “positive” selection would show a statistical increase, while forms that lower an organism’s reproduction would become rare or even disappear.
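The statistical logic described here can be made concrete with a few lines of code. What follows is a minimal, hypothetical sketch in the spirit of the Wright-Fisher model from population genetics – it is my illustration, not anything from the papers discussed in this text. With no reproductive edge, a variant’s frequency simply wanders (drift); with a small edge, the bias compounds over many generations.

```python
import random

def wright_fisher(pop_size, p0, advantage, generations, seed=None):
    """Track the frequency of one gene variant across generations.

    Each generation, every offspring's variant is drawn at random from
    the parental pool, weighted by a (possibly zero) reproductive edge.
    The sampling noise itself is genetic drift.
    """
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        # Chance that an offspring inherits the variant, biased by selection.
        weighted = p * (1 + advantage)
        prob = weighted / (weighted + (1 - p))
        carriers = sum(rng.random() < prob for _ in range(pop_size))
        p = carriers / pop_size
        if p in (0.0, 1.0):  # variant lost from, or fixed in, the population
            break
    return p

# Pure drift: the frequency wanders at random and may fix or vanish.
drift_end = wright_fisher(500, 0.5, 0.0, 2000, seed=1)
# A small reproductive edge (here 2%), compounded over generations.
selected_end = wright_fisher(500, 0.5, 0.02, 2000, seed=1)
```

Running the two cases many times shows the population-level character of selection: any single run is noisy, and only the distribution of outcomes across many runs reveals the bias – just as the text notes that selection is a statistical effect, not a prediction about any one individual.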

Today the signature of these events can only be detected by studying the frequency of particular DNA sequences over time. Here, too, lies the signature of a trope by which the process is usually oversimplified in our imagination: “fitness”, or selection, isn’t something that happens to a single individual, or even a single couple, or a single generation. Instead, it is a population effect that may require thousands of generations, or however long it takes to create a new species. The change usually takes place in multiple family lines. What happens to an individual organism plays a role, but its impact on evolution is statistical, spread out over vast periods of time. One can observe individual advantages in reproduction, then extrapolate them into the past and future in an “upward” style of thinking. But one can’t reason back “downward” to make predictions for specific individuals, which might die in accidents or suffer from other random events. It’s also important to note that a reproductive advantage passes along an organism’s entire genome – not only factors that may support the “edge”, but all of the other characteristics it passes down.

An organism’s reproductive ability can be influenced at every level – from single letters of the genetic code, the behavior of molecules within its cells, the function of its organs, its thinking, and its overall interactions with the environment. It comes into play at every phase of a lifetime – from its origins as a single cell, through its development in an egg or the womb, its infancy, childhood, or adulthood, up to the end of its fertile phase. Usually selection stops there, but it might continue in cases where organisms contribute substantially to the survival of their “grandchildren”. Any difference that affects an organism’s phenotype can influence selection, given a permissive environment.

Variation, heredity, and reproductive differences are directly observable and – along with the more general assumptions of science – form the basis of evolutionary theory. The first two factors are rarely called into question; selection is more contentious, but mostly because the debaters are using different tropes.

* * * *

The power of evolutionary theory lies in the way it has spawned millions of hypotheses that continue to be tested in countless ways. Even this hasn’t been convincing to “Young Earth” fundamentalists, who have discarded the basic scientific premise of a continuity of natural forces in favor of a miraculous act of Creation that took place about 6,000 years ago. Their rationale is based on a faith in what they call a “literal” reading of the book of Genesis, but each fundamentalist decides what should be read literally and what not, in response to other cultural influences – which makes today’s fundamentalism much different from the forms practiced in the past. The written record of languages – easy to discover through a trip to any library – makes it easy to discard the Bible’s story of language creation (the “Tower of Babel”) as a fable. But the creation of species, recorded in fossils, and recounted in the same book, is regarded differently – why?

Other challenges to evolutionary theory are grouped under the popular label “intelligent design.” This is indistinguishable from a religious philosophy known as Natural Theology,[8] which dominated thinking about life until the development of evolutionary theory. Its major argument holds that living systems appear so complex and well-structured – usually by analogy to a machine such as a clock – that they must have been created by some sort of supernatural intelligence.[9]

Darwin grew up in this tradition, but several major conceptual flaws convinced him to reject it in favor of evolution. It “cherry-picks” from empirical observations of life: Anything that can’t yet be explained is assigned to the domain of miracles, including biochemical processes discovered through strictly scientific methods. Once scientists provide a reasonable account of the origins of these processes, or demonstrate that some fossil species didn’t arise spontaneously, the intelligent design community shifts its focus to the next unsolved problem. Michael Behe, a biochemist who has become an advocate for the philosophy of intelligent design, has consistently taken this strategy.[10]

Another flaw is the difficulty of distinguishing between “designs” and the structures or patterns that arise due to physical and chemical laws. The spiral forms of snail shells and the tornado-like pattern of water as it moves into a drain might look like supreme achievements of an intelligent architect, but both can be explained by applying models of biological or physical components and the forces acting on them. The body of every human child is an amazing structure that arises from a single cell. Usually this process is explained by reference to biological events, rather than constant, supernatural interventions – so why not the origins of species?

Finally, even if scientists were to stumble upon some unmistakable “signatures of a designer,” how many such designers are there? Each molecule, cellular structure, organism, or species might have its own. Claiming to see the hand of a single designer in different natural phenomena is the clear sign of a particular religious agenda, and today it is usually the attempt to thrust a Judeo-Christian deity into the science classroom.

* * * *

Evolutionary theory is not yet complete because some aspects of living systems have been impossible to explore. Some of these problems represent a lack of technology; others, I think, are inevitable when human minds construct a model and try to apply it almost universally to the world.

The first area of incompleteness has to do with evolution’s portrayal of the environment. Darwin was the first ecologist: He demonstrated that the fates and forms of species were thoroughly intertwined with each other and with external factors; that each species exerts an influence on others; and that overpopulation and competition for resources play a role in natural selection. Organisms don’t change due to purely internal factors; they arise and are shaped through a complex, fluid dialogue with everything around them. This includes every other species they interact with and other aspects of the environment such as temperature, the amount of precipitation, sunlight, seasonal changes, etc. It also includes interactions at the microscopic scale. Recently, for example, scientists have caught the first glimpse of the microbiome:[11] the extraordinarily complex, dynamic populations of bacteria and viruses that inhabit our bodies and the environment. This opens the door, for the first time, to understanding their influence on our evolution (and vice versa) and on human health.

Single molecules can promote or hinder an organism’s survival and reproductive capacity, so they, too, contribute to natural selection as they carry out functions in cells. Here they will serve as an example of a gap that remains in our understanding of the interplay between organisms and their environments.

Nearly every biological process depends on mechanisms by which cells detect and respond to change. One such mechanism involves signaling cascades that typically start when a molecule binds to a receptor protein on the surface of the cell. The receptor undergoes a structural and chemical change that causes it to bind to other proteins, subsequently changing their structure and behavior. This effect is transferred from one type of molecule to the next, often ending with the transport of a protein to the cell nucleus. There it helps change the overall pattern of active and silent genes in the cell, altering the population of molecules it contains, its biochemistry, and its responsiveness to other signals.

A particular signaling cascade requires certain molecules to be present or quickly produced in response to a stimulus. They need to be located in the right regions of the cell: microenvironments that must also be properly configured to respond to the signal. Signal molecules have to be present in sufficient quantities, and they are usually bound to complexes (sometimes consisting of dozens of other molecules), whose components also need to be present in sufficient quantities. Some protein complexes are “prefabricated” and localized in particular microenvironments, where they can be “switched on” through the addition of a single component.

Passing a signal requires that a protein’s atoms have a particular physical architecture. This requires the help of still more molecules that help it fold, or “decorate” it with complex sugars, or bind it to a membrane with a particular composition of fats and other molecules, etc. This takes place against the background of multiple signals that may carry conflicting “instructions” and compete to push the cell in different directions. By adopting different conformations, or docking on to different complexes, a single molecule can act as a “switching station” to route different signals in various directions.

The quantities and states of all the other molecules in a microenvironment influence whether a protein receives a signal and how the “information” is passed along. Those populations determine whether the protein will bind to its proper partner; too many copies of another protein may change its preferences (affinities) for other molecules. If everything works and the protein does transmit the signal, the contingencies must also be met by the next molecule, in a neighboring microenvironment, so that it can be passed farther.

Microenvironments both constitute the cell and are shaped by it. They are dynamic, constantly requiring the production, refinement, and delivery of new molecules. Events within them reach beyond, activating new genes, silencing others, and causing changes across the entire system in intricate feedback loops. Molecules, microenvironments, and entire cells continually undergo fluid transitions – rather than adopting a clearly definable state – in which adjustments are constantly being carried out. At any given time, some proteins have achieved the form necessary to receive and pass along a signal; others are being processed; still others are being translated from RNA molecules; RNAs are being transcribed from genes at a particular frequency, etc. Every protein in a signaling cascade is undergoing similar transitions in its chemistry, form, and quantities. So the success of a signal depends on the attainment of tipping points: changes from conditions under which a microenvironment is not yet ready to receive a signal to conditions which permit it.

Until very recently it has been impossible to capture a remotely adequate census of microenvironments or the dynamic nature of their components. As a result, proteins have generally been described as metaphorical actors – like telling the history of a war only from the perspective of generals. Some do have powerful roles, as clarified through experiments that change or remove them, but such experiments usually involve hundreds, thousands, or millions of copies of a particular molecule in highly standardized microenvironments. What is really being described is collective behavior, averaged out in a statistical way to make a model that is then applied to single molecules, in microenvironments where the major contingencies have been met.

Such descriptions aren’t perfect; they rarely describe the behavior of any single molecule, and they don’t have to. This inexactitude isn’t just a by-product of gaps in technology. Evolution predicts that it must be an inherent feature of cells. Life is constantly subject to variation and unpredictable events, so cells and their microenvironments have to have a certain tolerance for them. Most of these systems exhibit a robustness by which one molecule can step in for another, or some other “backup” system comes into play – evolution has favored them. At the same time, cells can’t tolerate everything. So far it has been impossible to define precise boundaries of permissiveness and intolerance in their microenvironments.

The same principles that govern proteins and their surroundings apply to all scales of biological organization. Simply by living – using resources and producing waste products – a cell changes the environment for itself and everything around it. In a complex organism, cells build higher levels of structure and tissue to create a body that is likewise in a fluid state of change, constantly adjusting to internal and external changes. There is an upward-moving causal chain whose restrictions are most evident in diseases where events triggered by specific molecules – in the context of their microenvironments – disrupt the body as a whole. Such upward causality participates in every aspect of growth, activity, and physiological processes such as digestion.

This is dramatically different than the common concept of environments as large external spaces in which organisms interact with each other, and where causal forces work mainly downward. That concept is also appropriate: temperature and other external factors (such as the availability of specific types of food) reorganize biological structures down to the level of molecules. But a better definition of the evolutionary environment is to imagine a succession of fields at all scales in which biological activity has causal, fluid effects in both directions, upward and downward.

One fascinating “downward” causal chain can be found in the process of thinking, which may create a new biological environment that can affect all lower levels of biological structure. Suppose I interpret a phrase of music on a bowed instrument. That interpretation is a personal construct developed from years of experience, learning, and aesthetic tastes that constantly move back and forth between mental and physical domains. My conception of it somehow triggers specific types of motor activity across the body: muscles in the hand holding the bow do something very different than my fingerings on the string, while remaining highly coordinated. Playing music produces new cellular signals and the activation of new genes. At the same time I remain highly responsive to external feedback: feeling an irregularity in the surface of the string, noticing the expression on a listener’s face, or hearing the behavior of my fellow musicians. Thoughts, intentions, and social interactions create and constantly reshape environments for biological activity at every scale.

* * * *

This much more fluid, multi-scalar view of biology shakes up some central metaphors by which we have described living systems and the models we use to understand them: a fusion of materialism and mechanism. Their breakdown will significantly alter the way we think about issues like genetic determinism, states of health and disease, and large models such as evolution.

Materialism is probably easiest to understand in contrast to another philosophical tradition called vitalism. Until the 19th century and even later, many scientists (and all theologians) postulated a qualitative difference between living things and inorganic substances. Evolution might be fine to describe everything that had happened since the appearance of the first cell, but how did that organism arise? Vitalists believed that some “spark”, energy, or force must have been necessary to create life from the inorganic world. Theologians ascribed this to a supernatural being, but it didn’t have to be one; it might be a type of measurable energy that simply hadn’t yet been detected in physical or chemical experiments. The idea attracted droves of physicists to the life sciences.

What they discovered ultimately led to the abandonment of vitalism in the life sciences. In 1828, Friedrich Wöhler demonstrated that a biological molecule (urea) could be synthesized using purely inorganic substances. In the 1950s, Watson and Crick drew on physics experiments to propose a model of DNA whereby a molecule could reproduce itself by purely biochemical means. Experiments at about the same time carried out by Stanley Miller showed that complex organic molecules such as amino acids could spontaneously arise in sterile conditions, even in outer space.[12] Miller never managed to build something as complex as RNA or DNA in the lab, but he didn’t have the time or virtually infinite resources of the early Earth. Every single molecule on the planet could be considered a chemical workbench, carrying out experiments over a billion years.

So biology chose materialism, at a time of rapid industrialization, which made it easy to choose machines as the guiding metaphor for understanding cells and organisms. The components of machines interact based on their physical composition and structures. Obviously organisms were very complex machines, but technology was becoming more complex as well. New machines provided a richer source of metaphors. With the advent of computers, people began discussing biology in terms of systems, as intricate networks of feedback loops and self-regulatory mechanisms somehow analogous to electronic circuitry.

Even with such fabulous machines on hand, the metaphor has reached its limits and, strictly speaking, can no longer be applied. One limitation should have been clear from the outset: Machines couldn’t reproduce themselves. And not even the most complex machines come close to possessing the complex, interlinked, fluid microenvironments described above. We usually design machines with rigid parts that have single, repetitive functions; if they break down, they can be fixed by changing a single part. Their components aren’t continually, fluidly rebuilt at every level; they haven’t been tested and redesigned to adapt to any contingency. Human machines are rigid and designed to operate as stably as possible under specific conditions foreseen by engineers, rather than in continually changing environments whose variations know few bounds. Applying the machine metaphor to life leads to concepts of genetic diseases, for example, in which solutions are sometimes seen as machine-like exchanges of new parts for defective ones. Sometimes that might work, but it may not – the metaphor doesn’t really apply.

Another blow to the metaphor is the fact that by nature, no two organisms are alike; variation is an inherent quality of every species, and a tolerance for unpredictability is essential to its long-term survival. That is much less true of machines, particularly in the age of mass production, where variation in a particular model is usually regarded as an accident. This will be explored in more detail in the next section.

By abandoning the metaphor of the machine, we also abandon a naïve style of hard deterministic thinking that has arisen around notions of genes and organisms. (“My genes made me do it; my genome dictates my life.”) Determinism might be appropriate in a system that works completely from the bottom up, where rigid components dictate the behavior of the next higher scale of structure, and so on. But what if the causal chain flows both upward and downward, every component is responsive to unpredictable environmental events and contains immeasurable amounts of variation, and human behavior creates new environments that shape biological activity? Causality itself is a model, usually based on the idea that one state naturally transforms into another after the application of some (model) force. It can only strictly be applied if it’s possible to define states – will it work in the context of ultimately fluid causal systems?

How could it be achieved, for example, in the case of music? To start you would have to fully describe both the material and mechanical basis by which aesthetic experience is physiologically “recorded” in the brain and nervous system. You would have to assume that internal physical structures not only underpin but cause particular thoughts. The system would have to be responsive to unpredictable effects, like an expression of pleasure or distaste on the face of someone in the audience. It’s safer to postulate a system in which unpredictable external stimuli from the environment exert a shaping influence on physical structure that works downward as well. Thoughts themselves – and their content – change the physiological substrate that permits them. Experiments in neurobiology have demonstrated that this is the case.[13]

* * * *

To survive, organisms can’t have some of the features we normally associate with machines. Every existing life form encodes at least a billion years of compromise that creates various degrees of tolerance for variation at every scale of biological organization. There are boundaries, of course: Some variants are so disruptive that they are fatal. But just as deadly is any failure of the mechanisms that tolerate variation and change.

The field of biology has had a hard time fully grasping the extent – possibly even the concept – of this variation, and this is the last “gap” in evolutionary science I will discuss. It causes a fundamental problem in defining biological objects – whether single molecules or species. I think it can be dealt with, but this will probably require a new type of model-building. That may be difficult because the problem is closely linked to more general issues of human cognition.

The link is probably easiest to grasp through a metaphor, something much simpler than a molecule or a species – let’s take the concept of a “chair”. As a child I perceive individual chairs in various contexts, do various things with them, and hear people talk about them. There is no real consensus among cognitive psychologists about what happens next, but at some point a child creates conceptual models of “things called chairs” and begins using the models to name things she hasn’t seen before. At that point other people may correct her. She has to understand that different objects can have the same name while remaining distinct from objects with another name. In doing so she integrates features such as shapes, colors, textures, functions, parts, and different materials. Other features include a lifetime trajectory that involves being built, undergoing changes, and falling apart or being destroyed.

Children don’t come pre-programmed with a concept of a “chair”; each of us builds our own in an individual, constructive process based on encounters with specific chairs. The process is highly flexible, permitting us to recognize things that don’t fit any “classical definition” of a chair – such as something with a leg broken off, or a chair in a dollhouse, or a two-dimensional stick-drawing of a chair. All of these acts are based on tropes.

Building a model for a biological entity – such as a protein, or a species – requires a similar process. After specific objects are studied, an abstraction is made to define a “class model” that includes, as far as possible, everything that belongs and excludes everything that does not. From the beginning the model is intended for refinement: We haven’t yet encountered every object that can potentially belong to the class, so it is difficult to describe the boundary conditions. And since this process is based on experience, it is inherently statistical and subjective, proposing a model that can be expanded or restricted as it is applied to new objects.
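This kind of statistical, refinable class model can be sketched in a few lines of code. Everything here is invented for illustration – the single feature (seat height for “chairs”), the sample values, and the three-standard-deviation boundary:

```python
import statistics

class ClassModel:
    """A toy 'class model' built from examples.

    Membership is statistical: a new object counts as a member if its
    measured feature lies within `k` standard deviations of the mean of
    the examples seen so far. Accepted examples feed back into the model,
    so its boundaries shift with experience.
    """
    def __init__(self, examples, k=3.0):
        self.examples = list(examples)
        self.k = k

    def is_member(self, value):
        mean = statistics.mean(self.examples)
        spread = statistics.stdev(self.examples)
        return abs(value - mean) <= self.k * spread

    def learn(self, value):
        # Refinement: each new example can widen or narrow the boundary.
        self.examples.append(value)

# Seat heights (in cm) of chairs a child has encountered -- invented data.
chairs = ClassModel([42, 45, 44, 46, 43, 47])
print(chairs.is_member(44))    # a typical chair: True
print(chairs.is_member(120))   # a bar table: False
```

The model never contains a “definition” of a chair, only the statistics of the chairs it has met – and each new encounter quietly redraws its boundaries.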

Experimentation allows science to escape the corsets of an inappropriate model. For a long time it might have been fine to think of atoms as tiny planetary systems, made of small, solid objects. But experiments forced the development of quantum mechanics, which suddenly said that objects on the human scale aren’t good metaphors for the subcomponents of atoms. Photons or electrons can’t be snagged like footballs and held onto; they may seem to disappear as they move from one place to another, temporarily converted to energy; they are always in transition.

* * * *

Let’s see where this type of thinking gets us in biology by considering one of the most fundamental components of organic life: a protein. The usual biological account of the features of proteins goes something like this: Proteins are strings of amino acids (a metaphor: they share some features of human-scale “strings” but not others). They have sequences: the list of amino acids in their order in the string (a complex metaphor with temporal, spatial, and behavioral components: you imagine traveling down a text in a certain direction and reading letters as they appear). Proteins have a complex, three-dimensional structure or architecture (which doesn’t behave like most objects on our scale, unless you’re thinking of something like jello, because proteins are constantly in motion and often reshape themselves).

They have life histories that play a crucial role in their current behavior: Sequences in genes are transcribed into an RNA molecule, which is used as a template for proteins. This simple account skips many steps of processing, each of which may change the molecule’s final form, so its history becomes encoded in its final location, structure, and functions. Proteins have functions whose names are usually metaphorical (receptors, signal transducers, inhibitors, promoters, etc.). Such names convey an initial impression of their activities, but the terms are ultimately grounded in specific chemical reactions. In describing features and functions we use letters, texts, mathematical symbols, sequences, and other tropes.

Every feature of a protein naturally appears in extensive variations that can’t be fully measured or catalogued. For example, proteins never have a static, completely immovable structure, although we depict them in two- or three-dimensional pictures that give this impression. These are symbols for a type of archetype that probably never exists, at least for any length of time.

Once the features of a specific protein have been defined, it is given a “class” name that can be applied species-wide (“human beta-catenin”). This class is further extended to other species through the concept of homology. There is a compelling evolutionary reason to do so: human and mouse versions of beta-catenin evolved from the same gene in an ancestral species. This is established by noticing extensive overlap in their sequences, and it usually allows researchers to draw parallels between a protein’s structure and function in different species.

The central problem with this type of model is that it does not (in fact, cannot) capture a full view of variation along any parameter. It’s impossible within one species, often within one organism, and sometimes even within a single cell. There are two reasons. The first is technological: until very recently, we didn’t have instruments that could identify a single aberrant molecule against the background noise of alternative forms, whether in terms of sequence, structure, or function. A single copy may have experienced some sort of accident in which a bit is cut off. Or it might have been improperly folded, or have undergone some other processing error.

The second problem lies with the impossibility of defining a consensus sequence within a species. Random mutations continually occur and produce new versions of the molecule; there is no way to predict all possible variations that may occur and yet remain functional. It is possible to predict that specific changes will eliminate the production of a molecule, but not other parameters of variation. This problem is magnified when trying to cross species boundaries.

If we can’t define the sequence of a single gene, how can we define a species? Once again, naming species is a convention – an example of reasoning from specific examples up to a general model, then down again to new examples. This doesn’t create an objectively applicable definition because there is no “consensus genome” (or any other single feature) that can be definitively attributed to a species. Even if you could carry out some sort of census of every living individual, each birth produces a unique genome with variations that might break the rules.

Instead, scientists rely on statistical definitions of objects and parameters that loosely define boundaries of inclusion and exclusion. Suppose that someone discovers a bit of tissue in the woods and asks a lab to identify the species – “Did it come from a human? A gorilla? Or Bigfoot?” A sample is sent to the lab, which produces a DNA sequence. Most likely this exact sequence has never been seen before. It doesn’t matter: It can be attributed to an existing species if the amount of variation doesn’t exceed certain statistical parameters. If it falls substantially outside the norms for humans, gorillas, and other known species, it is deemed to belong to a new one. Even then, the statistical values permit researchers to assign it to a place on the evolutionary tree (it’s from a new species of bear or hominid).
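A minimal sketch of this kind of statistical assignment might look like the following. The sequences, the 98% identity threshold, and the assumption of pre-aligned, equal-length sequences are all invented simplifications – real labs use alignment tools and calibrated, per-taxon thresholds:

```python
def identity(seq_a, seq_b):
    """Fraction of positions at which two aligned DNA sequences agree."""
    assert len(seq_a) == len(seq_b), "sketch assumes pre-aligned sequences"
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

def assign_species(sample, references, threshold=0.98):
    """Return the best-matching species if within tolerance, else None."""
    best_name, best_score = None, 0.0
    for name, ref in references.items():
        score = identity(sample, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Invented reference sequences -- real databases hold far longer ones.
refs = {
    "human":   "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAC",
    "gorilla": "ACGTACGTACCTACGTACGTACGAACGTACGTACGTACGTACGTACGTAC",
}
sample = "ACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTACGTAT"
print(assign_species(sample, refs))  # within tolerance of "human"
```

A sample that falls below the threshold for every reference would come back as None – statistically speaking, a candidate for a new branch on the tree.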

By necessity, biological models of objects ranging from proteins to species fall into the domain of a more basic cognitive issue. We construct models individually in a complex process that involves metaphors and other tropes, a process limited by experience, unable to account for all existing and permissible variations, and yet applicable to new objects in a fluid way that is, for lack of a better word, statistical in nature. Like living systems, our mental models are simultaneously individual, robust and flexible. They arise in specific contexts (the way an organism is born into specific genomic and environmental conditions) including physical laws, human beings, and other ideas, and then venture into new territory.

* * * *

What does all of this say about the future of evolutionary debates? In a sense, it shifts the focus from specific questions about biology to more fundamental discussions of scientific practices and “everything else.” It draws a closer link between scientific thinking and everyday cases in which we construct and apply models of the world – including religious systems and the learning of language. It demonstrates that there is something fundamentally flawed about applying bottom-up/top-down reasoning to open-ended systems – at least if we expect the result to be a comprehensive definition that will always work.

Models of species themselves play a central role in popular debates on evolutionary theory. Bitter fights are waged over the question of whether evolution produced new ones, or whether they all appeared on Earth “as they now are” in an instant of Creation. The second perspective is just wrong – if for no other reason than the fact that the human genome has changed immensely even over the past 6,000 years, simply by adding several billion members to the population. Modern studies of organisms show that it has to be wrong. The notion of a species itself comes from science and bears no relationship to the number of names we have for animals (or organisms) in a particular language. So any time the concept of species comes up in these discussions, people are discussing wildly different things. And they rarely mention that within science, the models are being revised to encompass a more fluid notion of variation and of populations that exhibit it in wide, unpredictable amounts.

I believe that what I have called “upward and downward thinking” – reasoning from specific examples to abstract models that are then applied to new examples – is a component of the acquisition of virtually every human concept, and that the act of acquiring it is individual and constructive. This process usually involves tropes that help individuals learn things in a multi-dimensional way, but whose application is not very well controlled. Individuals are usually left to decide on their own which features of a network of relations should be transferred from a known object to a new one. The development of a model is therefore inherently subjective, although it seems to become more objective after it has been shared, after its predictions and boundaries have been tested by many people in a wide range of contexts, and after it becomes a currency for social agreement. This process entails an inherent cognitive flaw, at least in open-ended systems like cells or the attempt to design a new type of chair, that I will explore more fully in later work.

But this account can already shift some of the rhetoric of evolutionary debates, because it discounts certain metaphors that are clearly inappropriate and no longer apply. Natural selection itself is an upward-downward concept. It can’t be considered some sort of external force – like a heat wave that scorches a population and leaves only one individual, bearing a unique form of a gene, standing. Seeing selection as a statistical event that happens to subpopulations rather than to individuals, and as something that plays out only over many generations, is a large shift from the “survival of the fittest” mentality.

I think this view of life also rings the death knell for the concept of a “selfish gene” (or “selfish allele”). A particular form of a molecule is successful only if it operates within a microenvironment that is permissive of (and possibly encouraging to) its activity. This means that many molecules must be attuned to each other to create functional environments. When selection favors a gene, it simultaneously favors all the contingencies that allow it to succeed. These are not established in advance but arise through dialogue. At the moment we are unable to survey all of the forms of a particular gene found in a population, or the variants of other genes that collaborate with it, or to establish the mutual constraints on their behavior. So while we know, at least theoretically, that genes are “social” rather than selfish, the extent of these mutual contingencies can’t yet be measured.

Evolutionary theory has proven tremendously valuable when it comes to assigning new facts a place in a model; its direct applications have also been incredibly powerful in manipulating organisms and biological systems. This has led to accusations that scientists are “playing God” by taking “artificial control” of “natural processes.” The metaphor only makes sense if you accept its religious premise; additionally, it is merely a way of dressing up the old debate between vitalism and materialism in new clothes. The same charge of “playing God” can be leveled at the inventor of a new type of chair, or anything else, unless you believe that there is some qualitative difference between manipulating living systems and “inorganic” objects (like wood, which is still organic, just no longer attached to a tree).

Genetic engineering and other activities certainly might affect human evolution by altering the environments in which we live, and they might do so rapidly by releasing organisms that reproduce quickly under particular environmental conditions. On the other hand, such changes are happening anyway as we alter the environment in other ways, deliberately or not. Our planet now hosts seven billion humans who continue to produce new babies and waste products, who continually create new technologies, and who spread both diseases and cures faster than ever before. Our own existence and behavior are integral components of the environments of the future.

The more profound issue that underlies many of these debates, I think, is fear – fear of certain types of change, especially if they seem to threaten something of value. Evolution offers no guarantee that humans will survive (nor does the notion of a “Rapture”); it also allows for changes that we personally wouldn’t care for. We can only be glad that ancient hominids didn’t regard themselves as the pinnacle of Creation and somehow nip future evolution in the bud. They could never have succeeded, nor could the eugenicists, because there is no way to prevent random biological variation and gain long-term control over the fate of our species.

The alternative to a fluid, evolving view of life is a static model, which is the gateway to a mechanistic view and thus a deterministic one. If the central metaphor for understanding life is a man-made machine, it is easy to overlook everything that is non-machine-like, particularly the interconnectedness of every level of every biological system. To cling to such a model is to continue debating evolution in an intellectual Flatland that the theory has already escaped.

I don’t think a deterministic system can survive within a much greater model that is fluid, individually constructed, open-ended, tolerant of variation, engaged in a multidimensional conversation with its environment – in other words, organic. The metaphor of a watch – or of any other machine – is far too mechanistic to describe any living system. The amazing complexity of life is not evidence of deliberate creation or intelligent design; in fact, its unpredictability is the best evidence for an ongoing process of evolution.

– Russ Hodge, April 2013


[1] Russ Hodge. Evolution: the History of Life on Earth. New York: Facts on File, 2009.

[2] Richard Rodgers and Oscar Hammerstein II. “The Farmer and the Cowman Should Be Friends” (song). Oklahoma! (musical). 1943.

[3] For a fairly complete list of tropes, see “Figure of speech,” http://en.wikipedia.org/wiki/Figure_of_speech

[4] In 1872 Francis Galton, a cousin of Charles Darwin, studied the health of the British Royal family. So many people prayed for their health, he reasoned, that if “third-party” prayer were effective, they ought to have exceptional health. But it appeared to have no effects on their longevity.

[5] Edwin A. Abbott. Flatland: A Romance of Many Dimensions. Dover Publications, 1992.

[6] Hull’s comment from a book review is widely quoted; I have not yet found the original source.

[7] “Ockham’s razor”. Encyclopædia Britannica. Encyclopædia Britannica Online. 2010. Retrieved 1 July 2011.

[8] William Paley. Natural Theology. (Originally published in 1802). DeWard Publishing, 2010.

[9] Intelligent design in court. See, for example, “Judge rules against ‘intelligent design.’” http://www.nbcnews.com/id/10545387/ns/technology_and_science-science/t/judge-rules-against-intelligent-design/. Last accessed on April 5, 2013.

[10] Michael Behe. Darwin’s Black Box: The Biochemical Challenge to Evolution. Tenth Anniversary Edition. New York: Free Press, 2006.

[11] See, for example, the “Human Microbiome Project.” http://commonfund.nih.gov/hmp/ Accessed April 15, 2013.

[12] Stanley L. Miller. “A production of amino acids under possible primitive earth conditions.” Science. 1953 May 15;117(3046):528–9.

[13] See, for example, David H. Hubel and Torsten N. Wiesel. “The period of susceptibility to the physiological effects of unilateral eye closure in kittens.” The Journal of Physiology 206(2): 419–436, February 1, 1970.