Where touch meets hearing

Touch sensitivity is hereditary and linked to genetic mechanisms that support hearing

Vision and hearing are so crucial to our daily lives that any impairment usually becomes obvious to the affected person. A number of mutations in genes governing these types of perception lead to hereditary defects in humans. But little is known about our sense of touch, where defects might be so subtle that they go unnoticed. In the May 10 issue of PLOS Biology, Gary Lewin’s laboratory at the Max Delbrück Center for Molecular Medicine (MDC) in Berlin demonstrates that differences in touch sensitivity arise from heritable genetic factors. Some of these factors support hearing as well, meaning that a single mutation may impair both senses.

There are good reasons to suspect that hearing and touch might have a common genetic basis. Sound-sensing cells in the ear detect vibrations and transform them into electrical impulses. Likewise, nerves that lie just below the surface of the skin detect movement and changes in pressure and generate impulses. The similarity suggests that the two systems might have a common evolutionary origin – they may depend on a common set of molecules that transform motion into signals that can be transmitted along nerves to the brain.

In the current study, Lewin’s lab and collaborators at medical schools in Berlin (Charité), Hannover, and Valencia, Spain (Hospital Universitario La Fe) carried out a classical “twins study” to try to discover a hereditary basis for touch sensitivity. The project compared the touch and hearing acuity of identical twins (who have identical sets of genes, including any mutations that might cause defects) with that of fraternal twins, other family members, and a wider set of subjects. They discovered a significant hereditary trend in touch sensitivity, and this correlated strongly to certain types of hearing problems.
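The logic of such a twin comparison can be sketched with Falconer's classic formula, which estimates heritability from the difference between identical-twin and fraternal-twin correlations. This is a hedged illustration of the general method only, not the statistics actually used in the PLOS Biology study, and the correlation values below are invented:

```python
# Falconer's formula: a classic (simplified) twin-study estimate of
# broad-sense heritability. Illustrative only -- not the analysis
# performed in the study described above.

def falconer_heritability(r_mz, r_dz):
    """Estimate heritability as H^2 = 2 * (r_MZ - r_DZ).

    r_mz: trait correlation among identical (monozygotic) twin pairs
    r_dz: trait correlation among fraternal (dizygotic) twin pairs
    Identical twins share essentially all their genes, fraternal twins
    about half, so the excess similarity of identical twins is taken
    to reflect genetic factors.
    """
    return 2 * (r_mz - r_dz)

# Invented example values: identical twins correlate more strongly
# on a touch-acuity score than fraternal twins do.
h2 = falconer_heritability(r_mz=0.60, r_dz=0.35)
print(f"Estimated heritability: {h2:.2f}")  # prints 0.50
```

The larger the gap between the two correlations, the larger the estimated genetic contribution to the trait.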

“We found a strong correlation between touch and hearing acuity in healthy human populations,” Lewin says. “Additionally, about one in five young adults who suffered from congenital deafness had very poor touch sensitivity.” Blind subjects used as controls, on the other hand, often had enhanced touch perception. This made sense: vision relies on photoreceptor proteins that detect light rather than motion, so mutations affecting vision would not be expected to dull the sense of touch.

During the tests, subjects were exposed to vibrations of various frequencies; another experiment had them run their fingers over a fine grating with ridges spaced at intervals of about a millimeter.

One group of subjects suffering from Usher syndrome, a hereditary condition that leads to both deafness and blindness, had a significantly impaired sense of touch. This suggests that the gene USH2A, which is mutated in the syndrome, contributes to the sensations of both touch and sound. There are likely to be many more genes that play a role in both types of perception.

The scientific literature reports about 60 mutations in known genes that have been linked to hearing impairment, and about 60 more alterations in DNA with a similar effect that haven’t yet clearly been linked to a gene. “Our next task will be to investigate some of these other cases to see if they are also correlated to problems with touch,” Lewin says. “This will give us a better understanding of the genetic mechanisms that underlie both types of perception.”

An earlier study by the labs of Gary Lewin and Carmen Birchmeier at the MDC showed that while defects in touch sensation don’t seem to cause serious problems for people, those affected may be aware of them. “A number of subjects report problems in gripping objects – they may need to watch their hands as they grasp something,” Lewin says.

– Russ Hodge

The full article can be seen for free at:

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3341339/

Preliminary draft of the Minutes from the 9,154,388,279,911,101,314th meeting of the Committee for Intelligent Design

copyright 2012 by Russ Hodge

Subgroup: Eukaryotes

Sub-subgroup: Exploratory Committee on Multicellular Organisms

Sub-sub-subgroup:  Worms

Sub-sub-sub-subgroup: Worms with a tubular form

 

Please make any corrections you see fit before we circulate the final version of the minutes.

 

Attendance: All 9,453 members of the committee were present; the Head of the Department of Viral Engineering was out with a cold and was replaced by his deputy.

The Big Boss called the meeting to order and introduced the agenda with a plea that presenters stick to their allotted times so that there would be ample time for questions. He noted, with a bit of irony, that he had over seven billion meetings with other subcommittees to attend, and these all needed to take place within the next few minutes. To a proposal that he simply expand the fabric of time to allow for special cases, the Big Boss said, “You can only stretch things so far before things get out of hand; the first four days have already expanded to fill up about 12 billion years. And in my experience, speakers are always willing to talk and talk until they fill whatever time is allotted to them. And I have a vacation planned in three days and I am not willing to postpone the flight yet again.” (Discussion closed.)

TOPIC 1:

Minutes of last meeting read and approved.

TOPIC 2:

Continuation of the discussion on Means for Creating Multicellular Organisms.

The Working Group on Worms in a Tubular Form got up to give a PowerPoint presentation with their proposals for a basic body plan. They had, however, saved the presentation in the wrong format and had to run it from an iPad provided by the Biochemistry Department. An appropriate adaptor plug had to be requisitioned from Technical Resources. Then the bulb on the projector burned out. The Big Boss tapped his fingers impatiently on the table and finally expanded time by ten minutes until things could get straightened out.

During this period the Working Group on Bacteria once again raised its motion that unicellular life was fine (supported by the WG on Archaea). The group repeated its basic objection to eukaryotes with the claim that once DNA was packed in a cell nucleus, it was especially susceptible to mutations due to the inherent flaws in physical chemistry (noting its previous objection to the creation of DNA) unless you intervened in every chemical reaction and made sure that every single nucleotide was faithfully reproduced. Its chair reported on several cases in which entire regions of DNA had been duplicated, extra chromosomes had been acquired, genes had been deleted, and so on – changes that might lead to Evolution, a process which violates the Charter on the Rules of the Universe as Decreed by the Big Boss.

The head of the committee on Eukaryotes pointed out that bacteria likewise underwent mutations – in fact, at a much more rapid pace, because the organization of their DNA into circular plasmids permitted them to swap genetic material during S-x.

The representative of the Committee on Occam’s Razor (C.O.R.) once again requested that it should be permitted to pronounce words like S-x without leaving out letters. To which the Committee on Propriety (C.O.P.) replied that in the Charter on the Rules of the Universe as Decreed by the Big Boss, S-x was entered in the Database of Dirty Words.

C.O.R.: Even when it refers to bacteria?

C.O.P.: Yes, they stick those disgusting spaghetti tube things into each other. The only way to stop it is to put them in a blender. You should have listened to us when we objected to S-x in the first place.

The Big Boss gaveled for Order and the Subgroup began its presentation.

Summary: the Working Group proposes a simple, tubular body plan with a mouth on one end and an anus on the other. The form is modular: the head region may be connected to the tail by a number of segments which, for all practical purposes, should be virtually identical. The segments have “nubs” on the side (note to Department of Terminology: create appropriate Latin term) which could be used, at a later date, as the base for filaments or appendages.

Questions raised: Why are the middle segments necessary? Why can’t the thing have just a head and an anus?

Response of the committee: Some system of legs or fibers may be desirable, in new species in the long term, for locomotion, which might be required to find food.

Question: Why can’t the food simply be brought to the worm?

Response of the committee: This is impractical because of previous decisions which made unicellular organisms mobile. As the Big Boss stated during that meeting, “Otherwise everything will have to live on its own dung heap.” And no mechanism had yet been invented to attract food to the creature intended to eat it, except for magnetism, and adding a magnet to the worm body plan and magnetism-sensing proteins to all of its prey would require an unacceptable number of interventions in existing species. And that would be forbidden by the decree under the Charter on the Rules of the Universe as Decreed by the Big Boss: “Once invented, no species may undergo significant changes outside of a standard range of deviation.”

Call for clarification by the Department of Terminology (D.O.T.): We still don’t have a technical definition of the term “species”. (Groans around the table).

The Department chair was reminded that the problem has been referred to Subcommittee.

D.O.T.: Well why is it taking them so bloody long?

(General silence; D.O.T. will be fined at the standard rate for using a Dirty Word; the amount will be determined by the C.O.P. and notification will be sent through the Billing Department. C.O.P. stated: “And this time please provide the correct account number!” The chair of D.O.T. smirked.)

Comment by the representative of the Committee on Flatworms: Why are a mouth and an anus even necessary? Why can’t the worm simply absorb nutrients through its skin, like flatworms do?

Response by the Subgroup: We’ve been over and over and over this; if you want thicker animals you have got to invent a digestive tract and some sort of circulatory system, because due to the nature of cells (casting a dirty look at the chair of the Subgroup on Cells) nutrient molecules won’t simply diffuse to the inner organs.

At this point the Big Boss remembered a note from the last meeting on Flatworms and called for a status update on the planarian problem. “The d-mned things just won’t die,” he said. “You cut off the head and the tail grows a new one. H-ll, I’ve chopped one up into about 300 pieces and each one of them grows into a whole new worm. What measures are being taken to prevent the things from just covering the whole d-mned planet?”

Response from the rep. of the Committee on Flatworms: We have put in a special application for the creation of several species of predators.

Comment from the Big Boss: “Well, just make sure the predators die. And make sure that when a planarian passes through their digestive system, it gets broken down into molecules. If the cells go through intact we’ll still be stuck with the same problem.”

Comment by the chair of the Subgroup on Dictyostelium: Why can’t the cells of the worm simply disband, seek out food on their own, and then reunite?

Intervention by the Big Boss: “Dictyostelium was an interesting experiment, but it’s hard to find the things when you need one. First of all, they’re so small I can’t see them without my bifocals, and second, you can never tell when they’re likely to group up to form a worm, or one of those dandelion-like things, and those are liable to blow up any time they get hungry.” He requests an update on the Dictyostelium Disaster from the Research and Development department.

Chair of R&D: We’ve traced the problem to an error made by the Department of Mathematics and Physics; they did not properly calculate the force required by the cell adhesion molecules. Dictyostelium cells only stick together when the system has an optimal level of energy – in other words, when they’ve been fed. The problem was detected too late in the design process to send the whole thing back to subcommittee without violating the law on standard permissible variation within an existing species.

Comment from the Department of Terminology: (Cut off before the standard request for definition could be made.)

Question from the Subgroup on Technical Innovation: Why is it that every time we invent a new species, we have to stick to the same conservative biochemistry? Why can’t we please, please, just once make an organism from scratch and not have to integrate all these past designs which, if you ask me, make things way too complicated? Instead of integrating genes from bacteria and archaea into eukaryotes, we should have just junked the past and started over.

Answer from R&D: I quote from the basic Statutes on Biodegradability: “Any new organism which is created must adhere to basic chemical and physical laws and their subcomponents must be degradable by other organisms in the ecosphere as a means of energy conservation.”

Comment from the Chair of Physics: Our calculations demonstrate that violating this principle would require a constant, massive influx of supernatural energy into the Earth environment to support higher life forms on the scale we have planned.

Comment from Astrophysics:  And we would like to state once again, for the record, that when you guys started inventing biochemistry, we told you to make a system that would withstand supernovae. But did you listen? Well, did you??

The Big Boss allowed one final question before moving to adjournment.

The chair of the Subgroup on Multicellular Organisms: We would just like to point out that these meetings take up a vast amount of time. I have consulted with all the Subcommittees and the Head of R&D and the Technical Support Groups and we would like to ask for an amendment, or at least a special waiver, in the Prohibition on Speciation under the Rules of the Universe as Decreed by the Big Boss. Once the basic worm plan has been established, we could simply leave the rules of chemistry and physics to run their course and we’d get a plethora of advanced species.

The chair of the Subgroup on Geology points out: For God’s sake, man, the Cambrian period is coming up and you’d get some kind of explosion!

The Big Boss patiently pointed out that Rules were Rules.

The chair asked for a voice vote on the general plan for tubular worms as presented; the majority approved; the chair of the Subgroup on Dictyostelium objected; D.O.T. and C.O.P. abstained. The chair pointed out that C.O.P. didn’t have a vote and couldn’t “abstain”.

The Big Boss said: “Change the record to record that.”

Conclusion: The plan for tubular worms should be submitted to R&D for working out the details. They should present a final proposal at the next meeting, to be held in one minute.

R&D submitted their routine request for an expansion of time because of a heavy workload. “Refer to our minutely report,” the chair said. “Check Appendix 412. We have 8 trillion ongoing projects.”

The request was denied.

The Big Boss stroked his beard, consulted the time in picoseconds on his large, gold pocketwatch, and adjourned the meeting.

“I’m picking up good vibrations”

If you’re reading this on your laptop right now, say over a Venti Latte at Starbucks, take your hand off the hot cup and lay your fingertips for a moment on the keyboard. You may feel the hard drive spinning, or the fan blowing. Your ability to detect heat and vibrations is due to the presence of different types of nerves in your fingertips. A recent finding by the labs of Carmen Birchmeier and Gary Lewin, published in the current issue of Science, shows that a molecule that directs the development of nerve cells is important for the detection of vibrations. This molecule determines the form these nerve cells acquire, their functions in the nervous system, and ultimately whether humans sense high-frequency vibrations. With the findings, the labs have traced a complete story: from a molecule at work in a single cell type to the development and function of the nervous system of the organism as a whole.

Before scientists can study different kinds of nerves, their functions in organisms, and their roles in disease, they need a way to tell them apart. Hagen Wende and other members of Carmen’s lab first carried out a screen to try to find molecules that could be used to make fine distinctions between types. They discovered that a sub-group of mouse peripheral neurons located in dorsal root ganglia (DRG) produced a protein called c-Maf. Some of the cells expressed this molecule, along with another protein called Ret, at a very early stage. They continued to produce both molecules during the embryonic development of the mouse and after birth.

Some DRG neurons were already known as mechanosensors – transmitters of touch, pressure and vibration sensations – and Hagen and his colleagues wondered about the role of this subset of cells that produced c-Maf. One way to find out would be to “knock out” the c-Maf gene using genetic engineering techniques. Since blocking the production of c-Maf throughout the embryo is lethal – c-Maf has vital functions in other cells – the scientists used a “conditional” knock-out method that removed it only in DRG neurons. The next step was to investigate the effects of this procedure on nerves and the animals’ perception of sensory stimuli.

First they discovered differences in a group of neurons in the DRG: these neurons no longer had thick axons – the trunk-like structures that transmit signals to other nerves. Some of those cells terminate in thick, egg-shaped structures called Pacinian corpuscles, which detect sensations like pressure and vibration. The corpuscles were much smaller when c-Maf was absent.

“Measurements done by Stefan Lechner in cell cultures showed that the change profoundly disrupted the neurons’ functions,” Carmen says. This effect was very strong in cells called rapidly-adapting mechanosensors (RAMs), which respond to the movement of skin rather than pressure.

Did the changes in mouse neurons correspond to similar effects in humans? “c-Maf also plays a role in the development of the eye – particularly the lens,” Carmen says. “Families with mutations in c-Maf were known to have developmental abnormalities in the eye. But their sensitivity to vibrations had never been tested.”

The researchers contacted one of these families, in whom four people carried the mutation, and tested their ability to detect vibrations. They discovered that the carriers had to be stimulated much more strongly to detect high-frequency vibrations like those produced by the spinning hard drive of a computer, whereas their sensitivity to lower, rumbling vibrations was not affected. And family members without the mutation could detect both types of sensations at normal levels.

Further experiments provided a biochemical explanation of the way changes in c-Maf affected cells. The scientists discovered that the neurons weren’t activating the gene Ret or genes for crystallins (proteins that are crucial in the development of the eye and its lens). They also produced smaller-than-normal amounts of a membrane channel protein called Kcnq4. Gary’s lab collaborated with the group of Thomas Jentsch at the MDC and FMP to show that this molecule, which permits a flow of potassium ions through the membrane of nerves, plays an important role in the function of mechanoreceptors.

“This provides a full picture of the way c-Maf directs the development of rapidly-adapting mechanosensory nerves by targeting other genes,” Carmen says. “Without it, these cells fail to acquire their proper structure; they lose the Pacinian corpuscles which are needed to ‘fire’ the cells and transmit a signal on to the brain. And humans lose their sensitivity to high-frequency vibrations.”

Link to the full text of the article

Home page of Carmen Birchmeier’s lab

Tipping the balance on Alzheimer’s disease

A mix of math and experiments links a main symptom of Alzheimer’s disease to subtle changes in protein dynamics

In 1906, while peering at brain tissue through a microscope, Aloysius Alzheimer discovered one of the main hallmarks of the disease that now bears his name. The tissue came from a former patient who had just died as a consequence of a severe, progressive form of dementia. Alzheimer found that the space between her brain cells was filled with clumpy “plaques” made of proteins. Their main component is a protein fragment called amyloid-beta peptide, or A-beta. It starts as part of a longer protein called APP that is found in cell membranes. Making the deadly fragment requires enzymes to dock onto APP and make a series of cuts. While this probably happens to some extent in healthy people, it occurs much more often during the disease, and figuring out why is a central question in Alzheimer research. Now a combination of experiments and computer models has provided the labs of Thomas Willnow and Jana Wolf with at least part of the answer. Cutting works best when single APP molecules bind to each other in pairs. In healthy situations, another molecule blocks the pairing and most APP molecules remain bachelors. This discovery, reported in the October 2011 issue of the EMBO Journal, provides a potential new focus for the development of Alzheimer therapies.

The current study was carried out by postdoctoral fellow Vanessa Schmidt and PhD student Katharina Baum, with Angelyn Lao from Olaf Wolkenhauer’s lab at the University of Rostock. It builds on previous work from Thomas’ group, which showed in 2009 that a protein called SORLA is involved in the development of the disease. This molecule participates in the movement of APP through the cell and the production of amyloid-beta peptide. Its effects are usually beneficial: increasing the amount of SORLA leads to less A-beta, in both the test tube and animal models. Mice that have been genetically engineered to lack SORLA, on the other hand, produce higher levels of the dangerous amyloids. But the reasons have been unclear.

The processing of APP involves an interplay of so many proteins that a “systems biology” approach, using mathematical modeling, was necessary to describe their roles. “Most mouse models and other studies have used ‘all-or-nothing’ methods, either completely eliminating particular molecules, or raising their amounts to unnaturally high levels,” Thomas says. “Patients experience much subtler changes in protein levels. We needed a way to make small changes in protein expression and watch their effects over long periods of time.” Levels of SORLA drop in many Alzheimer’s patients, but the protein is never completely lacking.

The scientists developed a unique cell-based system in which they could incrementally raise or lower concentrations of APP and SORLA. Then they carried out quantitative measurements of the effects of the changes on the production of amyloid-beta peptides. The next step was for Katharina, Jana and their colleagues to replicate these effects in mathematical models.

A breakthrough came when the scientists applied “Hill kinetics” to the problem. This approach detects cases when the elements of a system produce an effect by cooperating, rather than acting independently. It showed that the production of A-beta depended on some sort of cooperative event, which further experiments exposed as the pair-wise binding of APP proteins.

“This pairing creates an optimal ‘platform’ for enzymes to bind to APP, the first step in producing dangerous fragments,” Thomas says. “That discovery gave us a hint about the role of SORLA. It doesn’t directly stop enzymes from binding to APP, which was one of our early hypotheses. Instead, it interferes with the pairing of APP molecules. It locks up single copies so they don’t bind to each other. This means fewer ideal ‘docking sites’ for the enzymes, and a lower production of A-beta.”

The cell culture method allowed the scientists to observe the effects of a gradual raising or lowering of levels of SORLA. Even small reductions led to significant jumps in the amount of A-beta. “This helps explain how a drop in SORLA of just 25 percent in some Alzheimer’s patients leads to dramatically more fragments,” Thomas says. “It’s due to an increase in the cleavage of APP. Other groups have shown that APP normally forms pairs about 30 to 50 percent of the time. If levels of SORLA drop, that proportion rises. Cells produce more amyloid-beta peptide, leading to accumulations and the dangerous plaques seen in Alzheimer’s disease.”
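The basic intuition behind the Hill-kinetics result can be sketched numerically: with a Hill coefficient n = 2 (cooperative, pair-wise binding) the same relative increase in input produces a much larger jump in output than with n = 1 (no cooperation). This is a hedged illustration of the general principle, not the model from the paper; the parameter values and the 25-percent increase are made-up stand-ins.

```python
# Hill equation: v = Vmax * s^n / (K^n + s^n).
# n = 1: independent binding; n > 1: cooperative binding (e.g. the
# pair-wise association of APP molecules described above).
# All parameter values here are illustrative, not from the study.

def hill(s, vmax=1.0, k=1.0, n=1.0):
    """Reaction rate for substrate concentration s under Hill kinetics."""
    return vmax * s**n / (k**n + s**n)

# Suppose the concentration of "pairable" APP rises by 25 percent
# (a made-up stand-in for the effect of a drop in SORLA).
s_before, s_after = 0.5, 0.625

for n in (1.0, 2.0):
    rise = hill(s_after, n=n) / hill(s_before, n=n) - 1.0
    print(f"n = {n}: output rises by {100 * rise:.1f}%")
```

With these numbers, the non-cooperative curve (n = 1) responds with roughly a 15 percent rise in output, while the cooperative curve (n = 2) responds with roughly 40 percent: cooperativity is what turns a modest change in protein levels into a disproportionate change in A-beta production.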

The study has implications for the development of new therapies, he says. Rather than trying to inhibit the activity of APP-cutting enzymes, which healthy cells might need for other reasons, scientists can look for drugs or other substances that imitate the action of SORLA and block the pairing of APP molecules.

“This is the first mathematical explanation of the anti-Alzheimer effects of SORLA,” Thomas says, “and it helps show how relatively small changes in the ‘dosage’ of this molecule can have big effects on the course of the disease.”

Reference:

Schmidt V, Baum K, Lao A, Rateitschak K, Schmitz Y, Teichmann A, Wiesner B, Petersen CM, Nykjaer A, Wolf J, Wolkenhauer O, Willnow TE. Quantitative modelling of amyloidogenic processing and its influence by SORLA in Alzheimer’s disease. EMBO J. 2011 Oct 11;31(1):187-200.

Link to the original article

Weighing in on Intelligent Design & co. (against my better judgment)

Well, since intelligent design continues to rear its ugly head in the current U.S. Presidential campaign, it’s time to weigh in and try to stop some of the nonsense (would you seriously vote for someone whose personal opinions go against over 150 years of thorough scientific research, for motivations that are unclear?). As the author of a book on evolution (see http://www.amazon.com/Russ-Hodge/e/B0024J8XO0/), and a native of Kansas (completely by accident, rather than by design), I’d like to pose the following questions. They center on two key points: what is the difference between what people call a design and something that merely seems to be a pattern, and what would constitute valid evidence for attributing a structure to some sort of supernatural intelligence? I don’t really know why these points are largely missing from the public debate on the topic, or why they aren’t the first questions raised by scientists, but there you have it.

1. If we were to accept the notion that patterns, structures, or other aspects of nature reflect some sort of intelligent design, why should we suppose that there is only one designer? Why couldn’t each individual phenomenon have its own independent designer, or even a committee of designers?

2. What is the difference between the concepts of pattern, structure and design?

3. We all know that incredible complexity can arise from something much simpler: it happens during the development of every human embryo. Why is evolution any harder to conceive of than embryonic development? Does an intelligent designer (or several) intervene in every one of the trillions upon trillions of biochemical reactions necessary to create a human being from a single fertilized egg?

4. Why should a person who believes he or she understands the Bible (or any other religious doctrine) ever experience a change of mind about a matter of faith? Why should today’s religious movements be any different than those of tens, hundreds, or a thousand years ago?

5. What theory (besides evolution) can explain the fact that several types of independent measurements seem to corroborate evolution’s concept of descent from common ancestors?

6. What if any significant differences are there between today’s ideas of intelligent design and the concept of Natural Theology as proposed by William Paley ca. 1800? What solutions does the intelligent design movement propose to the questions that caused Charles Darwin to discard natural theology as an explanation for our observations of living and fossil species?

Feel free to discuss the topic on this page; an alternative is to follow what’s happening at the following site:

http://technology-science.newsvine.com/_news/2012/01/01/9873836-new-year-brings-new-attacks-on-evolution-in-schools?threadId=3309319&commentId=61184582#c61184582

or even through my Facebook page:

http://www.facebook.com/Russ.Hodge2

Storytelling and science communication (part 1)

A few thoughts and resources for teachers

There are as many ways to tell a science story as there are writers, and as many ways to give a talk or make a poster as there are scientists. Still, there’s a difference between effective communication and efforts that miss the mark. After years of writing about science and trying to teach others to do so, I’ve gained some experience that may be useful to other teachers, or may at least get the ball rolling on a larger discussion.

A note on the context: most of my teaching has taken place in Germany, a place notorious for failing to develop a notion of functional communication and to teach it across the curriculum. The results have been predictable; to quote William Zinsser, “Literature professors shouldn’t be left alone in teaching a skill that is inherent to every field.” (His book, On Writing Well, is a must for teachers, editors, and anyone who cares about clear, effective communication.)

In Germany (and too many other places), academics still ride the dead horse of prose which is purposefully obscure, which has to be dissected and reassembled before it can be understood, presumably to show how smart the author is. Usually, though, it simply reveals a sloppiness of thought, a failure to crystallize ideas into a pure and simple form, and no concern for readers who are short on time. Decoding such texts demands far too much effort, and they often have little to say in the first place. The idea that complex ideas can only be expressed in complex sentences is a myth; just check out the banquet speeches given by Nobel prize-winners. Or Albert Einstein’s maxim, “Everything should be as simple as it can be, but no simpler.”

I often begin my classes with a citation from the book And the Band Played On, an account of the early days of AIDS research written by Randy Shilts. The text comes from a press conference held in June 1982, at a time before the discovery of the virus, as the Centers for Disease Control released its first findings on the epidemiology of the disease. A reporter asked CDC Director James Curran whether AIDS was a sexually transmittable disease. He gave the following answer:

“The existence of a cluster study provides evidence for an hypothesis that people in the study are not randomly associated with each other and the study is a sexual cluster. On the other hand, we don’t have enough scientific evidence to say for certain that one person gives it to another person. We have to focus much more research into this area so that we don’t prematurely release information that’s not validated. On the other hand, we’re not holding back any information that might provide important health benefits. Thank you.”

My students immediately recognize that this statement is scientifically correct; on the other hand, it is so obscure that reporters had to interpret it themselves (or turn to scientist friends for help in doing so). Interestingly, students tend to assume that its opaqueness was politically motivated – which, in fact, it probably was. If scientists themselves react this way, is it any wonder that the public often responds the same way to other scientific pronouncements they can’t understand (“If we don’t get it, they must be hiding something”)?

In the course we talk about the value of shaping and controlling a message – there are easy ways to explain cluster studies that show how strong the “sexual transmission” hypothesis is, vis-à-vis other possible interpretations of the data. Curran could have tailored the information to his audience and given an answer that would have sent a stronger message to the public at a time when people desperately needed information about a dangerous disease.

Rewriting Curran’s statement is a useful exercise, but we usually set it aside to discuss what constitutes an effective communicative strategy overall. We usually arrive at a single basic principle that, so far, has always gotten the point across. I ask the students to imagine attending a scientific talk, and as they leave, they meet someone outside the door who says, “Oh, I wanted to attend but just missed it – what did the speaker say?” Everyone who leaves the room should be able to give a short, sensible account of the story. Their versions should agree with each other, and they should also agree with what the speaker would say if asked the same question. If that happens, and if nobody passed out or died or spent the whole time answering e-mails because the speaker failed to hold their attention, then the talk must have fulfilled its function.

With this single criterion in hand, I tell students, “So imagine you’re giving the talk – why don’t you just hand-deliver the answer? Just put in a statement like, ‘Now, when you leave and someone asks you what this talk was about, here’s what you should say…'” Call it a take-home message, a conclusion, or whatever you like; this forces the student to reduce the content to a clear, sensible story that should be told in a way that can be remembered and repeated, no matter what type of audience is on hand, and it leaves the speaker in control of the message.

It’s a fine idea, but translating this simple principle into all the steps of preparing and presenting a talk, a poster, or a text usually requires intensive practice. I’ll discuss some of the strategies we use in the next entry.

References:

Zinsser, William. On Writing Well, 30th Anniversary Edition: The Classic Guide to Writing Nonfiction. New York: Harper Paperbacks, 2006.

Shilts, Randy. And the Band Played On: Politics, People, and the AIDS Epidemic. New York: St. Martin’s Griffin, 2007.

Manipulating a matrix of pain

Molecules that bind skin cells together influence the transmission of pain

“Where does it hurt?” a doctor says, and it sounds like a simple question; a patient is supposed to point to the offending region of the body. But a look at the details of how sensations arise reveals that the question isn’t so straightforward: pain begins with a response from nerves near the site of an injury. For the brain to perceive it, those neurons must generate electrical impulses that travel to other nerves in the spinal column and on to the brain. The impulse – and the sensations it causes – can be blocked at many places along the way. Now Gary Lewin’s group at the MDC has discovered that some signals are interrupted right at the source. A matrix of proteins that bind cells together in the skin can interfere with the contact between neurons and dampen touch sensation. Without this mechanism, people suffer from severe pain in a condition called epidermolysis bullosa, sometimes termed butterfly disease. The lab’s findings, published in the July 3 issue of Nature Neuroscience, reveal that a matrix protein called laminin-332 helps tune down sensations of pain and touch.

The skin needs to be tightly sealed to protect the body from water and dangerous substances in the environment. The seal consists of a dense matrix of glue-like proteins that stick cells to each other, including laminin-332. Sensory nerves end in the extracellular space in the skin and are surrounded by the same glue-like matrix of proteins. Brushing or probing the skin leads to an electrical excitation of these endings in their matrix-glue.

Proteins in the matrix are normally required to transfer the stress and strain produced by skin movement efficiently to the sensory endings embedded in it. The Lewin group has found that one component of the matrix acts instead to damp or inhibit the mechanical activation of these endings. Two scientists in the Lewin lab, Li-Yang Chiang and Kate Poole, identified laminin-332 as a protein that naturally brakes the initiation of sensory signals that lead to pain. In human patients who lack this protein, the same sensory endings can be activated by normal stroking and probing of the skin, thereby amplifying pain.

As nerves grow, their tips branch many times to establish contact with other cells. If the endings of sensory neurons branch excessively, the result is an amplified response to touch and pressure. The scientists also discovered that the presence of laminin-332 helps prevent too much branching in the skin.

The skin is made up of layers consisting of several types of cells, each of which is bound to its neighbors in different ways. This creates environments that develop different types of nerve architecture. One goal of the recent work was to show how the differences affect the transmission of sensory information.

“Pain-detecting nociceptors extend to the surface of the skin, into the layer called the epidermis,” Gary says. “Mechanosensory cells that detect touch reside exclusively in a lower layer, the dermis. Li-Yang and Kate found that laminin-332 blocks or lowers transmission from the nociceptor cells as they extend into the epidermis.”

In other words, if the keratinocytes that make up the epidermis lack a working version of laminin-332, more nerves will be stimulated and will transmit stronger signals.

“People who suffer from epidermolysis bullosa don’t have laminin-332 – or they have a version of the molecule that doesn’t function well,” Gary says. “This work offers a mechanistic explanation for the intense pain that they experience.”

– Russ Hodge

A roadblock for metastases

A drug used to treat patients with tapeworms may help fight deadly colon cancer

Caught at an early stage, many cases of colon cancer come with a fairly good prognosis: 90 percent of patients survive at least five years. But if the diagnosis comes too late, after metastases have formed, the numbers flip. Only about ten percent of such patients reach the five-year mark, making this one of the most frequent causes of cancer death worldwide. Researchers are actively searching for ways to detect the disease before it becomes deadly and for new forms of treatment. A few years ago, Ulrike Stein’s group at the MDC and Charité discovered that a high level of a protein called S100A4 is a good indicator of whether tumor cells are likely to migrate through the body and form metastases. Now the scientists have found that treating mice with a substance commonly used to rid the body of tapeworms reduces the likelihood that this will happen and improves the prognosis for the animals. They hope that the discovery can now lead to an effective treatment in human patients. Their work was reported in the July 6 edition of the Journal of the National Cancer Institute.

“In aggressive colon cancer and many other types of tumors, cells produce up to 60 times the normal amount of S100A4 protein,” Ulrike says. “The reason lies with the disruption of a biochemical signaling pathway within tumor cells. This route, called the beta-catenin signaling pathway, frequently becomes far too active during cancer. One result is a strong boost in the production of the S100A4 protein.”

While S100A4 itself does not cause cancer, it has deadly effects on animals already susceptible to tumors. When mice that produce too much of the protein are crossed with another strain that has a high rate of cancer, tumors spread like wildfire. On the other hand, if mice that don’t produce the protein are injected with highly metastatic breast cancer cells, the tumors don’t metastasize.

Over the past few years, researchers have discovered some of the reasons: S100A4 helps cells loosen ties to their neighbors, promotes their migration through the body, and supports the formation of new colonies of cells. These processes are vital in the formation of tissues and organs during the development of embryos, but they become deadly during cancer.

Until now, Ulrike says, scientists haven’t found a substance that can block the expression and thereby the metastatic potential of S100A4. The current study changes that situation. During a research stay at the National Cancer Institute-Frederick in Maryland, USA, Ulrike and Wolfgang Walther carried out a high-throughput screen of 1280 small molecules with Robert Shoemaker’s group, to find something that could inhibit the protein’s production. The most powerful effect came from a drug called niclosamide, which is already used to treat tapeworm infections in human patients.

With this information, postdoc Ulrike Sack and her colleagues in Ulrike Stein’s group first investigated the effects of niclosamide on the behavior of cells in the test tube, including human cancer cells. They discovered that the treatment had a strong impact on the cells’ migration and their ability to form colonies – two steps which are essential to metastasis.

Next the scientists turned to mice, using a well-established method of studying metastases. They injected tumor cells into the spleens of control animals and another group that was treated with niclosamide. All the animals developed tumors in their spleens. In the control mice, metastases spread to the liver within just eight days.

But the livers of the treated animals remained almost metastasis-free. The size and the number of the metastases that did develop were significantly reduced, and the animals survived about twice as long as their control counterparts. This was true whether they were given the drug for just a few days after injection of the tumor cells or over the entire course of the disease.

The scientists discovered that niclosamide specifically blocks production of high amounts of S100A4 and its metastatic effects. “We’re lucky that the drug has already been thoroughly tested in human patients,” Ulrike Stein says. “That is normally a huge bottleneck in the development of drugs. So we’re hopeful that we can quickly move to human trials, to study the effects of niclosamide on deadly forms of colon cancer.”

– Russ Hodge

Exposing a Fata Morgana of smell

The Jentsch lab identifies a protein long thought to be crucial to smell… and finds that it isn’t

Open a biology textbook to the chapter on the senses and you’ll find a story about the way nerve cells transmit information about odors to the brain. It usually goes like this: “smelly” molecules enter the nose and dock onto proteins on the surfaces of neurons. As a result, ion channels in cell membranes open and allow the passage of charged particles (ions), changing the voltage across the outer cell membrane. This change generates electrical impulses that travel to the next cell and on to the brain. The ion channel that opens in response to odor molecules lets positively charged sodium and calcium ions flow into the cell. The entry of calcium has been thought to trigger the opening of another channel by which negatively charged chloride ions exit the cell, but researchers haven’t been able to identify the channel protein. Thomas Jentsch’s lab at the FMP and MDC has just found it, and in the process they encountered a surprise: animals can smell just fine without the channel. The new study appears in the June issue of the journal Nature Neuroscience.

“Most researchers have seen the exit of chloride ions as important, probably even crucial, in mammals’ perception of smells,” Thomas says. “Presumably its function has been to amplify odor signals. Measurements based on cells from rodents have suggested that the release of chloride leads to a five- to tenfold increase in the strength of electrochemical signals.”

One reason for the focus on chloride channels has surely been the fact that in freshwater animals, this ion is the major player in exciting neurons that transmit information about odors. The cells of mammals accumulate unusually high amounts of it during “quiet” phases and apparently release it when they are activated. Scientists have identified the molecule that draws the ions into cells: a co-transporter called Nkcc1. But until now, the channel that permits chloride to leave has remained a mystery.

And questions remained about the impact of chloride on smell, particularly after 2008, when another lab developed a line of mice that lacked Nkcc1. Without the protein, the animals’ neurons accumulated much less chloride and thus had little that could be released. The researchers confirmed that this caused a drop in the strength of electrochemical signaling to the brain. However, the animals responded normally to odors. The results might have cast doubt on the “amplifier” function attributed to chloride, but most researchers interpreted them differently: in parallel to Nkcc1, cells might have another mechanism to accumulate chloride. In that case, the ions could still be released to augment the animals’ sense of smell.

Thomas’ lab has been systematically investigating ion channels, with a particular focus on chloride channels. Such molecules have been linked to a range of serious diseases; inheriting a defective version of one of them or losing its functions often causes a disruption of the nervous system. As a part of the group’s efforts, PhD student Gwendolyn Billig produced a line of mice lacking a protein called Ano2. This molecule belongs to a family of chloride channels that open in response to rising concentrations of another ion – calcium.

Several pieces of evidence showed that the researchers had found the elusive chloride channel. Björn Schröder, now a junior group leader at the MDC, had already proven that Ano2 transported chloride in response to calcium – so it was the right type of molecule. Now the scientists labeled it with an antibody that made it visible under the microscope. It appeared to be the only calcium-activated chloride channel in neurons of the main olfactory epithelium, which is the point of arrival for odor molecules. Finally, Gwendolyn and her colleagues demonstrated that in the mouse line which lacked Ano2, these sensory cells no longer generated chloride currents in response to high levels of calcium.

They now had a way to test the importance of chloride in odor detection. They carried out precise measurements of electrochemical stimulation in tissue from the main olfactory epithelium of animals without Ano2. They discovered that the strength of signals was reduced by 40 percent at most – a far smaller effect than earlier estimates had suggested. And the animals could still detect odors and differentiate between them. In fact, no difference was found between the smell sensitivity of normal mice and those lacking the channel protein.

As a result of the study, Thomas says, researchers will have to rethink the role of chloride in odor reception. “These ions do amplify a signal, but much more modestly than people have believed. That boost doesn’t seem to be necessary for animals to achieve near-normal sensitivity to smells, at least under normal conditions. Interestingly, a few humans who suffer from a condition called von Willebrand disease also lack the Ano2 gene. There haven’t been any reports about deficits in their ability to smell.”

It’s possible, he admits, that the channel plays a role in the response to some odors – that remains to be seen. But it may also simply be a vestige of evolution, predating the rise of mammals. It may have been preserved because its signal-amplifying functions give animals a slight evolutionary edge. In any case, Thomas says, it’s satisfying to have found the elusive chloride channel. In understanding the molecular mechanics of smell, scientists can now stop chasing a Fata Morgana.

– Russ Hodge

The first full census of a mammalian cell

MDC researchers track the output of an entire mammalian genome from DNA to proteins for the first time

A cell’s functions and behavior depend on the total population of molecules present in it at any given time, and how they respond to changes in the environment. Since Francis Crick declared “DNA makes RNA makes protein” in 1958, scientists have unraveled the mechanisms by which the hereditary information in genes is used to produce messenger RNAs and proteins, but counting them to obtain an accurate picture of cells’ contents has been notoriously difficult. One gene can spawn huge numbers of messenger RNAs (mRNAs) over a cell’s lifetime, and one mRNA can be used to produce vast numbers of proteins – but how many? And how long do they function before being taken apart again? Now scientists at the MDC’s Berlin Institute for Medical Systems Biology (BIMSB) have combined a range of new technologies and a mathematical modeling approach to solve some of these questions. The study, carried out by the groups of Matthias Selbach, Wei Chen, and Jana Wolf, tracks the global output of a mammalian cell for the first time, measuring the quantities and lifetimes of its RNAs and proteins and predicting their rates of synthesis. The concept for the comprehensive project was developed in intensive discussions between experimental and theoretical scientists. Their work appears in the May issue of Nature and offers new insights into the functions and evolution of animal cells.

The scientists tracked the output of more than 5,000 genes in cells obtained from mice. An accurate census of the cell, Matthias says, requires counting the number of mRNAs synthesized from each gene, the number of proteins made from the template of each mRNA, and the rate at which each type of molecule is degraded.

“Earlier experiments about the lifespans and productivity of mRNAs and proteins have produced a very unclear picture,” Matthias says. “They relied on far smaller numbers of genes, and they typically focused on single steps in the process. Levels of messenger RNA molecules measured in one experiment were usually compared to levels of proteins obtained in another experiment in a different lab, under different conditions. Laboratories didn’t have methods to track what was happening in one cell through all the stages of the process. Additionally, most studies have relied on drugs that block single steps in the process, leading cells to behave unnaturally, or artificial molecules that may not behave like their natural counterparts.”

PhD student Björn Schwanhäusser from Matthias’ lab and his colleagues overcame these limitations by combining new technologies in an original way. Matthias’ lab has become a world leader in a method called stable isotope labeling by amino acids in cell culture (SILAC). This approach relies on growing cells in a standard culture medium, and then moving them to another. The amino acids in the new medium – which will be used to build new proteins starting at the time the cells are transferred – are “heavy” because their atoms have extra neutrons.

An instrument called a mass spectrometer can detect the difference between the two types of amino acids. So scientists can track the speed of degradation of proteins from the old medium and the construction of new molecules in the second medium.

In parallel, Na Li, a PhD student from Wei Chen’s lab, measured RNA levels using state-of-the-art sequencing technology. “It is not a trivial task to determine absolute mRNA copy numbers at the genome-wide scale,” Wei says. “Most high-throughput technologies, such as microarrays, tell us only the relative difference in gene expression between different samples. It is almost impossible to use them for absolute quantification. Thanks to recent developments in novel sequencing technologies, we can now obtain quite a precise estimate of copy numbers for thousands of different mRNA transcripts in one cell.”

The mRNA degradation rate was studied using a strategy similar to that for proteins. Instead of amino acids, one of the nucleotides – the building blocks of RNA – appears in a different version in newly produced mRNAs than in old ones. Using biochemical methods, the new and old RNA populations can be separated and distinguished from each other. The difference in the level of RNAs between the two samples indicates how fast RNA transcripts are degraded.

The scientists discovered that on average, proteins were five times more stable than mRNAs. Proteins had an average “half-life” (the time it takes for half of the molecules of a given type to be degraded) of 46 hours, compared to nine hours for mRNAs. “We also discovered a wider range of lifetimes – from very short to very long – for proteins than for mRNAs,” Matthias says. “Interestingly, there was no general correlation between the half-life of a protein and that of the mRNA from which it was made. In other words, a short-lived mRNA could produce long-lasting proteins, and vice versa.”
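As a rough illustration (not a calculation from the paper itself), first-order decay with these half-lives means the fraction of molecules remaining after t hours is 0.5^(t/t½). A short Python sketch using the quoted average half-lives:

```python
def remaining(t_h: float, half_life_h: float) -> float:
    """Fraction of a molecule population left after t_h hours of
    first-order decay with the given half-life (in hours)."""
    return 0.5 ** (t_h / half_life_h)

# Using the averages quoted above: protein half-life 46 h, mRNA half-life 9 h.
# After one day, roughly 70% of an average protein population remains,
# but only about 16% of an average mRNA population.
print(round(remaining(24, 46), 2))  # → 0.7
print(round(remaining(24, 9), 2))   # → 0.16
```

This makes the five-fold stability difference concrete: over a single day, most proteins persist while most mRNA copies have already been turned over.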

Another fascinating result was that proteins were about 900 times more abundant than the mRNAs used to make them – one way to think of this is that on the average, a single mRNA is used to manufacture about 900 copies of the corresponding protein. Although the exact quantities differed a lot between different genes, there was a clear general correlation between the amounts of mRNAs and their corresponding proteins.

Half-lives and levels of mRNAs and proteins don’t give the complete picture. Obtaining it requires quantifying transcription and translation rates, but these are difficult to measure. Here, mathematics and modeling again come into play, and the necessary expertise was provided by Jana Wolf’s group. “The transcription and translation rates have been predicted by applying mathematical modeling,” says Dorothea Busse, a postdoc in Jana Wolf’s lab. “Overall, this approach allowed us to fully quantify the gene expression cascade for more than 5,000 genes.”

“We found that the average gene spawned about two mRNA molecules per hour, but of course there is wide variation in individual cases,” Matthias says. Those molecules went on to produce high numbers of proteins – an average of about 40 per hour. But here, too, mRNAs showed a wide range of usage; the most productive molecules built 100 times as many proteins as the least productive. And there seemed to be an upper limit of about 180 proteins per mRNA per hour.
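These quantities fit the standard two-stage model of gene expression, sketched below as an illustration (the paper’s actual fitting procedure is more involved): mRNA is transcribed at a constant rate and decays, and protein is translated from mRNA and also decays. At steady state, both levels follow directly from the rate constants. Note that plugging the population averages quoted above into a single hypothetical gene yields a protein-to-mRNA ratio larger than the ~900-fold average, since averages of per-gene ratios don’t combine this simply.

```python
import math

def deg_rate(half_life_h: float) -> float:
    """Convert a half-life (hours) into a first-order degradation
    rate constant (per hour): k = ln(2) / t_half."""
    return math.log(2) / half_life_h

def steady_state(v_sr, k_sp, mrna_half_life_h, protein_half_life_h):
    """Steady state of the standard two-stage model:
       dM/dt = v_sr    - k_dm * M   (transcription vs. mRNA decay)
       dP/dt = k_sp * M - k_dp * P  (translation vs. protein decay)
    giving M* = v_sr / k_dm and P* = k_sp * M* / k_dp."""
    k_dm = deg_rate(mrna_half_life_h)
    k_dp = deg_rate(protein_half_life_h)
    m = v_sr / k_dm
    p = k_sp * m / k_dp
    return m, p

# Illustrative numbers from the text: ~2 mRNAs transcribed per hour,
# ~40 proteins per mRNA per hour, half-lives of 9 h (mRNA) and 46 h (protein).
m, p = steady_state(2.0, 40.0, 9.0, 46.0)
print(f"steady-state mRNAs: {m:.0f}, proteins: {p:.0f}")  # ~26 mRNAs, ~69,000 proteins
```

The intuition behind the formulas: faster degradation (a shorter half-life) lowers the steady-state level, while faster synthesis raises it.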

Cells can block protein production at many stages: by not making mRNAs from a gene in the first place; by quickly degrading an mRNA; or by putting the mRNA “on hold” – in other words, keeping it around, but blocking its translation into proteins. mRNAs may be tagged with molecules, for example, that obstruct the protein-synthesis machinery until they are removed again. And finally, the cell may remove proteins from circulation once they have been built.

With so many ways to intervene in the production line, which does the cell use most? Matthias says the major control seems to happen when mRNAs are translated into proteins. “In predicting how many proteins you’ll find,” he says, “the production of new proteins plays a much larger role than breaking them down.” As well as providing key information about this crucial question, the study revealed specific sequences of mRNAs and proteins that can be used to predict how productive they are likely to be.

For example, mRNAs have tails called 3′ UTRs of various lengths which do not encode protein sequences; instead, they influence how the molecules are handled. The new project showed that a longer 3′ UTR usually results in a shorter lifespan for the mRNA. And the scientists found other signs of short lives: if molecules had a predominance of two particular nucleotides (A and U), or if they permitted a protein called Pumilio2 to bind.

As well as discovering new principles that describe the productivity of particular genes, the scientists hoped to develop a mathematical model that could be used to predict it. When tested in additional, independent experiments, the model from Jana’s group successfully predicted values for about 85 percent of the genes.

Do these general principles hold for other types of cells and other organisms? The scientists compared their results to 2,030 genes in a line of human breast cancer cells grown in the lab. “We focused on genes with clear evolutionary relatives to the mouse counterparts we had studied,” Matthias says. “The model gave accurate data for about 60 percent of the genes. It’s a smaller number than if we remain in the mouse, but most of the variation comes from differences in the rates of translation of mRNAs into proteins.”

The data also provide insights into the evolution of cellular processes. Molecules that carry out similar cellular functions often resemble each other in terms of their lifespans and productivity because they have co-evolved through natural selection. For example, the mRNAs and proteins involved in “housekeeping” tasks that all cells need to survive tend to be more stable than molecules that carry out more specialized tasks. This is likely due to differences in the amount of energy that cells need to carry out the transformation of genetic information into proteins.

“Protein synthesis is the most ‘expensive’ step,” Matthias says. “It consumes more than 90 percent of the energy available for the assembly of molecules, whereas building mRNAs from genes requires less than 10 percent. The study showed that most mRNAs – and especially proteins – are stable. The exceptions are usually molecules that help cells respond quickly to stimuli. This reveals what seems to be an optimal evolutionary principle, a trade-off between energy efficiency and a cell’s ability to respond quickly to environmental changes.”

The findings provide a rich resource for the scientific community. “The unique, quantitative data turned up in the study will help researchers search for common features that determine whether molecules are long-lived or short-lived,” Matthias says. “It should also help us understand the very complex regulatory relationships by which thousands of genes are linked to each other and to the molecules that they produce in cells.”

He adds that the work validates the overall “systems” approach being developed within BIMSB at the MDC. “The work is a good example of the way we can gain insights into the systems level of life by combining different high-throughput technologies with mathematical modeling and follow-up experiments in the wet lab. That strategy is most successful at a place like BIMSB, where we are closely linked to groups with expertise in biology, physics, mathematics, chemistry, computational science, and so on. The questions we are asking can only be solved through technological developments and contributions of groups from several ‘classical disciplines’ that bring their expertise to bear on a common problem. This paper – and hopefully many more to come – show that with the BIMSB, we’ve got a good recipe for doing so.”

– Russ Hodge