The Case of the Short-fingered Musketeer… continues!
This is the book I wrote in 2012 called “The Case of the Short-fingered Musketeer,” about a long-term project by the laboratory of Friedrich Luft to discover the genetic causes of essential hypertension. The book was written as both a detailed case study of a scientific project and a parable for the amazing progress of what we call “molecular medicine” over the past 20 years. It is also a remarkable account of a unique collaboration between basic researchers, a family with a genetic disease, doctors, clinicians, pharmacologists, and the politics of science. (There was also some art involved, as seen in the magnificent cover painted by my good friend Stephen Johnson, of Lawrence, Kansas.)
In 2012 the story was still unfinished – so it goes in science – but 2015 saw the publication of a new paper that brought the story to a satisfying conclusion. That occasioned a new chapter.
The book was supported and published by the institute Fred, his team and I work for – the Max Delbrück Center for Molecular Medicine of the Helmholtz Association. We are still hopeful that a mainstream publisher will pick up a streamlined version of the book – if you’re interested, please let us know!
Now the group is awaiting word on the (hopeful) acceptance of a new paper that takes the story even further and will certainly require a chapter 22. In optimistic anticipation, and in honor of Fred Luft’s recent 75th birthday, I will begin posting excerpts from the book here over the next days and weeks.
For those who can’t wait, the introduction and final chapter can already be read on-line at the links below.
Stay tuned for new developments!
One reason the term “science communication” has broadened to include so many activities is that research is leaping across the boundaries of disciplines and into our daily lives more quickly and profoundly than ever before. Without a basic understanding of scientific results and the methods by which they are obtained, people can’t be expected to digest complex information about their health or the global impact of their lifestyles and respond in reasonable ways. This has stimulated diverse efforts by many types of communicators to broaden and raise the level of scientific literacy in society as a whole. The pace of science has also created challenges for scientists as they confront massive amounts of data that can only be understood by teaching a computer how to cope with them, excruciatingly detailed models, and problems that can only be solved by transcending the boundaries of classical disciplines whose practitioners come from different backgrounds and speak different languages – both literally and figuratively.
Many well-meaning efforts aimed at explaining the significance of a piece of research – or the aims of science as a whole – somehow fail. That’s true at the interface of science and wider sectors of society, but forms of the general problem are also common within research communities, where communication is fundamental to daily practice. Good communication skills boost careers and the progress of a field. Failing to help scientists develop them, I will argue, affects not only their careers but also the quality of their research. This conviction comes from working in the field a long time and witnessing countless examples demonstrating that excellent scientists are often superb at explaining their work to very diverse audiences. Is there a connection? You don’t truly understand something until you can explain it to someone else; does this old adage hold true for the highest levels of research and communication? If so, can you make people better scientists by making them better communicators? A few years ago I decided to try to find out.
A meaningful approach to answering these questions would have to encompass both theory and practice; it would require a thorough understanding and analysis not only of the strategies people were using to communicate, but of the content they were trying to get across. I had access to plenty of examples through the scientists I encountered every day, the difficulties I encountered myself in writing about their work, and the hundreds of students over the years whom I had tried to help write and present their science to many types of audiences. As a general approach I stole a page from the handbook of the early fly geneticists, who uncovered the functions of hundreds of genes by studying how mutations disrupted biological systems. Maybe problems in communication could be used the same way: maybe they could show how things ought to work.
Over several years I followed this strategy in studying communication problems and funneling much of what I learned back into the courses I was teaching. The result was a steady but dramatic change in my understanding of the relationship between communication and science. I believe that these two fields of effort are connected at a profound level that is incompletely understood and rarely explicitly discussed or taught.
This project offers a new model of that relationship which attempts to connect how scientists communicate their work – effectively or not – to deeper underlying aspects of the way they think. It shows how many communication problems stem from chaos in the laboratory: not the physical benches where scientists spend their days, but the mental laboratory they are constantly constructing and rebuilding as they learn science.
It’s in this inner laboratory that real science happens, and understanding this gives communication a fundamental role: it is a means of exposing, exploring, and manipulating the cognitive models that give every scientific question and every piece of data its meaning. Disorder in the mental laboratory almost always leads to chaos in communication, and the act of communicating science offers ways not only to detect it, but also to straighten things out. In fact, it’s often the only way to even notice that the disorder is there. Our minds make assumptions and carry out logical jumps we aren’t aware of; until they are articulated aloud, our innermost beliefs and convictions are prey to influences that lie outside of science. A scientist’s examination of any system – even before a first encounter with it – is already styled by experiences of other systems, expectations, and models built using other systems long in the past; the recognition that this generates bias, and can even reach into data in ways that reconfigure it, is the reason why double-blind studies are so important. By putting something on paper, scientists can carry out a more careful, analytical scrutiny of their assumptions and models – to ask the questions, “Is this conclusion well-founded?” or “Are other interpretations possible?” one must first see the whole train of thought. Then it can be broken down and mercilessly queried, step by step, and weak points can be discerned.
The process of communicating science thus externalises thought to permit a self-critical scrutiny that may otherwise be impossible, or at least extremely difficult. Inevitably one becomes aware of gaps that had been invisible. It allows a person, at least to some extent, to look at his or her own ideas more the way another reader would. This skill can be trained, and it is the first step toward developing distance from a set of ideas – and even toward applying the perspective of a potential audience. That process not only improves the quality of a researcher’s communication – it can also affect the work. Sometimes the only thing necessary to discover fascinating new questions and develop better models is to notice the structure of a system in a text or diagram.
Most of the models in today’s science are so complex that they can’t even be thought about clearly without some form of representation – in language, images, or mathematical formulae. Papers and talks and other communicative acts open this complexity to inspection, analysis, discussion, criticism, and correction from the community. Trying to do science without communicating it is like trying to play chess – or teach someone else to play – without a board. For those who aren’t geniuses, a physical board offers a playing field to try things out, move the components around, and probe new strategies. To become a good scientist a person needs to look at many, many games, recorded in the literature, and extract the patterns and rules that lead to success.
Today’s students are constantly flooded with massive amounts of information which they are expected to arrange in their mental laboratories in a certain way. The hypotheses they frame, the experiments they design, and the way they interpret results are manifestations – symptoms – of the architecture they have built in their heads. But the only way to catch a real glimpse of this architecture, and measure their success at assembling it, is by watching how they put their work into the larger, logical framework of a text or talk. Explaining their science to non-specialists requires stepping farther back, seeing the more basic and generic patterns that underlie models, and trying to capture those patterns using tools such as metaphors.
That’s an important process because the inner mental laboratory of science is a metaphorical one as well. When a scientist frames a hypothesis regarding a specific problem – say, the behaviour or structure of a molecule – the form of the question is determined by the concept of a molecule, and what we think others think about it, rather than the molecule itself. The simplest things we think about are highly complex models and they are intermingled in a messy knot of other concepts, abstractions, and many types of knowledge that all come to bear on how clearly we are thinking.
So I am convinced that it is no accident at all that the best scientists I know have a very good understanding of their own thought processes, as applied to science. Often they have arrived at this point intuitively, divining rules and models through an intense study of the games going on around them. There are many parallels to learning a language: small children construct models that allow them to produce grammatical sentences by taking in and imitating the sounds around them, attaching those sounds to things in contexts that have meaning for them, and testing them against the practices of others. What ultimately comes out is a compromise between the things they want to talk about, social contracts about the meaning of words and sentence structures, genre-like expectations about what is likely to be said when and where, and fundamental aspects of the biology of our brains – the extent of short-term memory determines, to a great extent, how many things we can think about and process at once. That influences how complex a grammar can be, and it also determines how much of a model can be processed without an external reference such as a text or diagram.
The rules for how an adult learns a new language are different from those for a child, and this means that teaching must do more than deliver single facts or pieces of evidence that we expect non-native speakers to assemble properly. People come to science after a long process of intellectual development in which many concepts and expectations are already fixed, which means that moving into an artificial system of scientific models is more like the second type of language learning. Teachers usually take advantage of their students’ intelligence by presenting them with models of sentences and methods of producing new ones for the real communicative contexts that give them meaning. The same is true for research, and looking at it this way has profound implications for how we teach science and how we teach people to communicate it. I think that these efforts are most likely to succeed with a better understanding of the complex rules by which models give scientific ideas their meaning, an understanding of the cognitive nature of the models themselves, and a search for methods aimed at resolving these parts of the “communication problem.”
* * * * *
Some of my colleagues and other professionals in the field of science communication might be surprised that this enterprise doesn’t start with a discussion of issues we usually confront and talk about the most, such as the fact that people who have something to say about science and their audiences often have very different agendas in coming together. Their knowledge and interests often diverge very widely. Dialogues that are started as a way of generating mutual understanding sometimes lead to even greater misunderstandings, and in the worst case provoke exactly the opposite response. Audiences sometimes leave “popular science talks” thinking, “Science is so hard I’ll never understand any of it,” “Why can’t scientists ever give me a straight answer to a question?” and even, “They’re trying to hide something from me.”
Miscommunication is often the result of getting off on the wrong foot from the very beginning: a failure to consider exactly what you hope to communicate, which has to be a function of a rational decision about what it’s possible and desirable to achieve with a specific audience, and what you expect them to do with the message. The usual result of this failure is a mis-match: a message doesn’t resonate because it hasn’t taken into account an audience’s interests, needs, or their motivation in entering into a dialogue in the first place.
These situations and less severe symptoms of poor communication are deeply connected to the cognitive models by which we navigate science and nearly everything else in our lives. They constantly arise in teaching because most of the students I deal with have never been introduced to very basic principles of functional communication, where success depends on a good understanding of the message one wishes to share, the expectations and knowledge of the target audience, and the modes and genres that are available to deliver it. The quickest path to a communicative breakdown is a mis-match between any of these things.
My experience is that a meaning-based approach to teaching communication – which in science requires thinking about the connection between specific questions, results, and models – is extremely effective at solving these more fundamental problems. In following entries regarding this project I will use examples to explore the details of this model of science communication and how it can be translated into a didactic approach. In science, the first step toward solving a problem is usually to articulate a question very clearly. The same thing is true in teaching: to help a student acquire skills, we should first grasp what they need to learn. Communication begins with the construction of meaning, and the better we understand that process, the better we will be able to teach researchers to explain what they mean – no matter whom they need to address.
Russ Hodge, Sept. 2017