Announcing our new Science Communication Teacher Training Program at the MDC (SCOTT) 

Aims

SCOTT is a new program aimed primarily at advanced career stage scientists with excellent (near native) English and solid writing and presentation skills. The goals are to:

  • help participants develop additional professional qualifications as science communication trainers, teachers, writers, etc.;
  • produce a group of highly trained, excellent teachers to act as multipliers at the MDC and beyond;
  • serve as a unique model program to promote the institutionalization of excellent science communication training.

Who are we looking for?

Initially we will establish a group of 10-12 trainees who will work together as a team for one year. Priority will be given to postdocs and advanced career stage scientists at the MDC, although we will consider exceptional candidates with other qualifications and from other institutes. We also invite applicants from other fields of natural science, data science, informatics, etc.

What does the program entail?

Participants will need to make a long-term commitment and be prepared to devote about 3 days per month to the project (not as a block). Papers, presentations, grants or other projects they are working on with their own groups will count as part of this time. Activities will include seminars, observations of courses, outside assignments, and teaching. The program is divided into 3 phases:

  • In the first phase, seminars, observation, and discussions will provide a solid theoretical introduction to practices and problems in scientific communication, didactics and learning styles. The group will hone their own science communication skills, observe ongoing courses in a range of skill areas, discuss and deconstruct the teaching, and creatively brainstorm to improve the theory and methodology. We will work on lesson plans together and feed new ideas into the next cycle of courses.
  • In the second phase, participants will take over the teaching of some modules of ongoing courses themselves, with supervision by the instructor, observation by colleagues and sessions for constructive feedback afterwards.
  • In the third phase participants will begin teaching independently with support from the instructor and the group. We will present the program through lectures, demonstrations and group workshops, at the MDC and other organizations, to engage the community. “Graduates” will help recruit and work with the next class of trainers, export the model to their future institutes, and become the basis of a network that will continue to work together over the long term.

The first 4 months will mainly involve meetings of whole or half-days, spread at intervals through the month, and outside assignments. Later the schedule will be more flexible; participants will be able to choose from a range of modules to attend and teach. We will work together on lesson plans and develop a range of innovative teaching materials. We will also invite external experts to enhance the program with talks and workshops.

During the later phases, participants will teach in ongoing courses, take part in other projects, and be encouraged to develop workshops and courses around their own scientific topics, communication activities and needs. The project will offer special types of support to the participants’ home labs, such as customized workshops and help with papers, theses, and presentations.

Table of initial dates and activities

Meeting              | Date     | Topic
Meeting 1 (full day) | April 4  | Theory and aims
Meeting 2 (half day) | April 26 | Observation and analysis
Meeting 3 (full day) | May 12   | Observation & didactic workshop (student orientation)

What do we hope to achieve?

This work is based on an established theoretical background and teaching model which needs to be refined, improved, and expanded. As a group we will collect experience, improve the program, develop original teaching methods and materials, and produce a handbook for future trainers. We will enhance current training structures at the MDC and on campus by offering more support to students and scientists, developing content for the Long Night of Sciences and other events, and producing games, teaching materials for schools, etc.

The program will be extremely transparent, open to group leaders, scientists and other staff at the MDC as observers or participants at any time. We will support your work by offering customized workshops and helping develop communication and education modules for grants or institutional projects. Contact the program if you are interested.

Over the long term we will offer lectures and demonstration courses to other institutes and organizations within the Helmholtz Association and beyond, to promote the wider institutionalization of this model of training.

If you are interested or have questions, please contact Russ Hodge directly, at hodge@mdc-berlin.de.

As part of registration, we will set up an individual appointment to discuss the details of the program and your interests and needs.

Scientific communication training: Theoretical introduction

This is the latest version of the theoretical introduction to my communications courses, recorded in January 2022.

The last few minutes provide a transition to the first practical session on presentation skills.

Newest version of the “Ghosts” talk

This is an updated version of the presentation in which I introduce a “new model of the relationship between science and communication,” as presented to the Leibniz Association in June, 2021.

The talk is intended for researchers at every career stage, science communicators, communication trainers, other teachers, anyone interested in scientific thinking, and a wider group of stakeholders in research, communication and education.

This serves as the theoretical introduction to my courses in writing, presentation skills and other types of communication.

Please get in touch if you are interested in learning more, have comments, or would like to join a group of scientists and teachers who hope to institutionalize this type of training in research organizations and science curricula.

Ghosts of omission

What a thing IS encodes what it ISN’T

Note: This piece follows up on my other articles on “ghosts” – an analysis of diverse factors which disrupt science communication. To read more, see:

An overview of the model: “Ghosts, models and meaning in science”

The main article

A recent talk on the topic given at the Jackson Laboratory

Ghosts in images

More on ghosts in images

“Ghosts of omission” are a type I describe in the talk recently given at the Jackson Laboratory in Maine (see the link above). I discovered this type during a retreat with the Niendorf group from the MDC. We were doing an exercise on the difference between verbal descriptions of things and images. Each member of the group had to go into the kitchen, choose an object, then come back and describe it in purely physical, spatial terms, without naming it or stating its function. The listeners had to draw it.

One of the postdocs chose to describe this:

About half of the participants drew something that clearly corresponded to this object. But interestingly, the other half of the group drew one of these:

 

There are times when the “resolution” of language simply doesn’t suffice to disambiguate two things that are similar. Think of verbal descriptions of faces, for example, which could usually apply to lots of different individuals – it’s hard for most people to describe a face well enough for a police artist, even when the face is right in front of them.

In this case that isn’t really the problem. It would be straightforward to describe the “egg whisk” well enough to distinguish it from the beaters of a mixer. What happened, though, is that the person giving the description just didn’t think about beaters at the time.

This means that confusion or ambiguity can arise because, when describing something, the speaker or writer doesn’t know about – or simply doesn’t think about – another thing it might be confused with. In other words, the way we think of a thing encodes not only what it is – what we’d probably call defining features – but also the features that distinguish it from other things that resemble it along multiple dimensions.

This concept surely has profound implications for fields like information and set theory, and across the spectrum of linguistics. It’s equally crucial in the types of concepts and models created by biologists. I’ll just cite two examples here: noncoding RNAs and immune cells.

The completion of the human genome and the rapid development of sequencing technologies revealed that our DNA encodes not only messenger RNAs bearing the recipes for proteins, but a wide range of other types of RNAs. Scientists are still exploring the functions of these molecules. New types – with different functions – are being discovered all the time. Initially scientists grouped them into classes based mainly on the length of the molecules – categories such as microRNAs or long noncoding RNAs – and generally expected that these sizes would be associated with specific functions. The field has now exploded with the characterization of dozens of types, whose functions do not necessarily correlate cleanly with an RNA’s length. In principle, the discovery of each new type is like the discovery of a new kitchen instrument which might shift the defining and distinguishing features of existing utensils.

But the discovery of a new element in a system doesn’t always cause scientists to revisit and revise existing classifications. The same is true of the immune system, where new types of cells continue to be discovered. Researchers with a profound understanding of this incredibly complex system know that new types can force a revision of the roles and functions of the players already known. This can, however, take a while to seep into the broader awareness of the community. And there’s no guarantee that the patterns encoded in old ways of thinking about a type of RNA, or an immune cell, will ever be completely stripped away.

This problem is inherent to biology because new instruments – or upping the resolution of an old method – continually expose new features and elements of systems. At first, these components are almost always seen from the perspective of models that have done without them. Eventually the cognitive shifts spread and are better integrated. But we need to be aware that our models encode old ghosts that are never completely broken down and reconfigured.

To close I’d like to show another way in which “ghosts of omission” exert an extremely powerful effect on our thinking. In an earlier version of the “Jackson talk” I used to include an example of a text (slightly edited) by a famous humorist. We read the text and it usually got a laugh:

Tom and I saw Tom’s older brother George kissing his girlfriend on a couch. Tom and I looked at each other with big grins. If faces had been meant to kiss each other, they would not have been given noses.

Suddenly the scene turned bizarre because we saw that the girl had her tongue in George’s mouth and George’s tongue was misplaced, too.

What could that girl’s tongue possibly be doing in George’s mouth? Tom and I felt sick. After about a minute of observation, we went out into the backyard.

“That’s it!” I told Tom. “I’m really disgusted with girls now. I’m never gonna hit another one. Or even hit one with a jelly bean… Let’s make a pact. The first girl who ever puts her tongue in our mouth, we give it right back to her.”

At that point I identified the author: Bill Cosby.

If you know anything of Cosby’s subsequent legal troubles, and go back and read the text, what was simply amusing now becomes somewhat “creepy”. Knowing a single fact changes the way we process language and envision the roles of the characters. I can’t define creepiness in cognitive terms… But the change that occurs between the two readings of the text is the result of ghosts of omission. It’s another example of the profound effects of the “dark matter” of ghosts.

More “ghosts” in images

In my talk at the Jackson Laboratory and my other work on “ghosts” in science communication (1)(2)(3), I refer to the way hidden structures and patterns in our thinking influence not only how we understand meaning, but basic aspects of perception. Here are a couple of new examples: some developed for the talk, and something I found in the news this morning.

The first illustrates how we scan, process and interpret grey-scale images. Generally, if we see a black and white image, we’ve been trained to recognize structures and patterns based on everyday things we encounter. I’m sitting on a sofa with greyish-green cushions, and I recognize significant structures such as the cracks between them (very dark lines) and a floral pattern on the fabric, and others that I dismiss – shadows caused simply by the way the light is falling:

When I look at an MRI scan, I also see patterns:

and my brain does something similar… In essence, my brain is simplifying the structure, highlighting some differences and reducing others. It’s filtering the image down to something like this:

BUT the gradations of grey-scale on a sofa don’t mean the same thing as in an MRI scan of the brain. The original image actually contains far more gradations of grey than I can probably perceive…

But using Photoshop or another image processing program you can get the computer to mark them, and use false coloring to exaggerate the differences. Doing that to the original image produces this:


It’s not necessarily true that this rendering contains more functional information than the simpler one, but I’d bet it does. How meaningful are these new substructures? That’s for the experts to decide, but you have to notice them in the first place to ask the question.

The “ghosts” in this process are a level of visual processing that our brains often carry out below the surface, recognizing some shades of grey as the “same” and clustering them, ignoring others and filtering them out. There’s simply no guarantee that the way this is happening – trained by all kinds of situations in which we recognize patterns in images – will pick up the critical differences in an MRI image of the brain.
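For anyone who wants to experiment, here is a minimal Python sketch of the two steps described above – the brain-style simplification into a few clusters of grey, and the false coloring that exaggerates gradations. It uses numpy and matplotlib rather than Photoshop (my substitution), and the filename is hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

# Load a grayscale image (hypothetical filename - any photo or scan works).
img = mpimg.imread("brain_scan.png")
if img.ndim == 3:                        # collapse RGB to luminance if needed
    img = img[..., :3].mean(axis=2)

# Step 1 - "brain-style" simplification: cluster the grey values into a few
# bins, treating nearby shades as the "same". This mimics the filtering
# described above, which highlights some differences and erases others.
simplified = np.digitize(img, bins=np.linspace(img.min(), img.max(), 5))

# Step 2 - false coloring: map the full range of grey gradations onto a
# color scale, exaggerating differences the eye would otherwise merge.
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
axes[0].imshow(img, cmap="gray")
axes[0].set_title("original")
axes[1].imshow(simplified, cmap="gray")
axes[1].set_title("simplified (5 bins)")
axes[2].imshow(img, cmap="jet")
axes[2].set_title("false color")
for ax in axes:
    ax.axis("off")
plt.show()
```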

This morning I found a similar image in an article in the NY Post and used it to do the same thing. The piece refers to a study comparing the brains of a “normal, healthy” three-year-old and another who had suffered extreme emotional abuse. I’m not making any claims about the original study here, or about the controls and so on, since I haven’t read it yet. Nor am I sure that the image they posted represents the original data, with the full resolution and color scale. But still, the difference is remarkable.

Here’s the image posted on the site:

 

And here’s my colorized version:

 

There’s certainly more to see. What does it mean? Thoughts are welcome.

“Ghosts” in scientific images and narratives

copyright 2019 by Russ Hodge

This piece was motivated by my recent correspondence with Jens Wohlmann, a talented young scientist working in Norway. It follows up on two previous pieces concerning phenomena I call “ghosts”, which play crucial (and often disruptive) roles in scientific thinking and communication. They can be read here:

A brief overview in interview format

A more detailed introduction to the problem of ghosts and examples of various types

The letter from Jens:

Dear Russ,

It has been a while since I wrote you but I have been following your blog and I was actually thinking that I should write you about some “observations” concerning your “ghosts” – so I will use your mail as a reason to finally do so:

Listening to some recent talks by scientists at our institute, I came across “forces” you are most likely aware of but which are new to me – in the terminology of “ghosts” one could probably describe them as “goblins” or “deranging ghosts”. I am thinking of drawings and models of structures of interest in papers, presentations, schemes and so on. One example is when parts seem to be totally out of scale.

For example, a small GFP-tagged protein will be marked by a small “star” attached to it as a marker, although the GFP itself may be double the size of the protein of interest. The same is true for markers on antibodies. In EM we put a dot on a Y-shaped structure, but the whole antibody has a size of about 15 nm; since the particle may be 10 or 15 nm, it should be drawn as big as the antibody. Even the orientation in the illustrations is kept consistent – always with the binding end pointing “outwards”. This confuses students when steric hindrance is important, because in reality the structure is most likely a chaotic, multi-layered coat oriented in all possible directions.

Another nice example can be seen in schemes of transmembrane proteins, pores or receptors. Most of the time the structure of interest is presented as a huge thing standing alone on the cell surface (because it’s so important), and the membrane is represented as a thin line – but in reality the molecule is only slightly larger than the membrane, and may lie entirely inside it. Of course schemes need to be simplified, but this emptiness often gives the impression that cells are empty membrane bubbles, whereas the cytosol has an incredibly high protein concentration and is full of fibers and structures….

I think these “out of scale” representations can result in problems similar to your ghosts, and they really can be found everywhere.

Best, Jens

My response in four sections:

Hi Jens,

I think you’re absolutely on the mark with your observations about the peculiar ways biological entities are represented in images or schemes. The examples you gave are excellent. At the moment, I’m exploring ways of mapping some of these problems into the conceptual framework of “ghosts”. Below I’ve broken this down into a few related points.

1: Overview

To me what makes this so important is that we use images and schemes to represent complex concepts, but obviously ideas undergo important transformations as they are translated into visual form. Relationships that we know are three-dimensional are pressed into two. Key points are brought into the foreground, while others fade into the background or disappear altogether.

And dynamic processes are broken into static frames. What happens is a lot like the difference between a musical phrase and the way it is represented in a score; composers and musicians know that tones aren’t “particles” just lined up in sequences (they are “waves” integrated into longer “waves” – phrases of different lengths in different voices). I’ll be talking about thinking of cellular processes in terms of “phrases” and other musical terms like polyphony, harmony and dissonance in my upcoming talk in Oslo – this is a much larger discussion.

The ways we translate concepts into images, language or mathematical models are highly susceptible to influences by effects of styles and genres, which reflect experience, habits and expectations and make communication possible. Such “styles” guide the way a thinker produces an image (or text) and the way audiences unpack it to map information onto their own conceptual frameworks. It’s interesting that most of the time, the two are combined: when a speaker shows a slide, he will say something about it; figures in texts are accompanied by legends. The idea behind this, I think, is to ensure that the audience decodes the meaning the way the author or speaker intended.

And here the problem of “ghosts” rears its head – inevitably, a lot of meaning is hidden. Some of it lies in the invisible conceptual architecture that lies behind packing and unpacking; some of it lies in the style or code. And an awful lot of it comes from differences in the way the author and audience have their knowledge organized. As an electron microscopist, you have an extremely high-resolution version of what a cell is in your head; you know how “full of stuff” this landscape is. And you work at a scale where the relative sizes of molecular objects are incredibly important.

But you’ve seen enough talks and read enough papers to be familiar with the styles of most schemes that you’ll see, and you know how to translate them into your own conceptual models. Sometimes they won’t fit. You surely find it equally difficult, sometimes, to translate your ideas into schemes that other people will understand. This is true for all kinds of communication; what makes it interesting in science is that it’s often possible to pinpoint where things go wrong and identify the ghosts very clearly.

I think that recognizing this is essential to the scientific process. Hidden architectures are essential to meaning. Individual scientists – even in the same field – have their knowledge organized in different ways. This creates subtle differences in their views of models that are analogous to variation in biological systems. When these differences collide and become exposed, they lead to refinements and revisions in models. This can be a powerful, efficient, creative process if we are aware that they are there. The problem is that most people don’t consider them consciously when they communicate or teach, and don’t actively look for ghosts that can disrupt communication. If the structure of a collection of concepts remains invisible, students will have to assemble it themselves, and a lot of things can go wrong in the process.

So now I’ll try to break down some of the things you’ve mentioned.

 

2: The problem of “translation” between conceptual models, language, and images

Recently a number of scientists have asked me to create drawings for their talks. They describe something, I try to draw it – FAIL! – they tell me what to fix and I try – FAIL! – and so on, until we finally have it right. This happens even when they pre-draw a scheme, because somehow I don’t see it the way they do.

There are several things going on here. First, if the scheme represents a model of a physical system, such as a molecular structure, a complex of molecules or a process, the scientist is probably thinking of it visually and spatially but simultaneously functionally. Whatever function he is considering at the moment (foreground) plays a big role in the degree of detail that is in his mind and that he wishes to display in the image. So if I’m just trying to show what molecule binds to what, it may be enough to represent single components as generically as Lego blocks. A lot of times in these schemes, the pieces are not even placed in the right relationship to each other – which is understandable given the difficulties of crystallizing complexes. I was astounded many years ago to learn that biochemically, it was even hard to determine how many copies of a specific protein there are in a particular complex.

(For the nerds: Even when crystals are made, weird Fourier transformations (math!) have to be applied to turn X-ray diffraction patterns into electron density maps; then sequence information and homologous structures are used to find alpha helices, beta sheets, and tell what belongs to what.)
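To make that parenthetical slightly more concrete: the electron density map is essentially the inverse Fourier transform of the complex structure factors. Here is a toy numpy sketch (my own illustration, not any crystallography package’s API; the experiment measures only amplitudes, so real phases must be estimated separately – the famous “phase problem”):

```python
import numpy as np

# Toy illustration: an electron density map is (essentially) the inverse
# Fourier transform of the complex structure factors F(hkl). The diffraction
# experiment measures only |F|; the phases must be estimated separately,
# which is the famous "phase problem" of crystallography.

rng = np.random.default_rng(0)

# Pretend we already have phased structure factors on a 32x32x32 grid
# (made-up numbers here; in reality they come from indexed diffraction
# spots plus a phasing method such as molecular replacement).
F = rng.normal(size=(32, 32, 32)) + 1j * rng.normal(size=(32, 32, 32))

# The inverse FFT turns reciprocal-space amplitudes and phases into a
# real-space electron density map, into which the sequence is then built.
density = np.fft.ifftn(F).real
print(density.shape)  # (32, 32, 32) grid of density values
```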

Anyway, in diagrams, very simple models may be sufficient until fine details of their surfaces and issues like steric hindrance suddenly become important in understanding something.

When starting out to make a scheme, I think it’s important to understand that our minds are constantly shifting between considering different types of functions, rapidly shuffling concepts between the foreground and background, and doing so at different scales. So there aren’t “one-size-fits-all” models. Different levels of structure ought to be embedded in each other and linked, but as you well know, most methods in biology don’t give us scalable views of things like Google Earth; if we want to study a new level, we have to change methods. That means, inherently, that models are necessary not only to classify, generalize and describe or depict the components of a system, but to link them to higher and lower levels of structure. Conflicts arise all the time because they are connected by a hidden web of assumptions and structures.

In your work with biologists, yeah – it would be nice to have (3D) EM pictures of everything everybody is studying, and even real (3D) structural views of the molecules, but it’s not always necessary to make a certain point.

To give you an example, below I’ve inserted three models of the same thing: a nucleosome. Each of these was clearly developed to emphasize a particular relationship between structure and function. But those relationships lie at different scales, which has dictated the level of detail that is included, what lies in the foreground and background, and influenced all sorts of stylistic decisions.

This helps explain why some of the examples you gave don’t work, or are dissatisfying – people are often lazy about making their own images; they borrow them from other people and don’t check that they are really made to fit the point at hand. As a result, images may not convey the information they’re really aiming at.

Please note: I acquired these images from diverse papers; if any author has a problem with their use, contact me and I will replace it.

There are several issues to consider here. First, a researcher has some sort of visual representation of the system in his head, but when he tries to draw it or create an image he may realize that he hasn’t probed that internal visualization in detail. In fact, the most detailed images here required a computer: the scientist doesn’t have all of this in his head – at least not in this form. This means that there are all kinds of gaps in his concepts, which is interesting because he may be completely unaware of them until he actually tries to create the image. This is one way that engaging in communication can generate entirely new scientific questions. (“Oh, I didn’t know that, or I have no idea where this component belongs – how can I figure it out?”)

Second, language is notoriously bad at capturing many types of visual information. Part of this has to do with the linear nature of language: you can describe a row of dominoes lined up in a sequence, but if they’re scattered randomly across a table, we don’t have enough words for complex two-dimensional shapes, let alone 3D. Or try finding a criminal based on a verbal description. Or drawing a face based on one – the “police artist” problem.

Third, the prerequisite to communicating any model well is having it clearly in your mind, and mapping it onto language in the clearest possible way given your expectations about the audience. A lot of scientists don’t understand all the things that can go wrong in this mapping process.

3: Ghosts in visual styles and genres

A biologist would see the image below much differently than a non-scientist. When I show this to groups of scientists, they all recognize that they are looking at something on the molecular scale. They immediately recognize the double-helix structure of a DNA molecule, and notice that its circular, wrap-like structure encloses ribbon diagrams (simplified schemes of proteins). This probably makes the structure a nucleosome. They will probably assume the different colors of the ribbons are meant to represent different proteins – here, four of them. If you assume that this object has front-back symmetry, then you might guess that there are eight histones in the complex.

Very little of this information is contained in the image per se: it’s extra knowledge that the viewer has to have to decode the scheme.

There are lots of other, very basic “ghosts” related to two-dimensional images you need to be aware of to “understand” and explain this object. We’re used to translating 2D into 3D; shading and shadows create an illusion of depth, but some of this is cultural. Is it very thin or thick? And so on.

But there’s another enormous ghost in this image, truly invisible in the most literal sense, that no scientist I’ve shown it to has detected so far. What’s all that white space around the thing? It can’t just be empty space. Nucleosomes only exist in a very specific biochemical environment – that of the nucleus, composed of all kinds of other molecules, a specific pH, and so on. So this object and its nature are contingent on a lot of invisible things that aren’t in the image at all. They are, however, somehow encoded in the image.

4: The “fudge factor”

This type of ghost is something my good friend and mentor Jim Hartman came up with over a lunch last year. It’s omnipresent – there in every example we’ve considered so far – and very complex, because it mixes many other types of ghosts. In some ways it comes really close to what you called “goblins”.

“Fudge factors” arise from the fact that everyone knows that language, concepts, models and images don’t map onto each other very well. So whenever you describe something, you’re packing some thought into language or an image, transmitting it to someone else, and expecting them to unpack it in a very similar way. The representations are usually highly simplified – highly complex processes are reduced to a shorthand. If everyone translates them the same way, this works fine. But hidden within are lots of ghosts that can make things go very wrong. Think how hard it would be to truly adequately describe – in language – an experimental protocol to someone like me, and expect me to do it right the first time. I barely know a pipette from an electron microscope.

Here’s an example of a text loaded with “fudge factors,” concerning a biochemical signaling pathway that I recently deconstructed with my friend Uwe Benary:

 

A Wnt stimulus leads to the inhibition of the destruction complex that normally targets β-catenin. In consequence, less β-catenin is degraded and more β-catenin is able to enter the nucleus. There it regulates the expression of specific target genes.

 

Any experienced molecular biologist recognizes that dozens (hundreds? thousands? millions?) of steps are omitted from this description. To list just a tiny fraction of them: to receive a signal, lots of things have to happen to prepare a cell to bind the Wnt ligand. Many types of molecules (including its receptor) have to be present, in the right quantities. They have to be arranged in often huge complexes – many of whose parts are unknown – that are constantly undergoing dynamic rearrangements. For beta-catenin to get involved, specific sites in its binding partners have to be chemically modified; once the complex dissolves, it is somehow transported to the nucleus and through pores, all along the way interacting with other factors and releasing them again. It has to find its way through masses of chromatin to find specific targets, a process which is hardly understood at all, and then participate in assembling the transcription complexes that will read the DNA sequence and build RNAs. It hits a lot of the “wrong” targets.

There are more types of ghosts: a scientist knows that we are not really talking about single molecules, but a generic model of how whole populations of molecules behave. Etc. Etc.

This type of highly oversimplified account is only meaningful within the context of a particular function of focus – just like the nucleosome image – and because people agree on how a model should be packed into language and unpacked again. Students won’t know all the missing pieces when they hear this, and the ghosts will lead to lots of misunderstandings. They may mistake the shorthand for a complete account of the process.

Interestingly, this skeletal shorthand reflects the history of the beta-catenin model. The bits of the story that are mentioned represent major discoveries over the past couple of decades. Digging out the missing steps has been the subject of an amazing amount of research; still, when the story is told, it’s arranged on the foundations of historical ghosts: what should be pulled into the foreground, what can be “safely ignored,” and what is simply unknown. This shorthand is a perfect example of the operation of fudge factors, a process that constantly generates ghosts.

There are always fudge factors – even in the most detailed experimental protocols, which are based on a researcher’s knowledge of tools and procedures and a large corpus of experimental and biological knowledge. I think they are likely a major cause of difficulty in reproducing experiments. And a lot of disagreements between the preeminent scientists in a field are waged over fudge factors – listening to debates can be extremely confusing if they are not exposed. Sometimes for the non-insiders, it’s hard even to tell what they are arguing about.

More on the profound connection between communication and science

Last year I gave a number of talks on a new model of the relationship between communication and research, which I have covered in “Ghosts, models and meaning in science,” and a more detailed text, here. I’ll be adding articles on this theme in the coming weeks. Comments are greatly appreciated – they have already significantly improved the project.

The core point is that scientific messages derive meaning from their relationship to various models and other concepts that often remain “invisible” (ghosts) in a given text or communicative context. This is true of all kinds of communication, of course. But the natural sciences relate meaning to models in specific, highly structured ways that can be recovered. If this invisible architecture is not shared by the writer or speaker and the audience, meaning will be lost. A failure to take this into account is one of the most common reasons people misunderstand a message. And in doing science, being unaware of the link between a project and the models that spawned it can become an obstacle to generating new hypotheses or fully understanding what happens in an experiment.

The inherent connection between thinking about, doing and communicating science is crucial to the quality of research and has important implications for science education. Here I present two slides I use in my talks. These “Concept maps” expose some of the patterns that link these ideas.

 

The first slide shows how a very specific scientific question (rose-colored box at the bottom) can be fit into a hierarchy of more general questions and models. There is no single path for creating such a chart: the same question at the bottom could be analyzed upward in different ways. You might diagram it within a more chemical or physical or evolutionary pathway, because specific questions are embedded in all kinds of models.

There are several important implications.

First, at some level, an experiment which seeks an answer to a very specific question also challenges the higher-order models it is embedded in. Basically, an experiment may be shaking a big tree and probing assumptions concerning several levels of the hierarchy and how they are linked. A highly specific experiment can refute a very large model, theory or linked set of assumptions. For example, all kinds of simple experiments might have shaken evolutionary theory, or a study that characterizes tumor samples could overturn a view of how a particular therapy works.

Secondly, an audience may know nothing about the example given below, which involves NF-kB, transcription factors, signaling pathways and so on. When trying to explain something, a scientist needs to make a reasonable guess about the knowledge of an audience and the kinds of things they are interested in, then find the right level of the hierarchy to jump in. Going downward provides a logical path for a dialogue that moves from a general question to a more specific one – and how they fit together.

The second slide shows how this communicative strategy can help scientists think about a problem more clearly, see relationships between models, and widen their understanding of the implications of their work.

Stay tuned for more soon.

Russ Hodge

 

 

A dialog on ghosts and models in science

This is the first of several pieces in response to questions I have received about my recent lengthy article (too lengthy!) on “Ghosts, models and meaning: rethinking the role of communication in science.” It’s intended to give a quick overview of the main ideas; you’ll find the full article here.

Can you give me a succinct definition of the “ghosts” you’re talking about?

There are a lot of contexts in which science communication somehow fails because an audience doesn’t get the point or understand a message the way it was intended. The naïve view of this is that scientists just know a lot more about a specialized topic than people from other fields or the public. Of course that happens, but I’ve found it’s rarely the biggest issue in communication. And it doesn’t explain why people so often have problems writing for experts in their own field, or have trouble clearly expressing things they know very well.

When I began teaching scientists to write, I constantly came across content-related breakdowns that were hard to understand. This got so frustrating that I finally decided to carry out a systematic analysis of the problems. That took about four years, and “ghosts” emerged as a fundamental concept that’s helpful in understanding a lot of what goes wrong.

Ghosts originate from many things: concepts, frameworks, logical sequences, various patterns of linking ideas, theories, images and so on. What unifies them is that the author has something in mind that is essential to understanding what he means – but it’s missing or very hard to find within the message itself. Often the author is not even aware he’s thinking of something a certain way. Since it’s nowhere to be found in the message, it’s invisible. If the reader doesn’t sense its presence and go looking for it, or has too much trouble digging it out, he will probably misunderstand what the author really meant. All the words might make sense, but there’s some core idea that’s still missing.

I call these things “ghosts” because they are invisible, in that sense, and yet highly disruptive. Of course they occur in all kinds of communication. But ghosts are particularly interesting in science because it has very structured and special ways of assigning meaning to things. What things mean depends on a hidden code that most scientists eventually absorb and imitate, but a failure to recognize its existence causes all kinds of problems. A scientific text will be completely opaque to a lot of people not only because its meaning depends on all of these invisible things – even more because people don’t know where to look for it, or that it’s there at all. It makes science harder to communicate and much harder to learn.

What this boils down to is that science has special ways of assigning meaning to things that really need to be taken into account when you’re planning a message or trying to interpret one. If you don’t, a lot of misunderstandings become almost inevitable, when they could easily have been avoided.

 

You mention models again and again – why are they so central to misunderstanding science?

Among the most significant and disruptive ghosts in science are the various models used in formulating a question or hypothesis and interpreting the results. Most studies engage many types and levels of models. In a single paper an author often draws on everything from basic concepts such as the structure, organization and composition of cells, to the components and behavior of biochemical signaling pathways, to complex processes such as gene regulation, to notions like states of health and disease, evolutionary theory and so on. The way scientists describe fairly simple things usually draws on a complex, interlinked universe of models that goes from the smallest level of chemical interactions to mechanisms, organisms, species, and their evolutionary relationships.

Scientists obviously recognize this; as Theodosius Dobzhansky said, “Nothing in biology makes sense except in the light of evolution.” But there is a big difference between vaguely acknowledging this and actually working out how the vast theoretical framework of evolution reaches into every single event you’re studying, and into the way you understand the “simplest” things – such as the names of molecules.

And often people don’t realize that even Dobzhansky’s statement is resting on huge, invisible ghosts that he doesn’t explicitly state but are essential to understanding what he means. What I mean is that evolution itself is based on principles of science that are even more fundamental – it follows from them. So if you’re talking about the theory, you’re also engaging this deeper level. That’s really interesting because most of the “debates” over evolution I’ve witnessed are actually arguments about these even larger things. If the parties in the dialogue never articulate that deeper level of the disagreement, it makes very little sense to discuss the types of details that people go round and around about. They’re exchanging a lot of words, but they don’t fundamentally agree on what those words mean. They are arguing about whether species change, split apart or go extinct, but to get anywhere on those issues you have to agree what the term “species” means. It’s not so much that they don’t agree – more that they don’t even realize there is a problem.

 

What deeper ghosts have to be faced before someone can really understand evolution? 

I think there are two, which are so basic that they distinguish science from other ways of thinking about things and assigning them meaning. I call the first one the principle of local interactions, which follows from a fundamental assumption about physical laws. In science, if you claim that something directly causes another thing, you are expected to prove that there is some moment of time and space where the cause and effect come into direct contact with each other, or at least to demonstrate that this is a highly reasonable assumption to make. Scientists extend this concept with a sort of shorthand: the two objects may not really bang into each other, but then they have to be linked by steps, such as a transfer of energy, that do follow this rule. So to make a scientific claim that a child inherits traits from its parents, you have to find some direct mechanism linking them, such as the DNA in their cells: it is passed directly to the oocyte from the reproductive cells of the parents, gets copied into each cell, and is then used in the transcription of RNAs and their translation into proteins through a lot of single, physical interactions. You’ll never directly see all of those things happening, but the models you use predict they are there.

The second principle applies this type of causality to entities as complex as organisms or entire ecospheres. It shows what happens when a lot of local interactions create systems that are much more complex. At that point the principle declares that the state of a system arises from its previous state through a rule-governed process. From that it follows that future states of the system will arise from the present one, following the same rules. We’re far from knowing all those rules, but scientists assume they are there, and a lot of their work is aimed at creating models that describe them.
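In more formal terms – my own paraphrase, not part of the original text – the principle says that if s(t) is the state of a system at time t, then s(t+1) = f(s(t)) for some rule-governed f. A toy Python sketch, with an arbitrary made-up rule standing in for the real (largely unknown) ones:

```python
# Toy illustration of the second principle: the state of a system arises
# from its previous state through a rule-governed process, so future states
# follow from the present one by applying the same rule again and again.
def next_state(x, r=3.7):
    # An arbitrary stand-in rule (logistic update), not a biological model.
    return r * x * (1 - x)

state = 0.2                  # the system's present state
trajectory = [state]
for _ in range(10):          # project the model forward in time
    state = next_state(state)
    trajectory.append(state)

print(trajectory)
```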

Both of these concepts are closely tied to a style of argumentation that integrates Occam’s razor; I’ll talk about that elsewhere.

How are these fundamental principles linked to evolution? Well, you start by observing what is going on in a biological system right now and creating models that project the state into the past and future. You test those models with experiments, and then start extending them farther and farther into the past and future. You make predictions about what will happen if the model is correct in the future, and look for evidence of its activity in the past. If something in an experiment violates those predictions, you have to revise the model. This process of observation, modeling, and challenging models is the source of the Big Bang theory in astrophysics; it’s the basis of our geological understanding of the Earth’s crust, and when Darwin applied it to life he got evolution.

Other belief systems such as religious accounts don’t start from an assumption that models are works in progress that will inevitably be revised; nor do they require that their versions of things constantly be revised to conform to evidence. It leaves people free to believe whatever they like, to maintain idiosyncratic positions in the face of mounting evidence to the contrary. It leads to inconsistencies about the way they think about causes and effects in their daily lives versus how they extend their opinions to the universe. This is pretty egocentric; it leaves no place for self-doubt and encourages no respect for the potential validity of other belief systems. This very easily slides into a type of intellectual authoritarianism which is absolutely counter to the fundamentally democratic nature of science.

You can see these two principles at work in the way we distinguish “scientific models” from every other kind. Anything that violates the principle of local interactions would be considered non-scientific. That’s the case for extrasensory perception – until someone demonstrates that some energy passes from one person’s mind into another’s, you can’t make a scientific claim for its existence, so you have to look closely into whatever model of causality led you to claim it might exist. And the second principle implies that there are no discontinuities – you can’t create something from nothing. Miracles and the fundamentalist account of creation violate both principles.

If you can’t agree on these two things, it makes very little sense to discuss details of evolution that derive from them, because the differences in the very basic assumptions held by people can’t be resolved – you’ve got to agree on things like standards of evidence and causality. If you don’t do that you can’t even agree on the meaning of words. That’s what makes these fundamental principles ghosts in “debates” on evolution, and they are the things you need to clarify before getting involved in one. And, of course, you have to insist that the participants act in a way that is intellectually fair and honest, with integrity.

There are a lot of other debates in science – such as controversies over animal experimentation – in which this doesn’t happen. Reputable organizations make inflammatory remarks and hold untenable positions on points of fact, and refuse to back down when you refute their points. Then you get barroom brawls rather than civil discussions about important topics.

 

You came up with this concept of “ghosts” while working on texts by students and other scientists. Why are they a particular problem for students?

An active researcher is usually so deeply engaged with his models that they have become a fully natural, shorthand style of thought. It’s like the grammar of a native language, which becomes internalized without a real understanding of its structure. In science this grammar has a lot to do with models. Most projects in research take place in a fairly exact dialog with specific models you are either trying to elaborate on by adding details, or extend to new systems, or refute through new evidence. This makes models very dynamic, and there’s no single reference on the Internet or wherever where you can go and find them. In biology virtually every topic gets reviewed every year or two, in an expert’s attempt to summarize the most recent findings and keep people in the field more or less on the same page. That’s the group a lot of papers and talks are addressed to – at least most scientists think that way – and they assume the readers will have more or less the same concepts, models and frameworks in mind. Anything that is widely shared, people often fail to say – they think they don’t need to. And it’s impossible to lay out all the assumptions and frameworks that underlie a paper within it – you can’t define every single term, for example. So these become ghosts that aren’t explicitly mentioned but lie behind the meaning of every paper. The two really huge basic principles I mentioned above are rarely, rarely described in papers.

And even the details of the models more directly addressed by a piece of work – the physical structure of the components of signaling pathways, or all the events within a developmental process – aren’t mentioned very often. Those models are embedded in higher-level models, and the relationships in this hierarchy are not only hard to see – there’s no single way of explaining them. Scientists sometimes work these things out fairly intuitively as they extend the meaning of a specific set of results to other situations and higher levels of organization.

Now imagine a science student who is absorbing tons of information from papers like these. As he reads he’s grappling with understanding a lot of new material, but he’s also actively building a cognitive structure in his head – I call it the “inner laboratory” or “cognitive laboratory.” It consists of a huge architecture in which concepts are linked together in a certain structure. The degree to which he understands a new piece of science depends on how that structure is put together, and where he plugs in new information. If the text he’s reading doesn’t explicitly tell him how to do this, there will be a lot of misinterpretations.

How can his professor or the head of his lab tell whether a scientist under his supervision is assembling this architecture in a reasonable way? You catch glimpses of part of it in the way someone designs an experiment, but I think the only method that gives you a very thorough view of it is to have the young scientist write. That process forces him to make the way he links ideas explicit and put them down in a way you can analyse each step. In writing – or other forms of representation, such as drawing images or making concept maps – you articulate a train of thought that someone else can follow, providing a means of interrogating each step. Most texts are pretty revealing about that architecture; if you read them closely you can see gaps, wrong turns, logical errors, and all kinds of links between ideas that a reader can examine very carefully.

The problem is that in most education systems in continental Europe, where most of the scientists I deal with were educated, writing is not part of the curriculum. Whatever training they have is done in all sorts of ways, and the teaching is usually not content-based. Instructors use all kinds of exercises on general topics, but that learning doesn’t transfer well to real practice. Why not? Because when you write about a general theme, your knowledge is usually arranged very similarly to the teacher’s and to any general audience’s. In your specialized field, on the other hand, your knowledge is likely to be arranged very differently, and that’s where the ghosts start to wreak real havoc on communication.

 

So ghosts aren’t just things that scientists leave out of texts – they’re also phenomena that arise from the reader or audience…?

Absolutely – they arise from differences in the way a speaker and listener or a writer and reader have their knowledge organized. That can happen in any kind of communication, but in science it’s actually possible to pin ghosts down fairly precisely. In political discussions or other types of debates there aren’t really formal rules about the types of arguments that are allowed… But if you know how meaning in science is established, you can point to a specific connection in a text or image and say, “To understand what the scientist means, you have to know this or this other thing.” Again, since neither of you can directly see what’s in the other’s head, a reader may not guess that some of the meaning comes from very high levels of assumptions, or a way of organizing information that you’re not being told. And some have been digested so thoroughly by scientists that they’re no longer really aware that they are there.

Some of the most interesting ghosts appear when you try to use someone’s description of a structure or process to draw a scheme or diagram. I recently had to draw an image of how a few molecules bind to DNA because we needed an illustration for a paper. I thought I had it clear in my mind, but I ended up drawing it five times – each version incorporating some new piece of information the scientist told me – before I got it the way she wanted it. You learn an incredible amount that way.

A scientific text is often based on an image of a component or process that a scientist has in his mind. He’s trying to get a point across, and to understand what he means you have to see it the way he sees it – but if he leaves anything out, it’s easy to completely miss the logic. It’s like trying to follow someone’s directions… That works best if the person who’s giving the instructions can “see the route” the way it will appear to you, maybe driving it one time to look for the least ambiguous landmarks, or taking public transportation and watching exactly what signs are the most visible. And thinking it through with the idea, “Now where could this go wrong?”

 

Another thing you refer to is concept maps – you include several examples in the article. How do they fit in?

Concept mapping is a system invented by a great educator named Joe Novak; it gives you a visual method to describe very complex architectures of information. It’s extremely useful in communication, teaching, and analyzing communication problems. One reason it’s so important is that our minds deal with incredibly complex concepts that are linked together in many ways. Think of trying to play a game of chess without a board – that’s incredibly difficult, but a chess set is a fairly simple system compared to most of those that science deals with. There’s really no way to keep whole systems in your head at the same time. Making a map gives you a chance to see the whole and manipulate it in ways that would be impossible just by thinking about it.
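For the technically inclined: a concept map is essentially a directed graph in which the nodes are concepts and each edge carries a linking phrase. Here is a minimal Python sketch using networkx and matplotlib (my choice of tools, with a made-up biology example), just to show the structure:

```python
import networkx as nx
import matplotlib.pyplot as plt

# A concept map as a directed graph: concepts are nodes, and every edge
# carries a linking phrase that states HOW the two concepts are related.
cmap = nx.DiGraph()
cmap.add_edge("DNA", "RNA", label="is transcribed into")
cmap.add_edge("RNA", "protein", label="is translated into")
cmap.add_edge("protein", "cell function", label="carries out")
cmap.add_edge("DNA", "chromatin", label="is packaged as")

pos = nx.spring_layout(cmap, seed=3)   # fixed seed for a repeatable layout
nx.draw(cmap, pos, with_labels=True, node_color="lightyellow",
        node_size=2500, font_size=9, arrows=True)
nx.draw_networkx_edge_labels(
    cmap, pos, edge_labels=nx.get_edge_attributes(cmap, "label"), font_size=8)
plt.axis("off")
plt.show()
```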

But the real genius of this system appears in communication and its most precise form – education – where a teacher ought to understand what he is really trying to communicate, and how it’s likely to be understood by the students or audience. In most cases you’re hoping to do more than just “transmit” a list of single facts; you’re trying to get across a coherent little network of related ideas, linked in specific ways. If you do that successfully, the audience will leave with a pattern they can reproduce later. It might be a story, a sequence of events, or a metaphor – the main thing is, they have seen how the pieces are related to each other.

A great way to do this is to make a map of the story you’re trying to tell, and then make your best guess about how this information is arranged in the heads of your target audience. What can you realistically expect them to know, and what information and links are likely to be new? If you see the pattern you’re trying to communicate very clearly, and make a reasonable guess about how some type of knowledge you can relate it to is arranged in your audience’s head, you know what you have to change to get them to see things the way you’re hoping. In schools they’re teaching kids to make concept maps early on. Then before a lesson about something like the solar system, the teacher has the kids draw a map of what they think about the sun, moon, planets, and so on. After the lesson the kids make a new map – comparing the two tells you what they’ve really learned.

 

In your article you point out ghosts that come from schemes like sequences of events or tables…

A lot of scientific models consist of sequences of interactions between the components of a system. Those start somewhere and involve steps arranged in a particular order, and it’s important for the reader to have a view of the steps and that order in his mind. You’d be surprised how often scientists describe these processes in some bizarre order that doesn’t go from A to K, but starts at G, goes to H and I, then goes back to G and works backward to F, E, and D… Again, if the reader is already familiar with the sequence or pathway, this is no problem. But if not, you’re expecting him to try to assemble the process in some reasonable order himself. That may be possible through a careful reading of the text, but it takes far more “processing time” than if the whole sequence were simply laid out in order in the first place.

Tables are interesting because a lot of experiments are designed with a structure that’s pretty much inherently that of a table. Say you have two experimental systems plus a control, and you apply two procedures to all of them. To make a claim about the results, you have to march through all these cases – basically a table that’s 3×2 or 2×3. Here again, you’d be surprised how many scientists’ descriptions skip over some of the cells of the table, mostly because the results aren’t very informative. Or they tell you, “Procedure A caused a 5-fold increase over Procedure B,” without telling you what happened in the control.
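
Purely as an illustration (the systems, procedures, and results here are hypothetical), laying the description out as the full grid of the design makes a skipped cell – like a missing control – impossible to overlook:

```python
from itertools import product

# Illustrative sketch: the full 3x2 grid of the design described above,
# with unreported cells made visible.
systems = ["system A", "system B", "control"]
procedures = ["procedure 1", "procedure 2"]

reported = {
    ("system A", "procedure 1"): "5-fold increase",
    ("system A", "procedure 2"): "no change",
    ("system B", "procedure 1"): "2-fold increase",
    # ... the description never mentions the other cells
}

for system, procedure in product(systems, procedures):
    outcome = reported.get((system, procedure), "NOT REPORTED")
    print(f"{system:9} | {procedure:11} | {outcome}")
```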

Both of these effects stem from a scientist’s failure to recognize the structure of the information he has in his head and is trying to present – and his consequent failure to present that structure in the text in a way that’s easy for the reader to rebuild in his own head.

 

You’ve said that ghosts are one component of a larger model you’re working on that reformulates the relationship between science and communication… What else is there?

A lot of the other points can be captured through an exploration of what I call the “inner” or “cognitive” laboratory of science. The really good scientists I know have a very clear understanding of their own thinking. They know the assumptions that have gone into the models they are using, and they are aware of the limitations, where there are gaps, and so on. That type of clarity usually translates into good communication, no matter what the audience.

One very surprising thing I found during this project was the extent to which writing and communication for all kinds of audiences were connected, and how addressing very diverse audiences could clarify thinking in a way that improved a scientist’s research. When you find a scientist struggling with clarity in a text, it usually means one of two things: either the topic is not clear in his head at that moment, or it’s not clear in anybody’s head at this moment in science… That second case is very interesting, because it means you can find interesting questions just through a very careful reading of a text, realizing that it’s asking you to build a certain structure of ideas. If you have difficulty, that means something. One of the basic strategies I used in working these things out was to treat problems as meaningful – they’re trying to tell you something about how good science communication works, or how scientific thinking works… usually both.

Speaking to a general public with really no specialized knowledge of a field can be a truly profound exercise for a scientist. It makes him interrogate his own knowledge in alternative ways. He has to come to a much more basic understanding of the patterns in his inner laboratory and apply different metaphors, trying to map that knowledge onto someone else’s patterns. Well, the cognitive laboratory is already metaphorical, based on concepts rather than real objects, and applying new patterns or metaphors to what’s in there is extremely interesting. It can suggest questions you’ve never thought of before. This means that tools developed by linguists and communicators can be used to crack open scientific models.

I’ve actually done this – used those tools to expose an assumption about evolution that everyone was making but was usually unaware of. The assumption had never been tested, so my friend Miguel Andrade decided to take it on as a project and put a postdoc on it. The results were really interesting, showing that there were a lot of cases where the assumption didn’t hold – and we got a published, peer-reviewed paper out of it. That was three years ago, and in the meantime I’ve been involved in a number of other projects with similar outcomes. A communicator who pursues questions about meaning and language has a different set of tools for understanding how ideas are linked in scientific models. You’re freer to apply slightly different metaphors and patterns to ideas; you may be more rigorous in perceiving assumptions; metaphors and other tropes help you see cases in which people are reasoning by analogy rather than strictly adhering to the system at hand.

So these ideas aren’t just a way to help people plan and communicate better – although they certainly help with those tasks. In fact they are much more fundamental to scientific thinking. Understanding these relationships between communication and science is a pathway to doing better research, through a better understanding of its cognitive side. I’ve noticed recently, for example, a lot of cases where the way people think about complicated processes is drifting away from the language they use to describe them. The language is conservative and may be hard to adjust. But that adjustment will be essential as the models these fields use move forward and become so complex that our minds – and our language – may not truly be able to capture them.


Ghosts, models and meaning in science

Rethinking the role of communication in science

by Russ Hodge, copyright 2018

Read the article here

This article is intended for all the stakeholders in the broad field of science communication: from practicing scientists at all stages of their careers to science students and teachers, journalists, communicators, and educators. It could also be of interest to linguists, cognitive psychologists, and others interested in the connection between thinking and language. I hope it will be read by those responsible for university programs across Europe, because it provides several arguments for making communications training a standard part of their curricula.

Here I bring together ideas that have been dealt with superficially in other pieces (1, 2, 3, 4) on the blog.

This is a rough draft – the first of several major parts to come. In it I aim to demonstrate that the relationship between science and communication is far more profound and interesting than we usually consider. The process that most of us go through when we want to communicate well is crucial to clarifying thinking, and it offers tools that could be used much more strategically in posing new scientific questions and interpreting data. To say this as boldly and plainly as possible: learning to communicate well can improve your scientific work – not only because your papers will have better grammar, but because it requires a type of thinking that is extremely useful for science.

I do not say this lightly; I know how skeptically most scientists will greet it. That’s fine; I have waited a long time to write this piece because I needed to collect powerful examples to support it and put them together in a convincing way. If you are a scientist, I hope you will recognize aspects of your own thinking in this piece, and feel that it puts words to things that have become your daily habits. It may even surprise you by revealing “mechanisms” of thinking that you have never considered, yet use all the time.

It has been a long road to get here: 20 years of interacting daily with scientists at all levels of their careers, working together to find didactic approaches to a wide range of problems, and over 30 years as a teacher overall. Yet it wasn’t until a few years ago that I finally decided to confront some frustrating, content-related problems that constantly arise while helping my students and colleagues write, speak, or communicate in other ways about their work. I realized that we didn’t have a very good model to describe – and hopefully understand – a lot of the problems they encountered. That motivated four years of systematically analyzing these problems. I came to several conclusions:

  1. Science and communication are profoundly linked at a deeper level than we usually appreciate, which has significant implications for science education programs and the ways individuals, institutes and organisations communicate their work.
  2. The process of writing or preparing a talk is usually essential in clarifying and organising one’s own scientific thinking.
  3. This process requires a thoughtful reconsideration of the scientific models related to a project and can expose weaknesses or hidden assumptions that need to be reexamined.
  4. Every experiment represents a dialogue with models of many types and levels and the results may say something about all of them.
  5. Becoming aware of hidden connections in the structure of scientific thinking can powerfully affect our interpretation of results and generate important new questions.
  6. Communication offers an extensive set of tools which can be systematically applied to scientific problems and improve the quality of research.
  7. Scientific models are highly complex cognitive architectures that individuals construct in their minds and integrate into an “inner laboratory” where the “real science” takes place.
  8. The only way to examine these architectures is by externalizing them in writing, talks, images, or other modes of representation.
  9. Effectively speaking to the public or non-specialist audiences usually requires seeing familiar systems through new patterns. Doing it well requires a process that can clean up sloppy thinking, help us approach an old theme in a new way, generate new scientific questions and suggest alternative interpretations of experimental results.

I know, the last one’s the big one.

The text starts with a short theoretical introduction. After that I apply the principles it introduces to nine case studies taken from real students’ texts, papers, images and other examples of science communication.

This model is just a beginning, but it has some powerful implications for the way we train scientists and teach them to communicate. It strongly suggests that effective training in these skills should be an integral part of a scientific education early on and continue through a student’s career. But before people start changing their curricula, scientists need to have a convincing model that shows them why it is important, and the method of teaching must be effective. I think this is a start, but it will need to be tested in many formats and teaching environments to be validated and improved.

The model I propose is not comprehensive; I will add another major section on metaphors and patterns in scientific models and a third that specifically explores how these ideas can be practically translated into teaching. I am hoping to work with teachers who are interested in learning the theory and methodology, applying it to other types of science, and becoming multipliers. I think this is the only way to achieve the long-term goal of institutionalising this type of training and ensuring that it becomes a staple of university science curricula throughout Europe.

I need and would greatly appreciate feedback from all stakeholders in this process. Please be as critical as you like; the model has to be tough enough to take it. I will consider all of your comments very carefully, report on them here, and use them to develop better versions of this text, the model it presents, and the teaching that results from it.

 

Thanks in advance,

Russ Hodge

Please contact me at hodge@mdc-berlin.de if you would like to discuss this personally, if you are interested in teaching or training in these fields, in learning the methodology yourself, or if you would like to discuss setting up workshops or a program to implement these ideas.

Russ Hodge, March 2018

Read the article

I would like to thank all the scientists who have been such great teachers and given so generously of their time helping me over the past 20 years, the students who continue to inspire this project, the teachers who have been a continual inspiration, and my family, friends, and colleagues present and past for their support. 

I would like to particularly thank Prof. James Hartman of the University of Kansas, an extraordinary teacher, lifelong mentor and friend, for setting me on this path so many years ago and stimulating my ideas at exactly the right moments over the years;

Joseph Novak, father of Concept Mapping and one of the most brilliant educators I have ever met, who in a single week at Cold Spring Harbor completely changed my views of the goals of teaching and the methods needed to achieve them;

Jochen Wittbrodt and the COS department at the University of Heidelberg, Gareth Griffiths at the University of Oslo, and Thoralf Niendorf at the MDC for being constantly supportive and serving as the guinea pigs in this crazy endeavour.


A new model of the profound relationship between science and communication

One reason the term “science communication” has broadened to include so many activities is that research is leaping across the boundaries of disciplines and into our daily lives more quickly and profoundly than ever before. Without a basic understanding of scientific results and the methods by which they are obtained, people can’t be expected to digest complex information about their health or the global impact of their lifestyles and respond in reasonable ways. This has stimulated diverse efforts by many types of communicators to broaden and raise the level of scientific literacy in society as a whole. The pace of science has also created challenges for scientists themselves: massive amounts of data that can only be understood by teaching computers to cope with them, excruciatingly detailed models, and problems that can only be solved by transcending the boundaries of classical disciplines – whose practitioners come from different backgrounds and speak different languages, both literally and figuratively.

Many well-meaning efforts aimed at explaining the significance of a piece of research – or the aims of science as a whole – somehow fail. That’s true at the interface of science and wider sectors of society, but forms of the general problem are also common within research communities, where communication is fundamental to daily practice. Good communication skills boost careers and the progress of a field. Failing to help scientists develop them, I will argue, affects not only their careers but also the quality of their research. This claim comes from working in the field a long time and witnessing countless examples of excellent scientists who are superb at explaining their work to very diverse audiences. Is there a connection? You don’t truly understand something until you can explain it to someone else; does this old adage hold true at the highest levels of research and communication? If so, can you make people better scientists by making them better communicators? A few years ago I decided to try to find out.

A meaningful approach to answering these questions would have to encompass both theory and practice; it would require a thorough understanding and analysis not only of the strategies people were using to communicate, but of the content they were trying to get across. I had access to plenty of examples: the scientists I encountered every day, the difficulties I faced myself in writing about their work, and the hundreds of students I had tried to help, over the years, to write and present their science to many types of audiences. As a general approach I stole a page from the handbook of the early fly geneticists, who uncovered the functions of hundreds of genes by studying how mutations disrupted biological systems. Maybe problems in communication could be used the same way: maybe they could show how things ought to work.

Over several years I followed this strategy in studying communication problems and funneling much of what I learned back into the courses I was teaching. The result was a steady but dramatic change in my understanding of the relationship between communication and science. I believe that these two fields of effort are connected at a profound level that is incompletely understood and rarely explicitly discussed or taught.

This project offers a new model of that relationship which attempts to connect how scientists communicate their work – effectively or not – to deeper underlying aspects of the way they think. It shows how many communication problems stem from chaos in the laboratory: not the physical benches where scientists spend their days, but the mental laboratory they are constantly constructing and rebuilding as they learn science.

It’s in this inner laboratory that real science happens, and understanding this gives communication a fundamental role: it is a means of exposing, exploring, and manipulating the cognitive models that give every scientific question and every piece of data its meaning. Disorder in the mental laboratory almost always leads to chaos in communication, and the act of communicating science offers ways not only to detect it, but also to straighten things out. In fact, it’s often the only way to even notice that the disorder is there. Our minds make assumptions and carry out logical jumps we aren’t aware of; until they are articulated aloud, our innermost beliefs and convictions are prey to influences that lie outside of science. A scientist’s examination of any system – even before a first encounter with it – is already shaped by experiences of other systems, expectations, and models built long ago; the recognition that this generates bias, and can even reach into data in ways that reconfigure it, is the reason double-blind studies are so important. By putting something on paper, scientists can carry out a more careful, analytical scrutiny of their assumptions and models. To ask “Is this conclusion well founded?” or “Are other interpretations possible?” one must first see the whole train of thought; then it can be broken down and mercilessly queried, step by step, and its weak points discerned.

The process of communicating science thus externalises thought to permit a self-critical scrutiny that may otherwise be impossible, or at least extremely difficult. Inevitably one becomes aware of gaps that had been invisible. It allows a person, at least to some extent, to look at his or her own ideas more the way another reader would. This skill can be trained, and it is the first step toward developing distance from a set of ideas – and even toward adopting the perspective of a potential audience. That process not only improves the quality of a researcher’s communication – it can also affect the work itself. Sometimes the only thing necessary to discover fascinating new questions and develop better models is to notice the structure of a system in a text or diagram.

Most of the models in today’s science are so complex that they can’t even be thought about clearly without some form of representation – in language, images, or mathematical formulae. Papers and talks and other communicative acts open this complexity to inspection, analysis, discussion, criticism, and correction from the community. Trying to do science without communicating it is like trying to play chess – or teach someone else to play – without a board. For those who aren’t geniuses, a physical board offers a playing field to try things out, move the components around, and probe new strategies. To become a good scientist a person needs to look at many, many games, recorded in the literature, and extract the patterns and rules that lead to success.

Today’s students are constantly flooded with massive amounts of information which they are expected to arrange in their mental laboratories in a certain way. The hypotheses they frame, the experiments they design, and the way they interpret results are manifestations – symptoms – of the architecture they have built in their heads. But the only way to catch a real glimpse of this architecture, and measure their success at assembling it, is by watching how they put their work into the larger, logical framework of a text or talk. Explaining their science to non-specialists requires stepping farther back, seeing the more basic and generic patterns that underlie models, and trying to capture those patterns using tools such as metaphors.

That’s an important process because the inner mental laboratory of science is a metaphorical one as well. When a scientist frames a hypothesis regarding a specific problem – say, the behaviour or structure of a molecule – the form of the question is determined by the concept of a molecule, and what we think others think about it, rather than the molecule itself. The simplest things we think about are highly complex models and they are intermingled in a messy knot of other concepts, abstractions, and many types of knowledge that all come to bear on how clearly we are thinking.

So I am convinced that it is no accident that the best scientists I know have a very good understanding of their own thought processes, as applied to science. Often they have arrived at this point intuitively, divining rules and models through an intense study of the games going on around them. There are many parallels to learning a language: small children construct models that allow them to produce grammatical sentences by taking in and imitating the sounds around them, attaching those sounds to things in contexts that have meaning for them, and testing them against the practices of others. What ultimately comes out is a compromise between the things they want to talk about, social contracts about the meaning of words and sentence structures, genre-like expectations about what is likely to be said when and where, and fundamental aspects of the biology of our brains – the capacity of short-term memory largely determines how many things we can think about and process at once. That influences how complex a grammar can be, and it also determines how much of a model can be processed without an external reference such as a text or diagram.

The rules by which an adult learns a new language are different from those for a child, and this means that teaching must do more than deliver single facts or pieces of evidence that we expect non-native speakers to assemble properly. People come to science after a long process of intellectual development in which many concepts and expectations are already fixed, which means that moving into an artificial system of scientific models resembles this second type of language learning. Teachers usually take advantage of their students’ intelligence by presenting them with models of sentences and methods of producing new ones for the real communicative contexts that give them meaning. The same is true for research, and looking at it this way has profound implications for how we teach science and how we teach people to communicate it. I think these efforts are most likely to succeed with a better understanding of the complex rules by which models give scientific ideas their meaning, an understanding of the cognitive nature of the models themselves, and a search for methods aimed at resolving these parts of the “communication problem.”

* * * * *

Some of my colleagues and other professionals in the field of science communication might be surprised that this enterprise doesn’t start with a discussion of the issues we usually confront and talk about the most – such as the fact that people who have something to say about science and their audiences often come together with very different agendas. Their knowledge and interests often diverge widely. Dialogues begun as a way of generating mutual understanding sometimes lead to even greater misunderstandings, and in the worst case achieve exactly the opposite of what was intended. Audiences sometimes leave “popular science talks” thinking, “Science is so hard I’ll never understand any of it,” “Why can’t scientists ever give me a straight answer to a question?” or even, “They’re trying to hide something from me.”

Miscommunication is often the result of getting off on the wrong foot from the very beginning: a failure to consider exactly what you hope to communicate, which has to follow from a rational decision about what it is possible and desirable to achieve with a specific audience, and what you expect them to do with the message. The usual result of this failure is a mis-match: a message doesn’t resonate because it hasn’t taken into account an audience’s interests, needs, or motivation for entering into a dialogue in the first place.

These situations and less severe symptoms of poor communication are deeply connected to the cognitive models by which we navigate science and nearly everything else in our lives. They constantly arise in teaching because most of the students I deal with have never been introduced to very basic principles of functional communication, where success depends on a good understanding of the message one wishes to share, the expectations and knowledge of the target audience, and the modes and genres that are available to deliver it. The quickest path to a communicative breakdown is a mis-match between any of these things.

My experience is that a meaning-based approach to teaching communication – which in science requires thinking about the connection between specific questions, results, and models – is extremely effective at solving these more fundamental problems. In subsequent entries on this project I will use examples to explore the details of this model of science communication and show how it can be translated into a didactic approach. In science, the first step toward solving a problem is usually to articulate a question very clearly. The same is true in teaching: to help a student acquire skills, we should first grasp what they need to learn. Communication begins with the construction of meaning, and the better we understand that process, the better we will be able to teach researchers to explain what they mean – no matter whom they need to address.

Russ Hodge, Sept. 2017