What a thing IS encodes what it ISN’T
Note: This piece follows up on my other articles on “ghosts” – an analysis of diverse factors which disrupt science communication. To read more, see:
“Ghosts of omission” are a type I described in a talk recently given at the Jackson Laboratory in Maine (see the link above). I discovered this type during a retreat with the Niendorf group from the MDC, where we were doing an exercise on the difference between verbal descriptions of things and images. Each member of the group had to go into the kitchen, choose an object, then come back and describe it in purely physical, spatial terms, without naming it or stating its function. The listeners had to draw it.
One of the postdocs chose to describe this:
About half of the participants drew something that clearly corresponded to this object. But interestingly, the other half of the group drew one of these:
There are times when the “resolution” of language doesn’t suffice to disambiguate two things that are similar. Think of verbal descriptions of faces, for example, which could usually apply to lots of different individuals – it’s hard for most people to describe a face well enough for a police artist, even when the face is being drawn right in front of them.
In this case that isn’t really the problem. It would be straightforward to describe the “egg whisk” well enough to distinguish it from the beaters of a mixer. What happened, though, is that the person giving the description just didn’t think about beaters at the time.
This means that confusion or ambiguity can arise because, when describing something, the speaker or writer doesn’t know about – or simply doesn’t think about – another thing it might be confused with. In other words, the way we think of a thing encodes not only what it is – what we’d probably call its defining features – but also the features that distinguish it from other things that resemble it along multiple dimensions.
This concept surely has profound implications for fields like information and set theory, and across the spectrum of linguistics. It’s equally crucial in the types of concepts and models created by biologists. I’ll just cite two examples here: noncoding RNAs and immune cells.
The completion of the human genome and the rapid development of sequencing technologies revealed that our DNA encodes not only messenger RNAs bearing the recipes for proteins, but a wide range of other types of RNAs. Scientists are still exploring the functions of these molecules, and new types – with different functions – are being discovered all the time. Initially scientists grouped them into classes based largely on the length of the molecules – into categories such as microRNAs, or long noncoding RNAs – and generally expected that these sizes would be associated with specific functions. The field has now exploded with the characterization of dozens of types, whose functions do not necessarily correlate cleanly with an RNA’s length. In principle, the discovery of each new type is like the discovery of a new kitchen instrument that might shift the defining and distinguishing features of existing utensils.
But it’s not always the case that the discovery of a new element in a system prompts scientists to revisit and revise existing classifications. The immune system is another example: new types of cells continue to be discovered there. Researchers with a profound understanding of this incredibly complex system know that new types can force a revision of the roles and functions of the players already known. This awareness, however, can take a while to seep into the broader community. And there’s no guarantee that the patterns encoded in old ways of thinking about a type of RNA, or an immune cell, will ever be completely stripped away.
This problem is inherent to biology because new instruments – or upping the resolution of an old method – continually expose new features and elements of systems. At first, these components are almost always seen from the perspective of models that have done without them. Eventually the cognitive shifts spread and are better integrated. But we need to be aware that our models encode old ghosts that are never completely broken down and reconfigured.
To close I’d like to show another way in which “ghosts of omission” exert an extremely powerful effect on our thinking. In an earlier version of the “Jackson talk” I used to include an example of a text (slightly edited) by a famous humorist. We read the text and it usually got a laugh:
Tom and I saw Tom’s older brother George kissing his girlfriend on a couch. Tom and I looked at each other with big grins. If faces had been meant to kiss each other, they would not have been given noses.
Suddenly the scene turned bizarre because we saw that the girl had her tongue in George’s mouth and George’s tongue was misplaced, too.
What could that girl’s tongue possibly be doing in George’s mouth? Tom and I felt sick. After about a minute of observation, we went out into the backyard.
“That’s it!” I told Tom. “I’m really disgusted with girls now. I’m never gonna hit another one. Or even hit one with a jelly bean… Let’s make a pact. The first girl who ever puts her tongue in our mouth, we give it right back to her.”
At that point I identified the author: Bill Cosby.
If you know anything of Cosby’s subsequent legal troubles and go back and read the text, what was simply amusing now becomes somewhat “creepy”. Knowing a single fact changes the way we process the language and envision the roles of the characters. I can’t define creepiness in cognitive terms… But the change that occurs between the two readings of the text is the result of ghosts of omission. It’s another example of the profound effects of the “dark matter” of ghosts.