Why fallible expertise trumps armchair science—a Q&A with sociologist of science Harry Collins
By JR Minkel
If you take scientists at their word, human-induced climate change is well underway, evolution accounts for the diversity of life on Earth and vaccines do not cause autism. But the collective expertise of thousands of researchers barely registers with global warming skeptics, creationist movie producers and distrustful parents. Why is scientific authority under fire from so many corners? Sociologist Harry Collins thinks part of the answer lies in a misunderstanding of expertise itself. Like Jane Goodall living among the chimps, Collins, a professor at Cardiff University in Wales, has spent 30 years observing physicists who study gravitational wave detection—the search for faint ripples in the fabric of spacetime. He has learned the hard way about the work that goes into acquiring specialized scientific knowledge. In a recent book, Rethinking Expertise, he says that what bridges the gap—and what keeps science working—is something called "interactional expertise." Collins spoke recently with ScientificAmerican.com about his view of expertise; what follows is an edited transcript of that interview.
How did we get to the point where scientific authority is so easily challenged?
The high point of the authority of science was perhaps the 1950s. In those days one would see on popular television programs a scientist in a white coat with a license to speak authoritatively on almost any subject to do with science—and sometimes on subjects outside of science. But things go wrong in the progress of science and technology. If you see the space shuttle crashing, you can see that these guys in the white coats don't always get it right.
When you discover the jagged edges of science, you start to think, wait a minute—maybe scientists' views aren't quite as immaculate as we thought they were. Maybe ordinary people's views can weigh a little more. And I think there's some truth to this, but not as much as some of my colleagues think. Having studied esoteric sciences from the outside, I know that ordinary people have no chance of grasping the details of them.
What's wrong with ordinary people weighing in on scientific subjects?
It is easy to imagine all sorts of horror stories if we abandon the idea that there are some people who know what they are talking about and some who don't. Most scientific disputes that concern the public are at the cutting edge—the place where things are not completely certain. Examples are the safety of vaccines, the true importance of global warming, the effects of farming genetically modified food crops, and so forth.
Even now, in the U.K., the relatively dangerous disease of measles is becoming endemic as a result of a widespread consumer revolt against the MMR vaccine about 10 years ago. Parents believe that even though doctors assure them that vaccines are safe, those doctors may be wrong. Therefore, the parents think they are entitled to throw their own judgment into the mix. Quite a few social scientists are pushing this trend hard.
Why should the average person acknowledge that scientists might know better than they do?
It is possible to make an argument from the common sense idea that scientists know what they're talking about because they've spent much more time looking at the areas of the natural sciences that we're interested in. Normally, if somebody's spent a lot of time in an area, you'd tend to take their opinion as more valuable.
We believe that you can work out whether someone has the right scientific expertise and experience to make some sensible contribution to scientific debates. It doesn't mean they're right. What you have to do is not sort out the people who are right and wrong; what you have to sort is the people who can make sensible contributions from those who can't. Because once you stop doing that, things go horribly wrong.
That seems like it cuts both ways. Are evolutionary biologists like Richard Dawkins fanning the flames in the way that they engage creationists?
Once scientists move outside their scientific experience, they become like laypeople. I'm not a religious person, but if I want to talk religion, it won't be with a scientist; it will be with someone who understands theology (who might be either an atheist or a believer). I believe people like Dawkins give atheism a bad name because their arguments are so crude and unsubtle. They step outside their narrow competences when they produce these arguments.
In our book we too criticize creationism's pretensions to be a science, but we don't treat it as a trivial problem. Our critiques of creationism are: (a) that it stops scientific progress in its tracks by answering questions in a way that closes off further research; and (b) that there is no real attempt to meld the approach with the existing methods of science. We know that the creationists say this is not true, but their hypotheses relate to books of obscure origin or to faith rather than to observation.
How do you distinguish the people who can and can't contribute to a specialized field?
The key to the whole thing is whether people have had access to the tacit knowledge of an esoteric area—tacit knowledge is know-how that you can't express in words. The standard example is knowing how to ride a bike. My view as a sociologist is that expertise is located in more or less specialized social groups. If you want to know what counts as secure knowledge in a field like gravitational wave detection, you have to become part of the social group. Being immersed in the discourse of the specialists is the only way to keep up with what is at the cutting edge.
Is this where interactional expertise comes into play?
Interactional expertise is one of the things that broadens the scope of who can contribute. It's a little bit wider than the old "people in the white coats" of the 1950s, but what it's not is everybody. (Within science, lots of people have interactional expertise, because science wouldn't run without it.)
You did experiments to test your theory of expertise. What did you find?
The original version we did was with color-blind people. What we were attempting to demonstrate is something we call the strong interactional hypothesis: If you have deeply immersed yourself in the talk of an esoteric group—but not immersed yourself in any way in the practices of that group—you will be indistinguishable from somebody who has immersed themselves in both the talk and the practice, in a test that involves only talk.
If that's the case, then you're going to speak as fluently as someone who has been engaged in the practices. And if you can speak as fluently, then you're indistinguishable from an expert. It's what I like to call "walking the talk." You still can't do the stuff, but you can make judgments, inferences and so on, which are on a par.
We picked color-blind people because they've spent their whole lives immersed in a community talking about color. So we thought color-blind people should be indistinguishable from color-perceivers when asked questions by a color-perceiver who knew what was going on. And we demonstrated that that was in fact the case. Now we're planning to do another imitation test on the congenitally blind to see if they can perform as well as the color-blind.
You also found that gravitational wave physicists had a hard time distinguishing you from one of their own in a written test.
I thought it was my duty to put myself through this test and see if anybody could tell. I'm not claiming my interactional expertise is really good enough to pass for a physicist, so I had to put brackets around it. There were no mathematical questions allowed. But the questions that were asked were pretty damn difficult ones, which I'd never encountered before and which really gave me a fright. And it turned out I could work out the answers.
You've spent the past 30 years studying gravitational wave physicists. What do you like about them?
They're my ideal kind of academic. They're doing a slightly crazy, almost impossible project, and they're doing it for purely academic reasons with no economic payoff. I consider myself an academic who's made the bargain that I want to have an interesting life, and I'm prepared to have a little less status and a little less money as a result.