Outreach

Outreach and the popularization of science are fundamentally good things. I've always believed that as a scientist working on tax money, I am accountable to the general public: if a random person asks me a science question, I have to try hard to provide an understandable answer, and if asked what I do, I have to explain the outlines of it.

Likewise, bringing science into schools and newspapers - letting the general public see what scientists do, explaining what kind of questions they work on and how they work - is crucial: science takes a lot of money, and the public deserves to see why we think that kind of money is needed and what we do with it.

This can take many forms - science blogs, programs at high schools, organized lab visits, popular science books or newspaper articles - but as with all good things, there comes a point where problems occur.

A question of spin

Suppose you are a scientist, and, looking at some recently published bit of data, you suddenly have an idea and make a quick, rough calculation. The result turns out to be spectacular - you see a large effect, something that, if true, would indicate the discovery of a new fundamental force in nature.

The scientific method urges that the next step is to attack your result hard and see whether it breaks: test whether it depends on your assumptions in an unwanted way, check whether it is a simple mistake, see whether the effect becomes much smaller in a detailed calculation, this kind of thing.

But imagine you are running a science blog. Nothing prevents you from writing 'I just made this calculation, and the result may indicate that we have discovered a new fundamental force.' There's no lie anywhere in that. A blog is also not the same as a peer-reviewed article - you don't risk that much academically by being wrong. Yet you could just as well write 'the result may indicate that I made a mistake' or 'the result may indicate that my assumptions were too bold.' This would be equally true and meaningful (and in fact more likely) - and yet nobody writes this kind of thing.

It's obvious why - it's not news, it's not exciting. But the fact of the matter is that there is no science news at this point, just idle speculation about what the implications might be - and yet it will easily be perceived and transmitted as science news. How many newspaper articles can you recall that end with a formula like 'scientists believe this may lead to a novel cancer therapy' or 'a novel way to combat obesity'? And how many real breakthroughs in cancer therapy and weight reduction have there been?

Speculating about what your results might imply is cheap - but in public perception it often comes close to an actual claim about what is the case. Doing this hence leads into a grey zone as far as scientific integrity is concerned: there is no outright lie or deception involved, but one is readily willing to let readers come away with a wrong impression. Contrast this with Feynman's picture of science as a toolkit developed to keep us from fooling ourselves and others.

Now suppose you have verified that you didn't make an outright mistake and bring your results to a conference, where they are enthusiastically celebrated. And there, prompted by a remark in a discussion, you suddenly realize that your effect actually hinges on an approximation you have made and will be much reduced once you drop that approximation.

Scientific integrity would argue that you make this public immediately. Yet you can get by just as well by adding disclaimers like 'if these results ultimately turn out to be correct' to your statements and presentations and enjoying the fame while it lasts. Even if someone carries out a detailed investigation and can't reproduce your effect, there's the possibility that he did something wrong, not you. Instinctively, people will believe the first result they've heard until very compelling evidence emerges later. And in the end, you can always point to your disclaimers and argue that you yourself always said it all needed to be verified by a more detailed investigation. The calculation which proves you wrong won't even be perceived as original work, just as a detailed re-doing of your original idea. So the risk, even of being proven wrong, is modest.

Now, is this still a grey zone, or are we straying outside scientific integrity? It's questions like these that young researchers have to struggle with in their presentations and public appearances: how to deal with vague possibilities of what results might imply? How to deal with things you yourself start to suspect are wrong? And more often than not, the answer scientific integrity gives turns out to be the most harmful for a career, whereas grey zones and keeping up appearances tend to earn reputation.

Inside academia, there are at least some checks and balances. A journal reviewer might ask you to do additional verification tests; workshop participants might ask critical questions. But in outreach, very few of these checks and balances remain in place. If someone decides to play with expectations and overemphasize vague possibilities, hardly anyone will call him out on it.

Expectation management

Why is that a problem at all? After all, the science is done in peer-reviewed journals, not in blogs and popular science magazines.

The potential problem starts to become visible when you consider that the number of readers is almost inversely proportional to the level of detail in which the research is presented. Popular science articles probably have the largest readership - yet a science journalist does not usually take months to work through a couple of detailed peer-reviewed papers before writing one; he consults press releases and blogs.

Likewise, a general physics audience may not read through detailed studies but rather focus on non-specialized peer-reviewed journals. But, as we've seen before, these are biased towards schematic models, first ideas and speculations about what might be the case, rather than the final answers of detailed investigations into what is the case.

Thus, the impression the vast majority of readers gets of a field (i.e. from reading blogs or popular science articles) is actually very different from the hard science the field really produces - it tends to be much more speculative and spectacular than the actual science done in the field. And this can lead to mismatches in expectations.

I experienced this once when giving a plenary overview talk representing URHIC (ultrarelativistic heavy-ion collision) physics at a particle physics conference. In the question session at the end, I was asked about strong parity violation. At the time, this was a conjectured phenomenon, a possibility of what might be seen in the data - really believed in by only a small minority in the field, amid discussions about confounding factors and whether the assumptions made about the magnetic field strength were reasonable. Yet when I answered the question accordingly, I could sense from the thrust of the follow-up questions that the audience was not happy. They had expected something on novel physics, something spectacular, not a discussion of how other effects can mock up the same observation.

This becomes a real problem in grant applications when the reviewers are not from the field. Suppose a reviewer is primed by blog speculations about the possible discovery of a new fundamental force in nature - and gets a grant application proposing to improve the background subtraction in an algorithm. Or he expects a field occupied with strong parity violation - and sees a research proposal to refine a photon spectrum measurement. Naturally he will assume that these researchers ignore the really interesting phenomena in their field and are hence not competitive. Yet inside the field, these topics may be exactly what is high on the agenda, and what is known from blog entries and popular science articles may just represent a fringe development.

Breaking news

Remember the announcement of the discovery of the Higgs boson by CERN? There was a huge press conference, and breaking news around the globe. (CERN was actually very careful about the precise wording of what had been discovered - yet almost everyone remembers this as the announcement of the Higgs discovery.)

Moments such as this do a tremendous job of bringing science into the public focus. Yet they are not without side effects on science itself.

One such problem is that breaking news and release dates are actually alien to the way the scientific method works. That works more like a phase transition: people take a look at the evidence, do their own tests in their own time and eventually make up their minds about a particular claim, and once enough people do that, the paradigm changes. What a statement like 'science knows that...' often really means is 'more than 95% of the scientists working in the relevant field believe that...'. A public declaration that some effect is now established creates tremendous pressure to justify yourself if you happen to hold a dissenting opinion, because for people not familiar with the details the case is now closed, and you may appear the odd person out trying to re-open it (well, you may be - depending on the actual numbers). Basically, such an event discourages critical investigation - unless the claim is perceived as too bold.

This happened with the OPERA measurement of superluminal neutrinos. It attracted tremendous attention, but a large fraction of the physics community remained skeptical - which prevented a full 'case closed' perception among the general public. Still, the speculative possibilities were over-advertised to the general public while the disclaimers about experimental issues were not. In the end, the public saw all claims of novel physics retracted, which, after the initial news items, must have felt like a sore letdown, feeding the perception that science can't back up bold claims. In this case, I think that is unfortunately true - all the detailed investigations of the experiment, which ultimately led to the discovery of a loose GPS receiver connection, should have been done before any result was announced.

This is not to advocate secrecy - but things work out best for science if scientists announce results when they are ready and thoroughly tested, not before. Science cannot be done well if there is public expectation or any other sort of pressure to arrive at a certain result; for science to work well, all possibilities must remain open, in reality as in the mind of the investigator. The need to produce spectacular breaking news in real time to keep the public entertained simply interferes with this basic requirement.

The case of the OPERA experiment highlights another problem: the normal scientific debate about whether ideas are correct or not - the debate needed to sort good ideas from bad ones - is frequently perceived from the outside as the sign of a field in trouble: rather than delivering a clear message, different people in the field claim different things. Since a bad public perception of a field harms future funding opportunities, especially if other fields of science are not seen to have similar controversies, this often leads to appeals to close ranks rather than have the controversy out at major conferences. Of course, not raising possible objections to an idea, and refusing to discuss them, does not add up to good science.

On errata

Researchers occasionally do get a result wrong - be it because of a simple mistake in a calculation or because of a calibration error in a measurement. The idea is that these things are caught early on - first while the co-authors of a publication review it, later during the peer review process - but some errors still slip into publications.

The scientific code of conduct requires that if such errors are discovered later, an erratum be published in the same place where the original publication appeared, so that the scientific community is aware that the original result no longer stands. This is of some importance, as subsequent research should not build upon a wrong result - theoretical modeling, for instance, should not try to explain what is in fact a genuine error in a measurement.

The standards for correcting results presented in outreach channels are, however, largely up to the publisher. A science journalist may or may not become aware of an erratum to research he has reported on, and that may or may not lead to a public correction.

As a result, it is not uncommon for the general public to believe a claim that was disproven inside the scientific community a while ago. This does not happen for spectacular, important results (like the superluminal neutrino measurement discussed above), where the news value of a correction is high - but it happens fairly frequently for less spectacular results. And again, in sum this distorts the public image of what the current state of the art in science really is.

Continue this essay with Conclusions

