Science and the media can be strange bedfellows, and reporting on science can be a minefield. The AusSMC is here to help journalists get the science right – whether the story is a press release claiming the latest medical breakthrough or coverage of a natural disaster.

The following guide to reporting science has been adapted from the NZ Science Media Centre's 'Desktop Guide for Covering Science!'. The guide can also be viewed as an ebook.

A new website – www.scijourno.com.au – is also worth checking out; it contains six modules of study to help working journalists and media students.

Covering science – what every journalist should know

Balance in science reporting

When is research ready for primetime?

Evaluating studies and trials

Communicating statistics and risk responsibly

Peer-review – How does it work?

Scientists as sources

Disaster on deadline

Getting access to research

Dealing with scientific uncertainty

Top 10 tips for covering science

Covering science – what every journalist should know

Shrewd advice from former New York Times and Washington Post science reporter Boyce Rensberger:

  • Science demands evidence, and some forms of evidence are worth more than others. A scientist’s authority should command attention but, in the absence of evidence, not belief.
  • There is no one scientific method, but all good science includes elaborate procedures to discover and avoid biases that might mislead.
  • Uncertainty is a sign of honest science and reveals a need for further research before reaching a conclusion. Cutting edge science is highly uncertain and often flat-out wrong.
  • The pace of science, despite the hype, is usually slow, not fast. Breakthroughs are never the result of one experiment.
  • Balanced coverage of science does not mean giving equal weight to both sides of an argument. It means apportioning weight according to the balance of evidence.
  • Virtually all new technologies pose risks along with benefits. Thus “safe” and “effective”, whether applied to drugs or new devices or processes, are always relative terms. It is irrational to ask whether something is safe or not. Nothing is 100 percent safe. Policy decisions involving science must balance risks and benefits.
  • Journalists and scientists espouse similar goals. Both seek truth and want to make it known. Both devote considerable energy to guard against being misled. Both observe a discipline of verifying information. Both insist that society allow them freedom to pursue investigations wherever they lead. Neither requires licensure or approval of an outside authority to practice its craft.
  • News organisations usually invest too much importance in a scientific development and not nearly enough in the broader trends.

Boyce Rensberger has been a science writer and science editor for nearly 40 years, beginning in 1966 at The Detroit Free Press and moving on to the New York Times (1971-79), PBS Science, Science and the Washington Post. He served as the director of the Knight Science Journalism Fellowships and has written four science books, most recently Life Itself: Exploring the Realm of the Living Cell.

This article was originally published by the Nieman Foundation for Journalism at Harvard and is re-published with kind permission.

—————-
Balance in science reporting

“Giving both sides their due” is a basic principle of newsgathering, particularly when covering political and social debates. But good reporting on science issues requires more than a simple “he said/she said” approach to balance.

In science, claims must be backed up by evidence. Understanding the context for a scientific assertion or research study is crucial to giving your audience a balanced view, and allowing them to assess the truth of competing claims.

As a general rule, scientists hold themselves to high standards of transparency and subject each other’s work to rigorous critiques. The advancement of scientific knowledge is a collaborative effort, one where openness and exchange of ideas are essential. But ideas are not enough – scientists work by accumulating evidence to support, refine or overturn our current understanding of the world around us.

The balance of evidence

On controversial issues, it’s important to weigh up the relative merits of different scientific views before presenting them side by side. Focus on what the balance of evidence demonstrates. If a substantial majority of published research and scientists back a given perspective, make this explicit in your reporting.

Of course, figuring out how much weight to lend to different sides of a scientific debate often requires a great deal of specialised background knowledge. Look into what research has been done on the topic, and what major peer-reviewed assessments or papers have to say about it. Supplement what you can find out on your own by consulting trusted scientists uninvolved with the research in question.

In general, journalists should approach scientific claims that fall outside the mainstream with a healthy scepticism. Beware of cherry-picked, obscure or outdated research findings. A single study or two can present a distorted view of the facts when taken in isolation. The more dramatic the claim, the more cautious you should be.

Some things to consider when choosing your sources:

  • Does your expert have a scientific background that is relevant to the area they are weighing in on?
  • Do they have established credentials? An active research career? What is their reputation among fellow scientists?
  • Can you uncover any conflicts of interest or ties to an outside organisation that may unduly influence the expert’s views?

Bear in mind that there is often a diverse range of scientific opinion. By exploring several scientists’ views, you may uncover new angles that hold more interest than a simple pro- and anti- storyline.

The journalistic norm of balance has no corollary in the world of science. On the contrary, scientific theories and interpretations survive or perish depending upon…whether the results on which they’re based can be replicated by other scientists. When consensus builds, it is based on repeated testing and retesting of an idea.

– Chris Mooney

Blinded by Science: How ‘Balanced’ Coverage Lets the Scientific Fringe Hijack Reality

See also: the Knight Science Journalism Tracker

—————-

When is research ready for primetime?

Often the first time you hear about an interesting area of science is when a press release arrives in your inbox proclaiming the latest discovery or scientific breakthrough.

But how did the scientists get to this, the really newsworthy part? Understanding the scientific method will help you understand how scientific discoveries are made and determine what weight to give them in your news reports.

Scientific method

Scientists deal with uncertainty all the time because they are engaged in the pursuit of new knowledge. They advance their understanding literally through trial and error, relying on a series of techniques (see chart) to guide them along the way – the scientific method.

The scientific method involves collecting data through experimentation and observation to test a hypothesis – a proposed explanation, formulated from previous observations, that then needs to be tested. Testing a hypothesis can involve experimentation and observation, the result of which is measurable evidence that scientists can then attempt to reproduce using the same methods. The testing needs to be designed so that the results are objective, reducing the likelihood of a biased interpretation.

Scientists document everything, not just the results of their experiments but also the methodology they used, so that other scientists can try to replicate them. For the same reason, scientists place a lot of emphasis on disclosure of data, so it can be scrutinised by other researchers.

Uncertainty remains

After scrutinising their results, scientists will determine whether the evidence supports their hypothesis and then write up their findings. The answer, which may eventually be reported in the form of a scientific paper in a peer-reviewed journal, will rarely be conclusive.

—————-

Evaluating studies and trials

Key questions to ask an expert when evaluating research

  • How does this study compare with others that have come before?
  • How does it add to or contradict existing scientific views?
  • Correlation vs. causation – Did A actually cause B, or are A and B connected for reasons we don’t fully understand?
  • How large is this study? What was the sample size?
  • Was the study well designed?
  • Have the findings been replicated? Will they need to be to gain widespread acceptance?
  • Are the results compelling enough to recommend a change in our current behaviour/treatment/regulations?
  • What would be the effect of such changes versus keeping things as they are?

There are many ways scientists can investigate the world, including experimentation, description, comparison and modelling; in many cases, a study uses more than one of these.

Types of studies

Studies come in different kinds. A review looks at research that has already been carried out on a subject and identifies trends. There are also different ways of looking at groups of people – this is particularly important in matters of health.

Some methods of study are observational – generally retrospective in nature, they look at the effects of a risk factor (say, smoking) without influencing what happens. Cohort studies compare two groups of people, one of which has been exposed to a variable and one of which has not. Case control studies, on the other hand, take people known to have a pre-existing condition (the cases) and compare them with a group of people known not to have it (the controls) – a design often used to identify risk factors for disease.

Research methods

Other methods of research involve direct experimentation, where scientists introduce a variable to see what happens to their subjects. These subdivide into two categories. In randomised controlled trials, people who fit the criteria are assigned randomly to two groups, with one group receiving the intervention and the other not. Controlled clinical trials are similar, but people aren’t randomly assigned to groups, which can increase the risk of bias in the study, making its findings less reliable. Randomised controlled trials are still seen as the most reliable means of testing something. It’s also worth remembering that peer-reviewed research carries the most weight, with conference papers coming next. And always examine the affiliations of the authors of a published paper, to check for possible conflicts of interest.

 

—————-

Communicating statistics and risk responsibly

Comparing risks

It may be tempting to try to put a new risk in perspective by comparing it to something your audience is familiar with (road accidents, smoking a pack of cigarettes a day, etc). But be careful! When translating statistics and risk from one context to another, it’s all too easy to get things wrong. We examine a few common pitfalls.

Absolute risk vs. relative risk

Absolute risk refers to the naturally occurring frequency of an event. It gives an ordinary frame of reference that is easy to understand.

Example: Four out of every 1000 women will die of breast cancer in the next 10 years

Relative risk refers to a change in the level of risk. This kind of figure often sounds very impressive, and is frequently used in reports of drug trials or new treatments, but it has little meaning unless it is put into the correct context.

Example: This drug reduces a woman’s risk of dying from breast cancer by 25%

One of the most common confusions occurs when these two types of risk are mixed up. In the example above, the 25% decrease actually means that for every 1000 women taking the drug, three will die of breast cancer instead of four. In other words, this treatment could potentially save one life in 1000.

When the percentage is given in terms of a woman’s overall risk of dying from breast cancer, it means a reduction of 0.1%. This is because the risk of dying from breast cancer is relatively small to begin with, so even a large reduction in that risk does not equate to many lives saved.

Using the context of absolute risk (or getting an expert to provide this) is the best way to explain what a result will mean for your readers in their daily lives.
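The arithmetic behind the breast cancer example above can be sketched in a few lines of Python. The figures come from the example itself; the function name and layout are ours:

```python
def absolute_effect(baseline_per_1000, relative_risk_reduction):
    """Convert a relative risk reduction into absolute terms."""
    treated_per_1000 = baseline_per_1000 * (1 - relative_risk_reduction)
    lives_saved_per_1000 = baseline_per_1000 - treated_per_1000
    # Number needed to treat: how many people must take the drug to save one life.
    nnt = 1000 / lives_saved_per_1000
    return treated_per_1000, lives_saved_per_1000, nnt

# Baseline: 4 deaths per 1000 women; the drug cuts relative risk by 25%.
treated, saved, nnt = absolute_effect(4, 0.25)
print(f"Deaths per 1000 with drug: {treated:.0f}")  # 3
print(f"Lives saved per 1000:      {saved:.0f}")    # 1
print(f"Number needed to treat:    {nnt:.0f}")      # 1000
```

The same "25% reduction" headline thus translates into one life saved per 1000 women treated – the number readers actually need.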

Positive vs. negative frame

Pay attention to the way statistics are framed. While a 97% chance of survival and a 3% chance of dying may both be correct, they don’t always mean the same thing to the person listening.

Evidence shows that positive framing is more effective than negative framing in persuading people to take risky treatment options.

Single event probabilities

The chances of a single, undesirable event taking place can be easily confused with the day-to-day likelihood of things going awry.

Example: A psychiatrist prescribes a drug to his patients with the warning that they will have a “30% to 50% chance of developing a sexual problem” such as impotence or loss of sexual interest.

His patients understand this to mean 30 – 50% of their own sexual encounters will be problematic, and don’t want the drug.

But the psychiatrist actually means that of every 10 patients taking the drug, three to five will experience a sexual problem at some stage. Explaining it this way, he finds his patients are less concerned about the risk.
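The fix in the psychiatrist example is to restate the probability as a natural frequency ("x out of N patients"). A minimal sketch of that translation, with a function name and wording template of our own choosing:

```python
def as_natural_frequency(probability, out_of=10):
    """Express a per-patient probability as 'x out of N patients'."""
    # :g drops the trailing .0 so 3.0 prints as 3
    return f"{probability * out_of:g} out of {out_of} patients"

print(as_natural_frequency(0.3))  # 3 out of 10 patients
print(as_natural_frequency(0.5))  # 5 out of 10 patients
```

Phrased this way, it is clear the risk applies across patients, not across each patient's individual encounters.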

—————-

Peer-review – How does it work?

After experiments have run their course and results are in, scientists turn their focus to writing up and publishing their research in peer-reviewed journals.

It’s an extremely important part of the scientific process, as it means other scientists around the world can quickly learn from each other’s successes and failures, and also independently test the research for themselves to verify its accuracy. Publishing is also an important measure for many scientists of their output.

Before a paper can be published in a reputable journal, it must be peer-reviewed. In a process that can last months, the paper is sent out to several scientists working in the same field, who are best positioned to decide whether the study is well designed, the methodology is sound, and the conclusions drawn make sense. The reviewers are anonymous and unpaid.

Why is this step necessary? Because science is becoming increasingly complex, and no one person has all the knowledge necessary to evaluate the full range of research submitted for publication. Even in journals devoted to a single field, the plethora of subjects within it means that those best suited to judging the merit of a paper are the people working on something similar. Peer review is designed to provide an important check on the quality of research entering the public domain.

A paper submitted for peer review can have one of three outcomes: it can be accepted with no changes, sent back for revision, or rejected outright. If the latter occurs, the paper’s authors can always try to get it published in another journal and hope for a more favourable outcome.

This strategy of submitting rejected papers repeatedly ‘down the chain’ to ever less competitive journals is part of the reason that the quality of research papers in more obscure journals may be less robust than in the most highly sought after ones.

Hierarchy of peer-reviewed journals

Scientific journals are ranked according to various measures of their quality.

  • Prestigious, multidisciplinary journals (Nature, Science, etc.)
  • Discipline-specific journals with varying degrees of selectivity and specialisation
  • Wide assortment of less well-known journals that may be regional, narrow or unselective

Publication in top journals is incredibly competitive, while more obscure journals may struggle to get enough submissions to fill their pages. Some journals require researchers to pay for publication, while others rely on subscription fees. Impact factor is one way in which journals can be assessed: it is a measure of the frequency with which the “average article” in a journal has been cited in a particular year or period, and it can be used to provide a rough approximation of the prestige of the journals in which individuals have been published.
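The most widely quoted version is the two-year impact factor: citations received in a given year to articles the journal published in the two preceding years, divided by the number of articles published in those years. The figures below are invented purely for illustration:

```python
def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    """Two-year impact factor: average citations per recent article."""
    return citations_to_prev_two_years / articles_prev_two_years

# Hypothetical journal: 210 articles published in 2022-23
# attracted 630 citations during 2024.
print(impact_factor(630, 210))  # 3.0
```

An impact factor of 3 means the average recent article was cited three times in the year measured – a rough proxy for influence, not a verdict on any individual paper.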

—————-

Scientists as sources

Some tips on approaching and interviewing scientists

Cultivate your sources – Spend time talking to scientists when you’re not on deadline. Help them get to know and trust you, and understand how you work. If a researcher seems particularly approachable, see if they might be willing to help you get your head around a crucial bit of research or fact-check an assertion on short notice in future.

Make your deadline clear up front – Journalists tend to work to much tighter time frames than scientists are used to, and scientists may not instinctively give a media enquiry the highest priority on their long to-do lists. If you need a response within the next few hours or days, spell it out clearly (and show your appreciation if they manage to drop everything to accommodate you).

Use email – We’ve found that many scientists are virtually unreachable by phone but respond obsessively to emails. Scientists tend to travel frequently, and many juggle appointments at multiple research institutions or are regularly away from their offices for teaching commitments or lab / field work. If you’re under time pressure, follow your email up with a phone call.

Head off over-preparation – Scientists will often think they need to spend hours preparing background research and in-depth facts and figures you’ll never use. Give your scientist a rough idea of the outcome you are shooting for, particularly if you have strict constraints on your word count or time limit (e.g. are you producing a seven-minute segment? 300 words? A 30-second bulletin item?). It may also pay to make sure you’re on the same page regarding what territory you’ll be covering in the interview.

Don’t be intimidated – If you’re not following something, or the scientist starts slipping into jargon, don’t hesitate to interrupt or ask them to explain in simpler terms. It’s often hard for scientists to judge exactly how much background explanation they should provide.

 

—————-
Disaster on deadline

Newsrooms have well-tested procedures for covering natural disasters, working with emergency services to get accurate information out to the public quickly. But hundreds of scientists around the country are also involved in monitoring natural hazards, managing disasters and helping prepare us for when the big one hits.

Scientists can provide invaluable expertise and background not only when a natural disaster strikes here or elsewhere in the world, but when you need information on how well prepared we are for such an event.

—————-

Getting access to research

Science news is largely driven by the publishing cycle of the major peer-reviewed scientific journals. So as a journalist covering the science beat, getting access to science papers ahead of time is crucial.

Staying in close contact with key scientists, and asking regularly about forthcoming research is a good way to find out what is coming up, but this approach can be time-consuming and scattershot.

Luckily, most science journals provide journalists free, early access to scientific papers under embargo. You’ll generally be asked by journal publishers to prove your credentials, which may involve providing samples of your work, references or a letter of introduction from your editor.

Embargoes: break them at your peril!

The major scientific journals usually release papers under embargo. This means that news reports on the paper, no matter how brief, cannot be published or go to air until after the date specified on the paper or accompanying abstract or news release.

Embargoes are designed to give journalists time (typically a few days), to digest the research, conduct interviews and source graphics. Be aware – breaking an embargo can result in your organisation being banned from receiving future press releases.

Many embargoes are issued in international time zones, so check your local time conversion.

—————-

Dealing with scientific uncertainty

Uncertainty is part of the process: Science cannot prove a negative – no matter how many carefully designed experiments they’ve already run, scientists will never be able to say they’re “100% certain” that something is safe. That’s because they are always open to the possibility that new research tomorrow could turn everything they know on its head. This flexibility of approach is actually one of science’s great strengths.

Enough is enough: That said, when the studies start to stack up, most scientists will agree that they’ve done everything in their power to rule out a given risk or association. Accept a “high confidence” level as the scientist’s most strongly worded statement on the subject, and don’t vilify scientists who won’t categorically rule out a given possibility.

Experts may focus on the gaps in knowledge: Be aware that scientists may spend less time talking about what they do know (which they assume everyone probably knows already), than talking about what they don’t know. This is because the unknown is an area of intense interest and potential discovery for scientists. Overall, this can give a skewed view of how important the gaps in knowledge actually are.

Qualifiers and caveats are essential: Editors and sub-editors hate them, but qualifiers indicate the level of scientific uncertainty and are therefore more important in science-related stories than in your average general story. If scientists are uncertain about their results, you need to report that accurately. Leave notes for the sub-editors when you file your story to help avoid qualifiers and caveats being cut and inappropriate headlines being created for your stories.

Avoid single-source stories: It can be tempting to spin a yarn from a well-crafted press release and the one scientist it quotes, but you need to get views from other scientists, particularly when dealing with uncertainty in results. Scientists are often too close to their work to accurately say how much weight their findings should be given. Check their claims against the peer-reviewed literature and their peers.

The flipside – don’t exaggerate uncertainty: Sometimes media reports give the impression that scientists can’t even agree on the basics. But as you’ve already read in this guide, science is a process and the big picture changes as new studies are completed and scientists add to the body of work that came before them. Contrasting scientific views should be noted but not beaten up to suggest uncertainty reigns supreme.

Be careful about “duelling experts”: There’s nothing as quote-worthy as a good argument between experts. But two opposing talking heads don’t necessarily mean there’s a rift in the scientific community. Be careful you are not making the science out to be less certain than it actually is by playing up disagreement between scientists. Go to scientific bodies, societies and associations for a big-picture view.

Don’t pit scientist against non-scientist: A science-related story may originate with a politician or a person in the street, and while their points of view are important, leave the discussion of scientific uncertainty to scientific experts in the topic under discussion.

—————-

Top 10 tips for covering science

The BBC’s science correspondent Pallab Ghosh gives his top 10 tips for reporting on science stories.

1. Ask yourself: Why is the story important? It’s part of BBC News’ philosophy that the “why?” is as important as the “who, what, where and when” in a news story.

This is particularly relevant when covering science stories, which can be detailed and complex.

The answer to the “why?” question is often the most interesting part of the story.

2. Tell the story to someone else before you write it. I’ve often gone into an edit suite with my script, ready to voice over a TV report, only to tear it up and start again after I’ve explained the story to the picture editor I’m working with. That is because, however good you are at writing, you will always convey a story more directly and engagingly if you tell someone else what it is and why it’s interesting, in two or three sentences.

3. Your job is not just to enthuse. When I first started out as a science journalist, most people in the profession saw themselves as cheer-leaders for science, enthusing about the wonder of a particular piece of research. There was also a view that science journalists should artfully explain complex, jargon-ridden science. Now we report on controversial topics such as GM crops, cloning and climate change, which have a political, as well as scientific, dimension. As a science journalist, your job is to challenge what you are told.

4. Sometimes it’s okay to enthuse. Although science journalists are careful to stick to the facts and not get carried away, occasionally something is so exciting, so awe-inspiring and so fantastic that enthusiasm is the only honest response. For example, none of us could contain our joy at reporting on the potential marvels of the Large Hadron Collider when it was first switched on.

5. Science is not the truth. The Newsnight presenter Jeremy Paxman famously explained that when interviewing ministers he would think to himself: “Why is this person lying to me?” Now, that’s not to say that scientists fall into the same category as politicians. But it is worth realising that there’s plenty of debate in all fields of research, and science journalists should not take what one particular scientist says at face value. Ask other scientists what they think of the work.

6. Peer-reviewed? Find out whether the research has been published in a scientific journal. Now, publication is no guarantee that the research is solid. For example, a leading scientific journal published claims that Korean researchers had cloned a human embryo – claims that were subsequently found to be untrue. However, a story is more likely to be solid if it is in a journal.

7. Unpublished is okay. Research doesn’t always have to be published for you to report on it. Research presented at scientific conferences won’t have gone through the rigorous checking processes it would have if it were published in a journal.

But, because the work is being presented in a public forum by reputable scientists, science journalists do report these stories. During national emergencies, leading scientists are called upon by government to carry out research to help tackle the problem. Because of the urgency of the situation, there’s no time to put the research through the normal checking process.

But we do report this research because it has been informally assessed by leading experts.

8. Get out more. It is all too easy to wait for press releases to drop into your inbox. But these stories also end up in the inboxes of hundreds of other journalists. That’s why you’ll probably see the same mildly interesting stories in newspapers and on radio and TV. But the really interesting stories are out there in the thousands of laboratories we have across the world.

9. Follow your passion, not the crowd. Just because everyone else is writing about stem cells, it doesn’t mean you have to. If you become a science journalist it is probably because you are fascinated by the stories. Don’t lose that fascination. Rely on it to help you follow great stories.

10. Enjoy yourself. You get to talk to fascinating people about some of the most interesting stories of our time. Science journalism is the best job in the world. Savour every moment.

Reprinted with kind permission of Pallab Ghosh and the BBC.