Podcast, Ivan Oransky, RetractionWatch: Retractions are like red flags highlighting infractions in science

Photo credit: SciencePOD.


Keeping a record of retracted research publications has made it easier for journalists to dissect the infractions occurring within science publishing. “What I see that’s important, is not just coverage of individual cases, people are actually trying to put this all together. They’re filing public record requests because that’s something we’re not thinking of.” That’s according to Retraction Watch co-founder Ivan Oransky, who started the initiative as a blog in 2010 with a single purpose in mind: making the peer-review process more transparent.

Listen to SciencePOD.org’s edited recording of a recent presentation Oransky gave at Trinity College Dublin, Ireland. This event, held on 20th June 2018, was co-organised by the Irish Science & Technology Journalists’ Association and the Science Gallery Dublin.

Oransky recalls the motivation that originally drove him and co-founder Adam Marcus to highlight the failings of the peer-review process within academic communities. “Those of you who may be familiar with PubMed, Medline or Web of Science, you go to any of those you’ll find under 6,000 (retractions)… we [at Retraction Watch] have 3 times as many,” notes Oransky. Today, the RetractionDataBase.org site holds 17,500 retractions, and it is still growing. While retractions are rare, Oransky believes there is a screening effect attached to them.

For a sense of scale, the two countries with the most retractions worldwide are China and the US. To provide an in-depth look, Oransky and his team compiled a leaderboard. Each entry is linked to a comprehensive story following up on the original publication.

Many varieties of malpractice

Oransky highlights a few of the problems surrounding retractions found in the peer-review community. At the time of this recording, Retraction Watch had catalogued 630 retractions specifically due to e-mail fraud, in which an author submits a fake peer-reviewer’s e-mail address. How does this work? An academic submits a paper to a journal. When the journal asks for a suggested peer reviewer’s e-mail address, rather than providing a genuine one, the academic offers a fake address under his or her own control, closing the loop between author and journal and eliminating genuine peer review. Back in 2000, only about 5% of retractions were due to e-mail fraud.

Another form of malpractice is the duplication of results across different journals, not to be confused with plagiarism. Duplication gives undue weight to a scientific finding within the literature: when you try to conduct a systematic analysis of a topic, you end up looking at publications reporting the same thing multiple times without adding value to the topic.

All knowledge is provisional

To assume a paper should be retracted simply because its results aren’t reproducible is odd, but it does occur. This shows that there is no perfect system for scholarly publishing, and that keeping tabs on retractions can help to uncover unsavoury behaviour among scientists.

Ultimately, this red-flag activity leads to stronger science, as researchers become aware of the potential downsides of misconduct, including being named and shamed as authors of retracted papers.

Enjoy the podcast!


Photo credit: SciencePOD.org

Attendees discuss ECSJ2017 in anticipation of ECSJ2018

What do the participants think of the conference? Watch Antonia Geck and Cora Klockenbusch ask attendees about their impression of ECSJ2017.

For the full 4-part series of videos, head over to ECSJ’s website, here.

How to avoid Martian radiation

Researchers put model predictions of radiation to the test ahead of future manned missions to Mars.

Cold, dry, airless – Mars doesn’t make the most comfortable environment for human exploration. But what makes a manned mission to the Red Planet truly dangerous is its radiation, which is thought to be more than 500 times more potent than here on Earth.

Now, a team based in Germany and the US has taken an important step towards predicting when, where and with what strength this radiation will strike. Their work, which has just been published in Life Sciences in Space Research, compares theoretical predictions of different models with actual observations for the first time. This work could one day be used to mitigate the risk to Mars explorers of radiation sickness and cancer.

“Using different models and comparing them to available data allows us to better understand the weaknesses and strengths in those models, and how we can apply them to extend our knowledge beyond the measurements,” explains lead author Daniel Matthiä of the German Aerospace Center in Cologne. “We can use the models to determine hypothetical scenarios, for example, the radiation exposure in shelters, habitats and underground.”

Radiation on Mars is far from consistent. Most of it consists of cosmic rays – high-energy particles shooting from outer space – and these vary with the fluctuating strength of the Sun’s protective magnetic field. But just being able to forecast general levels of cosmic rays isn’t enough, as different high-energy particles have different effects on human physiology, and the geography of Mars means that some places are more exposed than others.

There are already a handful of computer models that can predict, in great detail, the changing radiation field generated on Mars by cosmic rays. However, it is not known how accurate these predictions are.

Matthiä and colleagues have begun to find out, by comparing model predictions with data taken from the Radiation Assessment Detector (RAD) instrument aboard NASA’s Curiosity rover on Mars since 2012. This effort was part of a “Blind Challenge” Model Comparison Workshop organised by Matthiä, Don Hassler of the Southwest Research Institute in Boulder, Colorado (Principal Investigator of the RAD investigation) and the RAD team. The RAD is supported by NASA and DLR to make exactly these kinds of measurements to help improve astronaut safety on future human missions to Mars. Building on a preliminary comparison last year, the researchers asked several modelling groups to predict the radiation environment on the surface of Mars for a two-month period, without seeing Curiosity’s results beforehand. “This is harder than it sounds,” says Matthiä.

Although the researchers are unable to quantify the accuracy of the model predictions yet, Matthiä says they were surprised in some cases by how much the models disagreed with each other and with the observational data. But this will help to improve the model predictions, he says, and ultimately provide more confidence when planning manned missions to Mars.

“Understanding, mitigating and managing the radiation environment for astronauts on a manned mission to Mars is a challenging, but not unsurmountable, problem,” Matthiä adds. “The more we understand about the environment, its variability, and its effect on humans, the safer our astronauts will be.”

Comparisons of model predictions and RAD data are not the only way to study the health effects of space radiation during a manned Mars mission, however. Last year, in another paper published in Life Sciences in Space Research, scientists reported upgrades at NASA’s Space Radiation Laboratory (NSRL) at the Brookhaven National Laboratory in Upton, New York, enabling the effects of cosmic rays to be simulated experimentally with greater precision here on Earth.

Article details:

D. Matthiä et al.: “The radiation environment on the surface of Mars – Summary of model calculations and comparison to RAD data,” Life Sciences in Space Research (2017)

Original article published on Elsevier.com.

Offering a hacking solution for scholarly publishing

Changed incentives for researchers and scientist-centric technology solutions could become the new normal.

Hacking solutions to science problems are springing up everywhere. They attempt to remove bureaucracy and streamline research. But how many of these initiatives are coming from the science publishing industry? There is currently no TripAdvisor for choosing the best journal to submit to, no Deliveroo for laboratory reagent delivery. How about decentralised peer review based on the blockchain certification principle? Today, the social media networks for scientists—the likes of ResearchGate, Academia.edu and Mendeley—have only started a timid foray into what the future of scholarly publishing could look like.

This topic was debated in front of a room packed with science publishing executives at the STM conference on 18th October 2016, on the eve of the Frankfurt Book Fair. Earlier that day, Brian Nosek, executive director at the Center for Open Science, Charlottesville, Virginia, USA, offered a caveat about any future changes. He primarily saw the need to change the way incentives for scientists work so that, ultimately, research itself changes, rather than technology platforms imposing change.

Yet, the key to adapting is “down to the pace of experiment,” said Phill Jones, head of publisher outreach at Digital Science, London, UK, which provides technology solutions to the industry. Jones advocates running many experiments to find solutions that better serve the scientific community.

Indeed, “rapid evolution based on observed improvement is better than disruption for the sake of disruption,” agreed John Connolly, chief product officer at Springer Nature, London, UK.

Adopting an attitude that embraces these experiments “is the biggest change that we [the scholarly publishing industry] need to embrace,” Jones concluded.

To do so, “we need publishers to be a lot less cautious,” noted Richard Padley, chairman of Semantico, London, UK, which provides technology solutions to science publishers. “It is a cultural thing, publishers need to empower their organisation to use technology from the top down.”

So are the lives of scientists about to change? Arguably, yes. Resistance from proponents of the status quo may still arise, and much may depend on the pace at which science publishers turn into a technology service industry. The truth is that “users want to see tools that are much more user-centred and less centred around publishers,” argued Connolly. However, “if you asked a scientist what they wanted [in the past], they would have said high impact factor articles,” said Phill Jones. “They thought this is what they wanted because there was no alternative,” Jones added, whereas “they wanted to have higher impact of their research and have greater reach.”

Clearly, “if you are optimistic about publishers, there is a job for publishers, to synthesise knowledge, to see the relevant content,” said Connolly.

This will require quite a lot of adjustment from those who pay for content. A download is not a marker of whether you have passed on that synthesised knowledge!

Original article published on EuroScientist.com.