Podcast, Ivan Oransky, RetractionWatch: Retractions are like red flags highlighting infractions in science

Photo credit: SciencePOD.

 

Keeping a record of retracted research publications has made it easier for journalists to dissect the infractions occurring within science publishing. “What I see that’s important, is not just coverage of individual cases, people are actually trying to put this all together. They’re filing public record requests because that’s something we’re not thinking of.” That’s according to Retraction Watch co-founder Ivan Oransky, who started the initiative as a blog in 2010 with a single purpose in mind: making the peer-review process more transparent.

Listen to SciencePOD.org’s edited recording of a recent presentation Oransky gave at Trinity College Dublin, Ireland. The event, held on 20th June 2018, was co-organised by the Irish Science & Technology Journalists’ Association and the Science Gallery Dublin.

Oransky recalls the motivation that originally drove him and co-founder Adam Marcus to highlight the mishaps of the peer-review process within academic communities. “Those of you who may be familiar with PubMed, Medline or Web of Science, you go to any of those you’ll find under 6,000 (retractions)… we [at Retraction Watch] have 3 times as many,” notes Oransky. Today, the RetractionDataBase.org site holds 17,500 retractions, and it is still growing. While retractions remain rare, Oransky believes they have a screening effect, flagging problems in the literature that would otherwise stay hidden.

For a sense of scale, the two countries with the most retractions are China and the US. To provide an in-depth look, Oransky and his team compiled a leaderboard; each entry is linked to a comprehensive story that follows up on the original publication.

Many varieties of malpractice

Oransky highlights a few of the problems surrounding retractions found in the peer-review community. At the time of this recording, Retraction Watch had catalogued 630 retractions specifically due to e-mail fraud, in which an author submits a fake peer-reviewer e-mail address. How does this work? An academic submits a paper to a journal. When the journal asks for the e-mail address of a suggested peer reviewer, the academic offers a fake address that routes back to him or herself, closing the loop between author and journal and eliminating genuine peer review. Back in 2000, only about 5% of retractions were due to this kind of e-mail fraud.
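To make that loop concrete, here is a minimal sketch, in Python, of the kind of automated screening an editorial office could run on suggested reviewer addresses. Everything in it, from the function names to the list of free-mail domains, is an illustrative assumption, not Retraction Watch’s method or any journal’s real workflow.

```python
# Hypothetical screening heuristic, for illustration only: flag suggested
# reviewer e-mails that could close the loop back to the submitting author.
FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "163.com"}

def domain(address: str) -> str:
    """Return the part of an e-mail address after the @."""
    return address.rsplit("@", 1)[-1].lower()

def reviewer_red_flags(author_email: str, reviewer_email: str) -> list:
    """List reasons a suggested reviewer e-mail deserves manual scrutiny."""
    reasons = []
    if domain(reviewer_email) == domain(author_email):
        reasons.append("shares a domain with the submitting author")
    if domain(reviewer_email) in FREE_MAIL_DOMAINS:
        reasons.append("free-mail address rather than an institutional one")
    return reasons

print(reviewer_red_flags("a.smith@uni-example.edu", "reviewer123@gmail.com"))
# -> ['free-mail address rather than an institutional one']
```

A real editorial system would combine checks like these with manual verification of the reviewer’s identity; the sketch only shows why a self-referencing address is detectable at all.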

Another area of malpractice is duplication: publishing the same results in different journals. Not to be confused with plagiarism, duplication gives undue weight to one strand of the scientific conversation within the literature. When you try to conduct an analysis of a topic, you end up counting publications that report the same thing multiple times without any added value, as the toy example below shows.
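The records below are invented purely for illustration: a result published twice pulls a naive pooled average towards itself until duplicates are removed.

```python
# Invented toy records: the same finding appears twice in the literature.
studies = [
    {"title": "Drug X lowers blood pressure", "effect": 0.80},
    {"title": "Drug X lowers blood pressure", "effect": 0.80},  # duplicate
    {"title": "Drug Y lowers blood pressure", "effect": 0.10},
]

# A naive pooled average counts the duplicated result twice.
naive = sum(s["effect"] for s in studies) / len(studies)

# Deduplicating by normalised title before pooling removes the double count.
unique = {s["title"].lower(): s for s in studies}.values()
dedup = sum(s["effect"] for s in unique) / len(unique)

print(f"naive mean effect:        {naive:.2f}")  # 0.57, inflated by the duplicate
print(f"deduplicated mean effect: {dedup:.2f}")  # 0.45
```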

All knowledge is provisional

Assuming a paper should be retracted simply because its results aren’t reproducible is odd, but it does occur. This shows that there is no perfect system for scholarly publishing, and that keeping tabs on retractions can help uncover unsavoury behaviour among scientists.

Ultimately, this red-flag activity leads to stronger science, as researchers become aware of the potential downsides of misconduct, including being named and shamed as authors of retracted papers.

Enjoy the podcast!

 

Photo credit: SciencePOD.org

Attendees discuss ECSJ2017 in anticipation of ECSJ2018

What do the participants think of the conference? Watch Antonia Geck and Cora Klockenbusch ask attendees about their impressions of ECSJ2017.

For the full four-part series of videos, head over to ECSJ’s website.

Offering a hacking solution for scholarly publishing

Changing incentives for researchers, combined with scientist-centric technology solutions, could become the new normal.

Hacking solutions to science problems are springing up everywhere. They attempt to remove bureaucracy and streamline research. But how many of these initiatives come from the science publishing industry itself? There is currently no TripAdvisor for choosing the best journal to submit to, no Deliveroo for laboratory reagent delivery. How about decentralised peer review based on the blockchain certification principle? Today, the social networks for scientists, the likes of ResearchGate, Academia.edu and Mendeley, have made only a timid foray into what the future of scholarly publishing could look like.

This topic was debated in front of a room packed with science publishing executives at the STM conference on 18th October 2016, on the eve of the Frankfurt Book Fair. Earlier that day, Brian Nosek, executive director at the Centre for Open Science, Charlottesville, Virginia, USA, gave a caveat about any future changes. He primarily saw the need to change the way incentives for scientists work so that, ultimately, research itself changes, rather than technology platforms imposing change.

Yet the key to adapting is “down to the pace of experiment,” said Phill Jones, head of publisher outreach at Digital Science, London, UK, which provides technology solutions to the industry. Jones advocates running lots and lots of experiments to find solutions that better serve the scientific community.

Indeed, “rapid evolution based on observed improvement is better than disruption for the sake of disruption,” agreed John Connolly, chief product officer at Springer Nature, London, UK.

Adopting an attitude that embraces these experiments “is the biggest change that we [the scholarly publishing industry] need to embrace,” Jones concluded.

To do so, “we need publishers to be a lot less cautious,” noted Richard Padley, chairman of Semantico, London, UK, which provides technology solutions to science publishers. “It is a cultural thing, publishers need to empower their organisation to use technology from the top down.”

So are the lives of scientists about to change? Arguably, yes, although resistance from proponents of the status quo may still arise. Much may depend on the pace at which science publishers turn themselves into a technology service industry. The truth is, “users want to see tools that are much more user-centred and less centred around publishers,” argued Connolly. However, “if you ask a scientist what they wanted [in the past], they would have said high-impact-factor articles,” said Phill Jones. “They thought this is what they wanted because there was no alternative,” Jones added, whereas in fact “they wanted to have higher impact of their research and have greater reach.”

Clearly, “if you are optimistic about publishers, there is a job for publishers, to synthesise knowledge, to see the relevant content,” said Connolly.

This will require quite a lot of adjustment from those who pay for content. A download is not a marker of whether you have passed on that synthesised knowledge!

Original article published on EuroScientist.com.