Podcast, Ivan Oransky, RetractionWatch: Retractions are like red flags highlighting infractions in science

Photo credit: SciencePOD.

 

Keeping a record of retractions of research publications has made it easier for journalists to dissect the infractions occurring within science publishing. “What I see that’s important, is not just coverage of individual cases, people are actually trying to put this all together. They’re filing public record requests because that’s something we’re not thinking of.” That’s according to Retraction Watch co-founder Ivan Oransky, who started the initiative as a blog in 2010 with a single purpose in mind: making the peer-review process more transparent.

Listen to SciencePOD.org’s edited recording of a recent presentation Oransky gave at Trinity College Dublin, Ireland. The event, held on 20th June 2018, was co-organised by the Irish Science & Technology Journalists’ Association and the Science Gallery Dublin.

Oransky recalls the motivation that originally drove him and co-founder Adam Marcus to highlight the mishaps of the peer-review process within academic communities. “Those of you who may be familiar with PubMed, Medline or Web of Science, you go to any of those you’ll find under 6,000 (retractions)… we [at Retraction Watch] have 3 times as many,” notes Oransky. Today, the RetractionDataBase.org site holds 17,500 retractions, and it is still growing. While retractions are rare, Oransky believes they have a screening effect, flagging problems that would otherwise go unnoticed.

For a sense of scale, the two countries in the world with the most retractions are China and the US. To provide an in-depth look at this, Oransky and his team compiled a leaderboard. Each of these instances is linked to a comprehensive story following up on the original publication.

Many varieties of malpractice

Oransky highlights a few of the problems surrounding retractions found in the peer-review community. At the time of this recording, RetractionWatch had catalogued 630 retractions specifically due to fake peer review, carried out by submitting a bogus peer-reviewer’s e-mail address. How does this work? An academic submits a paper to a journal. When the journal asks for an e-mail address to use for peer review, rather than supplying a genuine one, the academic offers a fake address that routes back to him or herself, closing the loop between author and journal and effectively eliminating peer review. Back in 2000, only about 5% of retractions were attributed to this kind of e-mail fraud.

Another area of malpractice is the duplication of results in different journals, not to be confused with plagiarism. Duplication gives undue weight to one side of a scientific conversation within the literature: when you try to conduct a scientific analysis of a topic, you end up looking at publications reporting the same thing multiple times without adding value to the topic.

All knowledge is provisional

To assume a paper should be retracted because the results aren’t reproducible is odd, but it does occur. This shows that there is no perfect system for scholarly publishing, and that keeping tabs on retractions can help to uncover unsavoury behaviour among scientists.

Ultimately, this red-flag activity leads to stronger science, as researchers become aware of the potential downside of having a retracted paper named and shamed.

Enjoy the podcast!

 

Photo credit: SciencePOD.org

Data privacy: Should we treat data handling the same way we do our own health?

Increasingly, digital breadcrumbs are making it possible for others to track our every move. To illustrate what is at stake here, we need to travel back in time. In the pre-computer era, the ability to remain incognito made for great drama in black-and-white movies. It also opened the door to the perfect crime, without much risk that the perpetrator would get caught. Yet, when it comes to crime, invading people’s privacy could be justified, some argue, for the sake of the greater good and to preserve a stable society.

But now anybody can become the object of intense online and digital scrutiny, regardless of whether they are guilty of some nefarious crime or not. And there is a distinct possibility that digital natives may, in the not-so-distant future, take for granted that their every move and decision is being traced, without any objection.

It is not clear which is more worrying: that future generations might not even question that they are under constant digital scrutiny, or that it is our generation that is allowing technology to develop further without the safety nets that could secure our privacy now and for the future. The latter could leave the next generation without any hope of the privacy we once took for granted.

Health offers an insightful comparison. It may appear paradoxical, but our society appears much more concerned about preserving our physical health than the health of our digital anonymity. Indeed, new drugs are subjected, rightly so, to a very intense regulatory process before being approved. But new technology, and the way the data inherent to it is handled, is nowhere near as closely scrutinised. It simply creeps up on us, unchecked.

Despite protests from regulators that existing data privacy laws are sufficient, greater regulatory oversight would invariably affect the way data collection operates. Take the case of data used for research, for example. Experience has shown that even in countries where transparency is highly valued, such as Denmark, there have been deficiencies in obtaining consent for the use of sensitive personal health data in research, which recently created uproar. By contrast, the current EU regulatory debate surrounding the new Data Protection Regulation has the research community up in arms, for fear that too much data regulation would greatly disrupt the course of research.

As for our digital crumbs, at the dawn of the Internet of Things it has become urgent to consider how best this data may be managed. Striking the right balance between finding applications with societal relevance and preserving people’s privacy remains a perilous exercise.

Do you believe digital natives are unlikely to be as concerned about their privacy?

Should we allow technology to further develop without implementing the necessary privacy safety nets?

Original article published on EuroScientist.com.

GDPR gives citizens control over their own data: An interview with Shawn Jensen

The data protection regulation GDPR includes exemptions to allow research on anonymised data. In this exclusive interview with Shawn Jensen, CEO of data privacy company Profila, Sabine Louët finds out about the implications of the new GDPR rules for citizens and for researchers. The regulation was adopted on 27th April 2016 and is due to enter into force on 25th May 2018. In essence, it decentralises part of data protection governance towards data controllers and the people in charge of processing the data.

As part of the interview, Jensen explains the exemptions that have been granted to certain activities, such as research, so that scientists can continue to use anonymised data for their work while ensuring the data privacy required by the law.

In fact, the regulations are designed to preserve the delicate balance between protecting the rights of data subjects in a digitalised and globalised world and making it possible to process personal data for scientific research, as explained in a recent study by Gauthier Chassang, from the French National Health Research Institute INSERM. The study author concludes:

“While the GDPR adopts new specific provisions to ensure adapted data protection in research, the field remains widely regulated at national level, in particular, regarding the application of research participants’ rights, which some could regret.”

However, Chassang continues,

“the GDPR has the merit to set up clearer rules that will positively serve the research practices notably regarding consent, regarding the rules for reusing personal data for another purpose, assessing the risks of data processing …” In addition, he continues, “for the first time, the GDPR refers to the respect of ethical standards as being part of the lawfulness of the processing in research” and “opens new possibilities for going ahead in the structuring of data sharing in scientific research with measures encouraging self-regulation development.”

Read the original article here…

How to live in a post-privacy world: An interview with Michal Kosinski

Is it possible that by giving up some privacy, we could have better health or cheaper car insurance rates?

Discussions on privacy were top of the agenda at CeBIT 2017 in Hannover, Germany, one of the biggest technology trade fairs in Europe. Indeed, social media are full of the many aspects of the lives of those who share their views with as little as the press of a ‘like’ button on Facebook.

An invited speaker at CeBIT 2017, Michal Kosinski, assistant professor of organisational behaviour at Stanford Graduate School of Business, California, USA, shares an update on his latest work on predicting future behaviour from psychological traits inferred from the trail of digital crumbs, including pictures, that we leave across the internet.

His latest work has huge implications for privacy. He believes human faces, available from pictures found on social media networks, can be used as a proxy for people’s hormonal levels, genes, developmental history and culture. “It is pretty creepy that our face gives up so much personal information.” He adds: “I would argue sometimes it is worth giving up some privacy in return for a longer life and better health.”

In this context, regulators do not work in a vacuum, but they cannot guarantee absolute privacy. He explains that it is an illusion for people to strive to retain control over their own data. “The sooner as a society and policy makers we stop worrying about winning some battles in a privacy war, and the sooner we accept, ‘ok, we’ve lost this war,’ and we move towards organising society and culture, and technology and law, in such a way that we make the post-privacy world a habitable place, the better for everyone.”

In this exclusive podcast, Kosinski also discusses the constant struggle between top-down governance and bottom-up self-organisation, which leads to a constant trade-off in terms of privacy in our society. He gives an insightful example: the likes of Facebook, with their algorithms, would be uniquely placed to match people with the right job, or to detect suicides before they happen. However, this possibility raises questions about a level of invasion of people’s privacy that is not socially acceptable, even if it could solve some of our society’s problems.

Finally, Kosinski gives another example where people’s privacy has been invaded for the purpose of changing their behaviour. Specifically, he refers to interventions by the car insurance industry, which has added sensors in cars to monitor drivers’ behaviour, thus breaching their privacy in exchange for lower premiums.

Original article published on EuroScientist.com.

Open Access sector moves slowly to mature

New figures show the relatively limited size and slow rate of uptake of the Open Access market.

Open Access (OA) continues to be the subject of discussion in the scientific community, including debates about the need for greater levels of open access. However, the reality on the ground is not as clear-cut, and the adoption rate of OA is not as quick as its promoters would like it to be. At the recent STM Association conference, held on 10th October 2017 in Frankfurt, I presented the findings of a study by independent publishing consultancy Delta Think, Philadelphia, USA, about the size of the open access market. The numbers help unearth recent trends and the dynamics of the OA market, giving a mixed picture. Although open access is established and growing faster than the underlying scholarly publishing market, OA’s value forms a small segment of the total, and it is only slowly taking share. With funders showing mixed approaches to backing OA, it might be that individual scientists have a greater role to play to effect change.

Size and growth of the current open access market

New figures about the size and growth of the current open access market have recently been published by Delta Think’s Open Access Data & Analytics Tool, which combines and cross-references the many and varied sources of information about open access to provide data and insights about the significant number of organisations and activities in the OA market. They give a sense of how open access is faring, and what the future has in store for its adoption.

In 2015, the global open access market is estimated to have been worth around €330m (US$390m), and it grew to around €400m ($470m) in 2016. This growth rate is set to continue, meaning that the market is on course to be worth half a billion dollars globally going into 2018.
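As a rough sanity check on that half-billion projection, here is a minimal back-of-the-envelope sketch in Python; the assumption that the 2015–2016 growth rate simply holds constant is mine, not Delta Think's:

```python
# Back-of-the-envelope projection from the figures quoted above.
# Assumption (mine, not Delta Think's): growth continues at the same
# rate observed between 2015 and 2016.
m_2015 = 390e6  # global OA market value, 2015, in USD
m_2016 = 470e6  # global OA market value, 2016, in USD

growth = m_2016 / m_2015 - 1    # implied annual growth, ~20.5%
m_2017 = m_2016 * (1 + growth)  # ~$566m, past the half-billion mark

print(f"implied growth rate: {growth:.1%}")            # -> 20.5%
print(f"projected 2017 market: ${m_2017 / 1e6:.0f}m")  # -> $566m
```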

The open access market size in context

To put the sizing numbers into context, the growth rate in OA revenues is much higher than the growth in the underlying scholarly publishing market, which typically grows at a few percent (low to mid-single digits) per year.

However, OA’s share of market value is low compared with its share of output. Just over 20% of all articles were published as open access in 2016, yet they accounted for between 4% and 9% of total journal publishing market value, depending on how the market is defined. The OA numbers cover Gold open access articles published in the year, defined as articles for which publication fees are paid to make content immediately available; they exclude such things as articles deposited in repositories under an embargo period or articles simply made publicly accessible.
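To make the gap concrete, dividing the value share by the output share gives the revenue an average OA article generates relative to the average article overall; a quick sketch using the shares quoted above:

```python
# Revenue per OA article relative to the all-articles average, implied
# by the shares quoted above: ~20% of output, 4-9% of market value.
oa_output_share = 0.20

for oa_value_share in (0.04, 0.09):
    relative_revenue = oa_value_share / oa_output_share
    print(f"value share {oa_value_share:.0%}: an average OA article "
          f"earns {relative_revenue:.0%} of the average article's revenue")
```

On these figures, an average OA article generates roughly one-fifth to one-half of the revenue of the average article.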

Uptake of open access

Open access is taking market share only slowly. Taking the 2002 Budapest OA Initiative as a nominal starting point for market development, the data suggest that it has taken some 15 years for open access to comprise one-fifth of article output and, at best, one-tenth of market value.

The key driver of changes in the OA market remains funders. When choosing which journals to publish in, researchers continue to value factors such as dissemination to the right audience, quality of peer review, and publisher brand over whether their work will be made OA. Numerous studies have shown this, and suggest similar attitudes towards both journals and monographs.

Movement towards OA therefore happens where funders’ impetus overcomes researchers’ inertia. Funders’ appetites for OA in turn vary by territory, so the outlook for Open access growth of market share remains mixed. Funders in the EU have the strongest mandates, but many in the Far East, for example, incentivise publication based on Journal Impact Factor, regardless of access model, as they seek to enhance reputations and advance careers.

The future of open access

The relatively low value of open access content compared with its share of output poses interesting questions about the sustainability of the open access publishing model. To some, the data suggest that open access is cost-effective and could lower the systemic costs of publishing. By contrast, others suggest that we need to be realistic about systemic costs and global funding flows.

Further, although open access is clearly entrenched in the market, at current rates of change it will be decades before open access articles and monographs form the majority of scholarly output. Opinions vary as to whether the transition to open access is frustratingly slow or reassuringly measured.

The current data suggest that the discussion and debate around open access will continue as they have been. For the average researcher, it is therefore business as usual, and it might be that individual scientists have a greater role to play now in shifting the balance, regardless of whether funders nudge them in the OA direction.

 

Dan Pollock

Daniel is Director of Data & Analytics for Delta Think, a consultancy specialising in strategy, market research, technology assessment, and analytics supporting scholarly communications organisations, helping them successfully manage change.

Original article published on EuroScientist.com.

Illustration adapted from a photo by Astaine Akash on Unsplash

Podcast: How open science could benefit from blockchain

Experts reflect on the implications of blockchain for research.

Reporting from APE 2018, a recent conference gathering the who’s who of scholarly publishing in Berlin on 16th and 17th January 2018, EuroScientist editor Sabine Louët interviews several experts for their views on how blockchain technology will change the world of scientists.

First, we hear from Lambert Heller, head of the Open Science Lab at TIB Hannover, the German national library for science and technology, who gives his perspective as a digital librarian. He paints the bigger picture of how blockchain is going to help science become more open and remove current bottlenecks in the scientific endeavour, by increasing the connectivity, accessibility and storage of scholarly objects, such as research papers and databases, through metadata and interplanetary data systems.

Second, Amsterdam-based Joris van Rossum, director of special projects at Digital Science, London, UK, highlights the key findings of a recently published report he has written, Blockchain for Research. In particular, he outlines the many aspects of the research endeavour that could benefit from tracking through blockchain technology, which could take the form of a data layer underneath the current research ecosystem.

Then comes Berlin-based Sönke Bartling, founder of Blockchain for Science, whose mission is ‘to Open up Science and knowledge creation by means of the blockchain (r)evolution’. He speaks about how blockchain could change the way science is funded, via the creation of cryptocurrencies.

Finally, we hear from Eveline Klumpers, co-founder of Katalysis, a start-up based in Amsterdam, The Netherlands, that aims to redefine the value of online content by developing blockchain solutions for the publishing industry. She gives some concrete examples of how blockchain technology can store transparent, immutable smart contracts defining how content should be shared. This approach can give power back to authors of original work and help them monetise it; it could also help ensure reproducibility in research.
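For readers unfamiliar with the mechanics, the sketch below is a toy illustration of the underlying idea: hash-chained records make a ledger of licence terms tamper-evident. It is not Katalysis’s system, and every name in it is made up:

```python
# Toy illustration only (not Katalysis's actual system; all names are
# made up): a hash-chained ledger of content-licence records. Each
# block commits to the previous block's hash, so any tampering with an
# earlier licence record is immediately detectable.
import hashlib
import json
import time

def make_block(prev_hash: str, licence: dict) -> dict:
    """Create a ledger entry whose hash covers its content and predecessor."""
    body = {"prev": prev_hash, "time": time.time(), "licence": licence}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block("0" * 64, {"work": "article-001",
                                "author": "A. N. Author",
                                "terms": "share with attribution"})
grant = make_block(genesis["hash"], {"work": "article-001",
                                     "licensee": "Journal X",
                                     "terms": "republish, fee paid"})

# Editing the genesis licence would change its hash and break the link:
assert grant["prev"] == genesis["hash"]
```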

Original article published on EuroScientist.com.