Changing incentives for researchers, alongside scientist-centric technology solutions, could become the new normal.
Hacking solutions to science problems are springing up everywhere, attempting to remove bureaucracy and streamline research. But how many of these initiatives come from the science publishing industry itself? There is currently no TripAdvisor for choosing the best journal for submission, no Deliveroo for laboratory reagent delivery. What about decentralised peer review based on the blockchain certification principle? Today, the social media networks for scientists—the likes of ResearchGate, Academia.edu and Mendeley—have made only a timid foray into what the future of scholarly publishing could look like.
This topic was debated in front of a room packed with science publishing executives at the STM conference on 18th October 2016, on the eve of the Frankfurt Book Fair. Earlier that day, Brian Nosek, executive director at the Centre for Open Science, Charlottesville, Virginia, USA, gave a caveat about any future changes. He primarily saw the need to change the way incentives for scientists work so that, ultimately, research itself changes, rather than technology platforms imposing change.
Yet the key to adapting is “down to the pace of experiment,” said Phill Jones, head of publisher outreach at Digital Science, London, UK, which provides technology solutions to the industry. Jones advocates running many experiments to find solutions that better serve the scientific community.
Indeed, “rapid evolution based on observed improvement is better than disruption for the sake of disruption,” agreed John Connolly, chief product officer at Springer Nature, London, UK.
Adopting an attitude that embraces such experimentation “is the biggest change that we [the scholarly publishing industry] need to embrace,” Jones concluded.
To do so, “we need publishers to be a lot less cautious,” noted Richard Padley, chairman of Semantico, London, UK, which provides technology solutions to science publishers. “It is a cultural thing: publishers need to empower their organisation to use technology from the top down.”
So are the lives of scientists about to change? Arguably, yes, although resistance from proponents of the status quo may still arise. Much may depend on the pace at which science publishers turn into a technology service industry. The truth is that “users want to see tools that are much more user-centred and less centred around publishers,” argued Connolly. However, “if you asked scientists what they wanted [in the past], they would have said high-impact-factor articles,” said Phill Jones. “They thought this is what they wanted because there was no alternative,” Jones added, whereas in fact “they wanted to have higher impact of their research and have greater reach.”
Clearly, “if you are optimistic about publishers, there is a job for publishers: to synthesise knowledge, to see the relevant content,” said Connolly.
This will require quite a lot of adjustment from those who pay for content. A download is not a marker of whether you have passed on that synthesised knowledge!
Increasingly, digital breadcrumbs are making it possible for others to track our every move. To illustrate what is at stake, we need to travel back in time. In the pre-computer era, the ability to remain incognito made for great drama in black-and-white movies. It also opened the door to the perfect crime, without much risk that the perpetrator would get caught. Yet, when it comes to crime, invading people’s privacy could be justified, some argue, for the sake of the greater good and to preserve a stable society.
But now anybody can become the object of intense online and digital scrutiny, regardless of whether they are guilty of some nefarious crime. And there is a distinct possibility that digital natives may, in the not-so-distant future, take for granted that their every move and decision is being traced, without any objection.
It is not clear which is more worrying: that future generations might not even question being under constant digital scrutiny, or that our own generation is allowing technology to develop further without the safety nets that could secure our privacy now and for the future, leaving the next generation without any hope of the privacy we once took for granted.
Health offers an insightful comparison. It may appear paradoxical, but our society appears much more concerned about preserving our physical health than the health of our digital anonymity. Indeed, new drugs are subjected—rightly so—to a very intense regulatory process before being approved. But new technology, and the way the data it generates is handled, is nowhere near as closely scrutinised. It simply creeps up on us, unchecked.
Despite protests from regulators that existing data privacy laws are sufficient, greater regulatory oversight would invariably affect the way data collection operates. Take the case of data used for research. Experience has shown that even in countries where transparency is highly valued, such as Denmark, there have been deficiencies in obtaining consent for the use of sensitive personal health data in research, which recently created uproar. By contrast, the current EU regulatory debate surrounding the new Data Protection Regulation has the research community up in arms, for fear that too much data regulation would greatly disrupt the course of research.
It has therefore become urgent, at the dawn of the Internet of Things, to consider how best our digital crumbs may be managed. Striking the right balance between finding applications with societal relevance and preserving people’s privacy remains a perilous exercise.
Do you believe digital natives are unlikely to be as concerned about their privacy?
Should we allow technology to further develop without implementing the necessary privacy safety nets?
The EU’s General Data Protection Regulation (GDPR) includes exemptions to allow research on anonymised data. In this exclusive interview with Shawn Jensen, CEO of data privacy company Profila, Sabine Louët finds out about the implications of the new GDPR rules for citizens and for researchers. The regulation was adopted on 27th April 2016 and is due to enter into force on 25th May 2018. In essence, it decentralises part of data protection governance towards data controllers and the people in charge of processing the data.
As part of the interview, Jensen explains the exemptions that have been granted to certain activities, such as research, so that scientists can continue to use anonymised data for their work while still ensuring the level of data privacy required by the law.
In fact, the regulation is designed to preserve the delicate balance between protecting the rights of data subjects in a digitalised and globalised world and making it possible to process personal data for scientific research, as explained in a recent study by Gauthier Chassang, from the French National Health Research Institute INSERM. The study author concludes:
“While the GDPR adopts new specific provisions to ensure adapted data protection in research, the field remains widely regulated at national level, in particular, regarding the application of research participants’ rights, which some could regret.”
However, Chassang continues,
“the GDPR has the merit to set up clearer rules that will positively serve the research practices notably regarding consent, regarding the rules for reusing personal data for another purpose, assessing the risks of data processing …” In addition, he continues, “for the first time, the GDPR refers to the respect of ethical standards as being part of the lawfulness of the processing in research” and “opens new possibilities for going ahead in the structuring of data sharing in scientific research with measures encouraging self-regulation development.”
Is it possible that by giving up some privacy, we could have better health or cheaper car insurance rates?
Discussions on privacy were top of the agenda at one of the biggest technology trade fairs in Europe, CeBIT 2017 in Hannover, Germany. Indeed, social media are full of the many aspects of the lives of those who share their views with as little as the press of a ‘like’ button on Facebook.
An invited speaker at CeBIT 2017, Michal Kosinski, assistant professor of organisational behaviour at Stanford Graduate School of Business, California, USA, shares an update on his latest work on the ability to predict future behaviour from psychological traits inferred from the trail of digital crumbs we leave behind, including the pictures we share over the internet.
His latest work has huge implications for privacy. He believes human faces—available from pictures found on social media networks—can be used as a proxy for people’s hormonal levels, genes, developmental history and culture. “It is pretty creepy that our face gives up so much personal information.” He adds: “I would argue sometimes it is worth giving up some privacy in return for a longer life and better health.”
In this context, regulators do not work in a vacuum. But regulators cannot guarantee absolute privacy. He explains that it is an illusion for people to strive to retain control over their own data. “The sooner as a society and policy makers, we stop worrying about winning some battles in a privacy war, and the sooner we accept, ‘ok we’ve lost this war,’ and we move towards organising society and culture, and technology and law, in such a way that we make the post-privacy world a habitable place, the better for everyone.”
In this exclusive podcast, Kosinski also discusses the constant struggle between top-down governance and bottom-up self-organisation, which leads to a constant trade-off in terms of privacy in our society. He gives an insightful example: the likes of Facebook, with their algorithms, would be uniquely placed to match people with the right job, or to detect suicides before they happen. However, this possibility raises questions about a level of invasion of people’s privacy that is not socially acceptable, even if it could solve some of our society’s problems.
Finally, Kosinski gives another example where people’s privacy has been invaded for the purpose of changing their behaviour. Specifically, he refers to interventions by the car insurance industry, which has added sensors to cars to monitor drivers’ behaviour, thus breaching their privacy in exchange for lower premiums.
New figures show the relatively limited size and slow rate of uptake of the Open Access market.
Open Access (OA) continues to be the subject of discussion in the scientific community, as do calls for greater levels of open access. However, the reality on the ground is not as clear cut, and the adoption of OA is not as quick as its promoters would like. At the recent STM Association conference, held on 10th October 2017 in Frankfurt, I presented the findings of a study by independent publishing consultancy Delta Think, Philadelphia, USA, on the size of the open access market. The numbers help unearth recent trends and the dynamics of the OA market, and they paint a mixed picture. Although open access is established and growing faster than the underlying scholarly publishing market, OA forms a small segment of the total value, and it is only slowly taking share. With funders showing mixed approaches to backing OA, it might be that individual scientists have a greater role to play in effecting change.
Size and growth of the current open access market
New figures about the size and growth of the current open access market have recently been published by Delta Think’s Open Access Data & Analytics Tool, which combines and cross-references the many and varied sources of information about open access to provide data and insights about the significant number of organisations and activities in the OA market. They give a sense of how open access is faring, and what the future has in store for its adoption.
In 2015, the global open access market is estimated to have been worth around €330m (US$390m), and it grew to around €400m ($470m) in 2016. This growth rate is set to continue, meaning that the market is on course to be worth half a billion dollars globally going into 2018.
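For readers who want to check the arithmetic, a minimal sketch of the implied growth is given below. The 2015 and 2016 dollar figures are those quoted above; simply compounding the same year-on-year rate is an illustrative assumption, not Delta Think’s forecasting method.

```python
# Rough projection of the OA market size, using only the figures quoted above.
# Compounding the 2015-2016 growth rate forward is an assumption for illustration.

usd_2015 = 390e6  # estimated 2015 OA market size (USD)
usd_2016 = 470e6  # estimated 2016 OA market size (USD)

growth = usd_2016 / usd_2015 - 1      # implied year-on-year growth, roughly 20%
usd_2017 = usd_2016 * (1 + growth)    # about USD 566m if the same rate holds

print(f"Implied growth rate: {growth:.0%}")
print(f"Projected 2017 market size: ${usd_2017 / 1e6:.0f}m")
# If the 2015-2016 rate holds, the market passes the half-billion-dollar mark
# during 2017, i.e. "going into 2018" as stated above.
```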
Open access market size in context
To put the sizing numbers into context, the growth rate in OA revenues is much higher than the growth in the underlying scholarly publishing market, which typically grows at a few percent (low to mid-single digits) per year.
However, OA’s share of market value is low compared with its share of output. Just over 20% of all articles were published as open access in 2016, yet they accounted for between 4% and 9% of total journal publishing market value, depending on how the market is defined. The OA numbers cover Gold open access articles published in the year, defined as articles for which publication fees are paid to make content immediately available; they exclude such things as articles deposited in repositories under an embargo period or articles simply made publicly accessible.
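To see what the gap between output share and value share implies, a back-of-the-envelope comparison of revenue per article can help. The sketch below only rearranges the percentages quoted above and assumes nothing about absolute market size; it is an illustration, not Delta Think’s analysis.

```python
# Back-of-the-envelope: average revenue per OA article relative to the
# all-article average, derived only from the shares quoted above.

oa_share_of_articles = 0.20        # just over 20% of articles were OA in 2016
oa_share_of_value = (0.04, 0.09)   # 4%-9% of journal market value, depending on definition

for value_share in oa_share_of_value:
    # (value share / output share) gives OA revenue per article vs the overall average
    relative_revenue = value_share / oa_share_of_articles
    print(f"At a {value_share:.0%} value share, an OA article brings in roughly "
          f"{relative_revenue:.0%} of the average article's revenue")
```

On these figures, an OA article generates roughly a fifth to a half of the revenue of the average article, which is the arithmetic behind the sustainability debate discussed below.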
Uptake of open access
Open access is taking market share only slowly. Taking the 2002 Budapest Open Access Initiative as a nominal starting point for market development, the data suggest that it has taken some 15 years for open access to comprise one-fifth of article output and, at best, one-tenth of market value.
Funders remain the key driver of change in the OA market. When choosing which journals to publish in, researchers continue to value factors such as dissemination to the right audience, quality of peer review, and publisher brand over whether their work will be made OA. Numerous studies have shown this, and they suggest similar attitudes towards both journals and monographs.
Movement towards OA therefore happens where funders’ impetus overcomes researchers’ inertia. Funders’ appetites for OA in turn vary by territory, so the outlook for growth in OA market share remains mixed. Funders in the EU have the strongest mandates, but many in the Far East, for example, incentivise publication based on Journal Impact Factor, regardless of access model, as they seek to enhance reputations and advance careers.
The future of open access
The relatively low value of open access content compared with its share of output poses interesting questions about the sustainability of the open access publishing model. To some, the data suggest that open access is cost effective and could lower systemic costs of publishing. By contrast, others suggest that we need to be realistic about systemic costs and global funding flows.
Further, although open access is clearly entrenched in the market, at current rates of change, it will be decades before open access articles and monographs form the majority of scholarly output. Opinions vary as to whether the transition to Open access is frustratingly slow or reassuringly measured.
The current data suggest that the discussion and debate around open access will continue much as before. For the average researcher, it is therefore business as usual, and so it might be that individual scientists now have a greater role to play in shifting the balance, regardless of whether funders nudge them in the OA direction.
Daniel is Director of Data & Analytics at Delta Think, a consultancy specialising in strategy, market research, technology assessment, and analytics that supports scholarly communications organisations and helps them successfully manage change.
Experts reflect on the implications of blockchain for research.
Reporting from APE 2018, a recent conference gathering the who’s who of scholarly publishing in Berlin on 16th and 17th January 2018, EuroScientist editor Sabine Louët interviews several experts on how blockchain technology will change the world of scientists.
First, we hear from Lambert Heller, head of the Open Science Lab at TIB Hannover, the German national library for science and technology, who gives his perspective as a digital librarian. He paints the bigger picture of how blockchain can help science become more open and remove current bottlenecks in the scientific endeavour, by increasing the connectivity, accessibility and storage of scholarly objects, such as research papers and databases, through metadata and interplanetary file systems.
Second, Amsterdam-based Joris van Rossum, director of special projects at Digital Science, London, UK, highlights key findings of his recently published report, Blockchain for Research. In particular, he outlines the many aspects of the research endeavour that could benefit from tracking, including through the use of blockchain technology, which can take the form of a data layer underneath the current research ecosystem.
Then comes Berlin-based Sönke Bartling, founder of Blockchain for Science, whose mission is ‘to Open up Science and knowledge creation by means of the blockchain (r)evolution’. He speaks about how blockchain could change the way science is funded via the creation of cryptocurrencies.
Finally, we hear from Eveline Klumpers, co-founder of Katalysis, a start-up based in Amsterdam, The Netherlands, that aims to redefine the value of online content by developing blockchain solutions for the publishing industry. She gives some concrete examples of how blockchain can store transparent, immutable smart contracts defining how content should be shared. This approach can give power back to the authors of original work and help them monetise it. It could also help ensure reproducibility in research.