AI-driven personalisation as a solution to the battle for attention

Photo credit: Rawpixel on Unsplash

Everyone seems to be suffering from content indigestion as news competes with native advertising for readers’ attention. So what are start-ups doing to fix this?

During the weeks running up to the biggest toy-selling season, nobody can ignore how cartoons and kids’ movies are used to market branded toys. This is an example of content developed for content marketing purposes. Toy companies use every trick in the digital marketing toolbox to personalise a message aimed at influencing parents and grandparents—they are the ones with the purchasing power, after all.

With such content marketing, the traditional boundaries between content designed to sell products or raise brand awareness and news content designed to inform are blurred. It is, therefore, not always easy for audiences to distinguish independent news media content from native advertising emulating news-style content. People are starting to feel that they are suffering from content indigestion while the battle for their attention rages on between media and advertisers.

To address such concerns, digital publishing is undergoing a serious shift towards greater personalisation. At a recent event on the Future of Content, organised by Irish industry support agency Enterprise Ireland in Dublin, Ireland, on 9th November 2018, participants presented perspectives from across the content-creation spectrum. Advocates of audience-led personalisation clashed with marketers pushing for customer microsegment-led personalisation.

Audience-led content selection

Today’s readers want a choice in how content is served across their multiple screens. This is particularly so for news. Yet the newly relaunched Google News app is designed to deliver news related to readers’ current location, regardless of whether they are interested in other types of news. In contrast, companies like KinZen aim to empower each reader to take charge of the kind of news they receive. The company has mapped the online journey of a typical reader and is taking stock of how its audience is developing new routines to get informed.

KinZen Co-Founder Mark Little explains that their philosophy is based on giving readers options to manage their own news-source feeds, while being mindful not to expose them to unwanted content such as ads, or to infringe on their privacy. For readers, this means accessing different kinds of news content, related to their professional or personal lives, at different times of the day, on different devices. Such a solution suggests tilting the traditional balance of media funding towards subscription models and away from advertising revenues.

It remains to be seen whether readers are prepared to pay for hyper-personalised news and for the privilege of not being distracted by adverts. However, there are signs that the subscription model is working.


Going one step further and challenging the way news content itself is created are Rob Wijnberg and Ernst Pfauth, founders of a new kind of subscription-only news organisation called The Correspondent. In their manifesto, they argue that online news today is no longer designed to inform audiences. Rather, they complain, ‘news organisations prioritise the needs of advertisers over the needs of readers.’

In a recent crowdfunding campaign for their English edition, launched on 14th November 2018, they criticise the media’s failure to engage with the needs of their audiences, stating that ‘all-too-often the news talks to you, rather than with you.’ Their model, piloted in Dutch as De Correspondent, aims to change the way news is created by involving readers in the process.

Clearly, this extension of the editorial process, by including contributions from beyond the strict confines of the traditional newsroom, echoes SciencePOD’s collaborative approach to creating content — be it for journalistic, educational or content marketing purposes. Through our platform, we augment the teams of our media, publishing or industry clients by giving them access to the know-how of our community of journalists and editors, who have the right scientific and technical expertise, the right editorial skills and the right location.

Top-down analytics for news and creative content

Clearly, this bottom-up personalisation approach, where the user regains power over what news and content is consumed, contrasts starkly with the ad-fuelled approaches of most media publishers. Harnessing ever more sophisticated data analytics methods and web technologies, publishers capture individual readers’ every move before serving them the content they believe is most appropriate, linked with matching ads.

For example, NewsWhip’s CEO Paul Quigley, based in Dublin, Ireland, shared how his company provides data analytics on how readers engage with news content, giving insights into the time of day stories are read, how popular they are, and the social media platforms on which readers access them.
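To make that kind of analytics concrete, the sketch below shows one minimal way engagement data could be aggregated by story, hour of day and platform. It is a generic illustration, not NewsWhip’s actual product, and every field and function name here is hypothetical.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EngagementEvent:
    """One hypothetical reader interaction with a story (share, like or click)."""
    story_id: str
    platform: str        # e.g. "facebook", "twitter"
    timestamp: datetime

def summarise(events):
    """Aggregate raw events into the insights mentioned above:
    story popularity, activity by hour of day, and activity by platform."""
    by_story = Counter(e.story_id for e in events)
    by_hour = Counter(e.timestamp.hour for e in events)
    by_platform = Counter(e.platform for e in events)
    return by_story, by_hour, by_platform

# Example with made-up data
events = [
    EngagementEvent("story-1", "facebook", datetime(2018, 11, 9, 8, 15)),
    EngagementEvent("story-1", "twitter", datetime(2018, 11, 9, 8, 40)),
    EngagementEvent("story-2", "facebook", datetime(2018, 11, 9, 19, 5)),
]
print(summarise(events))
```

A real system would ingest millions of such events in real time and join them with content metadata, but the underlying aggregation is essentially this simple.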

Yet the personalisation trend is also filtering through to the way advertisers display both news and ads used for content marketing. Solutions like Dublin-based Readersight, presented by founder Anthony Quigley, help publishers analyse reader engagement in real time and suggest ways of improving it by personalising the way content is displayed to optimise advertising revenues.

Another targeted-advertising expert, Barry Nolan, Chief Marketing Officer at San Francisco-headquartered Swrve, says they can deliver targeted, personal and optimised adverts at the ‘perfect’ time and through the ‘perfect’ channel. Every interaction with what they call micro-segmented consumer groups is carefully orchestrated within a publisher’s app. His approach is designed to bypass an audience’s negative response. As he puts it: ‘the more you communicate with customers, the more you train them to ignore you.’

This shift is also taking place in TV advertising, which is becoming highly targeted. Ronan Higgins, Founder and CEO of TVadSync, shared how they can now capture real-time data about what individual households (identified by their IP address) watch on their smart TVs: what time they are watching, for how long, and when they switch channels.


E-commerce links to content marketing

As all sectors of the economy migrate towards e-commerce, start-ups are directly integrating advertising or marketing content sites with e-commerce platforms.

Beyond news, this kind of optimisation is also happening in the creative arts, for example in film. Oliver Fegan, Co-Founder of Usheru, based in London, UK, explained how the company aims to connect the marketing of new movies to the online purchase of cinema tickets. Their personalised solution comes complete with an insights platform, giving film marketers a real-time view of the sales success of film marketing campaigns.

Others, like ChannelSight Co-Founder and CEO John Beckett, talked about a widget the company developed which cannot be blocked by ad blockers and which links products described in online content to the online shops where they can be purchased.

As more and more personalisation waves break over the branded content advertising landscape, audiences are increasingly influenced by very sophisticated marketing campaigns. Some, however, may want to pay for the privilege of not becoming a pawn of marketing algorithms.

Sabine Louët

Founder and CEO, SciencePOD

Digital publishers under the spell of web automation

What keeps digital publishers awake at night? This question has more answers than we have space for in this column. However, for many, having the right set of automation tools is essential. This is as true for digital marketing as it is for creating quality content. Publishers from mainstream digital media organisations could not have agreed more at DiG Publishing Lisbon, held from 3rd to 5th October 2018.

While much of the focus, and many of the attendees, went toward advertising, analytics and monetisation, content remained at the top of the agenda. Start-ups representing the content-creation theme included Market4News, whose CEO Igor Gómez Maneiro explained how his platform helps locate suitable mobile-journalism reporters, while Justin Varilek, Founder and CEO of HackPack, presented his online community of journalists. Later, Niko Vijayaratnam, Founder of Authored.io, shared his online authoring tool, which shows promise despite its relatively early stage of development.

Meanwhile, SciencePOD’s CEO, Sabine Louët, offered a sneak peek at the company’s digital publishing platform, which helps to automate the ordering and creation of quality content telling the story of research and innovation. The idea is to translate the complex jargon of science and technology into clear, concise, compelling stories to better influence targeted audiences.

For enthusiasts, and perhaps those in need of cost-effective content marketing strategies, Philipp Nette, Founder and CEO of Cutnut, has developed a centralised social media content creation and distribution platform for marketing campaigns. Meanwhile, Alexandra Steffel, Editor-in-Chief at Intellyo, offered a content planning and creation tool for enterprises.

Another favourite, Locationews, was certainly one for the books. Created by Co-Founder Julius Koskela, the site plots news onto a global map, enabling readers to find the news most relevant to their preferred location. Finally, Simply Aloud Co-Founder Davide Lovato talked about embedding voice versions of articles into news sites. For now, the recordings are made individually to guarantee the highest audio quality, rather than via automated text-to-speech systems.

Why the need for such a wide variety of content creation, distribution and monitoring tools? Wouldn’t they complicate newsrooms?

“We are reaching an age of the internet where enabling technologies automating content creation and distribution are maturing,” says Louët. “Therefore, making the most of these standard technologies is within everyone’s reach.” These technologies make it possible for digital publishers to unleash their imagination and bring more compelling content and news to their readers.

If you enjoyed this and are eager to see more, please be sure to like, subscribe and share.

Also, to hear more about specific “Ask the Editor” content, leave a message in the comments below.

Cheers!

Photo credit: Pixabay user KJ

Podcast with Ivan Oransky, RetractionWatch: Retractions are like red flags highlighting infractions in science

Photo credit: SciencePOD.

 

Keeping a record of retracted research publications has made it easier for journalists to dissect the infractions occurring within science publishing. “What I see that’s important, is not just coverage of individual cases, people are actually trying to put this all together. They’re filing public record requests because that’s something we’re not thinking of.” That’s according to RetractionWatch co-founder Ivan Oransky, who started the initiative as a blog in 2010 with a single purpose in mind: making the peer-review process more transparent.

Listen to SciencePOD.org’s edited recording of a recent presentation Oransky gave at Trinity College Dublin, Ireland. The event, held on 20th June 2018, was co-organised by the Irish Science & Technology Journalists’ Association and the Science Gallery Dublin.

Oransky recalls the motivation that originally animated him and co-founder Adam Marcus in highlighting the mishaps of the peer-review process within academic communities. “Those of you who may be familiar with PubMed, Medline or Web of Science, you go to any of those you’ll find under 6,000 (retractions)… we [at Retraction Watch] have 3 times as many,” notes Oransky. Today, the RetractionDataBase.org site holds 17,500 retractions, and it is still growing. While retractions are rare, Oransky believes there is a screening effect attached to them.

For a sense of scale, the two countries with the most retractions are China and the US. To provide an in-depth look at this, Oransky and his team compiled a leaderboard. Each of these instances is linked to a comprehensive story following the original publication.

Many varieties of malpractice

Oransky highlights a few of the problems surrounding retractions found in the peer-review community. At the time of this recording, RetractionWatch had catalogued 630 retractions specifically due to e-mail fraud, whereby a fake peer-reviewer’s e-mail address is submitted. How does this work? An academic submits a paper to a journal. When the journal asks for the e-mail address of a suggested peer reviewer, the academic offers a fake address rather than a genuine one, which closes the loop between the author and the journal and eliminates genuine peer review. Back in 2000, only about 5% of retracted papers were due to such e-mail fraud.

Another area of malpractice is the duplication of results in different journals, not to be confused with plagiarism. Duplication gives undue weight to one strand of a scientific conversation within the literature. This means that when you try to conduct an analysis of a topic, you end up looking at publications that publish the same thing multiple times without adding value to the topic.

All knowledge is provisional

To assume a paper should be retracted because the results aren’t reproducible is odd, but it does occur. This shows that there is no perfect system for scholarly publishing, and that keeping tabs on retractions can help to uncover unsavoury behaviour among scientists.

Ultimately, this red-flag activity leads to stronger science, as researchers become aware of the potential downsides of being named and shamed as authors of retracted papers.

Enjoy the podcast!

 

Photo credit: SciencePOD.org

Data privacy: Should we treat data handling the same way we do our own health?

Increasingly, digital breadcrumbs are making it possible for others to track our every move. To illustrate what is at stake here, we need to travel back in time. In the pre-computer era, the ability to remain incognito made for great drama in black-and-white movies. It also opened the door to the perfect crime, without much risk that the perpetrator would get caught. Yet, when it comes to crime, invading people’s privacy could be justified, some argue, for the sake of the greater good and to preserve a stable society.

But now anybody can become the object of intense online and digital scrutiny, regardless of whether or not they are guilty of some nefarious crime. And there is a distinct possibility that digital natives may, in the not-so-distant future, take for granted that their every move and decision is being traced, without any objection.

It is not clear which is more worrying: that future generations might not even question that they are under constant digital scrutiny, or that our generation is allowing technology to develop further without the safety nets that could secure our privacy now and in the future, leaving the next generation without any hope of the privacy we once took for granted.

Health offers an insightful comparison. It may appear paradoxical, but our society appears much more concerned about preserving our physical health than the health of our digital anonymity. Indeed, new drugs are subjected—rightly so—to a very intense regulatory process before being approved. But new technology, and the way the data inherent to it is handled, is nowhere near as closely scrutinised. It simply creeps up on us, unchecked.

Despite protests from regulators that existing data privacy laws are sufficient, greater regulatory oversight would invariably affect the way data collection operates. Take the case of data used for research, for example. Experience has shown that even in countries where transparency is highly valued, such as Denmark, there have been deficiencies in obtaining consent for the use of sensitive personal health data in research, which recently created uproar. By contrast, the current EU regulatory debate surrounding the new Data Protection Regulation has the research community up in arms, for fear that too much data regulation would greatly disrupt the course of research.

As for our digital crumbs, it has therefore become urgent, at the dawn of the Internet of Things, to consider how best this data may be managed. Striking the right balance between finding applications with societal relevance and preserving people’s privacy remains a perilous exercise.

Do you believe digital natives are unlikely to be as concerned about their privacy?

Should we allow technology to further develop without implementing the necessary privacy safety nets?

Original article published on EuroScientist.com.

GDPR gives citizens control over their own data: An interview with Shawn Jensen

The data protection regulation GDPR includes exemptions to allow research on anonymised data. In this exclusive interview with Shawn Jensen, CEO of data privacy company Profila, Sabine Louët finds out about the implications of the new GDPR rules for citizens and for researchers. The regulation was adopted on 27th April 2016 and is due to enter into force on 25th May 2018. In essence, it decentralises part of data protection governance towards data controllers and the people in charge of processing the data.

As part of the interview, Jensen explains the exemptions that have been granted to certain activities, such as research, so that scientists can continue to use anonymised data for their work while ensuring the data privacy required by the law.

In fact, the regulation is designed to preserve the delicate balance between the need to protect the rights of data subjects in a digitalised and globalised world and the need to make it possible to process personal data for scientific research, as explained in a recent study by Gauthier Chassang, from the French National Health Research Institute INSERM. The study author concludes:

“While the GDPR adopts new specific provisions to ensure adapted data protection in research, the field remains widely regulated at national level, in particular, regarding the application of research participants’ rights, which some could regret. ”

However, Chassang continues,

“the GDPR has the merit to set up clearer rules that will positively serve the research practices notably regarding consent, regarding the rules for reusing personal data for another purpose, assessing the risks of data processing …” In addition, he continues, “for the first time, the GDPR refers to the respect of ethical standards as being part of the lawfulness of the processing in research” and “opens new possibilities for going ahead in the structuring of data sharing in scientific research with measures encouraging self-regulation development.”

Read the original article here…

How to live in a post-privacy world: An interview with Michal Kosinski

Is it possible that by giving up some privacy, we could have better health or cheaper car insurance rates?

Discussions on privacy were top of the agenda at one of the biggest technology trade fairs in Europe, CeBIT 2017 in Hannover, Germany. Indeed, social media are full of the many aspects of the lives of those who share their views with as little as the press of a ‘like’ button on Facebook.

Michal Kosinski, an invited speaker at CeBIT 2017 and assistant professor of organisational behaviour at Stanford Graduate School of Business, California, USA, shares an update on his latest work on predicting future behaviour from psychological traits inferred from the trail of digital crumbs, including pictures, that we share over the internet.

His latest work has huge implications for privacy. He believes human faces, available from pictures found on social media networks, can be used as a proxy for people’s hormonal levels, genes, developmental history and culture. “It is pretty creepy that our face gives up so much personal information.” He adds: “I would argue sometimes it is worth giving up some privacy in return for a longer life and better health.”

In this context, regulators don’t work in a vacuum, but they cannot guarantee absolute privacy. He explains that it is an illusion for people to strive to retain control over their own data. “The sooner as a society and policy makers, we stop worrying about winning some battles in a privacy war, and the sooner we accept, ‘ok we’ve lost this war,’ and we move towards how to organising society and culture, and technology and law, in such a way that we make the post-privacy world a habitable place, the better for everyone.”

In this exclusive podcast, Kosinski also discusses the constant struggle between top-down governance and bottom-up self-organisation, which leads to a constant trade-off in terms of privacy in our society. He gives an insightful example: the likes of Facebook, with their algorithms, would be uniquely placed to match people with the right job, or to detect suicides before they happen. However, this possibility raises questions about a level of invasion of people’s privacy that is not socially acceptable, even if it could solve some of society’s problems.

Finally, Kosinski gives another example where people’s privacy has been invaded for the purpose of changing their behaviour. Specifically, he refers to the car insurance industry, which has added sensors to cars to monitor drivers’ behaviour, thus breaching their privacy in exchange for lower premiums.

Original article published on EuroScientist.com.

Open Access sector moves slowly towards maturity

New figures show the relatively limited size and slow rate of uptake of the Open Access market.

Open Access (OA) continues to be the subject of discussion in the scientific community, as do debates about the need for greater levels of open access. However, the reality on the ground is not as clear-cut, and the adoption rate of OA is not as quick as its promoters would like it to be. At the recent STM Association conference, held on 10th October 2017 in Frankfurt, I presented the findings of a study by independent publishing consultancy Delta Think, Philadelphia, USA, on the size of the open access market. The numbers help unearth recent trends and the dynamics of the OA market, and they give a mixed picture. Although open access is established and growing faster than the underlying scholarly publishing market, OA’s value forms a small segment of the total, and it is only slowly taking share. With funders showing mixed approaches to backing OA, it might be that individual scientists have a greater role to play in effecting change.

Size and growth of the current open access market

New figures on the size and growth of the current open access market have recently been published by Delta Think’s Open Access Data & Analytics Tool, which combines and cross-references the many and varied sources of information about open access to provide data and insights on the significant number of organisations and activities in the OA market. They give a sense of how open access is faring, and what the future has in store for its adoption.

In 2015, the global open access market is estimated to have been worth around €330m (US$390m), and it grew to around €400m ($470m) in 2016. This growth rate is set to continue, meaning that the market is on course to be worth half a billion dollars globally going into 2018.
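As a back-of-the-envelope check (my own arithmetic based only on the figures quoted above, not on Delta Think’s model), extrapolating the 2015-to-2016 growth rate by one more year does indeed take the market past the half-billion-dollar mark:

```python
usd_2015, usd_2016 = 390e6, 470e6        # market size in USD, from the figures above
growth = usd_2016 / usd_2015 - 1         # roughly 0.205, i.e. ~20% year on year
usd_2017_estimate = usd_2016 * (1 + growth)
print(f"growth: {growth:.1%}, 2017 estimate: ${usd_2017_estimate / 1e6:.0f}m")
# growth: 20.5%, 2017 estimate: $566m
```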

Size numbers of open access market in context

To put the sizing numbers into context, the growth rate in OA revenues is much higher than the growth in the underlying scholarly publishing market, which typically grows by a few percent (low to mid-single digits) per year.

However, the OA market’s share of value is low compared with its share of output. Just over 20% of all articles were published as open access in 2016. Meanwhile, they accounted for between 4% and 9% of total journal publishing market value, depending on how the market is defined. The OA numbers cover Gold open access articles published in the year, defined as articles for which publication fees are paid to make content immediately available; they exclude such things as articles deposited in repositories under an embargo period or articles simply made publicly accessible.
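Those two shares can be combined into a rough indicator of relative value per article. This is my own back-of-the-envelope arithmetic, not a figure from the Delta Think study:

```python
output_share = 0.20               # OA share of all articles published in 2016
for value_share in (0.04, 0.09):  # OA share of total journal market value
    relative_value = value_share / output_share
    print(f"an average OA article generates ~{relative_value:.0%} "
          f"of the revenue of the average article overall")
# prints ~20% and ~45%
```

That gap between output share and value share is the point picked up again below, where the sustainability of the open access publishing model is discussed.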

Uptake of open access

Open access is taking market share only slowly. Taking the 2002 Budapest OA Initiative as a nominal starting point for market development, the data suggest that it has taken over 17 years for open access to comprise one-fifth of article output and, at best, one-tenth of market value.

Funders remain the key driver of change in the OA market. When choosing which journals to publish in, researchers continue to value factors such as dissemination to the right audience, quality of peer review, and publisher brand over whether their work will be made OA. Numerous studies have shown this, and they suggest similar attitudes towards both journals and monographs.

Movement towards OA therefore happens where funders’ impetus overcomes researchers’ inertia. Funders’ appetites for OA in turn vary by territory, so the outlook for Open access growth of market share remains mixed. Funders in the EU have the strongest mandates, but many in the Far East, for example, incentivise publication based on Journal Impact Factor, regardless of access model, as they seek to enhance reputations and advance careers.

Future predictions of open access

The relatively low value of open access content compared with its share of output poses interesting questions about the sustainability of the open access publishing model. To some, the data suggest that open access is cost-effective and could lower the systemic costs of publishing. By contrast, others suggest that we need to be realistic about systemic costs and global funding flows.

Further, although open access is clearly entrenched in the market, at current rates of change, it will be decades before open access articles and monographs form the majority of scholarly output. Opinions vary as to whether the transition to Open access is frustratingly slow or reassuringly measured.

The current data suggest that the discussion and debate around open access will continue as they have been. For the average researcher, it is therefore business as usual and so it might be that individual scientists have a greater role to play now to shift the balance, regardless of whether funders nudge them in the OA direction.

 

Dan Pollock

Daniel is Director of Data & Analytics at Delta Think, a consultancy specialising in strategy, market research, technology assessment and analytics that supports scholarly communications organisations, helping them successfully manage change.

Original article published on EuroScientist.com.

Illustration adapted from a photo by Astaine Akash on Unsplash

Podcast: How open science could benefit from blockchain

Experts reflect on the implications of blockchain for research.

Reporting from APE 2018, a recent conference gathering the who’s who of scholarly publishing in Berlin on 16th and 17th January 2018, EuroScientist Editor Sabine Louët interviews several experts for their views on how blockchain technology will change the world of scientists.

First, we hear from Lambert Heller, head of the Open Science Lab at TIB Hannover, the German national library for science and technology, who gives his perspective as a digital librarian. He paints the bigger picture of how blockchain is going to help science become more open and remove the current bottlenecks in the scientific endeavour, by increasing the connectivity, accessibility and storage of scholarly objects, such as research papers and databases, through metadata and interplanetary file systems.

Second, Amsterdam-based Joris van Rossum, director of special projects at Digital Science, London, UK, highlights key findings of a recently published report he has written on Blockchain for Research. In particular, he outlines the many aspects of the research endeavour that could benefit from tracking through blockchain technology, which could take the form of a data layer underneath the current research ecosystem.

Then comes Berlin-based Sönke Bartling, founder of Blockchain for Science, whose mission is ‘to Open up Science and knowledge creation by means of the blockchain (r)evolution’. He speaks about how blockchain could change the way science is funded, via the creation of cryptocurrencies.

Finally, we hear from Eveline Klumpers, co-founder of Katalysis, a start-up based in Amsterdam, The Netherlands, that aims to redefine the value of online content and focuses on developing blockchain solutions for the publishing industry. She gives some concrete examples of how blockchain technology can store transparent, immutable smart contracts defining how content should be shared. This approach can give power back to the authors of original work and help them monetise it. It could also help ensure reproducibility in research.
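For readers unfamiliar with the underlying idea, the sketch below illustrates, in a deliberately simplified way, how an immutable, hash-linked record of content-sharing terms can be built and verified. It is a conceptual illustration only, not Katalysis’s implementation (which is not public), and real smart contracts run on blockchain platforms with far richer capabilities; all names and fields here are hypothetical.

```python
import hashlib
import json

def add_record(chain, record):
    """Append a content-licensing record, linking it to the previous entry
    by hash so that any later tampering becomes detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def verify(chain):
    """Recompute every hash and check each link to the previous entry."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            json.dumps({"record": block["record"], "prev_hash": block["prev_hash"]},
                       sort_keys=True).encode()
        ).hexdigest()
        assert block["hash"] == expected
        if i > 0:
            assert block["prev_hash"] == chain[i - 1]["hash"]
    return True

chain = []
add_record(chain, {"author": "Jane Doe", "work": "article-42",
                   "terms": "attribution required, non-commercial reuse"})
add_record(chain, {"author": "Jane Doe", "work": "article-42",
                   "terms": "syndication granted to outlet X, fee due to author"})
print(verify(chain))  # True
```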

Original article published on EuroScientist.com.