Posts by SciencePOD Editor

Welcome to the SciencePOD blog! Find out what is happening at the interface of digital publishing and science communication. Making sense of science and innovation through clear and compelling messages aimed at wider targeted audiences is both challenging and exciting. Not only does it have implications for scientists and innovators themselves, but also for society at large. Getting right the complex ideas buried in the scientific literature and technical documentation is crucial. SciencePOD brings you the peace of mind of access to the best professional teams, every time, to deliver quality content. In this blog, we bring you the latest topics in content marketing, digital media, science publishing, open science and open data, and science in society.

Dave Kochalko interview – how technology could restore trust in science publishing

Speeding up the research process, according to ARTiFACTS, requires turning research communication on its head: tracking, recording and sharing findings in small fragments and in real time, even before publishing them as full journal articles.

Bashing science publishers has gone mainstream. Today, science publishing and publishing practices are often described using a lexicon reflective of a lack of trust: predatory journals, publication bias, adverse incentives, peer-review problems, rip-off publishing, and the alliterative science-stymying paywall. Add to this the ‘tug of war’ over Open Access and the potential effects of the ambitious initiative Plan S, and one has to ask: how can trust in science publishing be restored?

This issue was discussed at this year’s NFAIS conference, on 14th February 2019, by Dave Kochalko, Co-Founder and Chief Academic Officer at Cambridge, Massachusetts-based scholarly publishing start-up ARTiFACTS. In a podcast interview with SciencePOD, Kochalko talks about finding solutions to this lack of trust. He believes the answer lies in refocusing on the science and scientists themselves. Specifically, he acknowledges that published work in the form of research papers continues to be valuable and relevant. However, he sees too much emphasis still being placed on publishing in high-impact journals. Instead, he believes, the emphasis should be on whether the scientist and the research team are practising “good science.”

Kochalko defines this good scientific practice as sharing results and research that “can be verified, reproduced, built upon by other researchers.” And, he adds, “doing so in a more timely fashion.” This means that instead of attributing the parenthood of an idea only at the research-paper stage, the genesis of ideas should be recorded and attributed in a far more granular manner, to much smaller chunks of findings, as soon as they are formulated in a lab book, for instance.

ARTiFACTS allows scientists to “do three fundamental, yet powerful things,” Kochalko points out. First, to establish proof of existence and authorship, and to confirm the provenance of their work, at any time. Second, to protect and manage the IP they have created while facilitating knowledge sharing. Third, to receive valid, tamper-proof attribution and credit at any point, for any type of research output.

In doing so, scientists can make their findings available throughout the entire research process, not just after a protracted publication process. This is facilitated by ARTiFACTS’ use of graph-database and distributed-ledger (or blockchain) technology. The result, according to Kochalko, will be a “deep historical archive of published and discovered findings” that is accessible to the broader scientific community.
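The tamper-evidence behind such proof-of-existence claims can be illustrated with a toy hash chain. The sketch below is hypothetical, not ARTiFACTS’ actual implementation (the `Ledger` class and its fields are invented for illustration): each recorded finding is hashed together with the hash of the previous entry, so any later change to an earlier record breaks the chain and is detectable.

```python
import hashlib
import json
import time

def record_hash(payload, prev_hash):
    """Hash a research record together with the previous entry's hash."""
    blob = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

class Ledger:
    """A minimal append-only chain of timestamped research records."""

    def __init__(self):
        self.entries = []

    def add(self, author, finding, timestamp=None):
        payload = {
            "author": author,
            "finding": finding,
            "timestamp": time.time() if timestamp is None else timestamp,
        }
        # Each new entry commits to the hash of the one before it.
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        self.entries.append(
            {"payload": payload, "prev": prev, "hash": record_hash(payload, prev)}
        )

    def verify(self):
        """Recompute every hash; tampering with any earlier record breaks the chain."""
        prev = "genesis"
        for entry in self.entries:
            if entry["prev"] != prev or record_hash(entry["payload"], prev) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

A real distributed ledger adds consensus across many independent nodes, but the chained-hash property shown here is the core of establishing proof of existence, authorship and provenance.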

Science publishers are coming around to this refocus on the scientist, and to the tools and technologies that can help make it happen. “We’ve been very pleased that the reaction from publishers has moved from scepticism to thoughtful curiosity,” says Kochalko. This might be taken as a sign that the industry has acknowledged that restoring trust, not only in science publishers but in science itself, requires turning attention away from a journal brand towards what Kochalko describes as the “creative discoveries and contributions of the scientist.”


Photo credit: Dave Kochalko

Start-up Survey Debunks Myths About the Science Publishing Innovation Ecosystem

Yvonne Campfens discusses some of the surprising findings of her start-ups study with Sabine Louët, SciencePOD CEO, in a podcast

The first study of its kind on the start-up ecosystem in the science publishing industry sparked much discussion at the APE2019 conference, held on 16th January 2019, in Berlin, Germany. The author of the study is Yvonne Campfens, an independent science publishing consultant based in Amsterdam, The Netherlands. Focusing on 120 start-ups, 92 of which are independent, the study debunked some of the myths around the innovative science publishing ecosystem.

Campfens felt there were a lot of myths surrounding start-ups in publishing. “I was really curious, because I had seen so many new names over the last decade, and I was wondering what had happened with all these new players,” she says. “At the same time there was a lot of debate about distribution, developments, and trends in the industry,” she adds, “but very little facts and data.” This is what motivated her to take a closer look.

She managed to debunk some of the myths surrounding current innovative companies in the field. “When a start-up is taken over, there is a lot of media attention, which gives some people the impression that this is all there is. So, I wanted to know what was really going on.” Specifically, she examined whether these start-ups existed independently from larger publishers.

First, the study reveals that there is now a well-established community of independent publishing start-ups providing innovation in the processes of publishing and content creation for the scholarly communication industry. Their focus ranges from providing tools for the discovery of research information to offering solutions to prepare and analyse research data. In addition, a number of start-ups deliver technology-based solutions to assist in writing about research. SciencePOD, which took part in the study, fits in the latter category.

Campfens also delved into the details of how, and from where, the start-ups’ funding came. She found that 10 start-ups raised $25 million or more, another 7 raised $10 million or more, and 29 companies raised $1 million or more. (See bar graphs below for a visual.)

 

Finally, she analysed when, during their lifecycle, start-ups were acquired, and by whom. Of the 92 independent start-ups, she found that less than a quarter (21%) were acquired by other companies in 2018. It took, on average, 5 years for a start-up to be acquired. One of the most significant takeaways from her study, she says, is that “the acquired start-ups were definitely not taken over by STM players.” Indeed, the study reveals that companies that are not players in the science publishing industry acquired the majority (67%) of the 27 start-ups that were acquired.

To provide first-hand comments from entrepreneurs themselves, Campfens invited a panel of start-up representatives. These included Arne Smolders, CEO of Academiclabs.co, based in Ghent, Belgium. The company is a social media network linking academic and industry researchers in the life sciences sector.

Another panellist, Niklas Dorn, CEO of video streaming solution Filestage.io, based in Stuttgart, Germany, explained how scientific meetings could benefit from his solution.

Then, systems architect Roman Gurinovich, of SCI.ai, based in Tallinn, Estonia, outlined the benefits of his AI tool to validate scientific hypotheses from the published literature.

Last but not least, Sami Benchekroun, CEO and Co-founder of Morressier, based in Berlin, Germany, explained the concept of his publishing platform, where researchers can share and showcase their early work at the pre-publication stage, with the aim of fostering greater collaboration among scientists.

If you liked this, please remember to like, share, and subscribe. Thank you very much and we hope to see you next time.

 

Andrew Burgess speaks about the use of AI in the content creation sector at ConTech

“The conversation is the relationship”, according to Management Consultant Andrew Burgess, who provides strategic insight around AI. In an interview recorded at ConTech 2018, held at the end of November in London, UK, Burgess talks with SciencePOD about the potential of AI in the content creation sector.

Should professionals in science publishing involved in creating content be concerned about their jobs as AI solutions become more prevalent?

“It will impact jobs but […] it’s going to be slower than people think. The things AI can do today are exciting and valuable, but there are limitations.” These limitations prevent AI from replacing people entirely.

Rather, AI could augment people’s capabilities. Burgess continues: “AI’s very good at providing additional information to people, extracting value from data that humans wouldn’t necessarily be able to do, to help them make better decisions around their business… These are giving people additional capabilities that they wouldn’t have had before.”

What type of applications could help professionals involved in creating content augment their capabilities?

For instance, “in translation services, we’ve gone from a very structured approach to AI within natural language to actually using machine learning, where, as the name suggests, it’s the machines that are learning how to do the translation based on the data that you’re giving them,” Burgess notes. “And that gives you so much wider capabilities than you’ve ever had before… That’s where the real acceleration will be with the development of AI over the next 5-10 years.”

There are still concerns that AI may become harmful to humans. Where does the threat lie?

“I think the biggest risk is the people that are building and using the AI itself, all of this is just a tool that we use. As long as we plan right and don’t have a bias in the data, we don’t have opacity in the decisions the AI is making… (these) are really important things. That’s really up to humans to control.”

Enjoy the podcast and thanks for listening!

Preferring inductive over deductive reasoning makes science communication more effective

Outside their speciality, scientists need inductive communication from their colleagues

Scientists are, all too often, notoriously bad communicators. Why is this? These are intelligent and thoughtful people, who consider carefully and think deeply about what they do. I fear the problem lies with the rest of us, non-scientists, who quite simply don’t do that. Either because we don’t have the time. Or we don’t have the capability. Or because our thinking processes are aligned with very different, much more urgent matters. Stopping and listening to people who stop and think is not so easy. We think in other ways.

There is a name for these things: deductive versus inductive reasoning. And understanding how they work is essential for communicating scientific ideas to the wider public, and even to scientists in other disciplines.

Deductive reasoning was codified by the great French philosopher Descartes. The important thing to know is that, to do it, he locked himself in a small room for a long time. He did away with all that others had told him and deduced the nature of the world from the facts he could observe, building his view of the world one fact at a time into a concrete, objective vision. This was an enormously powerful philosophical tool. It shunted aside alchemy, with its bizarre search for essences, and replaced it with science.

Science and deductive reasoning took off like a rocket. Within little more than a century, another French scientist, Lavoisier, was demonstrating that diamonds could be burnt with enough heat, and that criminals could be painlessly executed. Yet deductive reasoning remains a slow, painstaking, careful process.

Scientists are concerned with exactness and precision because the world works in an exact and relentlessly precise way. Their communications are no different, but the rest of us don’t have the time to lock ourselves in small rooms to appreciate this all fully.

The rest of us, non-scientists, use shortcuts. We don’t have the time or the energy to dedicate to building each problem from the ground up or to observe all the relevant facts. So, we use a mess of inherited, inconsistent and simplified methods – learnt as children, from bitter experience or just made up – to muddle through. This is called inductive reasoning.

Inductive reasoning gives you an answer, but not the answer. An answer is weak or strong, not wrong or right. The point is that an answer is useful.

‘I’ve only seen white swans, so all swans are white’ is a useful conclusion. It isn’t correct (some swans are black), but it will do you 90% of the time. It’s a bit sloppy; after all, did you ask yourself whether you’ve seen enough swans to reach such a conclusion? You didn’t, but you still have a useful answer.

This is different for scientists. When a scientist writes or speaks about their subject, they must give exactly the right answer, painstakingly deduced. Their self-esteem, professional standing, and even their consciences demand it. They use specific terminology not to obscure what they are saying but rather to be more precise.

However, in general communication with the lay public, or even between scientists in different fields, that terminology doesn’t help matters. The layperson doesn’t need to know that a scientist’s answer is good in only 96% of cases and not in the other 4%, where complications set in. For the layperson, in most cases, 96% will do.

A scientist once said to me, ‘[X] statement, while generally true, is wrong in nearly every particular’. For the layperson, generally true is good enough – the core of the idea, the simple takeaway is enough. This is also true of scientists operating outside their speciality.

It’s not often appreciated that Newton, author of Philosophiæ Naturalis Principia Mathematica, the deductive treatise which explained the orbits of the planets, was also a prolific alchemist, steeped in its inductive reasoning.

Outside their speciality, other scientists need inductive communication from their colleagues.

After all, how much time do you have to look at swans?

Jonathan Mills, CFO, SciencePOD

 

Photo credit: Unsplash user Kasturi Laxmi Mohit

AI-driven personalisation to solve the battle for attention

Photo credit: Rawpixel on Unsplash

Everyone seems to be suffering from content indigestion as news competes with native advertising for readers’ attention. So what are start-ups doing to fix this?

During the weeks running up to the biggest toy-selling season, everyone knows how cartoons and kids’ movies are used to market branded toys. This is an example of content developed for content marketing purposes. Toy companies use every trick in the digital marketing toolbox to personalise messages aimed at influencing parents and grandparents, since they are the ones with the purchasing power, after all.

With such content marketing, the traditional boundaries between content designed to sell products or raise brand awareness and news content designed to inform are blurred. It is, therefore, not always easy for audiences to distinguish independent news media content from native advertising emulating news-style content. People are starting to feel they are suffering from content indigestion, while the battle for their attention rages on between media and advertisers.

To address such concerns, digital publishing is undergoing a serious shift towards greater personalisation. At a recent event on the Future of Content, organised by Irish industry support agency Enterprise Ireland, in Dublin, Ireland, on 9th November 2018, participants presented various perspectives across the content creation spectrum. Advocates of audience-led personalisation opposed the views of marketers pushing for customer microsegment-led personalisation.

Audience-led content selection

Today’s readers want a choice in how content is served on their multiple screens. This is particularly so for news. Yet, the newly relaunched Google News app is designed to deliver news related to readers’ current location, regardless of whether they have an interest in other types of news. In contrast, companies like KinZen aim to empower each reader to take charge of the kind of news they receive. The company has mapped the online journey of a typical reader and is taking stock of how their audience is developing new routines to get informed.

KinZen Co-Founder Mark Little explains that their philosophy is based on giving readers options to manage their own news source feeds, being mindful of not exposing them to unwanted content such as ads, or infringing on their privacy. For readers, this means accessing different kinds of news content, related to their professional or personal lives, at different times of the day, on different devices. Such a solution suggests tilting the traditional balance of media funding towards subscription models and away from advertising revenues.

It remains to be seen whether readers are prepared to pay for hyper-personalised news and for the privilege of not being distracted by adverts. However, there are signs that the subscription model is working.


Going one step further and challenging the way news content itself is created are Rob Wijnberg and Ernst Pfauth, founders of a new kind of subscription-only news organisation called The Correspondent. In their manifesto, they condemn the way online news today is no longer designed to inform audiences. Rather, they complain, ‘news organisations prioritise the needs of advertisers over the needs of readers.’

In a recent crowdfunding campaign to fund their English edition, launched on 14th November 2018, they condemned the lack of media engagement with meeting the needs of audiences, stating that ‘all-too-often the news talks to you, rather than with you.’ Their model, piloted in Dutch as De Correspondent, aims to change the way news is created by involving readers in the process.

Clearly, this extension of the editorial process, by including contributions from beyond the strict confines of the traditional newsroom, echoes SciencePOD’s collaborative approach to creating content — be it for journalistic, educational or content marketing purposes. Through our platform, we augment the teams of our media, publishing or industry clients by giving them access to the know-how of our community of journalists and editors, with the right scientific and technical expertise, the right editorial skills, and based in the right location.

Top-down analytics for news and creative content

Clearly, this bottom-up personalisation approach, where the user regains power over what news and content is consumed, starkly contrasts with the majority of ad-fuelled approaches of media publishers. Harnessing ever more sophisticated data analytics methods and web technologies, publishers capture individual readers’ every move before serving them the content they believe is most appropriate, linked with matching ads.

For example, Paul Quigley, CEO of Dublin-based NewsWhip, shared how his company provides data analytics on how readers engage with news content, giving insights into the time of day, the level of popularity of stories, and the social media platforms on which readers access them.

Yet, the personalisation trend is also filtering through to the way advertisers display both news and ads used for content marketing. Solutions like Dublin-based Readersight, presented by founder Anthony Quigley, help publishers analyse reader engagement in real time and suggest ways of improving it by personalising how content is displayed, to optimise advertising revenues.

Another targeted-advertising expert, Barry Nolan, Chief Marketing Officer at San Francisco-headquartered Swrve, says they can deliver targeted, personal and optimised adverts at the ‘perfect’ time and through the ‘perfect’ channel. Every interaction with what they call micro-segmented consumer groups is carefully orchestrated within a publisher’s app. The approach is designed to bypass an audience’s negative response. As he puts it: ‘the more you communicate with customers, the more you train them to ignore you.’

This shift is also taking place in TV advertising, which is becoming highly targeted. Ronan Higgins, Founder and CEO of TVadSync, shared how they can now capture real-time data about what individual households (identified by their IP address) watch on their smart TV: what time they are watching, for how long, and when they switch channels.


E-commerce links to content marketing

As all sectors of the economy migrate towards e-commerce, start-ups are directly integrating advertising or marketing content sites with e-commerce platforms.

Beyond news, optimised access to content is also happening in the creative arts, for example in movies. Oliver Fegan, Co-Founder of Usheru, based in London, UK, explained how the company aims to connect the marketing of new movies to the online purchase of cinema tickets. Their personalised solution comes complete with an insights platform, giving film marketers a real-time view of the sales success of film marketing campaigns.

Meanwhile, ChannelSight Co-Founder and CEO John Beckett talked about a widget his company developed, which cannot be blocked by ad blockers and can link products described in online content to online shops where they can be purchased.

As more and more personalisation waves break over the branded content advertising landscape, audiences are increasingly influenced by very sophisticated marketing campaigns. Some, however, may want to pay for the privilege of not becoming a pawn of marketing algorithms.

Sabine Louët

Founder and CEO SciencePOD

Digital publishers under the spell of web automation

What keeps digital publishers awake at night? This question has more answers than we have space for in this column. However, for many, having the right set of automation tools is essential. This is as true for digital marketing as it is for creating quality content. Publishers from mainstream digital media organisations could not have agreed more at DiG Publishing Lisbon, held 3-5 October 2018.

While much of the focus went toward advertising, analytics and monetisation, content remained at the top of the agenda. Start-ups represented around the content creation theme included Market4News, whose CEO Igor Gómez Maneiro explained how his platform helps to locate suitable mobile-journalism reporters, while Justin Varilek, Founder and CEO of HackPack, presented his online community of journalists. Later, Niko Vijayaratnam, Founder of Authored.io, shared his online authoring tool, which shows promise despite its relatively early stage of development.

Meanwhile, SciencePOD’s CEO, Sabine Louët, offered a sneak peek at the company’s digital publishing platform, which helps to automate the ordering and creation of quality content telling the story of research and innovation. The idea is to translate the complex jargon of science and technology into clear, concise, compelling stories to better influence targeted audiences.

For enthusiasts, and perhaps those in need of cost-effective content marketing strategies, Philipp Nette, Founder and CEO of Cutnut, has developed a centralised social media content creation and distribution platform for marketing campaigns. Meanwhile, Alexandra Steffel, Editor-in-Chief at Intellyo, offered a content planning and creation tool for enterprises.

Another favourite, Locationews, was certainly one for the books. Created by Co-Founder Julius Koskela, the site plots news onto a global map, enabling readers to selectively find the news most relevant to their preferred location. Finally, Simply Aloud Co-Founder Davide Lovato talked about embedding voice versions of articles into news sites. For now, the recordings are made individually to guarantee the highest audio quality, rather than via automated text-to-speech systems.

Why the need for such a wide variety of content creation, distribution and monitoring tools? Wouldn’t these complicate newsrooms?

“We are reaching an age of the internet where enabling technologies automating content creation and distribution are maturing,” says Louët. “Therefore, making the most of these standard technologies is within everyone’s reach.” These technologies make it possible for digital publishers to unleash their imagination and bring more compelling content and news to their readers.

If you enjoyed this and are eager to see more, please be sure to like, subscribe and share.

Also, to hear more about specific “Ask the Editor” content, leave a message in the comments below.

Cheers!

Photo credit: Pixabay user KJ

Podcast, Ivan Oransky, RetractionWatch: Retractions are like red flags highlighting infractions in science

Photo credit: SciencePOD.

 

Keeping a record of retractions of research publications has made it easier for journalists to dissect the infractions occurring within science publishing. “What I see that’s important, is not just coverage of individual cases, people are actually trying to put this all together. They’re filing public record requests because that’s something we’re not thinking of.” That’s according to RetractionWatch co-founder Ivan Oransky, who started the initiative as a blog in 2010 with a single purpose in mind: making the peer-review process more transparent.

Listen to the edited version of the recording by SciencePOD.org of a recent presentation Oransky made at Trinity College Dublin, Ireland. This event, held on 20th June 2018, was co-organised by the Irish Science & Technology Journalists’ Association and the Science Gallery Dublin.

Oransky recalls the motivation that originally animated him and co-founder Adam Marcus in highlighting the mishaps of the peer-review process within academic communities. “Those of you who may be familiar with PubMed, Medline or Web of Science, you go to any of those you’ll find under 6,000 (retractions)… we [at Retraction Watch] have 3 times as many,” notes Oransky. Today, the RetractionDataBase.org site holds 17,500 retractions, and it is still growing. While retractions are rare, Oransky believes there is a screening effect attached to them.

For a sense of scale, the two countries with the most retractions are China and the US. To provide an in-depth look, Oransky and his team compiled a leaderboard. Each of these instances is linked with a comprehensive story following the original publication.

Many varieties of malpractice

Oransky highlights a few of the problems surrounding retractions found in the peer-review community. At the time of recording, RetractionWatch had catalogued 630 retractions specifically due to e-mail fraud, in which a fake peer reviewer’s e-mail is submitted. How does this work? An academic submits a paper to a journal. When the journal asks for a peer reviewer’s e-mail address, rather than supplying a genuine one, the academic offers a fake e-mail, which closes the loop between the author and the journal, thus eliminating genuine peer review. Back in 2000, only about 5% of retractions were due to e-mail fraud.

Another area of malpractice is the duplication of results in different journals, not to be confused with plagiarism. Duplication gives undue weight to a scientific conversation within the literature: when you try to conduct a systematic analysis of a topic, you may be looking at the same results published multiple times without adding value to the topic.
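Spotting such duplication is largely a text-similarity problem. The sketch below is an illustrative, hypothetical approach (not how RetractionWatch or any publisher actually screens submissions): it compares word k-gram “shingles” of two abstracts using the Jaccard index and flags pairs above a threshold as likely duplicates.

```python
import re

def shingles(text, k=3):
    """Lower-case word k-grams ('shingles') of a text."""
    words = re.findall(r"[a-z]+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets (1.0 when both are empty)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def looks_duplicated(abstract_a, abstract_b, threshold=0.6):
    """Flag a pair of abstracts as a likely duplicate publication."""
    return jaccard(shingles(abstract_a), shingles(abstract_b)) >= threshold
```

At the scale of millions of papers, production systems would use something like MinHash sketches over character shingles rather than exact set comparison, but the underlying idea is the same.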

All knowledge is provisional

To assume a paper should be retracted simply because its results aren’t reproducible is odd, but it does occur. This shows that there is no perfect system for scholarly publishing, and that keeping tabs on retractions can help to uncover unsavoury behaviour among scientists.

Ultimately, this red-flag activity leads to stronger science, as researchers are aware of the potential downside of being named and shamed as authors of retracted papers.

Enjoy the podcast!

 

Photo credit: SciencePOD.org

Champagne Owes Its Taste To The Finely Tuned Quality Of Its Bubbles

Based on this summary, this story was picked up by the New York Times as A Universe of Bubbles in Every Champagne Bottle.

What provides the wonderful aromas is a long neuro-physico-chemical process that results in bubbles fizzing at the surface of champagne

Ever wondered how the fate of champagne bubbles, from their birth to their death with a pop, enhances our perception of aromas? These questions, which are relevant to champagne producers, are the focus of a special issue of EPJ Special Topics, due to be published in early January 2017, celebrating the 10th anniversary of the publication. Thanks to scientists, champagne producers are now aware of the many neuro-physico-chemical mechanisms responsible for aroma release and flavour perception. The taste results from a complex interplay between the level of CO2 and the agents responsible for the aroma, known as volatile organic compounds, dispersed in champagne bubbles, as well as temperature, glass shape, and bubbling rate.

In the first part of the Special Topics issue, Gérard Liger-Belair from CNRS in Reims, France, has created a model describing, in minute detail, the journey of the gas contained in each bubble. It starts from the yeast-based fermentation process, which creates CO2, and goes all the way to the nucleation and rise of gaseous CO2 bubbles in the champagne flute. It also covers how the CO2 within the sealed bottle is kept in a finely tuned equilibrium, before moving on to the fascinating cork-popping process.

The second part of this Special Issue is a tutorial review demystifying the process behind the collapse of bubbles. It is mainly based on recent investigations conducted by a team of fluid physicists from Pierre and Marie Curie University, in Paris, France, led by Thomas Séon. When a champagne bubble reaches an air-liquid interface, it bursts, projecting a multitude of tiny droplets into the air, creating an aerosol containing a concentration of wine aromas.

References

G. Liger-Belair and T. Séon (2017), Bubble Dynamics in Champagne and Sparkling Wines: Recent Advances and Future Prospects, European Physical Journal ST, 226/1, DOI 10.1140/epjst/e2017-02677-8

G. Liger-Belair (2017), Effervescence in champagne and sparkling wines: From grape harvest to bubble rise, European Physical Journal ST

T. Séon and G. Liger-Belair (2017), Effervescence in champagne and sparkling wines: From bubble bursting to droplet evaporation, European Physical Journal ST

Illustration

Caption: Flower-shaped structure, frozen through high-speed photography, found during the collapse of bubbles at the surface of a champagne flute.

Photo credit: Gérard Liger-Belair

 

EPJ

Originally published in EPJ via SciencePOD


Infographic with Microbide.com

Here is one of the most recent examples of an infographic created by SciencePOD for Microbide.com. It focuses on the reduction of viral load in the presence of whole blood by a new antimicrobial disinfectant.

SciencePOD microbide.com infographic example

An Interview with Frontiers Science Hero: Christian Voolstra

As a specialist in coral reef genomics, Christian Voolstra is a Frontiers Review Editor for Frontiers in Microbiology and an Associate Editor for Frontiers in Marine Science. He is currently an Associate Professor of Marine Science in the Biological and Environmental Science and Engineering Division at King Abdullah University of Science and Technology (KAUST), working from the Red Sea Research Centre in Thuwal, Saudi Arabia.

 

Frontiers Science Hero: Christian Voolstra from Frontiers on Vimeo.