In the course of this open brainstorming on the future of Science 2.0, there is a clear risk of techno-optimism (or technological solutionism).
In this post, we look at the dark side. What could go wrong? To do so, we extrapolate from what happened in other related domains.
For instance, our prediction about the unbundling of science (data production and publication separated from articles and from reputation-measurement services; individuals vs institutions; articles vs journals) was envisaged as a liberation from current lock-ins. Data would flow freely from researcher to researcher, and from papers to data repositories, through interoperable open formats.
What is happening on the web today tells a different story. Certainly, unbundling weakened the old gatekeepers, such as telecom providers, newspapers and music labels. But rather than a fully anarchic, interoperable economy based on open standards and open APIs, new gatekeepers and walled gardens emerged under the name of “platforms”: Apple, Facebook, Google, Amazon. Some even said that “the web is dead”. Google's recent abandonment of RSS in order to favour GooglePlus reflects this trend. As Wolff puts it in the same feature, “chaos isn't a business model”.
Even when interoperability is ensured technologically, lock-in is ensured by network effects, preferential attachment and ownership of personal data. The Internet, the world wide web, citation networks and many social networks are all scale-free networks whose degree distributions follow a power law. In other words, the rich get richer.
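The “rich get richer” dynamic can be made concrete with a minimal preferential-attachment simulation (a Barabási–Albert-style model, sketched here for illustration only; the numbers are not from any real platform). Each new node links to an existing node with probability proportional to that node's current degree, and a small elite of hubs ends up holding a disproportionate share of all links:

```python
import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network where each new node attaches to an existing node
    with probability proportional to that node's degree.
    Returns a dict mapping node -> degree."""
    random.seed(seed)
    # Start with two connected nodes. 'stubs' lists each node once per
    # link it has, so a uniform draw from it is degree-proportional.
    stubs = [0, 1]
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        target = random.choice(stubs)   # rich nodes are drawn more often
        degree[new] = 1
        degree[target] += 1
        stubs.extend([new, target])
    return degree

deg = preferential_attachment(10_000)
ranked = sorted(deg.values(), reverse=True)
top_share = sum(ranked[:100]) / sum(ranked)
print(f"top 1% of nodes hold {top_share:.0%} of all links")
```

Even though every node enters the network with exactly one link, early and lucky nodes accumulate most of the connections, which is precisely the mechanism that lets a platform entrench itself without any technical lock-in.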
So it is possible, and indeed likely, that whether through natural development or the invisible hand of managers, future Science 2.0 will not be totally unbundled and fully interoperable. Instead, it will be divided into walled gardens. Already we see platforms such as Mendeley, Google Scholar, ResearchGate and Figshare extending their services in what could be considered a kind of vertical integration. For instance, they all try to gather your publications in one place and act as your academic identity.
Just as the free web is damaging newspapers, so openness will weaken the existing publishing powerhouses, which, incidentally, are one of Europe's strengths. New players will outcompete the “European Champions” of scientific publishing.
Future Science 2.0 will then be platform-based. New players will integrate the value chain and build walled platforms. Amazon, for example, could build a platform around the Kindle for scientific publishing, including reputation management. Based on the unique data it holds about what people read and highlight, it would be able to lock in researchers and provide fine-grained, real-time reputation metrics based on what people download, read and highlight. It would enable direct publishing (it already does) and could even provide a scientific crowdsourcing platform for citizen science, built on the Mechanical Turk.
Can the Kindle do to scientific publishing what the iPod did to music?
Or it could be data publishers such as Figshare, or reference-management systems such as Mendeley. In any case, the lock-in will be based on ownership of researchers' personal data: what they read, cite and highlight, what data they gather, what they analyse and publish. Different platforms will provide different, competing reputation measures and identities. Imagine a data-publication service telling you: “Researchers who analysed this dataset also analysed these others”.
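That “also analysed” feature is simple co-occurrence recommendation over usage logs, which is exactly why whoever owns the logs owns the lock-in. A minimal sketch (all researcher names and dataset IDs below are invented for illustration, not drawn from any real platform):

```python
from collections import Counter

# Hypothetical usage logs: which datasets each researcher has analysed.
logs = {
    "alice": {"genome-A", "survey-X", "climate-1"},
    "bob":   {"genome-A", "climate-1"},
    "carol": {"genome-A", "survey-X"},
    "dave":  {"survey-X", "climate-2"},
}

def also_analysed(dataset, logs, top_n=3):
    """'Researchers who analysed this dataset also analysed...':
    rank other datasets by how many of this dataset's users touched them."""
    co = Counter()
    for used in logs.values():
        if dataset in used:
            for other in used - {dataset}:
                co[other] += 1
    return [d for d, _ in co.most_common(top_n)]

print(also_analysed("genome-A", logs))
```

The algorithm is trivial; the moat is the data. Only the platform that observes every download, citation and highlight can compute these recommendations, and every recommendation it serves deepens the network effect.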
Researchers will have been emancipated from publishers and institutions only to fall into the servitude of future science platforms.
As a result, scientific reputation will become less reliable; existing publishers will disappear or be bought (imagine: in the future, Mendeley buys Elsevier); and data interoperability will suffer as standards diverge.
What do you think? Do you see a future of scientific walled gardens? What will be the future Science 2.0 platforms?