In the context of the brainstorming #futurescience20, let’s continue with some more negative scenarios.

The argument about science 2.0 and open science is based on the assumption that if you open up scientific data:

– many researchers will reuse it, thereby accelerating the rate of new discoveries and improving the “return on data collection”;

– researchers will be able to quickly replicate analyses and check whether the findings are robust, thereby uncovering scientific errors and discouraging the publication of false claims.

Similar reasoning can be applied to open reviews and reputation management: rather than relying on professional peer reviewers (a slow and ineffective process), it is better to have open contributions from anyone interested, so that the real experts (rather than the “appointed” ones) can properly judge the value of a paper.

Or, as in the PLOS ONE approach, perhaps the basic assessment of publishability can be done by editorial committees, while the judgement of value and importance is left to the “open market”, so that the number of downloads and citations serves as a sufficient assessment of a paper’s value. It is the new “publish, then filter” approach against the traditional “filter, then publish”.

However, we have learnt from the history of open government that reuse and participation are hard to achieve.

After all the effort put into opening up government data, reuse of such data remains disappointing. From the transparency point of view, it certainly did not transform government. Citizens did not eagerly rush to examine open government data in order to scrutinise government. Even the fears of potential misuse of open data have not materialised.

In terms of creating jobs and growth through open data reuse, results have also not lived up to the promises. As I perceive it, there is a general feeling of disappointment with the economic impact of open data – perhaps because we raised expectations too high. Certainly, results are not to be expected in the short term.

When it comes to participation and e-democracy, we know very well how difficult it is to engage citizens in policy debate. High quantity and high quality of participation remain the exception rather than the rule, and certainly cannot be considered a sustainable basis for sound policy-making. High quantity typically occurs in inflammatory debates or around NIMBY-like themes. And when ideas are crowdsourced, the most innovative ideas are not the most highly voted.

If we transpose this reality to the future of science 2.0, we should therefore expect:

  • that researchers will not rush to analyse and replicate other researchers’ studies. Replication will be driven mainly by antagonistic spirit. Most datasets will simply be ignored, either because they are partial or because they are insufficiently curated.
  • that researchers will certainly not provide reviews (especially public ones). The Nature experiment with open peer review showed as much:

A small majority of those authors who did participate received comments, but typically very few, despite significant web traffic. Most comments were not technically substantive. Feedback suggests that there is a marked reluctance among researchers to offer open comments.

  • that an assessment of “importance” based on downloads and citations alone, rather than on peer review, is likely to lead to greater homogeneity in science and reduce the rate of disruptive innovation. Attention will focus disproportionately on the most-read articles; because reputation is built on this, scientists will concentrate on “popular” topics rather than on uncomfortable, less popular disruptive discoveries.

In summary, the full deployment of science 2.0 could lead to a reduction in both the quality and quantity of scientific discoveries. Scientists will not spend their time evaluating other researchers’ work, and when they do, it will be in an antagonistic spirit, making the open assessment model conflictual and unsustainable. They will focus on being read, tweeted and downloaded – in other words, on being popular – thereby reducing the incentives for disruptive, uncomfortable innovation.