Policy and technology: a "longue durée" view

Random thoughts on policy for technology and on technology for policy

January 2014

Brainstorming on #futurescience20 now closed – thanks!

Thanks to all of you who contributed to the discussion on future scenarios for science 2.0. I received feedback on the blog, on Twitter and in person. As usual, feedback was limited in quantity but great in quality.

I can’t publish the results of the work at this stage, but will keep you updated on future policy developments. Thanks to all.

2030: disappointing rates of scientific data reuse and open reviews in #futurescience20

In the context of the brainstorming #futurescience20, let’s continue with some more negative scenarios.

The argument about science 2.0 and open science is based on the assumption that if you open up scientific data:

– many researchers will reuse it, thereby accelerating the rate of new discoveries and improving the “return on data collection”;

– researchers will be able to quickly replicate the analysis and check whether its findings are robust, thereby uncovering scientific errors and discouraging the publication of false claims.

Similar reasoning can be applied to open reviews and reputation management: rather than relying on professional peer reviewers (a slow and ineffective process), it is better to invite open contributions from anyone interested, so that the real experts (rather than the “appointed” ones) can properly judge the value of a paper.

Or, as in the PLOS ONE approach, the basic assessment of publishability can be done by editorial committees, while the judgement of value and importance is left to the “open market”, so that the number of downloads and citations becomes a sufficient assessment of value. It’s the new “publish then filter” approach against the traditional “filter then publish”.

However, we learnt from the history of open government that reuse and participation are hard to achieve.

After all the effort that went into opening up government data, reuse of such data is still disappointing. From the transparency point of view, it certainly did not transform government: citizens did not eagerly queue up to examine open government data in order to scrutinise their governments. Even the fears of potential misuse of open data have not materialised.

In terms of creating jobs and growth through open data reuse, results have not lived up to the promises either. As I perceive it, there is a general feeling of disappointment with the economic impact of open data – perhaps because we raised expectations too high. In any case, such results should not be expected in the short term.

When it comes to participation and e-democracy, we know very well how difficult it is to engage citizens in policy debate. Participation remains hard to achieve. High quantity and high quality of participation remain the exception rather than the rule, and certainly cannot be considered a sustainable basis for sound policy-making. High quantity typically occurs in inflammatory debates or around NIMBY-like themes. And when ideas are crowdsourced, the most innovative ideas are rarely the most voted for.

If we transpose this reality to the future of science 2.0, we should therefore expect:

  • that researchers will not rush to analyse and replicate other researchers’ studies. Replication will mainly be driven by an antagonistic spirit. Most datasets will simply be ignored, either because they are partial or because they are insufficiently curated.
  • that researchers will certainly not provide reviews (especially public reviews). The Nature experiment with open peer review made this clear:

A small majority of those authors who did participate received comments, but typically very few, despite significant web traffic. Most comments were not technically substantive. Feedback suggests that there is a marked reluctance among researchers to offer open comments.

  • that an assessment of “importance” based only on downloads and citations, rather than on peer review, is likely to lead to greater homogeneity in science and reduce the rate of disruptive innovation. Attention will focus disproportionately on the most-read articles; and because reputation is based on this, scientists will focus on “popular” topics rather than on uncomfortable, less popular disruptive discoveries.

In summary, the full deployment of science 2.0 could lead to a reduction in the quality and quantity of scientific discoveries. Scientists will not spend their time evaluating other researchers’ work, and when they do it will be in an antagonistic spirit, making the open assessment model conflictual and unsustainable. They will focus on being read, tweeted and downloaded – in other words, on being popular – thereby reducing the incentives for disruptive, uncomfortable innovation.

Great example of #futurescience20: how @peerj makes scientific articles commentable in-line https://peerj.com/articles/175/

Just love how you can add paragraph-level comments to articles in PeerJ. Commentability is becoming mainstream…

[Screenshot, 9 January 2014: paragraph-level commenting on a PeerJ article]

From unbundling to rebundling: the walled gardens of #futurescience20

In the course of this open brainstorming on the future of science 2.0, there is the clear risk of techno-optimism (or technological solutionism).

In this post, we look at the dark side. What could go wrong? To do so, we extrapolate from what happened in other related domains.

For instance, our prediction about the unbundling of science (data production and publication separated from articles and from reputation measurement services; individuals vs institutions; articles vs journals) was envisaged as a liberation from current lock-ins. Data would flow freely from researcher to researcher and from papers to data repositories through interoperable open formats.

What is happening on the web today tells a different story. Unbundling certainly weakened the incumbent gatekeepers, such as telecom providers, newspapers and music labels. But rather than a fully anarchical, interoperable economy based on open standards and open APIs, new gatekeepers and walled gardens emerged under the name of “platforms”: Apple, Facebook, Google, Amazon. Some even said that “the web is dead”. Google’s recent killing of RSS support in order to favour Google+ is a reflection of this trend. As Wolff puts it in the same feature, “chaos isn’t a business model”.

Even when interoperability is ensured technologically, lock-in is ensured by network effects, preferential attachment or ownership of personal data. The Internet, the world wide web, citation networks and many social networks are all scale-free networks whose degree distributions follow a power law. In other words, the rich get richer.
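The rich-get-richer dynamic can be sketched with a minimal preferential-attachment simulation – a toy Barabási–Albert-style model written for illustration, not a model of any actual science platform. Each new node links to existing nodes with probability proportional to their current degree, and a small early elite ends up holding a disproportionate share of the links:

```python
import random

def preferential_attachment(n_nodes, m=2, seed=42):
    """Grow a network where each new node attaches to m existing nodes
    with probability proportional to their current degree (the
    'rich get richer' mechanism behind scale-free networks)."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes (a triangle for m=2).
    degrees = [m] * (m + 1)
    # A node appears in `stubs` once per link it has, so sampling
    # uniformly from `stubs` is sampling proportionally to degree.
    stubs = [node for node in range(m + 1) for _ in range(m)]
    for new_node in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:          # m distinct targets per new node
            targets.add(rng.choice(stubs))
        degrees.append(m)
        for t in targets:
            degrees[t] += 1
            stubs.append(t)
        stubs.extend([new_node] * m)
    return degrees

degrees = preferential_attachment(5000)
top_share = sum(sorted(degrees, reverse=True)[:50]) / sum(degrees)
print(f"Top 1% of nodes hold {top_share:.0%} of all links")
```

In a network grown this way, the top 1% of nodes captures a share of links many times larger than under uniform random attachment: exactly the concentration dynamic that makes platform lock-in so hard to escape.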

So it is possible, even likely, that either through natural development or through the invisible hand of managers, future science 2.0 will not be totally unbundled and fully interoperable. Instead, it will be divided into walled gardens. Already now we see platforms such as Mendeley, Google Scholar, ResearchGate and Figshare extending their services into what could be considered a kind of vertical integration. For instance, they all try to gather your publications in one place and to act as your academic identity.

Just as the free web is damaging newspapers, so openness will weaken the existing publishing powerhouses, which, incidentally, are one of Europe’s strengths. New players will outcompete the “European champions” of scientific publishing.

Future science 2.0 will then be platform-based. New players will integrate the value chain and build walled platforms. Amazon, for example, could build a platform around the Kindle for scientific publishing, including reputation management. Based on the unique data it holds on what people read and highlight, it would be able to lock in researchers and provide fine-grained, real-time reputation metrics based on what people download, read and highlight. It would enable direct publishing (it already does) and could even provide a scientific crowdsourcing platform for citizen science based on the Mechanical Turk.

Can the Kindle do to scientific publishing what the iPod did to music?

Or it could be data publishers such as Figshare, or reference management systems such as Mendeley. In any case, the lock-in will be based on ownership of researchers’ personal data: what they read, cite and highlight, what data they gather, what they analyse and publish. Different platforms will provide different, competing reputation measures and identities. Imagine a data publication service telling you: “Researchers who analysed this dataset also analysed these others”.

Researchers will have been emancipated from publishers and institutions only to fall into servitude to the future science platforms.

As a result, scientific reputation will become less reliable; existing publishers will disappear or be bought (imagine Mendeley buying Elsevier in the future); and data interoperability will be reduced by competing standards.

What do you think? Do you see a future of scientific walled gardens? What will be the future Science 2.0 platforms?

Making EU consultation accessible: the example of www.copywrongs.eu #policy20


[Screenshot: copywrongs.eu]

I have long kept in the back of my mind the question of how to make EU consultations more meaningful and accessible. Typically, they involve a questionnaire to be sent via e-mail or via the Commission’s survey tool, ambitiously named Interactive Policy Making 2.0. But I have always noticed that the biggest problem is not the tool, but rather the obscurity of the language. When we run policy debates, most of our energy goes into making the questions clear and meaningful.

That is why I particularly like copywrongs.eu.

It is a very simple tool that reorganises the consultation questions around real-life problems such as “I can’t access some YouTube videos in my country”. It then presents the questions relevant to this problem and explains them, so that you can answer easily. Finally, it lets you download an .odt document containing the filled-in EU questionnaire and invites you to send it to the EC.

The innovation is in the process rather than in the technology. It is a quite simple tool that presents the same questions starting from real-life problems, mostly translating the logic and language of government into the logic and language of people. But that is its strength.

Still, it is only a first step in the right direction. The “problems” are not very clearly presented: there are too many of them, and they are unstructured. The tool would benefit from a more visual starting point.

In the future, tools of this kind would flourish if governments published their questionnaires in a structured format such as XML. You could then easily produce your own version of the questionnaire, just as we make Neelie Kroes’s speeches commentable.

Can we have machine-readable consultations? It’s not difficult, and much less costly than building your own consultation tools.
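As a sketch of what machine-readable consultations would enable, the snippet below parses a hypothetical XML questionnaire and regroups its questions by real-life problem, the way copywrongs.eu does. The schema, question IDs and texts are invented for illustration; the Commission publishes no such format today:

```python
import xml.etree.ElementTree as ET

# Hypothetical machine-readable consultation (invented schema, for illustration).
CONSULTATION_XML = """
<consultation title="Review of EU copyright rules">
  <question id="q11" topic="geo-blocking">
    Have you faced problems when trying to access online services
    in an EU Member State other than the one in which you live?
  </question>
  <question id="q20" topic="exceptions">
    Should some or all of the copyright exceptions be made mandatory?
  </question>
  <question id="q12" topic="geo-blocking">
    Have you faced problems when seeking to provide online services
    across borders in the EU?
  </question>
</consultation>
"""

def questions_by_problem(xml_text):
    """Regroup consultation questions around real-life problem topics,
    rather than presenting them in the official order."""
    root = ET.fromstring(xml_text)
    grouped = {}
    for q in root.iter("question"):
        grouped.setdefault(q.get("topic"), []).append(q.get("id"))
    return grouped

print(questions_by_problem(CONSULTATION_XML))
# {'geo-blocking': ['q11', 'q12'], 'exceptions': ['q20']}
```

With the questionnaire published in a format like this, anyone could build their own front end – regrouping, translating or visualising the questions – and still produce answers that map back to the official question IDs.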

How can we make consultation more understandable and accessible? Copywrongs is an example, commentable documents are another, what else? What is the best online consultation you have seen?
