
Policy and technology: a "longue durée" view

Random thoughts on policy for technology and on technology for policy

October 2013

Is Google Docs the most “mainstream” way to publish a commentable document?

I am a big fan of commentable document software, from digress.it to co-ment.com. We also built our own “MakingSpeechesTalk” software for Commentneelie.eu. Yet this kind of software is not yet widely accepted.

If I want to engage the largest possible number of people, which software should I use? Is Google Docs the most widely accepted way to comment on a document? Or is this still so niche that I should just pick the most usable one?

Great example of policy-analysis-as-a-service: constituteproject.org #policy20

The Constitute project “offers access to the world’s constitutions that users can systematically compare across a broad set of topics — using a modern, clean interface.”

It’s a good example of policy analysis as a service. Typically, constitutional comparison takes the form of a report; today, it takes the shape of a well-designed website. This kind of policy analysis as a service is what we have done with, for example, www.daeimplementation.eu.

This is what I want to focus on in 2014.

Towards an evaluation framework for #openpolicy #policy20

We finally published the research roadmap on policy-making 2.0 from the Crossover project – see below.
I’d like to point out one of the conclusions, from the section on the evaluation of policy-making 2.0:

What emerged from the analysis of the prize and the cases is that evidence for uptake is clearly available and now can be considered mature.

However, the evidence presented by the cases and the prize candidates with regard to their impact remains thin and anecdotal in nature. There is no thorough assessment of the impact on the quality of policies. Typically, the impact is demonstrated in terms of:

  • visits to the website and participation rates,
  • feedback and visibility towards media and politicians,
  • actual influence over the decisions taken,

while the actual impact on the quality of policies is yet to be demonstrated. Some initial work (in the cases of Gleam and Pathways 2050) is focusing on comparing the predictions with the reality as it unfolds. Only the Ideascale case presents some tangible ex ante estimates of the advantages of the decisions taken through policy-making 2.0, but no thorough ex post evaluation.

We are therefore developing a new evaluation framework, which we will deploy in the context of our new project – EU COMMUNITY. It will be structured around the following criteria:

  • number of participants (audience reached, active users, number of comments/input/votes)
  • type of participants (usual suspects vs new players)
  • involvement of decision makers (direct/indirect/non existent)
  • quality of ideas received (qualitative judgement & feedback from policy-makers)
  • actual usage of the output in policies (direct/indirect/non existent influence on policy decisions)
  • actual improvement of policy quality (counterfactual or difference-in-differences analysis; see the sketch below)

For each of these criteria, we’ll develop a full methodology (questions+indicators+data collection & analysis) based mainly on the literature on democracy and participation. We’ll keep you posted!
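To make the last criterion a bit more concrete: a simple difference-in-differences comparison would take the change in some policy-quality indicator where a policy-making 2.0 process was used and subtract the change in a comparable setting where it was not. The indicator Q and the comparison group below are placeholders, still to be defined in the methodology:

$$\hat{\delta} = \left(\bar{Q}^{\,2.0}_{\text{after}} - \bar{Q}^{\,2.0}_{\text{before}}\right) - \left(\bar{Q}^{\,\text{no 2.0}}_{\text{after}} - \bar{Q}^{\,\text{no 2.0}}_{\text{before}}\right)$$

where $\bar{Q}$ is the average value of the policy-quality indicator for each group and period; a positive $\hat{\delta}$ would suggest that the 2.0 process improved policy quality relative to the counterfactual.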


A template for developing a #gov20 #opengov project

I am in Odessa, Ukraine, presenting on government 2.0 at the Municipal Innovation Lab organised by UNDP (#citylabodessa). It’s a great event, in the beautiful Hub Odessa, where local municipalities and civil society organisations develop open government apps in two days.

While mentoring these groups, I encountered some ambiguity about the difference between traditional e-government projects and government 2.0 initiatives. I have therefore come up with a kind of template process for developing 2.0 apps, which is useful when raising awareness among people who are less familiar with gov 2.0.

The steps are:

  1. Define the problem well. As @gquaggiotto said, the problem must be specific, evidence-based and concrete for citizens. Generic ideas like “providing all the information you need in one place” are not well defined enough.
  2. Analyse thoroughly what input citizens could contribute to solving the problem, using the 6 things model. Analyse also what government data could help solve it. For instance, if the problem is faster detection of holes in the street, citizens can provide real-time information about them (à la FixMyStreet), and government can open up the repair workflow (when the repair is planned, etc.) – a purely illustrative data sketch follows this list.
  3. Design for engagement, addressing not only the technology but the fundamental design features (anonymity, gamification, publicity, moderation…) – see slides 28 and 38 below.
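As flagged in step 2, here is a purely illustrative sketch of how the citizen input and the opened-up government workflow for the pothole example could be paired as data. All field names are hypothetical, not a reference to any real FixMyStreet schema.

```python
# Purely illustrative sketch for step 2 of the pothole example:
# pairing citizen input (FixMyStreet-style reports) with the repair
# workflow the municipality could open up. All names are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class CitizenReport:
    """Real-time information a citizen can contribute."""
    report_id: int
    location: str                      # street address or coordinates
    description: str                   # free-text description of the hole
    photo_url: Optional[str] = None    # optional photo evidence
    reported_on: Optional[date] = None


@dataclass
class RepairWorkflowStatus:
    """Workflow data the municipality could open up for each report."""
    report_id: int                     # links back to the citizen report
    inspection_planned: Optional[date] = None
    repair_planned: Optional[date] = None
    status: str = "received"           # e.g. "received", "scheduled", "repaired"
```

The point of the pairing is simply that each citizen report can be matched, via its identifier, to the government side of the process, so citizens can see when the repair is planned and whether it happened.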

Below is the presentation from my master class.

The end of theory, augmented science and the case for a Google Microscope

One of the refrains of the big data narrative is the “end of theory”, from an influential article by Chris Anderson:

Petabytes allow us to say: “Correlation is enough.” We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.

Anderson manages to convey and popularize a highly scientific notion – and as such, his account is prone to oversimplification. But there is a lot of serious science pointing to the same trend, such as the article “Here is the evidence, now what is the hypothesis?”.

As usual, this is not a revolution: it is not totally new, and theory remains very much necessary. As some scholars point out (h/t Giuseppe Veltri), the use of data correlation as a discovery tool is not new, but its (petabyte) scale is. More importantly, while one can say that Google PageRank (the greatest example of a data-driven approach) “simply” uses link data to assess the value of a webpage, in reality the idea of using links as a preference signal can itself be considered a theory.
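For readers less familiar with the “links as a preference signal” idea, here is a toy illustration of it in code: a minimal power-iteration PageRank over a made-up link graph, nothing like Google’s actual implementation.

```python
# Toy PageRank: each link from one page to another counts as a "vote",
# and votes from highly-ranked pages weigh more. Illustrative only.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:     # each link acts as a "vote"
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank


# Example: page C ends up "preferred" because both A and B link to it.
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```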

In any case, data-intensive science calls for a greater role of inductive, rather than deductive, methods.

Thinking about this, it seemed obvious to me that there should be some kind of software that helps build hypotheses in an automated way from the data: “hypothesis as a service”. I had the idea and then, as usual, turned to Google, because I was sure that someone in the world had already built it. I searched for “hypothesis formulation tools” and came across this:

Introduction: what is hypothesis formulation technology?

The DMax Assistant™ product family is a collection of software tools that help researchers to extract hypotheses from scientific data and domain specific background knowledge.
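Out of curiosity, here is what a very crude version of that idea could look like for tabular data: a minimal sketch (assuming pandas, and with no relation to how DMax Assistant actually works) that screens a dataset for strong pairwise correlations and phrases them as candidate hypotheses for a researcher to vet.

```python
# A minimal, naive sketch of "hypothesis as a service": screen a dataset for
# strong pairwise correlations and phrase each one as a candidate hypothesis
# to be tested properly afterwards. Illustrative only.
import pandas as pd


def candidate_hypotheses(df: pd.DataFrame, threshold: float = 0.7) -> list[str]:
    """Return human-readable candidate hypotheses from strong correlations."""
    corr = df.corr(numeric_only=True)
    hypotheses = []
    cols = list(corr.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            r = corr.loc[a, b]
            if abs(r) >= threshold:
                direction = "positively" if r > 0 else "negatively"
                hypotheses.append(
                    f"'{a}' and '{b}' may be {direction} related (r = {r:.2f})"
                )
    return hypotheses


# Example usage (with a hypothetical CSV of observations):
# df = pd.read_csv("observations.csv")
# for h in candidate_hypotheses(df):
#     print(h)
```

A human would still have to discard the spurious correlations, which is exactly the point made in the prediction below.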

Let me wander off track a bit. To me, this is one of the cases where you see the Hegelian Weltgeist (spirit of the world) made real. One can imagine such a tool for the social sciences as well. It’s a kind of “augmented scientific process”. Instead of a Google Glass, imagine a Google Microscope: you see the image, and it proposes related images, relevant theories and articles, and emerging hypotheses.

Finally, an easy prediction. Automated hypothesis building is just another tool – it augments but does not substitute for the human brain. Scientists are needed to make the best of it: for instance, to choose which datasets to merge. But technology is reducing the need to “choose the datasets”…

Will we need fewer scientists in the future? Because, you know, we are also seeing “randomised trial as a service” tools (1, 2)…

