What has #opengov achieved so far? An initial checklist

cross-posted from Joinup

When thinking about future scenarios for the forthcoming workshop, it can be useful to look back at what we, as open government advocates, promised and what was actually achieved. What worked, and what didn’t? Using a Hegelian metaphor: what is alive and what is dead of open government?

I refer mostly to what I see among my clients, that is, EU, national and regional organisations. Hence, I am not describing the majority of organisations, but probably the most advanced ones.

PARTICIPATION

  • ACHIEVED: The nature of policy discussions in the government domain has certainly become more open. Civil servants now frequently participate on Twitter, Facebook and LinkedIn. Policy workshops frequently integrate dynamic interaction methods, such as the World Café methodology, rather than traditional PowerPoint presentations followed by scant discussion. The manifest goal of many policy discussions launched by government is to reach out beyond the “usual suspects”: the novelty of the people involved is becoming one of the quality criteria of online engagement.

  • NOT ACHIEVED: The link between open engagement and policy remains unresolved. The final loop of participation, the modification of the policy and the feedback to the participants, remains an exception rather than the rule. As a good practice, consider the charter of “Parlement et citoyens”, which commits MPs to providing video feedback to citizens about what they did.

TRANSPARENCY

  • ACHIEVED: Open data have become the default option, not only because of the revised directive, but most importantly in mindset and expectations. The trend continues, with for instance research data funded through H2020 becoming public by default by 2017. The main benefit remains in terms of transparency and accountability: for instance, OpenCorporates was recently instrumental in the resignation of a Spanish minister.

  • NOT ACHIEVED: Uptake of open data remains minor. Only a minority of citizens download datasets, and the economic benefits from startups reusing public data have most probably been overestimated.

COLLABORATION

  • ACHIEVED: A culture of co-design of public services and public sector innovation is becoming widespread. Many government agencies have created a “Lab” designed to promote innovative methodologies, and there are many specialised providers in service design, agile methods and innovation without permission. The benefits of co-designing public services appear clear, also in economic terms.

  • NOT ACHIEVED: Co-design of public policy is much more challenging, as Beth Noveck shows in her book. The questions are more specialised, require greater engagement, and it is difficult to gain meaningful insights. Collaboration in policy design remains more a goal in itself than an actually effective way to design better public policies.

  • NOT ACHIEVED: Integration of offline and online collaboration. Successful co-creation requires live interaction. Online tools are important before and after (through open discussion and open feedback), but actual, concrete co-creation requires live collaboration. In this sense, we could say that purely online collaboration is no longer a viable option: online and offline have to be considered as integrated tools, but in practice they still aren’t.

METRICS

  • ACHIEVED: Metrics for participation and collaboration are becoming more and more common. Most online tools show the numbers openly on their homepage (e.g. ideascale.com). Uptake is now, finally, considered a key performance indicator, which was not the case in the early stages of e-government and e-participation.

  • NOT ACHIEVED: The focus is more on the quantity of participation than on the quality of the contributions. The success of online engagement is too often measured only in terms of the “number of tweets”.

What are your views? What do you consider as the main achievement and shortcomings of open government so far?


To rank or not to rank, this is the question cc @bettermeasured

I have always been wary of rankings in policy analysis. Benchmarking and histograms seem to over-simplify reality, trivialize discussions and encourage “me-too” policy competition.

This is why our policy dashboards (see here, here and here) do not produce rankings, but visual summaries which emphasize the diversity and richness of information.

So when we decided to produce, together with the European Digital Forum (i.e. the Lisbon Council and Nesta), a report presenting the results of our startup manifesto policy tracker, our first version contained few rankings and a lot of qualitative information: it was designed as a “policy map”. After several iterations and discussions, we agreed to pivot it towards a “policy scoreboard” with plenty of rankings: which countries do more to support startups?

The methodology, previously discussed on this blog, was ultimately borrowed from the OECD Going for Growth exercise: a simple percentage of implementation of the recommendations contained in the original Startup Manifesto. Every recommendation has the same weight, even if some are crucial (e.g. legislation on second chance for entrepreneurs) and some are of dubious effectiveness (e.g. having a national champion).
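To make this concrete, here is a minimal sketch of that unweighted scoring rule, with purely hypothetical status values rather than actual tracker data:

```python
# Minimal sketch of the scoreboard's scoring rule: a country's score is the
# plain share of recommendations it has implemented, every recommendation
# counting equally. The example data below are illustrative only.

def implementation_score(statuses):
    """statuses: list of booleans, True if the recommendation is implemented."""
    return 100.0 * sum(statuses) / len(statuses)

# Hypothetical country that implemented 3 out of 4 recommendations
print(implementation_score([True, True, False, True]))  # 75.0
```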

I am pleased to say that the report has just been published, presented to Commissioner Oettinger, and widely discussed on Twitter.

I have to admit that the online discussions always started with an assessment of how a country had scored. The rankings were the single “point of entry” into the discussions, which then evolved into insightful quali-quantitative reflections.

For all their limitations, rankings were clearly necessary for communication and, most importantly, in order to kickstart meaningful discussions.

In conclusion, rankings are neither good nor bad. They are simply an important tool in policy analysis that has to be used appropriately.

#OpenGov future scenarios: triumph or defeat?

Cross-posted from https://joinup.ec.europa.eu/community/opengov/topic/open-government-future-scenarios-triumph-or-defeat 

It’s a defining moment for open government. More than seven years after Obama came to power, many experts are voicing disappointment over its achievements. While the normative importance of the goal is not under discussion, the concrete achievements seem not to have lived up to expectations.

At least, that is the tone of posts such as this one by Alberto Cottica, and of the recent book by Beth Noveck quoted in the same post.

Yet on the other hand, there is very little evidence behind such statements. Is it the opinion of some experts or is there tangible evidence? And if so, what is the reason and most importantly, what should government do next? Is it a problem of design, or one of implementation?

These are precisely the questions that our new study for the European Commission, OGS, is trying to answer through desk research, case studies, surveys and interviews. On April 28th, we will present the results in a highly interactive workshop aimed at co-designing the future scenarios of open government.

Let me take sides in this debate. I think open government has not achieved its promises, and that’s ok. There is far more data available than people are actually using, and that’s ok. Perez taught us that over-investment is a necessary part of going from installation to deployment of a new paradigm.

We should remember the fundamental assumption that openness and transparency are elements of good governance in themselves. They are goals, not means.

Yet, these goals come at a cost, and they must be feasible. It’s a problem of managing expectations and updating plans, in an agile manner. Not everything went according to plan, but we know that you can’t plan this ex ante and you should dynamically adapt during implementation.

What matters now is to maximize learning from experience so that we can update our plans and focus on what to do well, and set the right expectations for the future.

It’s a matter of calculating costs and benefits: not in absolute terms, but in comparison with what is traditionally done by government. We know open government hasn’t worked perfectly, but is there genuinely an alternative? It’s not as if government worked perfectly before and open gov came along to create problems.

In broad terms, my feeling is that open government did good for society at relatively moderate costs. In the future, it should focus on some specific activities that worked – I will write on this in my next post.

Over the coming days, I will start putting forward ideas on what is dead and what is alive in open government.

We want to develop together future scenarios for open government in order to set the scene for the workshop.

The final goal of this study is to provide concrete recommendations for the next eGovernment action plan of the European Commission. Alberto Cottica says it’s all about embracing complexity; Beth Noveck, that the focus should be on identifying the right expertise among citizens.

What is your idea? We need all the help we can get out there!

Feel free to comment here, or to provide your answers to the online survey here.


Is mainstreaming the perfect killing? For policies, I mean

In our policy evaluation work, I have come across the following pattern of policy change many times.

A specific policy priority, such as open policy, foresight or social innovation, becomes important. It now has a dedicated measure, a funding system or an organisational unit.

Then, after a while, the funding or the unit disappears. The official explanation is that it has achieved its goal and is now being mainstreamed across all government policies.

But maybe this is just a nice way to kill a policy that didn’t achieve its goals?

Qualitative policy indicators cc @bettermeasured

One of our main activities at Open Evidence is to build policy scoreboards, such as this, this and this.

I like this visual representation because it is not a ranking and it illustrates the full complexity of the issue. You can see who is doing well or badly on a specific issue, but you don’t have a ranking that encourages “me too” behaviour and institutional isomorphism.

[Image: screencap of the Startup Manifesto Policy Tracker]

However, it is clear that rankings are an effective way to channel the results of an analysis. If you want to be heard, you need to tell a story, and a large dashboard tells many stories, not one. I could bore you to death with the dangers of a single story, but in my job, a single story helps to communicate.

So how can we build a qualitative policy indicator that we could use for the Startup Manifesto?

The most interesting example so far is the OECD Reform Responsiveness index, included in the Going for Growth analysis (full details in the 2010 edition, page 79) and used in the Lisbon Council Euro Plus Monitor.

But still, it is a simple percentage of reforms carried out over reforms recommended. Not so sophisticated after all, and without any weighting.
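As a rough illustration of both the simplicity of the indicator and the weighting it lacks, here is a sketch; the reform names and weights are invented for the example and are not OECD data:

```python
# Responsiveness-style indicator: the share of recommended reforms that were
# actually carried out. The optional weights show how distinguishing crucial
# from marginal recommendations would change the score. Illustrative data only.

def responsiveness(reforms, weights=None):
    """reforms: dict mapping reform name -> True if the reform was carried out."""
    if weights is None:
        weights = {name: 1.0 for name in reforms}  # unweighted, as in Going for Growth
    total = sum(weights[name] for name in reforms)
    done = sum(weights[name] for name, carried_out in reforms.items() if carried_out)
    return 100.0 * done / total

reforms = {"second chance for entrepreneurs": True, "national champion": False}
print(responsiveness(reforms))                                          # 50.0
print(responsiveness(reforms, {"second chance for entrepreneurs": 2.0,
                               "national champion": 0.5}))              # 80.0
```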

Do you know other good examples?


Accountability and moving targets: the importance of version control

At Open Evidence, one of our main activities is building tools that track policy progress, for instance for the Digital Agenda for Europe, the eGovernment Action Plan, the Startup Manifesto, the Innovation Union and the Grand Coalition for Digital Jobs. In these tools, the people in charge of each action can update data on its progress.

One problem we experienced is that, when you give users full control, the original plan is changed so often that it becomes impossible to actually detect delays.

This is not the exception but the rule in bureaucratic organisations. For instance, I just came across a telling graph in Linders, D., & Wilson, S. C. (2011). What is open government? One year after the directive. In Proceedings of the 12th Annual International Conference on Digital Government Research (dg.o 2011), 262–271. doi:10.1145/2037556.2037599.

It compares the performance of the Open Government plans before and after the update of the action plan.


This shows clearly that updating the plan levelled out the differences between “good” and “bad” performers.

We should provide tools that monitor not only performance against the plan, but also changes to the plan itself over time.
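As a minimal sketch of what such “version control for plans” could look like, each action could keep a dated history of its target instead of overwriting it, so revisions to the plan stay visible. The data model and field names below are my own illustration, not our actual tool:

```python
# Sketch: store dated snapshots of each action's target date, so we can report
# both slippage against the current plan and changes to the plan itself.
# Field names and dates are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Action:
    name: str
    target_history: list = field(default_factory=list)  # [(revision_date, target_date), ...]

    def set_target(self, on: date, target: date):
        self.target_history.append((on, target))

    def plan_changes(self):
        """All revisions of the target made after the original one."""
        return self.target_history[1:]

action = Action("Publish the open data portal")
action.set_target(on=date(2015, 1, 1), target=date(2015, 6, 30))  # original plan
action.set_target(on=date(2015, 5, 1), target=date(2016, 3, 31))  # target moved later
print(action.plan_changes())  # the revision is recorded rather than overwritten
```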


The rhetoric of left-wing reaction

One of my favourite books ever is Hirschman, A.O., 1991. The Rhetoric of Reaction: Perversity, Futility, Jeopardy.

It shows that the criticisms levelled against progress have been the same for civil, political and social rights. Reactionary arguments have always stated that any such change would: make real conditions worse (perversity); have no impact at all (futility); or endanger the progress already made (jeopardy).

I would suggest a similar analysis today of the criticisms of free trade agreements such as TTIP.

First, let me state that I am against the provisions of the treaty dealing with special tribunals for disputes between companies and governments, and that I am generally in favour of free trade and against protectionism (see Ricardo and Krugman).

I see a common pattern in the criticisms of free trade agreements:

  • they create looser regulation (e.g. on environmental standards) by removing non-tariff barriers;
  • they favour multinationals;
  • they favour the outsourcing of jobs to third countries, hence leading to job losses.

For instance, the criticisms of TTIP are similar to those made of the TPP here.

I think it would be worthwhile to study whether previous free trade agreements that are today taken for granted, including the Single European Act of 1986, encountered criticisms similar to those listed above.

This would help put the debate in perspective and set aside criticisms already defeated by history, hence giving more weight to the well-founded ones.
