Policy and technology: a "longue durée" view

Random thoughts on policy for technology and on technology for policy


May 2010

Lessons learnt from animating the Declaration of Amsterdam

As I said, we helped animate the Declaration of Amsterdam at the World Congress on IT.
I’m very happy with the way it was received: the Dutch Minister centred her speech on the “projects that make it happen” from the website.
Some quick lessons learnt for me to remember:
– commentpress is old, and we should have used something more modern. Paragraph-level policy commenting is loved by policy wonks like me, but not by larger audiences
– visualization based on Google Maps beats textual comments hands-down. We put lots of effort into commentpress and achieved little, yet the global visualization of projects was intuitive and very effective in terms of participation and policy impact.
– strategic design is very important: the choice of wording and overall tone was geared not towards a generic “let’s collaborate” or “let’s discuss” but a more specific “let’s make it happen”, so that it was clear what was asked and what could be expected from this.
– live animation during the presentation via Twitter was very effective. In this way, the Debategraph visualization worked very well to connect words/policy to actions/projects.
– web animation has to be embedded at the top strategic level throughout the process. Otherwise it is very difficult to set the right “tone” as described above. In an ideal world, the policy strategist should be able to modify the website directly.
– clear rewards should be given. The beauty of the map visualization was that a project was visible on the map as soon as it was added, creating a sort of “magic” effect. Yet we should have given clear high-level rewards, like “the most interesting projects will be shown by the Minister in her final speech”.

So my final take is the key importance of dynamic visualization for participation and policy impact.

the beauty of tweetlining: but there’s no app for that!

I am a big fan of open underlining as a form of taste-sharing and low-cost collaboration.
Some time ago I tweeted an idea which got some positive feedback:
“I need an app that tweets whatever I underline on a pdf. Just as we do for good sentences at conferences”
While I am reading and studying material, I sometimes find a gem and my first instinctive reaction is to tweet it.
I would love to do this in an easy way, and I am sure it wouldn’t be difficult. Yet I know of no app for it.
Ideally, it would link to the original source and enable tagging or hashtagging. It would work on any document, pdf or webpage. It would merge Twitter, a pdf reader and online highlighters.
Imagine: after a few people start to use it, it would become an unbelievable treasure of knowledge – even more if linked to eReaders (currently Kindle does not allow pdf annotation). You could also be connected to people who underlined the same lines as you – which I find an amazingly effective way to connect.
Plus, the most important thing is that I have a name for this 🙂 . Tweetlining.
I really hope some developer out there is able to build it. Please. I suggested that Mendeley could do it – but is there anything out there already?
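To make the idea concrete, here is a minimal sketch of the formatting step – entirely hypothetical, since no such app exists; the function name and the 140-character limit of the time are my assumptions. It takes an underlined passage and turns it into a tweet with a source link and hashtags:

```python
# Hypothetical "tweetlining" formatter (illustrative sketch, not a real app).
# Assumes the classic 140-character tweet limit.

def tweetline(quote, source_url, hashtags, limit=140):
    """Format an underlined passage as a tweet: quote + source link + tags."""
    suffix = " " + source_url + " " + " ".join("#" + t for t in hashtags)
    room = limit - len(suffix) - 2            # 2 chars for the quotation marks
    if len(quote) > room:
        quote = quote[:room - 1].rstrip() + "…"  # truncate with an ellipsis
    return '"' + quote + '"' + suffix
```

The actual app would wire this to a pdf reader’s highlight event and a Twitter client; the formatting above is the easy part.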

Gov1.0 was IT-led, Gov2.0 is wonk-led

I often start my presentations with a slide on the failure of e-government, demonstrated by low take-up. E-government was designed and built by engineers and IT people, who thought that “doing things online” was simply so exciting and convenient that users would immediately rush to do their “change of address” online, no matter how the services were designed. It turned out users are not interested in online services unless these are well designed, intuitive and convenient.
So one of the reasons for failure was that the eGov people thought that all citizens shared their enthusiasm for technology.
I see a similar pattern in gov20. Now it’s us – the policy wonks, the activists, but also the communicators – who are assuming that open, collaborative, policy-conscious behaviour is normal and widespread.
This is not the case. We have to design gov20 for non-collaborative, egoistic, non-policy-conscious people. As I often say, we have to design for Bart Simpson, while we generally design for Lisa Simpson.
Otherwise, gov20 will remain a niche. A powerful one, because the impact of the wonks and collaborators is heavily augmented, and we’re able to influence government much more than before. But soon the wikinomics expectation will fade, in exactly the same way as the spirit and ideals of the 60s disappeared.
In fact, just like in the 60s, we now think that the very nature of people is changing – that the future belongs to open, collaborative, policy-conscious people. The “yes we can” people. But previous experience suggests this will not last.
On the other hand, in my professional life I see not only the “coolness” and “goodness” of collaboration, but the genuine advantages in terms of creativity and productivity.
So my key question for the future of gov20 is: will the hacker/wonk/sharer mentality spread to the majority of the population?

The future of policy making and system dynamics: towards augmented simulation?

In the context of the CROSSROAD project, yesterday in Rome, I was invited by Stefano Armenia to attend the meeting of the System Dynamics Italian Chapter, on applications of System Dynamics to public policy. The meeting included presentations on real-life application of System Dynamics tools and methods to public policy, and allowed me to better understand the status and future opportunities.
Basically, SD is used to anticipate the impacts of decisions: to elaborate scenarios of future impact which take into account a large number of interrelated variables. First, it builds complex causal models (the typical causal loops). Then it adds feedback and interaction between them (the dynamic element). Then it runs software that simulates how different kinds of solicitation play out in the system: taking into account not only the causal relationships, but also the dynamic feedback mechanisms that often spur unintended consequences. It is, in summary, a way to capture complexity and wicked problems, in order to have a better view of long-term and unintended effects, and to simulate the impact of different actions. Here’s a typical diagram from Wikipedia.
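To give a feel for the mechanics – my own toy example, not one presented at the meeting – a single stock with a reinforcing word-of-mouth loop and a balancing saturation loop already produces the classic S-shaped behaviour that a purely linear model would miss:

```python
# A minimal system-dynamics sketch (toy example, illustrative only):
# one stock ("adopters" of a policy measure) driven by a reinforcing
# word-of-mouth loop and a balancing saturation loop -> an S-curve.

def simulate(population=10_000, contact_rate=0.5, steps=40):
    adopters = 100.0
    history = [adopters]
    for _ in range(steps):
        # reinforcing loop: adopters recruit new adopters;
        # balancing loop: fewer non-adopters are left to recruit
        flow = contact_rate * adopters * (population - adopters) / population
        adopters += flow              # Euler integration, dt = 1
        history.append(adopters)
    return history
```

Early growth accelerates (the reinforcing loop dominates), then slows as the balancing loop takes over – the kind of dynamic feedback behaviour SD diagrams are meant to capture.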

In particular, this triggered my thinking about the CROSSROAD model. My vision of future policy-making focusses on tools that support the integration of three, traditionally alternative, features of policy-making:
– evidence-based (traditionally, through experts’ input)
– timely (traditionally, through hierarchical decision)
– participatory (traditionally, through lengthy consultation)
Now, I realize that SD and the other simulation tools add a fourth feature: the long-term thinking and anticipation of future events. Too often public policies are based on short-term thinking – typically the attention span of the media, or at best the electoral cycle. Too often they fail to take into account the many implications of the decisions. This is obviously very important to address issues such as Climate Change.
So this is a key application field: anticipating the impact of policy decisions. Take the example of the financial crisis: first the crisis happened, then governments tried to intervene, then governments suffered from financial exposure – with all its societal implications. None of these events was expected.
This can be done, traditionally, with econometric tools. But they are “too linear”: they fail to capture complexity and non-linear phenomena, and are therefore unsuitable for black swan events.
This can be done better by SD tools, which are able to account for the dynamic feedback mechanisms between actors and between events.
Social simulation tools, and agent-based modelling, go one step further. They not only take into account the dynamic effects, but they are open to unexpected causal relationships and interactions between agents. The model itself is dynamic.
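A toy illustration of the difference – my own sketch, not a real policy model: in an agent-based model the macro outcome (here, drift towards a dominant opinion) emerges from a local interaction rule between agents, rather than from an aggregate equation written down in advance:

```python
import random

# Toy agent-based model (illustrative only): each agent copies the majority
# opinion of a small random sample. The system-level outcome is not coded
# anywhere; it emerges from repeated local interactions.

def run(n_agents=200, steps=4000, seed=42):
    rng = random.Random(seed)                     # seeded for reproducibility
    opinions = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)               # pick a random agent
        sample = [opinions[rng.randrange(n_agents)] for _ in range(3)]
        opinions[i] = 1 if sum(sample) >= 2 else 0  # adopt local majority
    return opinions
```

Changing the interaction rule (the single line implementing the local majority) changes the emergent dynamics – which is exactly the openness to unexpected interactions mentioned above.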
However, the overall question is: how can decision-support tools help us deal with increasingly improbable events?
How can we deal with black swan events and wicked problems? Isn’t it a contradiction to try to simulate and anticipate impact when we recognize it is unpredictable? Shouldn’t we just rely more on human judgement?
My impression is that simulation tools are important, on one side, because they are able to capture and simplify a wider set of interactions than traditional econometrics. They are able to structure complexity and provide a more manageable, and more comprehensive, view of future impacts.
But models always carry the risk of excessive reductionism. This is why we need modelling tools that are able to fully capture human expertise and to augment it.
We need to offer modelling tools that are usable directly by thematic experts – not by the methodology/technology experts.
We need collaborative modelling tools, open to the wider set of human intelligence. And we need to go beyond the notion of open data: we need interoperability and open models so that we maximize the effort of analysts and stop re-building the wheel.
Furthermore, these modelling tools should be designed in a way that is usable by and open to the wider public. Citizens should be able to visualize, maybe in virtual reality, the impact of the different actions. We would need a policy modelling tool used as a debating tool by stakeholders, in public, with underlying models: what could happen if Greece leaves the euro? You would see policy-makers interrogating the tool with different possible options – and possibly citizens too.
In their daily life, we could envisage displays for citizens that show the impact of different decisions – just as people are studying display technologies for changing citizens’ behaviour in energy efficiency.
And they should be able, should their expertise allow it, to interact not only with the reporting, but with the models themselves.

In other words, my impression is that we need to further develop modelling and simulation tools in order to let the widest set of intelligence, and especially the thematic experts:
– take part in building the models;
– carry out the analysis;
– play with the reports dynamically.

This is obviously an initial and superficial reflection, but the key point is that we need to apply the “augmented” metaphor to policy modelling.

Other questions to be addressed in future posts are:
SD and simulation are far from new: they were created in the 1950s. Why now?
What is specific about simulation in governance, and different from commercial applications?

The Amsterdam Declaration as a platform: please comment and add inspiring projects

Following our successful work on the Open Declaration, I have been invited by the World Congress on IT, which takes place at the end of May in Amsterdam. There, for the first time, governments and industry players have agreed a political declaration which spells out the key objectives for IT policy for the coming years.
My job, with Ton Zijlstra and James Burke, was to dynamize the declaration so that it has a real impact. The actual goals of the declaration are widely shared, and not much different from any IT policy. We need to create critical engagement and to inspire action.
The interest here is in using the Declaration as a platform in Tim O’Reilly’s sense: rather than focussing on the Declaration itself, let’s talk about how to make it happen. Government can’t reach the goals alone: let’s reach out to collective intelligence to build action on top of the Declaration – just like the iPhone and Facebook let others build applications on their platforms.
So we designed a website titled “let’s make it happen” where we build on top of the declaration in two ways:
a) by commenting on the Declaration about what actions are needed
b) by adding inspiring projects that show how the Declaration goals can be achieved
The first option, using commentpress, allows anybody to comment directly on a specific paragraph of the Declaration. I am a big fan of commenting tools, and in the process I discovered an even better one – which, by the way, is the outcome of a research project, and thereby an interesting gov20 technological innovation.
Secondly, I needed to visualize action so that people are encouraged to contribute. So we have a dynamic map where any project added is automatically visualized on a GoogleMap.

Overall, I think we have a good tool and an example of how to make a policy declaration not a static document but a platform for action. We will add new features over the following days – watch this space!

What do you think? Is it well designed and thought out? What could make it better?

And please: add your inspiring projects and your comments on the Declaration text (pdf). We need smart people to make the Declaration a meaningful policy tool.

criticizing gov20: we don’t have the tools for large-scale participation

In my job, I often hear the sentence “technology is not the problem”. I disagree. We mostly say it because we don’t understand technology well enough to detect its limits – it’s a kind of black box.
In the Crossroad project, we are looking at future applications and technologies for collaborative governance and policy making. In this process, a few ideas came to me.
My argument is that all the collaborative tools that we have now, from idea-rating to mash-up and debating tools, work well only at small scale.
We all have problems in leveraging participation: yet we would not be able to deal with large-scale participation.
Most of the Obama administration’s initiatives are very basic in terms of functionalities and performance (IdeaScale, Google Moderator, Innovation Jam…). They rely on heavy human effort, which is difficult to scale up. And the scalable tools, like quantitative ratings and e-petitions, are very easily manipulated.
Basically, we don’t have the tools for mass conversation and collaboration, so it’s actually good that few people participate!

A few examples:
– the Global Pulse 2010 used IBM Innovation Jam, which as far as I can see is a large-scale forum with sophisticated human-analysis capacities and methodologies, but basic technology (only robust enough to handle thousands of participants)
– the Google Moderator tool used by the White House uses basic ratings systems and is prone to hijacking, as the marijuana debate proved
– the recent Commencement Challenge allowed citizens to vote for the best essay and video
– any form of Google Mash-up works well only with small numbers, as we realized when setting up the debate space for the Amsterdam Declaration
– using ideascale or uservoice, as we did in the Open Declaration, people only read and comment on the top proposal, or come out with a new proposal of their own. Free text search is provided by Uservoice, but it’s a very rough method. Nobody reads all the proposals before posting and voting.
– PatientOpinion relies on highly human-intensive processing of comments, one by one.
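On the manipulation point: one standard mitigation – not, as far as I know, used by these initiatives – is to rank items by the lower bound of the Wilson score confidence interval rather than by the raw vote ratio, so a handful of coordinated votes cannot push an item to the top:

```python
import math

# Wilson score lower bound: rank items by a pessimistic estimate of the
# "true" approval rate. An item with few votes is discounted heavily, so
# it cannot outrank a well-supported item the way a raw ratio can.

def wilson_lower_bound(ups, total, z=1.96):   # z=1.96 -> 95% confidence
    if total == 0:
        return 0.0
    p = ups / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom
```

With this ranking, 9 upvotes out of 10 scores well below 450 out of 500, even though both have a 90% approval ratio.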

Obviously, I am not criticizing the initiatives. They are great. But “technology isn’t there”.

So my question is: what would future transparency tech look like? What applications can we envisage 10 years from now? What basic tools need to be developed? Is RDFa the most futuristic thing we can come up with? Is there anything beyond the tools developed within the US e-Rulemaking research programme?

For me, the most relevant research fields are:
– collaborative filtering
– reputation-management systems
– visual analytics
– natural language processing
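As a back-of-the-envelope illustration of the natural-language-processing angle (the tokenization and threshold are my assumptions): even naive bag-of-words cosine similarity could flag near-duplicate proposals, so a platform could warn “similar proposals already exist” before someone posts yet another one:

```python
import math
from collections import Counter

# Naive near-duplicate detection for proposals: bag-of-words vectors
# compared with cosine similarity. Illustrative sketch only; a real
# system would need stemming, stopword removal, and a tuned threshold.

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def similar(text1, text2, threshold=0.6):
    bow1 = Counter(text1.lower().split())   # bag-of-words term counts
    bow2 = Counter(text2.lower().split())
    return cosine(bow1, bow2) >= threshold
```

This addresses exactly the Uservoice problem above: nobody reads all the proposals, but a machine can at least cluster the obvious duplicates.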

