Policy and technology: a "longue durée" view

Random thoughts on policy for technology and on technology for policy

December 2016

We need ethical principles for news sharing, not just for journalism

I keep receiving fake news, most recently on WhatsApp. What can we do about it?

There has been a shift in journalism. Media are no longer the gatekeepers: people get and share news on Facebook and the like.

This is a good thing: it is a democratisation of news sharing. But it doesn’t work yet, because people are not educated about what to share and how. They struggle to distinguish truth from falsehood. As Carlota Perez would say, we have the technology, but we don’t have the institutions.

This is not strange. This is normal. You have innovation, then you set up norms that help make the best of it.

Some people suggest that these platforms should monitor false news. This is one form of governance: moving the responsibility from mass media to social media. I don’t like it: it is paternalistic, and I certainly do not trust Facebook more than I trust traditional media.

I think education and responsibility are more important. Journalists have ethics and principles: if people start taking over the job of journalists, they should learn from those ethics.

We need to jot down a set of principles for news sharing:

  • Read the message fully and carefully. If you feel an immediate impulse to share even without reading, be even more careful: it is probably because the message is deliberately designed to push you to share.
  • Verify the source of what you share. Don’t share anything that you can’t trace the origin of. Even better if you have two different sources, and perhaps reputable ones. It’s not difficult.
  • Verify that the message has not already been debunked as a scam. Just Google it.
  • Be suspicious of any message that openly asks you to disseminate it. It is most likely a scam.
  • Do not copy-paste messages, especially if they contain first-person verbs. I just received a message in several WhatsApp groups saying “I have friends in the police”. Of course I trust a message if a friend of mine says he has direct insight, but the truth is that he did not have those friends: he was copy-pasting. If you copy-paste, you are directly lying to your friends.
  • And most of all, remember the tale of The Boy Who Cried Wolf! If you share false news, people won’t listen to you.

What other principles can we add? And can we make a kind of self-certification for these principles to show, for instance, in our Facebook profile photo?


If Trump is a zig, how long before we zag?

“The path that this country has taken has never been a straight line, we zig and we zag.”

This is what Obama said to reassure Americans about the Trump presidency. Trump is considered just a phase, an antithesis in Hegelian terms. It is temporary. It won’t last. People will quickly realize that his promises won’t be fulfilled, and will return to sanity.

I beg to differ. What we see is different. Populist leaders, from Putin to Erdogan to Berlusconi to Netanyahu, are not quick to go even when they clearly do not deliver.

Take Berlusconi: when he was elected, we thought it was just a moment of craziness. Perhaps the most authoritative Italian journalist, Indro Montanelli, famously declared in 2001 that Berlusconi was a plague that could only be cured by vaccination, i.e. by placing him in government. Then Italians would realize his promises were unfounded and vote him out of office. It didn’t happen, at least not for another ten years.

Populists are masters at finding scapegoats, external enemies to explain their lack of success. Berlusconi blamed the rigid political system, the press, Europe, the judges.

And he got away with that.

Because public policy is difficult, success is hard to define, especially in the era of post-truth.

I don’t have a solution, but I know that populism doesn’t go away quickly, easily or by itself.


You don’t have access to real-life big data? Just create them through simulation!

As I previously wrote, lack of access to and reuse of corporate data can be a bottleneck for developing Artificial Intelligence.

One alternative is to re-create the data through advanced simulation. Simulation can generate massive quantities of data from so-called “digital twins” of industrial “cyber-physical systems”. Simulated data are obviously a simplification, so they might miss several real-life issues; but at the same time you can experiment with a much wider set of potential situations that are not yet available in real life – such as a disastrous failure in a chemical plant.
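To make the idea concrete, here is a minimal sketch in Python of generating simulated sensor data, including a failure scenario that would be too dangerous or costly to collect from the real plant. The first-order thermal model, the `simulate_sensor` function, and all its parameters are invented for illustration; they are not taken from any real digital-twin toolkit.

```python
import random

def simulate_sensor(n_steps, failure_at=None, seed=0):
    """Simulate temperature readings from a hypothetical digital twin:
    first-order approach to a setpoint plus sensor noise, with an
    optional runaway failure injected at a chosen step."""
    rng = random.Random(seed)
    temp = 20.0        # starting (ambient) temperature, degrees C
    setpoint = 80.0    # controller target
    readings = []
    for t in range(n_steps):
        # first-order dynamics towards the setpoint, plus Gaussian noise
        temp += 0.1 * (setpoint - temp) + rng.gauss(0, 0.3)
        if failure_at is not None and t >= failure_at:
            temp += 2.0  # runaway drift once the simulated failure begins
        readings.append(temp)
    return readings

# One normal run and one failure run: label them and you have a
# supervised training set for a failure detector, at zero physical risk.
normal = simulate_sensor(200)
failure = simulate_sensor(200, failure_at=150)
```

Run with different seeds and failure times, and the simulator produces as many labelled examples as the training process needs.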

Are simulated data good enough to substitute for real sensor-generated data?


No data sharing, no #AI party

Artificial intelligence and machine learning require huge amounts of data: after all, more data beats better algorithms. One of the major competitive advantages of players such as Google in the machine learning space is the massive amount of data they hold.

But traditional companies, such as manufacturers, simply do not gather enough data to train algorithms effectively, and they lack the necessary internal skills. Industrial plants have hundreds of machines, each equipped with hundreds of sensors (the so-called Industrial Internet of Things). They produce a LOT of data, but the insight generated would grow exponentially if it were possible to cross-analyse and compare the data from MANY DIFFERENT plants and companies.

One solution would be for these traditional companies to allow third-party big data companies to access, aggregate and analyse their data, and develop algorithms for them. However, big data companies tell us that their clients do not allow them to reuse the data for developing new algorithms and products, but only for performing one-off, customised analyses. Some even say that this lack of data reuse is the main barrier towards achieving AI-led industrial plants (the industrial equivalent of the self-driving car). There are pilots, such as data innovation spaces or industrial platforms, but they haven’t yet reached a critical mass.

Why so? Companies do not allow third parties to access and reuse their data mostly because they perceive the potential risks as higher than the advantages. In particular, the main perceived risks are twofold:

  • that the third party data company builds products that enable their competitors to learn from their best practices, and hence reduce their competitive advantage;
  • that the third party data company enters the business of running industrial plants thanks to the algorithms developed, and becomes a direct competitor.

In the context of servitisation and increased cross-sector competition, these risks are not without foundation. And big data solutions are still in the “promising” phase: they have not yet delivered a breakthrough. Yet the reluctance to share data can itself prevent these innovative AI solutions from being developed.

What do you think? Are traditional companies right in not allowing data access and reuse by third parties? How do we break the vicious circle of no data sharing – no AI progress?

Blog at WordPress.com.

Up ↑