Wojciech Przybylski spoke with Paul Nemitz, a member of the German Government's Data Ethics Commission and the Global Council for Extended Intelligence, about realistic policies governments can adopt to protect themselves against AI-driven disinformation.

Artificial intelligence affects the transparency of elections and political discourse because it shapes voters’ awareness during election campaigns. It is a serious threat. How can we defend against it?

The more technology takes over the functions of democratic discourse and elections, the more transparency we need from that technology.

We cannot allow a situation in which people open Facebook or Twitter in the morning and think: “Gee, all these messages from so many people are in favour of one candidate, so there must be something to it,” when, in fact, it is all machine-produced. The danger is that we do not know when we are dealing with a human being and when we are not.

We, therefore, need a rule that makes clear whether the messages we receive in a discussion or forum come from a human or from a machine.
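Purely as an illustration of what such a disclosure rule could look like in practice, here is a minimal sketch. The field names and the labelling convention are hypothetical, not taken from any real platform or regulation:

```python
from dataclasses import dataclass
from enum import Enum


class AuthorType(Enum):
    HUMAN = "human"
    BOT = "bot"


@dataclass
class Message:
    text: str
    # Hypothetical provenance field a platform could be obliged to expose.
    author_type: AuthorType


def render(message: Message) -> str:
    """Prepend a disclosure label so readers always see a message's origin."""
    label = "[automated account]" if message.author_type is AuthorType.BOT else "[human]"
    return f"{label} {message.text}"


print(render(Message("Vote for candidate X!", AuthorType.BOT)))
# -> [automated account] Vote for candidate X!
```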

Can you create a law that determines whether a machine is being operated by an actual person or by an algorithm? This task is quite difficult, and law enforcement agencies would certainly need special tools …

There are laws that are difficult to enforce, but they are still very useful because they determine what is right and what is wrong. Before asking how to determine whether a banknote is counterfeit, we must first ask whether we want laws against counterfeiting at all.

I think we should proceed in the same way with regard to artificial intelligence.

If we agree that we want to know whether we are talking to a machine or to a person, we should have this right and, in parallel, we should think about control technologies.

There are ways to control this activity to a certain extent, as well as measures that could oblige large Internet companies to make their networks transparent.

However, no law is perfectly enforced, and I believe we should not demand perfect enforcement of laws on artificial intelligence either.

In the process of distinguishing people from bots, technology seems to be critical.

I am sure that there are already technologies that can tell you which texts have been produced by a bot.
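The interview does not name any specific technology, but one common approach is supervised text classification: a model is trained on labelled human and bot texts and then scores new messages. The sketch below, with invented training examples, only illustrates the mechanism; a real detector would need far more data and features:

```python
# Toy sketch of bot-text detection via supervised classification.
# Requires scikit-learn; the training examples are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Candidate X is the only choice! Candidate X wins! Vote Candidate X!",  # bot-like: repetitive
    "Amazing amazing amazing! Everyone supports Candidate X! #X #X #X",     # bot-like: spammy
    "I went to the rally yesterday and honestly left with mixed feelings.", # human-like
    "Not sure who to vote for yet, still reading the manifestos.",          # human-like
]
labels = [1, 1, 0, 0]  # 1 = machine-produced, 0 = human

# Character n-grams pick up the repetitive patterns typical of simple bots.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(texts, labels)

probability = model.predict_proba(["Candidate X! Candidate X! Vote now!"])[0][1]
print(f"Estimated probability the text is bot-produced: {probability:.2f}")
```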

The following is an abridged version of the discussion. This article is part of the #DemocraCE project organised by Visegrad/Insight. It was originally published in Polish in Res Publica.

Paul Nemitz is a member of the German Government's Data Ethics Commission and the Global Council for Extended Intelligence, and a visiting professor of law at the College of Europe.

