Wojciech Przybylski spoke with Paul Nemitz, a member of the German Government's Data Ethics Commission and the Global Council for Extended Intelligence, about the realistic policies governments can adopt to protect themselves against AI and disinformation.

Artificial intelligence affects the transparency of elections and political discourse because it shapes voters' awareness during election campaigns. It's a big threat. How can we defend against it?

The more technology takes over the functions of democratic discourse and elections, the more transparency we need from that technology.

We cannot allow a situation in which people open Facebook or Twitter in the morning and think: "Gee, all these messages from so many people are in favour of one candidate, so there must be something to it" when, in fact, it is all machine-produced. The danger is that we do not know when we are dealing with a human being and when we are not.

We therefore need a rule that clearly shows whether the messages we receive in a discussion or forum come from a human or from a machine.

Can you create a law that determines whether a machine is being operated by an actual person or an algorithm? This task is quite difficult, and law enforcement agencies would certainly need special tools …

There are laws that are difficult to enforce, but they are still very useful because they determine what is right and what is wrong. Before asking ourselves how to determine whether a banknote is counterfeit, we must first ask whether we want laws against counterfeiting or not.

I think we should proceed in the same way with regard to artificial intelligence.

If we agree that we want to know whether we are talking to a machine or to a person, we should have this right and, in parallel, we should think about the technologies needed to enforce it.

There are ways to control this activity to a certain extent, as well as measures that may oblige large Internet companies to make sure that their networks are transparent.

However, no law is perfectly enforced, and I believe we should not demand this impossible standard of laws governing artificial intelligence.

In the process of distinguishing people from bots, technology seems to be critical.

I am sure that there are already technologies that can tell you which texts are produced by a bot.
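One common way such detection works is to treat the problem as supervised text classification: a model is trained on examples of known human and known machine-produced posts, then scores new posts. The sketch below illustrates the general shape of such a detector; the training examples are hypothetical placeholders, and real systems train on large labeled corpora and combine content signals with account metadata and posting patterns.

```python
# A minimal sketch of machine-text detection as supervised classification.
# The labeled examples below are hypothetical placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = machine-produced, 0 = human-written.
texts = [
    "Vote for candidate X! Candidate X is the best. #CandidateX",
    "Candidate X is the best. Vote for candidate X! #CandidateX",
    "I went to the rally yesterday and honestly left with mixed feelings.",
    "Not sure who I'll vote for yet, still reading up on the programmes.",
]
labels = [1, 1, 0, 0]

# Character n-grams capture stylistic regularities (templates, repetition)
# that often distinguish automated posts from human writing.
classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
classifier.fit(texts, labels)

new_post = "Vote for candidate X! #CandidateX is the best."
probability = classifier.predict_proba([new_post])[0][1]
print(f"Estimated probability the post is machine-produced: {probability:.2f}")
```

A content-only classifier like this is easy to evade, which is why deployed detectors also weigh behavioural signals such as posting frequency and coordination across accounts.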

This is an abridged version of the discussion. The article is part of the #DemocraCE project organised by Visegrad/Insight and was published in Polish on Res Publica.

Paul Nemitz is a member of the German Government's Data Ethics Commission and the Global Council for Extended Intelligence, and a visiting professor of law at the College of Europe.

