We asked experts involved in countering disinformation on the web to comment on the idea of introducing an obligation to inform EU citizens whether they are in contact with a human being or a bot on the Internet.

Krzysztof Liedel

Director of the Centre for Research on Terrorism and Head of the Institute of Information Analysis at Collegium Civitas

The European Union should take up the topic of Internet traffic generated by bots. Such a mechanism should have a formal and legal character, which would mean marking or indicating which content is natural and which is generated artificially. It could be done; such technologies are already appearing.

However, introducing such a solution would be difficult because it requires consensus. Today, many entities have an interest in leaving things as they are and not separating artificially generated content from content created by people. That being said, I have no doubt that introducing a provision requiring such a distinction is well-founded.

Anna Mierzyńska

Internet policy analyst focusing on network traffic, manipulation and disinformation

Technically, it is not always possible to answer quickly and reliably whether the account you are interacting with is a bot or a real user, especially since there are intermediate cases: so-called cyborgs, accounts that combine automated activity with human activity.

Moreover, algorithms are not always able to recognise whether we are dealing with a real or a fake account. There are therefore probably technical limitations that make it impossible to comply with the demand that we be informed whether we are dealing with a human or a bot, and this is likely why it was not included in the recommendations presented by the European Commission.

Kamil Basaj

Founder and head of the Info Ops Polska project, adviser to the Minister of National Defence

Considered from the perspective of influence operations carried out through the virtual information environment, the recommendation that network users be informed about the nature of the entity they are communicating with is of negligible importance.

Communication activities and disinformation operations carried out by professional entities have a much more complex structure than actions that merely "saturate" the information environment with automated tools.

Automated activities often serve to divert attention or to mask the actual informational and psychological operations.

A recommendation requiring that the nature of an online message be marked may contribute to systemic efforts to raise network users' awareness of threats, but it should not be treated as a remedy for the threats posed by multidimensional information operations carried out by a state actor.

EU official dealing with disinformation

The virtual world would certainly become a somewhat friendlier place if citizens could be 100% sure whether they were dealing with a person or a bot. It would also make the fight against disinformation on the web easier.

Bots are estimated to account for about 15 per cent of all Twitter accounts. We know how they influence discussions on topics sensitive to Europeans, from Brexit to the Yellow Vest protests.

We are testing new methods of detecting them that do not rely on advanced technological solutions. We know the warning signs well (account names, language, unusually high activity, etc.), but that knowledge does not give us 100% efficiency in detecting bots. For that, you need more advanced tools and technologies, whose availability is currently limited.
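To make the idea of "warning signs" concrete, the sketch below shows how such signals might be combined into a simple score. It is a minimal illustration in Python; the features, patterns and thresholds are hypothetical assumptions, not the actual methods used by EU analysts, and a high score only flags an account for human review rather than proving it is a bot.

```python
import re
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    tweets_per_day: float
    followers: int
    following: int

def bot_warning_score(acc: Account) -> float:
    """Combine simple warning signs into a heuristic score in [0, 1].

    All thresholds are illustrative assumptions; none is a reliable
    test on its own, which is why heuristics like these cannot reach
    100% detection.
    """
    score = 0.0
    # Warning sign: machine-generated account name, e.g. 'user19449321'
    if re.search(r"\d{8,}$", acc.name):
        score += 0.4
    # Warning sign: suspiciously high posting frequency
    if acc.tweets_per_day > 72:  # more than ~1 tweet per 20 min, nonstop
        score += 0.4
    # Warning sign: follows far more accounts than follow it back
    if acc.following > 10 * max(acc.followers, 1):
        score += 0.2
    return min(score, 1.0)

suspect = Account(name="patriot19449321", tweets_per_day=150,
                  followers=12, following=2400)
print(bot_warning_score(suspect))  # 1.0 -> worth a closer, human look
```

Each of these signals also matches some legitimate, highly active users, which is one reason such screening yields suspicion rather than certainty.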

An additional challenge is time. Between the moment we begin to suspect the presence of bots in a given discussion and the moment we can identify that activity and describe it publicly, enough time passes for the bot operator to put the accounts to sleep or delete them.

Quicker identification of bots would allow a faster response to their activities and thus make Internet users more resilient to attempts to manipulate discussions.

A very important question is whether it is possible to detect 100% of bots, and even if that is possible today, whether it will remain as effective in a few months. As detectability grows, so does the "cunning" of bot creators. The question, therefore, is how realistic it is to implement the above recommendation.

The next question is how to implement such a recommendation. Would an EU law solve the problem of detecting bots? It would be crucial to establish who bears the responsibility for identifying bots and informing users which accounts are bots and which are human. The cooperation on identifying disinformation and closing suspicious accounts that the European Commission began with online platforms (Facebook, Twitter, Google) a few months ago shows that hard law does not have to be the starting point.

Adam Lelonek

President of the Board and co-founder of the Foundation for Analysis of Propaganda and Disinformation

The demand that EU citizens be informed whether they are encountering a human being or a bot on the Internet raises more doubts than it solves problems. Several questions immediately arise: who would carry out such identification, and who would bear responsibility for errors connected with it?

It must also be remembered that there are semi-automated accounts which are nevertheless operated by real users most of the time. Such a prescription may simply be unrealistic.

Even the largest companies, such as Facebook or Twitter, with their enormous budgets, are unable to effectively counteract fake news or keep up with fact-checking. And bot activity also takes place outside social media: in national online media, which do not have the financial resources of the global giants, and on blogging platforms. Bots are also active in audio and video content.

Maia Mazurkiewicz

Coordinator of the European Front, which runs a Facebook group called Keyboard Warriors

Over 79 per cent of us trust what we read on the Internet; in addition, we have trouble distinguishing facts from interpretation or opinion. We also do not know whether the content we read has been created and disseminated by journalists and ordinary Internet users, or by specialised companies and bots.

The effective promotion of certain content in social media can lead to changes in social norms. The attitude towards the EU is an example: surveys and opinion polls show that the vast majority of Poles support Poland's membership in the EU.

However, social media monitoring (so-called social listening) shows that more than 80 per cent of user-generated messages about the EU are negative or false. This dissonance can create the false impression that supporting the EU is no longer a social norm.

That is why it is so important not only to verify content and fight lies and manipulation but also to clearly label which content is paid advertising, which comments are the opinions of social media users, and which have been prepared by employees of marketing companies (or by bots) hired for political campaigns.

As part of the Keyboard Warriors group, we fight false information about the European Union. In the never-ending fight against messages generated by bots or professional trolls, an obligation for social media platforms to inform users that they are in contact with content shared or amplified by bots could help.

* The question in the survey was: The European Commission has published a report on European media sovereignty, on the basis of which 14 recommendations were drawn up. These recommendations omitted the demand that EU citizens be informed whether they are in contact with a human being or a bot online. What would be the consequences of introducing such a policy for the area you deal with?

The article was compiled and edited by Piotr Górski.

This questionnaire is part of the #DemocraCE project organised by Visegrad/Insight. 

