Disinformation can be encountered in every area of life, and its consequences have a clear financial dimension. In the case of political fake news, the consequence may be the destabilisation of a country, which affects its economy. When disinformation concerns the market of publishers and advertisers, each company can measure its losses or misspent money. It turns out that the biggest cheaters and swindlers are bots.
Inefficiency and losses
According to research provided by advertisers, bot traffic in reach campaigns varies from a dozen or so per cent to as much as 50-60 per cent. It consists of fake views, click fraud, problems with advertising attribution and ad stalking. Bots are heavily involved in all of these activities.
“We realize the scale of the bot traffic only after campaigns end. We are not able to make any decisions while they are ongoing. It is a great limitation in terms of optimization,” noted Paweł Lewandowski from the MediaCom media house.
It is estimated that more than 50 per cent of annual traffic on the Internet is generated by bots. Among them are benign crawlers such as Googlebot and YandexBot, which make our lives easier. However, bots considered malicious generate about a third of all traffic, and a quarter of those are bots that pretend to be human. This is particularly important for advertising campaigns.
When it comes to fake accounts, we must remember that the scale of the phenomenon depends on the thematic area. For example, during the 2015 parliamentary elections in Poland, a third of political activity on Twitter was generated by bots.
The World Federation of Advertisers estimates that by 2025, losses from advertising budgets due to fraud caused by bots (ad fraud) will reach $50 billion a year, making it the second largest source of revenue for organised crime after drug trafficking. The first arrests in connection with such activities took place in November 2018, thanks to cooperation between Google and the FBI.
The scale of bot activity is so large, and its consequences so visible and severe, that discussions about the necessity of legal regulation should come as no surprise. The issue is being debated in political and business circles as well as in the publishing and advertising industries, which have become ever more aware of the problem.
Zofia Bugajna-Kasdepke, MSLGROUP CEE board member, pointed to global market leaders that are already actively involved in remedying the situation. Marc Pritchard of P&G, in “The New Media Supply Chain” on April 12, 2019, addressed the issue of eliminating the false traffic created by bots, which causes inefficient spending. The world’s largest advertiser has introduced the principle of transparency and control. Pritchard also argues that responsibility for the activities of the bot army should be taken by the platforms that allow such automated activity.
Bugajna-Kasdepke continued, saying that similar internal regulations are being introduced by the second giant of the FMCG market, Unilever. Keith Weed, former Chief Marketing and Communication Officer at Unilever, announced at the World Federation of Advertisers’ conference the implementation of greater rigour in online advertising.
The control criteria for trusted Unilever publishers will include more than just the “3Vs”: visibility, verification and (real) value of the ad. In order to earn the title of trusted publisher from Unilever, media companies will have to undergo audits regarding advertising fraud, online brand safety, advertising experience, traffic quality, ad formatting and data access, Bugajna-Kasdepke added.
During a seminar held under the Chatham House Rule, Bugajna-Kasdepke talked with sector representatives about the needs and challenges of the publishing and advertising industry in relation to the unfavourable role of bots.
A threat to reputation and credibility
From the perspective of people who work on the reputation and credibility of brands, the phenomena related to bots, and the traffic they generate, will only grow.
The credibility of the media has been devalued. Until now, everything that appeared in newspapers or on television was considered well-verified information; now the media are suffering a crisis of social trust. Online disinformation, which caused this crisis, makes it difficult for the publishing and advertising industry to respond effectively to existing threats.
In March 2019, the Polish television outlet TVN reported to the prosecutor’s office that the images of its biggest stars had been used in advertising campaigns carried out by bots. This harms not only the potential competitors of the advertised product; it also damages the reputation of the people whose images are used in this way, and thus of the other brands they are the face of.
There is still a lack of tools that would effectively protect brands against massive bot attacks, which may happen more and more often. At the moment, such tools are used only by some state institutions, such as the Central Anticorruption Bureau (CBA), and they are not aimed at securing the advertising market.
Furthermore, the dissemination of deepfakes, which is only a matter of time, will change the dynamics of disinformation and the image crises it causes. Society does not seem prepared to function in a world of widespread, untrue but very realistic videos. Yet preparing for this will be of great importance.
The temptation of profit
Publishers are currently struggling with bot traffic, but the truth is that they have brought this dilemma on themselves. The corporate drive for ever-better annual financial results pushes them towards unfair practices. Long-term thinking is missing from this strategy; the only thing that counts is meeting the bosses’ expectations.
In the absence of verification tools, artificial traffic is easy to generate: it is produced cheaply and sold at a high price. Changing this logic will require time and money, all the more so because it rests on a mechanism of self-deception.
The situation has not been improved by auditors who unreflectively shape the so-called “media field”. In a big tender, an agency that presents an offer based on real costs cannot compete with one using artificial traffic. In addition, the ease of entering the advertising market and of running campaigns for very large customers has fragmented the media landscape and the advertising market. Without proper verification tools, the rates offered to the client can be met easily and cheaply through artificial traffic, even though there are no real effects.
To heal the current situation and set realistic goals, every side of the market should work on raising standards. That would mean higher prices, and it would be difficult to explain, for example, that a campaign requires 30 per cent more budget because it is aimed at real traffic, not bots.
Furthermore, the ownership structure of the market makes change complicated. If shareholders themselves, focused on year-end profit, are the ones to decide, they have no special ethical expectations for how the company functions.
Motors of change
Recently, however, publishers have done some work to improve quality, offering buyers the possibility of verifying what they are purchasing. They have bowed to pressure from the agency market, which does not want to pay for fraud and has an interest in managing the problem.
Some players keep prices high, but the “many and cheap” approach remains dominant. Global competitors play an important role here by setting an example, as in the case of companies that withdrew their advertising from YouTube because their ads were displayed alongside content considered inappropriate.
There has been a shift in the industry towards greater transparency and a reluctance to pay for artificial traffic. A measurement standard could be very helpful here and could put an end to paying for fake views.
Calls for the establishment of an institution that monitors price levels on the domestic market may also bring some hope. Some time ago, such a revolution, introducing transparent price lists, was carried out in television. An external internet advertising auditor counteracting dumping would force agencies to compete not on price but on quality, better optimization, strategy and profit margins. This would lead to new market prices.
However, reactivity remains the motor of change regarding disinformation. The market adapts to negative trends by defending itself against them, so there are no positive incentives to create good content. As a consequence, content loses importance, since bots are already being used to create it. Yet it is quality that matters.
The need for regulation
One of the drivers of change is legal regulation, which should be set up at the European level, and very carefully. In Hungary, the government introduced a fixed advertising tax, but due to the global nature of the business, a marketer who does not want to pay it can create a hub, for example in Poland, to serve the Hungarian market.
A European Commission report on European media sovereignty also addresses the publishers’ perspective. It proposes, among other things, building platforms that would give publishers tools to compete in an advertising market dominated by the largest players, Google and Facebook, which receive 50 per cent of advertising budgets. However, it contains no proposals for the advertising industry.
Technologies in the service of law
Technology that can identify bots and, on that basis, decide whether or not to display an advertisement would be very beneficial for advertising agencies. Publishers would be the ones who “lose” the most, but many problems would be solved.
More and more advertising space is sold and bought through various ad exchange systems. This means it is not bought directly from the publisher: technology sits in the middle. Advertising sales and ad-buying tools theoretically have built-in anti-fraud functionality, but no one can say how effective it is.
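To make the idea of such anti-fraud functionality concrete, here is a minimal sketch of the kind of crude heuristic filter an ad-buying tool might apply to a click log. Everything in it is a simplified assumption for illustration: the field names (`id`, `ip`, `user_agent`), the marker list and the per-IP threshold are all hypothetical; real systems rely on far richer signals such as device fingerprints, behavioural patterns and machine-learning models.

```python
from collections import Counter

# Hypothetical markers of automated clients; real lists are far longer
# and maintained continuously.
KNOWN_BOT_MARKERS = ("bot", "crawler", "spider", "headless")


def suspicious_clicks(click_log, max_clicks_per_ip=20):
    """Return the ids of clicks considered suspicious.

    click_log: list of dicts with 'id', 'ip' and 'user_agent' keys
    (an assumed schema for this sketch). A click is flagged if its
    user agent looks automated, or if its source IP exceeds a
    plausible per-campaign click count.
    """
    per_ip = Counter(c["ip"] for c in click_log)
    flagged = set()
    for c in click_log:
        ua = c["user_agent"].lower()
        if any(m in ua for m in KNOWN_BOT_MARKERS) or per_ip[c["ip"]] > max_clicks_per_ip:
            flagged.add(c["id"])
    return flagged
```

Even a toy filter like this shows why effectiveness is hard to judge: a bot that spoofs a mainstream browser user agent and rotates IP addresses passes both checks, which is exactly the human-imitating traffic the article describes.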
In addition, even after fraud is detected, the question of enforcement remains. How do you prevent the money from going to the bot’s owner? The more unstandardised brokers, platforms and technologies there are along the way, the more difficult this becomes.
Another challenge is the automation of processes. Automated commercial transactions are estimated to be a $5 billion market in 2020. Problems with computing power or IT resources could lead to the bankruptcy of thousands, if not millions, of people.
The same can happen in the advertising market. If once you had to steal your first million yourself, today bots can do it for you. Where speed of decision-making or scale of operation counts, these machines have no equal. Moreover, they are second to none when it comes to slandering the competition.
At the same time, we can see that by drawing a clear distinction between bot and human activity, a company can build its brand position and trust. Many global players pursue policies based on ethical responsibility.
Maciej Surowiec, the EU Government Affairs Manager at Microsoft, explains: “In the field of artificial intelligence (AI), Microsoft relies on a number of rules. Our 10 principles cover a wide range of issues – from the requirements of transparency in the use of bots to ensuring human intervention in certain decision-making processes, when decisions taken autonomously by the AI may not be appropriate.” He continued, “Microsoft believes that the use of AI in certain contexts must be regulated.”
Transparency and social responsibility are profitable for companies.
Social changes and critical thinking
The publishing and advertising markets cannot abstract themselves from the social context in which they function. For new generations, the Internet and its rules have become a natural environment. They are preparing to function in the digital world, partly by learning which tools and instruments are used to achieve which goals.
The challenge also lies in the fact that, through the development of artificial intelligence, bots and algorithms are becoming ever more human-like. Without supporting technology, armed only with our natural faculties, it will be increasingly difficult for us to find our way around the digital world.
Critical thinking remains fundamental in the fight against disinformation. But it is easier to remain critical when we know whether we are dealing with a bot or a human being.