While technology is becoming ever more digital, data-driven, decentralised and de-territorialised, democracy remains mostly analogue, value- and interest-driven, centralised, and geographically bound.
That we are living in a digital age is a commonplace today. Smart gadgets, the Internet of Things, artificial intelligence (AI), machine learning, blockchain and social media have become, or are fast becoming, part of our everyday life. The challenge for democracy is to provide meaningful content in an age where the political agenda is increasingly dominated by technological imperatives.
There is rising tension between the output-oriented logic of efficiency facilitated by technological development, and the input-oriented logic of legitimate decision-making anchored in the idea of democracy.
Technological advancements have disrupted patterns of social interaction and political institutions before, yet the contemporary challenges posed by technology deserve to be addressed in their own right.
AI and machine learning are in many ways natural responses to the explosion in available data and computing power. While we are still – according to the most optimistic estimates – decades away from artificial general intelligence, AI technology is used with increasing success in areas ranging from medicine through law to finance.
But how would AI alter democracy? First, it could change the way we make public policy. With further advances in data analysis and algorithms, experts, technocrats, or perhaps even politicians may be substituted by AI. After all, who needs complicated deliberations, which often lead to sub-optimal choices, when swiftness and efficiency can be guaranteed through the right coding?
This poses various challenges, two of which are highlighted here. If AI is making policy decisions based on big data, (1) who controls the quality and integrity of the data and code to avoid algorithmic injustice, and (2) what role remains for values, principles and morals to play in politics?
If the data used for machine learning is contaminated, since historical data is loaded with socio-economic and political inequalities, wouldn't we simply reinforce those existing inequalities?
Or, even worse, would the availability of AI deepen already existing divisions within society (i.e. between those with AI assistants and those without)? Additionally, data-based decisions could lead to personalised laws that create social tensions by erecting invisible barriers between people (e.g. what is allowed for one is prohibited for another), much like China's Social Credit System.
Furthermore, how can values and morals be incorporated into the code that runs AI? Would AI-assisted decisions mean that citizens free themselves from moral responsibility? If algorithms decide on work, loans, insurance, income distribution and so on, what room remains for political contestation? Who ensures the transparency of these algorithms?
Secondly, the impact of AI on future jobs is a hotly debated topic nowadays. But beyond the question of which professions AI will make obsolete, its potential to disrupt markets through the manipulation of algorithms must also be emphasised.
There is a new digital world called social media which connects us individually but polarises us as a society. Compared to AI, the impact of social media on the ways we govern ourselves is already, at least partially, traceable.
So how does social media alter democracy? As with AI, this article confines itself to one social and one institutional element.
Social media is tailored to instant and constant low-level feedback, a shallow undertaking as opposed to the deeper engagement that would be essential for deliberative democracy to thrive. Algorithms designed to maximise viewing time on social media come with costly trade-offs for democracy.
First, they challenge a coherent, consensus-based reality by amplifying division through echo-chamber effects and cyber-polarisation, whereby people are exposed only to, and reinforced in, their own beliefs.
Secondly, while perception is steered by filtering out unplanned and unanticipated encounters, troll farms and bots are also deployed to shift public opinion.
Thirdly, the digital cacophony actually undermines social trust as the abundance of available information shifts public attention from substantive issues to debates about the truthfulness of a claim.
Last, but not least, elections are becoming software wars often run on social media based on big data and algorithms. But how far this changes our voting behaviour and makes our elections less free and fair remains a question to be answered.
A way forward
Both of the above-mentioned examples of technological development underline one important conclusion: code has become power, and algorithms have become the new political currency. While this poses a clear threat to the healthy functioning of future democracies, there are measures that can be taken to minimise these risks.
First and foremost, code and algorithms have to be made as transparent as possible. This means not only developing legal measures in this field, addressing, among other things, the complexity posed by tech companies blurring the line between private and public, but also increasing governmental capacity to oversee these processes.
Secondly, and in relation to the first point, education is key: not only at the expert level, but also at the level of average citizens, making them more tech-aware and code-alert while also motivating them to break out of their echo-chambers.
Thirdly, election laws need to be updated to control the digital aspects of this fundamental institution of democracy. All in all, technological development can only be linked with liberal democracy if we are able to re-define and reinforce the idea of checks and balances in and for the digital age.
This article is part of the #DemocraCE project organised by Visegrad/Insight.