Alex Read, WFD Associate
AI can offer substantial benefits for democratic systems:
- Improving access to government. AI is supporting new means for the public to access government services and receive legal advice.
- Enhancing public participation. Generative AI can help citizens to articulate their insights in public consultation processes, overcoming barriers of language, education and disability.
- Supporting more efficient decision-making. AI can help synthesise and map inputs from the public, draw out themes, highlight well-supported arguments, and isolate false claims or misinterpretations.
- More efficient and effective public services. AI can improve government service delivery by helping allocate resources more efficiently, reducing fraud and error, predicting issues such as public health crises and improving personalisation of services.
- Reducing divisiveness of public discourse. AI chatbots can be fine-tuned to help discuss sensitive topics with the public and overcome divisiveness.
However, while many of these benefits are yet to be realised, current uses of AI already pose risks to the foundations of democracy in the following ways:
Compromising the information environment
Generative AI may open opportunities for new actors to produce disinformation, lowering the cost of producing messages while increasing their quality and personalisation. The ability to automate the production of text and other media may saturate the public information space that underpins democracies, causing the public to lose trust in authentic news reporting, public safety messages and legal processes, where truth is critical. This can undermine democratic processes and harm freedom of expression.
Subverting public consultation processes
Generative AI can automate and scale up ‘astroturfing’: fake grassroots campaigns, often delivered via fake social media accounts or bots, that give the impression of genuine public support or opposition to an idea or cause.
Expanding surveillance
AI excels at producing surveillance instruments, taking humans out of the loop in sifting through huge flows of information. The expansion of AI surveillance can undermine fundamental rights to privacy and free expression, threatening civic space and helping entrench illiberal or authoritarian regimes. Surveillance also remains a problem within democracies, with some governments using facial recognition systems, biometric identification for national security purposes and predictive policing for law enforcement. In addition, large Western tech companies have normalised a surveillance-based business model, collecting and monetising user data in exchange for free services.
Affecting electoral processes
‘Deep fake’ AI content poses increasing risks of election manipulation. There are already examples of AI-produced audio affecting elections by damaging the reputation of parties and candidates, such as in Slovakia. Deep fakes are spread on social media, which has already been shown to algorithmically amplify sensational and divisive content.
Impacting political discourse
Deep fakes also enable a ‘liar’s dividend’ whereby political actors claim real content is fake to avoid sanction. An example from India involved a state MP who was recorded accusing party members of corruption. The MP claimed the recording was ‘fabricated’ by AI, but experts believed it was likely genuine.
Damaging social cohesion
Biased outputs from AI systems can reinforce damaging stereotypes, marginalise minority groups and increase polarisation and inequality in society. Generative AI also has capabilities to mass produce content that promotes and amplifies incitement to hatred, discrimination or violence on the basis of race, sex, gender and other characteristics.
Power concentration and increasing inequality
As AI development is driven by a small number of companies, we risk a concentration of market power that weakens competition and consumer choice. Globally, the IMF suggests a severe widening of the gap between rich and poor nations owing to the concentration of the AI industry in advanced economies.
Risks to less resilient democracies
There are specific risks to less resilient democracies:
- Generative AI’s potential to supercharge disinformation may be felt even more in countries with lower levels of digital literacy, where a less robust press may struggle to push back.
- Deep fakes may have more of an impact in fragile democracies and countries experiencing war and instability.
- Surveillance capacities may be increasingly tempting to governments in new or fragile democracies seeking to consolidate control. African governments are spending over $1 billion on surveillance technology. Reports indicate that Chinese firms supply surveillance tools to 63 countries globally; however, companies based in democracies often match or surpass Chinese sales.
If significant harms from AI emerge soon, the public may lose trust in governments’ ability to keep pace with emerging technology and keep societies safe. This could fuel public anxiety about AI and compromise the realisation of its benefits. Emerging risks from AI are also arriving at a time when democracy is in decline globally. If AI erodes democracy in the ways described above, we will also lose the power to control potential long-term risks from frontier AI. The near-term risks therefore compel an immediate debate within and across democracies and a rapid international response.
Therefore, a proposed sixth objective for the UK AI Safety Summit is: Safeguarding the integrity and security of our democratic systems against current and future threats from AI.
This objective could be raised in discussions between democratic leaders on the sidelines of the upcoming summit, with dedicated events after the summit to further elaborate an approach across democracies worldwide to protecting democratic systems, human rights and fundamental freedoms.