How the election support field must adapt to artificial intelligence

Commentary

One of the most common fears people express about artificial intelligence (AI) is that its development will lead to less human control over our lives. We have long had an answer to threats to our autonomy as individuals and societies – democracy. Those who serve democratic processes have a special responsibility to shape a democratic future in which, to the greatest extent possible, AI benefits rather than harms our societies.

Driven by the will to win an election, political entities are often quick to integrate technologies into their work. Driven by the will to ensure credible elections, election officials, observers, and technical consultants must also show similar forward thinking – not just at election time, but across the whole election cycle.

The election support field has been slow to adapt to the technological advances of recent decades. Elections in which social media has played a central role have not always been accompanied by adequate voter education, counter-disinformation, and online campaign finance oversight.

With AI, however, the stakes are higher. We only get one chance to get this right.

There are five areas in which the election support field must adapt to the role AI will play in election processes:

1. International standards

Whilst technologies advance, fundamental democratic rights remain constant. Democratic elections must adhere to a body of international laws and treaties that states have agreed to. One of these, agreed to repeatedly by UN member states as far back as 2012, stipulates that all rights that apply offline apply online.

This longstanding consensus helped ensure international standards were seamlessly extended into the digital age. Now, international standards must be affirmed in a world being reshaped by AI. No new technology is of sufficient utility to justify a reduction in fundamental freedoms.

The need for more democracy must be at the heart of discussions on managing the rise of AI. The UK has recently led in establishing dialogue on international agreements on artificial intelligence. Globally, new legislation related to AI should be assessed by specialists for compliance with long-established fundamental democratic freedoms.

2. Counter-disinformation  

Artificial intelligence is making it difficult to tell fact from fiction.  
 
Picture a world in which anti-democratic entities can swiftly create clear, compelling, high-definition videos that appear to show candidates accepting bribes, cheating on partners, or disparaging the electorate. These videos might be created simply by vocalising the scene in question into a microphone. Thanks to AI, this reality is now with us, as underscored in Democracy Reporting International’s seminal report on text-to-image and text-to-video technology.  

Mitigating impact requires training elections professionals to tell fact from fiction, supporting independent journalism, and refining tools – many of which use AI, such as pattern recognition software – to identify AI-generated video manipulation. 

Russia’s information war on Ukraine, alongside increasing efforts by other anti-democratic actors to influence other countries’ democratic processes, has intensified international efforts to combat FIMI (foreign information manipulation and interference). FIMI is a particularly insidious type of information disorder, and AI offers the potential to make foreign interference in elections more effective than ever.
 
Elections professionals will need to routinely undertake FIMI forecasting, whilst greater support must be given to European Commission-facilitated efforts to share threat analysis across the counter-FIMI community. 

3. Strategic communications 

In the face of AI-based disinformation, world-class communications are critical for institutional effectiveness. Election observers and officials have long been targets of disinformation. This has ranged from unscrupulous politically biased groups muddying the waters between credible and non-credible observers, to a supposed observation mission broadcast on Cameroonian television under the alias of a legitimate human rights organisation.
 
AI will make such unfounded attacks far more believable. In doing so, it risks promoting the ‘Liar’s Dividend’: a loss of trust in all manner of experts and authorities. As WFD has emphasised, election practitioners must ensure effective communication to a clearly defined audience able to call out disinformation that seeks to discredit legitimate democratic rights defenders and democratic processes.  
 
Too often, well-intentioned actors are caught off-guard by outdated communications strategies. Combatting the power of AI-based disinformation will require better embracing data-driven communications, risk forecasting, short-form video, and 24/7 response capabilities. We must better understand how evolving disinformation campaigns are targeted through psychographic segmentation, and what this means for conducting effective counter-disinformation. Artificial intelligence may play a positive role in improving our strategic communications capabilities.

4. Voter education  

Voter education campaigns must educate populations about the capabilities of AI – including in schools. This is a necessary step to build resilience, particularly in countries where populations may be unaware of the degree of the threat. AI can be harnessed as part of the educative response to disinformation. WFD works to ensure that electoral management bodies (EMBs) across the globe can harness innovation from other contexts to maximise this impact. 

5. Personal data protection  

AI allows for unprecedented granularity of analysis and modelling using personal data. It may harness psychographics – the study and classification of people according to their attitudes, aspirations, and other psychological criteria – to manipulate the thinking of sections of the electorate, and the data used may not always be obtained lawfully. Research should be undertaken to understand vulnerabilities in any use of personal data to target political messaging, including by anti-democratic actors, and appropriate mitigation measures established.

If the rapid evolution of election technologies is not accompanied by corresponding progress in election scrutiny, the result will be a reduction in democratic governance, in both reality and perception. The growth of artificial intelligence in elections through ever-more accessible interfaces demands that the election support field rises to the challenge.  