Artificial intelligence (AI) offers unprecedented opportunities for democratic empowerment and societal progress. However, the manipulative use of AI can threaten democratic rights, including the rights to freedom of expression, freedom of thought, and genuine elections, whilst also posing grave threats to national security. This brief contains recommendations for democratic policymakers who seek to build resilience to the manipulative use of AI.
The challenge
AI has created a new arena of power competition. State and private actors are competing to shape this arena in favour of their political, economic and security interests. For some, the information ecosystem is not a venue for open debate but an opportunity to exert social control. Such actors pursue these interests in ways that threaten democratic processes and national security by, for example:
- conducting astroturfing operations to influence elections
- hosting AI assistant apps that censor factual content and store user data in autocratic jurisdictions
- generating misogynistic disinformation against female candidates
The breadth of their tactics underscores the need to engage a broad swathe of actors as part of the response.
Technological progress places further risks on the horizon, including:
- new capabilities to produce data-driven individualised disinformation at scale
- advances in text-to-image and text-to-video technologies that make it easier to produce manipulative content
- hardware improvements, such as more powerful semiconductors, that make AI more accessible, powerful, and affordable
- emerging advances in the stability and scale of quantum computing which, combined with AI, may threaten the cryptographic security of election systems
These developments underscore the need for democratic actors to share insight on emerging challenges and to face threats together.
The mitigation
Coalitions are groupings with some degree of permanence that amplify the benefits of collective action. Tim Niven, writing for the International Forum for Democratic Studies, notes that “Coalitions bring together diverse skillsets to catalyse the work of prodemocracy voices; they save costs, pool resources, and avoid the duplication of efforts”.
Successful coalitions require a clear assessment of incentives. For example, it may be unrealistic to assume that actors that use AI to violate human rights will voluntarily decide to comply with global accords that contravene their economic and security interests. Building collective resilience is not synonymous with building the broadest possible coalitions.
The progressive realist approach involves three main principles for harnessing coalitions. These correspond to a theory of change that identifies motive (incentives), means (coalitions of purpose), and opportunity (identified through risk-based approaches) as necessary preconditions for action.
The strategic approach
A clear-eyed understanding of incentives underpins the UK government’s overall approach to foreign policy, which it describes as ‘progressive realism’. This means recognising that states pursue their perceived self-interests, and working with that reality to pursue just ends. Progressive realism is ambitious in its aims, and realistic about the need to collaborate to achieve them.
Progressive realists regard building partnerships with like-minded allies as crucial to securing policy objectives. They are equally realistic about the extent to which actors which use AI for manipulation, or lack appropriate safeguards, can be trusted to collaborate in building a secure and democratic future for AI.
Policy recommendations
Meaningful collaboration to mitigate the manipulative use of AI typically depends on identifying and acting on incentives, mitigating risk, and working through coalitions of purpose. Some of the brief’s recommendations to strengthen these behaviours are summarised below.
Identify and act on incentives
- Disrupt AI-facilitated hostile operations at earlier stages through granular assessment of actor motivations and supply chains.
- Strengthen data protection safeguards.
- Expand sanctions registers, asset freezes and procurement blacklists.
Mitigate risk
- Bolster algorithmic transparency to limit opportunities for unchecked manipulation.
- Guarantee researcher access to platform APIs (application programming interfaces), including through meaningful enforcement of existing legislation.
- Take seriously the risks associated with artificial general intelligence (AGI) and artificial super intelligence (ASI).
- Invest in situational awareness and response capabilities globally, particularly ahead of high-risk periods such as elections.
Take action through coalitions of purpose
- Empower civil society organisations (CSOs) to contribute to common information-sharing architectures such as the DISARM frameworks.
- Strengthen collaboration on commercial standards and prevent autocracies’ lower standards from undercutting democratic actors.
- Build parliamentary resilience through peer-to-peer engagement.