Alex Read, WFD Associate
As the United Kingdom (UK) holds the first-of-its-kind AI Safety Summit on 1-2 November 2023, the need for democratic responses to the transformative impact of AI grows stronger.
Artificial Intelligence (AI) is achieving breakthroughs in healthcare and education, helping mitigate climate challenges and contributing to global growth. It offers the potential to enhance democratic systems by improving access to government, streamlining decision-making and fostering new means of public participation.
However, as the technology advances and becomes more widely adopted, societal risks will grow. Threats such as AI-driven disinformation, increased surveillance, biased and discriminatory outcomes, and concentrations of power pose challenges to the security and stability of democratic structures and institutions.
The current and near-term risks from AI should compel democratic leaders to incorporate the safety of democratic systems in the discussion around AI safety. As well as discussing technical measures to progress towards safe AI, we need to focus on building political and societal resilience to the disruption that AI will bring.
The discussion on AI governance and safety must focus on core democratic values of transparency, accountability, public participation and inclusivity. To counter illiberal and repressive uses of AI, democracies will need to set a values-based example and demonstrate a coordinated approach.
The AI Safety Summit can provide an important step towards global agreement on AI safety that incorporates a broad array of risks from current uses of AI and frontier systems, including threats to democratic systems. The summit offers a valuable opportunity to:
- Launch an inclusive discussion to develop a shared understanding of the risks to democracy from AI.
- Establish a framework for future cooperation among democracies to proactively address AI-related risks and harness its benefits.
- Call for the international expertise necessary to provide impartial, reliable and timely assessments of the progress and impact of AI, and for research focused on measures to protect democratic systems.
About the UK AI Safety Summit
The UK AI Safety Summit will bring together global leaders and cabinet ministers, academics, civil society representatives, and heads of leading AI companies to discuss AI safety. The summit aims to develop a shared global understanding of the risks that may emerge from frontier AI and to make progress towards global approaches to managing these risks.
The five objectives that will be discussed at the summit are:
- A shared understanding of the risks posed by frontier AI and the need for action.
- A process for international collaboration on frontier AI safety, including how best to support national and international frameworks.
- Appropriate measures which individual organisations should take to increase frontier AI safety.
- Areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance.
- A showcase of how ensuring the safe development of AI will enable AI to be used for good globally.
The summit concentrates on frontier AI and will address two categories of risk:
- Misuse risks, for example where a bad actor is aided by new AI capabilities in biological or cyber-attacks, development of dangerous technologies, or critical system interference. Unchecked, this could create significant harm, including the loss of life.
- Loss of control risks that could emerge from advanced systems that we would seek to align with our values and intentions.
UK Government representatives have stressed that the summit does not aim to suggest specific forms of global regulation. However, Prime Minister Rishi Sunak has announced that the UK will set up the world’s first AI Safety Institute, building on the work undertaken by the UK’s Frontier AI Task Force.