This foreword is part of our policy brief that discusses where risks to democracy from AI are emerging, what a democratic response to AI governance and safety looks like, and the role of parliaments worldwide in enabling this response. It outlines how the democratic governance community can help plot a course of action to ensure that democracy is protected in the face of rapid AI advancements.

Foreword by Anthony Smith, CEO, Westminster Foundation for Democracy.

This paper was published in advance of the AI Safety Summit on 1-2 November, 2023. We strongly welcome key outcomes of the summit, including the consensus achieved with the Bletchley Declaration, agreements on AI safety testing and the establishment of an AI Safety Institute. The number of countries attending indicates a strong scope for agreement on the importance of a global approach to AI safety. In advancing a democratic approach to global AI safety, we would like to add the following reflections:

First, the Bletchley Declaration has advanced work to address the opportunities of AI and the threats it poses. We noted in particular the call in the declaration for countries to develop “a pro-innovation and proportionate governance and regulatory approach that maximises the benefits and takes into account the risks associated with AI”, and we also welcome support for “development-orientated approaches and policies that could help developing countries strengthen AI capacity building and leverage the enabling role of AI to support sustainable growth and address the development gap.”

However, meeting these priorities will be a significant challenge for many countries and will require dedicated international assistance. We call for such support to focus in particular on the critical role of parliaments in ensuring democratic oversight of AI governance, policy and regulation.

We welcome the need for “human-centric, trustworthy, and responsible AI” and the emphasis on the “protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection”, all of which can help address risks to democracy from AI. In achieving this vision for societies with advanced AI, we call for an emphasis on wide-ranging inclusive public participation to help foster public trust and confidence in AI. Parliaments will have a key role in voicing public concerns around societal risks and in supporting a public dialogue on societies with transformative AI.

Our second reflection is that there is more to do to ensure that a democracy lens is applied to the follow-up. It is clear to us that more needs to be done to safeguard the integrity and security of our democratic systems against current and future threats from AI. While we agree that risks from AI are “inherently international in nature, and so are best addressed through international cooperation”, we believe it is critically important that those committed to democratic governance in their countries and societies coordinate specifically around addressing threats to democracy arising from the increasing use of AI, while also considering its opportunities. The spread of countries and organisations attending the summit was very valuable, and we also call for dedicated international mechanisms that can respond to threats to democracy from AI that cross borders.

In our view, and as set out in our paper, we need:

  • A shared understanding of the opportunities and risks from AI for democratic systems;
  • Accelerated work among democracies and those supporting democracy to address those opportunities and risks; and
  • Development of international expertise to provide impartial, reliable and timely assessments about the progress and impact of AI and research focused on measures to protect democratic systems. This can potentially be achieved through the future work of the AI Safety Institute, which can be expected to report to the UK Parliament, as other independent institutions do.

We look forward to continuing the engagement with partners on these issues.