Annex B

This annex is part of our policy brief that discusses where risks to democracy from AI are emerging, what a democratic response to AI governance and safety looks like, and the role of parliaments worldwide in enabling this response. It outlines how the democratic governance community can help plot a course of action to ensure that democracy is protected in the face of rapid AI advancements.
Author

Alex Read, WFD Associate

Summary

The AI governance landscape

Why is regulating AI complex?

  1. The speed and unpredictability of change pose problems for traditional lawmaking processes. For instance, few people predicted that generative AI would begin to automate creative industries so soon, and new capabilities of AI systems can emerge unpredictably during and after deployment.
  2. AI’s complexity creates an asymmetry in knowledge and resources between democratic institutions on one side and AI developers and technology companies on the other, raising the risk of regulatory blind spots or regulatory capture.
  3. Because AI is a transformative technology, effective governance will need to go beyond legal and technical expertise and take in ethics and sociology. Achieving consensus on the right approach across these fields is no small feat.
  4. It is not clear where regulation should focus. For example, when AI causes harm to the public, where in the supply chain does accountability fall? Even where governments are clear about an area they want to regulate, acting can be easier said than done: bias, for example, is often embedded in the training data and very difficult to attribute.
  5. National regulation is important but insufficient, as systems developed in one country will be deployed in others, increasing the need for global coordination.

What regulatory approaches are we seeing?

Regulatory approaches to AI are broadly categorised in the 2023 State of AI Report as:

  • Relying on existing laws and regulations. Light touch and pro-innovation, this approach does not envisage new AI-specific regulation. Examples include India and the UK.
  • Wide-ranging AI-specific legislation. The European Union (EU) has pioneered the introduction of AI legislation focusing on different risk categories. Legislation in China requires AI-generated content to be labelled, developers to register algorithms, and ‘security assessments’ for AI deemed capable of influencing public opinion.
  • Hybrid models. Slimmed-down national regulation or a reliance on local laws, with an emphasis on voluntary commitments. The United States (US) currently demonstrates this model.

What approaches to AI governance are emerging at international level?

The field is crowded, with a range of initiatives to establish ethical frameworks and principles.

  • The OECD AI Principles, based around human-centred values, fairness and the rule of law. They promote AI that is transparent, explainable, robust, secure, and safe, with accountability mechanisms in place. They were the first set of AI principles to receive buy-in from international leaders.
  • The Global Partnership on AI aims to foster a collaborative effort across 29 countries on AI research and global policy development.
  • At a summit in May 2023, G7 leaders initiated the Hiroshima Process to help advance and harmonise AI policy, with a focus on generative AI.
  • UNESCO’s Recommendation on the Ethics of Artificial Intelligence emphasises four core values and ten principles for a human rights-centred approach to AI. It has broad buy-in from countries in the Global South, as well as from China and Russia.
  • The International Telecommunication Union (ITU)’s AI for Good initiative focuses on the developmental benefits of AI and applications that can contribute to the Sustainable Development Goals (SDGs).
  • The EU and US have announced that they are working on a joint AI code of conduct, which will include non-binding international standards on issues such as risk audits and transparency.
  • An international treaty on AI is being finalised by the Council of Europe. Signatories will need to take steps to ensure that the development and use of AI respects human rights, democracy and the rule of law.

How is the AI industry leading the way?

Various governance initiatives have been called for from within the AI industry. Some of the most prominent are:

  • In March 2023, AI developers and AI luminaries publicly called for a pause on the training of new, more powerful AI systems.
  • Leading AI labs have formed the Frontier Model Forum – a body designed to promote the responsible development of frontier models and to share knowledge with policymakers.
  • Certain AI labs propose ‘responsible scaling’ – continuing to develop frontier AI but pausing if progress outstrips current safety protocols. However, without independent oversight, this approach may cede AI safety decisions to the labs themselves.