The introduction of new technology into society is inherently political. Democracies worldwide will need to project a clear vision of AI governance and safety and make a public, positive case for how they will approach the transformative changes that AI is already bringing. What are the priorities?
Firstly, establish democratic oversight of AI safety. Leaving AI safety measures to be defined by the AI industry risks ceding democratic sovereignty. Other areas of societal risk, including medicine and nuclear power, have required governance measures to minimise dangers. To prevent risks from AI, democracies should establish clear and binding laws and treaties, together with strong democratic checks on AI development and deployment that uphold values of transparency, accountability, and protection of human rights. Establishing oversight now allows democracies to determine how powerful AI will be used for the public benefit, decide on measures to keep societies and democracies safe, and ensure that the benefits of AI are broadly shared.
Secondly, ensure participation and inclusivity in AI governance. Citizens in democratic societies must be involved in shaping the values that determine how AI is governed. An inclusive approach also helps predict and mitigate emerging risks from AI by giving a voice to groups across society who are being impacted.
Inclusivity at an international level means ensuring that countries less advanced in AI development but more vulnerable to impacts on democracy and society are given a voice in global AI governance. Developing countries require a path to digital development and must be supported to reap the benefits from frontier AI.
Thirdly, demonstrate democratic values at home and abroad. Democracies must coordinate to push back against illiberal and repressive uses of AI. When democracies use or export repressive technologies, they compromise civil rights, weaken the rule of law and diminish their own credibility. This sends mixed signals to the world and contributes to eroding democracy at home and abroad. Democracies must work together to enact domestic reforms and shape global norms on AI that protect privacy and other fundamental rights.
Demonstrating democratic values also means developing and communicating a clear strategy and vision for how AI can benefit democratic societies, how it can be harnessed as a public good and managed in the public interest. This builds public trust in a technology with the potential to have transformative benefits across sectors and contribute to national and global prosperity.
The central role of parliaments in AI governance and safety
Addressing complex, technical issues of AI safety requires effective, trusted and flexible governance institutions. Parliaments, as the key institution of democracy, will need to champion the public interest, establish robust accountability systems and devise new methods to gather public views on highly technical topics. We will need well-informed Members of Parliament, in-house expertise and routine engagement with the public and stakeholders across society.
In a fast-moving environment, democratic institutions will also need to be agile, examining their structures and processes to enable quick and decisive action that keeps pace with AI development. What actions can MPs individually, and parliament as a whole, take?
Harnessing parliament’s core functions
MPs can use their lawmaking role to consider measures that advance frontier AI safety, such as:
- Licensing requirements for companies building frontier AI systems, while guarding against the risk of regulatory capture.
- Establishing safety and transparency standards for AI developers in law.
- Including legal requirements for independent oversight and audit, including testing AI models for dangerous capabilities.
- Requirements that AI developers allocate a significant proportion of their research and development budgets to addressing safety and ethics issues.
While certain legal powers will be limited to countries with advanced AI sectors, MPs in all democracies can examine and reform existing laws to protect against threats to democracy, such as privacy and data protection laws. They can review and revise legal liability frameworks and anti-discrimination laws to ensure accountability to individuals and groups where AI is proven to cause harm. For specific use cases that damage democracy, MPs can consider new legal measures such as criminalising the production of deep fakes for political purposes, requiring labels on AI-generated synthetic content and mandating that the public be informed when they are engaging with AI online.
Parliament’s oversight role will be essential for AI safety. MPs can use their powers to gather information from the government and the AI industry through questions, debates, inquiries and hearings. They can advance AI safety by summoning industry leaders to testify about AI progress, requesting records on AI security issues and safety measures, and validating information by cross-examining AI experts. When international agreements on AI are in place, MPs will have a key role in monitoring compliance and scrutinising domestic capacity to implement agreements.
Committees play a crucial role in AI oversight. By holding public hearings with representatives from the AI industry, academia, ethicists, civil society, and the general public, they can foster an inclusive approach to AI governance. Committees will need strong ties with external bodies to gauge AI's influence on democracy and society, including organisations that research and classify AI risks and harms, and bodies conducting independent audits and human rights evaluations. Making committee reports and hearings public then helps to build trust in democratic institutions to tackle AI-related challenges.
When conducting budget scrutiny, MPs will need to ensure that regulatory and audit bodies are adequately funded to monitor the deployment of AI, and that the budget funds AI monitoring and audit, safety research and public education, while supporting innovative and beneficial uses of AI across sectors.
MPs are uniquely placed in their representation role to provide the public with a voice on AI governance and help build societal resilience against risks from AI. To help ensure AI aligns with core values, we need continuous public engagement to define shared values in democratic societies. Public meetings, surveys, site visits and social media engagement can help MPs listen to the concerns, values and diverse perspectives of their constituents. MPs should prioritise identifying and consulting vulnerable groups who might be disproportionately impacted by AI or unable to access AI-based systems.
Ensuring accountability for the impact of AI requires a well-informed and engaged public. MPs can work together with the media, education bodies, academia, and civil society to contribute to the public discourse around technological change, help counter AI-driven disinformation and mitigate the potential use of deep fakes to disrupt electoral processes. They can support public media literacy initiatives and advocate for AI literacy, ethics and safety to be incorporated into computer science, technology and civic education curricula.
Democratic institutions need to evolve as the technology evolves. Parliaments have an opportunity to be ambitious and visionary in addressing transformational AI. They offer institutional legitimacy to trial new deliberative processes for engaging the public on the opportunities and risks of AI. Experiments such as citizens’ assemblies (a decision-making framework in which randomly chosen individuals collaboratively develop policy solutions, also called ‘citizens' juries’ or ‘mini-publics’) may offer a novel means to channel ongoing public input into parliamentary processes.
Finally, parliaments can use international engagements and parliamentary diplomacy to help respond to threats to democracy from AI that cross borders. MPs are well-placed to raise issues of repressive or damaging uses of AI and work together with colleagues in other parliaments to coordinate policies to isolate and apply collective pressure to states that use AI for repressive purposes. Specific measures include human rights due diligence requirements and export restrictions on technologies used for surveillance and oppression. To protect the integrity of election processes, an international effort will be required to help electoral bodies and international observers to adapt to new uses of AI and address risks such as deep fakes.
The UK Government has invited China to the AI Safety Summit, signalling its view that non-democratic countries with advanced AI industries need to be involved in global discussions on frontier AI safety. However, democracies should also establish dedicated international mechanisms to coordinate and protect democratic systems and retain democratic influence in international standard-setting bodies.
Regional bodies have an important role in pooling resources, sharing expertise and providing opportunities for MPs to contribute to a collective, democratic voice in global AI governance and safety. ParlAmericas provides a good example of a regional group of parliamentarians formed around AI governance.