The increasing impact of AI

This chapter is part of our policy brief that discusses where risks to democracy from AI are emerging, what a democratic response to AI governance and safety looks like, and the role of parliaments worldwide in enabling this response. It outlines how the democratic governance community can help plot a course of action to ensure that democracy is protected in the face of rapid AI advancements.

Authors

Alex Read, WFD Associate

Summary

AI is a once-in-a-generation technology shift, potentially on par with electricity as a general-purpose tool that will transform lives across the globe. The UK Government recognises that AI will “fundamentally alter the way we live, work and relate to one another … [promising] to further transform nearly every aspect of our economy and society, bringing with it huge opportunities but also risks that could threaten global stability and undermine our values”.

Key terms

AI is a complex term to pin down. A simple industry definition from IBM is “any system capable of simulating human intelligence and thought processes”. The EU Artificial Intelligence Act defines it more broadly as “software that is developed with one or more of the techniques that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. John McCarthy, a pioneer of AI, stated that “the ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans”.

There is no universal definition of AI safety. The Center for Security and Emerging Technology defines it as “an area of machine learning research that aims to identify causes of unintended behaviour in machine learning systems and develop tools to ensure these systems work safely and reliably”. Others define AI safety as more than a technical problem, focusing on the prevention and mitigation of the various harms and potential risks arising from the deployment of AI in society.

Frontier AI is defined for the UK AI Safety Summit as “highly capable general-purpose AI models, most often foundation models, that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models”.

Foundation models are built using vast amounts of unlabelled data. GPT-4, the Large Language Model (LLM) that underpins ChatGPT, is an example. Foundation models can be built on and tailored to specific uses or needs. They have been shown to produce remarkable productivity gains, performing a wide array of tasks, developing capabilities not envisaged when they were created, and outperforming task-specific AI models.

Generative AI creates new content such as text, images, video and audio. These systems use machine learning algorithms and statistical models to learn the patterns in data (such as the pixels in photographs, waveforms in audio, or words in text) and produce original outputs.

AI has developed rapidly due to the expansion of available training data, significant improvements in neural networks, and growth in computational power by a factor of one hundred million in the past ten years. AI is also becoming increasingly efficient at using data and compute, and as these resources expand, AI systems become more powerful.

AI now matches or surpasses human proficiency in many tasks, from near-perfect face and object recognition to real-time language translation. Advanced AI can produce original images, compose fluent text, write code, and even predict protein structures. Notable advances are also being made in areas once deemed uniquely human, such as strategy and creativity.

AI still has limitations. Generative AI confidently produces falsehoods (known as ‘hallucinations’). Issues around biased and unfair outputs, security and privacy vulnerabilities, and legal liability persist. Despite extensive research and investment, consumer-level autonomous driving remains out of reach. At present, we have relatively ‘narrow’ AI that performs well at fixed tasks. However, as the biggest technology companies have the means to scale AI training significantly, AI is likely to continue achieving new capabilities and overcoming limitations, with models becoming more efficient and cheaper and easier to build.

We may soon see increasingly autonomous AI agents that can strategise, divide goals into sub-tasks and take actions in their environment. Experimental projects such as AutoGPT, while not yet fully effective, aim to connect chatbots to web browsers, word processors and other tools so they can carry out sub-tasks autonomously. Prominent AI industry figure Mustafa Suleyman sees the next milestone as ‘Artificial Capable Intelligence’, achieved when AI can “go make $1 million on a retail web platform in a few months with just a $100,000 investment.” When this becomes possible, the implications will extend well beyond finance, bringing into view the new powers it would hand to a wide range of actors.

Where might we be heading? The explicit aim of the leading AI companies is to build Artificial General Intelligence (AGI): systems that can “match or exceed human abilities in most cognitive work”. As AI gets better at automating tasks such as programming and data collection, we may be surprised at how quickly it advances. Some leading experts believe that AGI will be achieved as soon as 2030, although both its feasibility and its exact timeline are fiercely debated by experts and researchers. What is broadly agreed within the AI industry and research communities is that conversations about safety and control are essential as progress makes AI systems more autonomous. This raises risks of malicious actors setting harmful objectives and of AI systems pursuing goals not aligned with human interests.

Why is AI groundbreaking technology?

AI is a groundbreaking technology for several reasons:

  • It can make decisions autonomously, an example being banks using AI for loan approvals.
  • It can generate novel ideas and insights by connecting unrelated information.
  • It is multi-use and dual-use. For instance, facial recognition that unlocks your phone can also be used to identify protesters by a repressive regime.
  • Its inner workings are not fully understood, termed the ‘black box’ problem. This makes it difficult to predict or change the behaviour of AI systems and means capabilities can emerge unexpectedly.
  • It amplifies human capabilities, opening the door for various actors, both state and non-state, to achieve their goals more efficiently. As the UK Government states: “Frontier AI will almost certainly continue to lower the barriers to entry for less sophisticated threat actors”.

Benefits and risks from AI

As AI enhances human intelligence, we are likely to see spectacular breakthroughs. Yet while the transformative benefits of AI are increasingly evident, there is also growing evidence of the harms it can cause and of the potential risks ahead. What are we already seeing?

  • Biases and discriminatory outcomes. Uses of AI for purposes such as predictive policing have been seen to reinforce societal discrimination. Because foundation models are trained on vast amounts of unstructured data, their outputs have mirrored the biases in that data, reproducing gender, racial and cultural stereotypes and prejudices.
  • Cybercrime. There have been cases of AI-generated voices deceiving the public and overriding bank security checks.
  • Synthetic and ‘deep fake’ media. AI has been used to create fake audio and video of prominent individuals, as well as non-consensual intimate imagery.
  • Security threats. There is evidence of people ‘jailbreaking’ LLMs, removing the training safeguards that prevent dangerous use cases. In one documented example, GPT-4 gave advice on planning terrorist attacks when prompted in languages such as Scots Gaelic or Zulu.
  • Infringement of copyright. Cases include AI companies being sued for training models on copyrighted material and for reproducing protected material in their outputs.
  • Exploitation of workers. Public AI products require extensive human input to ensure the quality and safety of outputs, and the workers who read and view graphic content to provide it have reported psychological trauma, low pay and poor working conditions.
  • Intrusive surveillance. There are examples of facial recognition and other technologies being used in public places, without scrutiny, to monitor, track and record the activities of individuals.
  • Job losses. Certain professions, such as software development, are already being automated, and Goldman Sachs estimates that generative AI could expose 300 million jobs worldwide to automation.

The AI Safety Summit focuses on risks from frontier AI. What does this mean?

Misuse by malicious actors. As frontier AI systems become more advanced and autonomous, the risk grows of malicious actors using them for cyber-attacks, the production of novel biological or chemical weapons, and social manipulation and deception. Restricting these uses will be very difficult, especially as AI tools are made publicly available through ‘open source’ models.

Loss of control of AI systems. If highly autonomous AI is built, we risk AI systems pursuing goals not aligned with human interests. At present, there is no reliable way to align AI behaviour with human values, and there are ongoing questions over how those values should be defined. Risk also arises from AI developers racing to build frontier systems in a bid to outcompete one another, neglecting safety testing and human oversight.

AI risk: The open source question

‘Open sourcing’ an AI model refers to making its design and architecture publicly available. This allows anyone to copy, fine-tune and use it for different purposes. Tech companies such as OpenAI and Anthropic have not published details of their models, in order to prevent misuse. Others, such as Meta, have released open-source models to the public. The argument for open source is that it fosters innovation, drives competition and provides greater economic opportunity, counterbalancing the monopoly power of big tech. It also allows civil society and researchers to independently test model safety. However, it potentially places damaging capabilities in the hands of bad actors, who can fine-tune models to perform a range of tasks such as producing spam and malware. Oversight and accountability for potentially damaging uses of open-source models will be essential.