AI is a massive threat to stability – but not in the ways the government is concerned about

The government is staging a conference today on artificial intelligence, or AI. It is worried about the disruptive capacity of this new form of information technology. It has summarised its concerns in a publication timed to coincide with the conference. That summary says:

Generative AI development has the potential to bring significant global benefits. But it will also increase risks to safety and security by enhancing threat actor capabilities and increasing the effectiveness of attacks.

  • The development and adoption of generative AI technologies has the potential to bring substantial benefits if managed appropriately. Productivity and innovation across many sectors including healthcare, finance and information technology will accelerate.
  • Generative AI will also significantly increase risks to safety and security. By 2025, generative AI is more likely to amplify existing risks than create wholly new ones, but it will increase sharply the speed and scale of some threats. The difficulty of predicting technological advances creates significant potential for technological surprise; additional threats will almost certainly emerge that have not been anticipated.
  • The rapid proliferation and increasing accessibility of these technologies will almost certainly enable less-sophisticated threat actors to conduct previously unattainable attacks.
  • Risks in the digital sphere (e.g. cyber-attacks, fraud, scams, impersonation, child sexual abuse images) are most likely to manifest and to have the highest impact to 2025.
  • Risks to political systems and societies will increase in likelihood as the technology develops and adoption widens. Proliferation of synthetic media risks eroding democratic engagement and public trust in the institutions of government.
  • Physical security risks will likely rise as Generative AI becomes embedded in more physical systems, including critical infrastructure.
  • The aggregate risk is significant. The preparedness of countries, industries and society to mitigate these risks varies. Globally regulation is incomplete and highly likely failing to anticipate future developments.

There are some quite alarming elements to this thinking.

The first is just how short-term it is. 2025 is hardly a serious planning horizon, even if immediate risks need to be identified.

The second is how small-minded it is. For example, to suggest that 'productivity and innovation across many sectors including healthcare, finance and information technology will accelerate' can mean one of three things:

  • big increases in output per person;
  • fewer people employed to achieve the same output; or
  • new services supplied using the capacity created.

No hint is given as to which is meant. Just as importantly, nothing is said about the implications for wages, employment, profits, taxation, inequality, the macroeconomy, or the consequences for public services. Those issues do not appear to be on the agenda.

Nor are other practical issues even mentioned. Take, for example, the problem of marking coursework in schools and universities - which affects the assessment of millions of young people's education, and which will be a massive issue by 2025. If plagiarism is to be avoided, will productivity in this sector have to fall sharply as recourse is made to face-to-face examinations, for example? The consequence is not mentioned - but the cost could be substantial.

And then let's just consider the risk from GIGO - garbage in, garbage out. Artificial intelligence is at considerable risk of regurgitating the garbage fed into the system as garbage spewed out of it. Unless people retain the ability to spot utter nonsense from a few paces, AI might replace so-called decision-making (something the UK is already not good at) in a whole host of areas, with the massive risk that the economy will shortly be run on total nonsense forever more, simply because neoliberal thinking is assumed to be right when it very obviously is not.

AI does worry me. Most of all, it worries me as a means of producing opiates for the people: distraction techniques that provide cover for what is really happening in the world, amplifying much of what our so-called media already does. That could be most attractive to those who see in it the opportunity to concentrate economic power.

I note the issues the government is worried about. I also suggest that it is very largely missing the point. The threats are much more mundane and, at the same time, much more serious than it suggests.
