The government is staging a conference today on artificial intelligence, or AI. It is worried about the disruptive capacity of this new form of information technology. It has summarised its concerns in a publication timed to coincide with the conference. That summary says:
Generative AI development has the potential to bring significant global benefits. But it will also increase risks to safety and security by enhancing threat actor capabilities and increasing the effectiveness of attacks.
- The development and adoption of generative AI technologies has the potential to bring substantial benefits if managed appropriately. Productivity and innovation across many sectors including healthcare, finance and information technology will accelerate.
- Generative AI will also significantly increase risks to safety and security. By 2025, generative AI is more likely to amplify existing risks than create wholly new ones, but it will increase sharply the speed and scale of some threats. The difficulty of predicting technological advances creates significant potential for technological surprise; additional threats will almost certainly emerge that have not been anticipated.
- The rapid proliferation and increasing accessibility of these technologies will almost certainly enable less-sophisticated threat actors to conduct previously unattainable attacks.
- Risks in the digital sphere (e.g. cyber-attacks, fraud, scams, impersonation, child sexual abuse images) are most likely to manifest and to have the highest impact to 2025.
- Risks to political systems and societies will increase in likelihood as the technology develops and adoption widens. Proliferation of synthetic media risks eroding democratic engagement and public trust in the institutions of government.
- Physical security risks will likely rise as Generative AI becomes embedded in more physical systems, including critical infrastructure.
- The aggregate risk is significant. The preparedness of countries, industries and society to mitigate these risks varies. Globally, regulation is incomplete and highly likely to fail to anticipate future developments.
There are some quite alarming elements to this thinking.
The first is just how short-term it is. 2025 is hardly a serious planning horizon, even if immediate risks need to be identified.
The second is how small-minded it is. For example, to suggest that 'productivity and innovation across many sectors including healthcare, finance and information technology will accelerate' can mean one of three things:
- Big increases in output per person;
- A reduced number of people employed to achieve the same output; or
- New services supplied using the capacity created.
No hint is given as to what is meant. But, as importantly, no suggestion is made as to the implications for wages, employment, profits, taxation, inequality, the macroeconomy, or public services. Those issues do not appear to be on the agenda.
Nor are other practical issues even mentioned. Take, for example, the problem of marking coursework in schools and universities - which impacts the appraisal of the education of millions of young people, and which will be a massive issue to be faced by 2025. If plagiarism is to be avoided, will productivity in this sector have to fall massively as recourse is made to face-to-face examinations, for example? The consequence is not mentioned - but the cost could be substantial.
And then let's just consider the risk from GIGO - garbage in, garbage out. Artificial intelligence is at considerable risk of regurgitating the garbage fed into the system as garbage spewed out of it. Unless people retain the ability to spot utter nonsense from a few paces, AI might replace so-called decision-making (something the UK is already not good at) in a whole host of areas, with the massive risk that the economy will shortly be run on total nonsense for evermore, simply because neoliberal thinking is assumed to be right when it very obviously is not.
AI does worry me. Most of all, it worries me as a means of producing opiates for the people by creating distraction techniques that provide cover for what is really happening in the world, amplifying much of what already happens in our so-called media. That could be most attractive to those who will see the opportunity to concentrate the economic power that it provides.
I note the issues the government is worried about. I also suggest that it is very largely missing the point. The threats are much more mundane and simultaneously much more serious than it is suggesting.
Ministry of Truth anyone?
1984 had some ideas
I sense a lot of activity which is not going to be taxed.
AI has benefits, but I find myself deeply sceptical about them because, in my view, the drive to reduce labour costs is chiefly what lies behind it, in order to enrich owners and investors.
I share all your concerns but most of all AI stands for ‘Accelerating Inequality’.
On the subject of GIGO, AI could end up standing for 'Accelerating Ignorance'.
To me, what has driven political indifference to austerity and destitution in this country is that the policy community knows we will not need so many people to do things in future. Our fate is before us.
Austerity and the reduction of public services is their way of saying the unsayable.
‘We don’t need you, we don’t want you’. And of course, what happens to democracy after that?
Even that becomes redundant.
Good summary
Eventually ordinary people will become surplus to requirements. What then?
The wealthy will realise there is no one left to make them wealthy
The Age Of Surveillance Capitalism by Shoshana Zuboff is a must read to understand the real future threats we face. Scary reading.
There is already significant concern about the AI tools implemented across government and the police.
https://www.theguardian.com/technology/2023/oct/23/uk-risks-scandal-over-bias-in-ai-tools-in-use-across-public-sector
Rishi Sunak disbanded the body (CDEI) created to oversee the use of AI in government departments. He shifted the emphasis towards the more dramatic issues that you have outlined above, which give him a better platform and boost his public profile.
It’s absolute madness. Anyone who has been involved in government IT projects knows that the lowest bid gets the job, the specifications are likely to be wrong, and corners will be cut to “go live” on the target date.
People are already suffering the consequences of badly designed AI projects across government, and Rishi is not interested.
Re: exams.
A better way than switching to oral examinations is to restructure education: not as a means of inculcating facts into people’s brains, but as learning to assess critically the information that is readily available to anyone and to use it creatively to solve problems.
Granted, that is a project for well beyond 2025.
Accepted.
But we will still need to appraise progress.
I notice that the risks document doesn’t seem to make any mention of risks in terms of monetary or environmental cost. I guess these points may be out of scope for that particular document, but they are risks that the government can’t ignore.
Already, new AI systems are going the way of cryptocurrency, consuming massive amounts of energy (when we need to be reducing energy waste to ease the transition to green energy). That is massive energy use for not a whole lot of advantage over older methods (for certain use cases, such as search).
From https://www.cell.com/joule/fulltext/S2542-4351(23)00365-3
And it doesn’t mention that, right now, many generative AI services are running at a loss. That makes AI look cheap enough for businesses to use, but it is a balance that will need to be corrected at some point. If companies lock themselves into these services by replacing trained human workers, they will be hit later on when prices are ramped up.
From https://www.theregister.com/2023/10/11/github_ai_copilot_microsoft/ https://www.washingtonpost.com/technology/2023/06/05/chatgpt-hidden-cost-gpu-compute/
Your highlighting the issue of non-experts being unable to identify nonsense output is welcome, and needs to be more widely discussed.
Thanks
Good points