The challenge from AI is not to our intelligence: it is instead to our unique ability to think


I feel I should say something about artificial intelligence.

I am somewhat hindered in doing so by the fact that I am bemused by all the fuss about it.

I was an early adopter of IT as a business tool. I bought my first serious personal computer in 1984. The accounting practice I ran from 1985 never knew a non-IT based existence.

Similarly, I embraced networking, email, the web, and more as each became available. Some (most especially networking and email) had significant impact on working practices.

In the late 90s I was involved in two dotcom companies, one of which was quite successful, and both of which made a return for their shareholders.

As blogging, Twitter and much else have arrived, I have embraced those changes too. Again, each has radically altered work practices, not least mine.

And now the functionality of many of the tools used to date has been enhanced by, in essence, automating the intermediate instructions between defining some tasks and completing them. Not that there is anything desperately unusual about that: real world robotics seems to have been moving in this direction for some time. So too has gaming.

So what is all the fuss about AI, when so much change has already happened in my lifetime, and when I have been discussing AI since at least the mid-90s, when I had clients engaged in the field?

To some extent, I stress again that I am not sure what the concern is, most especially when I remember the not dissimilar paranoia about the web in the 90s.

To date IT has not created mass unemployment, even though it has fundamentally changed many jobs.

Likewise, the extraordinary power of the web has not rendered human thinking redundant.

Nor has IT, as yet, fundamentally changed human relationships, although the onset of decent high-quality headphones at affordable prices did mean I did not spend my sons' teenage years yelling at them to turn the volume down.

So, I am not panicking. But, that said, I do realise that AI creates risks because it replicates human skills, as IT has always done.

I also recognise that this means some jobs are under threat. But I also know that vast amounts of work currently go undone: there is no shortage of opportunities for gainful work in society.

And, of course, I recognise the risk from 'super-intelligence', most especially within politics, where 'normal-stupidity' is commonplace.

More particularly, the risk of further concentration of economic power in the hands of a few corporations is especially worrying.

The continued absence of effective means to properly tax IT companies might become an ever-bigger issue.

That will be exacerbated by government's growing need for revenue, most especially as the state becomes the major source of new employment in the essential public services that will be the real foundation of this new economy.

I also see the risk to the climate change agenda if AI absorbs vastly excessive energy, as Bitcoin does, without any net gain to humankind.

So, when it comes down to it, AI is all about a classic political economy power struggle. Corporations will seek power. Government must constrain them. Tax must be collected, probably in increasing amounts. People's dependence on the state will grow as the private sector's need for their skills declines. This will create tensions.

And in all that, the biggest challenge is to our ability to imagine new ideas, and not to our intelligence (the two not being the same thing, as some universities seem determined to prove).

Can we rethink the relationship between the state and private sectors?

Can we imagine a courageous state, rising to the challenge of creating employment to meet need?

Might tax be appropriately reformed?

Can the climate transition happen despite AI, which appears to be inherently energy intensive?

Might we learn how to imagine and counter AI threats?

Can crime detection adapt in an era when fraud will become much easier?

Can we adjust our ideas of liberty within these constraints, recognising that machines (and their owners) do not enjoy a special claim in that regard?

None of these are, I suggest, questions AI is able to answer, because they require an imagination of which I do not think it capable. In essence, they go to the soul of the issue, which is a domain to which I think only humans will remain privy.

And I may, of course, be wrong. But we have adapted so many times before, and I think we can again. I live in hope, whilst recognising the challenges.

