Key points
- Much of AI’s future may turn out to be more boring and uneventful than the headlines suggest.
- Immediate risks include issues like biased behaviours, misinformation spread by bad actors, and broader societal impacts.
- AI is already being used responsibly and expertly to drive new innovations.
It's clear 2023 has become ‘the year of AI’. Since ChatGPT was launched late last year, a plethora of evangelists and doomsayers have taken the opportunity to tell us exactly how artificial intelligence (AI) will change the world.
One side heralds AI as the ultimate productivity saviour, promising a life free from mundane tasks. The other side paints a terrifying portrait of AI unleashing chaos and destroying humanity. Without wishing to add yet more predictions, the likelihood is that neither of these extremes will happen any time soon.
A universal productivity boost?
Let’s take productivity first. AI will supercharge us, right? Karim Lakhani, a professor at Harvard Business School, recently said we're all going to experience the productivity boost of AI. A study from the National Bureau of Economic Research found a 14 per cent productivity increase among customer service agents using an AI assistant. And OpenAI and the University of Pennsylvania found 80 per cent of the US workforce could have at least 10 per cent of their tasks affected by AI, while 19 per cent of workers might see at least half of their tasks impacted.
When it comes to individual tasks, there’s no doubt AI can bring productivity benefits. This has been demonstrated time and again across multiple industries. What is less clear is the overall effect of AI at an organisational or societal level. And history tells us the picture there might not be so rosy.
Beware the Solow paradox
In 1987, American economist Robert Solow famously quipped: "You can see the computer age everywhere but in the productivity statistics." The Solow paradox, also called the productivity paradox, remains visible today: investing more in IT across business processes can sometimes send worker productivity down instead of up.
Digital technologies, such as AI, often usher in big changes in the way we do business. If we design, develop, adopt and adapt wisely, we can achieve a productivity boost, even a transformational leap.
But this isn’t so easy in practice. There are many ways we can pick up the wrong digital tools and use them in the wrong ways. And don’t forget the mundane tasks don’t magically go away. Think of email. While email indisputably speeds up communication, it also creates a new set of tasks, like maintaining a clean inbox or reading all those CC messages. It also exposes us to a growing risk of cyber-attacks, where clicking a cleverly disguised malicious link in an email can halt the operations of an entire organisation.
A UK study called these new digital tasks ‘digi-housekeeping’: a form of productivity-sapping work that is rarely accounted for. Stuart Mills has pointed out that if tools such as ChatGPT merely automate bureaucratic inefficiencies, they won’t raise productivity at all, because those tasks were unproductive to begin with.
Speculating about superintelligence
Let’s examine the other extreme, where AI is anything but a productivity panacea. On this view, we’re a step away from an AI superintelligence that will destroy humanity. This viewpoint has been put forward by hundreds of AI experts who are signatories to two open letters: the first calls for a temporary halt on AI development, and the second puts AI in the same bucket as nuclear war when it comes to extinction risk. Geoffrey Hinton, one of the pioneers of modern AI, famously resigned from Google to speak publicly about what he considers a very real and very imminent threat to humanity.
There is, however, no evidence that such an imminent threat exists. No-one has suggested that current AI systems possess superintelligence (with the possible exception of ex-Google engineer Blake Lemoine, who claimed the chatbot he was testing was sentient). Given there is still little consensus on how we define human intelligence, what ‘superintelligence’ actually looks like is yet to be determined. The argument rests instead on the possibility that these AI systems could become superintelligent – more intelligent than humans – at some yet-to-be-determined time in the future. In other words, it is speculation.
Where should we focus?
There is a valid argument that if AI superintelligence is a possibility – even a very remote one – we should get on the front foot now. But this argument also distracts from the more immediate risks of current AI systems. These include:
- Their potential to hallucinate and to exhibit biased behaviours
- The ease with which they let bad actors spread misinformation or malevolently influence people
- Their broader societal impacts, such as on the environment or digital inequalities.
These are very real and well-evidenced dangers of AI for which we still don’t have solutions, despite years of solid and rigorous work towards ‘responsible AI’.
So, if neither of these two extremes – the utopian and the dystopian – is quite right, where does the truth lie? As is usually the case, the answer is somewhere in the middle. A lot of AI may turn out to be far more boring and uneventful (but still useful) than the newspaper headlines would have you think.
Transformative applications to the fore
If AI is used responsibly and expertly, it will undoubtedly lead to new innovations without destroying the world in the process. In fact, it already is. We are working with Google to use AI to help protect the Great Barrier Reef and to help manage bushfires. We are using specialist AI chatbots to assist with mental health challenges. And we are using AI to better understand the universe by detecting new galaxies.
Whether AI is an existential threat or not remains to be seen. But if AI is applied without proper due diligence, its problems will be with us for decades to come.
However, if AI is developed and applied ethically and wisely, it can certainly boost productivity and improve our quality of life. This is the AI future we want and can create.
A postscript: first author's note
I initially thought I could use ChatGPT to write this article. I couldn’t summon up the energy for a writing session, so why not use AI? I recorded myself speaking into a microphone with rough notes on ideas for the article, used AI to transcribe the recording into text, put the transcript into ChatGPT and asked for an 850-word article. Hey presto: it took about five minutes, and I had my article. Like most ChatGPT outputs, however, the result was banal and uninteresting. So I sat down and wrote this article myself – it took about two hours, with a further couple of hours of editing and fact-checking. One sentence survived from ChatGPT.
So, did ChatGPT increase my productivity? Probably not. But what it did do was give me confidence that I could write the article. By spending those five minutes with AI, I convinced myself there was an article in me struggling to get out!