Does AI pose an existential risk? We asked 5 experts

NewsDrum Desk

Brisbane (Australia), Oct 6 (The Conversation) There are many claims to sort through in the current era of ubiquitous artificial intelligence (AI) products, especially generative AI ones based on large language models or LLMs, such as ChatGPT, Copilot, Gemini and many, many others.

AI will change the world. AI will bring “astounding triumphs”. AI is overhyped, and the bubble is about to burst. AI will soon surpass human capabilities, and this “superintelligent” AI will kill us all.

If that last statement made you sit up and take notice, you’re not alone. The “godfather of AI”, computer scientist and Nobel laureate Geoffrey Hinton, has said there’s a 10–20 per cent chance AI will lead to human extinction within the next three decades. An unsettling thought – but there’s no consensus on whether or how that might happen.

So we asked five experts: Does AI pose an existential risk? Three out of five said no. Here are their detailed answers.

1. The real risk isn't AI becoming too smart

No.

The way technology currently stands, the real risk isn't AI becoming too smart – it's humans making poor choices about how we build and deploy these tools.

Current AI systems, though impressive, are still fundamentally "stochastic parrots". They engage in sophisticated pattern-matching that simulates intelligence through prediction. And mounting evidence shows this foundation is surprisingly brittle, unreliable and incompatible with reasoned thought, let alone intelligence.

Current technological progress in generative AI is impressive, even transformative. However, these gains are incremental and unsustainable, and the approach to AI development in this field is inherently limited.

While future technology paradigms could shift this assessment – and such advances are notoriously difficult to predict – all AI risks ultimately stem from human choices. The path to flourishing societies runs through better human leaders, institutions and governance.

Rather than speculating about a hypothetical superintelligence that might annihilate humans, we should focus on oversight frameworks and responsible development. Existential risks aren't lurking in the code. If anywhere, they’re in our decisions.

2. Leading AI models are rapidly gaining general-purpose capabilities

Yes.

I strongly believe that artificial intelligence poses an existential threat. So far, today’s systems exhibit only what’s known as weak or narrow AI – limited capacities for dedicated tasks, rather than human-level intelligence (known as general AI). However, leading models are rapidly gaining general-purpose capabilities that make large-scale misuse more likely. The more capable AI becomes, the more it can also erode human control over it.

Hundreds of leading experts, public figures and researchers have warned that “mitigating the risk of extinction from AI” should be a global priority. Surveys of machine learning researchers put the median probability of extinction-level outcomes in this century at about 5 per cent. This is a small percentage, but not negligible.

Governments that signed the Bletchley Declaration in 2023 have acknowledged “catastrophic” risks. AI research labs are currently tracking concrete hazard categories, such as misuse of AI for large-scale biological and cyber attacks.

An existential catastrophe driven by AI isn’t inevitable, but wisdom dictates we need enforceable “red lines” AI can’t cross, along with rigorous pre-deployment testing, and independent evaluation as capabilities scale. We must act now before it is too late.

3. AI is only as good as the people wielding it

No.

Beyond the now familiar generative AI chatbots, the future of AI more broadly remains unknown. Akin to other revolutionary technologies – think nuclear power or the printing press – AI will change our lives. It already has. Today’s job market is unrecognisable, teaching practices unfamiliar, and decision-making – whether in health care or the military – is often now shared with machines. So, from that perspective, perhaps AI does pose an existential threat.

But therein lies the problem. We talk about AI as if it were propelled into our lives from outer space; the way these systems can act autonomously is fuelling this narrative. But AI was not made in outer space. The code in AI systems is written by humans, their development is funded by humans, and their regulation is governed by humans. So, if there is a threat, it would appear to be human, not machine.

AI is an extraordinary tool that can undoubtedly help the human race. But like any other tool, it’s only as good as the people wielding it. As we face the AI revolution, an increase in critical thinking and a decrease in learned helplessness might now be in order.

4. The clearest existential pathway is militarisation

Yes.

I am more concerned that humans will use AI to destroy civilisation than AI doing so autonomously by taking over. The clearest existential pathway is militarisation and pervasive surveillance. This risk grows if we fail to balance innovation with regulation and don’t build sufficient, globally enforced guardrails to keep systems out of bad actors’ hands.

Integrating AI into future weapons would reduce human control and lead to an arms race. If mismanaged, binding AI to national security could even risk an AI-driven world war.

One way to minimise the existential risk posed by AI is to categorise AI systems by risk and impose limits on how they are used in defence, with mandatory human oversight in high-risk contexts.

Alongside governance, we should ensure AI systems represent the world accurately, without bias, distorted facts or hidden agendas – but that alone isn't enough. Safety also depends on aligning the objectives of AI with human values, maintaining human control over these systems and rigorously testing them at every step. This way, we can peacefully co-exist with AI systems.

5. The "intelligence" of generative AI is seriously limited No.

There is little evidence that a superintelligent AI capable of wreaking global devastation is coming any time soon.

Current concerns – and hype – about existential risks to humanity stem from advances in generative AI, especially large language models such as ChatGPT, Claude, Gemini and others. Such AI makes pattern-based predictions about what text is likely to satisfy a particular user need based on a prompt they have typed in.

Many experts believe that the “intelligence” of this kind of AI, though impressive in its own way, is seriously limited. For example, large language models lack a reliable capacity for logical, factual and conceptual understanding. In these crucial ways, AI cannot understand, and thus cannot act, as humans can.

A future technological breakthrough enabling AI to overcome these limitations cannot be ruled out. But nor can it be assumed. Overemphasising speculative threats of superintelligent AI risks distracting us from AI’s real harms today, such as biased automated decision-making, job displacement, and copyright infringement. It could also deflect from genuine existential dangers such as climate change – a danger to which energy-hungry AI may itself contribute. (The Conversation)