The United States’ new military strategy is a case of ‘AI peacocking’

NewsDrum Desk

Sydney, Jan 22 (The Conversation) The United States is set to become “the world’s undisputed [artificial intelligence-enabled] fighting force”.

At least that’s the view of the country’s Department of War, which earlier this month released a new strategy to accelerate the deployment of AI for military purposes.

The “AI Acceleration Strategy” sets an unambiguous objective: establishing the US military as the frontrunner in AI warfighting. But the hype throughout the strategy ignores the realities and limitations of AI capabilities.

It can be thought of as a kind of “AI peacocking” – loud public signalling of AI adoption and leadership, which clouds the reality of unreliable systems.

What does the US AI strategy entail?
------------------------------------

Several militaries around the world, including those of China and Israel, are incorporating AI into their work. But the AI-first mantra of the US Department of War’s new strategy sets it apart.

The strategy seeks to make the US military more lethal and efficient. It presents AI as the only way to achieve this goal.

The department will encourage experimentation with AI models. It will also eliminate what it calls “bureaucratic barriers” to implementing AI across the military, support investment in AI infrastructure and pursue a set of major AI-powered military projects.

One of these projects seeks to use AI to turn intelligence “into weapons in hours not years”. This is concerning, given how this kind of approach has been used elsewhere.

For example, there are ongoing reports about the increased civilian death toll in Gaza resulting from the Israeli military’s use of AI-enabled decision support systems, which essentially turn intelligence into weaponised targeting information at an unprecedented speed and scale. Further accelerating this pipeline risks unnecessary escalation of civilian harm.

Another major project seeks to put American AI models – presumably ones intended to be used in military contexts – “directly in the hands of our three million civilian and military personnel, at all classification levels”.

It is not made clear why the civilian members of this three-million-strong workforce need access to military AI systems, nor what the impacts would be of disseminating military capabilities so widely across a civilian population.

The narrative vs the reality
----------------------------

In July 2025, an MIT study found 95% of organisations received a zero return on investment in generative AI.

The main reason was technical limitations of generative AI tools such as ChatGPT and Copilot. For example, most can’t retain feedback, adapt to new contexts or improve over time.

This study was focused on generative AI in business contexts. But the findings apply more broadly. They point to the shortcomings of AI, which are too often hidden by the marketing hype surrounding the technology.

AI is an umbrella term. It’s used to encompass a spectrum of capabilities – from large language models to computer vision models. These are technologically different tools with different uses and purposes.

Despite varying significantly in their applications, capabilities and success rates, these tools have been bundled together to serve a globally successful marketing agenda.

This is reminiscent of the dotcom bubble of the early 2000s, when marketing was treated as a valid business model.

This approach now seems to have bled into how the US wants to posture itself in the current geopolitical climate.

A guide to ‘AI peacocking’
--------------------------

The Department of War’s AI-first strategy reads more like a guide to “AI peacocking” than a legitimate strategy to implement technology.

AI is posited as the solution to every problem – including problems that do not exist. The marketing behind AI has created a fabricated fear of falling behind. The Department of War’s new AI strategy feeds off that fear by promising a technologically advanced military.

However, the reality is that these technologies fall short of their claimed effectiveness. And in military settings, these limitations can have devastating consequences, including increased civilian death tolls.

The US is leaning heavily into a marketing-led business model to implement AI across its military, without technical rigour or integrity.

This approach will likely leave a dangerous vacuum across the Department of War when these brittle systems fail – most likely in moments of crisis, once they are deployed in military settings. (The Conversation) AMS