Melbourne, Nov 14 (The Conversation) With nearly half of all Australians saying they have recently used artificial intelligence (AI) tools, knowing when and how these tools are being used is becoming more important.
Consultancy firm Deloitte recently partially refunded the Australian government after a report it published was found to contain AI-generated errors.
A lawyer also recently faced disciplinary action after false AI-generated citations were discovered in a formal court document. And many universities are concerned about how their students use AI.
Amid these examples, a range of “AI detection” tools have emerged to help people identify accurate, trustworthy and verified content.
But how do these tools actually work? And are they effective at spotting AI-generated material?

How do AI detectors work?
-------------------------

Several approaches exist, and their effectiveness can depend on which types of content are involved.
Detectors for text often try to infer AI involvement by looking for “signature” patterns in sentence structure, writing style, and the predictability of certain words or phrases being used. For example, the use of “delves” and “showcasing” has skyrocketed since AI writing tools became more available.
However, the gap between AI and human writing patterns is steadily narrowing. This means signature-based tools can be highly unreliable.
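To make the signature idea concrete, here is a toy sketch in Python. The marker words and the per-1,000-word rate are invented purely for illustration; no real detector is this simple, and a high score only hints at AI involvement.

```python
# Toy illustration of "signature" word scoring -- the word list and the
# scoring rule are illustrative assumptions, not any real detector's method.
import re

MARKER_WORDS = {"delve", "delves", "showcasing", "tapestry", "leverage"}

def marker_rate(text: str) -> float:
    """Return the number of marker words per 1,000 words of text."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in MARKER_WORDS)
    return 1000 * hits / len(words)

sample = "This report delves into the findings, showcasing key trends."
print(f"{marker_rate(sample):.1f} marker words per 1,000")
# A high rate is only a weak hint -- humans use these words too.
```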
Detectors for images sometimes work by analysing embedded metadata which some AI tools add to the image file.
For example, the Content Credentials inspect tool allows people to view how a user has edited a piece of content, provided it was created and edited with compatible software. As with text, images can also be compared against verified datasets of AI-generated content (such as deepfakes).
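As a rough illustration of metadata-based checks, the sketch below reads basic EXIF tags with the Pillow library. The file name and the specific tags inspected are assumptions for the example; genuine Content Credentials are signed C2PA data that needs dedicated verification tooling, and ordinary metadata can be stripped or forged.

```python
# Rough sketch of a metadata check using Pillow. The tags inspected here are
# examples only -- absence of metadata proves nothing, and real Content
# Credentials (C2PA) should be verified with dedicated tools.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> dict:
    """Return the image's EXIF tags as a name -> value dictionary."""
    img = Image.open(path)
    exif = img.getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

meta = inspect_metadata("suspect.jpg")  # hypothetical file name
# Some generators record themselves in tags like Software or ImageDescription.
print(meta.get("Software"), meta.get("ImageDescription"))
```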
Finally, some AI developers have started adding watermarks to the outputs of their AI systems. These are hidden patterns in any kind of content which are imperceptible to humans but can be detected by the AI developer. None of the large developers have shared their detection tools with the public yet, though.
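The statistical idea behind some text watermarks can be sketched in a few lines. The hashing rule, the 50/50 “green list” split and the z-score below are assumptions loosely modelled on published research schemes, not how SynthID or any other commercial watermark actually works.

```python
# Toy sketch of a statistical text watermark check. The hashing rule and the
# 50% "green list" split are illustrative assumptions -- commercial watermark
# detectors are proprietary and far more sophisticated.
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all words to a 'green list', keyed on the previous word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str) -> float:
    """Measure how far the share of 'green' words deviates from the 50% expected by chance."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    greens = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    n = len(words) - 1
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

print(watermark_z_score("the quick brown fox jumps over the lazy dog"))
# A watermarking generator would bias its word choices so that watermarked
# text produces a large positive z-score; unmarked text hovers near zero.
```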
Each of these methods has its drawbacks and limitations.
How effective are AI detectors?
-------------------------------

The effectiveness of AI detectors can depend on several factors. These include which tools were used to make the content and whether the content was edited or modified after generation.
The tools’ training data can also affect results.
For example, key datasets used to detect AI-generated pictures do not contain enough full-body images of people, or enough images of people from certain cultures. This already limits how reliably detection works across many kinds of content.
Watermark-based detection can be quite good at detecting content made by AI tools from the same company. For example, if you use one of Google’s AI models such as Imagen, Google’s SynthID watermark tool claims to be able to spot the resulting outputs.
But SynthID is not publicly available yet. It also doesn’t work if, for example, you generate content using ChatGPT, which isn’t made by Google. Interoperability across AI developers is a major issue.
AI detectors can also be fooled when the output is edited. For example, if you use a voice cloning app and then add noise or reduce the quality (for example by compressing the file so it is smaller), this can trip up voice AI detectors. The same is true of AI image detectors.
Explainability is another major issue. Many AI detectors will give the user a “confidence estimate” of how certain it is that something is AI-generated. But they usually don’t explain their reasoning or why they think something is AI-generated.
It is important to realise that it is still early days for AI detection, especially when it comes to automatic detection.
A good example of this can be seen in recent attempts to detect deepfakes. The winner of Meta’s Deepfake Detection Challenge identified four out of five deepfakes. However, the model was trained on the same data it was tested on – a bit like having seen the answers before it took the quiz.
When tested against new content, the model’s success rate dropped. It only correctly identified three out of five deepfakes in the new dataset.
All this means AI detectors can and do get things wrong. They can result in false positives (claiming something is AI generated when it’s not) and false negatives (claiming something is human-generated when it’s not).
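A quick back-of-the-envelope calculation shows why even small error rates matter at scale. The class size and the 2 and 10 per cent error rates below are assumptions chosen for illustration only, not measurements of any real detector.

```python
# Illustrative arithmetic only: the error rates and class size are assumptions.
class_size = 500             # essays submitted
ai_written = 25              # essays actually written with AI
false_positive_rate = 0.02   # honest essays wrongly flagged as AI
false_negative_rate = 0.10   # AI essays wrongly passed as human

honest = class_size - ai_written
wrongly_flagged = honest * false_positive_rate
missed = ai_written * false_negative_rate

print(f"Honest essays wrongly flagged: {wrongly_flagged:.0f}")  # about 10 students accused unfairly
print(f"AI essays that slip through:   {missed:.1f}")
```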
For the users involved, these mistakes can be devastating – such as a student whose essay is dismissed as AI-generated when they wrote it themselves, or someone who mistakenly believes an AI-written email came from a real human.
It’s an arms race: as new technologies are developed or refined, detectors struggle to keep up.
Where to from here?
-------------------

Relying on a single tool is problematic and risky. It’s generally safer and better to use a variety of methods to assess the authenticity of a piece of content.
You can do so by cross-referencing sources and double-checking facts in written content. For visual content, you might compare suspect images with other images purported to have been taken at the same time or place. You might also ask for additional evidence or an explanation if something looks or sounds dodgy.
But ultimately, trusted relationships with individuals and institutions will remain one of the most important factors when detection tools fall short or other options aren’t available. (The Conversation)