This AI Can Help Humans Spot AI-Generated Fake News


SOURCE: Victor Tangermann | Futurism


A team of researchers from Harvard and the MIT-IBM Watson AI Lab has created a system they say can detect whether text was generated by another AI. They hope to lay the groundwork for future tools that could make it easier to spot fake comments or even fake news articles online.

The Giant Language model Test Room (GLTR) is “a tool to support humans in detecting whether a text was generated by a model,” according to a preprint of the research published on arXiv in June. You can try out the demo yourself online.


Human Error

The basic concept behind the tool is fairly simple: AI text generators tend to pick statistically predictable words, while human writing is far less predictable in its word choices. If nearly every word in a passage is one a language model would have guessed, the passage was probably machine-generated.
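To make the idea concrete, here is a minimal sketch of that kind of per-word predictability check, written in Python against the openly available GPT-2 model via Hugging Face’s transformers library. The rank buckets loosely mirror the tool’s color-coded bands, but the code is an illustration of the general technique, not the researchers’ own implementation.

```python
# Sketch: score each token of a text by how predictable it was
# to a language model. Long runs of highly predictable tokens
# hint that the text may itself be machine-generated.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text: str) -> list[tuple[str, int]]:
    """For each token, the rank the model gave it, given the prefix."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    # Logits at position i predict token i+1, so shift by one.
    for i in range(ids.shape[1] - 1):
        target = ids[0, i + 1]
        # rank = 1 + number of tokens the model scored above the real one
        rank = int((logits[0, i] > logits[0, i, target]).sum().item()) + 1
        ranks.append((tokenizer.decode([int(target)]), rank))
    return ranks

def bucket(rank: int) -> str:
    """Illustrative buckets: the more top-rank tokens, the more suspicious."""
    if rank <= 10:
        return "green"     # among the model's top-10 guesses
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"        # very unpredictable, more typical of humans

for token, rank in token_ranks("The quick brown fox jumps over the lazy dog."):
    print(f"{token!r:>12}  rank={rank:<6} {bucket(rank)}")
```

A passage where almost every token falls in the top buckets is the kind of statistical fingerprint such a check flags.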

Because of its limited scale, the system won’t be able to “automatically detect large-scale abuse”; it was designed to sniff out fake text in individual cases only.


Fooling Humans

The system isn’t anywhere near perfect, but by cross-referencing text against the predictions of a number of common AI text generators, it was able to improve “the human detection-rate of fake text from 54% to 72% without any prior training.”

The researchers also admit that future fake text generators could easily fool their system by changing “the sampling parameters per word or sentence to make it look more similar to the language it is trying to imitate.”
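As a rough illustration of that evasion strategy (my own sketch, not code from the paper), a generator could draw a fresh sampling temperature for every token, so that some words land well outside a detector’s most-predictable buckets:

```python
import torch

def sample_with_varied_temperature(logits: torch.Tensor,
                                   low: float = 0.7,
                                   high: float = 1.5) -> int:
    """Sample one token, drawing a new temperature on every call.

    Occasional high-temperature draws pick less predictable tokens,
    scattering human-looking unpredictability through the output
    and blunting a rank-based detector.
    """
    temperature = torch.empty(1).uniform_(low, high).item()
    probs = torch.softmax(logits / temperature, dim=-1)
    return int(torch.multinomial(probs, num_samples=1).item())
```

Varying the parameters per word rather than per document is what makes this hard to counter: no single predictability threshold cleanly separates the output from human writing.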