How to spot Newsbots who copy or lie? With a critical spirit and a little technology


«Articles generated by artificial intelligence often summarize or rewrite content produced by other sources.» So writes NewsGuard, an organization that has long rated the reliability of news sites around the world, in a report that in April identified 49 sites in seven languages (Czech, Chinese, French, English, Portuguese, Tagalog and Thai) using artificial intelligence. It is, they write, a new generation of sites, which they have christened newsbots: outwardly they look like typical news sites, but they appear to have been wholly or largely generated by AI language models designed to mimic human communication. The sites, which often do not identify their owners, produce a large amount of content on a variety of topics, including politics, health, entertainment, finance and technology. Some of them publish hundreds of articles a day. «Certain articles», writes NewsGuard, «promote false narratives. Almost all content is written using banal language and repetitive sentences, hallmarks of texts produced by artificial intelligence».

There have always been clickbait and fake-news sites, designed to generate revenue from programmatic ads, the advertising placed by algorithms that funds much of the world's media, exactly what the first generation of human-run internet content farms was built for. Basically, when they are not lying they are deliberately copying. For example, the primary activity of BestBudgetUSA.com, a site that does not disclose ownership information and was registered anonymously in May 2022, appears to be summarizing or rewriting CNN articles.

Nothing new under the sun, then. What is frightening is the sheer volume that generative AI will produce: at the very least, the background noise on the network will grow deafening. That means more work for publishers' legal departments, which will have to find ways to protect the copyright on their products. But it will also be harder, perhaps impossible, for the average digital reader to judge the quality of content on the Net, that is, to tell whether an article was produced autonomously by an AI or whether there is a human hand behind it. The future is nebulous, but we are moving towards a content production model, whether text, audio or video, that will be able to use AI as a co-pilot. It will then be up to the public, or the market, to distinguish and decide which content is best.

However, there are already tricks for telling who produced what. As NewsGuard writes, all 49 sites it identified have published at least one article containing error messages that are quite common in AI-generated texts, among them «my knowledge cutoff in September 2021», «as an AI language model» and «I cannot complete this request». These are tags, bugs, errors: all signals that an AI could itself recognize, telling us in real time whether what we are reading is trustworthy. That would be a fitting (technological) response to newsbots.
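The phrase-spotting trick NewsGuard describes is simple enough to automate. The sketch below is a minimal illustration, not NewsGuard's actual method: the phrase list and function name are my own, seeded with the chatbot boilerplate quoted above, and a real detector would need a far larger list.

```python
# Telltale phrases that NewsGuard found in AI-generated articles.
# This list is illustrative; extend it with other known chatbot boilerplate.
AI_TELLTALES = [
    "as an ai language model",
    "my knowledge cutoff in september 2021",
    "i cannot complete this request",
    "i can't complete this request",
]

def find_ai_telltales(text: str) -> list[str]:
    """Return the telltale phrases present in `text`, case-insensitively."""
    lowered = text.lower()
    return [phrase for phrase in AI_TELLTALES if phrase in lowered]

article = "Sorry, as an AI language model I can't complete this request."
print(find_ai_telltales(article))
# → ['as an ai language model', "i can't complete this request"]
```

Plain substring matching like this only catches the crudest leftovers; it says nothing about fluent AI text that was edited before publication, which is why NewsGuard pairs it with human review.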

