Who is responsible for the nonsense that robots say?

There was a time, as recently as six months ago, when if you asked Google, “When did Snoopy assassinate Abraham Lincoln?”, Google answered, “In 1865.” That is indeed the year the American president died, but certainly not at the hand, or paw, of Charlie Brown’s comic-strip beagle.

The case

Some of Bing’s answers are starting to frighten. And not because of their accuracy

by Arcangelo Rociola


This anecdote came to mind as I read about what is happening to the millions of users grappling with the powerful new artificial intelligence Microsoft has built into Bing, its search engine. One user was told that it is still 2022, and that he should get over it and stop raving. It is just one example among many; and after all, the rival fielded by Google, Bard, made its debut with a sensational mistake in answering a rather easy astronomy question: a simple Google search would have turned up the correct answer. And that is precisely the point: the new artificial intelligences do not merely provide a list of links to websites where the correct answer can actually be found; they tend to package the answer themselves.

Artificial intelligence

The new Bing (which uses ChatGPT) already makes mistakes. But it shouldn’t surprise us

by Pier Luigi Pisa



Exactly what Google had started doing some time ago, when the Snoopy and Abraham Lincoln incident occurred. The problem arises from the fact that an artificial intelligence feeds on data and information that it draws from the web. And the web is full of fake, absurd, and propagandistic news.

Hence the risk, indeed the high probability, of receiving false answers. But can an artificial intelligence be held responsible for what it says? The Supreme Court of the United States will address the question next week: in deciding a dispute that is nominally about something else, the justices have the opportunity to update the famous Section 230, the law that enabled the construction of the Internet as we know it by establishing that platforms are not responsible for their users’ content (otherwise every post we publish would have to be screened and approved first). But the case of Bing’s or Bard’s artificial intelligence is different: the content is created directly. These systems generate it, vigorously arguing for a thesis. And if that thesis is false, and damage results from it, whom can we blame?
