The strange appeal against AI: "We risk extinction." But companies continue to develop them

A single sentence, barely twenty words, reiterating once more the concern over the rapid development of artificial intelligence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

This is the appeal published online by the Center for AI Safety, a non-profit organization, and signed (so far) by some 350 figures, including scientists, mathematicians, AI researchers, and entrepreneurs.


Fourth appeal on the risk of extinction

The document, which ends with the short sentence quoted at the beginning, is only the latest in a series of warnings about the potential risks of this new technology, and the fourth in a relatively short period of time to speak of a risk of extinction for the human race. Before it there had been Stephen Hawking; then the writer Eliezer Yudkowsky, who wrote in Time that “the most likely result of developing an AI whose intelligence surpasses that of humans is that everyone on Earth will die”; and more recently the scientist Geoffrey Hinton, who decided to quit Google in order to “be able to speak freely about the dangers posed by AI”.

Contacted by the New York Times, the president of the Center for AI Safety, Dan Hendrycks, explained that “we need to spread awareness of what is at stake before having fruitful discussions” and that the choice to limit the letter to a single sentence was meant to “show that the risks are serious enough to call for proportionate proposals”.

The right hand that doesn't know what the left is doing?

What makes the news, more than the appeal itself (as perhaps it should be), are above all the names of the signatories: among them are Hinton himself and Yoshua Bengio, two of the three researchers who won a Turing Award precisely for their work on AI (the third is Yann LeCun, who works for Meta and has not signed so far); there is Sam Altman, current CEO of OpenAI; Demis Hassabis, CEO of Google DeepMind; and also the Italian-American Dario Amodei, CEO of Anthropic (here, the history of his company).

Elon Musk is missing: he signed a similar petition last March, but has not signed this one, probably because he has set about doing exactly what he signed against. And this is what should actually make the news: the fact that those who have pledged to reduce the potential risks posed by AI are meanwhile working to increase those very risks.

The case of Altman, considered the father of ChatGPT (the mother is probably Microsoft, which showered it with money to build it), is emblematic:

  • in early March, he published an open letter committing his company to the task of helping develop a Strong AI (simplifying, a sentient AI), which is the main source of the risks cited by the scientific community;

  • in late March, together with Musk and others, he signed the appeal against the risks associated with the development of an AGI (another name for Strong AI);

  • on May 18, before the United States Congress, he pointed out that “we need rules for AI” because “I fear serious damage to society”;

  • on May 24, he let it be understood that if those same rules he hopes for turn out to be too strict in Europe, OpenAI could leave the EU for good;

  • only to present to the world, a few days later, an initiative of his own to stimulate (for a fee) the debate on the methods and strategies to be applied to AIs.


The risks associated with the development of AI

To quote a popular saying, it sounds a bit like a case of “the left hand not knowing what the right is doing”, with Altman (and Musk, and the others) saying on the one hand that he is worried while, on the other, with his own work, he only adds to those worries.

This is not a criticism, it's a fact: many of the signatories of this latest appeal are scientists, researchers, and developers, but they are also entrepreneurs, employees, and CEOs. And if on the one hand they want to be cautious (or to appear cautious, especially in the eyes of European and American regulators), on the other they also want to excel in their field, beat their competitors to market, and earn a lot of money. It's their job, and that's quite normal.

The problem, as we have often written on Italian Tech, is that in this specific field haste and the race for profit risk being more dangerous than in others, because (as many argue) once a mistake is made it is almost impossible to go back. Meaning: once a Strong AI has been created that is full of biases or trained the wrong way, that has been handed the keys to the nuclear arsenal (this is not science fiction: the US has just passed a law to always require human intervention in the use of atomic weapons), that runs the police force of a city or a state, that is in charge of deciding who gets a job and who is denied one, who gets treated after an accident and who does not, and so on.

Because this is what is starting to be frightening about AIs: not that they unleash a war on humanity as happens in Terminator or The Matrix, but that they undermine the foundations of our society beyond repair, further widening the gap between those who can and those who cannot, between those who make it and those who are left behind, between the first and all the others. Leading us, in that way, to extinction.

@capoema


