The appeal of Musk and a thousand experts: “Stop the development of ChatGpt, we risk epochal upheavals”

Stop training artificial intelligences. Otherwise, we risk economic and political upheavals of vast proportions in the short term. Elon Musk and a thousand other entrepreneurs and academics from all corners of the world have signed an appeal to companies and governments: a letter asking for a six-month moratorium on the development of generative AIs like ChatGpt. In particular, the letter calls for a halt to the training of systems more powerful than Gpt-4, the OpenAI model launched in mid-March, evoking “great risks for humanity”. The six months should be used to develop security protocols and AI governance systems, and to refocus research on making AI systems more “accurate, secure, and trustworthy.”

The expert: “Innovation cannot be stopped, but this is an exceptional case”

The letter is addressed to governments, but above all to the companies that in recent months have begun a speed race to develop these technologies. Among the signatories there is also an Italian: Domenico Talia, professor of computer engineering at the University of Calabria. “We know that innovation cannot be stopped, but this is an exceptional case. What these technologies are capable of doing is not clear even to those who create them. It’s all happening too fast. In a few months, a Gpt-5 could already be ready. Even more powerful. These technologies are set to change everything. They will change the jobs of millions of people. Hundreds of millions of people. Above all intellectual work,” the professor tells Italian Tech.

Artificial intelligence

The AI had a problem with its hands. Now that it has fixed it, we’re in trouble

by Pier Luigi Pisa


In this petition, published on the Futureoflife.org website, entrepreneurs and academics are calling for a moratorium until safety systems are in place: new regulatory authorities, monitoring of artificial intelligence systems, techniques to help distinguish the real from the artificial, and institutions capable of handling the “dramatic economic and political upheavals (particularly for democracy) that AI will cause”. The petition brings together personalities who have already publicly expressed their fears of an uncontrollable AI that would overtake humans.

The alarm of Musk and Altman, the creator of ChatGpt

These include Elon Musk, owner of Twitter and founder of SpaceX and Tesla, and Yuval Noah Harari, author of the best seller “Sapiens”. The head of OpenAI, Sam Altman, has admitted he was “a little scared” by his creation when used for “large-scale disinformation or cyberattacks.” “Society needs time to adjust,” he told ABC News in mid-March. The letter itself denounces a race to develop and deploy increasingly powerful digital brains “which no one – not even their creators – can reliably understand, predict or control.”

Time is the key issue. The companies behind the development of these technologies answer to the logic of the market: whoever arrives first wins, and wins even more by arriving with the best product. But it is precisely this law, which has governed decades of technological development, that is frightening. Talia reasons: “The problem is that large private companies only have interests on Wall Street. It is to them that we address ourselves. Our appeal comes not only from academics, but also from entrepreneurs like them.”

The companies that develop these technologies answer only to Wall Street

Signatories also include Apple co-founder Steve Wozniak, members of Google’s DeepMind AI lab, and Stability AI CEO Emad Mostaque, a competitor of OpenAI, as well as American AI experts and academics and senior engineers at Microsoft, a company allied with OpenAI.

Ideas

Bill Gates’ predictions about the future of humanity in the age of artificial intelligence

by Arcangelo Rociola



Talia shares these fears. “That these tools get out of control. That no one understands why they do certain things. They are chatbots that respond directly from search engines. They are fed by billions of parameters, but who has control over the responses? Who can prevent them from spreading disinformation? Perhaps in particularly sensitive communities?”. Basically, the scariest thing is that ChatGpt and the other chatbots talk directly with people. Directly. Out of control. And without ethical principles. The effects, when they have even greater potential, could be devastating. It is a matter of months. Maybe weeks.
