Yann LeCun (Meta): “That’s why AI will make us all more rational. Musk’s fears? Simplistic approach”

“Artificial intelligence will lead us towards a new humanism. It will increase everyone’s intelligence, not just that of machines. Why are some experts alarmed? They have a simplistic view of the consequences.” Yann LeCun is considered one of the fathers of artificial intelligence. The author of essays and articles regarded today as the Bible for those involved in machine learning, he is a professor at New York University and head of Meta’s artificial intelligence research. His studies have earned him the highest recognition in the sector: the Turing Award in 2018.

His vision of AI and of the impact it will have on our societies is widely shared in academia. On Monday he received an honorary degree from the University of Siena for his fundamental contribution to research in the field of artificial intelligence, and he spoke at the SAIConference in Siena, a meeting dedicated to artificial intelligence organized by SAIHub (Siena Artificial Intelligence Hub). He has no doubts about the rules the European Union has decided on to regulate AI: “It is wrong to make laws about AI itself. It is different if we talk about how it is applied.” A broad debate on artificial intelligence has opened up among academics, entrepreneurs in the sector and, more generally, the public: risks, opportunities, fears and hopes.

Professor LeCun, how do you feel about the debate around AI?

“There are a lot of people who have different views on the future of AI, on whether it will be good for humanity or whether it will pose a risk. Personally, I think it’s a huge opportunity and that AI will create a sort of new renaissance for humanity, one that is going to amplify everyone’s intelligence, making everyone smarter and, in some ways, more rational. So I think overall it’s going to be very, very positive.”

So no danger?

“Of course, there are dangers and risks that need to be minimized, but that is an engineering question, and we have faced similar questions before, for example how to make jet aircraft safe. It is a very complex engineering question, but we solved it.”

In recent months we have seen appeals and open letters suggesting that humanity will end without regulation of AI. Elon Musk has played a very important role in spreading such fears. Why do you think there is such a clear divide on the risks and opportunities of AI?

“I think a lot of people have a relatively simplistic way of thinking about the consequences of AI. I think, in part, they project human nature onto the intelligence of artificial systems. They assume that if a system is intelligent, it will have the same characteristics as human intelligence and will perhaps have the desire to dominate, in the same way that human beings, in some cases, have the desire to dominate others.”

Why do you think this approach is wrong?

“It’s false because I don’t think the desire to dominate is related to intelligence. Actually, even within humanity, the people who want to be leaders aren’t necessarily the smartest among us. And it isn’t true of every species: there are very intelligent species, like orangutans, that have no desire to dominate anything, because they are not social species. We have the desire to dominate as human beings, as do baboons and chimpanzees, because we are a social species with a hierarchical organization. But it’s not a feature of intelligence; it’s a feature of how nature evolved us.”

In your studies you argue that language is the major limit for AI, that machines will always be constrained by a partial understanding of reality, lacking a part of ‘real life’ that they cannot fully experience. Do you think it will always be like this?

“Yes, the limitation of current AI systems, such as auto-regressive language models like GPT, is that they are trained solely on language, which means they don’t have a good understanding of reality. They don’t have a good understanding of the physical world, so their intelligence is very limited, and they have very limited reasoning and planning abilities. So a big question is: how will we build AI systems in the future that learn how the world works, just as human children learn by observing it and interacting with it, and that have the same kind of intelligence we observe in animals and humans, a certain level of common sense that current artificial intelligence systems do not have? I have made some proposals in this direction, but it is a long-term research program that could take most of the next decade.”

You mentioned on Twitter that humanity fears AI due to the myth of sudden takeover. Can you explain why you call it a myth?

“The myth of sudden takeover states that the moment we turn on a superintelligent system, if we make even the slightest mistake in its design and it slips out of our control, it will immediately take over and become ever more intelligent, because it can perfect itself indefinitely, and then take over the world and destroy humanity. This scenario is ridiculous. It doesn’t work like that in the real world. We are not going to somehow discover the secret of superintelligence, design a gigantic system and then just switch it on. It’s not going to work that way.”

And how will it go?

“We’re going to design system architectures that might lead us to human-level intelligence, but first we’re going to build relatively small systems that might have the intelligence of a rat, or something like that. Progressively we’re going to make them smarter and fine-tune them to behave correctly, so that they are not dangerous. They will be safe and will behave in ways that are not dangerous to humans or to humanity. We will make them progressively more intelligent and we will progressively perfect them. So they will not get out of our control, and they will not have the desire to take over: we will design them to be subservient to humans. They will be like C-3PO in Star Wars, not like the Terminators.”

Europe will soon adopt a law that will limit the use and applications of AI. Do you think that’s the correct approach?

“No, I don’t think that’s the correct approach.”

Can you explain why?

“I think regulating AI makes sense at the level of applications. It makes sense to have laws regulating the use of facial recognition in public settings, for example, because it would be an invasion of privacy. I definitely support regulation that requires certification for systems that drive cars autonomously or assist in medical procedures. However, general regulation that restricts the use and applications of AI could curb innovation and limit progress in AI research and development. It is important to find a balance between the protection of human rights and the promotion of technological innovation.”
