Because artificial intelligences can’t emulate common sense



Common sense and good sense still escape AIs and, according to the most recent studies, will continue to do so for a long time. So says Professor Yejin Choi of the University of Washington, author of a paper entitled The Curious Case of Commonsense Intelligence, published in the spring of 2022. Common sense, such a trivial concept for humans, turns out to be one of the biggest and most stubborn limits for AIs. It is no coincidence that, among many similar studies, our attention fell on Choi's work, but more on that later.

Common sense is what AIs need to understand the needs and actions of humans who, right from birth, show a rapid ability to adapt to the social environment they are born into. This is equivalent to saying that AIs, however advanced they may be considered today, started off with a handicap that has yet to be overcome, despite the fact that since the 1960s (and therefore for more than sixty years), with programming languages such as LISP and Prolog, researchers have attempted to develop artificial reasoning systems.

In summary, the problem is precisely this: to bring AI closer to human intellectual abilities, it is first necessary to define the very concept of intelligence and, beyond the many encyclopedic definitions, that is anything but easy.


Common sense is still far away

The common sense of things and good sense partly underlie humans' intuitive reasoning skills. Intuition allows us to find plausible explanations for partial observations, to read between the lines and to fill in the missing pieces of the puzzle.

To return to Professor Choi's work: within the Allen Institute's Mosaic project, she leads research on commonsense knowledge and intuitive reasoning based on language. This is something profoundly different from Large Language Models such as GPT-3, which are typically trained to generate words or sentences according to statistical patterns, an approach that does not carry over to models of common sense.

To this end, the Mosaic project is building a language-based commonsense knowledge system and, to support it, an intuitive reasoning system. The work rests on Atomic, a vast collection of descriptions, rules and facts drawn from the most varied everyday contexts, accompanied by algorithms that account for the non-sequential nature of intuitive reasoning.

What might appear to be a major breakthrough, at least in the lab, looks more like a dead end, so much so that Choi herself argues that she can highlight potential paths to follow but that we are still far from an AI with even an approximate grasp of what is called common sense. "Many open questions remain, including the computational mechanisms to ensure the coherence and interpretability of commonsense knowledge and reasoning," Choi wrote in the conclusions of her paper.

The integration of language, perception and multimodal reasoning is still out of reach. On this point Nicola Gatti, director of the Artificial Intelligence Observatory at the Politecnico di Milano, notes: "AI transforms information, but it does not know what it is doing; it has no knowledge of the facts. Having knowledge of what one is doing, and reasoning about what one is doing, fall within the very idea of intelligence but, up to now, the models that have tried to describe the functioning of typically human faculties, such as reasoning, are not the ones behind the success of Artificial Intelligence today".

A paradigm shift is needed to equip AI with common sense: "In principle it can be done using explicit tools, such as logical models, to be connected to the tools mainly used today, such as deep learning, but despite some experiments, the light at the end of the tunnel is not yet in sight", explains Gatti.

The black boxes

The operations carried out by an AI are mostly impenetrable, hence the term black box. It is a somewhat simplistic description, but a useful entry point to the topic at hand: "As long as we wanted to see how an AI was made from the inside, or wanted to understand how it worked in order to master it, the phenomenon was never disruptive. When the black-box idea was accepted, AIs became disruptive: ChatGPT is an example of this", explains Gatti.

Yet there are those trying to instill common sense in algorithms, as in the experiments of the research group led by Choi: "There are approaches, called neurosymbolic or implicit reasoning, that the scientific community is attempting in order to combine these two aspects", explains Gatti.

In short, the absence of a universally recognized definition of the term "intelligence", together with the fact that the black-box paradigm is now the standard, does not make the task easier for those who want to improve the capabilities of AI. You cannot get your hands on something that you cannot fully define and that proves inscrutable.



The two souls of Artificial Intelligence

One is engineering AI, the other is cognitive AI. The first emulates specific human functions; the second focuses on what in jargon is called general AI, i.e. AI capable of emulating, and even surpassing, human intelligence. The first is making great strides; cognitive AI is at a standstill. To make it progress it would be necessary to equip it with common sense, intuitive faculties and a consciousness. "In the computer science community the problem of explaining the very idea of intelligence was never really posed, and those who did pose it have not found a solution. The difficulty lies in understanding, and therefore defining, what intelligence is, as well as in writing down on a sheet of paper what counts as intelligent", explains Gatti.

A boring future?

Given that, without common sense and a common sense of things, cognitive AIs will never see the light of day, will we simply grow accustomed to AIs in the near future? Will we take them for granted, just as we no longer see anything revolutionary in radio or television today? "Normally we do not get bored interacting with other people if they give us interesting topics for discussion. The same will be true for AIs. Even today we continue to watch TV, selecting what we like", concludes Gatti.

