Artificial intelligence: before the rules, we need to invest in research and ethics

“The real risk with AI is not malicious intent but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals are not aligned with ours, we are in trouble.”
This answer, which the well-known English astrophysicist Stephen Hawking gave on the risks of artificial intelligence back in 2015, might seem to refer to a far-off dystopian future; instead, it is at the center of today’s debate on the matter.
Last April, the Chamber of Deputies began a cycle of expert hearings on artificial intelligence, with the aim of gathering as much information as possible so that legislators have the elements they need to begin appropriate regulatory action on algorithms.
The first two scholars to be heard were Professor Paolo Benanti, professor of ethics at the Pontifical Gregorian University, and Professor Rita Cucchiara, professor of engineering and director of the Artificial Intelligence Research and Innovation Center. Beyond the undisputed international standing of the two speakers, it is evident that the theme of digital ethics has once again been put forward as urgent and deserving of primary attention. Never was a choice more apt.
The European Union’s policy-making on this issue, for its part, began precisely with a document – the Ethics Guidelines for Trustworthy AI – whose objective was to define the key ethical requirements that AI systems should satisfy in order to be considered trustworthy in the European Union.
Nor is that all. A recent Financial Times article by Ian Hogarth raised a timely theme, taken up in the following days by Forbes and subsequently on the cover of The Economist: the evolution of the technology and the growing scale of the artificial intelligence systems at our disposal, especially those capable of self-learning, pose new ethical questions that deserve proper attention.
Among these, perhaps the most interesting is the oft-cited topic of AI alignment: the process of designing AI systems that behave in a manner consistent with human values, and of ensuring that they remain aligned even as they become more advanced and complex. This is precisely the theme at the center of Hawking’s answer quoted above.
So there are two problems.
The first is that once an intelligent system becomes truly complex, it is very difficult to know a priori the side effects of a poorly specified objective. A theoretical example: an advanced AI is put in charge of a monetary fund with the goal of ending poverty in the shortest possible time while spending as little money as possible. The system might well halt all forms of spending at once, reasoning that people who cannot buy food will die quickly and poverty will thus be eliminated. Goal achieved, then? And at what price?
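To make this failure mode concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is hypothetical: the toy world model, the numbers, and the reward are invented for this example and are not taken from any real system. It shows how an objective stated literally as “minimize poverty and minimize spending” can make the degenerate “spend nothing” policy score best:

```python
# Toy illustration of a poorly specified objective ("reward misspecification").
# All quantities and dynamics are made up; nothing here models a real fund.

def simulate(spend_per_step: float, steps: int = 10) -> tuple[float, float]:
    """Crude world model: aid lifts people out of poverty, while with no aid
    at all the poverty count shrinks for the wrong reason (people are lost)."""
    poor = 1_000.0
    spent = 0.0
    for _ in range(steps):
        if spend_per_step > 0:
            poor = max(0.0, poor - spend_per_step / 10.0)  # aid helps some people
        else:
            poor *= 0.7  # grim attrition when nothing is spent
        spent += spend_per_step
    return poor, spent

def naive_reward(poor: float, spent: float) -> float:
    # The objective exactly as stated: end poverty fast, spend little.
    return -(poor + spent)

for spend in (0.0, 500.0, 2_000.0):
    poor, spent = simulate(spend)
    print(f"spend/step={spend:7.0f} -> poor left={poor:7.1f} "
          f"spent={spent:8.0f} reward={naive_reward(poor, spent):10.1f}")
```

The point is not the toy numbers but the shape of the failure: the reward function faithfully encodes the stated objective, and that is exactly why the perverse “spend nothing” policy wins.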
The second is that, insofar as we decide to restrict the space of solutions an artificial intelligence may adopt so that it takes shared human values into account, for example through appropriate reward functions, we must define ethical landmarks that represent who we are and how we think. These values, however, are not necessarily universal; indeed, some studies suggest they are not. To take part in this digital revolution rather than merely endure it, we must therefore continue to invest in technical research and in formalizing a position in the field of digital ethics.
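One simple way to encode such landmarks, sketched below under wholly hypothetical names and penalty weights, is to wrap the task reward in a function that heavily penalizes actions our declared values forbid. The hard part, as just noted, is not the wrapper but deciding what belongs on the forbidden list, since those values are not universal:

```python
# A minimal sketch of a value-constrained reward. The action labels, the
# penalty weight, and constrained_reward() itself are hypothetical; real
# alignment techniques (e.g. learned reward models) are far more subtle.

FORBIDDEN_ACTIONS = {"withhold_essential_aid", "deceive_stakeholders"}
PENALTY = 1e6  # chosen to dominate any achievable task reward

def constrained_reward(action: str, task_reward: float) -> float:
    """Return the task reward, minus a heavy penalty for forbidden actions."""
    if action in FORBIDDEN_ACTIONS:
        return task_reward - PENALTY
    return task_reward

print(constrained_reward("fund_food_program", 120.0))       # 120.0
print(constrained_reward("withhold_essential_aid", 900.0))  # -999100.0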
Will it be enough? Will this be the only way forward? It is very difficult to say now, and in fact several different proposals – some of them interesting – have been put forward in recent weeks, aiming to find suitable tools for managing the transition period. Among these, it is worth mentioning the one Elon Musk suggested in a tweet: “The ‘least worst’ solution to the problem of AGI (artificial general intelligence, ed.) control that comes to my mind is to give a vote to every verified human.” In other words, a human who “verifies” that he is a human (almost reversing the Turing test) gains the right to vote. Time will tell.

*Roberto Marseglia, research assistant at the University of Pavia
