AI like nuclear power: here is OpenAI's proposal to regulate artificial superintelligence
Robots may not steal our jobs, at least not anytime soon, but they will almost certainly reduce people's opportunities. One example is the Bing Image Creator platform, based on DALL-E technology, which generates an image in a few seconds from a short text prompt. For those who don't want to spend money on creative agencies and large commissions, this is already a way to take advantage of artificial intelligence and save time. Rightly or wrongly, there are people who can, as far as possible, steer the future of AI, channeling it toward friendlier paths than those envisioned by James Cameron in Terminator. But we must act now, without delay.
Sam Altman, CEO of OpenAI, is convinced of this. His organization has benefited from huge investments by Microsoft to develop, and then make available, the GPT model that underpins the ChatGPT chatbot. In lengthy testimony before the United States Senate, the CEO responded to doubts about the rise of AI, arguing that its aim is not to control the world but to revolutionize the way we approach many scenarios, from web search to digital content production. Altman's intervention can be summed up in one line: «If not regulated, AI could cause significant damage to the world». Together with OpenAI colleagues Greg Brockman and Ilya Sutskever, the CEO published a post that explains his idea in more detail.
Like nuclear power
«Given the picture as we see it now, it is conceivable that within the next ten years AI systems will exceed the skill level of experts in most domains and carry out as much productive activity as one of today's largest corporations. In terms of potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future, but we have to manage risk along the way. Given the possibility of existential risk, we cannot merely be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another. We must mitigate the risks of today's AI technology too, but superintelligence will require special treatment and coordination.»
How do you deal with such an issue?
It starts with shared governance. For OpenAI, that means relying on an impartial agency that weighs the pros and cons not so much of AI in general as of the specific applications that come into play. «It is likely that we will eventually need something similar to the IAEA (the International Atomic Energy Agency, ed.) for superintelligence efforts; any effort above a certain capability (or resource, such as computing) threshold will need to be subject to an international authority that can inspect systems, require audits, test compliance with safety standards, place restrictions on degrees of deployment and levels of security, and so on. But the governance of the most powerful systems, as well as decisions about their deployment, must have strong public oversight. We believe that people around the world should democratically decide on the bounds and defaults for AI systems. We do not yet know how to design such a mechanism, but we plan to experiment with its development», explain Altman and his colleagues.
Clear and shared rules
A fundamental point, which in a certain sense recalls OpenAI's dispute with the Italian data protection authority (the Garante), is where the company affirms the need to apply rules to open-source projects without greatly limiting the technological threshold that organizations can draw on for their own development: «We should be careful not to dilute the focus on the largest initiatives by applying similar standards to technology far below that bar». During the Senate subcommittee hearing, Altman was asked about the future of work, a growing concern in the face of accelerating AI automation. The CEO maintained that «there will be far more jobs on the other side, and they will be better than today's». The rapid spread of chatbots has raised much broader questions about how artificial intelligence can simplify the dissemination of false, misleading and copyright-infringing content. These are themes on which OpenAI has said it is willing to collaborate, to optimize the technology rather than bury it prematurely.