Time: “OpenAI has put pressure on the EU not to end up among the ‘high-risk’ AIs”

“The CEO of OpenAI, Sam Altman, has spent the last few months on a European tour that has taken him to meet various heads of government. Altman has repeatedly said that rules governing artificial intelligence are needed globally. But behind the scenes, OpenAI has lobbied for the AI Act [the law just approved by the European Parliament] to be watered down to the point of no longer posing a threat to the company.”

That is what Time claims, attaching to its exclusive a seven-page document that OpenAI, the San Francisco company that created ChatGPT, sent to the European Council last September.


In the document, entitled “OpenAI White Paper on the European Union’s Artificial Intelligence Act”, the company led by Sam Altman argues that GPT-3, the AI model on which ChatGPT bases its extraordinary creative skills, should not be considered “a high-risk system”, even though it “has capabilities that can potentially be employed in use cases that could be classified as high-risk”.

“High-risk” artificial intelligences, as specified in the law approved on 14 June by the European Parliament, will be subject to strict regulation covering transparency, traceability and data governance.

This, then, is what frightened Sam Altman’s company nine months ago: having to comply with rules that would have brought its business to its knees, namely the development of a powerful artificial intelligence “behind closed doors”. In fact, we know nothing about how OpenAI trains and develops its ChatGPT. And Altman has every interest in preserving this “black box”, which guarantees the company profits and investments. Microsoft, for example, has already shelled out roughly 20 billion dollars to secure OpenAI’s technology.




At one point, last May, Sam Altman also said it publicly: “We will cease operating in Europe if we cannot comply with the requirements of the AI Act. There are technical limits to what is possible”. A few days later, Altman backtracked, saying he wanted to “cooperate” with the European Parliament and promising not to leave Europe in any case.

In the end, OpenAI’s pressure and its co-founder’s visits to the main European capitals – none of them Italian – bore fruit: ChatGPT is never mentioned in the AI Act, and “generic” artificial intelligences not developed for a very specific purpose, the so-called general-purpose AI, are not considered “high-risk” AI.




The AI Act ultimately only requires providers of so-called “foundation models”, i.e. AI systems trained on large amounts of data, such as the LLMs (Large Language Models) on which ChatGPT is based, to meet minimum requirements, including preventing the generation of illegal content, disclosing any copyrighted material used for training, and performing risk assessments.
