AI, a US charter of rights: what's there and what's missing

On October 4, the White House Office of Science and Technology Policy issued recommendations on artificial intelligence (AI) that, in theory, should protect people from the potential harms of automation.

They are recommendations, no more: tech companies, trade associations and other government agencies have drawn up dozens of similar documents in recent years, some of which are listed in an appendix to the document issued by the Biden administration. The document limits itself to expressing wishes in the conditional, stressing what algorithms and software should do to avoid discrimination, and sets no clear boundaries. Yet this "Magna Carta of AI" has been awaited for over a year by industry players, who could now drift further away from real restrictions.



The main points

The recommendations on the use of AI, all phrased in the conditional, can be summarized in five main points.

· People should be protected from unsafe automated systems
· People should not be discriminated against by algorithms, especially on the basis of their ethnicity, skin color or gender
· Individuals should not be exposed to the indiscriminate use of data or to the unauthorized use of surveillance technologies
· People should be informed when they are dealing with an AI and should be able to understand how it makes decisions
· People should be able to opt out of AI systems and choose human intervention instead, including in justice, education, health and employment.

These guidelines are difficult to interpret, even as regards the audience they address. They place no limits on those who develop AI and do not unequivocally establish the rights of those who, willingly or not, use it. To try to understand the meaning of the work presented by the White House, we asked Alessandro Piva, director of the Artificial Intelligence Observatory of the Politecnico di Milano, according to whom the US guidelines "respond to the logic of the American approach to artificial intelligence. These long-awaited guidelines seem designed to favor the development of AI in support of companies in the sector, partly to continue affirming the superiority of US companies (Big Tech first and foremost) and to give freedom to companies that want to adopt AI technologies in their processes. No binding opinions or particular obligations are set out."



What is there and what is missing

The document presented by the US government is flawed, according to its harshest critics, because it does not take into account the authorities' own use of AI in smart weapons and in public and national security. Other, less severe voices find it acceptable that the guidelines rely on those adopting AI (including companies and government agencies) to conduct tests and publish the results, offering them up for analysis by all interested parties: a middle way considered viable by the largest possible number of the parties involved.

The US government document begins by emphasizing that the recommendations are not intended to ban or restrict the activities of government agencies, law enforcement or intelligence agencies. Alondra Nelson, interim director of the Office of Science and Technology Policy, pressed by some American media, rejects the accusation that the document risks neglecting human rights, saying merely that the work is intended solely to offer suggestions to the president of the United States.

What is missing from the American guidelines is a clear government stance: the US defers responsibility to the players operating in the AI market, while Europe, on the contrary, takes on the task of centralizing the rules itself.

As proof of this, the US has agreed to support the international principles established in 2019 by the Organisation for Economic Co-operation and Development, which invite (but do not oblige) companies using AI systems to assess, in full awareness, how these may affect employment and the economy. The White House considers it reasonable that, following the publication of the broad guidelines, each federal agency should act on its own to reduce discrimination in relevant areas, including health and education.

As for autonomous weapons, the criticism leveled at the document released by the White House should be scaled back, Piva recalls: "In 2020, binding guidelines were formulated under the Trump administration; on this point the criticism cannot be shared."



The European regulation

It should be noted that the Biden administration's guidelines arrive just as the European Union is moving toward a regulation of AI systems based on the risks they entail; from this perspective too, the fact that the US is not moving toward a package of binding rules is, at the very least, puzzling.

The draft European regulation shows greater attention to the limits an AI must not exceed. Alessandro Piva speaks of two almost opposite approaches: "The European approach is quite different from the American one. The Artificial Intelligence Act proposes a logic for assessing the risks of AI based on ethical principles established by the EU itself. The European draft is a step forward compared to the American one, even if an overly stringent application, in terms of controls and sanctions, could slow down the development of AI in Europe. The important thing is that the application of the future European regulation does not slow the adoption of AI in business processes; at the same time, when it comes to human rights it is appropriate that there be very clear limits that cannot be crossed, and from this point of view the EU draft is much more explicit than the US one."

So, on the one hand, too much freedom does not guarantee the efficiency and transparency of AI; on the other, too many controls and restrictions limit its development. Finding a balance may be difficult, but on this point Alessandro Piva offers a thought: "The American approach is freer, and we need to reckon with the overall landscape. If in Europe we are too restrictive while in the US everyone is allowed to move as they wish, the risk is that of limiting the development of AI in Europe in favor of competing economies."

A sense of unfinished business

AI's high transformative potential demands new laws, which are hard to define because the very nature of AI is difficult to regulate. When the internet began to spread a quarter of a century ago, no one predicted the harms it would bring, and in fact we let ourselves be surprised by fake news and by interference with privacy and even with democracies. How can we predict the threats and opportunities AI will confront us with? Writing rules that are neither too permissive nor so strict that they clip innovation's wings is complex, and regulation should be a continuous process.
