Blocking ChatGPT is a worse remedy than the evil


The “ChatGPT block” was ordered on 30 March 2023 by the Personal Data Protection Authority on the assumption that the data used to train the model were collected without informing the persons to whom they refer, and without verifying their age. This, the provision reads verbatim, exposes minors who use the service “to absolutely unsuitable responses with respect to their degree of development and self-awareness”.

The provision, it must be said in no uncertain terms, is highly questionable from a technical, legal and cultural point of view. It reveals, on the one hand, the weakness with which the national data protection authorities deal with the matter and, on the other, the substantial inapplicability of the “privacy protection” legislation. Finally, it triggers a very dangerous reciprocity mechanism whereby other countries that have similar regulations – including Russia and China – could use them as a “legal” tool to strike subjects on this side of the new Iron Curtain.

Let’s go in order.

The consent of the data subject is only one of the legal bases on which personal data may be processed and, contrary to what the Guarantor claims (without giving reasons on the point), even under European legislation OpenAI could have relied at least on a “legitimate interest” (a basis the national data protection authorities are notoriously reluctant to accept), exercised in good faith. The data used by OpenAI were, in fact, made freely available by individuals on public profiles, blogs and platforms. In the absence of clauses such as those provided by copyright law to limit the use and reuse of data (the so-called “all rights reserved” notice, or one of the Creative Commons licences), it is therefore not reasonable to presume a prohibition. Of course, one could always argue, again by analogy with copyright, that “what is not granted is prohibited”, but that would be paradoxical, because it would translate into a limitation on the circulation of data that the European regulation itself does not allow. Moreover, how generative AI works has been known for some time; if the processing of data for training purposes really were a problem, it would have been necessary to intervene at once, before the horses had bolted from the stable.

Furthermore, and this is a critical aspect, before proceeding it should be established where the personal data allegedly collected by OpenAI are located. If they were held on non-EU servers, their processing would have been primarily subject to the legislation of the country where the servers are located. It is true that the European Union, and not it alone, has asserted the applicability of its legislation beyond the borders of the Member States, but it is highly questionable whether this can be done, given the political consequences of such a choice for relations with the USA, especially if adopted in a partial and inconsistent way.

The never-ending saga of the various “safe harbours” and “privacy shields” (the US-EU data protection agreements) systematically annulled by the European Court of Justice demonstrates that sending data across the Atlantic cannot be done. Yet the national data protection authorities of the EU Member States, apart from a few announcements or impromptu measures, have not done much. Standing by may be politically necessary, but it is legally unacceptable. If personal data cannot be transferred to the USA (or to other countries offering weaker legal protection than ours), this should not be allowed regardless of questions of expediency; otherwise one would have to conclude that the application of the law obeys political necessity and that, therefore, the law is no longer above everything and everyone.

The empirical proof of this statement lies in the inertia of the Italian Guarantor (and not only the Italian one) with regard to search engines, social networks and user-generated-content platforms, non-EU DNS operators and non-EU software-as-a-service providers. OpenAI is certainly not the only entity to “process” Italian personal data outside national borders, and it is certainly not the one that processes more of it than anyone else. So it is more than legitimate to ask on the basis of what criteria measures such as the one against OpenAI are adopted. In other words: if non-EU systems for processing personal data are really dangerous, then they must all be blocked, not just selected ones.

Another objection of the Guarantor concerns the fact that ChatGPT produces unreliable results and spreads disinformation. But where is the problem? The service is admittedly experimental and should not be used for applications with consequences for people. ChatGPT produces no more “disinformation” than any search engine that does not vet the reliability of its results, or than Wikipedia, which, while trying to stem the problem, has no global and overall control over its contents. Those who use these systems, ChatGPT included, do so at their own risk and responsibility.

And while we’re talking about responsibility, let’s get to the issue of “protection of minors”.

It is not for the Guarantor of personal data, which is no stranger to such choices, to substitute itself for those who by law hold parental authority over minors: mothers, fathers and guardians. Whether or not to allow the use of this service, like any other, is therefore a matter for the minors’ “legal representatives”, who cannot plead “ignorance”, “incompetence” or the “inevitability” of “digital natives” being beyond control.

If, however, the Guarantor’s power to substitute itself for the exercise of parental authority exists, then that power should be exercised over all electronic communication and information-society services used by minors. So, for example, Apple and those who ship Android on their devices should be sanctioned for not providing mechanisms to verify the user’s identity (for instance, requiring that a smartphone can only be activated, from first use, by an adult and then “associated” with a specific minor, in that case activating “parental control” functions). Or telecommunications operators should be sanctioned for not taking adequate measures (and which ones?) to check whether a terminal is being used by a minor. Or again, payment-system operators should be sanctioned for allowing “apps” to be used to exchange money and buy goods or services, even by minors. Will such measures ever be enacted?

Of course, none of this changes the fact that ChatGPT is made available to minors as well, and the existence of even worse situations does not “absolve” OpenAI of the sin, just as exceeding the speed limit on the motorway is not justified by the fact that other drivers go even faster. In the case of ChatGPT, however, it is doubtful that the rights of minors have been violated. Unlike other AI platforms, access to the service is not unconditional, because a contract must be entered into; under Italian law, only adults can do so with full legal effect. If a minor enters into the contract autonomously (perhaps lying about their age), it is once again up to the parents to request its annulment under Articles 1425 and 1426 of the Civil Code. The choice made by the Italian Civil Code is very reasonable: on the one hand it does not block transactions (which, it should be remembered, legally represent a superior interest to be protected) and on the other it offers protection to weaker parties.

The upshot of this reasoning is that the Guarantor’s provision creates more problems than it solves. It raises serious political and economic problems, because it calls into question the legitimacy of the entire US ecosystem built on platforms and the data economy, without Italy having a valid alternative to offer citizens and businesses. It reinforces the principle of individual irresponsibility and civic disengagement, because it suggests that, in order to use yet another digital gadget on the market, one may forgo personally asserting one’s rights, since someone else will take care of it. And it justifies the abdication by adults of their role as educators and guides of the weaker subjects who depend on them and are entrusted to them by nature, even before the law.
