The OpenAI experiment: ChatGPT enters politics

Artificial intelligence is trying to give itself rules. “Laws codify values and norms to regulate behavior. Beyond the legal framework, AI, just like society, needs more intricate and adaptive guidelines for its conduct,” reads a recent blog post from OpenAI, the company behind the most famous and most discussed example of generative artificial intelligence, ChatGPT. The post is a sort of manifesto, but also a call to collaborate on giving artificial intelligence in general, and not just OpenAI’s, a set of rules and principles to guide its growth.

Many questions

The European Union has proposed legislation on AI that focuses mainly on strengthening standards for data quality, transparency, human oversight and accountability. It also aims to address ethical issues and implementation challenges in sectors ranging from healthcare to education, finance and energy. But OpenAI goes further: alongside the general questions, it poses very specific ones:

  • Should AI have opinions about public figures? For example, should the AI be able to comment on the actions or policies of politicians, celebrities or influencers?
  • Should AIs be allowed to criticize or praise governments? For example, should AIs be able to express support or opposition to a certain regime, party or leader?
  • How can AI be prevented from spreading disinformation or propaganda? For example, how can AI-generated fake news be detected and corrected?
  • How can we ensure that AI respects human dignity and rights? For example, how do we protect the privacy, autonomy and consent of individuals who interact with AI systems or are influenced by their decisions?
  • How to define and measure the social impact of AI? For example, how to evaluate the positive and negative effects of AI on various aspects of society, such as the economy, the environment, culture and health?
  • How to encourage the development and adoption of beneficial AI applications while minimizing potential harms and risks?
  • How to ensure that the voices and interests of diverse groups and communities are represented and respected in the design and governance of AI systems?

To this end, OpenAI has announced that it will award ten grants of $100,000 each to fund experiments with democratic processes for deciding how AI should be regulated and controlled.

Augmented democracy

“By ‘democratic process’ we mean a process in which a broadly representative group of people exchange views, engage in deliberative discussions and ultimately decide on an outcome through transparent decision-making,” the post reads. OpenAI also offers some examples: Wikipedia, Twitter Community Notes, DemocracyNext, Platform Assemblies, MetaGov, RadicalxChange, People Powered, Collective Response Systems and pol.is, as well as the work of the Collective Intelligence Project (CIP).

Since this is OpenAI, however, one can also imagine that the process of collectively deciding the rules for artificial intelligence would itself use artificial intelligence. As a tool, though, not as a party: ChatGPT could mediate between different motions, group similar positions together and manage the votes. In short, it would serve to streamline communication among the people taking part in the democratic process. It is perhaps the first time artificial intelligence has been called on in an arena hitherto reserved for politics.
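
To make the idea concrete, here is a minimal sketch, not taken from OpenAI’s post, of how a language model could play that mediating role: it receives the participants’ positions and is asked to group and summarize them without taking sides. It assumes the official OpenAI Python client; the model name, the prompt and the sample positions are all illustrative assumptions.

    from openai import OpenAI

    # Illustrative sketch only: an LLM as a neutral mediator in a
    # deliberative process. Nothing here reflects an actual OpenAI program.
    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Hypothetical positions collected from participants in an assembly.
    positions = [
        "AI should never express opinions about individual politicians.",
        "AI may summarize politicians' public records, without judgment.",
        "AI should be free to criticize governments if it cites sources.",
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a neutral mediator in a citizens' assembly on "
                    "AI rules. Group similar positions together and "
                    "summarize points of agreement and disagreement "
                    "without taking sides."
                ),
            },
            {"role": "user", "content": "\n".join(f"- {p}" for p in positions)},
        ],
    )

    print(response.choices[0].message.content)

In a design of this kind the model only structures the discussion; the actual decisions would remain with the human participants, consistent with the tool-not-party distinction above.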

The OpenAI nonprofit grants are open to anyone with a good idea and a viable plan for conducting an AI governance experiment; applicants need no prior experience with, or relationship to, OpenAI or AI research. The deadline for submitting proposals is 25 June 2023, and the winners will be announced on 25 July 2023. The first results of the teams’ work will be made public by 20 October.

The challenges

The road to democratic AI governance requires a collective effort from companies, governments, academics and society as a whole. Democratic participation, transparency and collaboration are fundamental pillars for tackling the ethical and social challenges posed by artificial intelligence, and if OpenAI’s grants will not answer every open question, they at least point to the only viable path, that of collaboration. They do even more: they place OpenAI in a privileged position, one that should help it avoid obstacles such as the suspension of its service in Italy for non-compliance with privacy regulations. This time it is OpenAI asking governments and other actors to take part in drafting the rules for AI: “The governance of the most powerful systems, as well as the decisions relating to their use, must have strong public supervision,” the post reads. But the San Francisco company is looking even further ahead, to a future that, despite the very rapid progress of artificial intelligence, is still science fiction today: “This program represents a step forward in defining democratic processes for the supervision of artificial general intelligence and, ultimately, of superintelligence.”
