Morozov: “The Italian Privacy Guarantor did well on ChatGPT. Silicon Valley’s AI must be opposed on a political and philosophical level”

Morozov: "The Guarantor on ChatGpt has done well. The AI ​​of Silicon Valley must be opposed on a political and philosophical level"

[ad_1]

Evgeny Morozov is considered one of the leading experts on the internet and digital technology. Born in 1984 in Belarus, a sociologist, his commentary appears in the Guardian, the Economist and the New York Times. His popularity rests largely on one insight: the idea of technological solutionism. Described and criticized as a defining trait of our time, it consists of an iron faith in the saving power of technology, as if its innovations could solve social and political problems on their own. That thesis sets him against Silicon Valley and the model of enterprise, state and society it promotes. Morozov has devoted some of his most recent reflections to artificial intelligence: a “Cold War” concept, he argues, that will do more good for Bay Area companies than for society. And his critique is not about threatened jobs. Morozov proposes a deeper one: “It should be opposed above all for its political and philosophical implications.” Politics and the media, in his view, have a fundamental role to play in that process.

Last week the Italian Privacy Guarantor forced OpenAI to stop collecting the personal data of Italian users. What did you think of this decision?

“It’s their job to enforce the rules. OpenAI, like most tech startups, bends the rules to get its product to market fast and to save on legal costs, and then argues that its practices clash with outdated social norms that need to change. It’s been that way for decades.”

Is ChatGPT a privacy threat?

“Judging by the last few months, when OpenAI’s use of an open-source software library led to some user data being exposed online, I think the answer is obvious. One wonders: how many more shortcuts will they take to launch their products? I think there’s a way to do all of this more slowly, with more testing, without having to rely on cheap open-source libraries.”

In an editorial in the Guardian you criticized the very concept of artificial intelligence. According to you, it is neither intelligent nor artificial. What exactly do you mean?

“I was pointing out that the very idea of ‘artificial intelligence’ carries a Cold War bias. It emerged in the 1950s, at the height of the Cold War. Its first uses were military, and much of the early work was funded by the military. The argument I’ve made is that, as a concept, it belongs in a museum, like other Cold War terms such as ‘Sputnik moment’ or ‘domino theory.’”

And how does this relate to the present day?

“There was a misguided attempt to build something called ‘artificial intelligence’ in the 1950s, and it is still a misguided attempt to build something called ‘artificial general intelligence’ in 2023. At best, we are talking about tools that will be able to replicate, not match or replace, some of the functions performed by human beings. And that’s a great thing. Once upon a time ‘computers’ were real people, and as time went on we ended up calling machines ‘computers.’ The same goes for ‘calculators.’ The techniques we currently call ‘AI’ belong to this vein: they do some limited things well. We should make them do those things better, in a supervised and tightly regulated way. In Silicon Valley they argue instead that we need this Swiss army knife (artificial general intelligence) capable of doing everything, even if it will do most things badly.”

Who will benefit from all this?

“I know it’s good for Silicon Valley business models. I’m not sure why it should be good for the world; I prefer to continue placing my faith and trust in specific institutions (and in limited rather than general-purpose technologies) that do a few things but do them well.”

What role do the media play in this debate?

“I could go on and on about what the media could do. The media, and more generally what is called the ‘public sphere’, should help us in what Immanuel Kant called the ‘public use of reason’: they help us understand why and how certain laws and institutions work. The whole push towards AGI tells us that such questions don’t matter, that we should focus only on getting things done and on efficiency, celebrating ‘black boxes’ instead of ‘the public use of reason’. I believe this is a suicidal move for society as a whole; an excessive attention to efficiency, without investigating the costs of generating it, can produce enormous problems, of which climate change is one of the most recent manifestations.”

Should we fight against AGI?

“AGI must be opposed not only over privacy issues but also for more political and philosophical reasons: the rise of the black-box epistemology that underlies it will distance society further and further from understanding how power works, what social justice means, who the good guys are and who the bad guys are. AGI focuses only on performance and on achieving goals through statistical correlations. It doesn’t need theories about the world. But without theories, there is no politics.”

This ties into the question of work. Do you think artificial intelligence is a threat?

“The impact on work worries me less than other issues. If critics like David Graeber were right, and if most of the work being done today consisted of ‘bullshit jobs,’ AI taking it over wouldn’t be such a bad thing. But Graeber’s argument (and, in a different way, Marx’s) is that these meaningless jobs are structurally necessary for the maintenance of the current economic and political system. So I don’t think they will disappear anytime soon.”

What worries you more, then?

“I’m much more concerned about the drive to build AGI, even though I think they won’t be able to build it. It’s the path there that worries me: along the way, the solutionist ideology I have been denouncing for a decade will take root even further. Why should we trust Silicon Valley to solve political and social problems, even if it could build artificial general intelligence? Is this a conscious decision we made as a society? I don’t believe it is.”

Do you see opportunities?

“Asking this question about AI is like asking it about the calculator. Sure, it’s great if you have very specific tasks at hand. But I wouldn’t use a calculator to compose a symphony. Similarly, with LLMs (Large Language Models, ed.) you can drastically improve the style of the sentences you write.”

It seems that they will soon be able to produce texts and scripts very similar to those made by professionals.

“I highly doubt it. They will produce a lot of predictable rubbish that just repeats what has been done before, but with a new twist. Hollywood is already very good at producing that even without LLMs. Will some talented novelists and screenwriters use LLMs to craft more beautiful and engaging sentences? I have no doubt. But they would be stupid to use them to generate plots, the heart of creative fiction.”

This revolution is currently being led by private companies, with institutions occasionally trying to rein it in. Is that a danger?

“Yes, and my concern is with the privatization of politics more than with the privatization of the welfare state or public administration. I think robust democracies need a strong public sphere in which different conceptions of the common good and the good life can be articulated and challenged. We also need competing accounts of why certain problems exist. Take a problem like poverty. Does it exist because the poor are financially irresponsible and need an app to educate them? Or does it exist because billionaires exploit loopholes in tax law? With Silicon Valley’s AGI, our natural impulse is to assume the former, because it’s a problem more easily solved with technology. So there isn’t even a debate; we limit ourselves to picking the lowest-hanging fruit. Yet most of our problems are of the latter kind: they are structural, they have powerful forces behind them, and we need a causal account of why they exist. But the Silicon Valley giants are not interested in such explanations; they want to commercialize their solutions and get rich in the process. What is impoverished along the way is our democracy.”

So what role should institutions play? How should they address these challenges?

“First, they must limit the encroachment of the tech giants on our public life. Where this is unavoidable, they must subject them to strong regulation and oversight, both on the inputs (data and models) and on the outputs (predictions). I would like them to also start building their own LLMs, or at least to invest in curating high-quality datasets that can be fed into models built by others; such datasets are evidently public goods and should be produced the way we build library collections. Otherwise we end up with low-quality data scraped from online sources like Reddit. But governments must also strengthen the public sphere, the lifeblood of our democracy. That means funding the public media and granting them autonomy and independence. If they don’t, we’ll slip further into the quagmire of solutionism, where we accept the simplest solution just because it looks clean and efficient, even if to most people it would seem unfair.”
