Glaze protects artists from artificial intelligence: how it works and how to use it

On the subreddit of Midjourney, one of the best-known AI image generation tools, there is a fascinating thread that is really just a photo gallery: the first image is a photo taken in the mountains, and the following ones are that same photo transformed into the style of nine artists, from Michelangelo to Van Gogh to Salvador Dalí.

One of the most striking features of systems like Midjourney or Dall-E 2 is precisely this ability to reshape reality according to the vision of more or less famous artists. That is hardly a problem for artists who belong in art history books, like Michelangelo or Van Gogh, but it can be much harder to accept for those who make a living today as photographers or illustrators.

A group of Italian designers made this very clear when they published a Manifesto for the protection of human creativity, asking the European Union to protect, in its AI Act, the rights of those whose work feeds artificial intelligence.

What Glaze is and how it works

The central question of the debate is essentially one: is it legitimate for image creation services to enrich themselves starting from works covered by copyright? While the law catches up, artists can do very little to keep their works out of a training database once they publish them on the Internet. There is, in short, no opt-out from being included in such an archive.

A defense strategy comes from the University of Chicago, which has released a tool called Glaze. The software, which can be downloaded for free, prevents artificial intelligences from learning to reproduce the style of a particular artist.

The way it works is relatively simple. Imagine we are an artist who wants to publish a work online, but we do not want the upload to be used as training material for an artificial intelligence; in short, we want to avoid finding AI-created images around that resemble our style.

Before releasing the work online, we can upload a digital version of it to Glaze and choose to overlay a certain style, perhaps Picasso’s or Pollock’s. The tool then modifies the file in a way that is invisible to the human eye but detectable by an artificial intelligence, which will see a mix between our style and that of the selected artist. In other words, the AI will not be able to reproduce that specific touch, that way of painting or photographing: “It’s one strategy to regain consent – the artist and illustrator Karla Ortiz told the New York Times – Artificial intelligence services profit from our works, taking data that does not belong to them but is the property of the artists who produced it”.

Again: “What we do is try to understand how the AI model perceives its own version of what an artistic style is – explained Ben Zhao, who leads the project, in a long interview with TechCrunch – And then we work in that dimension, to distort what the model interprets as a given style”. In other words, Glaze works in a gap: the one between how we see the world and how an artificial intelligence perceives it.
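The real Glaze tool computes an optimized adversarial perturbation against a neural feature extractor; the details are in the project’s paper, not here. As a much simplified illustration of the underlying idea (a bounded, nearly invisible pixel change that nudges an image’s “style features” toward those of another work), here is a toy sketch in Python, where the stand-in “features” are just per-channel color means:

```python
import numpy as np

def channel_means(img: np.ndarray) -> np.ndarray:
    """Toy stand-in for a model's style features: per-channel means."""
    return img.mean(axis=(0, 1))

def cloak(img: np.ndarray, style_target: np.ndarray, eps: float = 0.03) -> np.ndarray:
    """Shift img's per-channel means toward those of style_target,
    capping every pixel change at +/- eps so the edit stays small.
    Pixel values are assumed to lie in [0, 1]."""
    shift = channel_means(style_target) - channel_means(img)
    shift = np.clip(shift, -eps, eps)       # bound the perturbation
    return np.clip(img + shift, 0.0, 1.0)   # keep valid pixel values

# A mid-gray "artwork" nudged toward a brighter "target style":
art = np.full((4, 4, 3), 0.5)
target_style = np.full((4, 4, 3), 0.9)
cloaked = cloak(art, target_style, eps=0.03)
```

The point of the sketch is only the shape of the trick: to a crude statistic the cloaked image now looks a bit like the target style, while no pixel has moved by more than `eps`. A real system applies the same logic in the feature space of a deep network rather than in raw color averages.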

Glaze’s approach is almost piratical: the ultimate goal is to confuse the AIs and make them less effective, a bit like what the Italian startup Cap_able does with facial recognition: “Even just a few modified images are enough to have a significant impact on the output of these models. The more art is protected before the databases are ingested, the further these AIs’ output will drift from the original artist’s style. What we are actually doing, in purely technical terms, is an attack, not a defense”.

How important is the training database of an artificial intelligence

What is inside an artificial intelligence? In other words: what is the data from which AIs learn about the world? The question is particularly important, especially in light of the arrival of GPT-4. In presenting its new language model, OpenAI decided not to release any information about the dataset, that is, about the texts on which the system was trained: “We can consider the opening of OpenAI finished”, the AI expert Ben Schmidt commented on Twitter, with an effective play on words.

The point is that knowing the sources on which artificial intelligences are trained is particularly important: it serves to understand their vision of the world (and therefore the prejudices and stereotypes they might fall into) but also, especially in the case of AIs that create images, to defend copyright.

Systems such as Dall-E, Stable Diffusion or Midjourney are trained on a huge number of images, drawn from databases such as LAION-5B, which contain works of art, photos and graphics, each labeled with a corresponding text. These archives are open and available to anyone who wants to train models, as long as they do not make money from it: a tacit agreement that image generation tools do not seem to respect. This is also one of the points behind the lawsuits that companies like Getty Images are filing against services like Stable Diffusion.
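This is also why the text labels matter so much for artists: because captions often name the author, anyone training on such an archive can trivially collect every image associated with a given name. As a hypothetical miniature (the URLs and captions below are invented, and real datasets hold billions of such pairs), in Python:

```python
# A tiny, invented stand-in for a LAION-style dataset: (image URL, caption) pairs.
dataset = [
    ("https://example.com/a.jpg", "Starry Night by Vincent van Gogh"),
    ("https://example.com/b.jpg", "A mountain landscape at sunrise"),
    ("https://example.com/c.jpg", "Illustration in the style of Van Gogh"),
]

def works_mentioning(artist: str, pairs: list) -> list:
    """Return every (url, caption) pair whose caption mentions the artist,
    case-insensitively."""
    return [p for p in pairs if artist.lower() in p[1].lower()]

print(len(works_mentioning("van gogh", dataset)))  # → 2
```

A one-line filter like this is, in miniature, how a model builder can assemble a per-artist training set from a captioned archive, and it is exactly the pipeline that tools like Glaze try to poison.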

While we wait for jurisprudence to take its course and indicate a direction, one solution remains: to confuse the machines, as the Glaze project teaches.
