What is beauty according to artificial intelligence?
"A photo of a beautiful woman, black background, taken with a Sony Alpha a7 III camera". This is the prompt (the instruction, in other words) that the specialist Nicolas Neubert gave to version 5 of Midjourney, one of the most popular AI image generators. He did it 264 times, one after the other. An unscientific but certainly indicative way to understand how AIs understand beauty.
Well, it's probably not really a surprise, but the results of Neubert's experiment offer a pretty clear perspective on artificial intelligence biases. 84% of the 264 photos generated depict white people, and all the portraits represent young women: in short, according to the system, beauty would have a very precise, extremely limited connotation.
The result does not change when the gender is swapped: directing the system to portray a man does not increase the diversity of the generations.
Who teaches what?
A similar experiment, again aimed at demonstrating the biases and prejudices of generative artificial intelligence, came from a user who posted a short video on Reddit entitled "How Midjourney sees professors, starting from the Department of origin". The content shows a decidedly stereotyped view, starting with the subjects taught: male professors of engineering, physics and mathematics; female professors of art history and other humanities.
The video generated a sort of trend on Twitter, in which professors, especially from the United States, posted a photo of themselves, compared with the one generated by the AI for their discipline of interest.
Why artificial intelligences have biases
The biases, or prejudices, of artificial intelligences are a direct consequence of the way these systems learn. It all starts with an enormous amount of data - images, in the case of Midjourney - which is processed and used to establish correlations, so that the system understands what to generate for each request. In other words, if an AI-based image generator is trained on a dataset that disproportionately represents certain groups of people, such as light-skinned individuals, the images it generates will tend to reproduce this bias.
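The mechanism can be made concrete with a deliberately simplified toy model: a "generator" that does nothing but sample attributes with the same frequencies found in its training data. This is a sketch for intuition only, not how Midjourney or any real diffusion model works, and the 80/20 split below is invented for the example, not a measurement of any actual dataset.

```python
import random

# Invented toy dataset: 80% of training images depict one group.
training_data = ["light-skinned"] * 80 + ["dark-skinned"] * 20

def toy_generate(n, data, seed=0):
    """Sample n 'generated portraits' from the training distribution.

    A real generative model is far more complex, but the statistical
    point is the same: generation frequencies track training frequencies.
    """
    rng = random.Random(seed)
    return [rng.choice(data) for _ in range(n)]

# Mimic Neubert's 264 repeated prompts.
images = toy_generate(264, training_data)
share = images.count("light-skinned") / len(images)
print(f"Share of light-skinned portraits: {share:.0%}")
```

Run repeatedly, the share hovers around the 80% present in the training data: the imbalance is not corrected by the sampling, it is faithfully reproduced, which is the core of the problem described above.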
As an experiment by Hugging Face also showed, this is more or less what happens with systems such as Stable Diffusion or Midjourney, which have a dangerous tendency to reproduce prejudices present in society. During the test conducted by scientists at the US company, when asked to portray a person in a position of power, the AIs produced images of white males in 97% of cases.
Is there a way to fight stereotypes?
This is an important issue for the future of artificial intelligence. Indeed, the risk is that the spread of these systems will end up reinforcing and reproducing dangerous biases on an even larger scale. According to an article that appeared in Forbes, work on AI bias is one of the keys to this technology having a positive impact. At the core is continuous work on the datasets, looking for possible imbalances that could generate distorted representations in specific areas.
Emily Bender, a linguist at the University of Washington and one of the most interesting voices in the world of AI, instead pushed for transparency in a recent interview with The New Republic. "I would like to see transparency", she explained. "I would like the user to always be able to recognize synthetic texts or images. And not only that: it would also be important to be able to trace back how the system was actually trained."
The AI Act, approved last May 11 by the European Parliament, also moves in the direction of transparency. The text of the provision states that generative AI models, such as GPT, are required to comply with specific transparency requirements. These include the obligation to disclose that content was generated by artificial intelligence, to design the model so as to prevent it from creating illegal content, and to publish summaries of the copyrighted data used for training.