Can AI applied to research be trusted? An experiment

An artificial intelligence program such as ChatGPT can be used to commit genuine scientific fraud. It is worth asking whether we are ready to be plunged into such frightening informational noise.

Artificial intelligence represents a big step forward for scientific research. But how can it be exploited unethically? An AI program like ChatGPT can be used to create genuine scientific fraud. In practice, users can generate realistic, credible texts from the notes they provide. This has made it possible to construct fake scientific abstracts through an automated system, producing documents that can fool even professionals in the field.

Thanks to a huge database of more than 8 million sentences, ChatGPT is able to “learn” to write properly at a human level.

First, ChatGPT can quickly generate fake scientific text drawing on a variety of sources. Second, the software can adapt the tone and content of the text to the user’s needs. Finally, the program can integrate with other editing tools to make the process even easier.
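To give a sense of how little effort the workflow above requires, it can be reduced to templating a single prompt. The sketch below only assembles a chat-completion-style request payload; the model name, field layout, and parameter choices are assumptions for illustration, and no real API call is made.

```python
# Minimal sketch: how trivially a request for a plausible-sounding
# fake abstract can be templated. This only builds the payload; it
# does not contact any service.

def build_fake_abstract_request(topic: str, journal_style: str) -> dict:
    """Assemble a chat-completion-style payload asking for a
    plausible scientific abstract on an arbitrary topic."""
    prompt = (
        f"Write a 200-word scientific abstract about {topic}, "
        f"in the editorial style of {journal_style}, including "
        "plausible-sounding methods, results, and statistics."
    )
    return {
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,        # some variety between runs
    }

request = build_fake_abstract_request(
    "a novel drug for hypertension", "a major medical journal"
)
print(request["messages"][0]["content"])
```

Swapping in a different topic or style takes one line, which is precisely the scale problem the article describes: the marginal cost of each additional fabricated abstract is close to zero.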

If you want to create fake images to accompany a text generated by ChatGPT, software such as ImaGen is the right tool. Enter the text prompt from which you want to generate an image, choose a graphic theme from those available, and in a few seconds you will have a personalized image ready for use in the document. With ImaGen you can even combine multiple images to create more interesting composite content.

The draft produced through the previous steps can then be refined with ChatGPT, which will generate increasingly elaborate variants of the source text in response to user input, until a satisfactory result is obtained.

ChatGPT dramatically increases the scale and credibility with which fraudulent scientific articles can be generated. In the past, it took a team of people to create fake content and make it look real; ChatGPT makes it far easier.

This means it is easier for bad actors to release fake research results or falsified data.

In a system where researchers’ careers and funding evaluations depend on the number of papers published, this development could have negative consequences for honest researchers.

Well: everything you have read so far, except this last sentence and what follows, was generated as an Italian text by a piece of software called Copymatic, one of many tools available to do exactly what this article denounces as a danger: generate a credible, well-argued text in support of any thesis, scientific or not.

Are we ready for an invasion of artificially generated content whose reliability the reader cannot determine, and above all whose sheer volume will frighteningly amplify the informational noise, making it possible to manipulate every aspect of our lives that rests on written, video, and audio communication?
