Why the GPT-3 chatbot is so popular with students (from the back row)

Why don't professors like GPT-3, the most popular chatbot and all-rounder on the web? Because students can use it to cheat, so much so that some institutions have already started banning it and others say it marks the end of homework. Nothing really new under the sun: people complained when search engines appeared too. But this time it took very little to learn how to tame OpenAI's conversational AI, turning it into the nerd of the moment, the bullied kid who ends up doing your homework for you. ChatGPT lends itself very well to the role. It is a very self-confident chatbot, even when it is wrong; it never gets tired, and it is also creative. What does that mean? It can do homework and study assignments, write essays, reports, even short stories and novels. It corrects you if you write in broken English and can adjust its style: you can ask it, for example, to write an essay on foreign policy for a university course, or an essay on a topic of your choice for a fifth-grade class. The result is in some cases truly extraordinary.

Keep in mind that GPT-3 is not yet open to everyone: you sometimes have to wait for access, and OpenAI is considering making it a paid service. To use the Playground, you need to create an account on the OpenAI website, nothing inaccessible. The OpenAI Playground lets you start a conversation with the model: you can ask it to perform calculations, solve problems and write theses or essays. The results are sometimes extraordinary, and the generated text looks as if it had been written by a human being. So much so that the New York City Department of Education has banned the use of ChatGPT after discovering that some students had faked their exam papers.
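For readers who want to go beyond the Playground's web interface, the same kind of request can be made programmatically. The snippet below is only a minimal sketch using the legacy (pre-1.0) openai Python client; the model name, prompt and parameters are illustrative choices, not something prescribed by the article.

```python
# Minimal sketch: asking a GPT-3 era model for a short essay via the legacy
# (pre-1.0) openai Python client. Requires `pip install "openai<1.0"` and an
# API key exported as the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # your own key, never hard-coded

response = openai.Completion.create(
    model="text-davinci-003",  # illustrative GPT-3 era completion model
    prompt="Write a short essay on foreign policy for a university course.",
    max_tokens=400,            # upper bound on the length of the reply
    temperature=0.7,           # higher values give more "creative" output
)

print(response["choices"][0]["text"].strip())
```

The same call with a fifth-grade prompt and a lower temperature would produce a simpler, more sober text, which is exactly the kind of style adjustment described above.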

Determining whether or not a text was generated by an artificial intelligence is therefore one of the most urgent tasks, and not just for the world of education but also for the workplace: partly to unlock the potential of so-called generative artificial intelligence, and partly to manage the age-old anxieties of those who have felt, and still feel, in competition with machines.

One solution could come from OpenAI itself, which is working on a watermark to make its "writings" recognizable. At the same time, applications and services designed to recognize whether a text is human or artificial are appearing. They are tests, like the ones in Blade Runner used to unmask replicants, except that instead of interviews they are software tools that analyze how sentences are "constructed" and, based on their complexity, estimate the probability that the text comes from an AI. Examples are the OpenAI Detector and DetectGPT: the former was built on an earlier GPT model, while the latter is a Google Chrome extension. Both indicate whether the text of the page you are visiting was generated by an artificial intelligence. They can be wrong. And, paradoxically, it will take machine learning algorithms to teach them not to make mistakes.
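The article only says that these detectors look at how sentences are "constructed" and at their complexity. One common way to operationalise "complexity" is perplexity under a language model: machine-written text tends to be unusually predictable, hence lower perplexity. The sketch below illustrates that idea with GPT-2 from the Hugging Face transformers library; it is not the actual code behind the OpenAI Detector or DetectGPT, and the threshold value is an arbitrary assumption.

```python
# Illustrative sketch of a perplexity-based "is this AI-generated?" heuristic.
# Intuition: AI-generated text is often more predictable to a language model
# than human writing, so its perplexity tends to be lower.
# Requires `pip install transformers torch`.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 40.0) -> bool:
    """Crude heuristic: flag text whose perplexity falls below the threshold.
    The threshold here is an assumption for illustration, not a calibrated value."""
    return perplexity(text) < threshold

sample = "The quick brown fox jumps over the lazy dog."
print(perplexity(sample), looks_ai_generated(sample))
```

Real detectors combine signals like this with trained classifiers, which is why, as noted above, they can still be wrong, especially on short or edited texts.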
