The “ChatGPT” revolution takes teachers by surprise

In the history of cheating, as in the history of humanity itself, this is a major revolution: the advent of a “conversational” artificial intelligence that has an answer for everything and produces texts on demand.

Since the Californian company OpenAI released ChatGPT, the latest version of its GPT-3 system, at the beginning of December, the popular game has been to test it by having it write articles or even poems on complex subjects. In English, French, German, Russian or Japanese, the results are generally stunning.

This hyper-trained artificial intelligence has ingested a snapshot of the web (up to 2021) and works with billions of parameters. After the playful discovery phase, it is in high schools and universities that the first upheavals caused by this new tool, which gained a million users in a single week, are being felt.

As a reminder, with this version of GPT-3, and soon GPT-4, one can obtain an essay or a dissertation on any subject, from quantum physics to the Flemish Primitives. And it goes beyond copy-and-paste, since GPT-3 generates “unique” texts that the anti-plagiarism software available to universities and high schools cannot detect.

Over the past month, examples of assignments written by GPT-3 have multiplied, shared on teachers’ Facebook groups that have gone into alert mode.

How do teachers spot cheating, then?

It comes down to style: the artificial intelligence produces texts that a very strong but somewhat “robotic” final-year high-school student could write, using the most common words and a predictable syntax.

Teachers are organizing their response: in-class written tests, surprise oral exams, but also a new study corpus. As the Canadian newspaper Le Devoir reports, literature teachers have found that the platform “knows” certain novels very well, so they are thinking of having their students work on texts it has not mastered.

Eventually, new tools will be developed to identify work produced by an artificial intelligence, but it is above all humans who must train themselves to do so. According to MIT Technology Review (published by the Massachusetts Institute of Technology, a reference in the field), everyone will have to learn this skill: beyond classroom cheating, tomorrow we will all live surrounded by AI-generated texts, including disinformation campaigns. And if we do not spot them, they will be taken as truth by the next AIs, which will be trained on that false data, and so on.

To prove to you that this column was written by a human, I will conclude with this Creole expression, which has nothing to do with the matter at hand: “it doesn’t take four bad guys to kill a goat”.
