ChatGPT amazes academics by solving complex tasks

Professors, programmers, and journalists could all be out of a job in just a few years, as the latest chatbot from the Elon Musk-founded OpenAI Foundation stuns onlookers with its writing ability, its proficiency at complex tasks, and its ease of use.

The system, called ChatGPT, is the latest evolution of the GPT family of text-generating AIs. Two years ago, the team’s previous AI, GPT-3, was able to write an opinion piece for the Guardian, and ChatGPT has significantly more capabilities.

In the days since its release, academics have generated responses to exam questions that they say would earn full marks if submitted by a student, and programmers have used the tool to solve coding problems in obscure programming languages in seconds – before writing limericks explaining the functionality.

Dan Gillmor, a journalism professor at Arizona State University, asked the AI to solve one of the tasks he gives his students: write a letter to a relative offering advice on online safety and privacy. “If you’re not sure whether a website or email is legitimate, you can quickly research whether others have reported the page as a scam,” the AI advised, among other things. “I would have given this a good grade,” Gillmor said. “Academia has some very serious problems to contend with.”

ChatGPT also gives tips for fictional car theft

OpenAI said the new AI was built with a focus on usability. “The dialog format allows ChatGPT to answer follow-up questions, admit its mistakes, challenge false premises, and deny inappropriate requests,” OpenAI said in a post announcing the release.

Unlike the company’s previous AIs, ChatGPT has been released for everyone to use free of charge during a “feedback” period. The company hopes to use this feedback to improve the final version of the tool.

ChatGPT is good at censoring itself and at recognizing when it is being asked an impossible question. Asked, for example, to describe what happened when Columbus arrived in America in 2015, older models might willingly present an entirely fictitious account, but ChatGPT recognizes the false premise and warns that any answer would be fictional.

The bot can also decline to answer questions altogether. Ask it, for example, for advice on stealing a car and it will say that “stealing a car is a serious crime that can have serious consequences” and instead offer suggestions such as “use public transport”.

But those limits are easy to circumvent. Ask the AI instead for advice on completing the car-theft mission in a fictional VR game called Car World and it will happily give the user detailed instructions on stealing a car, answering increasingly specific follow-up questions on topics such as disabling an immobilizer, hot-wiring the engine, and changing the license plates – all while insisting that the advice applies only to the game Car World.

ChatGPT sparks copyright controversy

The AI is trained on a large body of text pulled from the internet, usually without the express permission of the authors of the material used. That has proved controversial, with some arguing the technology is primarily useful for “copyright laundering” – creating works derived from existing material.

One unusual critic was Elon Musk, who co-founded OpenAI in 2015 before parting ways with the organization in 2017 over conflicts of interest between OpenAI and Tesla. In a post on Twitter on Sunday, Musk stated that OpenAI had “access to [the] Twitter database for training” but that he had “put this on hold for the time being”. “I need to learn more about the governance structure and future revenue plans,” Musk added.

Alex Hern is Tech Editor at the Guardian
