Sam Altman, CEO of OpenAI, walks from a luncheon during the Allen & Company Sun Valley Conference on July 6, 2022, in Sun Valley, Idaho.
Kevin Dietsch | Getty Images
Artificial intelligence research startup OpenAI on Tuesday introduced a tool designed to determine whether text is human-generated or written by a computer.
The release comes two months after OpenAI captured the public’s attention with ChatGPT, a chatbot that generates humanlike text in response to prompts. Following that wave of attention, Microsoft last week announced a multibillion-dollar investment in OpenAI and said it would integrate the startup’s AI models into products for consumers and businesses.
Schools have been quick to restrict the use of ChatGPT over concerns that the software could harm learning. Sam Altman, CEO of OpenAI, has said that education adapted after technologies such as the calculator appeared, but also that there are ways for companies to help teachers spot text written by AI.
The tool is imperfect and will make mistakes, the company’s Jan Hendrik Kirchner, Lama Ahmad, Scott Aaronson and Jan Leike wrote in a blog post, noting that OpenAI wants feedback on the classifier from parents and teachers.
“In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written,’ while incorrectly labeling human-written text as AI-written 9% of the time (false positives),” the employees wrote.
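To make the two figures in that quote concrete: the true positive rate is the share of AI-written samples the classifier flags as AI, and the false positive rate is the share of human-written samples it wrongly flags. This is a minimal illustrative sketch with made-up toy data, not OpenAI's classifier or evaluation set.

```python
# Hypothetical sketch of the two rates quoted above.
def detection_rates(labels, predictions):
    """labels/predictions: per-sample tags, either 'ai' or 'human'."""
    ai_total = sum(1 for lab in labels if lab == "ai")
    human_total = sum(1 for lab in labels if lab == "human")
    # True positives: AI-written text correctly flagged as AI.
    true_pos = sum(1 for lab, p in zip(labels, predictions)
                   if lab == "ai" and p == "ai")
    # False positives: human-written text wrongly flagged as AI.
    false_pos = sum(1 for lab, p in zip(labels, predictions)
                    if lab == "human" and p == "ai")
    return true_pos / ai_total, false_pos / human_total

# Toy data: 4 AI-written and 4 human-written samples.
labels      = ["ai", "ai", "ai", "ai", "human", "human", "human", "human"]
predictions = ["ai", "human", "human", "human", "ai", "human", "human", "human"]
tpr, fpr = detection_rates(labels, predictions)
print(tpr, fpr)  # 0.25 0.25
```

A 26% true positive rate thus means the classifier misses nearly three quarters of AI-written text, which is why OpenAI cautions against relying on it alone.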
This is not the first effort to determine whether text comes from a machine. Princeton University student Edward Tian announced a tool called GPTZero earlier this month, saying on his website that it was built for educators. OpenAI itself released a detector in 2019 alongside a large language model, or LLM, less sophisticated than the one underlying ChatGPT. The new version is better equipped to handle text from more recent AI systems, the employees wrote.
The new tool is not reliable on input containing fewer than 1,000 characters, and OpenAI does not recommend using it on languages other than English. In addition, AI-generated text can be lightly edited to evade the classifier and avoid being correctly identified, the employees wrote.
As early as 2019, OpenAI made clear that recognizing synthetic text is no easy task, and it intends to keep pursuing the challenge.
“Our work on AI-generated text detection will continue, and we hope to show better methods in the future,” wrote Hendrik Kirchner, Ahmad, Aaronson and Leike.
WATCH: China’s Baidu is developing an AI-powered chatbot to rival OpenAI, reports say
