ChatGPT: the promises, pitfalls and panic

The excitement surrounding ChatGPT – an easy-to-use AI chatbot that can deliver essays or computer code on demand and within seconds – has sent schools into a panic and Big Tech green with envy.

The potential impact of ChatGPT on society remains complex and unclear, even as its creators announced a paid subscription version in the United States on Wednesday.

Here’s a closer look at what ChatGPT is (and isn’t):

Is this the turning point?

It is likely that the release of ChatGPT in November by the California company OpenAI will be remembered as a turning point in introducing a new wave of artificial intelligence to the wider public.

What is less clear is whether ChatGPT represents a true breakthrough, with some critics calling it a slick PR move that helped OpenAI secure billions of dollars in investment from Microsoft.

Yann LeCun, Chief AI Scientist at Meta and a professor at New York University, believes that “ChatGPT is not a very interesting scientific advance,” calling the app a “flashy demo” built by talented engineers.

LeCun, speaking on the Big Technology Podcast, said ChatGPT doesn’t have “any internal model of the world” and simply generates “one word after another” based on its input and on patterns found on the internet.

“When working with this AI model, you have to remember that it’s a slot machine, not a calculator,” warned Haomiao Huang of Kleiner Perkins, a Silicon Valley venture capital firm.

“Every time you ask a question and pull the lever, you get an answer that may surprise you… or not… Failure can be very unpredictable,” Huang wrote on Ars Technica, a technology news website.

Like Google

ChatGPT is powered by an AI language model that is almost three years old — OpenAI’s GPT-3 — and the chatbot uses only a fraction of its capabilities.

The real revolution is human-like conversation, says Jason Davis, a research professor at Syracuse University.

“It’s familiar, it’s a conversation and guess what? It’s like making a Google search request,” he said.

ChatGPT’s rockstar success even surprised its creators at OpenAI, which received billions in new funding from Microsoft in January.

“Given the magnitude of the economic impact we expect here, the more gradual the better,” said OpenAI CEO Sam Altman in an interview with StrictlyVC, a newsletter.

“We put GPT-3 out almost three years ago … so the additional update from it to ChatGPT, I felt like it was predictable and I want to do more introspection on what I sort of miscalibrated,” he said.

The risk, Altman added, has alarmed the public and policymakers, and on Tuesday his company unveiled a tool to detect AI-generated text, amid concerns from teachers that students could rely on artificial intelligence to do their homework.

What now?

From lawyers to speechwriters, from coders to journalists, everyone is waiting with bated breath to experience the disruption caused by ChatGPT. OpenAI has just launched a paid version of its chatbot: $20 per month for better and faster service.

Officially, the first major application of OpenAI’s technology will be in Microsoft’s software products.

Although details are scarce, it is widely assumed that ChatGPT-like capabilities will appear in the Bing search engine and in the Office suite.

“Think about Microsoft Word. I don’t have to write an essay or an article, I just have to tell Microsoft Word what I want quickly,” said Davis.

He believes that influencers on TikTok and Twitter will be the earliest adopters of this generative AI, since going viral requires a lot of content and ChatGPT can produce it quickly.

This inevitably raises the specter of disinformation and spamming being carried out on an industrial scale.

Right now, Davis says, ChatGPT’s reach is severely limited by computing power, but once that bottleneck is resolved, the potential opportunities and dangers will grow exponentially.

And much like the long-promised self-driving car that never arrived, experts disagree on whether that moment is months or years away.

Fear of ridicule

LeCun said Meta and Google have refrained from releasing an AI as powerful as ChatGPT for fear of ridicule and backlash.

Earlier, lower-profile releases of language-based bots – Meta’s Blenderbot or Microsoft’s Tay, for example – quickly produced racist or inappropriate content.

Tech giants should think hard before releasing something “that will spew nonsense” and disappoint, he said.

© Agence France-Presse


