Expert calls out ‘misleading’ claim that OpenAI’s GPT-3 wrote a full article

AI expert Jarno Duursma has called out a misleading article in The Guardian which claims to have been written entirely by OpenAI’s GPT-3.

GPT-3 has made plenty of headlines in recent months. The coverage is warranted: GPT-3 is certainly impressive, but many claims about its current capabilities are greatly exaggerated.

The headline of the article which Duursma questions is: “A robot wrote this entire article. Are you scared yet, human?”

It’s a headline that’s bound to generate clicks. However, a headline is often as far as the reader gets.

So there will be people who’ve read the headline and now believe powerful “robots” are writing entire articles: a false and dangerous narrative in a world where distrust of the media is already growing.

GPT-3 requires human input and must first be supplied with text prompts. To offer a simplified explanation, Duursma calls it “essentially a super-advanced auto-complete system.”

There’s another group of readers: those who skim-read perhaps the first half of an article to get the gist. It’s understandable; life is hectic. However, that means we writers need to ensure any vital information is near the top rather than buried further down.

AI technologies will remain assistive to humans for the foreseeable future. While AIs can help with things like gathering research and completing tasks, it all still requires human prompts.

In the case of The Guardian’s article, a human first wrote a prompt of around 50 words. GPT-3 then produced eight drafts from that text. A human went through each of the eight drafts and picked the best parts. Finally, a human edited the text into something coherent before publishing it.
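For readers curious what that pipeline actually looks like, below is a minimal sketch in Python using the legacy (pre-1.0) OpenAI Completions API. The prompt text, engine name, and parameter values here are illustrative assumptions, not The Guardian’s actual setup:

    import openai  # legacy (pre-1.0) OpenAI Python client

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    # Roughly 50 words of human-written instructions, standing in for
    # the kind of prompt The Guardian's editors supplied (illustrative).
    prompt = (
        "Please write a short op-ed of around 500 words. Keep the "
        "language simple and concise. Focus on why humans have nothing "
        "to fear from AI."
    )

    # Ask GPT-3 for eight independent completions of the same prompt.
    response = openai.Completion.create(
        engine="davinci",  # the original GPT-3 base engine
        prompt=prompt,
        max_tokens=700,    # cap the length of each draft
        temperature=0.9,   # higher values produce more varied drafts
        n=8,               # eight drafts, as described above
    )

    # A human still has to read the drafts, pick the best parts,
    # and edit them into a coherent article.
    for i, choice in enumerate(response["choices"], start=1):
        print(f"--- Draft {i} ---")
        print(choice["text"].strip())

Note that the n parameter simply returns eight independent completions of the same prompt; every one of those strings still has to be read, cherry-picked, and stitched together by a human editor.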

That’s a lot of human intervention for an article which claims to have been entirely written by AI.

Research scientist Janelle Shane, who has access to GPT-3, used it to generate 12 essays similar to those The Guardian would have sifted through to create its AI-assisted article. Most of the generated text isn’t particularly human-like.

Super-intelligent AIs that can perform such tasks as well as a human, known as AGIs (Artificial General Intelligences), are likely decades away.

Last year, AI experts participated in a survey on AGI timing:

  • 45% predict AGI will be achieved before 2060.
  • 34% expect it to be achieved after 2060.
  • 21% believe the so-called singularity will never occur.

Even if/when AGI is achieved, there’s a growing consensus that all decisions should ultimately be made by a human to ensure accountability. That means a theoretical article generated by an AI would still be checked by a human before it’s published.

Articles like the one published by The Guardian create unnecessary fear which hinders innovation. Such articles also raise unrealistic expectations about what today’s AI technologies can achieve.

Both outcomes are unhealthy for an emerging technology which has huge long-term potential benefits but requires some realism about what’s actually possible today and in the near future.

(Photo by Roman Kraft on Unsplash)
