Expert calls out ‘misleading’ claim that OpenAI’s GPT-3 wrote a full article
AI News – Thu, 10 Sep 2020 – https://news.deepgeniusai.com/2020/09/10/experts-misleading-claim-openai-gpt3-article/

AI expert Jarno Duursma has called out a misleading article in The Guardian which claims to have been written entirely by OpenAI’s GPT-3.

GPT-3 has made plenty of headlines in recent months. The coverage is warranted: GPT-3 is certainly impressive, but many of the claims about its current capabilities are greatly exaggerated.

The headline of the article which Duursma questions is: “A robot wrote this entire article. Are you scared yet, human?”

It’s a headline that’s bound to generate clicks. However, a headline is often as far as the reader gets.

So there will be people who’ve read the headline and now believe there are powerful “robots” writing entire articles—a false and dangerous narrative in a world with an already growing distrust in the media.

GPT-3 requires human input and must first be supplied with text prompts. To offer a simplified explanation, Duursma calls it “essentially a super-advanced auto-complete system.”
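Duursma’s “super-advanced auto-complete” framing can be illustrated with a toy next-word predictor. This sketch is a tiny Markov model, nothing like GPT-3 in scale or sophistication, but it shows the same basic idea: the system only ever extends a human-supplied prompt with words that are statistically plausible given what came before. The corpus and prompt below are invented for the example.

```python
import random
from collections import defaultdict

def train(corpus_words):
    """Map each word to the list of words observed to follow it."""
    successors = defaultdict(list)
    for current, nxt in zip(corpus_words, corpus_words[1:]):
        successors[current].append(nxt)
    return successors

def autocomplete(successors, prompt_words, length=5, seed=0):
    """Extend a human-written prompt one predicted word at a time."""
    rng = random.Random(seed)
    words = list(prompt_words)
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break  # nothing ever followed this word in training
        words.append(rng.choice(options))
    return " ".join(words)

corpus = "the robot wrote the article and the human edited the article".split()
model = train(corpus)
print(autocomplete(model, ["the"], length=4))
```

Note that the model cannot start without the human-supplied `["the"]` prompt, and every word it adds is drawn from patterns in its training text; GPT-3 works analogously, just with vastly more data and a far richer notion of context.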

There’s another group of readers: those who skim-read perhaps the first half of an article to get the gist. That’s understandable; life is hectic. It means, however, that writers need to ensure any vital information is near the top rather than buried.

AI technologies will remain assistive to humans for the foreseeable future. While AI can help with tasks such as gathering research, it still requires a human to prompt and direct it.

In the case of The Guardian’s article, a human first wrote 50 words. GPT-3 then created eight drafts from the contributed text. A human then went through each of the eight drafts and picked the best parts. Finally, a human went on to edit the text to make it coherent before publishing it.
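The workflow described above can be sketched as a simple pipeline in which the model is only one stage. Everything here is illustrative: the `draft` function is a placeholder standing in for a real language-model call, and the prompt text is invented for the example.

```python
def draft(prompt, variant):
    """Stand-in for a language model; a real system would call GPT-3 here."""
    return f"{prompt} (machine draft #{variant})"

def guardian_style_pipeline(human_intro, n_drafts=8):
    # 1. A human writes the opening prompt (~50 words in the real case).
    # 2. The model produces several candidate drafts from that prompt.
    drafts = [draft(human_intro, i) for i in range(1, n_drafts + 1)]
    # 3. A human picks the best passages from the candidates...
    selected = drafts[:2]  # trivial stand-in for editorial judgement
    # 4. ...and edits them into a coherent piece before publication.
    return " ".join(selected)

article = guardian_style_pipeline("Humans have nothing to fear from AI.")
print(article)
```

Steps 1, 3, and 4 are all human labour; only step 2 is the model, which is the point Duursma is making.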

That’s a lot of human intervention for an article which claims to have been entirely written by AI.

Research scientist Janelle Shane has access to GPT-3 and used it to generate 12 essays similar to those The Guardian would have sifted through to help create its AI-assisted article. Most of the generated text isn’t particularly human-like.

Super-intelligent AIs that can perform all of these tasks like a human, known as AGIs (Artificial General Intelligences), are likely decades away.

Last year, AI experts participated in a survey on AGI timing:

  • 45% predict AGI will be achieved before 2060.
  • 34% expect it to arrive after 2060.
  • 21% believe the so-called singularity will never occur.

Even if/when AGI is achieved, there’s a growing consensus that all decisions should ultimately be made by a human to ensure accountability. That means a theoretical article generated by an AI would still be checked by a human before it’s published.

Articles like the one published by The Guardian create unnecessary fear which hinders innovation. Such articles also raise unrealistic expectations about what today’s AI technologies can achieve.

Both outcomes are unhealthy for an emerging technology which has huge long-term potential benefits but requires some realism about what’s actually possible today and in the near future.

(Photo by Roman Kraft on Unsplash)

Two grads recreate OpenAI’s text generator it deemed too dangerous to release
AI News – Tue, 27 Aug 2019 – https://news.deepgeniusai.com/2019/08/27/grads-recreate-openai-text-generator-dangerous-release/

Two graduates have recreated and released a fake-text generator similar to OpenAI’s, which the Elon Musk-founded startup deemed too dangerous to make public.

Unless you’ve been living under a rock, you’ll know the world already has a fake news problem. In the past, at least fake news had to be written by a real person to make it convincing.

OpenAI created an AI which could automatically generate fake stories. Combine fake news with Cambridge Analytica-like targeting and the generally viral nature of social networks, and it’s easy to understand why OpenAI decided not to make its work public.

On Thursday, two recent master’s degree graduates decided to release what they claim is a recreation of OpenAI’s software anyway.

Aaron Gokaslan, 23, and Vanya Cohen, 24, believe their work isn’t yet harmful to society. Many would disagree, but their desire to show the world what’s possible – without being a huge company with large amounts of funding and resources – is nonetheless admirable.

“This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses,” said Cohen to WIRED. “I’ve gotten scores of messages, and most of them have been like, ‘Way to go.’”

That’s not to say their work was easy or particularly cheap: Gokaslan and Cohen used around $50,000 worth of cloud computing from Google. However, cloud computing is becoming cheaper and more powerful each year.

OpenAI continues to maintain its stance that such work is better off not being in the public domain until more safeguards against fake news can be put in place.

Social networks have come under pressure from governments, particularly in the West, to do more to counter fake news and disinformation. Russia’s infamous “troll farms” are often cited as being used to create disinformation and influence global affairs.

Facebook is seeking to label potential fake news using fact-checking sites like Snopes, in addition to user reports.

Last Tuesday, OpenAI released a report which said it was aware of five other groups that had successfully replicated its own software but all made the decision not to release it.

Gokaslan and Cohen are in talks with OpenAI about their work and the potential societal implications.

