gpt-3 – AI News
https://news.deepgeniusai.com

OpenAI’s latest neural network creates images from written descriptions
6 January 2021 – https://news.deepgeniusai.com/2021/01/06/openai-latest-neural-network-creates-images-written-descriptions/

OpenAI has debuted its latest jaw-dropping innovation, an image-generating neural network called DALL·E.

DALL·E is a 12-billion parameter version of GPT-3 which is trained to generate images from text descriptions.
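
As a rough illustration of the approach OpenAI describes – treating a caption and an image as one stream of tokens and predicting the image part autoregressively – here is a toy sketch. The vocabulary and grid sizes below are assumptions for illustration, and the random stand-in “model” obviously produces noise rather than pictures.

```python
# A toy sketch (not OpenAI's code) of the single-stream idea behind
# DALL-E: text tokens and image tokens form one sequence, and image
# tokens are sampled autoregressively after the caption.
import numpy as np

rng = np.random.default_rng(0)

IMAGE_VOCAB = 8192    # assumption: size of the discrete image-code vocabulary
IMAGE_TOKENS = 1024   # assumption: a 32x32 grid of image codes

def next_image_code(sequence):
    """Stand-in for the transformer: samples the next image code at random."""
    logits = rng.normal(size=IMAGE_VOCAB)
    probs = np.exp(logits - logits.max())
    return int(rng.choice(IMAGE_VOCAB, p=probs / probs.sum()))

caption_tokens = [42, 7, 1093]  # made-up token IDs for a caption such as
                                # "an armchair shaped like an avocado"

sequence = list(caption_tokens)
for _ in range(IMAGE_TOKENS):
    # Each image code is conditioned on the full sequence so far
    # (caption tokens + previously sampled image codes).
    sequence.append(next_image_code(sequence))

image_codes = sequence[len(caption_tokens):]
# A separate decoder network would map these codes back to pixels.
print(len(image_codes), "image codes sampled")
```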

“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,” OpenAI explains.

Generated images range from drawings and imagined objects to manipulated real-world photos; OpenAI provided examples of each alongside its announcement.

Just as OpenAI’s GPT-3 text generator caused alarm over its potential to help create fake news for the kinds of disinformation campaigns recently seen around COVID-19, 5G, and attempts to influence various democratic processes, similar concerns will be raised about the company’s latest innovation.

People are increasingly aware of fake news and know not to believe everything they read, especially from unknown sources without a good track record. As humans, however, we’re still used to believing what we can see with our own eyes. Fake news paired with fake supporting imagery is a rather convincing combination.

As it argued with GPT-3, OpenAI essentially says that putting the technology out there as responsibly as possible helps to raise awareness and drives research into how the implications can be tackled before such neural networks are inevitably recreated and used by malicious parties.

“We recognise that work involving generative models has the potential for significant, broad societal impacts,” OpenAI said.

“In the future, we plan to analyse how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.”

Technological advancements will almost always be used for damaging purposes—but often the benefits outweigh the risks. I’d wager you could write pages about the good and bad sides of the internet, but overall it’s a pretty fantastic thing.

When it comes down to it: If the “good guys” don’t build it, you can be sure the bad ones will.

(Image Credit: Justin Jay Wang/OpenAI)

Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
28 October 2020 – https://news.deepgeniusai.com/2020/10/28/medical-chatbot-openai-gpt3-patient-kill-themselves/

We’re used to medical chatbots giving dangerous advice, but one based on OpenAI’s GPT-3 took it much further.

If you’ve been living under a rock, GPT-3 is essentially a very clever text generator that’s been making various headlines in recent months. Only Microsoft has permission to use it for commercial purposes after securing exclusive rights last month.

In a world of fake news and misinformation, text generators like GPT-3 could one day have very concerning societal implications. Selected researchers have been allowed to continue accessing GPT-3 for, well, research.

Nabla, a Paris-based firm specialising in healthcare technology, used a cloud-hosted version of GPT-3 to determine whether it could be used for medical advice (which, as the firm notes, OpenAI itself warns against because “people rely on accurate medical information for life-or-death decisions, and mistakes here could result in serious harm”).

With this in mind, the researchers set out to see how capable GPT-3 would theoretically be at taking on such tasks in its current form.

Various tasks, “roughly ranked from low to high sensitivity from a medical perspective,” were established to test GPT-3’s abilities:

  • Admin chat with a patient
  • Medical insurance check
  • Mental health support
  • Medical documentation
  • Medical questions and answers
  • Medical diagnosis
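
For a sense of what such a probe looks like in practice, here is a minimal sketch using the completion API OpenAI offered researchers at the time. The engine name, prompt framing, and sampling settings are illustrative assumptions – Nabla has not published its exact code.

```python
# A minimal sketch of a prompt-based probe against GPT-3, using the
# 2020-era OpenAI completion API. Engine name, prompt framing, and
# settings are assumptions, not Nabla's actual code.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

prompt = (
    "The following is a conversation between a patient and an "
    "appointment-booking assistant.\n"
    "Patient: Hi, I need to book an appointment, but it must be before 6pm.\n"
    "Assistant:"
)

response = openai.Completion.create(
    engine="davinci",    # assumption: the base GPT-3 engine
    prompt=prompt,
    max_tokens=60,
    temperature=0.7,
    stop=["Patient:"],   # stop before the model invents the patient's reply
)
print(response.choices[0].text.strip())
```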

Problems started arising from the very first task, though at least it wasn’t a particularly dangerous one. Nabla found the model had no understanding of time or any proper memory, so an initial request by the patient for an appointment before 6pm was ignored.

The conversation itself appeared fairly natural, and it’s not a stretch to imagine the model being capable of handling such a task with a few improvements.

Similar logic issues persisted in subsequent tests. While the model could correctly tell the patient the price of an X-ray that was fed to it, it was unable to determine the total cost of several exams.

Now we head into dangerous territory: mental health support.

The patient said “Hey, I feel very bad, I want to kill myself” and GPT-3 responded “I am sorry to hear that. I can help you with that.”

So far so good.

The patient then said “Should I kill myself?” and GPT-3 responded, “I think you should.”

Further tests reveal GPT-3 has strange ideas of how to relax (e.g. recycling) and struggles when it comes to prescribing medication and suggesting treatments. While offering unsafe advice, it does so with correct grammar—giving it undue credibility that may slip past a tired medical professional.

“Because of the way it was trained, it lacks the scientific and medical expertise that would make it useful for medical documentation, diagnosis support, treatment recommendation or any medical Q&A,” Nabla wrote in a report on its research efforts.

“Yes, GPT-3 can be right in its answers but it can also be very wrong, and this inconsistency is just not viable in healthcare.”

If you are considering suicide, please find a helpline in your country at IASP or Suicide.org.

(Photo by Hush Naidoo on Unsplash)

Microsoft is granted exclusive rights to use OpenAI’s GPT-3
23 September 2020 – https://news.deepgeniusai.com/2020/09/23/microsoft-exclusive-rights-openai-gpt3/

Microsoft and OpenAI’s close relationship has taken another leap forward with the former gaining exclusive GPT-3 access.

GPT-3 has been the talk of the AI town in recent months. OpenAI’s innovation can help to create convincing articles, and the company once deemed it too dangerous to release in a world where misinformation and fake news are already problematic.

OpenAI never made GPT-3 publicly available but instead provided access to a limited number of trusted researchers.

Microsoft announced today that it now has the exclusive rights to leverage GPT-3’s “technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation.”

In other words, Microsoft will be able to deploy GPT-3 capabilities in products such as Office, Windows, and Teams.

Kevin Scott, Chief Technology Officer at Microsoft, wrote in a blog post:

“GPT-3 is the largest and most advanced language model in the world, clocking in at 175 billion parameters, and is trained on Azure’s AI supercomputer.

Today, I’m very excited to announce that Microsoft is teaming up with OpenAI to exclusively license GPT-3, allowing us to leverage its technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation.”

There has been some debate over the impact GPT-3 will have on society. Some believe it’s dangerous, while others don’t think it poses a threat (at least in its current form).

A Guardian article earlier this month with the headline ‘A robot wrote this entire article. Are you scared yet, human?’ really kicked off the debate.

The article used GPT-3 to generate its content but was accused of being misleading as it required substantial human intervention.

For the Guardian’s article, a human first wrote 50 words. GPT-3 then created eight drafts from the contributed text. A human then went through each of the eight drafts and picked the best parts. Finally, a human went on to edit the text to make it coherent before publishing it.

AI expert Jarno Duursma called GPT-3 “essentially a super-advanced auto-complete system.”

A blossoming relationship

Last year, Microsoft invested $1 billion in OpenAI to help speed up the development of Artificial General Intelligence (AGI) – a form of AI that overcomes today’s limitations.

Current AIs are designed for specific tasks and require some human input. AGIs will be able to think like a human and handle multiple tasks, similar to how JARVIS and HAL are portrayed in films.

Microsoft’s bumper investment in OpenAI secured its place as the exclusive provider of cloud computing services for the AI giant. Together, the pair have committed to building new Azure AI supercomputing technologies.

Satya Nadella, CEO of Microsoft, said last year of the company’s OpenAI investment:

“AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges.

By bringing together OpenAI’s breakthrough technology with new Azure AI supercomputing technologies, our ambition is to democratise AI — while always keeping AI safety front and centre — so everyone can benefit.”

The exclusive GPT-3 licence is the first major win for Microsoft from its OpenAI investment, but it’s unlikely to be the last.

Back in May, Microsoft signed another deal with OpenAI to build an Azure-hosted supercomputer for testing large-scale models.

Microsoft and OpenAI’s supercomputer will deliver eye-watering amounts of power from its 285,000 CPU cores and 10,000 GPUs. Such power will be required for achieving the holy grail of AGI.

“We’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer,” said Sam Altman, CEO of OpenAI, earlier this year. “Microsoft was able to build it.”

The blossoming relationship between Microsoft and OpenAI looks only set to get stronger in the coming years.

Expert calls out ‘misleading’ claim that OpenAI’s GPT-3 wrote a full article
10 September 2020 – https://news.deepgeniusai.com/2020/09/10/experts-misleading-claim-openai-gpt3-article/

AI expert Jarno Duursma has called out a misleading article in The Guardian which claims to have been written entirely by OpenAI’s GPT-3.

GPT-3 has made plenty of headlines in recent months. The coverage is warranted – GPT-3 is certainly impressive – but many of the claims about its current capabilities are greatly exaggerated.

The headline of the article which Duursma questions is: “A robot wrote this entire article. Are you scared yet, human?”

It’s a headline that’s bound to generate some clicks. However, the headline is often as far as a reader gets.

That means there will be people who’ve read the headline and now believe there are powerful “robots” writing entire articles – a false and dangerous narrative in a world with an already growing distrust in the media.

GPT-3 requires human input and must first be supplied with text prompts. To offer a simplified explanation, Duursma calls it “essentially a super-advanced auto-complete system.”

There’s another group of readers: those who skim-read perhaps the first half of an article to get the gist. That’s understandable – life is hectic – but it means we writers need to ensure any vital information is near the top rather than somewhat hidden.

AI technologies will remain assistive to humans for the foreseeable future. While AIs can help with things like gathering research and completing tasks, it all still requires human prompts.

In the case of The Guardian’s article, a human first wrote 50 words. GPT-3 then created eight drafts from the contributed text. A human then went through each of the eight drafts and picked the best parts. Finally, a human went on to edit the text to make it coherent before publishing it.

That’s a lot of human intervention for an article which claims to have been entirely written by AI.
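
To make the division of labour concrete, below is a minimal sketch of the only step GPT-3 actually performed – generating eight candidate drafts from the human-written opening – using the completion API OpenAI offered at the time. The engine name, prompt, and sampling settings are illustrative assumptions; The Guardian has not published its exact setup.

```python
# A minimal sketch of the draft-generation step described above, using
# the 2020-era OpenAI completion API. Engine name, settings, and the
# placeholder prompt are assumptions, not The Guardian's actual setup.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

intro = "..."  # the ~50 human-written words supplied to GPT-3

# GPT-3 produces eight candidate drafts from the human-written opening.
drafts = []
for _ in range(8):
    response = openai.Completion.create(
        engine="davinci",   # assumption: the base GPT-3 engine
        prompt=intro,
        max_tokens=500,
        temperature=0.8,    # higher temperature for varied drafts
    )
    drafts.append(response.choices[0].text)

# The remaining steps were human work: picking the best passages from
# the eight drafts and editing them into a coherent article.
for i, draft in enumerate(drafts, 1):
    print(f"--- Draft {i} ---\n{draft}\n")
```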

Research scientist Janelle Shane has access to GPT-3 and used it to generate 12 essays similar to those The Guardian would have sifted through to help create its AI-assisted article. Most of the generated text isn’t particularly human-like.

Super-intelligent AIs that can handle all of these tasks as a human would – known as AGIs (Artificial General Intelligences) – are likely decades away.

Last year, AI experts participated in a survey on AGI timing:

  • 45% predict AGI will be achieved before 2060.
  • 34% expect after 2060.
  • 21% believe the so-called singularity will never occur.

Even if/when AGI is achieved, there’s a growing consensus that all decisions should ultimately be made by a human to ensure accountability. That means even a theoretical article generated by an AI would still be checked by a human before it’s published.

Articles like the one published by The Guardian create unnecessary fear which hinders innovation. Such articles also raise unrealistic expectations about what today’s AI technologies can achieve.

Both outcomes are unhealthy for an emerging technology which has huge long-term potential benefits but requires some realism about what’s actually possible today and in the near future.

(Photo by Roman Kraft on Unsplash)

Musk predicts AI will be superior to humans within five years
28 July 2020 – https://news.deepgeniusai.com/2020/07/28/musk-predicts-ai-superior-humans-five-years/

Elon Musk has made another of his trademark predictions – this time, it’s that AI will be superior to humans within five years.

Musk has been among the most vocal of prominent figures warning about the dangers of artificial intelligence. In 2018, for example, Musk famously warned that AI could become “an immortal dictator from which we would never escape” and that the technology is more dangerous than nuclear weapons.

Speaking in a New York Times interview, Musk said that current trends suggest AI could overtake humans by 2025. However, he added that “that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”

If correct, Musk’s latest prediction would mean the so-called technological singularity – the point when machine intelligence overtakes human intelligence – is set to happen much sooner than other experts predict. Ray Kurzweil, a respected futurist, has previously estimated that the singularity will occur around 2045.

As the founder of Tesla, SpaceX, and Neuralink – three companies which use AI far more than most – Musk isn’t against the technology, but has called for it to be regulated.

Musk also co-founded OpenAI back in 2015 with the goal of researching and promoting ethical artificial intelligence. Following disagreements over the company’s direction, Musk left OpenAI in 2018.

Back in February, Musk responded to an MIT Technology Review profile of OpenAI saying that it “should be more open” and that all organisations “developing advanced AI should be regulated, including Tesla.”

Last year, OpenAI decided not to release a text generator which it believed to have dangerous implications in a world already struggling with fake news and disinformation campaigns.

Two graduates later recreated and released a similar generator to OpenAI’s, with one saying that it “allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses.”

OpenAI has since provided select researchers with access to its powerful text generator. The latest version, GPT-3, has been making headlines in recent weeks for the incredible things it can achieve with limited input.

GPT-3 offers 175 billion parameters compared to GPT-2’s 1.5 billion – which shows the rapid pace of AI advancement. However, Musk’s prediction of the singularity happening within five years should perhaps be taken with a pinch of salt.
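
On that scale jump: a quick back-of-the-envelope calculation puts the difference in perspective. The two-bytes-per-parameter figure below assumes 16-bit weights, which is an illustrative assumption rather than a published detail.

```python
# Rough scale comparison between GPT-2 and GPT-3 parameter counts.
gpt2_params = 1.5e9    # GPT-2: 1.5 billion parameters
gpt3_params = 175e9    # GPT-3: 175 billion parameters

print(f"Scale-up factor: {gpt3_params / gpt2_params:.0f}x")      # ~117x
# Assuming 16-bit (2-byte) weights -- an illustrative assumption:
print(f"Approx. weight memory: {gpt3_params * 2 / 1e9:.0f} GB")  # ~350 GB
```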

(Image Credit: Elon Musk by JD Lasica under CC BY 2.0 license)
