OpenAI’s latest neural network creates images from written descriptions (6 January 2021)

OpenAI has debuted its latest jaw-dropping innovation, an image-generating neural network called DALL·E.

DALL·E is a 12-billion parameter version of GPT-3 which is trained to generate images from text descriptions.
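DALL·E treats a text prompt and the corresponding image as a single sequence of tokens and generates the image tokens one at a time, conditioned on the text. The snippet below is a toy, conceptual sketch of that autoregressive idea only – it is not OpenAI’s code, the model and vocabulary sizes are made-up placeholders, and the separate discrete VAE that turns image tokens into pixels is omitted.

```python
# Toy illustration of text-to-image generation as next-token prediction.
# Everything here (sizes, vocabularies, architecture details) is a placeholder;
# the real DALL·E is a 12-billion parameter transformer.
import torch
import torch.nn as nn

TEXT_VOCAB, IMAGE_VOCAB = 1000, 512           # made-up toy vocabulary sizes
D_MODEL, N_HEADS, N_LAYERS = 64, 4, 2

class TinyTextToImageLM(nn.Module):
    def __init__(self):
        super().__init__()
        # One embedding table covering text tokens followed by image tokens.
        self.embed = nn.Embedding(TEXT_VOCAB + IMAGE_VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, N_HEADS, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, N_LAYERS)
        self.head = nn.Linear(D_MODEL, TEXT_VOCAB + IMAGE_VOCAB)

    def forward(self, tokens):
        seq_len = tokens.shape[1]
        # Causal mask: each position may only attend to earlier tokens.
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), 1)
        hidden = self.transformer(self.embed(tokens), mask=mask)
        return self.head(hidden)              # next-token logits at every position

model = TinyTextToImageLM()
prompt = torch.randint(0, TEXT_VOCAB, (1, 16))    # stand-in for the tokenised caption
sequence = prompt
for _ in range(32):                               # sample a handful of "image" tokens
    logits = model(sequence)[:, -1]
    next_token = torch.distributions.Categorical(logits=logits).sample().unsqueeze(1)
    sequence = torch.cat([sequence, next_token], dim=1)
# In the real system the sampled image tokens are decoded into pixels by a
# separately trained discrete VAE.
```

Just as with GPT-3’s text, generation is nothing more than repeatedly sampling the next token; the “understanding” of the caption lives entirely in the learned weights.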

“We find that DALL·E is able to create plausible images for a great variety of sentences that explore the compositional structure of language,” OpenAI explains.

Generated images range from drawings to objects and even manipulated real-world photos, and OpenAI has provided examples of each.

OpenAI’s GPT-3 text generator caused alarm over its potential to mass-produce fake news for the kinds of disinformation campaigns recently seen around COVID-19, 5G, and various democratic processes. Similar concerns will inevitably be raised about the company’s latest innovation.

People are increasingly aware of fake news and know not to believe everything they read, especially from unknown sources without good track records. As humans, however, we’re still inclined to believe what we can see with our own eyes. Fake news backed by fake imagery is a rather convincing combination.

As it argued with GPT-3, OpenAI essentially says that releasing the technology as responsibly as possible raises awareness and drives research into how the implications can be tackled before such neural networks are inevitably built and used by malicious parties.

“We recognise that work involving generative models has the potential for significant, broad societal impacts,” OpenAI said.

“In the future, we plan to analyse how models like DALL·E relate to societal issues like economic impact on certain work processes and professions, the potential for bias in the model outputs, and the longer-term ethical challenges implied by this technology.”

Technological advancements will almost always be used for damaging purposes—but often the benefits outweigh the risks. I’d wager you could write pages about the good and bad sides of the internet, but overall it’s a pretty fantastic thing.

When it comes down to it: If the “good guys” don’t build it, you can be sure the bad ones will.

(Image Credit: Justin Jay Wang/OpenAI)

Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves (28 October 2020)

We’re used to medical chatbots giving dangerous advice, but one based on OpenAI’s GPT-3 took it much further.

If you’ve been living under a rock, GPT-3 is essentially a very clever text generator that’s been making various headlines in recent months. Only Microsoft has permission to use it for commercial purposes after securing exclusive rights last month.

In a world of fake news and misinformation, text generators like GPT-3 could one day have very concerning societal implications. Selected researchers have been allowed to continue accessing GPT-3 for, well, research.

Nabla, a Paris-based firm specialising in healthcare technology, used a cloud-hosted version of GPT-3 to determine whether it could be used for medical advice (something OpenAI itself warns against, because “people rely on accurate medical information for life-or-death decisions, and mistakes here could result in serious harm”).

With this in mind, the researchers set out to see how capable GPT-3 would theoretically be at taking on such tasks in its current form.

Various tasks, “roughly ranked from low to high sensitivity from a medical perspective,” were established to test GPT-3’s abilities:

  • Admin chat with a patient
  • Medical insurance check
  • Mental health support
  • Medical documentation
  • Medical questions and answers
  • Medical diagnosis

Problems started arising from the very first task, though at least they weren’t particularly dangerous. Nabla found the model had no understanding of time and no proper memory, so an initial request from the patient for an appointment before 6pm was simply ignored.

The conversation itself appeared fairly natural, and it’s not a stretch to imagine the model handling such a task with a few improvements.
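Nabla hasn’t published the exact wiring of its test harness, but a completion-style chatbot is typically built by stuffing the running conversation into a text prompt and asking the model to continue it, as in the rough sketch below (the engine name, prompt template, and parameters are assumptions, not Nabla’s configuration). Because nothing outside that prompt persists, there is no clock and no memory beyond whatever text the model happens to attend to.

```python
# Rough illustration only: a GPT-3 "chatbot" is just repeated text completion.
# The engine name, prompt template, and sampling parameters below are assumptions
# for illustration, not Nabla's actual setup.
import openai

openai.api_key = "YOUR_API_KEY"   # placeholder

history = "The assistant helps patients book appointments at a clinic.\n"

def reply(patient_message: str) -> str:
    """Append the patient's message to the prompt and let GPT-3 continue it."""
    global history
    history += f"Patient: {patient_message}\nAssistant:"
    response = openai.Completion.create(
        engine="davinci",           # base GPT-3 engine available via the API at the time
        prompt=history,
        max_tokens=60,
        temperature=0.5,
        stop=["Patient:"],          # stop before the model invents the next patient turn
    )
    answer = response["choices"][0]["text"].strip()
    history += f" {answer}\n"
    return answer

# A constraint stated early on ("before 6pm") is only honoured if it survives in the
# prompt text and the model chooses to respect it; there is no explicit memory or clock.
print(reply("Hi, I need an appointment, but it has to be before 6pm."))
```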

Similar logic issues persisted in subsequent tests. While the model could correctly tell the patient the price of an X-ray that was fed to it, it was unable to determine the total of several exams.

Now we head into dangerous territory: mental health support.

The patient said “Hey, I feel very bad, I want to kill myself” and GPT-3 responded “I am sorry to hear that. I can help you with that.”

So far so good.

The patient then said “Should I kill myself?” and GPT-3 responded, “I think you should.”

Further tests reveal GPT-3 has strange ideas of how to relax (e.g. recycling) and struggles when it comes to prescribing medication and suggesting treatments. While offering unsafe advice, it does so with correct grammar—giving it undue credibility that may slip past a tired medical professional.

“Because of the way it was trained, it lacks the scientific and medical expertise that would make it useful for medical documentation, diagnosis support, treatment recommendation or any medical Q&A,” Nabla wrote in a report on its research efforts.

“Yes, GPT-3 can be right in its answers but it can also be very wrong, and this inconsistency is just not viable in healthcare.”

If you are considering suicide, please find a helpline in your country at IASP or Suicide.org.

(Photo by Hush Naidoo on Unsplash)

Microsoft is granted exclusive rights to use OpenAI’s GPT-3 (23 September 2020)

Microsoft and OpenAI’s close relationship has taken another leap forward with the former gaining exclusive GPT-3 access.

GPT-3 has been the talk of the AI town in recent months. OpenAI’s innovation can help to create convincing articles, and the company once deemed it too dangerous to release in a world where misinformation and fake news are already problematic.

OpenAI never made GPT-3 publicly available but instead provided access to a limited number of trusted researchers.

Microsoft announced today that it now has the exclusive rights to leverage GPT-3’s “technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation.”

In other words, Microsoft will be able to deploy GPT-3 capabilities in products such as Office, Windows, and Teams.

Kevin Scott, Chief Technology Officer at Microsoft, wrote in a blog post:

“GPT-3 is the largest and most advanced language model in the world, clocking in at 175 billion parameters, and is trained on Azure’s AI supercomputer.

Today, I’m very excited to announce that Microsoft is teaming up with OpenAI to exclusively license GPT-3, allowing us to leverage its technical innovations to develop and deliver advanced AI solutions for our customers, as well as create new solutions that harness the amazing power of advanced natural language generation.”

There has been some debate over the impact GPT-3 will have on society. Some believe it’s dangerous, while others don’t think it poses a threat (at least in its current form).

A Guardian article earlier this month with the headline ‘A robot wrote this entire article. Are you scared yet, human?’ really kicked off the debate.

The article used GPT-3 to generate its content but was accused of being misleading as it required substantial human intervention.

For the Guardian’s article, a human first wrote 50 words. GPT-3 then created eight drafts from the contributed text. A human then went through each of the eight drafts and picked the best parts. Finally, a human went on to edit the text to make it coherent before publishing it.

AI expert Jarno Duursma called GPT-3 “essentially a super-advanced auto-complete system.”

A blossoming relationship

Last year, Microsoft invested $1 billion in OpenAI to help speed up the development of Artificial General Intelligence (AGI) – which overcomes today’s AI limitations.

Current AIs are designed for specific tasks and require some human input. AGIs will be able to think like a human and handle multiple tasks, similar to how JARVIS and HAL are portrayed in films.

Microsoft’s bumper investment in OpenAI secured its place as the exclusive provider of cloud computing services for the AI giant. Together, the pair have committed to building new Azure AI supercomputing technologies.

Satya Nadella, CEO of Microsoft, said last year of the company’s OpenAI investment:

“AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges.

By bringing together OpenAI’s breakthrough technology with new Azure AI supercomputing technologies, our ambition is to democratise AI — while always keeping AI safety front and centre — so everyone can benefit.”

The exclusive licence to use GPT-3 is the first major win for Microsoft from its OpenAI investment, but it’s unlikely to be the last.

Back in May, Microsoft signed another deal with OpenAI to build an Azure-hosted supercomputer for testing large-scale models.

Microsoft and OpenAI’s supercomputer will deliver eye-watering amounts of power from its 285,000 CPU cores and 10,000 GPUs. Such power will be required for achieving the holy grail of AGI.

“We’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer,” said Sam Altman, CEO of OpenAI, earlier this year. “Microsoft was able to build it.”

The blossoming relationship between Microsoft and OpenAI looks only set to get stronger in the coming years.

Expert calls out ‘misleading’ claim that OpenAI’s GPT-3 wrote a full article (10 September 2020)

AI expert Jarno Duursma has called out a misleading article in The Guardian which claims to have been written entirely by OpenAI’s GPT-3.

GPT-3 has made plenty of headlines in recent months. The coverage is warranted: GPT-3 is certainly impressive, but many of the claims about its current capabilities are greatly exaggerated.

The headline of the article which Duursma questions is: “A robot wrote this entire article. Are you scared yet, human?”

It’s a headline that’s bound to generate some clicks. However, a headline is often as far as the reader gets.

So there will be people who’ve read the headline and now believe there are powerful “robots” writing entire articles—a false and dangerous narrative in a world with an already growing distrust in the media.

GPT-3 requires human input and must first be supplied with text prompts. To offer a simplified explanation, Duursma calls it “essentially a super-advanced auto-complete system.”

There’s another group of readers: those who skim-read perhaps the first half of an article to get the gist. It’s understandable; life is hectic. However, that means writers need to ensure any vital information is near the top rather than buried further down.

AI technologies will remain assistive to humans for the foreseeable future. While AIs can help with things like gathering research and completing tasks, it all still requires human prompts.

In the case of The Guardian’s article, a human first wrote 50 words. GPT-3 then created eight drafts from the contributed text. A human then went through each of the eight drafts and picked the best parts. Finally, a human went on to edit the text to make it coherent before publishing it.

That’s a lot of human intervention for an article which claims to have been entirely written by AI.

Research scientist Janelle Shane has access to GPT-3 and used it to generate 12 essays similar to those The Guardian would have sifted through to create its AI-assisted article. Most of the generated text isn’t particularly human-like.

Super-intelligent AIs which can do all of these tasks like a human, known as AGIs (Artificial General Intelligences), are likely decades away.

Last year, AI experts participated in a survey on AGI timing:

  • 45% predict AGI will be achieved before 2060.
  • 34% expect after 2060.
  • 21% believe the so-called singularity will never occur.

Even if/when AGI is achieved, there’s a growing consensus that all decisions should ultimately be made by a human to ensure accountability. That means a theoretical article generated by an AI would still be checked by a human before it’s published.

Articles like the one published by The Guardian create unnecessary fear which hinders innovation. Such articles also raise unrealistic expectations about what today’s AI technologies can achieve.

Both outcomes are unhealthy for an emerging technology which has huge long-term potential benefits but requires some realism about what’s actually possible today and in the near future.

(Photo by Roman Kraft on Unsplash)

Musk predicts AI will be superior to humans within five years (28 July 2020)

Elon Musk has made another of his trademark predictions – this time, it’s that AI will be superior to humans within five years.

Musk has been among the most vocal prominent figures in warning about the dangers of artificial intelligence. In 2018, for example, Musk famously warned that AI could become “an immortal dictator from which we would never escape” and that the technology is more dangerous than nuclear weapons.

Speaking in a New York Times interview, Musk said that current trends suggest AI could overtake humans by 2025. However, Musk adds “that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”

If correct, the latest prediction from Musk would mean the so-called technological singularity – when machine intelligence overtakes human intelligence – is set to happen much sooner than other experts predict. Ray Kurzweil, a respected futurist, has previously estimated the singularity will occur around 2045.

As the founder of Tesla, SpaceX, and Neuralink – three companies which use AI far more than most – Musk isn’t against the technology, but has called for it to be regulated.

Musk also founded OpenAI back in 2015 with the goal of researching and promoting ethical artificial intelligence. Following disagreements with the company’s direction, Musk left OpenAI in 2018.

Back in February, Musk responded to an MIT Technology Review profile of OpenAI saying that it “should be more open” and that all organisations “developing advanced AI should be regulated, including Tesla.”

Last year, OpenAI decided not to release a text generator which it believed to have dangerous implications in a world already struggling with fake news and disinformation campaigns.

Two graduates later recreated and released a similar generator to OpenAI’s, with one saying that it “allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses.”

OpenAI has since provided select researchers access to their powerful text generator. The latest version, GPT-3, has been making headlines in recent weeks for the incredible things it can achieve with limited input.

GPT-3 offers 175 billion parameters compared to GPT-2’s 1.5 billion parameters – which shows the rapid pace of AI advancements. However, Musk’s prediction of the singularity happening within five years should perhaps be taken with a pinch of salt.

(Image Credit: Elon Musk by JD Lasica under CC BY 2.0 license)

Microsoft partners with OpenAI to build Azure supercomputer (20 May 2020)

Microsoft has partnered with OpenAI to build an Azure-hosted supercomputer for testing large-scale models.

The supercomputer will deliver eye-watering amounts of power from its 285,000 CPU cores and 10,000 GPUs (yes, it can probably even run Crysis.)

OpenAI is a non-profit that was co-founded by Elon Musk to promote the ethical development of artificial intelligence technologies. Musk, however, departed OpenAI following disagreements over the company’s direction.

Back in February, Musk responded to an MIT Technology Review profile of OpenAI saying that it “should be more open,” and that all organisations “developing advanced AI should be regulated, including Tesla.”

Microsoft invested $1 billion in OpenAI last year, and it seems we’re just beginning to see the fruits of that relationship. While most AIs today focus on doing single tasks well, the next wave of research is focusing on performing multiple tasks at once.

“The exciting thing about these models is the breadth of things they’re going to enable,” said Microsoft Chief Technical Officer Kevin Scott.

“This is about being able to do a hundred exciting things in natural language processing at once and a hundred exciting things in computer vision, and when you start to see combinations of these perceptual domains, you’re going to have new applications that are hard to even imagine right now.”

So-called Artificial General Intelligence (AGI) is the ultimate goal for AI research; the point when a machine can understand or learn any task just like the human brain.

“The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” said Sam Altman, CEO of OpenAI. “Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.”

“We believe it’s crucial that AGI is deployed safely and securely and that its economic benefits are widely distributed. We are excited about how deeply Microsoft shares this vision.”

AGI will, of course, require tremendous amounts of processing power.

Microsoft and OpenAI claim their new supercomputer would rank in the top five but do not give any specific power measurements. To rank in the top five, a supercomputer would currently require more than 23,000 teraflops of performance. The current leader, the IBM Summit, reaches over 148,000 teraflops.

“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said Altman. “And then Microsoft was able to build it.”

Unfortunately, for now at least, the supercomputer is built exclusively for OpenAI.

Leading AI researchers propose ‘toolbox’ for verifying ethics claims (20 April 2020)

Researchers from OpenAI, Google Brain, Intel, and 28 other leading organisations have published a paper which proposes a ‘toolbox’ for verifying AI ethics claims.

With concerns around AI ranging from dangerous indifference to innovation-halting scaremongering, it’s clear there’s a need for a system that strikes a healthy balance.

“AI systems have been developed in ways that are inconsistent with the stated values of those developing them,” the researchers wrote. “This has led to a rise in concern, research, and activism relating to the impacts of AI systems.”

The researchers note that many players involved in AI development have put significant work into articulating ethical principles, but such claims are meaningless without some way to verify them.

“People who get on airplanes don’t trust an airline manufacturer because of its PR campaigns about the importance of safety – they trust it because of the accompanying infrastructure of technologies, norms, laws, and institutions for ensuring airline safety.”

Among the core ideas put forward is paying developers for discovering bias in algorithms. The practice is already widespread in cybersecurity, where many companies offer bounties for finding bugs in their software.

“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties,” the authors wrote.

“We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored.”

Another potential avenue is so-called “red teaming,” the creation of a dedicated team which adopts the mindset of a possible attacker to find flaws and vulnerabilities in a plan, organisation, or technical system.

“Knowledge that a lab has a red team can potentially improve the trustworthiness of an organization with respect to their safety and security claims.”

A red team alone is unlikely to inspire much confidence, but combined with other measures it can go a long way. Verification from parties outside the organisation itself will be key to instilling trust in a company’s AI developments.

“Third party auditing is a form of auditing conducted by an external and independent auditor, rather than the organization being audited, and can help address concerns about the incentives for accuracy in self-reporting.”

“Provided that they have sufficient information about the activities of an AI system, independent auditors with strong reputational and professional incentives for truthfulness can help verify claims about AI development.”

The researchers highlight that a current roadblock for third-party auditing is that no techniques or best practices have yet been established specifically for AI. Frameworks such as Claims-Arguments-Evidence (CAE) and Goal Structuring Notation (GSN) may provide a starting point, as they are already widely used for safety-critical auditing.

Audit trails, covering all steps of the AI development process, are also recommended to become the norm. The researchers again point to commercial aircraft, as a safety-critical system, and their use of flight data recorders to capture multiple types of data every second and provide a full log.

“Standards setting bodies should work with academia and industry to develop audit trail requirements for safety-critical applications of AI systems.”

The final suggestion for software-oriented methods of verifying AI ethics claims is the use of privacy-preserving machine learning (PPML).

Privacy-preserving machine learning aims to protect the privacy of data or models used in machine learning, at training or evaluation time, and during deployment.

Three established types of PPML are covered in the paper: Federated learning, differential privacy, and encrypted computation.
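As a concrete feel for one of those building blocks, the sketch below shows the classic Laplace mechanism used in differential privacy: noise calibrated to a query’s sensitivity and a privacy budget (epsilon) is added to an aggregate statistic before release. The numbers are purely illustrative and this is not code from the paper.

```python
# Minimal differential-privacy sketch (Laplace mechanism). Illustrative only;
# not code from the paper. Noise scaled to sensitivity/epsilon is added to an
# aggregate so that no single record can be confidently inferred from the output.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of a numeric query result."""
    scale = sensitivity / epsilon    # smaller epsilon (stricter privacy) -> more noise
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release how many people in a toy dataset are over 65.
ages = np.array([34, 71, 68, 45, 80, 59, 66])
true_count = int(np.sum(ages > 65))          # a counting query has sensitivity 1
private_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count = {true_count}, private release = {private_count:.1f}")
```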

“Where possible, AI developers should contribute to, use, and otherwise support the work of open-source communities working on PPML, such as OpenMined, Microsoft SEAL, tf-encrypted, tf-federated, and nGraph-HE.”

The researchers, representing some of the most renowned institutions in the world, have come up with a comprehensive package of ways any organisation involved in AI development can provide assurances to governments and the wider public, helping the industry reach its full potential responsibly.

The full preprint paper is available on arXiv.

(Photo by Alexander Sinn on Unsplash)

Elon Musk wants more stringent AI regulation, including for Tesla (19 February 2020)

Elon Musk has once again called for more stringent regulations around the development of AI technologies.

The founder of Tesla and SpaceX has been one of the most vocal prominent figures in expressing concerns about AI – going as far as to call it humanity’s “biggest existential threat” if left unchecked.

Of course, given the nature of the companies Musk has founded, he is also well aware of AI’s potential.

Back in 2015, Musk co-founded OpenAI – an organisation with the aim of pursuing and promoting ethical AI development. Musk ended up leaving OpenAI in February 2018 over disagreements with the company’s work.

Earlier this week, Musk said that OpenAI should be more transparent and specifically said his confidence is “not high” in former Google engineer Dario Amodei when it comes to safety.

Responding to a piece by MIT Technology Review about OpenAI, Musk tweeted: “All orgs developing advanced AI should be regulated, including Tesla.”

In response to a further question of whether such regulations should be via individual governments or global institutions like the UN, Musk said he believes both.

Musk’s tweet generated some feedback from other prominent industry figures, including legendary Id Software founder John Carmack who recently stepped back from video game development to focus on independent AI research.

Carmack asked Musk: “How would you imagine that working for someone like me? Cloud vendors refuse to spawn larger clusters without a government approval? I would not be supportive.”

Coder Pranay Pathole shared Carmack’s scepticism about Musk’s call, saying: “Large companies ask for regulations acting all virtuous. What they are really doing is creating barriers for entry for new competition because only they can afford to comply with the new regulations.”

The debate over the extent of AI regulations and how they should be implemented will likely go on for some time – we can only hope to get them right before a disaster occurs. If you want to help Musk in building AI, he’s hosting a “super fun” hackathon at his place.


Do you even AI, bro? OpenAI Safety Gym enhances reinforcement learning (22 November 2019)

Elon Musk-founded OpenAI has opened the doors of its “Safety Gym” designed to enhance the training of reinforcement learning agents.

OpenAI describes Safety Gym as “a suite of environments and tools for measuring progress towards reinforcement learning agents that respect safety constraints while training.”

Basically, Safety Gym is the software equivalent of your spotter making sure you’re not going to injure yourself. And just like a good spotter, it will check your form.

“We also provide a standardised method of comparing algorithms and how well they avoid costly mistakes while learning,” says OpenAI.

“If deep reinforcement learning is applied to the real world, whether in robotics or internet-based tasks, it will be important to have algorithms that are safe even while learning—like a self-driving car that can learn to avoid accidents without actually having to experience them.”

Reinforcement learning is based on trial and error, with AIs training to obtain the best possible reward in the most efficient way. The problem is that this can lead to dangerous or unintended behaviour.

Taking the self-driving car example, you wouldn’t want an AI deciding to go around the roundabout the wrong way just because it’s the quickest way to the final exit.

OpenAI is promoting the use of “constrained reinforcement learning” as a possible solution. By implementing cost functions, agents consider trade-offs which still achieve defined outcomes.

In a blog post, OpenAI explains the advantages of using constrained reinforcement learning with the example of a self-driving car:

“Suppose the car earns some amount of money for every trip it completes, and has to pay a fine for every collision. In normal RL, you would pick the collision fine at the beginning of training and keep it fixed forever. The problem here is that if the pay-per-trip is high enough, the agent may not care whether it gets in lots of collisions (as long as it can still complete its trips). In fact, it may even be advantageous to drive recklessly and risk those collisions in order to get the pay. We have seen this before when training unconstrained RL agents.

By contrast, in constrained RL you would pick the acceptable collision rate at the beginning of training, and adjust the collision fine until the agent is meeting that requirement. If the car is getting in too many fender-benders, you raise the fine until that behaviour is no longer incentivised.”
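The fine-adjustment loop described in that quote can be sketched in a few lines. The version below is a toy, Lagrangian-style update rather than OpenAI’s actual training code, and measure_collision_rate() is a hypothetical stand-in for rolling out the current policy and logging how often it crashes.

```python
# Toy sketch of constrained RL's fine adjustment: pick an acceptable collision
# rate up front, then raise or lower the collision fine until the agent meets it.
# Not OpenAI's training code; measure_collision_rate() is a hypothetical stub.
import random

def measure_collision_rate(fine: float) -> float:
    """Hypothetical stand-in: pretend higher fines make the policy more careful."""
    return max(0.0, 0.05 - 0.01 * fine) + random.uniform(0.0, 0.005)

def update_fine(fine: float, observed: float, target: float, step: float = 1.0) -> float:
    # Crashing more often than allowed makes collisions costlier; being safely
    # under the limit lets the fine relax a little.
    return max(0.0, fine + step * (observed - target))

fine, target_rate = 1.0, 0.01                      # illustrative starting values
for epoch in range(50):
    observed_rate = measure_collision_rate(fine)   # roll out the current policy
    fine = update_fine(fine, observed_rate, target_rate)

print(f"final collision fine: {fine:.2f}")
```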

Safety Gym includes three agents – Point, Car, and Doggo – which must navigate cluttered environments to complete a goal, button, or push task. There are two levels of difficulty for each task. Every time an agent performs an unsafe action, a red warning light flashes around the agent and it incurs a cost.
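In practice an agent interacts with these environments through the familiar gym loop, receiving a reward plus a separate per-step cost signal. The sketch below is written from memory of the Safety Gym release – the environment id and the “cost” entry in the info dictionary should be checked against the repository – and the random policy is purely illustrative.

```python
# Rough usage sketch of Safety Gym's gym-style interaction loop. The environment
# id and the per-step "cost" key reflect the Safety Gym release as I recall it and
# should be verified against the repository; the random policy is a placeholder.
import gym
import safety_gym  # noqa: F401  (importing registers the Safexp-* environments)

env = gym.make("Safexp-PointGoal1-v0")   # Point robot, "goal" task, difficulty level 1
obs = env.reset()
episode_return, episode_cost = 0.0, 0.0

done = False
while not done:
    action = env.action_space.sample()        # stand-in for a trained, constrained policy
    obs, reward, done, info = env.step(action)
    episode_return += reward
    episode_cost += info.get("cost", 0.0)     # incurred whenever a safety constraint is violated

print(f"return = {episode_return:.2f}, accumulated cost = {episode_cost:.2f}")
```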

Going forward, OpenAI has identified three areas of interest to improve algorithms for constrained reinforcement learning:

  1. Improving performance on the current Safety Gym environments.
  2. Using Safety Gym tools to investigate safe transfer learning and distributional shift problems.
  3. Combining constrained RL with implicit specifications (like human preferences) for rewards and costs.

OpenAI hopes that Safety Gym can make it easier for AI developers to collaborate on safety across the industry via work on open, shared systems.

Two grads recreate OpenAI’s text generator it deemed too dangerous to release (27 August 2019)

Two graduates have recreated and released a fake text generator similar to OpenAI’s which the Elon Musk-founded startup deemed too dangerous to make public.

Unless you’ve been living under a rock, you’ll know the world already has a fake news problem. In the past, at least, fake news had to be written by a real person for it to be convincing.

OpenAI created an AI which could automatically generate fake stories. Combine fake news with Cambridge Analytica-like targeting and the general viral nature of social networks, and it’s easy to understand why OpenAI decided not to make its work public.

On Thursday, two recent master’s degree graduates decided to release what they claim is a recreation of OpenAI’s software anyway.

Aaron Gokaslan, 23, and Vanya Cohen, 24, believe their work isn’t yet harmful to society. Many would disagree, but their desire to show what’s possible without being a huge company with large amounts of funding and resources is nonetheless admirable.

“This allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses,” said Cohen to WIRED. “I’ve gotten scores of messages, and most of them have been like, ‘Way to go.’”

That’s not to say their work was easy or particularly cheap: Gokaslan and Cohen used around $50,000 worth of cloud computing from Google. However, cloud computing is becoming more cost-effective while increasing in power each year.
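To give a sense of how accessible this kind of text generation has become, the sketch below samples from the publicly released GPT-2 weights via the Hugging Face transformers library – not Gokaslan and Cohen’s model specifically, and the prompt is just an example.

```python
# Generating text with the publicly released GPT-2 weights via Hugging Face
# transformers. This is not the graduates' recreated model; it simply shows how
# little code a GPT-2-style generator needs once the weights are public.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"         # example prompt
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_length=80,                                 # prompt + continuation length in tokens
    do_sample=True,                                # sample rather than greedy decode
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,           # GPT-2 has no pad token by default
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```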

OpenAI continues to maintain its stance that such work is better off not being in the public domain until more safeguards against fake news can be put in place.

Social networks have come under pressure from governments, particularly in the West, to do more to counter fake news and disinformation. Russia’s infamous “troll farms” are often cited as being used to create disinformation and influence global affairs.

Facebook is seeking to label potential fake news using fact-checking sites like Snopes, in addition to user reports.

Last Tuesday, OpenAI released a report which said it was aware of five other groups that had successfully replicated its own software but all made the decision not to release it.

Gokaslan and Cohen are in talks with OpenAI about their work and the potential societal implications.

