Regulation – AI News

EU human rights agency issues report on AI ethical considerations (14 December 2020)

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI which delves into the ethical considerations which must be made about the technology.

FRA’s report, titled ‘Getting the Future Right’, opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that biases in algorithms can end up automating societal problems such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in some form in almost every industry—and where it isn’t already, it soon will be.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services it could mean one person being granted a loan or mortgage while another, in similar circumstances, is denied.

Without due transparency, these biases could happen without anyone knowing the reasons behind such decisions—it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.
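
One way organisations can check for this kind of skew is to monitor outcomes by group. The sketch below is purely illustrative: the decision data, group labels, and the ‘four-fifths’ threshold are assumptions for the example, not anything prescribed by the FRA.

```python
from collections import defaultdict

# Hypothetical decision log from an automated lending system: (outcome, group).
decisions = [
    ("approved", "group_a"), ("denied", "group_a"), ("approved", "group_a"),
    ("approved", "group_b"), ("denied", "group_b"), ("denied", "group_b"),
]

totals, approvals = defaultdict(int), defaultdict(int)
for outcome, group in decisions:
    totals[group] += 1
    approvals[group] += outcome == "approved"

# Approval (selection) rate per group.
rates = {g: approvals[g] / totals[g] for g in totals}

# Disparate-impact ratio: lowest approval rate divided by highest.
# A common rule of thumb, borrowed from US employment law, flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(f"disparate-impact ratio: {ratio:.2f}")
```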

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions (a minimal sketch of one such explanation follows this list).
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.
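
On the point about explaining decisions, one simple approach for linear scoring models is to report how much each input contributed to an individual outcome. This is a minimal sketch with invented weights and feature names, not a method taken from the FRA’s report.

```python
# Invented weights for a hypothetical linear credit-scoring model.
weights = {"income": 0.6, "existing_debt": -0.8, "years_at_address": 0.2}
bias = -0.1

applicant = {"income": 0.5, "existing_debt": 0.9, "years_at_address": 0.3}

# For a linear model, each feature's contribution is simply weight * value,
# which gives a per-decision explanation a person could challenge.
contributions = {name: weights[name] * applicant[name] for name in weights}
score = bias + sum(contributions.values())

print("decision:", "approved" if score > 0 else "denied", f"(score {score:+.2f})")
for name, value in sorted(contributions.items(), key=lambda item: item[1]):
    print(f"  {name}: {value:+.2f}")
```

Real systems are rarely this simple, but the principle is the same: every automated decision should be decomposable into reasons that a person can understand and contest.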

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns of invasive privacy practices and abusing their market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

CDEI launches a ‘roadmap’ for tackling algorithmic bias (27 November 2020)

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively accurate for white males, but research has consistently shown substantially higher error rates for people with darker skin and for women. The technology is, therefore, less reliable for some parts of society than for others.
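
When match results are logged alongside ground truth and a demographic label, that disparity is straightforward to measure. Here is a minimal sketch using made-up trial records rather than any real benchmark.

```python
# Hypothetical verification trials: (predicted_match, actually_same_person, group).
trials = [
    (True, False, "darker-skinned women"), (True, True, "darker-skinned women"),
    (True, False, "darker-skinned women"), (False, True, "darker-skinned women"),
    (True, True, "lighter-skinned men"), (False, False, "lighter-skinned men"),
    (True, True, "lighter-skinned men"), (False, False, "lighter-skinned men"),
]

# Per-group error rate: the share of trials where the prediction was wrong.
for group in sorted({g for _, _, g in trials}):
    subset = [(p, t) for p, t, g in trials if g == group]
    errors = sum(p != t for p, t in subset)
    print(f"{group}: error rate {errors / len(subset):.2f} over {len(subset)} trials")
```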

In June, Detroit Police Chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when it’s being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of how AI can unfairly impact some parts of society more than others.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Algorithms require transparency. Without it, a business loan or mortgage could be rejected simply because a person was born in a poor neighbourhood. Job applications could be rejected not on a person’s actual skill but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task, but if achieved it would increase fairness by taking human biases out of the equation.

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes, but find that understanding of how to prevent human biases from creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has arguably relied on data to make decisions for longer than any other, using it to determine things like how likely it is that an individual can repay a debt.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing, but noted variance across forces with regard to both usage and the management of ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

Musk predicts AI will be superior to humans within five years (28 July 2020)

Elon Musk has made another of his trademark predictions – this time, it’s that AI will be superior to humans within five years.

Musk has been among the most vocal prominent figures in warning about the dangers of artificial intelligence. In 2018, for example, Musk famously warned that AI could become “an immortal dictator from which we would never escape” and that the technology is more dangerous than nuclear weapons.

Speaking in a New York Times interview, Musk said that current trends suggest AI could overtake humans by 2025. However, Musk adds “that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”

If correct, Musk’s latest prediction would mean the so-called technological singularity – the point at which machine intelligence overtakes human intelligence – is set to happen much sooner than other experts predict. Ray Kurzweil, a respected futurist, has previously estimated that the singularity will occur around 2045.

As the founder of Tesla, SpaceX, and Neuralink – three companies which use AI far more than most – Musk isn’t against the technology, but has called for it to be regulated.

Musk also founded OpenAI back in 2015 with the goal of researching and promoting ethical artificial intelligence. Following disagreements with the company’s direction, Musk left OpenAI in 2018.

Back in February, Musk responded to an MIT Technology Review profile of OpenAI saying that it “should be more open” and that all organisations “developing advanced AI should be regulated, including Tesla.”

Last year, OpenAI decided not to release a text generator which it believed to have dangerous implications in a world already struggling with fake news and disinformation campaigns.

Two graduates later recreated and released a similar generator to OpenAI’s, with one saying that it “allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses.”

OpenAI has since provided select researchers access to their powerful text generator. The latest version, GPT-3, has been making headlines in recent weeks for the incredible things it can achieve with limited input.

GPT-3 offers 175 billion parameters compared to GPT-2’s 1.5 billion parameters – which shows the rapid pace of AI advancements. However, Musk’s prediction of the singularity happening within five years perhaps needs to be taken with a pinch of salt.

(Image Credit: Elon Musk by JD Lasica under CC BY 2.0 license)

UK and Australia launch joint probe into Clearview AI’s mass data scraping (10 July 2020)

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia seem to have a different perspective than Ekeland and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

Baidu ends participation in AI alliance as US-China relations deteriorate (19 June 2020)

Baidu will no longer participate in the Partnership on AI (PAI) alliance amid deteriorating relations between the US and China.

PAI is a US-led alliance which aims to foster the ethical development and deployment of AI technologies. Baidu was the only Chinese member.

The loss of Baidu’s expertise and any representation from China is devastating for PAI. Ethical AI development requires global cooperation to set acceptable standards which help to ensure safety while not limiting innovation.

Baidu has officially cited financial pressures for its decision to exit the alliance.

In a statement, Baidu wrote:

“Baidu shares the vision of the Partnership on AI and is committed to promoting the ethical development of AI technologies. 

We are in discussions about renewing our membership, and remain open to other opportunities to collaborate with industry peers on advancing AI.”

Directors from PAI hope to see Baidu renew its membership of the alliance next year.

Cooperation between American and Chinese firms

Cooperation between American and Chinese firms is getting more difficult as the world’s largest economies continue to implement sanctions on each other.

The US has criticised China for its handling of the coronavirus outbreak, trade practices, its mass imprisonment and alleged torture of Uyghur Muslims in “re-education” camps, and breaking the semi-autonomy of Hong Kong.

In the tech world, much of the focus has been on Chinese telecoms giant Huawei – which the US accuses of being a national security threat. Canada arrested Huawei CFO Meng Wanzhou last year on allegations that she used the company’s subsidiaries to flout US sanctions against Iran. Two Canadian businessmen who were arrested in China shortly after Meng’s detention, in suspected retaliation, were charged with spying by Beijing this week.

An increasing number of Chinese companies, including Huawei, have found themselves being added to an ‘Entity List’ in the US which bans American companies from working with them without explicit permission from the government.

The US added six Chinese AI companies to its Entity List last October, citing their role in alleged human rights violations.

Earlier this week, the US Commerce Department made an exception to Huawei’s inclusion on the Entity List which allows US companies to work with the Chinese giant for the purposes of developing 5G standards. Hopefully, the same will be done for AI companies.

However, on the whole, cooperation between American and Chinese firms is getting more difficult as a result of the political climate. It wouldn’t be surprising to see more cases of companies like Baidu dropping out of well-intentioned alliances such as PAI if sensible resolutions to differences are not sought.

(Photo by Erwan Hesry on Unsplash)

The EU’s privacy watchdog takes aim at Clearview AI’s facial recognition (11 June 2020)

The European Data Protection Board (EDPB) believes use of Clearview AI’s controversial facial recognition system would be illegal.

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

The EDPB has now ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime.”

Furthermore, the watchdog “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI scrapes billions of photos from across the internet for its powerful system, a practice which has come under fire by privacy campaigners. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI last month after calling it a “nightmare scenario” for privacy.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

Backlash over Clearview AI forced the company to announce it will no longer offer its services to private companies. The EU’s ruling will limit Clearview AI’s potential customers even further.

Concerns have grown in recent weeks about facial recognition services amid protests over racial discrimination. Facial recognition services have repeatedly been found to falsely flag minorities, stoking fears they’ll lead to automated racial profiling.

IBM and Amazon have both announced this week they’ll no longer provide facial recognition services to law enforcement and have called on Congress to increase regulation to help ensure future deployments meet ethical standards.

(Photo by Christian Lue on Unsplash)

US Patent Office: AIs cannot be credited as inventors (30 April 2020)

The US Patent and Trademark Office (USPTO) has ruled that an AI cannot be legally credited as an inventor.

AI will assist us mere humans in coming up with new innovations in the years to come. However, the USPTO will not let them take the credit.

The USPTO has rejected two early filings of inventions credited to an AI system called DABUS, which was created by Stephen Thaler.

DABUS devised two inventions: a shape-shifting food container and a new type of emergency flashlight.

The filings were submitted by the Artificial Inventor Project (AIP) last year. AIP’s lawyers argued that Thaler is an expert in building AI systems like DABUS but has no experience in consumer goods and would not have created them himself.

The USPTO concluded that “only natural persons may be named as an inventor in a patent application,” under the current law.

Similar applications by the AIP in the UK and EU were rejected along the same lines by their respective patent authorities.

“If I teach my Ph.D. student and they go on to make a final complex idea, that doesn’t make me an inventor on their patent, so it shouldn’t with a machine,” Ryan Abbott, a professor at the University of Surrey who led a group of legal experts in the AI patent project, told the Wall Street Journal last year.

The case over whether only humans should hold such rights has similarities to the infamous monkey selfie saga where PETA argued that a monkey could own the copyright to a selfie.

The US Copyright Office also ruled in that instance that only photographs taken by humans can be copyrighted and PETA’s case was subsequently dismissed.

(Photo by Jesse Chan on Unsplash)

Leading AI researchers propose ‘toolbox’ for verifying ethics claims (20 April 2020)

Researchers from OpenAI, Google Brain, Intel, and 28 other leading organisations have published a paper which proposes a ‘toolbox’ for verifying AI ethics claims.

With concerns around AI ranging from dangerous indifference to innovation-halting scaremongering, it’s clear there’s a need for a system to achieve a healthy balance.

“AI systems have been developed in ways that are inconsistent with the stated values of those developing them,” the researchers wrote. “This has led to a rise in concern, research, and activism relating to the impacts of AI systems.”

The researchers note that many players involved with AI development have put significant work into articulating ethical principles, but such claims are meaningless without some way to verify them.

“People who get on airplanes don’t trust an airline manufacturer because of its PR campaigns about the importance of safety – they trust it because of the accompanying infrastructure of technologies, norms, laws, and institutions for ensuring airline safety.”

Among the core ideas put forward is paying developers to discover bias in algorithms. Such a practice is already widespread in cybersecurity, with many companies offering bounties for finding bugs in their software.

“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties,” the authors wrote.

“We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored.”

Another potential avenue is so-called “red teaming,” the creation of a dedicated team which adopts the mindset of a possible attacker to find flaws and vulnerabilities in a plan, organisation, or technical system.

“Knowledge that a lab has a red team can potentially improve the trustworthiness of an organization with respect to their safety and security claims.”

A red team alone is unlikely to provide much confidence, but combined with other measures it can go a long way. Verification from parties outside the organisation itself will be key to instilling trust in a company’s AI developments.

“Third party auditing is a form of auditing conducted by an external and independent auditor, rather than the organization being audited, and can help address concerns about the incentives for accuracy in self-reporting.”

“Provided that they have sufficient information about the activities of an AI system, independent auditors with strong reputational and professional incentives for truthfulness can help verify claims about AI development.”

The researchers highlight that a current roadblock with third-party auditing is that no techniques or best practices have yet been established specifically for AI. Frameworks such as Claims-Arguments-Evidence (CAE) and Goal Structuring Notation (GSN) may provide a starting place, as they are already widely used for safety-critical auditing.

Audit trails, covering all steps of the AI development process, are also recommended to become the norm. The researchers again point to commercial aircraft, as a safety-critical system, and their use of flight data recorders to capture multiple types of data every second and provide a full log.
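
In software terms, the simplest version of such a trail is an append-only log with one structured record per automated decision. The field names below are illustrative assumptions, not a schema proposed in the paper.

```python
import json
import time
import uuid

def log_decision(logfile, model_version, inputs, output):
    """Append one timestamped, uniquely identified record per decision."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    # JSON Lines, opened in append mode: records are added, never rewritten.
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "credit-model-1.3", {"income": 0.5}, "denied")
```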

“Standards setting bodies should work with academia and industry to develop audit trail requirements for safety-critical applications of AI systems.”

The final suggestion for software-oriented methods of verifying AI ethics claims is the use of privacy-preserving machine learning (PPML).

Privacy-preserving machine learning aims to protect the privacy of data or models used in machine learning, at training or evaluation time, and during deployment.

Three established types of PPML are covered in the paper: Federated learning, differential privacy, and encrypted computation.

“Where possible, AI developers should contribute to, use, and otherwise support the work of open-source communities working on PPML, such as OpenMined, Microsoft SEAL, tf-encrypted, tf-federated, and nGraph-HE.”
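
Of the three, differential privacy is the easiest to illustrate in a few lines: noise calibrated to a query’s sensitivity is added to its result, bounding how much any single person’s data can affect the output. This is a minimal sketch of the standard Laplace mechanism, not code from the paper or from the libraries named above.

```python
import numpy as np

def private_count(values, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1 (adding or removing one person changes it
    by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 47, 31]
print(private_count(ages, lambda age: age > 40, epsilon=0.5))
```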

The researchers, representing some of the most renowned institutions in the world, have come up with a comprehensive package of ways any organisation involved with AI development can provide assurance to governments and the wider public, helping to ensure the industry reaches its full potential responsibly.

You can find the full preprint paper on arXiv here (PDF).

(Photo by Alexander Sinn on Unsplash)

Elon Musk wants more stringent AI regulation, including for Tesla (19 February 2020)

Elon Musk has once again called for more stringent regulations around the development of AI technologies.

The founder of Tesla and SpaceX has been one of the most vocal prominent figures in expressing concerns about AI – going as far as to call it humanity’s “biggest existential threat” if left unchecked.

Of course, given the nature of the companies Musk has founded, he is also well aware of AI’s potential.

Back in 2015, Musk co-founded OpenAI – an organisation established with the aim of pursuing and promoting ethical AI development. Musk ended up leaving OpenAI in February 2018 over disagreements with the company’s direction.

Earlier this week, Musk said that OpenAI should be more transparent, adding that his confidence in former Google engineer Dario Amodei is "not high" when it comes to safety.

Responding to a piece by MIT Technology Review about OpenAI, Musk tweeted: “All orgs developing advanced AI should be regulated, including Tesla.”

In response to a further question of whether such regulations should be via individual governments or global institutions like the UN, Musk said he believes both.

Musk’s tweet generated some feedback from other prominent industry figures, including legendary Id Software founder John Carmack who recently stepped back from video game development to focus on independent AI research.

Carmack asked Musk: “How would you imagine that working for someone like me? Cloud vendors refuse to spawn larger clusters without a government approval? I would not be supportive.”

Coder Pranay Pathole shared Carmack’s scepticism about Musk’s call, saying: “Large companies ask for regulations acting all virtuous. What they are really doing is creating barriers for entry for new competition because only they can afford to comply with the new regulations.”

The debate over the extent of AI regulations and how they should be implemented will likely go on for some time – we can only hope to get them right before a disaster occurs. If you want to help Musk in building AI, he’s hosting a “super fun” hackathon at his place.

Google CEO: We need sensible AI regulation that does not limit its potential (21 January 2020)

Google CEO Sundar Pichai has called for sensible AI regulation that does not limit the huge potential benefits to society.

Writing in an FT editorial, Pichai said: “…there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to.”

Few people debate the need for AI regulation, but there are differing opinions when it comes to how much. Overregulation limits innovation, while a lack of regulation can pose serious dangers – even existential ones, depending on who you listen to.

Pichai says AI is “one of the most promising new technologies” that has “the potential to improve billions of lives,” but warns of the possible risks if development is left unchecked.

“History is full of examples of how technology’s virtues aren’t guaranteed,” Pichai wrote. “The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread.”

Google is one of the companies which people have voiced concerns about given its reach and questionable record when it comes to user privacy. Pichai’s words today will offer some comfort that Google’s leadership wants sensible regulation to guide its efforts.

So far, Google has shown how AI can be used for good. A study by Google, published in science journal Nature, showed how its AI model was able to spot breast cancer in mammograms with “greater accuracy, fewer false positives, and fewer false negatives than experts.”

Governments around the world are beginning to shape AI regulations. The UK, Europe’s leader in AI developments and investments, aims to focus on promoting ethical AI rather than attempt to match superpowers like China and the US in other areas.

In a report last year, the Select Committee on Artificial Intelligence recommended the UK capitalises on its “particular blend of national assets” to “forge a distinctive role for itself as a pioneer in ethical AI”.

The EU, which the UK leaves at the end of this month, recently published its own comprehensive proposals on AI regulation which many believe are too stringent. The US warned its European allies against overregulation of AI earlier this month.

In a statement released by the Office of Science and Technology Policy, the White House wrote:

“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach.

The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values.”

Pichai refrains from denouncing either the White House’s calls for light AI regulation or the EU’s plans for stringent rules. Instead, he calls only for balancing “potential harms… with social opportunities.”

Google has certainly not been devoid of criticism over its forays into AI. The company was forced to back out of a Pentagon contract called Project Maven in 2018 following backlash over Google building AI technology for deploying and monitoring unmanned aerial vehicles (UAVs).

Following the decision to back out from Project Maven, Pichai outlined Google’s ethical principles when it comes to AI:

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Pichai promised the company “will work to limit potentially harmful or abusive applications” and will block the use of their technology if they “become aware of uses that are inconsistent” with the principles.

Time will tell whether Google will abide by its principles when it comes to AI, but it’s heartening to see Pichai call for sensible regulation to help enforce it across the industry.
