EU human rights agency issues report on AI ethical considerations (14 December 2020)

The European Union’s Fundamental Rights Agency (FRA) has issued a report on AI that delves into the ethical considerations that must be weighed when using the technology.

FRA’s report is titled Getting The Future Right and opens with some of the ways AI is already making lives better—such as helping with cancer diagnosis, and even predicting where burglaries are likely to take place.

“The possibilities seem endless,” writes Michael O’Flaherty, Director of the FRA, in the report’s foreword. “But how can we fully uphold fundamental rights standards when using AI?”

The FRA interviewed over a hundred public administration officials, private company staff, and a diverse range of experts, in a bid to answer that question.

With evidence that algorithmic biases can automate societal harms such as racial profiling, it’s a question that needs answering if the full potential of AI is to be unlocked for the whole of society.

O’Flaherty says:

“AI is not infallible, it is made by people – and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people’s rights both in the development and use of AI.

“We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them.”

AI is being used in some form in almost every industry – and where it isn’t yet, it soon will be.

Biases in AI are more dangerous in some industries than others. Policing is an obvious example, but in areas like financial services it could mean one person being granted a loan or mortgage while another is denied.

Without due transparency, such biased decisions could be made without anyone knowing the reasons behind them – it could simply be because someone grew up in a different neighbourhood. Each automated decision has a very real human impact.

The FRA calls for the EU to:

  • Make sure that AI respects ALL fundamental rights – AI can affect many rights – not just privacy or data protection. It can also discriminate or impede justice. Any future AI legislation has to consider this and create effective safeguards.
  • Guarantee that people can challenge decisions taken by AI – people need to know when AI is used and how it is used, as well as how and where to complain. Organisations using AI need to be able to explain how their systems take decisions.
  • Assess AI before and during its use to reduce negative impacts – private and public organisations should carry out assessments of how AI could harm fundamental rights.
  • Provide more guidance on data protection rules – the EU should further clarify how data protection rules apply to AI. More clarity is also needed on the implications of automated decision-making and the right to human review when AI is used.
  • Assess whether AI discriminates – awareness about the potential for AI to discriminate, and the impact of this, is relatively low. This calls for more research funding to look into the potentially discriminatory effects of AI so Europe can guard against it.
  • Create an effective oversight system – the EU should invest in a more ‘joined-up’ system to hold businesses and public administrations accountable when using AI. Authorities need to ensure that oversight bodies have adequate resources and skills to do the job.

The EU has increased its scrutiny of “big tech” companies like Google in recent years over concerns about invasive privacy practices and abuse of market positions. Last week, AI News reported that Google had controversially fired leading AI ethics researcher Timnit Gebru after she criticised her employer in an email.

Google chief executive Sundar Pichai wrote in a memo: “We need to accept responsibility for the fact that a prominent black, female leader with immense talent left Google unhappily.

“It’s incredibly important to me that our black, women, and under-represented Googlers know that we value you and you do belong at Google.”

Gebru gave an interview to the BBC this week in which she called Google and big tech “institutionally racist”. With that in mind, the calls made in the FRA’s report seem especially important to heed.

You can download a full copy of the FRA’s report here.

(Photo by Guillaume Périgois on Unsplash)

CDEI launches a ‘roadmap’ for tackling algorithmic bias (27 November 2020)

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The analysis was commissioned by the UK government in October 2018 and will receive a formal response.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology, and it has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

“The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

“Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown they are far less accurate for people with darker skin and for women. The error rate is therefore higher when facial recognition is used on some parts of society than on others.

In June, Detroit Police chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when it’s being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of black male Robert Williams due to a facial recognition error.

And that’s just one example of how AI can unfairly impact some parts of society more than others.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Algorithms need transparency. Without it, a business loan or mortgage application could be rejected simply because a person was born in a poor neighbourhood, or a job application turned down not on the applicant’s actual skill but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task, but if achieved it would increase fairness by taking human biases out of the equation.

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

“When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes, but that understanding of how to avoid human biases creeping in is lacking.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”

The financial services industry has arguably relied on data to make decisions – such as determining how likely it is that an individual can repay a debt – for longer than any other.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing, but found variance across forces with regard to both usage and the management of ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

“In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors (a minimal sketch of what such outcome monitoring could look like follows this list).
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.
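
To make the recommendation on outcome monitoring concrete, here is a minimal sketch in Python of a first-pass check. It is illustrative only: the decision-log fields (“group”, “approved”) are hypothetical, and the 0.8 ‘four-fifths’ threshold is a rule of thumb borrowed from US employment-discrimination practice, not a standard set by the CDEI.

    from collections import defaultdict

    def selection_rates(decisions):
        # Approval rate per protected group.
        totals, approved = defaultdict(int), defaultdict(int)
        for d in decisions:
            totals[d["group"]] += 1
            approved[d["group"]] += d["approved"]
        return {g: approved[g] / totals[g] for g in totals}

    # Hypothetical decision log: 1 = approved, 0 = rejected.
    log = [
        {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
        {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
        {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
    ]

    rates = selection_rates(log)
    ratio = min(rates.values()) / max(rates.values())
    print(rates)  # approximately {'A': 0.67, 'B': 0.33}
    print(ratio)  # 0.5 - below the 0.8 rule of thumb, flagging a disparity

A raw ratio like this is only a signal: real monitoring would need to control for legitimate factors before attributing a gap to bias, which is precisely why the report calls for clear guidance on handling protected characteristic data.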

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

The ACLU uncovers the first known wrongful arrest due to AI error (25 June 2020)

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after black male Robert Williams was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma”. Williams was held in a “crowded and filthy” cell overnight without being given any reason.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams’ driver’s license photo.

During an interrogation the day after his arrest, the police admitted that “the computer must have gotten it wrong”. Williams was kept incarcerated until after dark “at which point he was released out the front door, on a cold and rainy January night, where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find child care for the children so that she could come pick him up.”

Speaking to the New York Times, a Detroit police spokesperson said the department “does not make arrests based solely on facial recognition,” and claimed witness interviews and a photo lineup were used.

However, a response from the Wayne County prosecutor’s office confirms the department used facial recognition to identify Williams using the security footage and an eyewitness to the crime was not shown the alleged photo lineup.

In its complaint, the ACLU demands that Detroit police end the use of facial recognition “as the facts of Mr. Williams’ case prove both that the technology is flawed and that DPD investigators are not competent in making use of such technology.”

This week, Boston became the latest city to ban facial recognition technology for municipal use. Boston follows an increasing number of cities – including San Francisco and Oakland in California – which have banned the technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

“Boston just became the latest major city to stop the use of this extraordinary and toxic surveillance technology. Every other city should follow suit.”

Cases like Mr Williams’ are certainly strengthening such calls. Over 1,000 experts signed an open letter this week against the use of AI for the next chilling step: crime prediction.

(Photo by ev on Unsplash)

Over 1,000 researchers sign letter opposing ‘crime predicting’ AI (24 June 2020)

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include employees working on AI from tech giants including Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues of today’s AI technologies which make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Current facial recognition systems are more accurate when detecting white males and, when used in law enforcement settings, incorrectly flag members of the BAME community as criminals more often.

However, even if the inaccuracies with facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

“As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

“Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

Among the co-authors of the disputed paper is Jonathan W. Korn, a Ph.D. student who is highlighted as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and the automation of racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)

Baidu ends participation in AI alliance as US-China relations deteriorate (19 June 2020)

Baidu will no longer participate in the Partnership on AI (PAI) alliance amid deteriorating relations between the US and China.

PAI is a US-led alliance which aims to foster the ethical development and deployment of AI technologies. Baidu was the only Chinese member.

The loss of Baidu’s expertise and any representation from China is devastating for PAI. Ethical AI development requires global cooperation to set acceptable standards which help to ensure safety while not limiting innovation.

Baidu has officially cited financial pressures for its decision to exit the alliance.

In a statement, Baidu wrote:

“Baidu shares the vision of the Partnership on AI and is committed to promoting the ethical development of AI technologies. 

“We are in discussions about renewing our membership, and remain open to other opportunities to collaborate with industry peers on advancing AI.”

Directors from PAI hope to see Baidu renew its membership of the alliance next year.

Cooperation between American and Chinese firms

Cooperation between American and Chinese firms is getting more difficult as the world’s largest economies continue to implement sanctions on each other.

The US has criticised China for its handling of the coronavirus outbreak, trade practices, its mass imprisonment and alleged torture of Uyghur Muslims in “re-education” camps, and breaking the semi-autonomy of Hong Kong.

In the tech world, much of the focus has been on Chinese telecoms giant Huawei – which the US accuses of being a national security threat. Canada arrested Huawei CFO Meng Wanzhou last year on allegations of using the company’s subsidiaries to flout US sanctions against Iran. Two Canadians who were arrested in China shortly after Meng’s detention, in suspected retaliation, were charged with spying by Beijing this week.

An increasing number of Chinese companies, including Huawei, have been added to a US ‘Entity List’ which bans American companies from working with them without explicit permission from the government.

The US added six Chinese AI companies to its Entity List last October, citing their role in alleged human rights violations.

Earlier this week, the US Commerce Department made an exception to Huawei’s inclusion on the Entity List which allows US companies to work with the Chinese giant for the purposes of developing 5G standards. Hopefully, we can see the same being done for AI companies.

On the whole, though, the political climate is making such cooperation ever harder. It wouldn’t be surprising to see more companies like Baidu drop out of well-intentioned alliances such as PAI if sensible resolutions to differences are not found.

(Photo by Erwan Hesry on Unsplash)

The EU’s privacy watchdog takes aim at Clearview AI’s facial recognition (11 June 2020)

The European Data Protection Board (EDPB) believes use of Clearview AI’s controversial facial recognition system would be illegal.

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

The EDPB has now ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime.”

Furthermore, the watchdog “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI scrapes billions of photos from across the internet for its powerful system, a practice which has come under fire by privacy campaigners. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI last month after calling it a “nightmare scenario” for privacy.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

Backlash over Clearview AI forced the company to announce it will no longer offer its services to private companies. The EU’s ruling will limit Clearview AI’s potential customers even further.

Concerns have grown in recent weeks about facial recognition services amid protests over racial discrimination. Facial recognition services have repeatedly been found to falsely flag minorities, stoking fears they’ll lead to automated racial profiling.

IBM and Amazon have both announced this week they’ll no longer provide facial recognition services to law enforcement and have called on Congress to increase regulation to help ensure future deployments meet ethical standards.

(Photo by Christian Lue on Unsplash)

ACLU sues Clearview AI calling it a ‘nightmare scenario’ for privacy (29 May 2020)

The American Civil Liberties Union (ACLU) is suing controversial facial recognition provider Clearview AI over privacy concerns.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

“The ACLU is taking its fight to defend privacy rights against the growing threat of this unregulated surveillance technology to the courts, even as we double down on our work in legislatures and city councils nationwide.”

Clearview AI has repeatedly come under fire due to its practice of scraping billions of photos from across the internet and storing them in a database for powerful facial recognition services.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently.

The company’s facial recognition system is used by over 2,200 law enforcement agencies around the world – and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

In a press release, the ACLU wrote:

“The New York Times revealed the company was secretly capturing untold numbers of biometric identifiers for purposes of surveillance and tracking, without notice to the individuals affected.

“The company’s actions embodied the nightmare scenario privacy advocates long warned of, and accomplished what many companies — such as Google — refused to try due to ethical concerns.”

However, even more concerning is Clearview AI’s extensive ties with the far-right.

Clearview AI founder Hoan Ton-That claims to have since disassociated from far-right views, movements, and individuals. Ekeland, meanwhile, has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

The ACLU says its lawsuit represents the first “to force any face recognition surveillance company to answer directly to groups representing survivors of domestic violence and sexual assault, undocumented immigrants, and other vulnerable communities uniquely harmed by face recognition surveillance.”

Facial recognition technologies have become a key focus for the ACLU.

Back in March, AI News reported the ACLU was suing the US government for blocking a probe into the use of facial recognition technology at airports. In 2018, the union caught our attention for highlighting the inaccuracy of Amazon’s facial recognition algorithm – especially when identifying people of colour and women.

“Clearview’s actions represent one of the largest threats to personal privacy by a private company our country has faced,” said Jay Edelson of Edelson PC, lead counsel handling this case on a pro bono basis.

“If a well-funded, politically connected company can simply amass information to track all of us, we are living in a different America.”

Japan passes bill to build AI-powered ‘super cities’ addressing societal issues (28 May 2020)

Japan has passed a bill to build “super cities” which address societal issues using emerging technologies such as AI.

The bill, passed on Wednesday, aims to accelerate the sweeping change of regulations across various fields to support the creation of such futuristic cities.

Addressing issues such as depopulation and an ageing society will be the focus of the super cities, with technologies including big data and AI key to tackling these challenges.

Large amounts of data will be collected and organised from across administrative organisations.

Local governments will be selected for the ambitious projects and will launch forums with the national government and private companies to take the plans forward.

Draft plans created through this deep public-private collaboration will subsequently be submitted to the national government, provided local residents approve them.

As with many smart city plans, there are deep concerns about the collection of personal data and what it could mean for individual privacy. Local residents are sure to want assurance that any data collection is anonymous.

A similar bill was submitted to the Diet (Japan’s national legislature) last year but was scrapped following calls from the ruling government to review it.

It was the revised bill that passed on Wednesday and, given the appetite for the project across government, the plans are now expected to progress swiftly.

(Photo by Jezael Melgoza on Unsplash)

Google pledges to no longer build AIs for the fossil fuel industry (22 May 2020)

Google has pledged to no longer build AIs for the fossil fuel industry as it further distances itself from controversial developments.

A report from Greenpeace earlier this month exposed Google as being one of the top three developers of AI tools for the fossil fuel industry. Greenpeace found AI technologies boost production levels by as much as five percent.

In an interview with CUBE’s John Furrier, the leader of Google’s CTO office, Will Grannis, said that Google will “no longer develop artificial intelligence (AI) software and tools for oil and gas drilling operations.”

The pledge from Google Cloud is welcome, but it must be taken in a wider context.

In 2019, Google Cloud’s revenue from oil and gas was approximately $65 million. A hefty sum, but less than one percent of all Google Cloud revenues. Furthermore, Google Cloud’s revenue from oil and gas decreased by about 11 percent despite overall revenue growing by 53 percent.

While Google Cloud’s revenue from the oil and gas industry was declining, the public’s intolerance of big polluters is increasing. The reputational damage of continuing its relationships with polluters would likely have cost Google more over the long term.

This isn’t the first time Google has cut off an AI-related relationship with a controversial industry to preserve its reputation.

Back in 2018, Google was forced to end Project Maven, a Pentagon contract to build AI technologies for drones. Over 4,000 Google employees signed a petition demanding their management cease the project and never again “build warfare technology.”

Following the Project Maven backlash, Google CEO Sundar Pichai promised in a blog post the company will not develop technologies or weapons that cause harm, or anything which can be used for surveillance violating “internationally accepted norms” or “widely accepted principles of international law and human rights”.

Back in January, Pichai called for sensible AI regulation that does not limit the potential societal benefits.

PAX, a Dutch NGO, ranked Google among the safest companies developing AI while slamming rivals such as Amazon and Microsoft for being among the “highest risk” tech firms in the world.

(Photo by Zbynek Burival on Unsplash)

US Patent Office: AIs cannot be credited as inventors (30 April 2020)

The US Patent and Trademark Office (USPTO) has ruled that an AI cannot be legally credited as an inventor.

AI will assist us mere humans in coming up with new innovations in the years to come. However, the USPTO will not let them take the credit.

The USPTO has rejected two early filings of inventions credited to an AI system called DABUS which was created by Stephen Thaler.

DABUS invented two devices: a shape-shifting food container and a new type of emergency flashlight.

The filings were submitted by the Artificial Inventor Project (AIP) last year. AIP’s lawyers argued that Thaler is an expert in building AI systems like DABUS but has no experience in consumer goods and would not have created them himself.

The USPTO concluded that “only natural persons may be named as an inventor in a patent application,” under the current law.

Similar applications by the AIP in the UK and EU were rejected along the same lines by their respective patent authorities.

“If I teach my Ph.D. student and they go on to make a final complex idea, that doesn’t make me an inventor on their patent, so it shouldn’t with a machine,” Ryan Abbott, a professor at the University of Surrey who led a group of legal experts in the AI patent project, told the Wall Street Journal last year.

The case over whether only humans should hold such rights has similarities to the infamous monkey selfie saga where PETA argued that a monkey could own the copyright to a selfie.

The US Copyright Office also ruled in that instance that only photographs taken by humans can be copyrighted and PETA’s case was subsequently dismissed.

(Photo by Jesse Chan on Unsplash)
