University College London: Deepfakes are the ‘most serious’ AI crime threat
6 August 2020

Researchers from University College London have released a ranking of what experts believe to be the most serious AI crime threats.

The researchers first compiled a list of 20 ways AI is expected to be used by criminals within the next 15 years. Thirty-one experts were then asked to rank them by potential severity.

Deepfakes – AI-generated images, videos, and articles – ranked top of the list as the most serious threat.

New and dangerous territory

It’s of little surprise to see deepfakes rank so highly, given the existing issues with disinformation campaigns.

Most fake content today still has to be created by humans, such as those working in Russia’s infamous “troll farms”. Human-generated disinformation campaigns take time to produce to a convincing standard and often follow patterns which make them easier to trace.

Automating the production of fake content en masse – to influence things such as democratic votes and public opinion – is a step into new and dangerous territory.

One of the most high-profile cases so far involved US House Speaker Nancy Pelosi. In 2019, a doctored video circulated on social media which made Pelosi appear drunk and slurring her words. Pelosi criticised Facebook’s response, or lack thereof, and later told California’s KQED: “I think they have proven — by not taking down something they know is false — that they were willing enablers of the Russian interference in our election.”

The Pelosi video was unsophisticated and likely created to amuse rather than to deceive maliciously, but it’s an early warning of how such fakes could be used to cause reputational damage – or worse. Just imagine the potential consequences of a fake video of the president announcing an imminent strike on somewhere like North Korea.

Deepfakes also have obvious potential for fraud: impersonating someone else to access things like bank accounts and sensitive information.

Then there’s the issue of blackmail. Deep learning has already been used to superimpose the faces of celebrities onto adult performers. While fake, the threat of releasing such videos – and the embarrassment they would cause – could lead some victims to pay a ransom to keep them from being made public.

“People now conduct large parts of their lives online and their online activity can make and break reputations,” comments first author Dr Matthew Caldwell of UCL Computer Science. “Such an online environment, where data is property and information power, is ideally suited for exploitation by AI-based criminal activity.”

All in all, it’s easy to see why experts are so concerned about deepfakes.

As part of a bid to persuade Facebook to change its policies on deepfakes, Israeli startup Canny AI created a deepfake of Facebook CEO Mark Zuckerberg last year which made it appear like he said: “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Other AI crime threats

The researchers identified four other major AI crime threats: the use of driverless cars as weapons, automated spear phishing, harvesting information for blackmail, and the disruption of AI-controlled systems.

“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation,” explained senior author Professor Lewis Griffin of UCL Computer Science. “To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives.”

Of medium concern were threats such as the sale of items and services fraudulently passed off as AI, including security screening and targeted advertising solutions. The researchers believe leading people to believe such products are AI-powered could be lucrative.

Among the lesser concerns are so-called “burglar bots” – small devices which could slip through a property’s access points to unlock doors or search for data. The researchers believe these pose less of a threat because they can easily be prevented through methods such as letterbox cages.

Similarly, the researchers note that AI-assisted stalking, while damaging for individuals, isn’t considered a major threat because it cannot operate at scale.

You can find the researchers’ full paper in the journal Crime Science.

(Photo by Bill Oxford on Unsplash)

UK and Australia launch joint probe into Clearview AI’s mass data scraping
10 July 2020

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia evidently see things differently from Ekeland and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

ACLU sues Clearview AI calling it a ‘nightmare scenario’ for privacy
29 May 2020

The American Civil Liberties Union (ACLU) is suing controversial facial recognition provider Clearview AI over privacy concerns.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

“The ACLU is taking its fight to defend privacy rights against the growing threat of this unregulated surveillance technology to the courts, even as we double down on our work in legislatures and city councils nationwide.”

Clearview AI has repeatedly come under fire due to its practice of scraping billions of photos from across the internet and storing them in a database for powerful facial recognition services.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently.

The company’s facial recognition system is used by over 2,200 law enforcement agencies around the world – and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

In a press release, the ACLU wrote:

“The New York Times revealed the company was secretly capturing untold numbers of biometric identifiers for purposes of surveillance and tracking, without notice to the individuals affected.

The company’s actions embodied the nightmare scenario privacy advocates long warned of, and accomplished what many companies — such as Google — refused to try due to ethical concerns.”

However, even more concerning are Clearview AI’s extensive ties with the far-right.

Clearview AI founder Hoan Ton-That claims to have since disassociated from far-right views, movements, and individuals. Ekeland, meanwhile, has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

The ACLU says its lawsuit represents the first “to force any face recognition surveillance company to answer directly to groups representing survivors of domestic violence and sexual assault, undocumented immigrants, and other vulnerable communities uniquely harmed by face recognition surveillance.”

Facial recognition technologies have become a key focus for the ACLU.

Back in March, AI News reported the ACLU was suing the US government for blocking a probe into the use of facial recognition technology at airports. In 2018, the union caught our attention for highlighting the inaccuracy of Amazon’s facial recognition algorithm – particularly when identifying women and people of colour.

“Clearview’s actions represent one of the largest threats to personal privacy by a private company our country has faced,” said Jay Edelson of Edelson PC, lead counsel handling this case on a pro bono basis.

“If a well-funded, politically connected company can simply amass information to track all of us, we are living in a different America.”

AI Experts: Dear Amazon, stop selling facial recognition to law enforcement
4 April 2019

A group of AI experts have signed an open letter to Amazon demanding the company stop selling facial recognition to law enforcement following bias findings.

Back in January, AI News reported on findings by Algorithmic Justice League founder Joy Buolamwini, who researched some of the world’s most popular facial recognition algorithms.

Buolamwini found most of the algorithms were biased, misidentifying darker-skinned subjects and women more often than others.

Here were the results in descending order of accuracy:

Microsoft

  • Lighter Males (100 percent)
  • Lighter Females (98.3 percent)
  • Darker Males (94 percent)
  • Darker Females (79.2 percent)

Face++

  • Darker Males (99.3 percent)
  • Lighter Males (99.2 percent)
  • Lighter Females (94 percent)
  • Darker Females (65.5 percent)

IBM

  • Lighter Males (99.7 percent)
  • Lighter Females (92.9 percent)
  • Darker Males (88 percent)
  • Darker Females (65.3 percent)

Amazon executives disputed the findings and claimed the study used a lower confidence threshold than the company recommends for law enforcement use.

“The answer to anxieties over new technology is not to run ‘tests’ inconsistent with how the service is designed to be used, and to amplify the test’s false and misleading conclusions through the news media,” Matt Wood, GM of AI for Amazon’s cloud-computing division, wrote in a January blog post.

Signatories of the open letter came to Buolamwini’s defence, including AI pioneer Yoshua Bengio, a recent winner of the Turing Award.

“In contrast to Dr. Wood’s claims, bias found in one system is cause for concern in the other, particularly in use cases that could severely impact people’s lives, such as law enforcement applications,” they wrote.

Despite having the most accurate facial recognition in the study, Microsoft has rightly not rested on that result and has further improved its accuracy since Buolamwini’s work. The firm also supports a policy requiring visible signage wherever facial recognition is in use.

IBM has also made huge strides in levelling the accuracy of its algorithms across all parts of society. Earlier this year, the company unveiled a new one-million-image dataset which better represents the diversity of society.

When Buolamwini reassessed IBM’s algorithm, the accuracy when assessing darker males jumped from 88 percent to 99.4 percent, for darker females from 65.3 percent to 83.5 percent, for lighter females from 92.9 percent to 97.6 percent, and for lighter males it stayed the same at 97 percent.

Buolamwini commented: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”
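
As an aside on the mechanics: per-group accuracy figures like those above come from comparing a model’s predictions against labelled benchmark data separately for each demographic group. The Python sketch below is a minimal, hypothetical illustration of that bookkeeping – the sample records are invented and are not Buolamwini’s data.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute percent accuracy per demographic group.

    records: iterable of (group, predicted_label, true_label) tuples.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        correct[group] += int(predicted == actual)
    return {group: 100.0 * correct[group] / totals[group] for group in totals}

# Made-up sample records, purely for illustration:
sample = [
    ("darker_female", "male", "female"),   # a misclassification
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]
print(accuracy_by_group(sample))  # {'darker_female': 50.0, 'lighter_male': 100.0}
```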

Aside from potentially automating societal problems like racial profiling, inaccurate facial recognition could be the difference between life and death. For example, a recent study (PDF) found that driverless cars observing the road for pedestrians had a more difficult time detecting individuals with darker skin colours.

Everyone, not just AI experts, should be pressuring companies to ensure biases are kept well away from algorithms.

deepgeniusai.com/">AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, , & .

The post AI Experts: Dear Amazon, stop selling facial recognition to law enforcement appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/04/04/ai-experts-amazon-stop-facial-recognition-law/feed/ 0
AI is sentencing people based on their ‘risk’ assessment
22 January 2019

AI-powered tools that assess an individual’s ‘risk’ are being used to make incarceration and sentencing decisions.

During the Data for Black Lives conference last weekend, several experts shared how AI is evolving America’s controversial prison system.

America imprisons more people than any other nation, and not simply because of its population size: at roughly 716 per 100,000 people, its incarceration rate is the highest in the world. Russia, with the second-highest rate, incarcerates around 455 per 100,000.

Black males are, by far, America’s most incarcerated demographic.

AI has been proven to have bias problems. Last year, the American Civil Liberties Union found that Amazon’s facial recognition technology disproportionately misidentified those with darker skin colours as criminals.

The bias is not intentional but a result of a wider diversity problem in STEM careers. In the West, these fields are dominated by white males.

A 2010 study by researchers at NIST and the University of Texas in Dallas found (PDF) algorithms designed and tested in East Asia are better at recognising East Asians, while those developed in Western countries are more accurate when detecting Caucasians.

Deploying such inherently biased AIs is bound to exacerbate societal problems. Most concerning, US courtrooms are using AI tools for ‘risk’ assessments to make sentencing decisions.

Using a defendant’s profile, the AI generates a recidivism score – a number which aims to estimate how likely an individual is to reoffend. A judge then uses that score to make decisions such as the severity of the sentence, which services the individual should be provided, and whether a person should be held in jail before trial.
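
To make the mechanism concrete, here is a purely hypothetical Python sketch of how such a score might gate decisions. The model, features, weights, and thresholds are all invented for illustration; they are not drawn from any real courtroom tool.

```python
import math

def recidivism_score(prior_convictions: int, age: int, employed: bool) -> float:
    """Toy logistic model returning a 0-1 'risk' score. Weights are invented."""
    z = -1.5 + 0.3 * prior_convictions - 0.03 * (age - 18) - 0.5 * int(employed)
    return 1 / (1 + math.exp(-z))

def pretrial_decision(score: float) -> str:
    """Illustrative thresholds only; real tools and policies vary."""
    if score >= 0.7:
        return "hold in jail before trial"
    if score >= 0.4:
        return "release with supervision"
    return "release on recognisance"

score = recidivism_score(prior_convictions=4, age=23, employed=False)
print(round(score, 2), pretrial_decision(score))
```

The point of the sketch is that a single opaque number ends up driving consequential decisions – any bias baked into the model’s weights flows straight through to outcomes.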

Last July, a statement (PDF) was signed by over 100 civil rights organisations – including the ACLU – calling for AI to be kept clear of risk assessments.

If the bias problem with AI is ever solved, its use in the justice system could improve trust in decisions by reducing questions over whether a judge was prejudiced in their sentencing. However, we’re nowhere near that point yet.

deepgeniusai.com/">AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, , & .

The post AI is sentencing people based on their ‘risk’ assessment appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/01/22/ai-sentencing-people-risk-assessment/feed/ 0
Amazon expert suggests AI regulation after ACLU’s bias findings
30 July 2018

An expert from Amazon has suggested the government should implement a minimum confidence level for the use of facial recognition in law enforcement.

Dr. Matt Wood, GM of Deep Learning and AI at Amazon Web Services, made the suggestion in a blog post responding to the American Civil Liberties Union’s (ACLU) finding of racial bias in Amazon’s ‘Rekognition’ facial recognition algorithm.

The ACLU found that Rekognition erroneously matched members of Congress against a database of 25,000 arrest photos, misidentifying those with darker skin colours more often.

Amazon argued the ACLU left Rekognition’s default confidence threshold of 80 percent in place, when the company suggests 95 percent or higher for law enforcement.

Commenting on the ACLU’s findings, Wood wrote:

“The default confidence threshold for facial recognition APIs in Rekognition is 80%, which is good for a broad set of general use cases (such as identifying celebrities on social media or family members who look alike in photos apps), but it’s not the right setting for public safety use cases.

The 80% confidence threshold used by the ACLU is far too low to ensure the accurate identification of individuals; we would expect to see false positives at this level of confidence.”

Wood cited an example of Amazon’s own test in which – using a dataset of over 850,000 faces commonly used in academia – the company searched against public photos of all members of US Congress ‘in a similar way’ to the ACLU.

Using the 99 percent confidence threshold, the misidentification rate dropped to zero despite comparing against a larger number of faces (30x larger than the ACLU test).
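
The dispute essentially comes down to a single API parameter. As a minimal sketch using the real boto3 CompareFaces call (the image bytes and threshold value here are placeholders):

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def find_face_matches(source_bytes: bytes, target_bytes: bytes,
                      threshold: float = 99.0) -> list:
    """Return faces in the target image matching the source face at or
    above `threshold` percent similarity. Rekognition's default is 80;
    Amazon recommends a much higher value for public safety use cases."""
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,
    )
    return response["FaceMatches"]
```

Raising the threshold trades false positives for false negatives: fewer people are wrongly matched, but more genuine matches are missed – which is why the gap between the 80 percent default and Amazon’s recommended 95-plus changes the results so dramatically.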

Amazon is naturally keen to highlight the positive uses of its technology. The company says it’s been used for things such as fighting human trafficking and reuniting lost children with their families.

However, the ACLU’s test shows the potential for the technology to be misused to disastrous effect. Without oversight, civil liberties could be impacted and lead to increased persecution of minorities.

To help prevent this from happening, Wood calls it “a very reasonable idea” for “the government to weigh in and specify what temperature (or confidence levels) it wants law enforcement agencies to meet to assist in their public safety work.”

A 2010 study by researchers at NIST and the University of Texas in Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

When a clear bias problem remains in AI algorithms, it’s little wonder there’s concern about the use of inaccurate facial recognition for things such as police body cams.

Should a minimum confidence level be set for law enforcement?
White House will take a ‘hands-off’ approach to AI regulation
11 May 2018

The White House has decided it will take a ‘hands-off’ approach to AI regulation despite many experts calling for safe and ethical standards to be set.

Some of the world’s greatest minds have expressed concern about the development of AI without regulations, including the likes of Elon Musk and the late Stephen Hawking.

Musk famously said unregulated AI could pose “the biggest risk we face as a civilisation”, while Hawking similarly warned that “the development of full artificial intelligence could spell the end of the human race.”

The announcement that developers will be free to experiment with AI as they see fit was made during a meeting with representatives of 40 companies including Google, Facebook, and Intel.

Strict regulations can stifle innovation, and the US has made clear it wants to emerge as a world leader in the AI race.

Western nations are often seen as being at somewhat of a disadvantage to Eastern countries like China – not because they have less talent, but because their citizens are more wary about data collection and privacy in general. However, there’s a strong argument for striking a balance.

Making the announcement, White House Science Advisor Michael Kratsios noted the government did not stand in the way of Alexander Graham Bell or the Wright brothers when they invented the telephone and aeroplane. Of course, telephones and aeroplanes weren’t designed with the ultimate goal of becoming self-aware and able to make automated decisions.

Both telephones and aeroplanes, like many technological advancements, have been used for military applications. However, human operators have ultimately always made the decisions. AI could be used to automatically launch a nuclear missile if left unchecked.

Recent AI stories have left some people unnerved. A self-driving car from Uber malfunctioned and killed a pedestrian. At Google I/O, the company’s AI called a hair salon and the receptionist had no idea they weren’t speaking to a human.

People not feeling comfortable with AI developments is more likely to stifle innovation than balanced regulations.

What are your thoughts on the White House’s approach to AI regulation?