Smart Cities – AI News
https://news.deepgeniusai.com
Artificial Intelligence News

Police use of Clearview AI’s facial recognition increased 26% after Capitol raid
https://news.deepgeniusai.com/2021/01/11/police-use-clearview-ai-facial-recognition-increased-26-capitol-raid/
Mon, 11 Jan 2021 17:12:08 +0000

Clearview AI reports that police use of the company’s highly-controversial facial recognition system jumped 26 percent following the raid on the Capitol.

The facial recognition system relies on scraping the data of people from across the web without their explicit consent, a practice which has naturally raised some eyebrows—including the ACLU’s which called it a “nightmare scenario” for privacy.

Around three billion images are said to have been scraped for Clearview AI’s system.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland once argued.

Whether you call them protestors or domestic terrorists, the Trump supporters who raided the US Capitol Building last week – incited by the president to halt democracy and overturn the votes of millions of Americans – committed clear criminal offences that drew bipartisan condemnation.

In comments to The New York Times, Clearview AI CEO Hoan Ton-That claimed the company witnessed “a 26 percent increase of searches over our usual weekday search volume” on January 7th, following the riots.

Given the number of individuals involved, law enforcement has a gargantuan task to identify and locate the people that went far beyond exercising their right to peaceful protest and invaded a federal building, caused huge amounts of damage, and threatened elected representatives and staff.

The FBI has issued public appeals, but it’s little surprise that law enforcement is turning to automated means—regardless of the controversy. According to Clearview AI, approximately 2,400 agencies across the US use the company’s facial recognition technology.

Last year, the UK and Australia launched a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

A similar probe was also launched by the EU’s privacy watchdog. The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI has already been forced to suspend operations in Canada after the federal Office of the Privacy Commissioner of Canada opened an investigation into the company.

While Clearview AI’s facial recognition tech continues to have widespread use in the US, some police departments have taken the independent decision to ban officers from using such systems due to the well-documented inaccuracies which particularly affect minority communities.

UK and Australia launch joint probe into Clearview AI’s mass data scraping
https://news.deepgeniusai.com/2020/07/10/uk-australia-probe-clearview-ai-mass-data-scraping/
Fri, 10 Jul 2020 14:49:51 +0000

The UK and Australia have launched a joint probe into the controversial “data scraping” practices of Clearview AI.

Clearview AI has repeatedly made headlines, and rarely for good reason. The company’s facial recognition technology is impressive but relies on scraping billions of people’s data from across the web.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

Regulators in the UK and Australia seem to have a different perspective than Ekeland and have announced a joint probe into Clearview AI’s practices.

“The Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO) have opened a joint investigation into the personal information handling practices of Clearview Inc., focusing on the company’s use of ‘scraped’ data and biometrics of individuals,” the ICO wrote in a statement.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalized data environment.”

A similar probe was launched by the EU’s privacy watchdog last month.

The European Data Protection Board ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime” and that it “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world. A recent leak suggests it’s even being used by commercial businesses like Best Buy and Macy’s. In May, Clearview said it would stop working with non–law enforcement entities.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI in May after calling it a “nightmare scenario” for privacy.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

(Photo by The Creative Exchange on Unsplash)

Over 1,000 researchers sign letter opposing ‘crime predicting’ AI
https://news.deepgeniusai.com/2020/06/24/over-1000-researchers-sign-letter-crime-predicting-ai/
Wed, 24 Jun 2020 12:24:25 +0000

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include employees working on AI from tech giants including Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues of today’s AI technologies which make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Current facial recognition systems are consistently more accurate when detecting white males, and more often incorrectly flag members of the BAME community as criminals when used in a law enforcement setting.

However, even if the inaccuracies with facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

Among the co-authors of the disputed paper is Jonathan W. Korn, a Ph.D. student who is highlighted as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and the automation of racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)

The EU’s privacy watchdog takes aim at Clearview AI’s facial recognition
https://news.deepgeniusai.com/2020/06/11/eu-privacy-watchdog-aim-clearview-ai-facial-recognition/
Thu, 11 Jun 2020 14:33:29 +0000

The European Data Protection Board (EDPB) believes use of Clearview AI’s controversial facial recognition system would be illegal.

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

The EDPB has now ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime.”

Furthermore, the watchdog “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI scrapes billions of photos from across the internet for its powerful system, a practice which has come under fire by privacy campaigners. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI last month after calling it a “nightmare scenario” for privacy.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

Backlash over Clearview AI forced the company to announce it will no longer offer its services to private companies. The EU’s ruling will limit Clearview AI’s potential customers even further.

Concerns have grown in recent weeks about facial recognition services amid protests over racial discrimination. Facial recognition services have been repeatedly found to falsely flag minorities, stoking fears they’ll lead to automated racial profiling.

IBM and Amazon have both announced this week they’ll no longer provide facial recognition services to law enforcement and have called on Congress to increase regulation to help ensure future deployments meet ethical standards.

(Photo by Christian Lue on Unsplash)

ACLU sues Clearview AI calling it a ‘nightmare scenario’ for privacy
https://news.deepgeniusai.com/2020/05/29/aclu-clearview-ai-nightmare-scenario-privacy/
Fri, 29 May 2020 13:48:55 +0000

The American Civil Liberties Union (ACLU) is suing controversial facial recognition provider Clearview AI over privacy concerns.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

“The ACLU is taking its fight to defend privacy rights against the growing threat of this unregulated surveillance technology to the courts, even as we double down on our work in legislatures and city councils nationwide.”

Clearview AI has repeatedly come under fire due to its practice of scraping billions of photos from across the internet and storing them in a database for powerful facial recognition services.

“Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland said recently.

The company’s facial recognition system is used by over 2,200 law enforcement agencies around the world – and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

In a press release, the ACLU wrote:

“The New York Times revealed the company was secretly capturing untold numbers of biometric identifiers for purposes of surveillance and tracking, without notice to the individuals affected.

The company’s actions embodied the nightmare scenario privacy advocates long warned of, and accomplished what many companies — such as Google — refused to try due to ethical concerns.”

However, even more concerning is Clearview AI’s extensive ties with the far-right.

Clearview AI founder Hoan Ton-That claims to have since disassociated from far-right views, movements, and individuals. Ekeland, meanwhile, has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

The ACLU says its lawsuit represents the first “to force any face recognition surveillance company to answer directly to groups representing survivors of domestic violence and sexual assault, undocumented immigrants, and other vulnerable communities uniquely harmed by face recognition surveillance.”

Facial recognition technologies have become a key focus for the ACLU.

Back in March, AI News reported the ACLU was suing the US government for blocking a probe into the use of facial recognition technology at airports. In 2018, the union caught our attention for highlighting the inaccuracy of Amazon’s facial recognition algorithm – especially when identifying women and people of colour.

“Clearview’s actions represent one of the largest threats to personal privacy by a private company our country has faced,” said Jay Edelson of Edelson PC, lead counsel handling this case on a pro bono basis.

“If a well-funded, politically connected company can simply amass information to track all of us, we are living in a different America.”

Japan passes bill to build AI-powered ‘super cities’ addressing societal issues
https://news.deepgeniusai.com/2020/05/28/japan-bill-build-ai-super-cities-societal-issues/
Thu, 28 May 2020 11:58:22 +0000

Japan has passed a bill to build “super cities” which address societal issues using emerging technologies such as AI.

The bill, passed on Wednesday, aims to accelerate the sweeping change of regulations across various fields to support the creation of such futuristic cities.

Addressing issues such as depopulation and an aging society will be the focus of the super cities. Technologies including big data and AI will be key to successfully tackling the challenging problems.

Large amounts of data will be collected and organised from across administrative organisations.

Local governments will be selected for the ambitious projects which will launch forums with the national government and private companies to take forward the plans.

Draft plans will be created from this deep public-private collaboration that will subsequently be submitted to the state government if approved by local residents.

As with many smart city plans, there are deep concerns about the collection of personal data and what it could mean for individual privacy. Local residents are sure to want assurance that any data collection is anonymous.

A similar bill was submitted to the Diet (Japan’s decision-making institution) last year. The bill was scrapped following calls from the ruling government to review it.

The revised bill was passed on Wednesday. Given the appetite for the project across the government, the plans are now expected to progress swiftly.

(Photo by Jezael Melgoza on Unsplash)

Elon Musk is hosting a ‘super fun’ AI hackathon at his house
https://news.deepgeniusai.com/2020/02/04/elon-musk-hosting-fun-ai-hackathon-house/
Tue, 04 Feb 2020 17:07:43 +0000

Fresh from kicking off his EDM career, Elon Musk has announced Tesla will be hosting a “super fun” AI hackathon at his house.

In a tweet, Musk wrote: “Tesla will hold a super fun AI party/hackathon at my house with the Tesla AI/autopilot team in about four weeks. Invitations going out soon.”

The hackathon will focus on the AI behind Tesla’s problematic Autopilot feature, which has been reported to accelerate erratically. Tesla has denied the claims, but the hackathon suggests the company at least wants to make Autopilot more robust.

Ahead of the hackathon announcement, Musk called for developers to join Tesla’s AI team which, he says, reports directly to him.

Talking up the opportunity, Musk highlighted that Tesla will soon have over a million connected vehicles worldwide. Every Tesla is fitted with the sensors and computing power needed for full self-driving which “is orders of magnitude more than everyone else combined”.

Musk says that an individual’s educational background is irrelevant. However, before you get too excited, don’t expect to simply waltz into Tesla: you will be required to pass a “hardcore coding test”.

Python is used “for rapid iteration” at Tesla to build neural networks before the code is converted into “C++/C/raw metal driver code” for the speed required for such critical tasks as piloting a vehicle.

As part of that need for speed, Tesla is also taking on the challenge of building its own AI chips. Musk says Tesla is seeking world-class chip designers to join the company’s teams based in Palo Alto and Austin.

Musk has been vocal about his fears of AI – calling it a potentially existential threat if left unchecked. However, he is also well aware of its opportunities (no surprise, given how it’s helping to inflate his and investors’ wallets).

“My actions, not just words, show how critically I view (benign) AI,” Musk wrote.

UK police are concerned AI will lead to bias and over-reliance on automation
https://news.deepgeniusai.com/2019/09/17/uk-police-concerned-ai-bias-automation/
Tue, 17 Sep 2019 10:00:22 +0000

British police have expressed concern that using AI in their operations may lead to increased bias and an over-reliance on automation.

A study commissioned by UK government advisory body the Centre for Data Ethics and Innovation warned that police felt AI may “amplify” prejudices.

50 experts were interviewed by the Royal United Services Institute (RUSI) for the research, including senior police officers.

Racial profiling continues to be a huge problem. More young black men are stopped than young white men. The experts interviewed by the RUSI are worried these human prejudices could make their way into algorithms if they’re trained on existing police data.

It’s also noted how individuals from disadvantaged backgrounds tend to use more public transport. With data likely to be collected from the use of public transport, this increases the likelihood of those individuals being flagged.

The accuracy of facial recognition algorithms has often been questioned. Earlier this year, the Algorithmic Justice League tested all the major technologies and found that algorithms particularly struggled with darker-skinned females.

A similar report published by the American Civil Liberties Union focused on Amazon’s so-called Rekognition facial recognition system. When tested against members of congress, it incorrectly flagged those with darker skin more often.

Both findings show the potentially devastating societal impact if such technology was rolled out publicly today. It’s good to hear British authorities are at least aware of the potential complications.

The RUSI reports that experts in the study want to see clearer guidelines established for acceptable use of the technology. They hope this will provide confidence to police forces to adopt such potentially beneficial new technologies, but in a safe and responsible way.

“For many years police forces have looked to be innovative in their use of technology to protect the public and prevent harm and we continue to explore new approaches to achieve these aims,” Assistant Chief Constable Jonathan Drake told BBC News.

“But our values mean we police by consent, so anytime we use new technology we consult with interested parties to ensure any new tactics are fair, ethical and producing the best results for the public.”

You can find the full results of the RUSI’s study here.

Police in China will use AI face recognition to identify ‘lost’ elderly
https://news.deepgeniusai.com/2019/08/05/police-china-ai-face-recognition-identify-lost-elderly/
Mon, 05 Aug 2019 15:57:52 +0000

Chinese police hope to use AI-powered facial recognition, in combination with the nation’s mass surveillance network, to identify lost elderly people.

The country’s surveillance network is often scrutinised for being invasive, but the ability to detect potentially vulnerable people helps to shift the perception that it primarily benefits the government.

Public data suggests around 500,000 elderly people get lost each year, the equivalent of around 1,370 per day. About 72 percent of the missing persons were reported as mentally impaired, requiring extra policing effort to identify them and ensure they get home safely.

China is home to many pioneering facial recognition companies. SenseTime became the world’s most funded AI startup in April last year, and launched another $2 billion funding round in January.

Part of the attraction for investors in SenseTime is due to providing technology for the Chinese government’s vast surveillance network. SenseTime’s so-called Viper system aims to process and analyse over 100,000 simultaneous real-time streams from traffic cameras, ATMs, and more to automatically tag and keep track of individuals.

SenseTime claims to have experienced around 400 percent growth in recent years, evidence of the appetite for facial recognition technology.

Despite being such a major player in facial recognition technology, SenseTime CEO Xu Li has called for facial recognition standards to be established for a ‘healthier’ industry.

Public distrust in facial recognition systems remains high. Earlier this year, AI News reported on the findings of the American Civil Liberties Union which found Amazon’s facial recognition AI erroneously labelled those with darker skin colours as criminals more often when matching against mugshots.

A later report from the Algorithmic Justice League tested facial recognition algorithms from Microsoft, Face++, and IBM. All of the algorithms tested struggled most with darker-skinned females, with as low as just 65.3 percent accuracy.

Following the findings, IBM said it would improve its algorithm. When reassessed, IBM’s accuracy for darker-skinned females jumped from 65.3 percent to 83.5 percent.

Algorithmic Justice League founder Joy Buolamwini said: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

In June, Face++ introduced a smart city AI called Wisdom Community in the Haidian district of Beijing. Wisdom Community also helps to detect and track down elderly persons who had been reported missing.

Face++’s technology has already been assisting with missing person cases elsewhere in the country. In October last year, a man in his 70s with Alzheimer’s disease was identified at Changle Middle Road Police Station in Xincheng District, Xi’an and sent home in less than an hour.

While huge concerns remain around facial recognition, such as accuracy and privacy invasion, the use of the technology by Chinese police to help find vulnerable people shows a positive use case that could save lives and reunite families.

Attend the AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo.

The post Police in China will use AI face recognition to identify ‘lost’ elderly appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/08/05/police-china-ai-face-recognition-identify-lost-elderly/feed/ 0
No Rekognition: Police ditch Amazon’s controversial facial recognition https://news.deepgeniusai.com/2019/07/19/rekognition-police-amazon-facial-recognition/ https://news.deepgeniusai.com/2019/07/19/rekognition-police-amazon-facial-recognition/#respond Fri, 19 Jul 2019 16:11:04 +0000 https://d3c9z94rlb3c1a.cloudfront.net/?p=5849 Orlando Police have decided to ditch Amazon’s controversial facial recognition system Rekognition following technical issues. Rekognition was called out by the American Civil Liberties Union (ACLU) for erroneously labelling those with darker skin tones as criminals more often in a test using a database of mugshots. Jacob Snow, Technology and Civil Liberties Attorney at the... Read more »

The post No Rekognition: Police ditch Amazon’s controversial facial recognition appeared first on AI News.

]]>
Orlando Police have decided to ditch Amazon’s controversial facial recognition system Rekognition following technical issues.

Rekognition was called out by the American Civil Liberties Union (ACLU) for erroneously labelling those with darker skin tones as criminals more often in a test using a database of mugshots.

Jacob Snow, Technology and Civil Liberties Attorney at the ACLU Foundation of Northern California, said:

“Face surveillance will be used to power discriminatory surveillance and policing that targets communities of colour, immigrants, and activists. Once unleashed, that damage can’t be undone.”

Amazon disputed the ACLU’s methodology, claiming the test left the default ‘confidence’ setting of 80 percent in place when Amazon recommends at least 95 percent for law enforcement purposes.
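The dispute comes down to how a confidence threshold filters candidate face matches. The sketch below is illustrative, with made-up similarity scores rather than real Rekognition output; in the actual boto3 API the threshold corresponds to the `FaceMatchThreshold` parameter (default 80) on calls such as `search_faces_by_image`.

```python
# Illustrative sketch: how raising a confidence threshold changes
# which face matches survive. Scores are invented for this example.

def filter_matches(matches, threshold):
    """Keep only face matches at or above the confidence threshold."""
    return [m for m in matches if m["Similarity"] >= threshold]

candidate_matches = [
    {"FaceId": "a", "Similarity": 97.2},
    {"FaceId": "b", "Similarity": 84.5},
    {"FaceId": "c", "Similarity": 80.1},
]

# At the default 80 percent threshold, all three candidates count as matches.
print(len(filter_matches(candidate_matches, 80)))  # 3

# At the 95 percent threshold Amazon recommends for law enforcement,
# only the strongest candidate remains.
print(len(filter_matches(candidate_matches, 95)))  # 1
```

In other words, the same system can look far more error-prone at 80 percent than at 95 percent, which is the crux of Amazon’s objection to the ACLU test.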

Orlando Police were using Rekognition to automatically detect suspected criminals in live footage taken by surveillance cameras. Despite help from Amazon, the police spent 15 months failing to get it to work properly.

“We haven’t even established a stream today,” the city’s chief information officer Rosa Akhtarkhavari told the Orlando Weekly. “We’re talking about more than a year later. We have not, today, established a reliable stream.”

Employees of Amazon recently wrote a letter to CEO Jeff Bezos expressing their concerns over the sale of facial recognition software and other services to US government bodies such as ICE (Immigration and Customs Enforcement).

In their letter, the Amazonians wrote:

“We refuse to build the platform that powers ICE, and we refuse to contribute to tools that violate human rights. As ethically concerned Amazonians, we demand a choice in what we build and a say in how it is used.”

Orlando Police have now cancelled their contract with Amazon. The news will be of some relief to those concerned about the privacy implications of such Big Brother-like systems.


The post No Rekognition: Police ditch Amazon’s controversial facial recognition appeared first on AI News.

]]>
https://news.deepgeniusai.com/2019/07/19/rekognition-police-amazon-facial-recognition/feed/ 0