policing – AI News

CDEI launches a ‘roadmap’ for tackling algorithmic bias (27 Nov 2020)

A review from the Centre for Data Ethics and Innovation (CDEI) has led to the creation of a “roadmap” for tackling algorithmic bias.

The review was commissioned by the UK government in October 2018 and will receive a formal response from the government.

Algorithms bring substantial benefits to businesses and individuals able to use them effectively. However, increasing evidence suggests biases are – often unconsciously – making their way into algorithms and creating an uneven playing field.

The CDEI is the UK government’s advisory body on the responsible use of AI and data-driven technology. CDEI has spent the past two years examining the issue of algorithmic bias and how it can be tackled.

Adrian Weller, Board Member for the Centre for Data Ethics and Innovation, said:

“It is vital that we work hard now to get this right as adoption of algorithmic decision-making increases. Government, regulators, and industry need to work together with interdisciplinary experts, stakeholders, and the public to ensure that algorithms are used to promote fairness, not undermine it.

The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK to achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals.

Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.”

The report focuses on four key sectors where algorithmic bias poses the biggest risk: policing, recruitment, financial services, and local government.

Today’s facial recognition algorithms are relatively effective when used on white males, but research has consistently shown they are far less accurate on women and on people with darker skin tones. Error rates are therefore higher when facial recognition algorithms are used on some parts of society than on others.
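To make that disparity concrete, here is a minimal sketch of how an audit might compute false match rates per demographic group. The records, group labels, and rates below are invented for illustration; a real audit would use a labelled benchmark such as a NIST FRVT-style dataset.

```python
# Minimal sketch: per-group false match rates for a face matcher.
# All records are invented; a real audit would use a labelled benchmark.
from collections import defaultdict

# Each record: (demographic group, matcher said "match", truly same person)
results = [
    ("lighter-skinned male", True, True),
    ("lighter-skinned male", False, False),
    ("darker-skinned female", True, True),
    ("darker-skinned female", True, False),   # false match
    ("darker-skinned female", True, False),   # false match
    ("darker-skinned female", False, False),
]

false_matches = defaultdict(int)
non_mated = defaultdict(int)  # comparisons of genuinely different people

for group, predicted_match, same_person in results:
    if not same_person:
        non_mated[group] += 1
        if predicted_match:
            false_matches[group] += 1

for group, total in non_mated.items():
    print(f"{group}: false match rate {false_matches[group] / total:.0%}")
```

A single aggregate accuracy figure would hide exactly this kind of gap, which is why per-group reporting matters.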

In June, Detroit Police Chief James Craig said facial recognition would misidentify someone around 96 percent of the time—not particularly comforting when such systems are being used to perform mass surveillance of protests.

Craig’s comments were made just days after the ACLU (American Civil Liberties Union) lodged a complaint against Detroit Police following the harrowing wrongful arrest of Robert Williams, a black man, due to a facial recognition error.

And that’s just one example of how AI can unfairly impact some parts of society more than others.

“Fairness is a highly prized human value,” the report’s preface reads. “Societies in which individuals can flourish need to be held together by practices and institutions that are regarded as fair.”

Ensuring fairness in algorithmic decision-making

Algorithms must be transparent. Without transparency in financial services, a business loan or mortgage could be rejected simply because a person was born in a poor neighbourhood. Job applications could be rejected based not on a person’s actual skill but on where they were educated.

Such biases exist in humans and our institutions today, but automating them at scale is a recipe for disaster. Removing bias from algorithms is not an easy task, but if achieved it would increase fairness by taking human biases out of the equation.
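The neighbourhood example above is a classic proxy problem: even a model that never sees a protected attribute can reproduce bias through a correlated feature. The sketch below uses entirely synthetic data and a made-up postcode rule to show the effect.

```python
# Synthetic demonstration of proxy bias: the approval rule never sees
# the protected attribute, yet outcomes still split along group lines
# because a correlated feature (a hypothetical postcode) stands in for it.
import random

random.seed(0)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Historic segregation encoded in the data: group B is far more
    # likely to live in postcode "X".
    postcode = "X" if random.random() < (0.8 if group == "B" else 0.2) else "Y"
    applicants.append((group, postcode))

def approve(postcode: str) -> bool:
    # The "model" uses only the postcode -- no protected attribute.
    return postcode == "Y"

for g in ("A", "B"):
    members = [p for grp, p in applicants if grp == g]
    rate = sum(approve(p) for p in members) / len(members)
    print(f"group {g}: approval rate {rate:.0%}")
```

Simply deleting the protected column, in other words, does not make a decision process fair.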

“It is well established that there is a risk that algorithmic systems can lead to biased decisions, with perhaps the largest underlying cause being the encoding of existing human biases into algorithmic systems. But the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care.

When changing processes that make life-affecting decisions about individuals we should always proceed with caution. It is important to recognise that algorithms cannot do everything. There are some aspects of decision-making where human judgement, including the ability to be sensitive and flexible to the unique circumstances of an individual, will remain crucial.”

The report’s authors examined the aforementioned four key sectors to determine their current “maturity levels” in algorithmic decision-making.

In recruitment, the authors found rapid growth in the use of algorithms to make decisions at all stages. They note that adequate data is being collected to monitor outcomes, but found a lack of understanding of how to prevent human biases from creeping in.

“More guidance is needed on how to ensure that these tools do not unintentionally discriminate against groups of people, particularly when trained on historic or current employment data.”
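The report does not prescribe a specific test, but one widely used outcome-monitoring check, offered here purely as an illustration, is the US EEOC ‘four-fifths’ guideline: a group’s selection rate should be at least 80 percent of the highest group’s rate. A minimal sketch with invented counts:

```python
# Sketch of an adverse impact check on screening outcomes.
# Counts are invented; the 0.8 threshold follows the US EEOC
# 'four-fifths' guideline, used here only as an illustration.
selected = {"group_a": 60, "group_b": 30}
applicants = {"group_a": 100, "group_b": 100}

rates = {g: selected[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({status})")
```

Checks of this kind only flag disparities; deciding whether a flagged gap is justified still requires human review.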

The financial services industry has arguably relied on data to make decisions – such as how likely an individual is to repay a debt – for longer than any other.

“Specific groups are historically underrepresented in the financial system, and there is a risk that these historic biases could be entrenched further through algorithmic systems.”

CDEI found limited use of algorithmic decision-making in UK policing, but observed variance across forces in both usage and the management of ethical risks.

“The use of data analytics tools in policing carries significant risk. Without sufficient care, processes can lead to outcomes that are biased against particular groups, or systematically unfair.

In many scenarios where these tools are helpful, there is still an important balance to be struck between automated decision-making and the application of professional judgement and discretion.”

Finally, in local government, CDEI noted an increased use of algorithms to inform decision-making but most are in their early stages of deployment. Such tools can be powerful assets for societal good – like helping to plan where resources should be allocated to maintain vital services – but can also carry significant risks.

“Evidence has shown that certain people are more likely to be overrepresented in data held by local authorities and this can then lead to biases in predictions and interventions.”

The CDEI makes a number of recommendations in its report, among them:

  • Clear and mandatory transparency over how algorithms are used for public decision-making and steps taken to ensure the fair treatment of individuals.
  • Full accountability for organisations implementing such technologies.
  • Improving the diversity of roles involved with developing and deploying decision-making tools.
  • Updating model contracts and framework agreements for public sector procurement to incorporate minimum standards around the ethical use of AI.
  • The government working with regulators to provide clear guidance on the collection and use of protected characteristic data in outcome monitoring and decision-making processes. They should then encourage the use of that guidance and data to address current and historic bias in key sectors.
  • Ensuring that the Equality and Human Rights Commission has sufficient resources to investigate cases of alleged algorithmic discrimination.

CDEI is overseen by an independent board which is made up of experts from across industry, civil society, academia, and government; it is an advisory body and does not directly set policies. The Department for Digital, Culture, Media & Sport is consulting on whether a statutory status would help the CDEI to deliver its remit as part of the National Data Strategy.

You can find a full copy of the CDEI’s report into tackling algorithmic bias here (PDF).

(Photo by Matt Duncan on Unsplash)

The ACLU uncovers the first known wrongful arrest due to AI error (25 Jun 2020)

The ACLU (American Civil Liberties Union) has forced the police to acknowledge a wrongful arrest due to an erroneous algorithm.

While it’s been suspected that documented racial bias with facial recognition algorithms has led to false arrests, it’s been difficult to prove.

On Wednesday, the ACLU lodged a complaint against the Detroit police after black male Robert Williams was arrested on his front lawn “as his wife Melissa looked on and as his daughters wept from the trauma”. Williams was held in a “crowded and filthy” cell overnight without being given any reason.

Detroit Police arrested Williams for allegedly stealing five watches valued at $3,800 from a store in October 2018. A facial recognition algorithm matched a blurry CCTV image to Williams’ driver’s license photo.

During an interrogation the day after his arrest, the police admitted that “the computer must have gotten it wrong”. Williams was kept incarcerated until after dark “at which point he was released out the front door, on a cold and rainy January night, where he was forced to wait outside on the curb for approximately an hour while his wife scrambled to find child care for the children so that she could come pick him up.”

Speaking to The New York Times, a Detroit police spokesperson said the department “does not make arrests based solely on facial recognition,” claiming that witness interviews and a photo lineup were used.

However, a response from the Wayne County prosecutor’s office confirms that the department used facial recognition to identify Williams from the security footage, and that an eyewitness to the crime was not shown the alleged photo lineup.

In its complaint, the ACLU demands that Detroit police end the use of facial recognition “as the facts of Mr. Williams’ case prove both that the technology is flawed and that DPD investigators are not competent in making use of such technology.”

This week, Boston became the latest city to ban facial recognition technology for municipal use. Boston follows a growing number of cities, such as San Francisco and Oakland in California, which have banned the technology over human rights concerns.

“Facial recognition is inherently dangerous and inherently oppressive. It cannot be reformed or regulated. It must be abolished,” said Evan Greer, deputy director of the digital rights group Fight for the Future.

“Boston just became the latest major city to stop the use of this extraordinary and toxic surveillance technology. Every other city should follow suit.”

Cases like Mr Williams’ are certainly strengthening such calls. Over 1,000 experts signed an open letter this week against the use of AI for the next chilling step: crime prediction.

(Photo by ev on Unsplash)

Over 1,000 researchers sign letter opposing ‘crime predicting’ AI (24 Jun 2020)

More than 1,000 researchers, academics, and experts have signed an open letter opposing the use of AI to predict crime.

Anyone who has watched the sci-fi classic Minority Report will be concerned about attempts to predict crime before it happens. In an ideal scenario, crime prediction could help determine where to allocate police resources – but the reality will be very different.

The researchers are speaking out ahead of an imminent publication titled ‘A Deep Neural Network Model to Predict Criminality Using Image Processing’. In the paper, the authors claim to be able to predict whether a person will become a criminal based on automated facial recognition.

“By automating the identification of potential threats without bias, our aim is to produce tools for crime prevention, law enforcement, and military applications that are less impacted by implicit biases and emotional responses,” says Harrisburg University Professor and co-author of the paper Nathaniel J.S. Ashby.

“Our next step is finding strategic partners to advance this mission.”

Finding willing partners may prove to be a challenge. Signatories of the open letter include employees working on AI from tech giants including Microsoft, Google, and Facebook.

In their letter, the signatories highlight the many issues of today’s AI technologies which make dabbling in crime prediction so dangerous.

Chief among the concerns is the well-documented racial bias of algorithms. Current facial recognition systems are more accurate when detecting white males and, when used in a law enforcement setting, incorrectly flag members of the BAME community as criminals more often.

However, even if the inaccuracies with facial recognition algorithms are addressed, the researchers highlight the problems with the current justice system which have been put in the spotlight in recent weeks following the murder of George Floyd.

In their letter, the researchers explain:

“Research of this nature — and its accompanying claims to accuracy — rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral.

As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system. These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences.

Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data. Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”

Among the co-authors of the disputed paper is Jonathan W. Korn, a Ph.D. student who is highlighted as an NYPD veteran. Korn says that AI which can predict criminality would be “a significant advantage for law enforcement agencies.”

While such a system would make the lives of law enforcement officers easier, it would do so at the cost of privacy and the automation of racial profiling.

“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” warn the letter’s authors.

“The uncritical acceptance of default assumptions inevitably leads to discriminatory design in algorithmic systems, reproducing ideas which normalise social hierarchies and legitimise violence against marginalised groups.”

You can find the full open letter here.

(Photo by Bill Oxford on Unsplash)

The EU’s privacy watchdog takes aim at Clearview AI’s facial recognition (11 Jun 2020)

The European Data Protection Board (EDPB) believes use of Clearview AI’s controversial facial recognition system would be illegal.

Clearview AI’s facial recognition system is used by over 2,200 law enforcement agencies around the world and even commercial businesses like Best Buy and Macy’s, according to a recent leak.

The EDPB has now ruled that any use of the service by law enforcement in Europe would “likely not be consistent with the EU data protection regime.”

Furthermore, the watchdog “has doubts as to whether any Union or Member State law provides a legal basis for using a service such as the one offered by Clearview AI.”

Clearview AI scrapes billions of photos from across the internet for its powerful system – a practice which has come under fire from privacy campaigners. “Common law has never recognised a right to privacy for your face,” Clearview AI lawyer Tor Ekeland argued recently.

The American Civil Liberties Union (ACLU) launched a lawsuit against Clearview AI last month after calling it a “nightmare scenario” for privacy.

“Companies like Clearview will end privacy as we know it, and must be stopped,” said Nathan Freed Wessler, senior staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Aside from the company’s practices, concerns have been raised about Clearview AI’s extensive ties with the far-right. Ekeland himself has gained notoriety as “The Troll’s Lawyer” for defending clients such as neo-Nazi troll Andrew Auernheimer.

Backlash over Clearview AI forced the company to announce it will no longer offer its services to private companies. The EU’s ruling will limit Clearview AI’s potential customers even further.

Concerns about facial recognition services have grown in recent weeks amid protests over racial discrimination. Facial recognition services have repeatedly been found to falsely flag minorities, stoking fears that they will lead to automated racial profiling.

IBM and Amazon both announced this week that they will no longer provide facial recognition services to law enforcement, and have called on Congress to increase regulation to help ensure future deployments meet ethical standards.

(Photo by Christian Lue on Unsplash)

Police in China will use AI face recognition to identify ‘lost’ elderly (5 Aug 2019)

Chinese police hope to use AI-powered facial recognition, in combination with the nation’s mass surveillance network, to identify lost elderly people.

The country’s surveillance network is often scrutinised for being invasive, but the ability to detect potentially vulnerable people helps to shift the perception that it primarily benefits the government.

Public data suggests around 500,000 elderly people get lost each year – the equivalent of around 1,370 per day. About 72 percent of the missing persons were reported as having mental impairments, requiring extra policing effort to identify them and ensure they get home safely.

China is home to many pioneering facial recognition companies. SenseTime became the world’s most funded AI startup in April last year, and launched another $2 billion funding round in January.

Part of the attraction for investors in SenseTime is its provision of technology for the Chinese government’s vast surveillance network. SenseTime’s so-called Viper system aims to process and analyse over 100,000 simultaneous real-time streams from traffic cameras, ATMs, and more to automatically tag and keep track of individuals.

SenseTime claims to have experienced around 400 percent growth in recent years, evidence of the appetite for facial recognition technology.

Despite being such a major player in facial recognition technology, SenseTime CEO Xu Li has called for facial recognition standards to be established for a ‘healthier’ industry.

Public distrust in facial recognition systems remains high. Earlier this year, AI News reported on findings from the American Civil Liberties Union that Amazon’s facial recognition AI erroneously matched people with darker skin tones against criminal mugshots more often.

A later report from the Algorithmic Justice League tested facial recognition algorithms from Microsoft, Face++, and IBM. All of the algorithms tested struggled most with darker-skinned females, with as low as just 65.3 percent accuracy.

Following the findings, IBM said it would improve its algorithm. When reassessed, IBM’s accuracy for darker-skinned females jumped from 65.3 percent to 83.5 percent.

Algorithmic Justice League founder Joy Buolamwini said: “So for everybody who watched my TED Talk and said: ‘Isn’t the reason you weren’t detected because of, you know, physics? Your skin reflectance, contrast, et cetera,’ — the laws of physics did not change between December 2017, when I did the study, and 2018, when they launched the new results.”

“What did change is they made it a priority.”

In June, Face++ introduced a smart city AI called Wisdom Community in the Haidian district of Beijing. Wisdom Community also helps to detect and track down elderly persons who have been reported missing.

Face++’s technology has already been assisting with missing person cases elsewhere in the country. In October last year, a man in his 70s with Alzheimer’s disease was identified at Changle Middle Road Police Station in Xincheng District, Xi’an, and sent home in less than an hour.

While huge concerns remain around facial recognition – such as accuracy and privacy invasion – its use by Chinese police to help find vulnerable people shows a positive use case that could save lives and reunite families.

AI tags potential criminals before they’ve done anything (28 Nov 2018)

British police want to use AI to highlight who is at risk of becoming a criminal before they’ve actually committed any crime.

Although it sounds like a dystopian nightmare, there are clear benefits.

Resources and outreach programs can be allocated to attempt to prevent a crime, stop anyone from becoming a victim, and avoid the costs associated with prosecuting and jailing someone.

With prisons overburdened and space limited, reducing the need to lock someone up is a win for everyone. Courts can also prioritise other matters to improve the efficiency of the whole legal infrastructure.

The proposed system is called the National Data Analytics Solution (NDAS) and uses data from local and national police databases.

According to NDAS’ project leader, over a terabyte of data has been collected from the aforementioned databases. This data includes logs of committed crimes as well as records of around five million identifiable people.

There are over 1,400 indicators within the data which the AI uses to calculate an individual’s risk of committing a crime. Such indicators could include past offences, whether the person had assistance in committing them, and whether people in their network are criminals.
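The article gives no details of NDAS’s actual model, but risk scoring of this kind is often implemented as a weighted combination of indicators passed through a logistic function. The sketch below is purely illustrative: the indicator names, weights, and bias term are all invented.

```python
# Purely illustrative risk score over binary indicators -- NOT the actual
# NDAS model, whose features and weights are not public. The names,
# weights, and bias term are all invented for this sketch.
import math

WEIGHTS = {
    "past_offences": 1.2,
    "known_offenders_in_network": 0.8,
    "previous_victimisation": 0.5,
}
BIAS = -2.0

def risk_score(indicators: dict) -> float:
    """Map binary indicators to a 0-1 'risk' via a logistic function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in indicators.items())
    return 1 / (1 + math.exp(-z))

print(risk_score({"past_offences": 1,
                  "known_offenders_in_network": 1,
                  "previous_victimisation": 0}))  # 0.5 with these toy weights
```

Because the weights are learned from historic police data, any bias in that data flows straight into the scores – the core concern raised throughout this archive.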

Alleviating some fears, there are no plans to arrest someone before they’ve committed a crime based on their potential. The system is designed as a preventative measure.

INTERPOL investigates how AI will impact crime and policing (17 Jul 2018)

INTERPOL hosted an event in Singapore bringing leading experts together with the aim of examining how AI will affect crime and prevention.

The event, organised by INTERPOL and the UNICRI Centre for AI and Robotics, was held at the former’s Global Complex for Innovation. Experts from across industries gathered to discuss issues and several private sector companies gave live demonstrations of related projects.

Some technological advances in AI pose a threat. In a recent interview, Irakli Beridze of the UNICRI Centre for AI and Robotics gave us an example of AI potentially being used for impersonation, which could eventually lead to completely automated fraud.

Speaking about the Singapore event, Beridze said:

“I believe that we are taking critical first steps to building a platform for ‘future-proofing’ law enforcement.

Initiatives such as this will help us to prepare for potential future types of crime and capitalize on technological advancements to develop new and effective tools for law enforcement.”

Bringing policing up to date on these emerging threats is vital. Some 50 law enforcement participants from 13 countries attended the event to exchange their expertise with the private sector and academia.

Some of the potential use cases for AI in law enforcement were fascinating. Discussions were held on topics such as conducting virtual autopsies, predicting crime to optimise resources, detecting suspicious behaviour, combining AI with blockchain technology for traceability, and automating patrol vehicles.

Anita Hazenberg, Director of INTERPOL’s Innovation Centre, commented:

“Innovation is not a matter for police alone. Strong partnerships between all stakeholders with expertise is necessary to ensure police can quickly adapt to future challenges and formulate inventive solutions.”

Naturally, there are many obstacles to overcome both technologically and socially before such ideas can be used.

One major concern is AI bias. Especially where facial recognition and behaviour detection are concerned, there’s potential for automated racial profiling.

A 2010 study by researchers at NIST and the University of Texas at Dallas found that algorithms designed and tested in East Asia are better at recognising East Asians, while those designed in Western countries are more accurate at detecting Caucasians.

Several live demonstrations were given at the event by private sector companies, covering virtual communications, facial recognition, and incident prediction and response optimisation systems.

Police forces are planning to invest heavily in AI. Singapore Police, for example, has deployed patrolling robots and shared its experience with them during the conference.

Next on INTERPOL’s agenda is drones. The organisation will be holding a drone expert forum in August to further assist police in understanding how drones can be a tool, a threat, and a source of evidence.

What impact do you think AI will have on crime and policing?
