An innocent woman was arrested today (while working in an informal job, without any personally identifiable documentation) in Rio de Janeiro. The automated facial recognition system (FRS) identified her as another woman, wanted for murder - it turns out the actual criminal is _already in jail_, and the police organization operating the FRS hadn't yet been informed, due to delays between the systems.
This is the second day of operation of the system, according to the press. However, a restricted version of the same FRS was used for 15 days during the Carnaval festivities with no alarming failures and reportedly resulted in a few arrests.
It has now been deployed in 25 locations in Copacabana. (Anecdotally, I live here and can't tell you where they are - the equipment, cameras, etc. must be well hidden, or I'm terrible at spotting them).
Source, in Portuguese: "Facial recognition fails in its second day and innocent woman confused with criminal is arrested" https://oglobo.globo.com/rio/reconhecimento-facial-falha-em-segundo-dia-mulher-inocente-confundida-com-criminosa-ja-presa-23798913
Related previous submission (with zero comments): https://news.ycombinator.com/item?id=19074434
And in other news, 25% of all stranger identifications by human eyewitnesses were also found to be erroneous.
I agree that automated systems applied to the same task will (eventually) beat human performance by a lot, but there are two issues that make this less relevant in this case.
The first relates to this application of facial recognition targeting poor people. This woman had no documentation on her. I'd wager this is really common in Brazil. How could she possibly prove she is _NOT_ the person the software with 99.999..% accuracy says she is?
If the actual criminal wasn't already in jail, would she be set free in the same day she was arrested?
The second issue is perhaps more universal to other automations, but it certainly compounds the first one: it's a matter of _scale_. Human cops can't check every person they see against the entire database of wanted people.
A 25% false-positive rate from every cop amounts to a hell of a lot less than a 0.001% rate from an automated system applied to every crowd, everywhere, all the time.
> 99.999% accuracy
OK, so for a given facial profile, 0.001% of people passing in a crowd will be incorrectly identified as a match. That's 1 in 100,000 people who pass by will be incorrectly matched against a particular profile, correct? How many people walk past this camera? 1000 in a day? How many cameras are deployed in an area? How many facial profiles does it have in its database that it is checking against?
It seems to me that if you have a lot of faces you are looking for and a big crowd, or even a modest crowd, then you are going to get a lot of false positives. Now if the wanted person walks past the camera then it has a chance to get its identification correct. If they don't walk past that camera there's no chance to get its identification correct. A given streetcorner with a camera where 1000 people walk past in a day may be in a city with 5 million people, only one of whom is the wanted person, and who might never pass that streetcorner.
Given all this, if a match is triggered, what are the actual odds that the person is the right person? Is it more than 1 in 100? More than 1 in 1000? Does an officer have probable cause for an arrest if there is a 0.1% chance someone has done something? Do they have reasonable suspicion for a detention? Is the officer justified in shooting and killing the person who has had a facial match if they attempt to arrest the person and the person runs away?
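The "what are the actual odds" question above is a standard base-rate calculation via Bayes' rule. A minimal sketch, using assumed numbers taken from the scenario in this thread (one wanted person in a city of 5 million, a hypothetical 99% true-positive rate, and the 0.001% false-positive rate discussed above):

```python
# Positive predictive value of a facial-recognition match, via Bayes' rule.
sensitivity = 0.99          # assumed: P(match | person is the target)
false_positive_rate = 1e-5  # 0.001%: P(match | person is NOT the target)
prior = 1 / 5_000_000       # assumed: one wanted person in a city of 5 million

# Total probability that any given passer-by triggers a match.
p_match = sensitivity * prior + false_positive_rate * (1 - prior)

# P(person is the target | match)
p_wanted_given_match = sensitivity * prior / p_match
print(f"{p_wanted_given_match:.2%}")
```

Under these assumptions, a match implies only about a 2% chance the person really is the target - roughly 1 true hit per 50 alarms - which is exactly the kind of figure the probable-cause questions above are probing.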
Maybe an unexpected benefit of AI-augmented law enforcement is that police, lawyers, and judges will be required to learn (more) applied statistics.
> How could she possibly prove she is _NOT_ the person the software with 99.999..% accuracy says she is?
She couldn't prove it any more than if the source of the erroneous identification had been a human looking at a wanted picture and matching her against it.
Any reasonable criminal justice system incorporates the fact that innocent people will be arrested because it's impractical to impose high proof requirements for arrest, but at the same time will provide due process and have high proof requirements for conviction.
If a justice system has a problem on either of those dimensions, that needs to be urgently addressed, independently of the use of facial recognition as an input to arrest decisions.
Scalability is definitely the problem here.
For the first issue, carry your documents. I've been taught that since I first got my ID at 12yo.
My understanding is that the automation is there to flag people who are wanted by the law, right? They don't voluntarily hear a beep and walk into jail. There are still police officers who have to arrest them and check their documents.
The situation is so bad in Rio that society simply will have to deal with any deficiencies in the automated system. It will still be better than the current reality.
The fact that police officers are probably not giving people of color enough flexibility in situations like these is beyond the scope of the automation system.
EDIT: Carrying your documents is not a popular opinion among the US audience, I understand. Still, I find it very reasonable and I think the situation that happened to this woman is being blown out of proportion.
Indeed, the notion of carrying your documents is very culturally dependent, ranging from a rational decision to a quasi-religious, fervently held belief.
Coming from Eastern Europe originally, I carry my ID everywhere as a matter of habit in Canada; but in turn get very very very nervous and uncomfortable if it's taken from me (for example, at hotels in some countries, or at the border for a quick check).
The US tries not to be a papers please police state where it's a crime not to carry papers with you and present them on demand. We consider systems like that to be totalitarian and despotic based on our past experiences observing and waging war against hostile oppressive nations with such systems.
> We consider systems like that to be totalitarian
Who is "we"? Non-citizens of the USA are required to carry their immigration status documents with them, and checkpoints within the 100-mile border zone (where 2/3 of the population lives) are one place where they must be presented. If you are a US citizen without proof, and are stopped, and the official thinks you are lying, I believe you are going to have A Bad Day.
> If you are a US citizen without proof, and are stopped, and the official thinks you are lying, I believe you are going to have A Bad Day.
e.g. US citizen Valeria Alvarado was shot and killed when she panicked and tried to get away from plainclothes CBP agents.
Well, in most cases "we" means the permanent legal inhabitants, not tourists. Legislation is not created by tourists.
Permanent legal residents are required to carry their IDs.
He/she is not making this up. This is pretty much exactly how I think. When somebody says, “don’t want to be arrested by a bot? Just carry you papers!”, that sounds like absolutely evil tyranny to me.
The US requires permanent residents to carry their green cards at all times.
What’s the point of bringing that up? They’re not mutually exclusive. The difference though is that the accused is granted the right to test the veracity of the witness. Will the accused in cases like this be allowed to challenge the code? How can a poor person (who will undoubtedly be the one affected by this issue) afford that? And even if they can, the truth will be muddied by “experts” and a judge without any training or knowledge about facial recognition.
What is there to challenge, other than that you look like someone else?
Or, as one example, that the algorithm was never tested on people of color and so the false positive rate is higher?
You're assuming the "algorithm" has the same faulty memory as witnesses. There are recorded images that can be triple checked by humans before any verdict is given (in the case of a conviction).
A judge is not simply going "oh sorry, the system tells me it was you and I can't verify that. Gotta trust it".
You’re equally assuming plenty. If someone is arrested based on a false positive, they would likely be interrogated by the police. How crazy would it be for that same person to admit to a crime they didn’t do? There are plenty of hypotheticals. My point is that there need to be strong checks against the use of facial recognition. Citizens should see the algorithm, the training data, and the results. The public should test the software rather than a poor criminal defendant.
As long as biometrics alone are the basis for an arrest, we'll see more and more of this. I'm curious what the accuracy score for the biometric source "evidence" was on which she was arrested, and what the match score was between her face and the source evidence.
She wasn't arrested. She was detained and taken to the police station and her family brought her documents to prove she wasn't the other woman they were after.
Being detained is in the same category as being arrested. She was forcibly taken into custody only to be released without any form of compensation for lost time.
In many countries people can be held for significant amounts of time without being formally charged with any crime.
Similar category but very different outcomes, rights, etc.
You're making a pedantic argument and backing it up with a link specific to the US court system. What the US supreme court says has no bearing in Brazil.
No, I'm not. The Brazilian supreme court says pretty much the same thing.
I feel you're missing the point entirely: to the person who was forcibly taken into custody by police and likely didn't have a clear and confident understanding of their rights (a fair assumption for the general populace, I'd say), it doesn't matter what you call it.
Of course it's upsetting. I don't think anyone is making the case that it's just a regular day and you shouldn't complain. The automation system needs to improve, that's blatantly obvious. And the police officers need to learn to work with it.
This is what happened (detention): "I was working and the police arrived. They confounded me with someone else they were looking for and I had to go to the police station and prove I wasn't that person. What a day".
Not this (arrest): "I was working and the police arrived. They confounded me with someone else they were looking for and I had to go to the police station and prove I wasn't that person. They still didn't believe me and I'm in a jail cell waiting for a judge to hear my case. I don't think I'll be home for a few days".
> She wasn't arrested. She was detained
Good thing no one is ever harmed while being detained but not yet arrested. /s
Being detained doesn’t equal being beaten up.
> She wasn't arrested.
> She was detained and taken to the police station
What's that if it's not an arrest?
That's a detention.
An arrest is when you're charged with a crime. It's a much more serious situation.
I severely dislike public facial recognition systems, but wouldn't it basically be the same if a police officer mistook her for someone else based on a sketch or a composite? The scale is obviously different though.
With a sketch or composite, the identifying individual is the one with the full responsibility to make the match. With a recognition system, the software externalizes the responsibility, at best giving an individual the responsibility to reject the match. That setup results in the recognition system being inherently given a strong tendency to override individual judgement.
> the software externalizes the responsibility
The arresting officer still has the responsibility though to decide whether or not to use deadly force against the person should they resist the false arrest.
Human cops don't have implicit faith in the accuracy of their human colleagues. But if a COMPUTER lists you as a biometric match for a person of interest/suspect, then clearly you did _something_...
Right. Many people believe that computers can't make mistakes. It might not be true, but you'll still have a harder time convincing them that despite the computer identifying you, the computer is mistaken.
And humans are inclined to believe CSI type magic.
Pardon my ignorance, but isn't this a human rights issue? Can the UN or other organizations not question the governments who are in such a rush to enforce these systems despite all the evidence that they aren't reliable yet?
Yes it is.
But under the current authoritarian Brazilian president "human rights" has been demonized.
Rio de Janeiro's Governor has advocated for snipers to shoot-to-kill people identified as criminals in the favelas. It's horrible.
I'm not concerned with false positives. That's something the legal system can expect and handle properly. My issue with this kind of technology is the potential for abuse by incumbent powers. What happens when the government starts using this to track whistleblowers?
In Brazil, simply being Black (preto) can get you in trouble with the military police; I can only imagine it has gotten much worse under Bolsonaro.
There's a first: a wrong person apprehended.