Lack Of Disclosure In AI Facial Recognition Arrests Raises Concerns
In a Rush? Here are the Quick Facts!
- Hundreds of Americans were arrested via facial recognition technology without being told of its use.
- In more than 1,000 cases, police concealed their reliance on facial recognition.
- Federal law does not require disclosure of facial recognition technology usage.
An investigation published today by The Washington Post reveals that hundreds of Americans have been arrested due to facial recognition technology, without ever being informed of its use.
The investigation uncovered more than 1,000 criminal cases across 15 states where police failed to disclose their reliance on the technology, often masking its role by attributing suspect identification to “investigative means” or witnesses.
This lack of transparency raises concerns about fairness, particularly as facial recognition software has been shown to be prone to errors, especially in identifying people of color.
One example is the case of Robert Williams, a Black man who reached a settlement with the City of Detroit after being wrongfully arrested in 2020 because of a faulty facial recognition match.
The Post reports that federal tests indicate leading facial recognition software is more likely to misidentify certain groups, including people of color, women, and older individuals.
According to Patrick Grother, who leads biometric testing at the National Institute of Standards and Technology in Washington, this is because those groups' facial features are underrepresented in the data used to train the algorithms, The Post reported.
The Post reported that in Evansville, Indiana, and Pflugerville, Texas, suspects were identified using facial recognition technology, but were never informed of its role in their arrest. Police cited physical features or investigative databases instead, concealing the software’s involvement.
This pattern was common across many departments, with only 30 out of 100 providing relevant records to The Post, reflecting a broader reluctance to disclose the use of facial recognition.
The Post explains that facial recognition software, such as Clearview AI, works by comparing images from crime scenes to vast databases of photos, including mugshots and social media images.
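At a high level, such systems convert each face photo into a numeric "embedding" vector and then search a database for the closest vectors. The sketch below illustrates only that matching step; the embeddings, names, and scores are invented for demonstration, and real systems like Clearview AI use deep neural networks and far larger databases.

```python
# Illustrative sketch of database matching by embedding similarity.
# All vectors and names here are made up; real embeddings come from
# a neural network and have hundreds of dimensions.
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical database: person -> precomputed face embedding
database = {
    "person_a": [0.9, 0.1, 0.3],
    "person_b": [0.2, 0.8, 0.5],
    "person_c": [0.4, 0.4, 0.9],
}

def best_matches(probe, db, top_k=3):
    """Rank database entries by similarity to the probe embedding."""
    scores = [(name, cosine_similarity(probe, emb)) for name, emb in db.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# Embedding of a hypothetical crime-scene photo
probe = [0.85, 0.15, 0.35]
for name, score in best_matches(probe, database):
    print(f"{name}: {score:.3f}")
```

Note that the system always returns the *closest* match, not necessarily a *correct* one, which is why a poor-quality probe image or an underrepresented face can still produce a confident-looking but wrong identification.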
Critics argue that this practice puts innocent people at risk of being falsely implicated in crimes simply because their image appears online, The Post reports. Civil rights groups and defense lawyers add that people have a right to know when they are identified by such technology, especially given its susceptibility to error.
The Post states that in some recent court cases, facial recognition results have been successfully challenged due to questions about the technology’s reliability. However, The Post notes that police departments continue to defend their practice of non-disclosure, with some citing investigative privilege as justification.
Despite growing concern, federal law does not currently require police to disclose the use of facial recognition, The Post reports. In some states, such as New Jersey, courts have ruled that defendants have the right to know if facial recognition was used in their case.
Most states, however, have no such requirement, leaving defendants in the dark about the role AI may have played in their arrests, The Post notes.