Federal study confirms racial bias of many facial-recognition systems, casts doubt on their expanding use

Drew Harwell
The Washington Post

Dec. 19, 2019 at 3:43 p.m. PST

Facial-recognition systems misidentified people of color more often than white people, a landmark federal study released Thursday shows, casting new doubts on a rapidly expanding investigative technique widely used by law enforcement across the United States.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators, in which an image is compared to thousands or millions of others in hopes of identifying a suspect.

Algorithms developed in the United States also showed high error rates for “one-to-one” searches of Asians, African Americans, Native Americans and Pacific Islanders. Such searches are critical to functions including cellphone sign-ons and airport boarding schemes, and errors could make it easier for impostors to gain access to those systems.
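The distinction between the two search modes can be sketched in code. This is a toy illustration only: random vectors stand in for learned face embeddings, and the cosine-similarity metric and threshold are assumptions for the sketch, not any vendor’s actual system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity in [-1, 1] between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one(probe, enrolled, threshold=0.8):
    """Verification (e.g. phone unlock): does the probe match this one enrolled face?"""
    return cosine_similarity(probe, enrolled) >= threshold

def one_to_many(probe, gallery, threshold=0.8):
    """Identification (e.g. a police search): which gallery entries score above threshold?"""
    return [i for i, g in enumerate(gallery)
            if cosine_similarity(probe, g) >= threshold]

# Toy gallery: random 128-dim vectors stand in for learned face embeddings.
rng = np.random.default_rng(0)
gallery = [rng.normal(size=128) for _ in range(1000)]
probe = gallery[42] + rng.normal(scale=0.1, size=128)  # noisy re-capture of face 42

print(one_to_many(probe, gallery))
```

A false positive in the one-to-one case lets an impostor through a single gate; in the one-to-many case, the same error rate is multiplied across every face in the gallery, which is why large investigative searches are especially sensitive to it.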


3 comments

  1. There is a potential technical reason for this. In about 2004, I saw a movie that included computer-generated humans, perhaps intended to be as precisely simulated as the technology of the time allowed. Most of the characters were white, but one was black. (Maybe this was one of the “Final Fantasy” movies, but I’m not sure…)

    It was noticed that the rendering of the black character seemed better than the rest. I read a technical explanation: if you are trying to portray (white) skin, the light does not merely reflect off the surface: some comes back from beneath that surface. Simulating that with computer techniques isn’t easy. In contrast, black skin behaves more like a painted-on surface, so it is much easier to simulate.

    It’s possible this leads to a better ability to compare faces: if the analysis includes near-infrared, perhaps the locations of blood vessels under the surface of the skin add to the uniqueness factor. But black skin might tend to shield such blood vessels, and thus impair the identification function.

    • Makes sense. Although an article about the technical reasons computer techniques handle facial recognition differently on darker skin tones would get a lot fewer clicks than “Racial bias spreads through computer algorithms,” etc.

      • Ha ha, yes. I don’t see much discussion of the role of IR illumination in facial recognition. For example, daylight outdoors includes a great deal of near-IR; indoor fluorescent lighting probably has little.
        There is probably a need for research into what invisible-in-visible-light makeup could be used to trick outdoor facial recognition. An infrared-absorptive chemical, drawn in lines over a person’s face, might confuse an outdoor facial-recognition system (and since it cannot be seen in visible light, it wouldn’t attract attention).
        If a common camera (such as on a smartphone) is to render realistic images, it probably has to be filtered to remove most near-IR. We ordinarily see faces based solely on their visible-light reflectivity. Illuminating faces with near-IR, viewing them with a camera sensitive to near-IR, and then displaying them (or using them for FR purposes) might be a problem.
