In an August post, I wrote about academic reports alleging ethnic and gender bias in facial recognition algorithms. Those reports led some major technology vendors to withhold sales of their facial recognition software from law enforcement agencies in the United States. Fortunately, we have an objective organization to help answer the question of whether facial recognition algorithms are biased.

That organization is the National Institute of Standards and Technology (NIST), a nonregulatory government agency under the umbrella of the U.S. Department of Commerce. Founded in 1901, NIST operates one of the country's oldest physical science laboratories, providing measurements and standards for a wide range of technologies, including biometrics. Its mission is to "promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life."

Since 2000, NIST has been evaluating the performance of facial recognition algorithms submitted by vendors as part of an ongoing objective measurement effort called the Face Recognition Vendor Test. Testing results are updated and published annually. While vendor participation is voluntary, NIST believes the participants are representative of a substantial part of the facial recognition industry.

The overall testing cycle comprised three types of facial recognition algorithm testing: one-to-one matching, one-to-many matching, and, most recently, testing of demographic effects. This testing used a database of approximately 18 million high-quality facial images representing 8.5 million individuals. The testing included 189 commercial algorithms submitted by 99 developers from companies and academic institutions around the world.
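To make the distinction between the first two testing types concrete, here is a minimal Python sketch of the two matching modes. It assumes face images have already been reduced to fixed-length embedding vectors; the cosine-similarity scoring, the 0.6 threshold, and all names are illustrative and are not NIST's actual test harness.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two face-embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.6):
    """One-to-one matching: is the probe the identity it claims to be?"""
    return cosine(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.6):
    """One-to-many matching: search an entire gallery for the best match."""
    best_id = max(gallery, key=lambda name: cosine(probe, gallery[name]))
    return best_id if cosine(probe, gallery[best_id]) >= threshold else None

# Toy demo with random 128-dimensional "embeddings."
rng = np.random.default_rng(0)
gallery = {name: rng.normal(size=128) for name in ("alice", "bob")}
probe = gallery["alice"] + rng.normal(scale=0.05, size=128)  # noisy re-capture
print(verify(probe, gallery["alice"]))  # True: same person
print(identify(probe, gallery))         # "alice"
```

The practical difference matters: a one-to-many search multiplies the opportunities for a false match, which is one reason NIST tests the two modes separately.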

The measurements that NIST made were categorized into false negatives (where two images of the same individual are not associated) and false positives (where images of two different individuals are erroneously identified as the same person). The latter error has far greater consequences, including the risk of giving an unauthorized person access to a secure location or of falsely arresting an individual. The overall results of the testing are too detailed and numerous to list in this post. As one would expect from such a wide set of submissions, the algorithms' results ranged from what I would categorize as highly accurate to substandard. I recommend you watch a YouTube video in which Mei Ngan of NIST covers the test results. (The Women In Identity organization produced the video.) I think that, after you see the results, you'll agree with my assessment of whether there is bias in facial recognition: "It depends." Some of the algorithms show no bias and others do, indicating a need for further improvement in their development.
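For readers who want to see how the two error types described above are tallied, here is a small Python sketch. The comparison scores and the threshold are made up for illustration; NIST's real evaluations run over millions of comparisons, but the bookkeeping is conceptually similar.

```python
import numpy as np

def error_rates(scores, same_person, threshold):
    """Tally false negatives and false positives at a decision threshold.

    scores:       similarity score for each image-pair comparison
    same_person:  True where the pair really shows the same individual
    """
    scores = np.asarray(scores, dtype=float)
    same_person = np.asarray(same_person, dtype=bool)
    declared_match = scores >= threshold
    # False negative: same person, but the images were not associated.
    fnr = float(np.mean(~declared_match[same_person]))
    # False positive: two different people declared to be the same person.
    fpr = float(np.mean(declared_match[~same_person]))
    return fnr, fpr

# Made-up comparison scores; real evaluations use millions of pairs.
scores      = [0.91, 0.55, 0.80, 0.30, 0.72, 0.10]
same_person = [True, True, True, False, False, False]
print(error_rates(scores, same_person, threshold=0.7))
```

Note the built-in tension: raising the threshold pushes false positives down but false negatives up, which is why the two rates have to be reported together at a stated threshold.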

In my August post, I also raised the issue of how face coverings will affect the performance of facial recognition programs such as those run by the Transportation Security Administration and Customs and Border Protection. NIST recently tested the algorithms against images of masked faces and generally found that accuracy was substantially lower, although developers are modifying their algorithms to improve performance. Ms. Ngan covers this subject in her presentation as well.

Stay tuned for more biometrics information and discussion in our posts, and check out our October 29 Talk About Payments webinar that will feature one of the foremost biometrics experts in the country.