Facial recognition is becoming part of the fabric of everyday life. You might already use it to log in to your phone or computer, or to authenticate payments with your bank. In China, where the technology is more common, your face can be used to buy fast food or to claim your allowance of toilet paper at a public restroom. And this is to say nothing of how law enforcement agencies around the world are experimenting with facial recognition as a tool of mass surveillance.

But the widespread uptake of this technology belies underlying structural problems, not least the issue of bias. By this, researchers mean that software used for facial identification, recognition, or analysis performs differently based on the age, gender, and ethnicity of the person it’s identifying.

A study published in February by researchers from the MIT Media Lab found that facial recognition algorithms designed by IBM, Microsoft, and Face++ had error rates of up to 35 percent when detecting the gender of darker-skinned women, compared with less than 1 percent for lighter-skinned men. In this way, bias in facial recognition threatens to reinforce the prejudices of society, disproportionately affecting women and minorities, potentially locking them out of the world’s digital infrastructure or inflicting life-changing judgments on them.
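
To make that kind of disparity concrete, here is a minimal, illustrative sketch of how per-group error rates for a gender classifier could be compared. This is not the MIT study’s actual methodology, and the function and group names below are hypothetical.

```python
from collections import defaultdict

def per_group_error_rates(predictions, labels, groups):
    """Return the classification error rate for each demographic group.

    `predictions`, `labels`, and `groups` are parallel lists. All names
    here are hypothetical and for illustration only.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        if pred != label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# A biased system shows large gaps between groups, e.g. an error rate
# near 0.35 for one group and under 0.01 for another.
rates = per_group_error_rates(
    predictions=["F", "M", "M", "F"],
    labels=["M", "M", "F", "F"],
    groups=["darker_female", "lighter_male", "darker_female", "lighter_male"],
)
print(rates)  # {'darker_female': 1.0, 'lighter_male': 0.0}
```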

Microsoft has addressed the issue in a recent blog post by its chief legal officer, Brad Smith. In the post, Smith said it was time for the US government to regulate its own use of facial recognition, though not its deployment by private firms. This could include, perhaps, setting minimum accuracy standards.

IBM did not respond to questions, but the company announced the release last month of a diverse dataset for training facial recognition systems, curated to combat bias. Ruchir Puri, chief architect of IBM Watson, told The Verge in June that the company was interested in helping establish accuracy benchmarks. “There should be matrixes through which many of these systems should be judged,” said Puri. “But that judging should be done by the community, and not by any particular player.”

Amazon also did not respond to questions, but directed us to statements it issued earlier this year after being criticized by the ACLU for selling facial recognition to law enforcement. (The ACLU made similar criticisms today: it used the company’s facial recognition software to compare pictures of members of Congress against criminal mugshots, and found that it incorrectly matched 28 of them.)

Amazon says it will withdraw customers’ access to its algorithms if they are used to illegally discriminate or violate the public’s right to privacy, but doesn’t mention any form of oversight. The company told The Verge it had teams working internally to test for and remove biases from its systems, but would not share any further information. This is notable considering Amazon continues to sell its algorithms to law enforcement agencies.

Of the enterprise vendors The Verge approached, some did not offer a direct response at all, including FaceFirst, Gemalto, and NEC. Others, like Cognitec, a German firm which sells facial recognition algorithms to law enforcement and border agencies around the world, admitted that avoiding bias was hard without the right data.

“Databases that are available are often biased,” Cognitec’s marketing manager, Elke Oberg, told The Verge. “They might just be of white people because that’s whatever the provider had available as models.” Oberg says Cognitec does its best to train on diverse data, but says market forces will weed out bad algorithms. “All the vendors are working on [this problem] because the public is aware of it,” she said. “And I think if you want to survive as a vendor you will definitely need to train your algorithm on highly diverse data.”
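
Training on “highly diverse data,” as Oberg describes, often begins with auditing and rebalancing the dataset itself. The sketch below shows one naive approach (oversampling under-represented groups); it is a simplified, hypothetical illustration, not Cognitec’s or any other vendor’s actual pipeline, and all names in it are invented.

```python
import random
from collections import Counter

def rebalance_by_group(samples, group_key, seed=0):
    """Naively oversample under-represented groups so that every
    demographic group appears equally often in the training set.

    `samples` is a list of dicts and `group_key` names the demographic
    field. This is a toy illustration, not any vendor's real pipeline.
    """
    rng = random.Random(seed)
    counts = Counter(s[group_key] for s in samples)
    target = max(counts.values())
    balanced = list(samples)
    for group, count in counts.items():
        pool = [s for s in samples if s[group_key] == group]
        balanced.extend(rng.choice(pool) for _ in range(target - count))
    return balanced

training_set = [
    {"image": "a.jpg", "group": "lighter_male"},
    {"image": "b.jpg", "group": "lighter_male"},
    {"image": "c.jpg", "group": "lighter_male"},
    {"image": "d.jpg", "group": "darker_female"},
]
balanced = rebalance_by_group(training_set, "group")
print(Counter(s["group"] for s in balanced))
# Counter({'lighter_male': 3, 'darker_female': 3})
```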

Photo caption: Facial recognition is commonly used at the border, an environment where it is easier to avoid mistakes. (Raedle/Getty Images)

How can we address the issue of bias?

These answers show that although there is awareness of the problem of bias, there’s no coordinated response. So what to do? The solution most experts suggest is conceptually simple, but tricky to implement: create industry-wide tests for accuracy and bias.

The interesting thing is that such a test already exists, sort of. It’s called the FRVT (Face Recognition Vendor Test), and it is administered by the National Institute of Standards and Technology, or NIST. It tests the accuracy of dozens of facial recognition systems in different scenarios, like matching a passport photo to a person standing at a border gate, or matching faces from CCTV footage to mugshots in a database. And it tests “demographic differentials”: how algorithms perform based on gender, age, and race.
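
NIST’s actual FRVT protocol is far more involved, but the core idea of a demographic differential for a 1:1 verification algorithm can be sketched roughly as follows. This is an assumption-laden illustration, not NIST’s methodology; the function name, scores, and threshold are invented.

```python
def verification_rates(scores, same_person, threshold):
    """False match rate (FMR) and false non-match rate (FNMR) for a
    1:1 face verification algorithm at a fixed similarity threshold.

    `scores` are similarity scores for face pairs; `same_person` flags
    whether each pair really shows the same person. Illustrative only.
    """
    false_matches = missed_matches = impostor_pairs = genuine_pairs = 0
    for score, same in zip(scores, same_person):
        if same:
            genuine_pairs += 1
            if score < threshold:
                missed_matches += 1
        else:
            impostor_pairs += 1
            if score >= threshold:
                false_matches += 1
    return false_matches / impostor_pairs, missed_matches / genuine_pairs

# A "demographic differential" shows up as different error rates per
# group at the same threshold.
for group, scores, same in [
    ("group_a", [0.9, 0.4, 0.8, 0.2], [True, False, True, False]),
    ("group_b", [0.6, 0.4, 0.5, 0.3], [True, False, True, False]),
]:
    fmr, fnmr = verification_rates(scores, same, threshold=0.7)
    print(group, "FMR:", fmr, "FNMR:", fnmr)
```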

However, the FRVT is entirely voluntary, and the organizations that submit their algorithms tend to be either enterprise vendors trying to sell their services to the federal government, or academics testing out new, experimental models. Smaller firms like NEC and Gemalto submit their algorithms, but none of the big commercial tech companies do.

Garvie suggests that rather than creating new tests for facial recognition accuracy, it might be a good idea to expand the reach of the FRVT. “NIST does a very admirable job in conducting these tests,” says Garvie. “[But] they also have limited resources. I suspect we would need legislation or federal funding support to increase the capacity of NIST to test other companies.” Another challenge is that the deep learning algorithms deployed by the likes of Amazon and Microsoft can’t be easily sent for analysis. They are huge pieces of constantly updating software, very different from older facial recognition systems, which can usually fit on a single thumb drive.

Brian Brackeen, CEO of the facial recognition firm Kairos, has announced that his company will not sell facial recognition systems to law enforcement at all because of the potential for misuse.

(A 2016 report from Georgetown Law estimated that law enforcement facial recognition networks already include images of half the country’s adult population.) Similarly, decisions made by the government using these algorithms will have a greater impact on individuals’ lives.

“The use case [for law enforcement] isn’t just a camera on the street; it’s body cameras, mugshots, line-ups,” says Brackeen. If bias is a factor in these scenarios, he says, then “you have a greater opportunity for a person of color to be falsely accused of a crime.”

Discussion about bias, then, looks like it will only be the beginning of a much bigger debate. As Buolamwini says, benchmarks can play their part, but more needs to be done: “Companies, researchers, and academics developing these tools have to take responsibility for placing context limitations on the systems they develop if they want to mitigate harms.”
