How face recognition rules in the US got stuck in political gridlock
This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
This week, I published an in-depth story about efforts to restrict face recognition in the US. The story grew out of a team meeting a few months back, when one of my editors casually asked what on earth had happened to the once-promising campaign to ban the technology. Just a few years ago, the US seemed on the cusp of restricting police use of the technology at the national level.
I even wrote a story in May 2021 titled “We could see federal regulation on face recognition as early as next week.” News flash: I was wrong. In the years since, the push to regulate the technology seems to have ground to a halt.
The editor held up his iPhone. “Meanwhile, I’m using it constantly throughout the day,” he said, referring to the face recognition verification system on Apple’s smartphone.
My story was an attempt to understand what happened by zooming in on one hotbed of the debate over police use of face recognition: Massachusetts. Lawmakers in the state are considering a bill that would be a breakthrough on the issue and could set a new tone of compromise for the rest of the country.
The bill distinguishes between different types of technology, such as live video recognition and retroactive image matching, and sets some strict guardrails when it comes to law enforcement. Under the proposal, only the state police could use face recognition, for example.
During reporting, I learned that face recognition regulation is being held up in a unique type of political stasis, as Andrew Guthrie Ferguson, a law professor at the American University Washington College of Law who specializes in policing and tech, put it.
The push to regulate face recognition technology is bipartisan. However, when you get down to details, the picture gets muddier. Face recognition as a tool for law enforcement has become more contentious in recent years, and Republicans tend to align with police groups, at least partly because of growing fears about crime. Those groups often say that new tools like face recognition help increase their capacity during staffing shortages.
Little surprise, then, that police groups have no interest in regulation. Police lobbies and the companies that supply law enforcement's tech are content to keep using the technology with few guardrails, and having no restrictions at all suits them just fine.
But civil liberties activists are generally opposed to regulation too. They think that compromising on measures short of a ban decreases the likelihood that a ban will ever be passed. They argue that police are likely to abuse the technology, so giving them any access to it poses risks to the public, and specifically to Black and brown communities that are already overpoliced and surveilled.
“The battle between ‘abolition’ and ‘don’t regulate it at all’ has led to an absence of regulation. That’s not the fault of the abolitionists,” says Ferguson. “But it has meant that the normal potential political compromise that you might’ve seen in Congress hasn’t happened because the normal political actors are not willing to concede for any regulation.”
Some abolitionist groups, such as S.T.O.P. in New York, are turning their advocacy work away from police bans toward regulating private uses of face recognition—for example, at Madison Square Garden.
“We see growing momentum to pass bans on private-sector use of facial recognition,” says S.T.O.P.’s executive director, Albert Fox Cahn. However, he thinks eventually we will see a resurgence of calls to ban police use of the technology too.
In the meantime, it’s deeply unfortunate that as face recognition technology continues to proliferate and become normalized in our lives, regulation is stuck in gridlock, especially when there is bipartisan agreement that we need it.
Compromises that set new guardrails on the technology, but are short of an absolute ban, might be the most promising path forward.
What I am reading this week
- This morning, the White House announced a new AI initiative in which companies voluntarily agreed to a set of requirements, such as watermarking AI-generated content and submitting to external review. Notably absent were commitments around transparency and data privacy. The voluntary agreements, while better than nothing, seem pretty fluffy.
- I really enjoyed Charlie Warzel’s latest piece in The Atlantic, a love letter to the phone number. I am a sap for user-focused technologies. We often don’t think of the 10-digit identity as a breakthrough, but oh … how it is.
- Despite the FTC’s recent losses, President Biden’s team seems to be sticking to its aggressive antitrust strategy. It’ll be interesting to watch how it plays out and whether the Justice Department can eventually do something to break up Big Tech.
What I learned this week
This week, I finally dove into our latest magazine issue on accessibility. A story about the digital border wall really stood out. Since January, US Customs and Border Protection has been using a new app to manage immigration flows and let migrants secure initial appointments for entry. One problem, though, is that the app, called CBP One, barely works. It puts a massive strain on people trying to enter the country.
Lorena Rios writes about Keisy Plaza, a migrant traveling from Colombia. “When she was staying in a shelter in Ciudad Juárez in March, she tried the app practically every day, never losing hope that she and her family would eventually get their chance.” After seven weeks of constant worry, Plaza finally got an appointment.
Rios’s story is heartbreaking and a bit dystopian, but instructive: she really gets at how technology can completely upend people’s lives. Take a read this weekend!