Accounting for the dead and letting families know the fate of their relatives is a human rights imperative written into international treaties, protocols, and laws like the Geneva Conventions and the International Committee of the Red Cross' (ICRC) Guiding Principles for Dignified Management of the Dead. It is also tied to much deeper obligations. Caring for the dead is among the most ancient human practices, one that makes us human as much as language and the capacity for self-reflection. Historian Thomas Laqueur, in his epic meditation The Work of the Dead, writes that "as far back as people have discussed the subject, care of the dead has been regarded as foundational—of religion, of the polity, of the clan, of the tribe, of the capacity to mourn, of an understanding of the finitude of life, of civilization itself." But identifying the dead with facial recognition borrows the moral weight of this kind of care to authorize a technology that raises grave human rights concerns.

In Ukraine, the site of the bloodiest war in Europe since World War II, facial recognition may seem to be just another tool brought to the grim task of identifying the fallen, alongside digitized morgue records, mobile DNA labs, and the exhumation of mass graves. But does it work? Ton-That says his company's technology "works effectively regardless of facial damage that may have occurred to a deceased person." There is little research to support this assertion, though the authors of one small study found results "promising" even for faces in states of decomposition.

However, forensic anthropologist Luis Fondebrider, former head of forensic services for the ICRC, who has worked in conflict zones around the world, casts doubt on these claims. "This technology lacks scientific credibility," he says. "It is absolutely not widely accepted by the forensic community." (DNA identification remains the gold standard.) The field of forensics "understands technology and the importance of new developments," but the rush to use facial recognition is "a combination of politics and business with very little science," in Fondebrider's view. "There are no magic solutions for identification," he says. Using an unproven technology to identify fallen soldiers could lead to mistakes and traumatize families.

But even if the forensic use of facial recognition technology were backed by scientific evidence, it should not be used to name the dead. It is too dangerous for the living.

Organizations including Amnesty International, the Electronic Frontier Foundation, the Surveillance Technology Oversight Project, and the Immigrant Defense Project have declared facial recognition technology a form of mass surveillance that menaces privacy, amplifies racist policing, threatens the right to protest, and can lead to wrongful arrest. Damini Satija, head of Amnesty International's Algorithmic Accountability Lab and deputy director of Amnesty Tech, says that facial recognition technology undermines human rights by "reproducing structural discrimination at scale and automating and entrenching existing societal inequities." In Russia, facial recognition technology is being used to quash political dissent. It fails to meet legal and ethical standards when used in law enforcement in the UK and US, and it is weaponized against marginalized communities around the world.
Clearview AI, which primarily sells its wares to police, has one of the largest known databases of facial photos, at 20 billion images, with plans to collect an additional 100 billion images—equivalent to 14 photos for every person on the planet. The company has promised investors that soon "almost everyone in the world will be identifiable." Regulators in Italy, Australia, the UK, and France have declared Clearview's database illegal and ordered the company to delete their citizens' photos. In the EU, Reclaim Your Face, a coalition of more than 40 civil society organizations, has called for a complete ban on facial recognition technology.

AI ethics researcher Stephanie Hare says Ukraine is "using a tool, and promoting a company and CEO, who have not only behaved unethically but illegally." She conjectures that it's a case of "the end justifies the means," but asks, "Why is it so important that Ukraine is able to identify dead Russian soldiers using Clearview AI? How is this essential to defending Ukraine or winning the war?"

This interview and others like it suggest that Clearview AI's technology is being used for many purposes in Ukraine, but only its potentially positive use to identify the dead is being publicly discussed. In other words, we are being allowed "a sneak peek" that spotlights a humanitarian application while more controversial deployments are kept out of view. At a moment when Clearview AI is facing legal action from regulators over its use in policing, it has found a way to reinvent itself as a purveyor of humanitarian technology, all while expanding its scope from law enforcement to the military.

Because the EU recognizes facial recognition as a "dual use" technology with both military and civilian applications, its use in war for any purpose, including the identification of the dead, must be subject to strict scrutiny and oversight. And on the battlefield, facial recognition may be used to name the dead, but it can also be used to target the living—for example, when incorporated into drones and machine guns or used to develop lethal autonomous weapons. Fully autonomous drones are already deployed in Ukraine, and a Russian manufacturer has announced plans to develop uncrewed robot combat vehicles. Fedorov recently said that fully autonomous killer drones are "a logical and inevitable next step" in weapons development.

Amnesty International's Satija says that "killer robots are not just about a Terminator-style system. It's about a system that can select and engage a human target without meaningful human control and built with technologies which will accelerate violence and discrimination. When we call for regulation and red lines around autonomous weapons, we're also talking about the components of those systems, like facial recognition, which are by design in violation of international human rights standards."

Ultimately, the faces of the dead soldiers scanned by Clearview AI serve to silence discussion of these other applications. Anthropologist Katherine Verdery, writing about the political lives of dead bodies, shows how corpses are mute and mutable—"ambiguous, protean symbols" that can be put to many political uses. In the name of the dead, facial recognition is given a humanitarian pretext that obscures its role in emerging mass surveillance and future automated violence.