Meta Is Warned That Facial Recognition Glasses Will Arm Sexual Predators | WIRED
More than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor, and immigrant advocacy organizations are demanding that Meta abandon plans to deploy face recognition on its Ray-Ban and Oakley smart glasses, warning that the feature—reportedly known inside the company as “Name Tag”—would hand stalkers, abusers, and federal agents the ability to silently identify strangers in public.
The coalition, which includes the ACLU, the Electronic Privacy Information Center, Fight for the Future, Access Now, and the Leadership Conference on Civil and Human Rights, is demanding Meta kill the feature before launch, after internal documents surfaced showing the company hoped to use the current “dynamic political environment” as cover for the rollout, betting that civil society groups would have their resources “focused on other concerns.”
Name Tag, as revealed in February by The New York Times, would work through the artificial intelligence assistant built into Meta's smart glasses, allowing wearers to pull up information about people in their field of view. Engineers have reportedly been weighing two versions of the feature: one that would only identify people the wearer is already connected to on a Meta platform, and a broader version that could recognize anyone with a public account on a Meta service such as Instagram.
The coalition wants Meta to scrap the feature entirely. In a letter to CEO Mark Zuckerberg on Monday, it argues that face recognition in inconspicuous consumer eyewear “cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards.” Bystanders in public have no meaningful way to consent to being identified, it says.
The letter also urges Meta to disclose any known instances of its wearables being used in stalking, harassment, or domestic violence cases; to disclose any past or ongoing discussions with federal law enforcement agencies, including Immigration and Customs Enforcement and Customs and Border Protection, about the use of Meta wearables or data from them; and to commit to consulting civil society and independent privacy experts before integrating biometric identification into any consumer device.
“People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health, and behaviors,” write the groups, which also include Common Cause, Jane Doe Inc., Ultra Violet, the National Organization for Women, the New York State Coalition Against Domestic Violence, the Library Freedom Project, and Old Dykes Against Billionaire Tech Bros, among others.
Meta did not immediately respond to WIRED’s request for comment.
EssilorLuxottica, the Italian-French eyewear conglomerate that owns Ray-Ban and Oakley and manufactures the smart glasses with Meta, also did not immediately respond to a request for comment.
In the May 2025 memo from Meta’s Reality Labs that the Times obtained, Meta reportedly wrote that it would launch “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
The coalition calls the distraction play “vile behavior” and accuses the company of taking advantage of “rising authoritarianism” and the Trump administration’s “disregard for the rule of law.”
The Electronic Privacy Information Center (EPIC) sent its own letters to the Federal Trade Commission (FTC) and state enforcers in February urging them to investigate and block Name Tag’s rollout. Real-time face recognition, the group warned, would compound what it called the “already serious and apparently unlawful” privacy risks of the existing Ray-Ban Meta glasses, which can covertly record bystanders with no warning beyond a small light that is easily hidden. People could be identified at protests, places of worship, support groups, and medical clinics, EPIC wrote, “destroying the concept of privacy or anonymity in public spaces.”
Meta has shut down face recognition before, though never fully. In November 2021, the company ended Facebook's photo-tagging system and said it would delete the face recognition templates of more than a billion users, framing the decision as “a company-wide move away from this kind of broad identification.”
Meta said at the time that it needed to “weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules,” and committed to “working with the civil society groups and regulators who are leading this discussion.”
The shutdown followed years of costly litigation. Meta has paid roughly $650 million to settle a class action under Illinois's Biometric Information Privacy Act over Facebook's face-tagging feature, and in 2024 agreed to pay Texas $1.4 billion to resolve similar face recognition claims.
The legal pressure on Meta's design choices has only intensified. In March, a Los Angeles jury found Meta and Google's YouTube negligent in the design of Instagram and YouTube, concluding the companies knew their platforms were dangerous and failed to warn users, and awarded $6 million in compensatory and punitive damages in the first “bellwether trial” of a sprawling social media addiction case.
Last week, the Massachusetts Supreme Judicial Court ruled that Section 230 does not shield Meta from a consumer protection lawsuit alleging the company deliberately designed Instagram features—infinite scroll, push notifications, autoplaying videos—to addict young users, the first such ruling by a state high court.