Meta's Ray-Ban smart glasses can now identify strangers' faces in real time. The company calls it a feature. Seventy-two civil liberties organizations call it a weapon for stalkers.
On Monday, the American Civil Liberties Union, the Electronic Privacy Information Center, and Fight for the Future released a letter, signed by 72 organizations in all, demanding Meta halt rollout of facial recognition capabilities in its AI-powered eyewear. The coalition spans digital rights groups, survivor advocacy organizations, immigrant rights nonprofits, and LGBTQ+ support networks—a breadth that surprised even veteran tech-policy observers.
"We have not seen this level of civil society coordination against a technology since Cambridge Analytica," said one policy director at a Washington-based advocacy group, speaking on condition of anonymity to preserve ongoing negotiations with the company. The three lead organizations have historically diverged on tactics: the ACLU favors litigation and legislative lobbying, EPIC pursues regulatory complaints, and Fight for the Future organizes grassroots campaigns. That they converged on a single demand signals a shift in how the privacy advocacy ecosystem perceives the threat.
The letter specifically cites risks to domestic abuse survivors, undocumented immigrants, and queer individuals who rely on anonymity for safety. Meta's glasses, when pointed at a stranger, can cross-reference that person's face against social media profiles, workplace databases, or personal contact lists. A stalker could identify a victim's home address within seconds. An abusive ex-partner could locate a survivor in hiding. Immigration enforcement could use the technology to identify undocumented individuals in sanctuary cities.
Meta pushed back against the characterization. A company spokesperson told Wired that the facial recognition feature requires explicit user consent on both ends and includes safeguards against misuse. The feature, currently in limited rollout, allows wearers to identify friends and acquaintances by name—but researchers note that the underlying technology is identical to what's needed for mass surveillance. The distinction between "convenient" and "dangerous" is purely a software toggle.
The coalition's letter arrives as federal regulators weigh expanded authority over AI-enabled consumer devices. The FTC has already signaled interest in reviewing smart glasses under existing surveillance device statutes, and several members of Congress have introduced legislation that would prohibit real-time facial recognition in public spaces without a warrant. Meta's glasses represent the first commercial product to test whether that prohibition, if enacted, would apply to consumer hardware.
What makes this moment different from earlier privacy battles is the hardware's invisibility. A security camera announces its presence with a visible housing. A phone's camera is obvious to anyone being photographed. But glasses-wearers can capture faces covertly, at eye level, without their subjects knowing. The coalition's argument is that this asymmetry—data collected from people who never know they are being scanned, much less consent to it—represents a categorical shift in surveillance risk.
The pressure on Meta is unlikely to relent. Two of the signatories have filed formal complaints with the FTC; a third is coordinating with state attorneys general in California and New York. Unless Meta voluntarily disables the feature, the battleground shifts to regulators and courtrooms within the next six months.