On the Polish website Pimeyes.com, which describes itself as a “reverse image search for faces” and features friendly cursive fonts and jewel-tone illustrations, you can upload a face to find where else it’s lurking on the internet.
“376 results in .82 seconds,” the website recently informed me after I gave it permission to use my webcam. It turned up friends’ Tumblr posts, stills of VICE News Tonight segments, and a nine-year-old photo, taken in a nightclub, that I never knew existed. (Also: hundreds of doppelgangers, including a Swedish soccer player and several cam girls.)
Facial recognition uses AI to compare facial features — for instance, the distance between them, or their color and texture — against a database of other images. Law enforcement and corporations already use it, for everything from surveillance aimed at apprehending criminals to identifying VIPs in retail stores to speeding up entry onto military bases. The technology is controversial. Cities including Boston, Portland, and San Francisco have banned the use of facial recognition by police, on grounds that it threatens privacy and is notoriously biased, in part because it has a history of misidentifying minorities.
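The comparison step described above can be sketched in a few lines of code. Modern systems typically convert each face into a fixed-length "embedding" vector with a neural network, then measure how close two vectors are; everything below (the toy four-dimensional vectors, the names, the 0.9 threshold) is an illustrative assumption, not how Pimeyes or any specific product actually works.

```python
# Minimal sketch of the matching step in a facial recognition system,
# assuming faces have already been converted into embedding vectors
# by a separate feature-extraction network.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embeddings: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query, database, threshold=0.9):
    """Return (label, score) for the closest face above the threshold, else None."""
    scored = [(label, cosine_similarity(query, emb))
              for label, emb in database.items()]
    label, score = max(scored, key=lambda pair: pair[1])
    return (label, score) if score >= threshold else None

# Toy 4-dimensional embeddings; real systems use 128 or more dimensions.
database = {
    "person_a": [0.1, 0.9, 0.2, 0.4],
    "person_b": [0.8, 0.1, 0.5, 0.2],
}
query = [0.12, 0.88, 0.22, 0.41]  # embedding of a newly uploaded photo
print(best_match(query, database))
```

The threshold is where the accuracy debates in this article live: set it too low and the system confidently returns look-alikes (the "doppelgangers" above); set it too high and it misses genuine matches.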
Recently, access to facial recognition technology such as Pimeyes has filtered down to the public. In an email, Pimeyes’s founders asked to remain anonymous and declined a Zoom interview, but described their product as singular. “We think that face recognition technology should be available for everyone, not only for the governments, corporations, or rich people who can afford private detectives,” they said.
According to its website, people use Pimeyes to track down scammers, identify DMCA violations, and protect their own privacy online. It makes money by charging fees to reveal more information about where the images come from, or to monitor the web for new ones that pop up.
But earlier this month, a new use case for Pimeyes emerged: doxing suspected Capitol Hill rioters. Technologists have also deployed their own facial recognition and detection tools on videos and images of the riots, but Pimeyes is easier to use.
David Sebba, 31, of Florida, was one of at least seven Twitter users who posted about using Pimeyes to attempt to identify individuals illegally entering the US capitol. “We’ve sat around for four years while no one was held accountable. The idea of getting some justice on insurrectionists isn’t too bad,” he told me in a message.
Sebba found the website by Googling “facial recognition AI.” Using a screenshot from a YouTube video posted by a protest attendee, in which an anonymous mustachioed man admitted to going inside the Capitol, Sebba then searched Pimeyes for matching faces — and came upon photos of Bill Tryon, a conservative activist from Albany, New York. A local news outlet later confirmed it was Tryon. Another Twitter user replied to Sebba that he was submitting Tryon’s name to the FBI’s tip page. Tryon has not been charged, according to a search of court documents.
Following an investigation by the German website NetzPolitik in July 2020, the European Parliament discussed the legality of Pimeyes. One MEP proposed a moratorium on the tool, asking whether “the dam had been breached” on private data, and comparing it to Clearview AI. That startup was the subject of backlash after it scraped social media data and violated platform policy to build a facial-recognition system for US law enforcement.
Soon after the Parliament discussion, Pimeyes relocated to the Seychelles. “The company has gone through a restructuring and the current board has decided to create the company in a country where there is no public record of owners and shareholders,” the founders told Motherboard. They disagreed that their tool compromises privacy, pointing out that even Google can be used for nefarious purposes.
Unlike Clearview AI, Pimeyes doesn’t scrape social media posts; instead, it indexes other types of websites. If your photo is on your company’s website, for example, it may be able to find you. That’s what happened to someone at Los Angeles’s “Stop the Steal” rally on January 6. Photos from the event showed a red-bearded man appearing to attack a black woman, who was surrounded by a mob in pro-Trump gear that sprayed pepper spray and snatched her wig. Online observers were furious.
The Twitter user @Punisher910 ran the bearded man’s photo through Pimeyes. The website returned an image of what appeared to be the same man on the employee page of a local Toyota dealership. Soon, dozens of users were sharing his name. “@ToyotaSoCal, is [the person’s supposed name] STILL employed by you? Employing a White Supremacist Terrorist who engages in kidnapping, harassment, and torture isn’t a good look,” wrote one person. Hours later, Toyota tweeted that the man was no longer employed at its dealership.
It looked like a closed case of justice by algorithm. But the context was as murky as the images were hi-def. The victim, Berlinda Nibo, later said the bearded man was helping her to safety. “I call him my hero,” she told the local NBC station. Then, a day after that, Nibo said he may have hurt her more than helped her. In the meantime, other videos surfaced that appeared to show the man using racial slurs at a different location.
Motherboard couldn’t reach the man in question for comment. (Motherboard is not naming him because we could not reach him and could not 100 percent identify him.) Patrick Elahmadie, the general counsel for the company that owns the Toyota dealership, said that the man “didn’t deny” that he was in the images when they fired him for them. “That’s not conduct that jives with what we’d expect from employees,” Elahmadie told Motherboard, noting that the dealership and the man were “headed for a split” even prior to January 6.
For police, facial recognition is also useful but imperfect. In 2018, commercial software failed to recognize the gender of women of color 35 percent of the time in one study by Joy Buolamwini, a researcher at MIT Media Lab. Last year, the National Institute of Standards and Technology examined the accuracy of 200 facial recognition algorithms in a report. Some top-performing algorithms showed no demographic bias, but most still performed better on men and light-skinned people.
So far, at least three people — all of them black men — have been wrongfully arrested by police in part thanks to incorrect facial recognition matches. Departments say their investigators are trained on how to use the tool, though that has not prevented some high-profile cases of misidentification.
No one trains random people on Twitter. “The internet is a tool of surveillance and as a result you have to fight hard against the impulse to turn everyone into cops,” Joan Donovan, the research director of Harvard Kennedy’s Shorenstein Center, told Motherboard. She called the viral Twitter threads where users collaborated to identify suspected rioters “distributed bounty hunting” and “a bad idea.” That’s because “the capacity for misidentification is really high.”
Pimeyes was just one tool that Twitter users employed to identify suspected rioters. They isolated symbols on T-shirts and flags, cross-referenced faces against those in an archive of Parler videos, and spread photos so widely that someone was bound to recognize them. Like facial recognition, these crowdsourced investigative methods were often fruitful and occasionally faulty, accusing, for instance, a firefighter who was several states away of being at the Capitol.
Donovan also criticized the use of facial recognition by private researchers. According to an article in The New Yorker, John Scott-Railton of Citizen Lab and Michael Sheldon of the Atlantic Council both used an unspecified facial recognition tool to help identify the Air Force combat veteran Larry Rendall Brock, Jr., after he entered the Senate chambers with zip cuffs. Scott-Railton and Sheldon both declined to comment to Motherboard about which specific tool they used.
“As a researcher, I’m glad Pimeyes exists,” says Giancarlo Fiorella, of Bellingcat, the open-source investigations website, noting that unless you have access to law enforcement or government tools, it’s pretty much the only option. Last summer, he and his colleagues used it, alongside other research methods, to help identify a man who had beaten people with a baton at a Portland protest.
Still, “as a private individual myself, I’m concerned that this makes it easier to stalk someone or look for information on them,” says Fiorella. In an ideal world, “this sort of technology wouldn’t exist at all.”