Key Points
- Meta is testing facial recognition technology to block scam ads featuring public figures’ likenesses.
- Around 50,000 public figures will be automatically enrolled in the trial, with the option to opt out.
- The trial begins in December but excludes regions with strict biometric data regulations, such as the EU, Britain, and some U.S. states.
- The move marks Meta’s careful reintroduction of facial recognition after shutting down its previous system in 2021.
Three years after Meta (formerly Facebook) shut down its facial recognition software due to privacy concerns and regulatory pressure, the company is reintroducing the technology to tackle a rising wave of online scams. On Tuesday, Meta announced that it is testing facial recognition technology to prevent “celeb bait” scams, where scammers use the images of public figures to deceive users.
Meta plans to enroll around 50,000 public figures in the trial, which will automatically compare their Facebook profile photos with images used in suspected scam advertisements. If a match is detected and the company suspects the ad is fraudulent, it will prevent the ad from appearing. Public figures enrolled in the program will be notified and can opt out if they do not wish to participate.
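Meta has not published technical details of the matching system, but the mechanics it describes, comparing a face in a suspected scam ad against an enrolled public figure's profile photo and blocking the ad on a sufficiently close match, resemble a standard embedding-similarity check. The sketch below is purely illustrative and not Meta's implementation; the embedding size, threshold, and function names are assumptions.

```python
# Illustrative sketch only, NOT Meta's implementation. It shows the general
# shape of a "celeb bait" check as described in the article: compare a face
# embedding from an enrolled public figure's profile photo against a face
# found in a suspected scam ad, and flag the ad if similarity clears a
# threshold. The threshold and 512-dimension embeddings are hypothetical.
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # hypothetical cutoff; real systems tune this carefully


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_likely_celeb_bait(profile_embedding: np.ndarray,
                         ad_face_embedding: np.ndarray) -> bool:
    """Return True when the face in the suspected scam ad matches the enrolled
    public figure's profile photo closely enough to block the ad."""
    return cosine_similarity(profile_embedding, ad_face_embedding) >= SIMILARITY_THRESHOLD


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for embeddings a face-recognition model would produce.
    profile = rng.normal(size=512)
    ad_face = profile + rng.normal(scale=0.05, size=512)  # near-duplicate face
    print("block ad:", is_likely_celeb_bait(profile, ad_face))
```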
The global trial is set to begin in December, but it will exclude several regions where Meta does not have regulatory clearance, including the European Union, Britain, South Korea, and U.S. states such as Texas and Illinois. These jurisdictions have stricter laws governing biometric data and facial recognition technology, and Meta has faced legal challenges over the technology before. In July 2024, for example, the company agreed to pay $1.4 billion to settle a lawsuit brought by the state of Texas over the illegal collection of biometric data.
Monika Bickert, Meta’s vice president of content policy, stated that the trial is part of the company’s broader effort to protect public figures from being exploited in scam ads. “The idea here is to roll out as much protection as we can for them,” Bickert explained, adding that the company wants to make it easy for celebrities to benefit from these protections.
The reintroduction of facial recognition highlights Meta’s attempt to balance security against ongoing concerns about privacy and data handling. In 2021, Meta shut down its facial recognition system and deleted the face-scan data of more than one billion users, citing “growing societal concerns” over privacy. Now the company is positioning the technology as a narrowly scoped tool for combating scams that exploit the likenesses of celebrities.
In addition to blocking scam ads, Meta said it will immediately delete any facial recognition data generated during the comparison process, regardless of whether a scam is detected. The company emphasized that the tool had undergone extensive privacy and risk review both internally and with external experts before being tested.
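The deletion policy described above can be pictured as an ephemeral comparison: the derived face data exists only for the duration of the check and is discarded whether or not the ad is flagged. The snippet below is a hypothetical illustration of that lifecycle, not Meta's code; the threshold and data structure are assumptions.

```python
# Hypothetical illustration of the stated policy: face data derived for the
# comparison is deleted immediately, regardless of whether a scam is detected.
import numpy as np


def compare_and_discard(profile_embedding: np.ndarray,
                        ad_embedding: np.ndarray,
                        threshold: float = 0.85) -> bool:
    """Run the match once and drop the derived face data no matter the outcome."""
    derived = [profile_embedding.copy(), ad_embedding.copy()]  # data generated for this check
    try:
        similarity = float(
            np.dot(derived[0], derived[1])
            / (np.linalg.norm(derived[0]) * np.linalg.norm(derived[1]))
        )
        return similarity >= threshold
    finally:
        derived.clear()  # deletion happens whether or not the ad was flagged
```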
Meta also hinted at the possibility of using facial recognition technology for non-celebrities in the future, including as a method to help users regain access to compromised accounts or accounts locked due to forgotten passwords on Facebook and Instagram.