This all means it’s up to the tech companies themselves to act on their own initiative. And Big Tech has rarely acted without legislative threats or significant social or financial pressure. Companies won’t act if “it’s not mandatory, it’s not enforced by the government” and, most important, if they don’t profit from it, says Wang of the University of Texas. While a group of tech companies, including Meta, Match, and Coinbase, last year announced the formation of Tech Against Scams, a collaboration to share tips and best practices, experts tell us there are no concrete actions to point to yet.
And at a time when more resources are desperately needed to address the growing problems on their platforms, social media companies like X and Meta have laid off hundreds of people from their trust and safety departments in recent years, reducing their capacity to tackle even the most pressing issues. Since Trump’s reelection, Meta has signaled an even greater rollback of its moderation and fact-checking, a decision that earned praise from the president.
Still, companies may feel pressure given that a handful of entities and executives have in recent years been held legally responsible for criminal activity on their platforms. Changpeng Zhao, who founded Binance, the world’s largest cryptocurrency exchange, was sentenced to four months in jail last April after pleading guilty to breaking US money-laundering laws, and the company had to forfeit some $4 billion for offenses that included allowing users to bypass sanctions. Then last May, Alexey Pertsev, a Tornado Cash cofounder, was sentenced to more than five years in a Dutch prison for facilitating the laundering of money stolen by, among others, the Lazarus Group, North Korea’s infamous state-backed hacking team. And in August last year, French authorities arrested Pavel Durov, the CEO of Telegram, and charged him with complicity in drug trafficking and distribution of child sexual abuse material.
“I think all social media [companies] should really be looking at the case of Telegram right now,” USIP’s Tower says. “At that CEO level, you’re starting to see states try to hold a company accountable for its role in enabling major transnational criminal activity on a global scale.”
Compounding all these challenges is the integration of cheap, easy-to-use artificial intelligence into scamming operations. The trafficked individuals we spoke to, most of whom had left the compounds before the widespread adoption of generative AI, said that if targets suggested a video call, they would deflect or, as a last resort, play prerecorded video clips. Only one described his company using AI: he says he was paid to record himself saying various sentences with different emotional inflections, so that the audio could be fed into an AI model. More recently, reports have emerged of scammers using AI-powered “face swap” and voice-altering products to impersonate their characters more convincingly. “Malicious actors can exploit these models, especially open-source models, to produce content at an unprecedented scale,” says Gabrielle Tran, senior analyst for technology and society at IST. “These models are purposefully being fine-tuned … to serve as convincing humans.”
Experts we spoke with warn that if platforms don’t pick up the pace on enforcement now, they’re likely to fall even further behind.
Every now and again, Gavesh still goes on Facebook to report pages he thinks are scams. He never hears back.
But he is working again in the tourism industry and on the path to recovering from his ordeal. “I can’t say that I’m 100% out of the trauma, but I’m trying to survive because I have responsibilities,” he says.