Technology · 8 min read

‘100 Video Calls Per Day’: Models Are Applying to Be the Face of AI Scams | WIRED

Dozens of Telegram channels reviewed by WIRED include job listings for “AI face models.” The (mostly) women who land these gigs are likely being used to dupe...

Tags: scams, crime, artificial intelligence, security, cryptocurrency

When applying for jobs, Angel talks up her language skills. “I can speak fluent English, I can speak good Chinese, I also speak Russian and Turkish,” the glamorous, 24-year-old Uzbekistani woman explains in a selfie-style video made for recruiters. Angel had arrived in the Cambodian city of Sihanoukville that day, she said, and was ready to start work immediately.


Those impressive language skills, however, have likely been put to use as part of elaborate “pig-butchering” scams targeting Americans. That’s because, instead of applying for a conventional corporate job, Angel was putting herself forward to work as an “AI face model”—sitting in front of a computer all day and making deepfake video calls to manipulate potential scam victims. Her application, which also required her height and weight, says she has already clocked up “1 year as an AI model.”

Angel is far from alone in this pursuit. A WIRED review of dozens of recruitment videos and job ads posted to Telegram shows people from around the world—including Turkey, Russia, Ukraine, Belarus, and multiple Asian countries—applying to be AI models or “real face” models in Cambodia and Southeast Asia. The region has become home to vast, industrialized scamming operations that hold thousands of human trafficking victims captive and force them to run online cryptocurrency investment and romance scams.

As well as tricking people into working in scam compounds, these high-tech, multibillion-dollar criminal enterprises can also attract people into seeking “work” as part of the operations. “In the past year until today, they are also hiring people doing AI modeling,” says Hieu Minh Ngo, a cybercrime investigator at the Vietnamese scam-fighting nonprofit Chong Lua Dao. “They will give you the software so they can swap their face by using AI and they can do romance scams,” he says.

Ngo, a reformed criminal hacker who now tracks scam compound activity and supports victims, identified around two dozen channels on Telegram that have some job postings for AI models in the region. Humanity Research Consultancy, an anti-human-trafficking organization, has also tracked people applying on Telegram for jobs in “known scam hub cities” as “models” and “AI models,” including Angel’s application.

The rise of AI models comes as cybercriminals are broadly adopting AI and using face-swapping as part of their online scamming. Typically, fraudsters will use fake personas to contact potential victims on social media or messaging platforms. They will often use stolen images of celebrities or attractive men or women to entice a person into talking to them.

Once they make contact, they will then bombard them with attention to help build up a relationship, before trying to get them to part with their cash. In some instances, multiple people may control the scammers’ account and message the victim under a single fake persona. But if a potential victim asks for a video call during these interactions—to check if the person they are speaking to is real, for instance—that’s when deepfake video calls and models who have their faces swapped can be used. Some Southeast Asian scam centers have dedicated “AI rooms” where the calls are made from.

Other posts list up to 150 potential calls per day. “Filters may be used, but ensure the image is realistic. Live-action videos are permitted; wigs are prohibited,” another ad reads. For the privilege, the person would allegedly get one full day and four half days off per month. Yet another ad lists working hours as between 10 pm and 10 am in Cambodia and a preference that the person will have a “Western accent.” One model-job ad says: “The company will retain your passport for visa and work permit management.” Taking people’s passports is one of the primary ways scam compound operators hold people captive.

While a few men apply for the AI model roles, the vast majority of applications viewed by WIRED were from young women, mostly in their early twenties. Applicants are asked to send a short video introducing themselves, a written summary of their experience and expectations, and photographs of themselves; some are required to include their marital status and “vaccination” status.

“For over three years, I have worked with Chinese companies for different kinds of projects including stock market, cryptocurrency, and love story,” one person says in a recruitment video. Another says: “Based on my experience, I am good handling customer, I persuade them to invest by using my own techniques and discussing how gold trading benefits them.”

The video applications do not contain full names or contact details, so WIRED was unable to contact those applying for roles.

Modeling applicants have requested salaries of up to $7,000 per month, according to Humanity Research Consultancy. They also make specific requests about their working conditions, many of which may not be afforded to people who have been trafficked into the scam operations. One woman requested her own room and that she “can go outside.” Another requested that they could “go home on day off” and have a “personal washing machine.”

Although some of the models are recruited to work in the roles and may get more freedoms than victims of human trafficking, says Ling Li, the cofounder of the nonprofit EOS collective, which works with victims of the scam industry, they may still face harsh treatment from bosses. “One European victim told us that he saw some Italian models in his compound, but he cannot tell [if] they are [there] willingly or not because they were beaten in front of him,” she says. “And also there is some sexual harassment.”

WIRED sent Telegram a list of two dozen job and recruitment channels that have advertised AI models, alongside other roles, in recent months. The company did not appear to remove any of the channels; however, a spokesperson says its policies do not allow scamming-related activity to take place.

The vast majority of the model-job ads and applications on Telegram don’t specifically mention scamming work, but they include a host of red flags indicating scamming, Ngo says. “Why [do you] need AI model? That’s the first question,” Ngo says. Other warning signs include the locations being in known scamming sites in Cambodia, claims of high salaries for the region, and frequent requirements for Chinese language skills, Ngo says.

The ads and video applications also include language closely aligned with scams, according to researchers and WIRED’s review of the posts. This includes frequent mentions of “clients,” a term scam operations use instead of “victims,” plus frequent references to cryptocurrency investments or gold trading. One person, who claimed to have been working as an AI model and “real face” model for 18 months, said their previous work involved convincing people to invest: “I really know how to make good communication to a client, how to make them trust us, how to send a good picture to them, and how to make them laughing.”

However, some posts are more explicit, listing a “job market” someone was applying for as: “love scam.” Another post describes a person’s experience as: “3 year as customer service (killer) of scamming platform crypto.”

After Frank Mc Kenna’s mom started getting scam text messages about making investments last year, he began intercepting them and talking with the senders. Mc Kenna, the chief strategist at anti-fraud software firm Point Predictive who has closely tracked “AI models,” says he wanted to understand how they were operating, so he set up a video call between them and his mom.

“The only purpose of that call was to prove that they’re a real person and to gain trust,” Mc Kenna says. During the call, he says, the young woman on camera appeared to be using an AI filter on her face. “It’s kind of glitchy. There’s other people in the room with her, so there’s echoing,” he says. “Then we had another short call with another AI model.”

A month or so later, Mc Kenna says, he saw what appeared to be the same model’s recruitment video posted online, saying she was looking for a new contract as hers had expired. “It was kind of a small world of these AI models who seem to go from place to place, completely voluntarily, making pretty good money,” Mc Kenna says. “They’re probably just in the video room doing calls all day with tons of different victims.”


