China grapples with rising AI video and voice scams

High-tech scams in China are growing at an alarming rate as fraudsters adopt increasingly sophisticated AI tools, with some going as far as using face-cloning technology to impersonate their victims' acquaintances.

In a recent telecom fraud case, Mr. Guo, the legal representative of a tech company in Fuzhou City, received a WeChat video call from someone who appeared to be a friend. The "friend" claimed he needed a 4.3 million RMB (611,316.42 USD) security deposit that had to be paid from a corporate account, and asked to route the money through Mr. Guo's company account.

Trusting the familiar face and voice on the video call, Mr. Guo did not verify the request and transferred 4.3 million RMB in two payments. The call, however, turned out to be a sophisticated scam built on AI face-swapping technology. Mr. Guo only realized this when he phoned his real friend, who knew nothing of the matter; the scammer had impersonated the friend with a cloned face and voice.

Fortunately for Mr. Guo, most of the money was recovered: within 10 minutes, 3.3684 million RMB (478,874.01 USD) was intercepted from the fraudulent account, leaving 931,600 RMB (132,442.41 USD) unrecovered. His case serves as a reminder to remain vigilant against fraud that uses new techniques such as AI to win the victim's trust.

With the wide adoption of ChatGPT and other AI technologies, AI-enabled fraud is becoming increasingly common in China, and new cases keep emerging. In April 2023, the legal representative of a self-media company in Shenzhen came under investigation for using ChatGPT to publish multiple articles spreading fabricated news of a train accident in Gansu that had supposedly killed nine people. Other recent AI fraud tactics include voice cloning, video cloning, and forwarded WeChat voice notes.

In response to the rise in AI fraud, the State Internet Information Office, the Ministry of Industry and Information Technology, and the Ministry of Public Security jointly issued the "Regulations on the Administration of Deep Synthesis of Internet Information Services."

Effective as of January 10, 2023, the Regulations impose obligations on providers and users of "deep synthesis technology," which combines datasets and algorithms to produce synthetic content such as deepfakes. Under the new rules, deep synthesis providers must prevent their services from being used to produce or spread information prohibited by law and must ensure that their services fully comply with applicable laws and regulations. They must also ensure that content generated with their technology is properly labeled, and conduct reviews of their algorithms and security procedures.

With these measures in place, it is hoped that the prevalence of AI fraud can be curbed, ensuring a safer digital landscape for all. That being said, the most recent cases highlight the need for vigilance in the face of these new and deceptive methods.
