
Phone scammers in these two countries are now using AI, and the results are so convincing that many people have been duped!

Source: 融媒体采编平台
Author: 世纪君
Date: 2023-04-04

According to foreign media reports, cases of telecom fraud using AI-synthesized voices have recently multiplied in the United States and Canada, and many of the victims are elderly. The trend has drawn attention from the industry and the media.


According to a March 22 report by National Public Radio (NPR), for years a common phone scam has worked like this: someone claiming to be an authority figure, such as a police officer, calls and urgently demands that you pay money to get a friend or family member out of trouble.

Screenshot from an NBC News video

 

Now, however, US federal regulators warn that such a call may come from someone who sounds exactly like your friend or family member but is in fact a scammer using an AI-generated clone of their voice.
 

For years, a common scam has involved getting a call from someone purporting to be an authority figure, like a police officer, urgently asking you to pay money to help get a friend or family member out of trouble.
 

Now, federal regulators warn, such a call could come from someone who sounds just like that friend or family member — but is actually a scammer using a clone of their voice.


The US Federal Trade Commission recently issued a consumer alert urging people to be on guard against the latest technique criminals are using to swindle people out of money: scam calls made with AI-generated voices.

The Federal Trade Commission recently issued a consumer alert urging people to be vigilant for calls using voice clones generated by artificial intelligence, one of the latest techniques used by criminals hoping to swindle people out of money.
 

NPR: An urgent call from a relative? The FTC warns it could be a thief using a voice clone

Screenshot of the NPR report

 

"All the scammer needs is a short audio clip of your family member's voice, which can be taken from content posted online, plus a voice-cloning program," the commission warned. "When the scammer calls you, he'll sound just like your loved one."
 

"All [the scammer] needs is a short audio clip of your family member's voice — which he could get from content posted online — and a voice-cloning program," the commission warned. "When the scammer calls you, he'll sound just like your loved one."
 

The FTC advises that if someone who sounds like a friend or relative asks you for money, especially if they want to be paid by wire transfer, cryptocurrency or gift card, you should hang up and call that person directly to verify their story.
 

The FTC suggests that if someone who sounds like a friend or relative asks for money — particularly if they want to be paid via a wire transfer, cryptocurrency or a gift card — you should hang up and call the person directly to verify their story.
 

Scammers use AI voice clones to impersonate family members and cheat the elderly out of money
 

A related report in The Washington Post on March 5 described two real cases.


The Washington Post: "They thought loved ones were calling for help. It was an AI scam."

 
Screenshot of The Washington Post report


Ruth Card, 73, and her husband Greg Grace, 75, once received a call from someone claiming to be their grandson Brandon. On the call, the impostor "Brandon" said he was in jail with no wallet or cellphone and needed cash for bail.
 

Because the voice on the phone was almost indistinguishable from their grandson's, the anxious couple rushed to their bank and withdrew 3,000 Canadian dollars (about 15,289 yuan), the daily maximum, then hurried to a second branch to withdraw more.


Fortunately, a bank manager sensed something was wrong and told them that another customer had recently received a similar call and learned the voice had been faked. Only then did the couple realize they had been scammed.


The man calling Ruth Card sounded just like her grandson Brandon. So when he said he was in jail, with no wallet or cellphone, and needed cash for bail, Card scrambled to do whatever she could to help.


Card, 73, and her husband, Greg Grace, 75, dashed to their bank in Regina, Saskatchewan, and withdrew 3,000 Canadian dollars, the daily maximum. They hurried to a second branch for more money. But a bank manager pulled them into his office: Another patron had gotten a similar call and learned the eerily accurate voice had been faked, Card recalled the banker saying. The man on the phone probably wasn’t their grandson.


Another victim, the elderly parents of Benjamin Perkin, lost thousands of Canadian dollars in an AI voice-cloning scam.
 

Perkin recalled in an interview that his parents received a call from a man claiming to be a lawyer, who said Perkin had killed a US diplomat in a car accident, was now in jail, and needed money for legal fees.
 

Benjamin Perkin’s elderly parents lost thousands of dollars to a voice scam. His parents received a phone call from an alleged lawyer, saying their son had killed a U.S. diplomat in a car accident. Perkin was in jail and needed money for legal fees.
 

The "lawyer" then put "Perkin" on the phone. In the call, the cloned voice told his parents that he loved them, appreciated them and urgently needed money. A few hours later, the lawyer called Perkin's parents again, saying Perkin needed CA$21,000 (about 107,039 yuan) before a court date later that day.
 

The lawyer put Perkin, 39, on the phone, who said he loved them, appreciated them and needed the money. A few hours later, the lawyer called Perkin’s parents again, saying their son needed $21,000 in Canadian dollars before a court date later that day.
 

The voice sounded "close enough for my parents to truly believe they did speak with me," Perkin said in the interview. In a panic, his parents rushed to several banks to withdraw cash and then sent the money to the lawyer through a bitcoin terminal.
 

The voice sounded “close enough for my parents to truly believe they did speak with me,” he said. In their state of panic, they rushed to several banks to get cash and sent the lawyer the money through a bitcoin terminal.
 

Perkin said the family has filed a police report with Canada's federal authorities, but so far the money has not been recovered.


The family has filed a police report with Canada’s federal authorities, Perkin said, but that hasn’t brought the cash back.


NBC also recently interviewed an American father who received a call that sounded like his daughter saying she had been kidnapped; a "kidnapper" then took the phone and demanded a ransom. Fortunately, his wife stayed alert and immediately called their daughter to confirm she was safe, which revealed the call to be a scam using an AI-synthesized voice.

Screenshot from an NBC News video

 

A short audio clip posted on social media is all it takes to clone a voice
 

The Washington Post reported that, powered by AI, a slew of cheap online tools can turn an audio file into a replica of a voice, letting a scammer make that voice "say" whatever they type.
 

Powered by AI, a slew of cheap online tools can translate an audio file into a replica of a voice, allowing a swindler to make it “speak” whatever they type.
 

Hany Farid, a professor of digital forensics at the University of California, Berkeley, explained that AI voice-generating software analyzes what makes a person's voice unique, including age, gender and accent, then searches a vast database of voices to find similar ones and predict patterns.
 

AI voice-generating software analyzes what makes a person’s voice unique — including age, gender and accent — and searches a vast database of voices to find similar ones and predict patterns, Farid said.
 

It can then re-create the person's pitch, timbre and individual sounds to produce a similar overall effect. The software needs only a short audio sample, which can be taken from places such as YouTube, podcasts, commercials, TikTok, Instagram or Facebook videos, Farid said.
 

It can then re-create the pitch, timbre and individual sounds of a person’s voice to create an overall effect that is similar, he added. It requires a short sample of audio, taken from places such as YouTube, podcasts, commercials, TikTok, Instagram or Facebook videos, Farid said.
 

"Two years ago, even a year ago, you needed a lot of audio to clone a person's voice," Farid said. "Now, if you have a Facebook page, or if you've recorded a TikTok and your voice is in there for 30 seconds, people can clone your voice."
 

"Two years ago, even a year ago, you needed a lot of audio to clone a person's voice," said Hany Farid, a professor of digital forensics at the University of California at Berkeley. "Now … if you have a Facebook page … or if you've recorded a TikTok and your voice is in there for 30 seconds, people can clone your voice."
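
To make the pipeline Farid describes more concrete, here is a minimal, purely illustrative Python sketch of the three stages: summarizing a short public clip into a speaker "voiceprint," searching a database of reference voices, and conditioning a synthesizer on the result. Every function name here (extract_speaker_embedding, find_similar_voices, synthesize) is a hypothetical stand-in defined inside the sketch, not the API of any real voice-cloning tool, and the "synthesis" step deliberately returns silence so the example is harmless while remaining runnable.

    import numpy as np

    def extract_speaker_embedding(waveform, sample_rate):
        # Stand-in for a speaker encoder: a real system would summarize traits
        # such as pitch, timbre and accent; here we return crude spectral
        # statistics just to keep the sketch runnable.
        spectrum = np.abs(np.fft.rfft(waveform))
        return np.array([spectrum.mean(), spectrum.std(), waveform.std()])

    def find_similar_voices(embedding, database):
        # Stand-in for the database search Farid describes: rank stored
        # voiceprints by cosine similarity to the target embedding.
        sims = database @ embedding / (
            np.linalg.norm(database, axis=1) * np.linalg.norm(embedding) + 1e-9)
        return np.argsort(-sims)

    def synthesize(text, embedding, sample_rate=16000):
        # Stand-in for the synthesis stage: a real tool would generate speech
        # in the cloned voice; this placeholder returns silence of a
        # plausible length.
        seconds = max(1, len(text) // 15)
        return np.zeros(sample_rate * seconds)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        sample_rate = 16000
        # Roughly 30 seconds of noise standing in for a clip scraped from social media.
        public_clip = rng.standard_normal(sample_rate * 30)
        target = extract_speaker_embedding(public_clip, sample_rate)
        voice_db = rng.standard_normal((1000, 3))  # pretend reference voiceprints
        closest = find_similar_voices(target, voice_db)[:5]
        audio = synthesize("Grandma, it's me. I need bail money right away.", target)
        print("closest reference voices:", closest)
        print("generated audio samples:", audio.shape)

The only point of the sketch is to show why a 30-second public clip is enough input: everything downstream is conditioned on a small voiceprint derived from that clip, not on hours of recordings.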

Screenshot from an NBC News video

 

For now, however, experts say US federal regulators, law enforcement and the courts are ill-equipped to rein in this rapidly growing scam. Most victims have few leads for identifying the perpetrator, police struggle to trace calls and money from scammers operating around the world, and there is little legal precedent for courts to hold the companies that make these tools accountable.
 

Experts say federal regulators, law enforcement and the courts are ill-equipped to rein in the burgeoning scam. Most victims have few leads to identify the perpetrator and it’s difficult for the police to trace calls and funds from scammers operating across the world. And there’s little legal precedent for courts to hold the companies that make the tools accountable for their use.
 

Separately, Business Insider reported that in June 2022 the FTC recommended that the US Congress pass laws to prevent AI tools from causing additional harm.
 

In June 2022, the FTC recommended Congress pass laws so AI tools do not cause additional harm.
 

FTC spokesperson Juliana Gruenwald said: "We're concerned with the risk that deepfakes and other AI-based synthetic media, which are becoming easier to create and disseminate, will be used for fraud."
 

"We're also concerned with the risk that deepfakes and other AI-based synthetic media, which are becoming easier to create and disseminate, will be used for fraud," FTC spokesperson Juliana Gruenwald told Insider.

 

She added that "the FTC has already seen a staggering rise in fraud on social media."
 

"AI tools that generate authentic-seeming videos, photos, audio and text could supercharge this trend, allowing fraudsters to defraud more people more quickly," from imposter scams and identity theft to payment fraud and fake website creation. Chatbots could exacerbate these trends, Gruenwald said.

 

"The FTC has already seen a staggering rise in fraud on social media," she said. "AI tools that generate authentic-seeming videos, photos, audio, and text could supercharge this trend, allowing fraudsters greater reach and speed," from imposter scams and identity theft to payment fraud and fake website creation. Chatbots could exacerbate these trends, Gruenwald said.
 

On January 24, 2019, in Washington, a woman watches a video with manipulated remarks by US President Trump and former President Obama, showing how "deepfake" technology can deceive viewers. Photo: VCG

 

UNESCO calls on countries to implement its AI ethics standard as soon as possible


According to Xinhua, UNESCO Director-General Audrey Azoulay issued a statement on March 30 calling on countries to implement the organization's Recommendation on the Ethics of Artificial Intelligence without delay, setting ethical standards for the development of AI.
 

The United Nations Educational, Scientific and Cultural Organization (UNESCO) called on governments last Thursday to fully and immediately implement its recommendation on the ethics of artificial intelligence (AI).
 

UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence in November 2021, the first global agreement on ethical standards for AI. It contains norms for AI development and policy recommendations for the relevant application areas, and aims to maximize the benefits of AI while reducing its risks.

Screenshot of the UNESCO website

 

The recommendation was endorsed by all UNESCO member states in November 2021. It is the first global framework for the ethical use of AI.
 

It guides countries in maximizing the benefits of AI and reducing the risks it entails. It contains values and principles, and detailed policy recommendations in all relevant areas.
 

In her March 30 statement, Azoulay said the world needs stronger ethical rules for AI: "This is the challenge of our time. UNESCO's Recommendation sets the appropriate normative framework. It is time to implement these strategies and regulations at the national level. We have to walk the talk and ensure we deliver on the Recommendation's objectives."
 

"The world needs stronger ethical rules for artificial intelligence: this is the challenge of our time. UNESCO's recommendation on the ethics of AI sets the appropriate normative framework and provides all the necessary safeguards," UNESCO Director-General Audrey Azoulay said in a press release, “It is high time to implement the strategies and regulations at national level. We have to walk the talk and ensure we deliver on the Recommendation’s objectives.”
 

UNESCO notes that AI innovation can raise ethical issues, particularly discrimination, stereotyping and gender inequality. AI can also have a negative impact on the fight against disinformation, the right to privacy, the protection of personal data, and human and environmental rights.
 

According to UNESCO, AI innovations may raise ethical issues, especially discrimination and stereotyping, including the issue of gender inequality. AI may also have a negative impact on the fight against disinformation, the right to privacy, the protection of personal data, and human and environmental rights.
 

According to the statement, UNESCO's Recommendation guides member states through a readiness assessment tool. The tool helps countries identify the competencies and skills their workforce needs to ensure robust regulation of the AI sector. The Recommendation also requires states to report regularly on their progress and practices in the field of AI, submitting a periodic report every four years.
 

UNESCO’s Recommendation places a Readiness Assessment tool at the core of its guidance to Member States. This tool enables countries to ascertain the competencies and skills required in the workforce to ensure robust regulation of the artificial intelligence sector. It also provides that the States report regularly on their progress and their practices in the field of artificial intelligence, in particular by submitting a periodic report every four years.
 

The statement notes that, to date, more than 40 countries have been working with UNESCO to develop AI governance measures at the national level based on the Recommendation, and the organization calls on all countries to join the effort. A progress report is expected to be released at the UNESCO Global Forum on the Ethics of Artificial Intelligence, to be held in Slovenia in December this year.
 

To date, more than 40 countries in all regions of the world are already working with UNESCO to develop AI checks and balances at the national level, building on the Recommendation. UNESCO calls on all countries to join the movement it is leading to build an ethical AI. A progress report will be presented at the UNESCO Global Forum on the Ethics of Artificial Intelligence in Slovenia in December 2023.
 

Sources: The Washington Post, NPR, Business Insider, Xinhua, UNESCO

 
