Fraud and Fraud Prevention in the Age of AI

Dr Maurice Tse and Mr Clive Ho

25 June 2025

Artificial intelligence (AI) is fast becoming a core technology for optimizing efficiency and driving innovation, finding wide applications across sectors such as healthcare, business operations, and public safety. However, as with the other side of the same coin, these advancements also present opportunities for criminals. As AI technology grows ever more sophisticated, hacking, scams, blackmail involving fabricated videos, and the spread of disinformation orchestrated by criminal organizations are becoming increasingly difficult to guard against.

Particularly egregious is the abuse of AI in the financial field. By 2027, generative AI is projected to quadruple scam-related losses worldwide. The past decade has witnessed an acceleration of digitalization in the financial and banking sectors, while the coronavirus pandemic further entrenched the dominant position of digital banking. This shift has clearly enhanced both service efficiency and business volume, but it has also given criminals fresh opportunities to commit fraud. In 2023, approximately US$3.1 trillion in illicit funds passed through the global financial system, linked to activities such as human trafficking, drug dealing, and terrorist financing. Losses from bank fraud in the same year were estimated to total US$485.6 billion.

A foot of vice for every inch of virtue

As the US Department of the Treasury pointed out in 2024, existing financial risk management frameworks may be insufficient to address the challenges posed by emerging AI technologies. It follows that only by using AI against AI can an effective defence mechanism be built.

Nowadays, organized scammers rely on generative AI tools, rather than human operators, to craft near-indiscernible phishing emails and deepfake frauds. Last year saw multiple cases in which the impersonation of senior corporate executives duped company staff into remitting vast sums of money into fake accounts, demonstrating that generative AI has become a key tool for scammers to bypass traditional security measures and manipulate trust. A TransUnion credit report reveals an 80% surge in digital fraud compared with pre-pandemic levels, with credit card scams rising 76% and account takeovers soaring by between 81% and 131%.

The US Federal Trade Commission reports that losses from scams broke the US$10 billion mark in 2023, up US$1 billion from the previous year. According to the Nasdaq Global Financial Crime Report, fraud scams and bank fraud schemes in the same year totalled more than US$485 billion in projected losses worldwide (see Note). Victimization rates among the 20-to-29 age group are even higher than among those aged 70 and above, indicating that scammers no longer target only the elderly.

“Rug pull” scams are now common in cryptocurrency investments: investors lose every penny when currency developers shut down their projects and abscond with the funds. Meanwhile, the globalization and professionalization of organized crime have spawned new commercialized forms of crime, such as “crime as a service”. INTERPOL has reported that some victims, lured by fake job advertisements, have been trafficked to fraud centres in Southeast Asia, South America, and elsewhere. This combination of technology and human exploitation has given rise to a large-scale, industrialized fraud industry chain.

Generative AI as a hotbed of fabrication

The advent of technologies such as synthetic identity generators and automated cryptocurrency account setup tools has not only expedited money laundering but may also jeopardize the financial system as a whole and facilitate the expansion of transnational criminal networks. Money laundering has become a commercialized service, offering different tiers of operational solutions according to clients’ payment levels. At the layering stage of the money-laundering process, high-end customers can use low-activity accounts and conduct multiple small transactions through separate money mule networks to evade regulation.
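To make the detection challenge concrete, below is a minimal Python sketch of the kind of rule a monitoring system might apply against such structuring: flagging accounts that send many just-below-threshold transfers within a short window. The threshold, window length, and minimum count are illustrative assumptions, not parameters drawn from any real compliance system.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Illustrative thresholds only; real systems tune these to local regulation.
SMALL_TXN_LIMIT = 8_000      # just below a hypothetical reporting threshold
WINDOW = timedelta(days=7)   # look for bursts within any 7-day span
MIN_COUNT = 10               # how many small transfers count as a burst

def flag_structuring(transactions):
    """Flag accounts sending many small transfers in a short window,
    a classic signature of layering through money mule networks.

    `transactions` is an iterable of (account_id, timestamp, amount).
    """
    by_account = defaultdict(list)
    for account, ts, amount in transactions:
        if amount < SMALL_TXN_LIMIT:
            by_account[account].append(ts)

    flagged = set()
    for account, times in by_account.items():
        times.sort()
        left = 0
        for right in range(len(times)):          # sliding 7-day window
            while times[right] - times[left] > WINDOW:
                left += 1
            if right - left + 1 >= MIN_COUNT:
                flagged.add(account)
                break
    return flagged

# Example: a hypothetical mule account making 12 transfers of 7,900 in 3 days.
base = datetime(2025, 1, 1)
txns = [("mule_01", base + timedelta(hours=6 * i), 7_900) for i in range(12)]
print(flag_structuring(txns))  # {'mule_01'}
```

Static rules like this are easy for professional launderers to learn and sidestep, which is precisely why the adaptive, AI-driven monitoring discussed later in this article matters.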

Faced with enormous volumes of wide-ranging data, financial institutions are often hard-pressed to identify irregularities in real time. Criminals, meanwhile, exploit these overwhelming volumes of information to devise hard-to-detect fraud schemes. From initial reconnaissance and the analysis of defence-system vulnerabilities to the optimization of fraud patterns, criminals apply AI widely through large language models (LLMs), video generators, and biometric identification technology. The resulting synthetic content can be misused for criminal purposes such as money laundering and title-deed fraud.

The pervasiveness of technology crimes

Scams conducted through online meetings with impersonated financial consultants can clearly have a severe impact on financial services. Existing security measures, including biometric recognition and third-party data verification, now face formidable challenges from the rapid evolution of AI technology.

It is undeniable that the misuse of generative AI is more serious than ever, particularly in scams and cybercrime. In June 2023, WormGPT, a malicious counterpart of ChatGPT with illicit capabilities, made its debut on the dark web. FraudGPT has since been revealed as an LLM specifically designed to identify system loopholes, write malicious code, and automatically generate phishing emails. Since the introduction of ChatGPT-4, “jailbreak” models such as BlackHatGPT and “jailbreaking-as-a-service” platforms have emerged, further advancing the harmful use of AI.

Financial institutions currently rely heavily on mobile devices and mobile banking apps to conduct digital business. While this has brought significant gains in efficiency and convenience, information security risks have escalated accordingly. From user identity verification to one-time passwords, these procedures typically depend on a single device; if a SIM card is compromised by a virus or malicious app, the entire system could be paralysed. As various studies have pointed out, the potential for criminal exploitation appears almost limitless, with risks ranging from manipulative attacks and infrastructure sabotage to the full weaponization of AI systems.
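To illustrate why single-device procedures concentrate risk, here is a minimal sketch of the standard time-based one-time password algorithm (TOTP, RFC 6238) in Python. The Base32 secret shown is a made-up example; the point is that whoever extracts this one shared secret from a compromised device can generate every future code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret.

    The secret stored on the user's device is the single point of failure
    described above: whoever holds it can reproduce every code.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # current 30-second window
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example with a made-up enrolled secret; any party holding it can mint codes.
print(totp("JBSWY3DPEHPK3PXP"))
```

Splitting verification across independent channels or devices, so that no single compromise yields the secret, is one way to remove this single point of failure.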

As systems grow increasingly complex, AI driven by specific incentives and machine learning could even devise new methods of criminality on its own. Much as this may sound like science fiction, cases of illicit behaviour by AI systems arising from flawed incentive designs have already occurred in the financial sector, yet the existing security framework remains ill-prepared for such risks.

Watertight responses covering all bases

Amid the mushrooming crime techniques enabled by AI, the authorities should tackle the problem through a four-pronged approach encompassing technology, policy, international collaboration, and education.

In terms of technology, AI-driven surveillance systems serve as the first line of defence. Given their proven practicality, tools for detecting deepfake fabrication and abnormal financial transactions should be further integrated into the information security framework to reduce the risk of large-scale attacks. For example, CryptoTrace, a virtual asset analytics platform jointly developed by the University of Hong Kong and the Hong Kong Police Force, effectively traces cryptocurrency transactions linked to criminal cases. In April 2025, the project was awarded a Gold Medal with the Congratulations of the Jury at the International Exhibition of Inventions of Geneva.

In terms of policy, policymakers should establish a regulatory environment that fosters innovation while preventing the abuse of AI. The Report on Responsible AI in Financial Markets, released by the US Commodity Futures Trading Commission in May 2024, emphasizes that despite AI's wide application in risk management and predictive analytics, it has also given rise to risks such as deepfake fabrication, phishing, and algorithm manipulation. The report accordingly recommends the establishment of a corresponding AI risk management framework.

In terms of international collaboration, the cross-boundary nature of AI-related crime means that joint efforts are essential. Both INTERPOL and the United Nations advocate the unification of AI usage standards, the adoption of ethical criteria and punishment mechanisms, and the strengthening of law enforcement across countries.

In terms of education, it is of utmost importance to strengthen the public's ability to identify scams and disinformation. The Scameter series of mobile applications, launched by the Hong Kong Police Force to facilitate real-time scam detection by the public, was awarded an International Press Prize and a Gold Medal at the International Exhibition of Inventions of Geneva in April 2025.

AI as both offence and defence

While scammers wield AI as a weapon, the community can harness it as a shield. On the one hand, criminals use generative models to produce highly convincing fake invoices and fabricated accounts, fuelling money laundering and scams. On the other, AI algorithms can automatically detect forged documents and flag abnormal transaction patterns, substantially improving risk identification and protection. Used appropriately, AI can present opportunities even amid crises.
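As a small illustration of this defensive use of AI, the sketch below trains a generic open-source anomaly detector (scikit-learn's IsolationForest) on synthetic transaction features and flags an out-of-pattern transfer. The features, data, and contamination rate are invented for demonstration and do not represent any institution's actual model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Synthetic feature matrix: [amount, hour_of_day, transfers_last_24h]
normal = np.column_stack([
    rng.lognormal(4.0, 0.5, 1_000),     # typical transaction amounts
    rng.integers(8, 20, 1_000),         # activity during business hours
    rng.poisson(2, 1_000),              # low daily transfer frequency
])
suspicious = np.array([[9_500.0, 3, 40]])  # large amount, 3 a.m., transfer burst

# Train on normal behaviour; the model isolates points unlike the bulk of data.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 marks the pattern as anomalous
```

Unlike the static rule shown earlier, a learned detector adapts to each account's behavioural baseline, which is what makes AI-based monitoring harder for fraudsters to reverse-engineer.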

Although no patterns of crime fully dominated by AI have yet emerged, current technological trends clearly indicate that preventive measures should be implemented immediately to nip potential threats in the bud. To this end, public-private partnership should be made a top priority. Law enforcement agencies, governments, and businesses need to work more closely together to introduce AI-based security systems. Financial institutions and enterprises can build risk management and information security measures into their AI systems, while governments, through policy guidance and funding support, should encourage research and innovation, committing to coordinated responses across sectors and national borders.

Note: https://www.nasdaq.com/global-financial-crime-report

Dr Maurice Tse
Principal Lecturer in Finance, HKU Business School, and ex officio Executive Committee Member of the Heung Yee Kuk New Territories

Mr Clive Ho
Lecturer, HKU SPACE Community College

(This article was also published on 25 June 2025 in the "龍虎山下" column of the Hong Kong Economic Journal.)