The Impact of Generative Artificial Intelligence on Cybersecurity in Hong Kong

Digital infrastructures and data are becoming vital assets for many companies and organisations. Digital infrastructures encompass hardware (servers, data centres, networking equipment), cloud services, and software platforms that support digital activities.


1. Background

Digital infrastructures and data are becoming vital assets for many companies and organisations. Digital infrastructures encompass hardware (servers, data centres, networking equipment), cloud services, and software platforms that support digital activities. Data may include customers’ private data and companies’ internal data. Malfunctioning of these infrastructures or leakage of data may result in huge financial losses.

Cybersecurity incidents in Hong Kong have been affecting both public and private organisations. The observations from publicly reported cases between 2020 and 2025 indicate a consistent rise in the frequency, intensity, and complication of cybersecurity incidents. Recent major incidents include the cyberattack on a convenience-store chain in September 2025, which disrupted electronic payments at more than 400 retail sites,[1] and the data breach of a luxury fashion brand, which affected more than 400,000 individuals.[2] These incidents underscore the growing vulnerability of business activities and consumer data to security breaches. There are also multiple ransomware assaults aimed at non-government and cultural organisations. All these cases reveal how civil-society groups with minimal technical resources have become one of the primary targets for exploitation.

At the same time, the Office of the Privacy Commissioner for Personal Data (PCPD) reported that several government agencies were involved in data leakages or data design flaw, affecting over thousands of individuals. [3] [4] The Digital Policy Office (DPO) and the Office of the Government Chief Information Officer (OGCIO) also reveal that there were multiple ransomware incidents in government agencies in recent years. Though authorities claim no sensitive data was compromised, these repeated events reveal that stronger cybersecurity measures are needed for the public sector.  

Cybersecurity has become a key governance issue requiring substantial attention and effort. Recent incidents underscore the persistent risk faced by both the public and private sectors due to reasons such as deficiencies in system design or oversight by humans. Both public agencies and private organisations need to strengthen their defence against cybersecurity incidents and take appropriate measures to get prepared, in order to improve accountability and mitigate financial and reputational threats.

The recent surge in the adoption of generative artificial intelligence (GenAI)—systems capable of producing human-like text, code, images, and audio—presents an opportunity for unparalleled productivity and innovation. However, this revolutionary technology simultaneously functions as a powerful, double-edged sword, profoundly reshaping the cyber threat landscape. For a highly concentrated and digitally dependent city like Hong Kong, the intersection of rapid GenAI adoption and cyber risk necessitates an urgent and strategic response.

2. Key Cybersecurity Threats in the GenAI Era

The primary GenAI-driven cybersecurity threats are characterised by their high efficiency, low cost of execution, and unprecedented ability to bypass human and automated security filters. These threats can be clustered into two major domains: the weaponisation of language models for psychological manipulation and the democratisation of advanced malicious code.

2.1. The Weaponisation of Social Engineering and Deepfakes

GenAI’s ability to produce highly contextual and linguistically flawless content has fundamentally amplified the scale and effectiveness of social engineering attacks, moving beyond generic spam to highly targeted deception. Traditional phishing campaigns were often detectable by poor grammar, foreign language idioms, or generic requests. GenAI erases these red flags. Attackers can now leverage large language models (LLMs) to analyse vast amounts of publicly-available corporate and personal data—such as online profiles, press releases, and social media posts—to construct highly-personalised emails and messages.

For the private sector, especially financial institutions and law firms, this translates to refined attacks. GenAI can mimic a senior colleague’s or client’s tone, dialect, and communication patterns, making requests for wire transfers, data disclosure, or credential theft appear authentic. The volume of phishing incidents in Hong Kong, already escalating significantly, is poised to surge further, with the attacks becoming more functionally indistinguishable from legitimate communication, thereby eroding the reliability of human judgement as a primary security layer.

One of the most concerning GenAI-driven social engineering threats is the weaponisation of deepfakes—synthetic media (video, audio, or images) that convincingly replicate specific individuals. Hong Kong experienced a globally reported incident in January 2024 where an employee of a multinational company was duped into transferring approximately HK$200 million after participating in a video conference with deepfake replicas of the company’s Chief Financial Officer and other colleagues.[5] This incident highlights several critical vulnerabilities for both the public and private sectors. Deepfakes bypass conventional human and technological identity verification processes (e.g., video calls, voice authentication for high-value transactions). The attacks exploit the inherent organisational trust and deference granted to senior management or government officials, especially in high-pressure financial or sensitive decision-making contexts. As GenAI tools become cheaper and more accessible, creating high-quality, convincing deepfakes is no longer limited to attackers with ample resources but is available to other criminal groups globally. This accessibility makes it easier to target employees in various public and private organizations.

2.2. Democratisation of Advanced Malware

GenAI is rapidly lowering the technical skill ceiling required for advanced cyber criminality, turning sophisticated attack methodologies into accessible services. Attackers are already utilising GenAI models, sometimes trained specifically on malicious datasets, such as WormGPT and DarkBard,[6] to generate malicious codes. These tools facilitate the creation of polymorphic malware—a code that autonomously changes its structure and signature with each instance or execution. This capability allows malicious payloads to evade traditional, signature-based antivirus and security systems, a core component of legacy defences still prevalent in many organisations. This threatens the public sector’s operational resilience, particularly in managing critical infrastructure where stability and continuity are paramount. Attacks on systems controlling transport, energy distribution, and public healthcare could be executed by attackers with relatively less expertise.

2.3 LLM-Specific Vulnerabilities and Data Leakage

Beyond utilising GenAI to create external threats, organisations that deploy or use LLMs internally face inherent risks from the models themselves. One example of these model-specific vulnerabilities is prompt injection, where attackers can manipulate prompts and other model inputs to hijack the model’s objective, causing it to reveal sensitive information, generate malicious code, or perform unauthorised actions. Attackers may also subtly corrupt the training data set of proprietary models, leading to biased, compromised, or exploitable model behaviour after deployment.

Another immediate risk in using LLM is unintentional data leakage. Employees using public GenAI tools for work, such as summarising documents or debugging codes, risk uploading confidential or sensitive data that is then processed and potentially used to refine the LLM, effectively leaking secrets or user privacy to third-party providers. A survey indicating that only a small percentage of enterprises in Hong Kong have established an AI security policy underscores this risk.[7]

3. Recommendations for Comprehensive Cyber Defence

Based on our analyses, we offer the following recommendations to strengthen cybersecurity in Hong Kong.

3.1 Using AI for Cyber Defence

It is important to leverage AI and machine learning to combat GenAI-powered cybersecurity threats. The government and the private sector should collaboratively invest in deploying machine learning and AI tools specifically designed for real-time threat detection and forensic analysis. These tools must be capable of analyzing behavioural anomalies and identifying sophisticated threats that bypass security systems. Funding should be provided to encourage and support research on using AI for cybersecurity defence.

The government should also consider mandating or incentivising the use of biometric verification and deepfake detection technologies, especially in areas involving critical infrastructure, high financial stakes, privileged access, or sensitive communication within both government and commercial environments. Deepfake detection systems based on biometrics can be designed to detect deepfake video or audio attempts at authentication, addressing the primary deepfake risk.

In addition, natural language process and GenAI can be employed to develop security tools specifically designed to analyse incoming communications (e.g., email, chat) for specific style or features, which may signal a GenAI-crafted spear-phishing attempt.

3.2 Preventing GenAI data leakage and threats

It is important to promote in both the public and private sectors the importance in using GenAI models responsibly and ensuring that they are well protected, to avoid being a source of data leakage. As the data used to train and fine-tune internal LLMs is a possible avenue of attack and data leakage, it is desirable to prioritise the use of anonymised or fully sanitised data sets for internal model training to minimise the risk associated with data poisoning attacks and data leakage. Data leakage prevention modules can also be directly integrated into all GenAI interfaces and pipelines. These modules should automatically mask or reject any personally identifiable information or confidential data before it can be inputted into an external LLM service.

In addition, GenAI and LLM deployment must adhere to the highest standards of secure software development. Before deploying any high-risk GenAI model, mandatory adversarial testing must be performed. This involves simulating attacks, including prompt injection, data poisoning, and attempts to extract sensitive training data. This will help identify and patch vulnerabilities proactively. Continuous monitoring tools should be used to track the models’ performance and output integrity. If a model’s performance or compliance degrades over time, it may signal a subtle, ongoing adversarial attack or unintentional bias, necessitating immediate human intervention and recalibration. Such practices should be promoted and incentivised in both the public and private sectors.

3.3 Training and awareness

Because of the easy access to GenAI tools, many employees may attempt to use publicly-available GenAI tools rather than those approved by their employer. Clear policies need to be established to specify what tasks can use GenAI tools, what tools can be used, and what data can be provided to these tools.

In addition, since a majority of successful cyber attacks exploit human weakness, mandatory and continuous training is essential. Training must be dynamic, focusing on current threats such as recognising deepfake media and identifying highly-personalised phishing. Systematic training on responsible and ethical use of GenAI tools and the risks associated with the inappropriate use of such tools (e.g., the explicit danger of inputting confidential data into public LLMs) is also necessary.

Organisations must foster a supportive culture where employees feel safe and empowered to report suspicious activity or accidental data exposure without fear of punitive action. This transforms the workforce from being a security vulnerability into a front-line defence layer.

It is noted that many organisations in Hong Kong, especially small-and-medium enterprises and non-government organisations, do not have the resources to conduct such training and security measures implementation. The government can provide assistance, which can include training workshops and funding focused on GenAI and cybersecurity.

Reference

[1] https://www.scmp.com/news/hong-kong/hong-kong-economy/article/3327078/store-chain-circle-k-confirms-cyberattack-hong-kong-apologises-customers

[2] https://www.reuters.com/sustainability/boards-policy-regulation/hong-kong-investigates-louis-vuitton-data-leak-affecting-419000-customers-2025-07-21/

[3] https://www.pcpd.org.hk/english/news_events/media_statements/press_20250312.html

[4] https://www.info.gov.hk/gia/general/202412/09/P2024120900460.htm

[5] https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk

[6] Schröer, S. L., Pajola, L., Castagnaro, A., Apruzzese, G., & Conti, M. (2025). Exploiting AI for Attacks: On the Interplay between Adversarial AI and Offensive AI. IEEE Intelligent Systems.

[7] http://www.hkpc.org/en/about-us/media-centre/press-releases/2025/ai-readiness-in-workplace-survey-2025

Translation

生成式人工智能對香港網絡安全之影響


周昭瀧


1. 背景


對不少企業和機構而言,數碼基建與數據正逐漸成為重要資產。數碼基建涵蓋硬體(伺服器、數據中心、網路設備)、雲端服務,以及支援數碼活動的軟件平台。數據可能包括客戶的個人資料與企業的內部資料;若這些基礎設施發生故障或資料外洩,可能導致巨額財務損失。

香港的公私營機構一直受網絡安全事故影響。根據 2020 至 2025 年公開報告的案例,網絡安全事故的頻率、強度與複雜性持續上升。近期重大事故包括2025 年 9 月針對某連鎖便利店的網絡攻擊,導致超過 400 間零售店的電子支付系統中斷,[1] 以及某奢華時裝品牌的資料外洩,影響超過 40 萬人。[2] 這些事故凸顯了商業活動與消費者資料在保安漏洞下的脆弱性。此外,還有多宗針對非政府及文化機構的勒索軟件攻擊,顯示技術資源有限的民間團體已成為其中一個主要的攻擊目標。

同時,個人資料私隱專員公署報告顯示,數個政府機構涉及資料外洩或資料設計缺陷,影響數千人。[3],[4] 數字政策辦公室及政府資訊科技總監辦公室亦指出,近年政府機構發生多宗勒索軟件事故。雖然當局聲稱未有敏感資料外洩,但這類事故重複發生,顯示公共部門需強化網絡安全措施。

網絡安全已成為一項必須多加關注的重要管治議題。近期事故凸顯公私營界別因系統設計缺陷或人為疏忽而面臨持續風險。公私營機構必須加強抵禦網絡安全事故,並採取適當措施做好準備,以提升問責性並減低財務與聲譽威脅。

近期生成式人工智能的使用激增;這類系統能生成類似人類製作的文字、程式碼、圖像與音訊,帶來前所未有的生產力與創新機會。然而,這項革命性技術同時也是一把威力強大的雙刃劍,正在影響網絡安全威脅。對於像香港如此高度集中且依賴數碼科技的城市而言,亟須以策略性方式,緊急應對生成式人工智能的快速使用與網絡風險的交集。

2.生成式人工智能時代的主要網絡安全威脅


生成式人工智能驅動的網絡安全威脅,主要特徵在於高效率、低執行成本,以及前所未有的能力來繞過人工與自動化的保安過濾機制。這些威脅可分為兩大領域:用於心理操控的語言模型武器化,以及先進惡意程式碼的大眾化。

2.1.社交工程與深偽技術武器化


生成式人工智能能夠產出高度情境化且語言上極為精準的內容,這種能力從根本上擴大了社交工程攻擊的規模與效能,使其超越一般濫發訊息,進化為高度針對性的欺詐行為。傳統釣魚攻擊通常可透過語法錯誤、外語習慣用語或過於廣泛的要求來識別,生成式人工智能卻消除了這些警示信號。攻擊者目前可以利用大型語言模型分析大量公開的企業與個人資料,例如網上個人檔案、新聞稿和社交媒體貼文,以構建高度個人化的電子郵件與訊息。

對私人企業而言,尤其是金融機構與律師事務所,這意味着攻擊手法更加精密。生成式人工智能能夠模仿高層同事或客戶的語氣、方言及溝通模式,而令要求匯款、資料披露或憑證竊取的訊息看似真實可信。香港的釣魚攻擊事故數量已顯著上升,未來更可能進一步激增,且攻擊手法將與正常溝通幾乎無法區分,從而削弱人類判斷作為主要保安防線的可靠性。

在生成式人工智能驅動的社交工程威脅中,最令人關注的其中一項是深偽技術武器化,亦即足以逼真複製特定個人的合成媒體(影片、音訊或圖像)。2024 年 1 月,香港發生一宗全球廣為報道的事故:某跨國公司員工與利用深偽技術製作的公司財務總監及其他同事「複製影像」進行視像會議,其後被騙匯款約 2 億港元。[5] 此一事故凸顯公私營界別幾項關鍵漏洞。深偽技術能繞過傳統的人為與技術核實身分程序(如視像通話、針對高額交易的語音認證),並利用機構對高層管理人員或政府官員的信任與尊重,尤其是在高壓的財務或敏感決策情境中。隨着生成式人工智能工具變得更廉價且易於取得,製作高品質、逼真的深偽內容不再僅限於資源充足的攻擊者,而是全球其他犯罪集團也能輕易使用,更便於向公共與私人機構員工下手。

2.2.先進惡意軟件大眾化


生成式人工智能正迅速降低進行精密網絡犯罪所需的技術門檻,將複雜的攻擊方法轉化為可輕易取得的服務。攻擊者已開始使用生成式人工智能模型,有時甚至專門以惡意數據集進行訓練,例如 WormGPT 和 DarkBard,[6] 從而生成惡意程式碼。這些工具便於製作多形變種惡意軟件,此類程式碼能在每次實例或執行時自動改變其結構與特徵。這種能力使惡意載荷能夠避開傳統的、基於特徵碼的防毒與保安系統,許多機構仍然沿用這些系統作為核心防禦機制。此現象對公共部門的營運韌性構成威脅,尤其是在管理關鍵基礎設施時,穩定性與持續性至關重要。攻擊者即使技術能力相對較低,亦足以對控制交通、能源分配及公共醫療的系統發動攻擊。

2.3大型語言模型特有的漏洞與資料外洩


除了利用生成式人工智能製造外部威脅外,在內部配置或使用大型語言模型的機構也面臨模型本身的固有風險。其中一個模型特有的漏洞是提示注入攻擊(prompt injection),即攻擊者可藉以操縱提示詞或其他模型輸入,劫持模型的目標,使其洩露敏感資訊、生成惡意程式碼,或執行未經授權的操作。攻擊者還可能在專有模型的訓練數據集中微妙地加以破壞,導致模型在部署後出現偏差、受損或可被利用的行為。

另一個使用大型語言模型的即時風險,就是資料外洩。員工工作時使用公共生成式人工智能工具(例如總結文件或為程式碼「除蟲」),可能會上傳機密或敏感資料,這些資料隨後或會被用於優化大型語言模型,有機會將機密或用戶私隱洩露給第三方供應商。一項調查顯示,香港僅有少數企業已制定人工智能安全政策,足見有關風險的嚴重性。[7]

3.有關全面網絡防禦的建議


根據上述分析,我們提出三大加強香港網絡安全的建議。

3.1運用人工智能進行網絡防禦


我們必須善用人工智能與機器學習,以打擊由生成式人工智能驅動的網絡安全威脅。政府與私營界別應協力投資,使用專門設計用於實時威脅偵測與法證分析的機器學習與人工智能工具。這些工具必須能夠分析行為異常和識別可繞過保安系統的複雜威脅。政府同時應提供資助,以鼓勵和支持利用人工智能進行網絡安全防禦的研究。

政府亦應考慮強制或提供誘因,推動特別在涉及關鍵基礎設施、高度金融風險、特權存取或政府與商業環境中的敏感通訊時,使用生物特徵識別及深偽檢測技術。基於生物特徵的深偽檢測系統可設計用來識別深偽影片或音訊驗證,從而應對深偽技術帶來的主要風險。

此外,自然語言處理與生成式人工智能可用作開發專為分析通訊內容(如電子郵件、聊天訊息)而設的保安工具,透過檢測特定風格或特徵,辨別是否生成式人工智能製作的針對性釣魚攻擊。

3.2 防止生成式人工智能資料外洩與威脅


政府必須在公私營界別推廣如何有責任地使用生成式人工智能模型,確保其安全,避免成為資料外洩的源頭。由於用於訓練與微調內部大型語言模型的資料可能成為攻擊與外洩的途徑,因此當進行內部模型訓練時,應優先使用匿名化的數據,以降低投毒攻擊與資料外洩的風險。此外,資料外洩防護模組可直接集成至所有生成式人工智能介面與流程中,這些模組應自動屏蔽或拒絕任何可識別的個人資訊或機密資料,以免有關資料被輸入至外部大型語言模型服務。

同時,使用生成式人工智能與大型語言模型必須遵循安全軟件開發的最高標準。在配置任何高風險的生成式人工智能模型前,必須進行強制性對抗測試,其中涉及模擬攻擊,包括提示注入攻擊、資料投毒,以及嘗試提取訓練中的敏感資料,以便主動識別並修補漏洞。機構應使用持續監控工具追蹤模型的效能與輸出完整性,若模型效能或合規性隨時間下降,就可能表示存在隱蔽的持續性對抗攻擊或非故意偏差,需立即進行人為干預與重新校準。政府應在公私營界別中積極推廣這些做法並提供誘因。

3.3 培訓與意識提升


由於生成式人工智能工具易於取得,許多員工可能傾向使用公開可用的軟件,而非僱主所批准的。因此,機構必須制定明確政策,規範哪些任務可以使用生成式人工智能工具、哪些工具可以使用,以及可以提供什麼資料給這些工具。

此外,由於大多數成功的網絡攻擊都利用人類弱點,強制且持續的培訓至關重要。培訓須聚焦於最新的安全威脅,例如辨識深偽媒體及識別高度個人化的網絡釣魚攻擊。同時,機構還需進行有系統的員工培訓,使員工了解以負責任且合乎道德的方式使用生成式人工智能工具的必要,以及不當使用這些工具的相關風險(例如將機密資料輸入公共大型語言模型的顯著危險)。

各機構都必須培養互相支援的文化,使員工能夠在有安全感和信心的情況下,報告可疑活動或意外資料外洩,而不必擔心遭受懲罰。這有助員工從安全隱患變為防禦前線。

值得注意的是,香港不少機構(尤其是中小型企業及非政府組織)缺乏資源來進行此類培訓與實施安全措施。政府應考慮提供協助,包括舉辦培訓工作坊及提供專注於生成式人工智能與網絡安全的資助。

參考文獻

[1] https://www.scmp.com/news/hong-kong/hong-kong-economy/article/3327078/store-chain-circle-k-confirms-cyberattack-hong-kong-apologises-customers

[2] https://www.reuters.com/sustainability/boards-policy-regulation/hong-kong-investigates-louis-vuitton-data-leak-affecting-419000-customers-2025-07-21/

[3] https://www.pcpd.org.hk/english/news_events/media_statements/press_20250312.html

[4] https://www.info.gov.hk/gia/general/202412/09/P2024120900460.htm

[5] https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk

[6] Schröer, S. L., Pajola, L., Castagnaro, A., Apruzzese, G., & Conti, M. (2025). Exploiting AI for Attacks: On the Interplay between Adversarial AI and Offensive AI. IEEE Intelligent Systems.

[7] http://www.hkpc.org/en/about-us/media-centre/press-releases/2025/ai-readiness-in-workplace-survey-2025