1. Background
Digital infrastructures and data are becoming vital assets for many companies and organisations. Digital infrastructures encompass hardware (servers, data centres, networking equipment), cloud services, and software platforms that support digital activities. Data may include customers’ private data and companies’ internal data. Malfunctioning of these infrastructures or leakage of data may result in huge financial losses.
Cybersecurity incidents in Hong Kong have been affecting both public and private organisations. The observations from publicly reported cases between 2020 and 2025 indicate a consistent rise in the frequency, intensity, and complication of cybersecurity incidents. Recent major incidents include the cyberattack on a convenience-store chain in September 2025, which disrupted electronic payments at more than 400 retail sites,[1] and the data breach of a luxury fashion brand, which affected more than 400,000 individuals.[2] These incidents underscore the growing vulnerability of business activities and consumer data to security breaches. There are also multiple ransomware assaults aimed at non-government and cultural organisations. All these cases reveal how civil-society groups with minimal technical resources have become one of the primary targets for exploitation.
At the same time, the Office of the Privacy Commissioner for Personal Data (PCPD) reported that several government agencies were involved in data leakages or data design flaw, affecting over thousands of individuals. [3] [4] The Digital Policy Office (DPO) and the Office of the Government Chief Information Officer (OGCIO) also reveal that there were multiple ransomware incidents in government agencies in recent years. Though authorities claim no sensitive data was compromised, these repeated events reveal that stronger cybersecurity measures are needed for the public sector.
Cybersecurity has become a key governance issue requiring substantial attention and effort. Recent incidents underscore the persistent risk faced by both the public and private sectors due to reasons such as deficiencies in system design or oversight by humans. Both public agencies and private organisations need to strengthen their defence against cybersecurity incidents and take appropriate measures to get prepared, in order to improve accountability and mitigate financial and reputational threats.
The recent surge in the adoption of generative artificial intelligence (GenAI)—systems capable of producing human-like text, code, images, and audio—presents an opportunity for unparalleled productivity and innovation. However, this revolutionary technology simultaneously functions as a powerful, double-edged sword, profoundly reshaping the cyber threat landscape. For a highly concentrated and digitally dependent city like Hong Kong, the intersection of rapid GenAI adoption and cyber risk necessitates an urgent and strategic response.
2. Key Cybersecurity Threats in the GenAI Era
The primary GenAI-driven cybersecurity threats are characterised by their high efficiency, low cost of execution, and unprecedented ability to bypass human and automated security filters. These threats can be clustered into two major domains: the weaponisation of language models for psychological manipulation and the democratisation of advanced malicious code.
2.1. The Weaponisation of Social Engineering and Deepfakes
GenAI’s ability to produce highly contextual and linguistically flawless content has fundamentally amplified the scale and effectiveness of social engineering attacks, moving beyond generic spam to highly targeted deception. Traditional phishing campaigns were often detectable by poor grammar, foreign language idioms, or generic requests. GenAI erases these red flags. Attackers can now leverage large language models (LLMs) to analyse vast amounts of publicly-available corporate and personal data—such as online profiles, press releases, and social media posts—to construct highly-personalised emails and messages.
For the private sector, especially financial institutions and law firms, this translates to refined attacks. GenAI can mimic a senior colleague’s or client’s tone, dialect, and communication patterns, making requests for wire transfers, data disclosure, or credential theft appear authentic. The volume of phishing incidents in Hong Kong, already escalating significantly, is poised to surge further, with the attacks becoming more functionally indistinguishable from legitimate communication, thereby eroding the reliability of human judgement as a primary security layer.
One of the most concerning GenAI-driven social engineering threats is the weaponisation of deepfakes—synthetic media (video, audio, or images) that convincingly replicate specific individuals. Hong Kong experienced a globally reported incident in January 2024 where an employee of a multinational company was duped into transferring approximately HK$200 million after participating in a video conference with deepfake replicas of the company’s Chief Financial Officer and other colleagues.[5] This incident highlights several critical vulnerabilities for both the public and private sectors. Deepfakes bypass conventional human and technological identity verification processes (e.g., video calls, voice authentication for high-value transactions). The attacks exploit the inherent organisational trust and deference granted to senior management or government officials, especially in high-pressure financial or sensitive decision-making contexts. As GenAI tools become cheaper and more accessible, creating high-quality, convincing deepfakes is no longer limited to attackers with ample resources but is available to other criminal groups globally. This accessibility makes it easier to target employees in various public and private organizations.
2.2. Democratisation of Advanced Malware
GenAI is rapidly lowering the technical skill ceiling required for advanced cyber criminality, turning sophisticated attack methodologies into accessible services. Attackers are already utilising GenAI models, sometimes trained specifically on malicious datasets, such as WormGPT and DarkBard,[6] to generate malicious codes. These tools facilitate the creation of polymorphic malware—a code that autonomously changes its structure and signature with each instance or execution. This capability allows malicious payloads to evade traditional, signature-based antivirus and security systems, a core component of legacy defences still prevalent in many organisations. This threatens the public sector’s operational resilience, particularly in managing critical infrastructure where stability and continuity are paramount. Attacks on systems controlling transport, energy distribution, and public healthcare could be executed by attackers with relatively less expertise.
2.3 LLM-Specific Vulnerabilities and Data Leakage
Beyond utilising GenAI to create external threats, organisations that deploy or use LLMs internally face inherent risks from the models themselves. One example of these model-specific vulnerabilities is prompt injection, where attackers can manipulate prompts and other model inputs to hijack the model’s objective, causing it to reveal sensitive information, generate malicious code, or perform unauthorised actions. Attackers may also subtly corrupt the training data set of proprietary models, leading to biased, compromised, or exploitable model behaviour after deployment.
Another immediate risk in using LLM is unintentional data leakage. Employees using public GenAI tools for work, such as summarising documents or debugging codes, risk uploading confidential or sensitive data that is then processed and potentially used to refine the LLM, effectively leaking secrets or user privacy to third-party providers. A survey indicating that only a small percentage of enterprises in Hong Kong have established an AI security policy underscores this risk.[7]
3. Recommendations for Comprehensive Cyber Defence
Based on our analyses, we offer the following recommendations to strengthen cybersecurity in Hong Kong.
3.1 Using AI for Cyber Defence
It is important to leverage AI and machine learning to combat GenAI-powered cybersecurity threats. The government and the private sector should collaboratively invest in deploying machine learning and AI tools specifically designed for real-time threat detection and forensic analysis. These tools must be capable of analyzing behavioural anomalies and identifying sophisticated threats that bypass security systems. Funding should be provided to encourage and support research on using AI for cybersecurity defence.
The government should also consider mandating or incentivising the use of biometric verification and deepfake detection technologies, especially in areas involving critical infrastructure, high financial stakes, privileged access, or sensitive communication within both government and commercial environments. Deepfake detection systems based on biometrics can be designed to detect deepfake video or audio attempts at authentication, addressing the primary deepfake risk.
In addition, natural language process and GenAI can be employed to develop security tools specifically designed to analyse incoming communications (e.g., email, chat) for specific style or features, which may signal a GenAI-crafted spear-phishing attempt.
3.2 Preventing GenAI data leakage and threats
It is important to promote in both the public and private sectors the importance in using GenAI models responsibly and ensuring that they are well protected, to avoid being a source of data leakage. As the data used to train and fine-tune internal LLMs is a possible avenue of attack and data leakage, it is desirable to prioritise the use of anonymised or fully sanitised data sets for internal model training to minimise the risk associated with data poisoning attacks and data leakage. Data leakage prevention modules can also be directly integrated into all GenAI interfaces and pipelines. These modules should automatically mask or reject any personally identifiable information or confidential data before it can be inputted into an external LLM service.
In addition, GenAI and LLM deployment must adhere to the highest standards of secure software development. Before deploying any high-risk GenAI model, mandatory adversarial testing must be performed. This involves simulating attacks, including prompt injection, data poisoning, and attempts to extract sensitive training data. This will help identify and patch vulnerabilities proactively. Continuous monitoring tools should be used to track the models’ performance and output integrity. If a model’s performance or compliance degrades over time, it may signal a subtle, ongoing adversarial attack or unintentional bias, necessitating immediate human intervention and recalibration. Such practices should be promoted and incentivised in both the public and private sectors.
3.3 Training and awareness
Because of the easy access to GenAI tools, many employees may attempt to use publicly-available GenAI tools rather than those approved by their employer. Clear policies need to be established to specify what tasks can use GenAI tools, what tools can be used, and what data can be provided to these tools.
In addition, since a majority of successful cyber attacks exploit human weakness, mandatory and continuous training is essential. Training must be dynamic, focusing on current threats such as recognising deepfake media and identifying highly-personalised phishing. Systematic training on responsible and ethical use of GenAI tools and the risks associated with the inappropriate use of such tools (e.g., the explicit danger of inputting confidential data into public LLMs) is also necessary.
Organisations must foster a supportive culture where employees feel safe and empowered to report suspicious activity or accidental data exposure without fear of punitive action. This transforms the workforce from being a security vulnerability into a front-line defence layer.
It is noted that many organisations in Hong Kong, especially small-and-medium enterprises and non-government organisations, do not have the resources to conduct such training and security measures implementation. The government can provide assistance, which can include training workshops and funding focused on GenAI and cybersecurity.
Reference
[3] https://www.pcpd.org.hk/english/news_events/media_statements/press_20250312.html
[4] https://www.info.gov.hk/gia/general/202412/09/P2024120900460.htm
[5] https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk
[6] Schröer, S. L., Pajola, L., Castagnaro, A., Apruzzese, G., & Conti, M. (2025). Exploiting AI for Attacks: On the Interplay between Adversarial AI and Offensive AI. IEEE Intelligent Systems.




