Angela Wang & Co.

Protection of Personal Data in the Age of Artificial Intelligence
28 August 2024

With the rapid innovation of artificial intelligence (“AI”) and its increasing use in business practices, concerns have been raised about the ethical collection, storage and use of personal data in AI systems. In June 2024, the Hong Kong Privacy Commissioner for Personal Data (“PCPD”) published the “Artificial Intelligence: Model Personal Data Protection Framework” (“Model Framework”) to address some of these concerns. The Model Framework provides a set of recommendations and best practices on the governance of AI to protect personal data privacy, aimed at organisations which procure, implement and use any type of AI system. It also aims to assist organisations in complying with the Ethical Principles for AI set out in the “Guidance on the Ethical Development and Use of Artificial Intelligence” published by the PCPD in 2021, as well as other requirements under the Personal Data (Privacy) Ordinance (“PDPO”).

The key measures recommended by the Model Framework are as follows:

1. AI Strategy and Governance

Organisations should establish an internal AI governance strategy, generally comprising (i) an AI strategy, (ii) governance considerations for procuring AI solutions, and (iii) an AI governance committee to lead the process, so as to ensure that AI systems are procured, implemented and used ethically and lawfully. AI training should also be provided to employees so that they have the appropriate knowledge, skills and awareness to work with AI, comply with data protection laws, regulations and internal policies, be aware of potential cybersecurity risks, and be able to use general AI technology.

2. Conduct Risk Assessment and Human Oversight

To identify, analyse and evaluate the risks in the procurement, use and management of AI systems, the Model Framework advises conducting a comprehensive risk assessment, taking into account factors such as the requirements of the PDPO; the volume, sensitivity and quality of the data; the security of the data; the probability that privacy risks will arise; and the potential severity of the harm they may cause.

Risk management measures should be proportionate to the risks and should incorporate an appropriate level of human oversight. Where criteria conflict and trade-offs must be made between them, organisations should consider the context in which they are using AI to create content when deciding how to make those trade-offs.

3. Customisation of AI Models and Implementation and Management of AI Systems

When customising and implementing an AI system, organisations should ensure that the system complies with the requirements of the PDPO, privacy obligations and ethical requirements. Organisations are advised to minimise the amount of personal data involved, ensure the quality of the data so as to produce accurate and unbiased results, and properly document the handling of data. Other suggested measures include testing the AI model for errors and performing rigorous user acceptance testing.

For system security and data security, organisations should consider measures such as red teaming to minimise the risk of attacks against machine learning models, internal guidelines for staff on acceptable inputs and permitted or prohibited prompts to be entered into the AI system, and mechanisms to enable traceability and auditability of the AI system’s output. Establishing an AI incident response plan is also recommended, so that potential AI incidents can be monitored, addressed and recovered from.

The AI system should be continuously monitored and reviewed to identify and address new risks, and internal audits should be conducted regularly to ensure that the use of the AI system continues to align with the organisation’s AI strategy and policies.

4. Communication and Engagement with Stakeholders

Organisations should communicate transparently with stakeholders and engage with them regularly when using AI. Suggested measures include handling data access, correction and opt-out requests; providing feedback channels; providing explanations for decisions made by, and output generated by, AI; and disclosing the use of the AI system and its risks.

Conclusion

For organisations already using, or considering using, AI in their operations, it is important to procure, implement and use AI systems in a responsible manner and in compliance with the PDPO and the Ethical Principles for AI, in order to avoid data risks and breaches as well as any potential criminal or civil liability.

If you have any questions on the above eNews or relating to data privacy law, experienced lawyers in our Intellectual Property team would be happy to assist you.

