HPCSA Booklet 20, published in September 2024, sets ethical guidelines for the use of artificial intelligence by registered health practitioners. This guide unpacks the principles, the practical compliance challenges, and the liability framework.
A South African Framework for Clinical AI
The Health Professions Council of South Africa published Booklet 20, Ethical Guidelines on the Use of Artificial Intelligence, in September 2024. The booklet is among the first formal regulatory frameworks for clinical AI in South Africa and complements the existing HPCSA ethical guidelines on telehealth (Booklet 10), informed consent (Booklet 4), and confidentiality (Booklet 5).
For HealthTech operators developing AI-assisted diagnostic tools, clinical decision support systems, and AI-enabled patient triage platforms, Booklet 20 is the reference framework against which the ethical practice of registered healthcare practitioners using AI tools is measured. Compliance shapes both how platforms are designed and how practitioners deploy them.
The Ethical Foundation
Booklet 20 anchors AI use in healthcare to the foundational ethical principles applicable to all clinical practice. Patient interests remain primary. AI deployment must serve patient health and wellbeing, not displace clinical judgement or shift accountability to the system. Patient confidentiality, privacy, choice, and dignity must be respected throughout the AI deployment.
The booklet emphasises that AI does not change the practitioner's professional duty. The practitioner remains accountable for clinical decisions made with AI support, and the practitioner's professional indemnity exposure attaches to the AI-supported decision in the same way as to a non-AI clinical decision.
Categories of Clinical AI
AI in clinical practice spans multiple modalities including diagnostic AI (image interpretation, pattern recognition in clinical data), clinical decision support (treatment recommendations, drug interaction screening, dose adjustment), risk stratification (sepsis prediction, deterioration risk, readmission risk), administrative AI (clinical documentation, scheduling, billing), and patient-facing AI (chatbots, triage tools, conversational interfaces).
Booklet 20 applies across all of these categories, with the rigour required of a deployment scaling with its depth of clinical impact: a diagnostic tool that shapes a treatment decision demands more than an administrative scheduling assistant.
Informed Consent for AI-Assisted Care
Where AI materially affects the diagnostic or treatment pathway, the patient is entitled to know that AI is being used. Booklet 20 anticipates that informed consent for AI-assisted care should address the use of AI in the consultation or care process, the nature of the AI's role (informing the practitioner versus determining the outcome), the limitations of the AI tool, the alternatives available without AI, and the data flow, including any AI training implications.
For platform operators, the consent framework must be capable of presenting these elements clearly to patients without overwhelming them with technical detail. Layered consent (a short summary plus access to a more detailed disclosure) is an effective design.
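The layered design can be made concrete with a simple data model. The sketch below is illustrative only: the class and field names are our own, not prescribed by Booklet 20, though the fields mirror the consent elements listed above.

```python
from dataclasses import dataclass


@dataclass
class AIConsentDisclosure:
    """One layered-consent record for an AI-assisted consultation.

    Field names are illustrative; the elements track the Booklet 20
    consent topics (AI role, limitations, alternatives, data flow).
    """
    summary: str                    # short plain-language layer shown first
    ai_role: str                    # informs the practitioner vs determines the outcome
    limitations: list               # known limits of the tool
    alternatives_without_ai: str    # the non-AI care pathway
    data_flow: str                  # including any AI training implications

    def short_layer(self) -> str:
        """The brief summary presented to every patient up front."""
        return f"{self.summary} (Select 'Learn more' for full details.)"

    def detailed_layer(self) -> str:
        """The fuller disclosure available on request."""
        return "\n".join([
            f"Role of the AI: {self.ai_role}",
            "Known limitations: " + "; ".join(self.limitations),
            f"Alternatives without AI: {self.alternatives_without_ai}",
            f"Data flow: {self.data_flow}",
        ])
```

In practice the short layer would sit in the consultation booking flow, with the detailed layer one tap away, so that the record of what was disclosed is preserved alongside the consent itself.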
Algorithmic Accountability
Booklet 20 places weight on algorithmic accountability, reflecting international consensus that clinical AI must be explainable, validated, and monitored. Practical compliance elements include validation against representative South African patient populations (avoiding distributional shift from training populations to deployment populations), explainability sufficient for the practitioner to understand and where appropriate override the AI output, monitoring for performance drift and adverse events post-deployment, and transparency to the practitioner and patient about the AI's training data, intended use, and known limitations.
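The post-deployment monitoring element can be sketched with a standard drift metric. The Population Stability Index below compares the score distribution seen at validation against the live one; the metric, binning, and the 0.2 review threshold are common industry heuristics, not requirements drawn from Booklet 20.

```python
import math


def population_stability_index(expected, actual, bins=10):
    """Population Stability Index between a validation-time score
    distribution (`expected`) and a post-deployment one (`actual`).

    A PSI above roughly 0.2 is a common heuristic trigger for review;
    the threshold and equal-width binning here are illustrative.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bin_fraction(scores, b):
        left, right = lo + b * width, lo + (b + 1) * width
        n = sum(1 for s in scores
                if (left <= s < right) or (b == bins - 1 and s == hi))
        return max(n / len(scores), 1e-6)  # floor avoids log(0)

    return sum(
        (bin_fraction(actual, b) - bin_fraction(expected, b))
        * math.log(bin_fraction(actual, b) / bin_fraction(expected, b))
        for b in range(bins)
    )
```

A check like this, run on a schedule against live inference scores, gives the deployment team an early signal that the distributional shift Booklet 20 warns about (training population versus South African deployment population) is occurring.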
Interaction with SAHPRA
The South African Health Products Regulatory Authority regulates medical devices, and software that meets the definition of a medical device (Software as a Medical Device, or SaMD) falls within SAHPRA's licensing framework. Booklet 20 operates alongside SAHPRA registration, addressing the ethical conduct of practitioners using SaMD rather than the device approval pathway itself.
HealthTech operators developing SaMD must complete the SAHPRA registration pathway in addition to ensuring that practitioner deployment complies with Booklet 20. The two regimes are complementary, not substitutes.
POPIA and AI Training Data
AI systems trained on clinical data engage POPIA in two distinct ways. First, processing of health information for training purposes requires lawful grounds under section 32 of POPIA, with explicit consent or another statutory basis required. Second, AI systems that process patient data in operation (whether for training, inference, or both) must comply with the section 19 security safeguards, the section 21 operator agreement requirements where third-party processors are involved, and the section 22 breach notification obligations.
De-identification mitigates POPIA exposure but is increasingly understood not to extinguish it: where re-identification is feasible from auxiliary data, de-identified records may still constitute personal information.
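A minimal pseudonymisation sketch illustrates both the technique and its limit. The function below replaces a direct identifier with a keyed hash; the function name and key handling are our own illustration, not a POPIA-mandated method.

```python
import hashlib
import hmac


def pseudonymise(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Note the limitation flagged above: whoever holds `secret_key` can
    re-link records, and quasi-identifiers left in the dataset (age,
    postcode, rare diagnosis) may still permit re-identification, so
    POPIA obligations may continue to apply to the pseudonymised set.
    """
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()
```

Key custody therefore matters as much as the hashing: if the training pipeline and the key holder sit in the same operator, the dataset is pseudonymised rather than anonymised, and the section 19 safeguards and section 21 operator agreements remain engaged.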
Liability Architecture
The liability landscape for clinical AI involves multiple potential defendants. The practitioner remains professionally accountable. The HealthTech platform may face product liability, contractual liability, and consumer protection liability. The AI system manufacturer (where distinct from the platform) may face product liability. The institution employing the practitioner may face vicarious liability.
South African law has not yet produced extensive case law on clinical AI liability allocation, but the general principles of medical negligence, product liability, and contract law apply. Operators should construct contractual liability allocation between platform and practitioner that reflects the realistic distribution of fault and that survives termination of the platform-practitioner relationship.
Implementation Roadmap
HealthTech operators implementing AI tools should consider a phased approach: legal and ethical review against Booklet 20 and POPIA at the design stage, validation evidence assembly demonstrating performance in representative populations, SAHPRA registration where the tool meets the SaMD threshold, practitioner training on the tool's intended use and limitations, informed consent framework integration into the patient pathway, monitoring and incident reporting infrastructure, and regular ethics review of deployment outcomes.
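The phased approach above can be enforced as an ordered gate, so that no phase is skipped before deployment. The phase names below are our paraphrases of the roadmap steps, not terms from Booklet 20.

```python
from typing import Optional

# Paraphrased phases of the roadmap above, in order.
PHASES = [
    "legal_and_ethical_review",      # Booklet 20 and POPIA review at design stage
    "validation_evidence",           # performance in representative populations
    "sahpra_registration_if_samd",   # where the tool meets the SaMD threshold
    "practitioner_training",         # intended use and limitations
    "consent_integration",           # informed consent in the patient pathway
    "monitoring_infrastructure",     # drift and incident reporting
    "ethics_review_cycle",           # regular review of deployment outcomes
]


def next_phase(completed: set) -> Optional[str]:
    """Return the first phase not yet signed off, enforcing order.

    Returns None once every phase is complete, i.e. the tool may deploy.
    """
    for phase in PHASES:
        if phase not in completed:
            return phase
    return None
```

Even a lightweight gate like this creates a contemporaneous compliance record, which is valuable evidence if a practitioner's AI-supported decision is later scrutinised.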
How Mashiane Attorneys Can Assist
Our HealthTech practice advises HealthTech operators, AI developers, and healthcare institutions on Booklet 20 compliance review, SAHPRA SaMD registration, POPIA AI compliance, informed consent design, liability architecture, and incident response. Contact our team for an AI ethics compliance review.