South Africa has withdrawn its first draft national artificial intelligence policy after fictitious citations, apparently AI-generated, were discovered in the document.
The withdrawal, prompted by the exposure of the draft's phony references, is more than a bureaucratic blip; it's precisely the sort of gaffe that makes a reader pause mid-sip. You have to ask yourself: wait, the policy meant to regulate AI just got undermined by AI? That's embarrassing, to be sure, but it's also a cautionary tale.
South African communications and digital technologies minister Solly Malatsi told an audience this past week that he suspects AI-generated citations were accidentally included in the draft policy document without proper verification and review.
“The integrity of the draft policy has been compromised,” Malatsi said in a statement, which goes to show you don’t need AI to know that using AI without human supervision is a bad idea. That supervision is the seatbelt: you only find out whether you were wearing it when the crash comes.
The draft policy had serious ambitions: earlier this month, South Africa proposed a range of new institutions and incentives aimed at fostering AI development and innovation, including a National AI Commission, an AI Ethics Board, and an AI Regulatory Authority, alongside tax incentives, grants, and subsidies to spur local AI development.
In other words, Pretoria wanted to be on the front lines of artificial intelligence adoption in Africa, something that requires the government not only to get its ducks in a row but also to avoid even the appearance of moving quickly without proper verification.
The alarm went off after News24 revealed that some citations in the draft were apparently fabricated. That matters because bogus references do more than make sources difficult to find or verify: they lend spurious claims academic credibility, provide cover for bad behaviour, and mislead the public into believing a policy is grounded in facts when it is actually smoke and mirrors.
For a policy dealing with ethics, bias, data sovereignty and digital rights, that would not be a trivial blemish; it would be a stain that lingers in many people’s memories.
The larger point isn’t that South Africa should stop trying to govern artificial intelligence. Far from it. The country has already begun building the necessary institutional capacity through its National AI Policy Framework, which was opened to public comment in 2024 to explore AI’s economic opportunities and governance dilemmas. We shouldn’t forget that.
For all the issues surrounding the withdrawn draft, the need to govern AI remains. AI is already reshaping finance, education, the public sector and the media; hoping that regulation can simply wait is an illusion masquerading as patience.
This episode also carries a lesson for every government agency, law firm, university and newsroom considering generative AI: make sure a human is the last line of defence on anything you submit. It sounds like a no-brainer, I know, but that is exactly when things fall apart.
If the draft looks good, the references seem academic and the language is polished, everyone tends to assume it must have been checked. That is precisely when it comes back to bite you.
Credibility is easily shattered, and once a draft policy is suspected to be built on fiction, the debate becomes not just about what the policy says but about who verified the source material.
What else might have gone undetected? The issue, then, is one of credibility rather than mere political embarrassment, though there is plenty of that too.
Nevertheless, Malatsi’s decision to rescind the draft policy was the correct one, even if it caused embarrassment and political pain. A national AI strategy should be founded on solid sources, not on faulty citations that nobody questioned until a journalist did.
South Africa now has the opportunity to turn this embarrassment to its advantage: by putting draft policy proposals through independent reference checks, by publicising policy revision histories, and by mandating human review at the final stage of drafting so that the document is verified before it goes out for public consultation.
South Africa also needs clearer guidelines on how and when AI may be used in drafting policy proposals. That might not make headlines, but it is essential to good policy governance, particularly AI governance.
