Government policy drafting has hit a critical inflection point as the reliance on generative AI tools moves from a productivity hack to a liability. In a stark demonstration of the risks associated with unverified automation, the South African Department of Home Affairs (DHA) has suspended two senior officials after discovering that a recently released white paper on citizenship and immigration policy was bolstered by entirely fabricated academic references.
The Fallout of AI-Generated Citations
The incident centers on a policy document whose bibliography was populated with non-existent sources, a classic case of AI hallucination. In response, the DHA announced last Thursday that two high-ranking officials involved in the drafting process have been placed on administrative leave: a director-level official overseeing citizenship and immigration policy, with the document's primary drafter expected to follow early next week. To mitigate the reputational and operational damage, the department has appointed two independent law firms to manage the disciplinary proceedings and conduct a comprehensive audit of every policy document produced by the DHA since November 30, 2022. That date marks the public release of OpenAI's ChatGPT, and serves as the benchmark for when AI-assisted drafting became a standard, albeit loosely regulated, practice within the department.
The Erosion of Verification Standards
Historically, the integrity of government policy relied on a rigorous, manual process of sourcing verified data from established databases and academic repositories. The shift toward using Large Language Models (LLMs) to generate supporting bibliographies has bypassed these traditional safeguards, leading to a breakdown in institutional accountability. In this instance, the fabricated references were appended to the document as a veneer of academic rigor, even though the sources were never actually cited within the body of the text. This is not an isolated failure; it mirrors a recent incident involving the Department of Communications and Digital Technologies (DCDT), which was forced to retract a draft national AI policy after it was found to contain similar AI-generated, fictitious citations. Minister Solly Malatsi noted that the failure stemmed from a lack of human oversight, where AI output was treated as factual evidence without the necessary cross-referencing against real-world data.
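Part of that cross-referencing is straightforward to automate. The sketch below, in Python, checks a citation against the public Crossref index via its REST API (api.crossref.org); the cited title and author are hypothetical examples invented for illustration, and a non-match only flags a reference for manual review, since a legitimate work may simply be indexed elsewhere.

```python
import requests  # third-party: pip install requests

CROSSREF_API = "https://api.crossref.org/works"

def reference_resolves(title: str, author: str) -> bool:
    """Return True if Crossref's top hit closely matches the cited title.

    A False result does not prove fabrication; it routes the reference
    to a human reviewer for verification against other databases.
    """
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": f"{title} {author}", "rows": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    if not items:
        return False
    top_title = (items[0].get("title") or [""])[0]
    return top_title.strip().lower() == title.strip().lower()

# Hypothetical citation: a hallucinated source would typically fail to
# resolve and be flagged for human review.
if __name__ == "__main__":
    found = reference_resolves(
        "Migration Governance in the Algorithmic State", "J. Ndlovu"
    )
    print("resolved in Crossref" if found else "flag for manual review")
```

A check like this is deliberately conservative: it can surface suspect entries at scale, but the final judgment on each flagged reference stays with a human reviewer.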
Establishing Accountability in the AI Era
For developers and policymakers alike, the takeaway is that the convenience of AI is now weighed against the high cost of institutional negligence. The DHA has responded by mandating a new internal verification protocol that requires staff to explicitly declare and validate any use of AI in official documentation. While the department maintains that the core policy content remains accurate and reflective of government objectives, the removal of the fraudulent bibliography signals a fundamental shift in administrative culture. The era of treating AI-generated text as a finished, reliable product is effectively over, as organizations move to implement stricter human-in-the-loop requirements to prevent the proliferation of synthetic misinformation.
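The DHA has not published the protocol's technical form, so the following is only a minimal sketch of what a declare-and-validate gate might look like; the AIDeclaration record, its field names, and the ready_for_release check are all hypothetical, invented here to illustrate the human-in-the-loop principle.

```python
from dataclasses import dataclass, field

@dataclass
class AIDeclaration:
    """Hypothetical record of AI use in one section of a document."""
    section: str            # e.g. "bibliography"
    tool: str               # e.g. "LLM draft"
    validated_by: str = ""  # name of the human who verified the output

@dataclass
class PolicyDocument:
    title: str
    declarations: list[AIDeclaration] = field(default_factory=list)

def ready_for_release(doc: PolicyDocument) -> bool:
    """Block release until every declared AI-assisted section has a
    named human validator: a simple human-in-the-loop gate."""
    return all(d.validated_by for d in doc.declarations)

doc = PolicyDocument("White Paper on Citizenship and Immigration")
doc.declarations.append(AIDeclaration(section="bibliography", tool="LLM draft"))
assert not ready_for_release(doc)  # unvalidated AI output cannot ship
doc.declarations[0].validated_by = "senior reviewer"
assert ready_for_release(doc)
```

The design choice worth noting is that the gate does not assess the AI output itself; it only enforces that a named person has taken responsibility for it, which is exactly the accountability the audit found missing.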
As public institutions grapple with the integration of generative models, the burden of proof for AI-assisted outputs will increasingly fall on the human operators who sign off on them. The transition from blind adoption to critical verification marks the next phase of AI implementation in the public sector.