Introduction
The UK’s National Health Service (NHS) has announced plans to shutter the vast majority of its publicly available open-source code repositories, citing growing risks from advanced artificial intelligence tools capable of hunting down security vulnerabilities. According to a report by open-source advocate Terence Eden, the decision is a direct reaction to the increasing sophistication of large language models (LLMs) such as Anthropic’s Mythos—which can now automatically scan source code for weaknesses that human reviewers might miss. But Eden, a former employee of NHSX, argues that the move is not only unnecessary but also contradicts the UK government’s own stated commitment to open-source principles.

The New NHS Directive
Under the new guidance, NHS Digital intends to close or restrict access to nearly all of its open-source repositories hosted on platforms like GitHub. The rationale is that LLM-powered vulnerability scanners could exploit even minor flaws in the code, potentially leading to security incidents that compromise patient data or disrupt health services. While the NHS has not published the exact list of repositories to be affected, the sweeping nature of the directive has alarmed the open-source community, many of whose members see it as a step backward for transparency and public accountability.
Why the Fear? The Role of LLM Vulnerability Scanners
The sudden concern stems from the rapid evolution of AI-driven code analysis. Tools like Anthropic’s Mythos can scan thousands of lines of code in seconds, identifying potential buffer overflows, injection flaws, or insecure data handling. In theory, a malicious actor could use such a tool to find and exploit vulnerabilities in NHS software. However, as Eden points out, this threat scenario is largely hypothetical for the majority of NHS repos.
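To make concrete the class of flaw such scanners hunt for, the sketch below shows a textbook SQL injection pattern of the kind automated code analysis reliably flags, alongside its standard fix. This is a generic illustration, not code from any NHS repository; the table and function names are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is interpolated straight into the SQL string,
    # the classic injection pattern that automated scanners flag.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Safe: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"                     # classic injection payload
leaked = find_user_unsafe(conn, payload)    # matches every row
filtered = find_user_safe(conn, payload)    # matches nothing
```

The point of the example is also Eden's point: a flaw like this can only exist in software that executes queries against live data, which is precisely what most of the affected repositories, datasets, guidance documents, and UI prototypes are not.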
Terence Eden’s Counterargument: A Case for Open Source
Eden, who played a key role in the NHS’s digital transformation during the pandemic, strongly disagrees with the shutdown. He notes that the majority of NHS code repositories contain datasets, internal tools, guidance documents, research software, and front-end design elements—not mission-critical security software. “There is nothing in them which could realistically lead to a security incident,” Eden states. In other words, even the most sophisticated LLM scanner would find no exploitable vulnerabilities in a CSV file or a user-interface prototype.
Moreover, Eden argues that closing these repos undermines the very benefits of open source: peer review, community collaboration, and public trust. He warns that the move could deter external researchers and developers from contributing to NHS projects, ultimately making the service less secure rather than more.
The Covid-19 Contact Tracing App Precedent
To illustrate his point, Eden recalls his work on the NHS Covid-19 contact tracing app during the height of the pandemic. “We were so confident of the safety and necessity of open source, we made sure the app was open sourced the minute it was available to the public,” he writes. That app—installed on millions of phones and subjected to intense scrutiny from hostile state actors—was a high-stakes, high-profile piece of software. Yet, “despite publishing the code, architecture and documentation, the open source code caused zero security incidents,” Eden emphasizes. The app’s success, he believes, proves that openness can coexist with security, even for the most sensitive projects.
Contradiction with Government Policy
The NHS’s new directive also appears to run counter to established UK government policy. The UK’s Technology Code of Practice, published in 2022, explicitly states in Point 3: “Be open and use open source.” The code urges government departments to “publish source code, data, and algorithms where appropriate” and to “consider using open-source solutions to reduce costs and increase flexibility.” By closing the repositories, the NHS is effectively ignoring this guidance, raising questions about consistency within the public sector.
Eden sums up the irony: “Furthermore, this new guidance is in direct contradiction to the UK’s Tech Code of Practice point 3 ‘Be open and use open source’ which insists on code being open.” The contradiction is especially stark given that the NHS itself was a signatory to the code.
What This Means for the Future of Open Source in Government
The NHS’s decision could set a worrying precedent for other government agencies. If the default response to AI-driven security scanning is to retreat behind closed doors, public-sector transparency may suffer. Many open-source advocates fear that the move will erode trust, hinder innovation, and prevent the kind of collaborative security auditing that has historically made open-source software more resilient. As Eden notes of the contact tracing app, publishing its code caused zero security incidents, a fact that challenges the assumption that secrecy equates to safety.
In the end, the NHS may be overcorrecting for a threat that, while real in theory, has not materialized in practice. A more balanced approach would involve risk-based assessments for each repository, rather than a blanket shutdown. Until such an approach emerges, the open-source community watches with concern as one of the world’s largest health services turns its back on the principles of openness and collaboration.
Conclusion
The debate between security and transparency is not new, but the NHS’s recent directive highlights how rapidly advancing AI tools are forcing organizations to re-evaluate long-standing policies. While protecting sensitive code is important, critics like Terence Eden argue that the current overreaction could do more harm than good. The success of the Covid-19 app, the harmless nature of most NHS repos, and the conflict with official government policy all suggest that a more nuanced approach is needed—one that embraces open source while managing real risks, rather than shutting down public access altogether.