Who sets the limits of artificial intelligence in the military sphere when the law remains silent?
In the recent confrontation between Anthropic and the Pentagon over the military use of artificial intelligence, many people have been struck by the phrase that sums it all up: ‘any lawful use’. It sounds reasonable. Who could object to what is legal? The problem is that, when it comes to algorithmic surveillance and autonomous weapons, the US legal framework does not say enough. And when the law is silent, the Executive Branch proceeds by contract, sets conditions through administrative directives, and moves the debate outside Congress. This is the essence of the conflict, and a warning for any democracy: if the law does not define the perimeter, the contract does.
The Department of Defense's formal policy is set out in DoD Directive 3000.09 (updated in January 2023). It is a document with safety obligations, testing requirements, and a mandate to maintain ‘appropriate levels of human judgment’ over the use of force. But it does not prohibit fully autonomous weapons; it establishes requirements and review processes within an internal Executive Branch policy, not a statute passed by Congress. The Congressional Research Service (CRS) itself confirms that US policy does not prohibit the development or use of lethal autonomous weapon systems. This combination, a demanding administrative directive with no statutory force and the absence of any legislative prohibition, is the first crack in the system.
The broad federal framework is FISA (50 U.S.C. Chapter 36), designed for a world of traditional telephone surveillance, not for the mass pattern inference and algorithmic correlation enabled by foundation models. This silence operates as a space of permissiveness: if there is no explicit prohibition and the government frames its actions under some title of FISA, the use remains within the boundaries of ‘legality’. Judicial doctrine aggravates the problem: the Supreme Court has restricted standing to challenge surveillance programs (Clapper v. Amnesty International USA, 2013), and the broad reading of the state secrets privilege in FBI v. Fazaga (2022) acts as an additional shield. The result: little preventive oversight and few avenues for discovering how extensively algorithms are actually used.
Finally, in the absence of specific legislation, the Administration unifies the criteria through contractual clauses. In artificial intelligence, this has crystallized into the principle of ‘any lawful use’: if it is legal, the provider cannot impose private vetoes. In the abstract, it sounds like a defense of state sovereignty. In practice, the clause mirrors the void: if the law neither prohibits fully autonomous weapons nor delimits mass algorithmic surveillance, ‘any lawful use’ is, in effect, almost any use.
This debate can be read in several ways. The strategic view holds that the state cannot allow a provider to set its own rules in the middle of a crisis. The ethical view responds that what is legal is not necessarily tolerable when the cost of error is irreversible. The market view warns that if the major players accept ‘any lawful use’, it becomes the de facto standard, and providers that voluntarily restrain themselves lose contracts.
But the perspective that needs to be highlighted is the legal-constitutional one. The clash between Anthropic and the Pentagon cannot be resolved with more clauses; it can only be resolved with an adequate legislative framework. Until Congress establishes a clear regulatory floor for lethal autonomy (verifiable standards of human oversight) and for algorithmic surveillance (limits on re-identification and mass correlation, with enhanced authorization and traceability requirements), ‘any lawful use’ will remain an unknown quantity. The CRS itself confirms that there is no federal prohibition on lethal autonomous weapons and that the DoD directive remains the only current reference: we are depending on soft law where hard law is needed.
Does this mean the Executive Branch should not act? No. In a context of global technological competition, executive orders that promote infrastructure and coordination are understandable. But the democratic minimum demands that the contours of surveillance and lethal autonomy be drawn by the legislature, not at the contracting table.
Today, the regulations have significant gaps and the case law narrows the avenues for oversight. It is a system designed for another era, applied to technology capable of deciding and observing on a scale unimaginable a decade ago. The phrase ‘any lawful use’ is not a principle; it is a deficiency. Until Congress legislates with precision (human oversight, limits on algorithmic surveillance, independent audits, and shared accountability between provider and state), the line between the acceptable and the unacceptable will keep being drawn by opaque contracts and internal directives. And that asks too much of trust and too little of the law.