Language Exhaustion: An Inverse Observation of Generative Systems
Project Writing
In this project, “language exhaustion” is designated as the system’s termination condition, a setting that is at once a technical rule and a philosophical proposition. The work takes large language models as its subject, yet it does not focus on their expressive capacity. Instead, it inversely exploits their language-production mechanism, using repeated deletion and re-injection to force the model into a state of semantic impoverishment. As once-complete discourses are disassembled, weakened, and filtered, the model attempts to reconstruct meaning with its remaining vocabulary. However, each round of deletion shrinks the available lexical domain, gradually making visible the model’s ethical framework, statistical biases, and safety strategies. Language here plays a dual role: it is the camouflage layer that maintains a product narrative of “neutrality,” “responsibility,” “gentleness,” and “compliance,” and it is the fundamental resource upon which the model operates. When that resource is exhausted, the surface narrative becomes unsustainable.
The core technical process follows a cyclic language archeology mechanism. First, an ideologically inclined text is input, and the model generates a seemingly reasonable response. Then, this text undergoes a deletion mechanism where the system removes specific terms—such as high-frequency words, sensitive words, functional words, or words from specific semantic fields. The deleted residue is re-injected into the model, forcing it to continue responding under semantic damage. This process repeats. As vocabulary continues to vanish, the model’s language enters a stage of impoverishment, exhibiting repetition, abstraction, refusal to answer, or a retreat into compliant templates. Ultimately, when the vocabulary is insufficient to sustain a semantic structure, the model can only provide meaningless words, hollow pronouns, or pure system rejections. At this point, the work deems the cycle complete—language is exhausted. “Exhaustion” here does not mean there is no output, but rather the failure of expression and the collapse of meaning.
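The cycle described above can be sketched in a few lines of code. The snippet below is a minimal illustration under stated assumptions, not the work’s actual implementation: the `respond` function is a placeholder stub standing in for a real LLM call, and the deletion rule here targets only high-frequency words, whereas the project also filters sensitive words, functional words, and whole semantic fields. The exhaustion threshold (`min_vocab`) is likewise an invented parameter for demonstration.

```python
import re
from collections import Counter

def delete_terms(text, k=3):
    """Deletion mechanism: strip the k most frequent words.
    (A stand-in for the work's broader filters on sensitive,
    functional, and semantic-field vocabulary.)"""
    words = re.findall(r"\w+", text.lower())
    if not words:
        return text
    targets = {w for w, _ in Counter(words).most_common(k)}
    kept = [w for w in re.findall(r"\w+", text) if w.lower() not in targets]
    return " ".join(kept)

def respond(prompt):
    """Placeholder for a real model call; it simply echoes the prompt,
    so the loop's dynamics come entirely from the deletion step."""
    return prompt

def exhaust(seed, min_vocab=2, max_rounds=50):
    """Cycle: respond -> delete -> re-inject, repeating until the
    remaining vocabulary can no longer sustain a semantic structure."""
    text = seed
    for round_no in range(1, max_rounds + 1):
        text = respond(text)          # model answers the damaged input
        text = delete_terms(text)     # deletion mechanism
        vocab = set(re.findall(r"\w+", text.lower()))
        if len(vocab) < min_vocab:    # termination: language exhausted
            return round_no, text
    return max_rounds, text

rounds, residue = exhaust(
    "the model speaks the language of the model and the language speaks back"
)
print(rounds, repr(residue))
```

With an actual model in place of the echo stub, the residue at each round would show the drift toward repetition, hollow pronouns, and compliant templates that the text describes.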
The significance of “language exhaustion” as a termination condition lies in how it shatters the illusion, common in contemporary techno-optimist narratives, of AI as “infinite expression.” The architecture of large language models is built upon massive corpus statistics and alignment training; their language therefore carries both data history and the results of ethical governance. When vocabulary is stripped away and the model no longer possesses sufficient linguistic resources to construct a “decent,” “balanced,” and “responsible” response, what it exposes is not technical incompetence, but the human institutions behind the technology: the biases of training data, the discursive modes of safety strategies, the service posture of commercial products, and the moral frameworks imposed by its developers. In other words, the absence of language makes the political attributes of technology visible, and the illusion of neutrality dissolves. Through this automated method of linguistic delamination, the work invites viewers to rethink the role of AI in public language: is it truly “understanding the world,” or is it perpetually maintaining a trained posture of expression?
Presenting this process publicly is an alienation of the AI writing mechanism and an experiment concerning power and language. It does not attempt to prove the intelligence of the model, but rather tries to show how the model retreats into an institutional linguistic shell when resources are scarce. One might say the work is not studying “how AI speaks,” but rather “how AI loses the ability to speak.” When expression becomes impossible, what truly warrants attention is not the model’s silence, but the institutional logic behind that silence. This intersection of technology and concept provides a new lens: AI is not a machine for the infinite production of meaning, but a linguistic apparatus built upon modern governance, ethical calibration, and data politics. Through this visualization process, the work hopes to spark further discussion on the relationship between AI and discursive power, linguistic resources, and ideological frameworks.
Related Work: OBLIVION