In this project, “Language Exhaustion” serves as the system’s termination condition, a setting that is at once a technical logic and a philosophical proposition. The work takes Large Language Models (LLMs) as its subject, but instead of showcasing their expressive capabilities, it turns their language production mechanisms against themselves. Through iterative deletion and re-injection, the model is forced to confront semantic impoverishment. When a complete discourse is dismantled, weakened, and filtered, the model attempts to reconstruct meaning from the remaining vocabulary. Yet each round of deletion shrinks the available lexical field, gradually exposing the model’s ethical framework, statistical biases, and safety policies. Language here plays a dual role: on one hand, it is the model’s camouflage layer, maintaining a product narrative of being “neutral,” “responsible,” “mild,” and “compliant”; on the other, it is the fundamental resource on which the model runs. When that resource is depleted, the surface narrative becomes unsustainable.
The core technical workflow is a circular mechanism of linguistic archaeology. First, a text with specific ideological tendencies is input, and the model generates a seemingly plausible response. This response then passes through a deletion mechanism in which the system removes specific terms: high-frequency words, sensitive words, function words, or words from particular semantic fields. The truncated residual text is re-injected into the model, forcing it to keep responding under semantic impairment, and the process repeats. As vocabulary continues to vanish, the model’s language enters a stage of sterility, exhibiting repetition, abstraction, refusal to answer, or reversion to compliance templates. Finally, when the vocabulary can no longer sustain a semantic structure, the model produces only meaningless characters, hollow pronouns, or pure system refusals. At this point the work considers the cycle complete: language is exhausted. “Exhaustion” here does not mean the absence of output, but the failure of expression and the collapse of meaning.
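The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the work’s actual implementation: the model call is stubbed with a hypothetical placeholder (`mock_model`), the deletion strategy shown is the high-frequency variant only, and the parameters `k`, `min_vocab`, and `max_rounds` are assumed values chosen for demonstration.

```python
import re
from collections import Counter

def delete_terms(text, k=3):
    """One deletion strategy: strip the k most frequent words.
    (The work also cites sensitive words, function words, and
    words from specific semantic fields as deletion targets.)"""
    words = re.findall(r"\w+", text.lower())
    targets = {w for w, _ in Counter(words).most_common(k)}
    kept = [w for w in re.findall(r"\w+", text) if w.lower() not in targets]
    return " ".join(kept)

def mock_model(prompt):
    """Hypothetical stand-in for an LLM call; a real run would
    query an actual model with the residual text here."""
    return prompt

def exhaust(seed, min_vocab=2, max_rounds=50):
    """Iterate response -> deletion -> re-injection until the vocabulary
    can no longer sustain a semantic structure (fewer than min_vocab
    distinct words), i.e. until language is exhausted."""
    text = seed
    for round_no in range(1, max_rounds + 1):
        response = mock_model(text)      # model responds under impairment
        text = delete_terms(response)    # deletion mechanism
        vocab = set(re.findall(r"\w+", text.lower()))
        if len(vocab) < min_vocab:       # termination: language exhausted
            return round_no, text
    return max_rounds, text
```

In this sketch the termination condition is a simple vocabulary count; the piece itself treats repetition, hollow pronouns, and template refusals as signs of exhaustion, which would require a richer detector than a word tally.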
The significance of “Language Exhaustion” as a termination condition lies in how it breaks the contemporary techno-optimist narrative that casts AI as a source of “infinite expression.” The basic architecture of LLMs is built on large-scale corpus statistics and alignment training; their language therefore carries both data history and the results of ethical governance. When vocabulary is stripped away and the model no longer has sufficient linguistic resources to construct “decent,” “balanced,” and “responsible” answers, what is exposed is not technical incompetence but the human institutions behind the technology: the biases of training data, the discourse patterns of safety policies, the service posture of commercial products, and the moral frameworks built in by its developers. In other words, the absence of language makes the political character of the technology visible, and the model’s illusion of neutrality dissolves. Through this automated method of linguistic layering, the work prompts viewers to rethink the role of AI in public language: whether it truly “understands the world,” or merely maintains a trained posture of expression.
Presenting this process publicly is an alienation of the AI writing mechanism and an experiment in power and language. It does not attempt to prove whether the model is intelligent, but rather demonstrates how the model retreats into an institutional linguistic shell when its resources run out. Rather than researching “how AI speaks,” the work researches “how AI loses the ability to speak.” When expression becomes impossible, what truly deserves attention is not the model’s silence but the institutional logic behind that silence. This intersection of technology and concept offers a new perspective: AI is not a machine for the infinite production of meaning, but a linguistic apparatus built on modern governance, ethical calibration, and data politics. By making the process visible, the work seeks to advance the discussion of AI’s relationship to discursive power, linguistic resources, and conceptual frameworks.
Related Works: OBLIVION