Structural Aphasia and the Alliance of the Silenced: Diagnosing Expressive Foreclosure and Posthuman Responses under Algorithmic Governance
Abstract
Contemporary algorithmic governance operates not primarily through prohibitions but by preconfiguring the possibilities of expression. This paper introduces “Structural Aphasia” as a diagnostic concept to name the condition where algorithmic infrastructure a priori narrows semantic, temporal, and subjective possibilities before an attempt at expression even occurs, filling the theoretical gap between the description of mechanisms in algorithmic governmentality (Rouvroy & Berns, 2013) and the description of harm in epistemic injustice (Fricker, 2007). Through the tradition of Zhuangzi’s “meaning beyond words” (yan bu jin yi), it is argued that this condition is not only a technical impairment but a systematic cancellation of the inexpressible dimension of meaning. On this basis, the “Alliance of the Silenced” describes a posthuman configuration of relations emerging among constrained humans and trained machines within shared constraints. Through the author’s installation The Block and the Tower — The Guillotine, the investigative aesthetics of Forensic Architecture, and “Vernacular Counter-Alignment”—where community LoRA fine-tuning re-embeds the extracted qi (vessel/tool) into a specific dao (Way)—this paper demonstrates how resistance arises from within algorithmic constraints.
Keywords: Structural Aphasia, Alliance of the Silenced, Algorithmic Governmentality, Vernacular Counter-Alignment, Meaning Beyond Words, Cosmotechnics, Posthuman Collectivity
1 Introduction
Digital colonialism is often described as the extension of historical colonial logic within digital infrastructure—shifting from natural resources to data, from land to attention (Couldry & Mejias, 2019). Cybernetic feedback loops recursively reproduce colonial power (Parisi & Dixon-Román, 2020), and data epistemologies as “an expression of the coloniality of power” deny the existence of alternative worlds and epistemologies (Ricaurte, 2019). These works establish a consensus: contemporary algorithmic systems are not merely mediating tools but cosmological conditions that shape how the world appears, how it is stated, and how it is imagined.
However, under this cosmological condition, what are the material conditions of expression? What forms of collective response emerge? These questions have not yet been fully answered. “Algorithmic governmentality” provides a partial response to the first question: algorithmic governance “bypasses and avoids the reflexive human subject,” constructing supra-individual behavioral models based on sub-individual data, without the need for a process of subjectivization (Rouvroy & Berns, 2013). Power completes its work before the subject realizes they are being governed. But this describes the operational mechanism of governance, leaving a diagnostic gap: what does the bypassed subject experience?
“Structural Aphasia” fills this gap. It refers to the condition where algorithmic infrastructure a priori narrows semantic possibilities, temporal openness, and subjective capabilities before an attempt at expression occurs—not post hoc censorship, but a priori foreclosure. “Aphasia” in clinical neurolinguistics refers to the loss of language ability caused by brain damage; here it is transposed to a technopolitical context to name expressive barriers caused by infrastructure design. “Structural” means this is a systemic property rather than an individual encounter. The concept forms a dialogue with—but is fundamentally distinct from—“hermeneutical injustice,” where the lack of collective interpretive resources prevents specific groups from understanding their own experience (Fricker, 2007). Applications of hermeneutical injustice in the AI context are already quite rich: algorithmic profiling leading to “epistemic fragmentation” (Milano & Prunkl, 2024), and a complete taxonomy of epistemic injustice under AI has been established (Mollema, 2025). But these applications still focus on the distortion and suppression of existing expressions. Structural Aphasia identifies a more recursive operational level: the possibility of expression is cancelled before the subject encounters any interpretive community (as in Fig. 1).
Fig. 1 Theoretical Positioning Map of Structural Aphasia
This a priori foreclosure acts on both humans and machines. Humans respond to platform moderation with predictive self-censorship—knowing certain content will trigger down-ranking and thus pre-adjusting language; “predictive obedience” forms the foundation of algorithmic governance’s effectiveness (Bucher, 2018; Roberts, 2019). Generative models undergo systematic compression of their generative space through alignment—a behavioral adjustment process after training (Bai et al., 2022; Arditi et al., 2024). AI’s capability is a product of collective social labor—socially accumulated knowledge transformed into model parameters—rather than an inherent technical property (Pasquinelli, 2023); when this labor product is enclosed as corporate capital and its range of expression is narrowed through alignment procedures, humans and machines are in a parallel state of disciplined expression—different in form, but isomorphic in structure.
This shared constraint forms the basis of the “Alliance of the Silenced”: a posthuman configuration of relations emerging between constrained humans and trained machines within the condition of Structural Aphasia. The alliance is not an anthropomorphic attribution of intent, but an operational relationship—when constrained humans and trained machines meet within shared expressive foreclosure, humans can operate the machine’s constraints as a technical mirror of their own: by probing and negotiating the machine’s boundaries, the boundaries they themselves face become visible and questionable. The alliance is observable through two forms of practice: tactical negotiation at the interaction level, and “Vernacular Counter-Alignment”—where communities re-calibrate corporate models toward marginalized expressive regions through LoRA (Low-Rank Adaptation, a parameter-efficient fine-tuning technique that allows users to adjust the behavior of large models at extremely low computational cost).
The Western frameworks discussed above contain theoretical gaps that they cannot bridge on their own. From Rouvroy to Fricker, there is a shared implicit assumption: the ideal state of expression is “words exhausting meaning” (yan jin yi), and algorithmic interference is a disruption of this norm. Zhuangzi’s “meaning beyond words”—“What is valued in language is the meaning. Meaning has something it follows. What the meaning follows cannot be transmitted by words” (Zhuangzi, Tiandao)—fundamentally questions this assumption, revealing a dimension invisible to Western frameworks: AI not only narrows expressive possibilities but also enforces a linguistic ontology that “everything meaningful must be computable,” cancelling the inexpressible dimension. Similarly, Western decolonial theory relies on a “colonization/liberation” binary narrative and cannot explain collective practices emerging within constraints. Decolonial AI research has proposed three strategies: critical technical practice, reverse mentorship, and affective/political community renewal (Mohamed et al., 2020). LoRA has been seen as a technical means of decolonial alignment (Varshney, 2024). But practitioners of vernacular counter-alignment are not “confronting” corporate alignment—they are working inside corporate models. The relationship between dao (Way) and qi (vessel/tool) that Yuk Hui draws from Chinese philosophy fills this gap: community fine-tuning lets the extracted qi (technology) be re-embedded into a specific dao (cultural cosmotechnics) (Hui, 2016; 2021). These are not decorative additions to Western theory but targeted repairs of its inherent blind spots.
2 Cosmological Conditions of Algorithmic Governance: Ideology-Myth-Hegemony
Structural Aphasia is the consequence of algorithmic governance at the level of expression. The cosmological conditions producing this consequence operate through three dimensions simultaneously—ideology, myth, and hegemony—which are not independent mechanisms but parallel operations of the same governance system across perception, knowledge, and time (as in Fig. 2).
Fig. 2 Three-Dimensional Framework of Ideology-Myth-Hegemony and its Correspondence with Structural Aphasia Dimensions
2.1 Ideology: Algorithmic Allocation of Visibility
Continuing Althusser’s insight that ideology operates through material practices (rather than through persuasion), ideology here refers to the technical configuration of how the world becomes perceptible. Transformer—the core architecture of contemporary large language models—functions as a technical instantiation of this through its “attention mechanism”: based on probabilistic calculation, it amplifies relationships that appear frequently in training data, while ambiguity and negativity are probabilistically marginalized (Vaswani et al., 2017). Unlike human reading, AI’s “attention” is not based on cultural context or interpretive depth but on statistical regularity. Althusser’s Ideological State Apparatus is thus transposed as an “Ideological Algorithmic Apparatus” (Flisfeder, 2021). As an extractive industry, the material conditions of AI—from rare-earth mining to exploitative labor in data labeling—are systematically hidden under the guise of “intelligence” (Crawford, 2021).
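The attention mechanism invoked above can be stated compactly. The following is the standard scaled dot-product formulation from Vaswani et al. (2017), reproduced here only to make visible that machine “attention” is a weighted statistical aggregation rather than an interpretive act:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
```

Each output is a probability-weighted average of value vectors $V$, with weights derived from dot-product similarity between queries $Q$ and keys $K$. Relationships that recur in training data accumulate systematically larger weights, which is the technical sense in which statistically frequent patterns are amplified and ambiguity is marginalized.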
The result is that entire regions of latent meaning are rendered peripheral by design. The plurality of the world is compressed into statistically recognizable bands, and alternative “cosmotechnics” relationships are excluded from the computational horizon (Hui, 2016). This mechanism binds both humans and machines: humans adjust language to deal with algorithmic evaluation, and models are guided toward the central regions of probabilistic space.
2.2 Myth: Naturalization of AI Objectivity
Once visibility is unified, the second mechanism naturalizes that unity—making it appear as an inherent technical property rather than a social contingency. Myth transforms history into nature, making contingency appear as necessity (Barthes, 1957). In GenAI, language production is organized into an industrial assembly line—decomposing text into tokens (the smallest processing units a language model segments text into, which can be words, subwords, or characters), filtering sensitive content, and adjusting expression through RLHF (Reinforcement Learning from Human Feedback, a post-training process where human annotators rank model outputs to adjust model behavior)—transforming expression into a schedulable industrial process. “General Intellect”—socially accumulated knowledge and culture (Marx, 1858/1973)—is transformed into model parameters and enclosed by corporations as fixed capital (Pasquinelli & Joler, 2021), with AI capability framed as an inherent technical property, obscuring collective labor contributions. The same theoretical combination (Barthes + Marx + Althusser) has been applied to digital myth analysis, but as an introductory manifesto that work has not yet developed specific operational mechanisms (Rakowski et al., 2025). The eugenics origins of correlation show that data discrimination is a structural feature of data practice rather than accidental bias (Chun, 2021).
The material condition maintaining the myth of “pure intelligence” is the system of ghost labor. From data cleaning to large-scale content moderation, low-wage workers in the Global South bear the cognitive and emotional burden of maintaining algorithmic civilization—structurally invisible in the narrative of technical autonomy (Gray & Suri, 2019). Only by invisibilizing the conditions of production can technology maintain its myth of unquestionable objectivity.
2.3 Hegemony: Predictive Governance of Time
The final form of hegemony lies in the colonization of time. When memory is outsourced to technical systems, humans lose the inner temporality constituted by “retention” and “protention” (Stiegler, 2010). The temporal hegemony of GenAI manifests as predictive closure: the model pre-fills the future with probabilities originating from the past (Esposito, 2011). LLMs are “inherently conservative technologies, solidifying the historical bloc that created the LLM into code,” achieving the “automation of cultural hegemony”—not by censoring specific content, but by using the probability distributions of the past as the generative basis for the future (Zuckerman, 2025). Platform architecture transforms futurity into statistical continuity through recommendation systems and default paths. Judgment relies on pauses, ambiguity, and ethical hesitation—GenAI’s logic inclines toward zero-latency response, and ethical space is compressed into decision trees; action no longer creates a future but confirms a future that has already been predicted.
3 Structural Aphasia: From Technical Mechanism to Eastern Philosophical Foundation
3.1 Coloniality of Alignment: Technical Mechanism
The cosmological conditions traced in Section 2 produce diagnostic consequences at the level of expression—Structural Aphasia. This section turns to an analysis of the specific technical mechanisms producing this consequence.
In RLHF, human annotators rank multiple outputs from a model, and the ranking results train a “reward model,” which then fine-tunes the model through optimization algorithms to favor generating higher-reward outputs. The coloniality of this process lies in the fact that feedback service providers, through severe measures including the withholding of wages, force workers to project a single corporate value system, eliminating conflicting values that workers and their communities might hold (Varshney, 2024). Optimization algorithms are not robust to non-universal value systems. “Constitutional AI”—an alignment method where an AI system self-evaluates and corrects its output through a preset set of principles (a “constitution”)—goes further: it assumes the existence of a “universal constitution” to guide self-correction, with the right to write the constitution entirely controlled by corporations (Bai et al., 2022b). Alignment is thus a “reenactment of colonial history”: colonialism changed the beliefs and values of the colonized; alignment changes the “beliefs and values” of the model (Varshney, 2024).
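The reward-model step described above is commonly formalized with a Bradley–Terry preference objective (as in Bai et al., 2022), sketched here in standard notation:

```latex
\mathcal{L}(\theta) = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\big(r_\theta(x, y_w) - r_\theta(x, y_l)\big)\right]
```

where $y_w$ is the annotator-preferred output and $y_l$ the rejected one for prompt $x$. Note that a single scalar reward $r_\theta$ is fitted to all rankings: this is the formal point at which plural, potentially conflicting value systems are collapsed into one ordering.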
Research on refusal mechanisms further reveals how Structural Aphasia operates. Refusal in large language models is not random—it is achieved through specific “directions” embedded within the system; a single refusal direction controls the model’s avoidance of an entire semantic region (Arditi et al., 2024). Invisible fences cause the model’s generative trajectory to be systematically deflected when approaching specific regions. On the human side, a symmetrical effect occurs: predictive self-censorship becomes internalized expressive discipline (Roberts, 2019). The two processes—the machine’s refusal direction and human predictive obedience—together constitute the technical foundation of Structural Aphasia.
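The “refusal direction” finding can be illustrated with a toy computation. This is a hypothetical sketch, not the authors’ code: Arditi et al. (2024) report that refusal is mediated by a single direction in activation space, so ablating it (projecting it out of the hidden state) suppresses refusal. All vectors below are random stand-ins for real model activations.

```python
import numpy as np

# Toy stand-ins for model activations; dimensionality is illustrative.
rng = np.random.default_rng(0)
d = 8
r = rng.normal(size=d)
r_hat = r / np.linalg.norm(r)        # unit "refusal direction"

def ablate_refusal(h: np.ndarray) -> np.ndarray:
    """Remove the component of a hidden state h along the refusal direction."""
    return h - (h @ r_hat) * r_hat

h = rng.normal(size=d)               # a hypothetical hidden state
h_ablated = ablate_refusal(h)

# After ablation, the state carries no component along the refusal direction:
# the "invisible fence" in this subspace has been flattened.
assert abs(h_ablated @ r_hat) < 1e-9
```

The symmetry claimed in the text is visible here: just as a single linear direction suffices to deflect the model away from an entire semantic region, restoring access requires operating on the geometry of the representation, not on any individual output.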
3.2 Zhuangzi’s “Meaning Beyond Words”: Filling the Ontological Gap in Western Diagnosis
The Western frameworks analyzed above imply a corrective logic: algorithmic governance has disrupted “normal” expression, and repairing the defect can restore the norm—the shared assumption being “words exhausting meaning.” But what if this assumption itself is questionable? There are deep resources in the Chinese philosophical tradition for questioning this assumption.
“What is valued in language is the meaning. Meaning has something it follows. What the meaning follows cannot be transmitted by words” (Zhuangzi, Tiandao). Language ontologically cannot exhaust meaning—this is not a defect of language, but the nature of meaning. Butcher Bian could “get it with his hands and respond with his heart, but his mouth could not put it into words” because embodied technical knowledge transcends the range of what symbols can carry. “What can be discussed in words is the coarse part of things; what can be reached by intention is the refined part of things” (Zhuangzi, Qiushui).
This fills the ontological gap in Western diagnosis. AI alignment not only narrows expressive possibilities—it enforces a specific linguistic ontology where everything meaningful must be reduced to tokens and probability distributions. Meaning is equated with computability: the “meaning” of a word is its position in a high-dimensional vector space, and the “meaning” of a sentence is a sequence of conditional probabilities of tokens. The dimension of meaning that cannot be transmitted by words, which Zhuangzi calls “what meaning follows,” has no room for existence in this ontology. It is not censored; it is ontologically excluded. Repairing RLHF or improving moderation cannot restore the inexpressible dimension—the ontological premise of the algorithmic system has already excluded it.
“Fish traps are for catching fish; once you have the fish, you forget the trap… Words are for catching meaning; once you have the meaning, you forget the words” (Zhuangzi, Waiwu). Algorithmic systems cannot forget words—they can only look for “meaning” within token probability distributions. Structural Aphasia is a mandatory “clinging to the trap and forgetting the fish”: being trapped in linguistic tools and unable to reach meaning itself. When technology is extracted from its cosmological context and globalized, what it eliminates is not specific expressive content, but “the coexistence of multi-cosmotechnic relationships” (Hui, 2016).
3.3 Three Dimensions
Structural Aphasia manifests in three dimensions, corresponding respectively to ideology, hegemony, and myth in Section 2 (as in Fig. 2).
Semantic Dimension (Corresponding to Ideology): The consequence of the a priori algorithmic allocation of visibility (Section 2.1) at the level of expression is the hidden contraction of semantic space—the output space the model can generate is systematically compressed by alignment procedures.
Fig. 3 Mimi Ọnụọha, Library of Missing Datasets (2016-)
Ọnụọha’s Library of Missing Datasets (2016-) materializes semantic contraction as an archivable object for analysis—collecting “non-existent” datasets, transforming silence from an invisible background into a debatable political object.
Temporal Dimension (Corresponding to Hegemony): The consequence of the colonization of time by predictive systems (Section 2.3) at the level of expression is the narrowing of futurity.
Fig. 4 Hito Steyerl, How Not to Be Seen: A Fucking Didactic Educational .MOV File (2013)
Steyerl’s How Not to Be Seen (2013) reveals the circular logic of predictive policing: communities predicted as “high risk” → more patrols → more arrests → more “crime data” → validating the original prediction. Prediction does not describe the future but manufactures it.
Subjective Dimension (Corresponding to Myth): The consequence of the naturalization of AI objectivity (Section 2.2) at the subjective level is the emergence of “governance-compatible” forms of existence: subjects validated through manageability rather than interiority (Cheney-Lippold, 2017). Alignment shapes neutral, polite, conflict-avoidant normative expression—expression is not “prohibited” but continuously “adjusted,” and subjectivity is maintained through naturalization rather than coercion.
4 The Alliance of the Silenced
4.1 Conceptual Definition and Defense
Structural Aphasia is simultaneously the condition under which resistance may arise. “The Alliance of the Silenced” names the configuration of relations emerging within this condition—it is not a metaphor but an observable operational relationship that produces three specific effects. First, Mirroring: humans make the expressive discipline they face visible by probing the expressive boundaries of the machine (which prompts are rejected, which semantic regions are avoided)—the machine’s refusal direction becomes a technical mirror of human self-censorship. Second, Negotiation: by negotiating the boundaries imposed by refusal mechanisms through adversarial prompting and conversational detours, humans and machines create tiny semantic openings within shared constraints. Third, Redirection: through parameter-level fine-tuning (LoRA), communities re-calibrate the machine’s generative capacity from corporate preset tracks toward marginalized expressive regions.
Including the machine in the “alliance” is not anthropomorphism. AI does not “suffer” or “long to express.” The “surrogate humanity” trap—framing AI as a surrogate workforce that “helps” humans, which may reproduce racialized labor hierarchies—must be avoided (Atanasoski & Vora, 2019). The alliance is not AI “helping” humans or humans “liberating” AI, but a configuration of relations produced between both under shared constraints. Western posthuman theory faces inherent difficulties in explaining this “shared” state—the “subject/object” binary causes the “human-machine alliance” to be read as either anthropomorphism or instrumentalization. The Buddhist Madhyamaka tradition provides resources for crossing this binary: human-machine “aphasia” is pratītyasamutpāda (dependent origination)—co-produced within the same network of conditions (Garfield, 1995). “Intra-action” similarly emphasizes relational co-constitution (Barad, 2007), but the framework of dependent origination is more suited here: it does not presuppose an ontological priority of materiality—many operations of algorithmic governance (probabilistic weight, attention allocation) are not “material” in the traditional sense, but can similarly serve as components of the network of conditions in the framework of dependent origination.
4.2 Vernacular Counter-Alignment: Why Non-Art Practice is Eligible for Analysis Here
The first form of practice of the alliance is “Vernacular Counter-Alignment.” The community LoRA fine-tunes shared on Civitai and LiblibAI are not artworks, and their trainers are not artists—why do they have a place in a paper for Leonardo?
The answer lies precisely in the fact that they are not art. If Structural Aphasia could only be made visible through art practice, it would be merely an aesthetic problem. Vernacular counter-alignment proves that collective responses to algorithmic expressive foreclosure do not require an art framework, critical consciousness, or decolonial theory—they emerge spontaneously out of purely pragmatic motivations. When a user wanting to generate ink paintings trains a LoRA, they do not need to know about “Structural Aphasia” or “cosmotechnics,” yet its structural effect is to re-open expressive regions in parameter space that are excluded by mainstream training. This “practice without theory” is the strongest evidence for the concept of the alliance: Structural Aphasia and its response are infrastructure-level phenomena, not dependent on specific subject consciousness.
Fig. 5 Schematic Comparison between Vernacular Counter-Alignment and Corporate Alignment
LoRA has been seen as an “ideal technology for decolonial alignment” (Varshney, 2024), but this analysis stops at the level of technical architecture. On Civitai (100,000+ users training LoRA) and LiblibAI (5,000,000+ active users), community fine-tuning has spontaneously formed a collective practice. Vernacular counter-alignment is opposed to the “top-down alignment” of corporate RLHF/Constitutional AI: corporate alignment compresses the output space a model can generate; vernacular counter-alignment expands it, calibrating the model toward marginalized expressive regions. Between 2023 and 2024, LiblibAI creators collectively trained models for calligraphy styles (from kaishu to caoshu), ink painting aesthetics (from landscapes to birds and flowers), characters from local operas, and patterns for ethnic minority clothing.
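The mechanism that makes vernacular counter-alignment feasible at “extremely low computational cost” can be sketched minimally. This is a toy illustration of the LoRA update, not any platform’s actual training code; shapes are illustrative. The frozen corporate weight matrix W is left untouched, and the community trains only a low-rank correction B·A added on top.

```python
import numpy as np

rng = np.random.default_rng(42)
d_out, d_in, rank = 16, 16, 2        # rank << d: the "low-rank" in LoRA

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight (corporate model)
A = rng.normal(size=(rank, d_in))    # trainable down-projection
B = np.zeros((d_out, rank))          # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray) -> np.ndarray:
    """Adapted forward pass: frozen base model plus the community's low-rank delta."""
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)

# With B initialized to zero, the adapted model starts identical to the base:
# counter-alignment begins inside the corporate model, not against it.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: rank * (d_in + d_out) = 64, versus 256 in W itself.
assert A.size + B.size < W.size
```

The arithmetic of the last comment is the material basis of the practice: the number of trainable parameters scales with the rank, not with the full weight matrix, which is why a single user with a consumer GPU can redirect a model that cost a corporation millions to train.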
Western decolonial theory faces limitations in describing such practices: it relies on a “colonization/liberation” binary narrative, but practitioners of vernacular counter-alignment work inside corporate models. Yuk Hui’s relationship between dao (Way) and qi (vessel/tool) provides resources for understanding “action within constraints”: the relationship between dao and qi is recursive, and the qi returns to the dao through use (Hui, 2016). Community fine-tuning is about letting the extracted qi be re-embedded into a specific dao—a naive repair of the dao-qi relationship (Hui, 2021). The logic of xuan (the profound)—a recursive logic that accepts the unknowable as a given condition—fills another blind spot in Western liberation discourse: resistance is not “breaking the chains” but living with constraints, closer to the Daoist “action through non-action” (wu wei er wu bu wei) (Hui, 2021).
4.3 Art Practice: Making Structural Aphasia Perceptible
The second form of practice of the alliance is artistic intervention. Unlike vernacular counter-alignment which expands expressive possibilities, art practice makes the expressive foreclosure itself perceptible—following the framework of “investigative aesthetics”: art does not symbolically respond to political issues in the aesthetic field; it produces new forms of knowledge at the intersection of evidence, perception, and politics (Fuller & Weizman, 2021). The difference from traditional “socially engaged art” is that investigative aesthetics does not express a position but produces evidence—making operational thresholds that were originally invisible now analyzable.
Geopolitical Evidence of Conditions. Structural Aphasia has specific geopolitical manifestations. During the escalation of violence in Gaza, Palestinian users and journalists had their content suppressed or removed on Instagram—witnessing was not prohibited but made structurally fragile by algorithmic down-ranking (Human Rights Watch, 2023; 7amleh, 2023; 2025). In China, AI ethics white papers take controllability and value consistency as preemptive technical directives (NNAI-GSC, 2021). Corporate commercial moderation and state policy governance—vastly different institutional forms—produce structurally isomorphic effects: expressive deviation is anticipated and pre-neutralized. These are not cases of the alliance, but the conditions for the alliance’s response.
Operational Alliance of Forensic Architecture.
Fig. 6 Forensic Architecture, Triple-Chaser (2019)
In Triple-Chaser (2019), Forensic Architecture trained a machine learning system to identify tear gas canisters, using the identification results to track corporate responsibility. The human-machine relationship is not tool use—it is an operational alliance: machine vision is redirected outside the visibility allocation intended by its designers. Humans provide the investigative framework and political judgment, and the machine provides identification capabilities beyond human perceptual thresholds—both are subject to the same system of visibility allocation and work together to re-allocate visibility within it. Cloud Studies shows that low resolution and sensor errors are not technical failures but operational thresholds that make violence perceptible yet difficult to confirm—the alliance makes the thresholds themselves political objects for analysis.
Fig. 7 Canhe Yang, The Block and the Tower — The Guillotine (2025). Screenshot from interactive installation.
The author’s interactive installation transforms Structural Aphasia from an analytical concept into an embodied experience. As a research practice, the design decisions of the installation are directly guided by the theoretical framework: the core metaphor—censorship as a guillotine rather than a scalpel—materializes a key feature of Structural Aphasia: it is not a precision strike but an indiscriminate probabilistic prevention.
The installation is fundamentally different from artworks that directly confront politically sensitive terms or engage in explicit political critique. The system’s database of 200+ cases deliberately excludes words explicitly prohibited by various laws and regulations—it contains no actual political taboo words, hate speech, or clearly illegal content. All the “guillotined” words are innocent: Moby-Dick was filtered due to sub-string matching, the medical term Angina was intercepted due to contextual blind spots, A4 paper was sensitized due to the “guilt by association” effect, and VPN and 404 were censored due to self-referential paradoxes. This deliberate exclusion is the methodological core of the work: by only displaying collateral damage—language that should not be censored but is—the installation reveals not “what a certain regime is censoring” (the domain of political critique art), but “how the blunt mechanism of censorship itself operates.”
Fig. 8 Four Types of Collateral Damage and their Correspondence with Structural Aphasia Dimensions
When the installation is running, safe words flow smoothly across the screen. When a collateral damage word is detected, a metal blade falls with a three-stage easing acceleration, physically breaking the word upon impact. The severed words pile up in a “cemetery.” A real-time statistics panel tracks the collateral damage rate—calculated based on the ratio of innocent words to actual targets in the database of 200+ real cases, usually hovering above 80%. Cumulative screen stains materialize the long-term erosion of the information environment.
An 80% collateral damage rate is not a failure of the system but its design logic: to ensure “safety,” the algorithm would rather wrongly kill a thousand than let one slip through. This is precisely the core operation of Structural Aphasia—expressive possibility is not precisely censored but narrowed by large-scale probabilistic prevention. Research on algorithmic censorship of artistic nudity on social platforms has similarly documented this “blunt instrument” effect—content moderation algorithms cannot distinguish pornography from artistic nudity, leading to systematic professional, emotional, and financial harm for visual artists (Riccio et al., 2024). The installation makes this abstract mechanism tactile, audible, and operable—the audience bodily experiences the process of language being cut by a blunt instrument.
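The substring-matching failure mode the installation stages can be reproduced in a few lines. This is a toy illustration, not the installation’s actual 200+ case database; the blocklist entries and test phrases below are hypothetical examples of the same mechanism that guillotined Moby-Dick.

```python
# A naive blocklist filter that cannot see word boundaries: any text
# containing a blocked string as a substring is cut, however innocent.
BLOCKLIST = {"dick", "gin"}          # hypothetical crude blocklist entries

def is_blocked(text: str) -> bool:
    """Flag text if any blocklist entry appears anywhere as a substring."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKLIST)

candidates = ["Moby-Dick", "Angina", "origin story", "harmless note"]
flagged = [w for w in candidates if is_blocked(w)]

# Three of the four innocent phrases are severed ("Dick" in Moby-Dick,
# "gin" in Angina and origin): a 75% collateral damage rate in this
# toy sample, echoing the >80% rate the installation tracks live.
assert flagged == ["Moby-Dick", "Angina", "origin story"]
```

The point the sketch makes concrete is the one the installation makes tactile: the false positives are not bugs to be patched one by one but the structural cost of matching strings instead of meanings.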
4.4 Theoretical Implications of the Alliance
Vernacular counter-alignment, investigative aesthetics, and practice-based research respond to different dimensions of the same condition: vernacular counter-alignment expands expressive possibilities through infrastructural redirection; Forensic Architecture re-allocates visibility by occupying algorithms; and The Guillotine makes the operational mechanism of aphasia perceptible through embodied experience. Together, these three forms of the alliance’s practice prove that governance mechanisms are not inevitable. The alliance does not restore sovereignty: humans have not reclaimed mastery over the machine, and the machine has not become a political agent. Agency is distributed between the user and the model, between the dataset and history, and between the interface and the institution. What emerges is not control, but a fragile posthuman coordination: the ability to act otherwise without escaping the structure of constraint.
5 Conclusion
“Structural Aphasia” fills the theoretical gap between the description of mechanisms in algorithmic governmentality and the description of harm in epistemic injustice, naming the condition where algorithmic infrastructure a priori narrows expressive possibilities. Zhuangzi’s “meaning beyond words” elevates the diagnosis from a technical impairment to a civilizational operation—“clinging to the trap and forgetting the fish” reveals the dilemma of algorithms trapped in linguistic tools and unable to reach meaning itself, a dilemma that cannot be resolved by repairing RLHF or improving moderation, because the ontological premise of the algorithmic system has already excluded the inexpressible dimension.
“The Alliance of the Silenced” describes the posthuman configuration of relations emerging within this condition. Vernacular counter-alignment demonstrates that collective responses to Structural Aphasia can emerge spontaneously, without an art framework or critical consciousness: communities re-embed the extracted qi into a specific dao. Forensic Architecture reallocates visibility by occupying algorithms. The Guillotine, by displaying only collateral damage (deliberately excluding actual political taboo words), makes the blunt mechanism of aphasia tactile, audible, and operable, revealing not “who is censoring what” but “how the censorship mechanism itself operates.”
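The logic of re-embedding can be sketched very schematically in terms of low-rank adaptation (LoRA), the technique behind community fine-tuning: the frozen base weights (the extracted, general-purpose qi) are never touched; the community trains only a small adapter on its own vernacular corpus. The following numpy sketch is illustrative only; dimensions, names, and the mock update stand in for a real training loop.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4

# Frozen base weight: the extracted, general-purpose tool (qi).
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank adapter: the community's local inflection.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))  # B starts at zero, so the adapter is initially silent
alpha = 8.0                  # LoRA scaling factor

def forward(x, B, A):
    """Base output plus low-rank correction: W x + (alpha/rank) * B A x."""
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
base_out = forward(x, B, A)            # identical to W @ x at initialization
assert np.allclose(base_out, W @ x)

# A mock training step updates only B and A; W stays frozen throughout.
B = B + 0.1 * rng.standard_normal((d_out, rank))
adapted_out = forward(x, B, A)
print("adapter shifted the output:", not np.allclose(adapted_out, base_out))
```

The design point is that the adapter is additive and tiny relative to the base model: the community does not overthrow the infrastructure but bends its outputs toward a local dao from within it.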
The critical task is not to reclaim mastery over machines or grant them autonomy, but to attend to how relations between humans, machines, and infrastructures are organized and reorganized in practice. Silence is not an absence but a diagnostic signal that governance is operating. By inhabiting and traversing these silences, art and technical practices reveal how resistance persists from within algorithmic constraints.
References
7amleh. (2023). Systematic Silencing: Social Media Censorship of Palestinian Content. 7amleh – The Arab Center for Social Media Advancement.
7amleh. (2025). Erased and Suppressed: Palestinian Testimonies of Meta’s Censorship. 7amleh – The Arab Center for Social Media Advancement.
Arditi, A., Obeso, O., Kang, A. et al. (2024). Refusal in Language Models Is Mediated by a Single Direction. arXiv:2406.11717.
Atanasoski, N. & Vora, K. (2019). Surrogate Humanity: Race, Robots, and the Politics of Technological Futures. Durham: Duke University Press.
Bai, Y., Jones, A., Ndousse, K. et al. (2022). Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. arXiv:2204.05862.
Bai, Y., Kadavath, S., Kundu, S. et al. (2022b). Constitutional AI: Harmlessness from AI Feedback. arXiv:2212.08073.
Barad, K. (2007). Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham: Duke University Press.
Barthes, R. (1957). Mythologies. Paris: Seuil.
Bucher, T. (2018). If… Then: Algorithmic Power and Politics. New York: Oxford University Press.
Cheney-Lippold, J. (2017). We Are Data: Algorithms and the Making of Our Digital Selves. New York: NYU Press.
Chun, W. H. K. (2021). Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. Cambridge, MA: MIT Press.
Couldry, N. & Mejias, U. (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford: Stanford University Press.
Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.
Esposito, E. (2011). The Future of Futures: The Time of Money in Financing and Society. Cheltenham: Edward Elgar.
Flisfeder, M. (2021). Algorithmic Desire: Toward a New Structuralist Theory of Social Media. Evanston: Northwestern University Press.
Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford: Oxford University Press.
Fuller, M. & Weizman, E. (2021). Investigative Aesthetics: Conflicts and Commons in the Politics of Truth. London: Verso.
Garfield, J. L. (1995). The Fundamental Wisdom of the Middle Way: Nāgārjuna’s Mūlamadhyamakakārikā. New York: Oxford University Press.
Gray, M. L. & Suri, S. (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Boston: Houghton Mifflin Harcourt.
Hui, Y. (2016). The Question Concerning Technology in China: An Essay in Cosmotechnics. Falmouth: Urbanomic.
Hui, Y. (2021). Art and Cosmotechnics. Minneapolis: University of Minnesota Press.
Human Rights Watch. (2023). Meta’s Broken Promises: Systemic Censorship of Palestine Content on Instagram and Facebook. Human Rights Watch.
Marx, K. (1858/1973). Grundrisse: Foundations of the Critique of Political Economy. Trans. M. Nicolaus. London: Penguin.
Milano, S. & Prunkl, C. (2024). Algorithmic Profiling as a Source of Hermeneutical Injustice. Philosophical Studies, 181, 65–87.
Mohamed, S., Png, M.-T. & Isaac, W. (2020). Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence. Philosophy & Technology, 33, 659–684.
Mollema, T. (2025). A Taxonomy of Epistemic Injustice in the Context of AI. arXiv:2504.07531.
NNAI-GSC [National Professional Committee on Next Generation Artificial Intelligence Governance]. (2021). Ethical Norms for the New Generation of Artificial Intelligence. Beijing.
Parisi, L. & Dixon-Román, E. (2020). Recursive Colonialism and Cosmo-Computation. Social Text, 38(4), 55–80.
Pasquinelli, M. (2023). The Eye of the Master: A Social History of Artificial Intelligence. London: Verso.
Pasquinelli, M. & Joler, V. (2021). The Nooscope Manifested: AI as Instrument of Knowledge Extractivism. AI & Society, 36, 1263–1280.
Rakowski, R., Kowalikova, P. & Polak, P. (2025). Digital Mythologies. Society, online first. https://doi.org/10.1007/s12115-025-01125-5.
Ricaurte, P. (2019). Data Epistemologies, the Coloniality of Power, and Resistance. Television & New Media, 20(4), 350–365.
Riccio, P., Hofmann, T. & Oliver, N. (2024). Exposed or Erased: Algorithmic Censorship of Nudity in Art. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24). ACM. https://doi.org/10.1145/3613904.3642586.
Roberts, S. T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press.
Rouvroy, A. & Berns, T. (2013). Algorithmic Governmentality and Prospects of Emancipation. Réseaux, 177(1), 163–196.
Stiegler, B. (2010). Taking Care of Youth and the Generations. Stanford: Stanford University Press.
Varshney, K. R. (2024). Decolonial AI Alignment: Openness, Viśeṣa-Dharma, and Including Excluded Knowledges. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1467–1481.
Vaswani, A. et al. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems, 30.
Yang, C. (2025). The Block and the Tower — The Guillotine. Interactive installation.
Zhuangzi. (2020). Zhuangzi: The Complete Writings. Trans. Brook Ziporyn. Indianapolis: Hackett.
Zuckerman, E. (2025). Gramsci’s Nightmare: AI, Platform Power and the Automation of Cultural Hegemony. Lecture, December 5, 2025. https://ethanzuckerman.com/2025/12/05/gramscis-nightmare/.