From Computable Society to Predictive Mutual Alienation: Imaginations of Information Order in Chinese Science Fiction

Abstract

In contemporary digital society, predictive mechanisms have become core functions of technical systems—from recommendation algorithms to generative AI, and from social credit systems to AI agent frameworks. Predictive technologies are reshaping the fundamental logic of social organization. However, this predictive-centric social imagination is not a unique product of contemporary technology but can be traced back to earlier cultural constructions. By analyzing Chinese science fiction texts from the late 20th century to the present—including works by Ye Yonglie, Zheng Wenguang, Han Song, and Liu Cixin—this paper explores how literary narratives, through unique formal means, construct a social picture based on information systems and reveals the historical shift of this imagination from “computability” to “predictability.” At the theoretical level, this paper combines Gilbert Simondon’s ontology of technology—particularly his concepts of “concretization” and “abstraction”—Antoinette Rouvroy’s theory of algorithmic governmentality, and Yuk Hui’s framework of cosmotechnics to propose the concept of “predictive mutual alienation.” This concept describes an asymmetrical yet bidirectional structural relationship: human behavior is constantly predicted and guided, with subjectivity compressed into probability distributions; meanwhile, the technical system itself is locked within the correlational structures of training data, unable to achieve true epistemological breakthroughs through its own operation. Through formal analysis of Ye Yonglie’s “exhibition” narratives, Han Song’s “enclosure” narratives, and Liu Cixin’s “cosmological” narratives, this paper demonstrates how literature produces knowledge about information order in ways independent of theory.
In the contemporary context, the explosive spread of the AI agent framework OpenClaw in China in early 2026—where user behavioral trajectories are transformed into training data in real-time while systems remain locked in an epistemological framework of visual pattern matching—provides immediate empirical validation for this framework. This paper argues that Chinese science fiction, with its unique experience of socialist modernization and contemporary algorithmic governance practices, offers an irreplaceable textual resource for understanding the cultural prehistory of “predictive mutual alienation.”

Keywords: Chinese Science Fiction; Computable Society; Predictive Governance; Mutual Alienation; Algorithmic Governmentality; Information Order; AI Agents

1 Introduction: The Emergence of the Predictive Society

In contemporary digital society, prediction has gradually become a core function of technical systems. Recommendation algorithms predict user interest by analyzing historical behavior; data systems predict consumption and risk by modeling individual behavior; and generative AI produces language and images through probabilistic models. These systems do not merely process information; they pre-structure future behavior through predictive mechanisms. Antoinette Rouvroy summarizes this trend as “algorithmic governmentality”—a form of governance achieved through automated data collection, correlation analysis, and preemptive intervention (Rouvroy & Berns, 2013). In this form of governance, subjects are no longer shaped through discipline or law but are pre-defined through probability distributions.

Against this backdrop, the logic of social governance has undergone a significant shift. Technology no longer acts merely as an execution tool but participates in social organization through the pre-judgment of the future. Shoshana Zuboff defines this shift as the core logic of “Surveillance Capitalism”: user behavior is transformed into “behavioral surplus” and traded and predicted in “behavioral futures markets” (Zuboff, 2019). Louise Amoore further points out that the core of contemporary algorithmic system operation lies not in probabilistic calculation, but in the governance of “possibility” itself—algorithms no longer ask “what is possible,” but pre-close the space of possibility (Amoore, 2013).

Most of these analyses take contemporary technical systems as their starting point. However, a more fundamental question remains: does the predictive-centric social imagination have a longer cultural history? This paper proposes that Chinese science fiction has repeatedly constructed a future vision in different historical stages: society is understood as an integrated structure that can be calculated, coordinated, and regulated through information systems. This imagination not only predates contemporary algorithmic technology but also formally foreshadows today’s predictive-centric social organization. By analyzing the works of Ye Yonglie, Zheng Wenguang, Han Song, and Liu Cixin, this paper attempts to reveal the cultural prehistory of predictive technical structures and explore their realization in contemporary platforms and generative AI.

This analytical path is not without precedent. Fredric Jameson, in Archaeologies of the Future, understands science fiction as a “social thought experiment,” arguing that its core function lies in making implicit social structures visible through the estrangement of the present (Jameson, 2005). Darko Suvin’s concept of “cognitive estrangement” similarly points out that the power of science fiction lies in making naturalized institutional arrangements problematic again (Suvin, 1979). Building on this, Sherryl Vint views science fiction as a “vernacular theory” for understanding technoscientific transformation, emphasizing its methodological value in contemporary biopolitics and infopolitics (Vint, 2021). In recent years, scholars have applied the framework of algorithmic governmentality to science fiction research—for example, Savaedi and Alavi Nia’s analysis of Project Itoh’s Harmony reveals how algorithmic governmentality reshapes subjectivity in science fiction narratives (Savaedi & Alavi Nia, 2024)—but these studies focus mainly on Western or Japanese texts and have not yet systematically addressed the imagination of information order in Chinese science fiction and its connection with China’s specific algorithmic governance context.

The methodological stance of this paper is closer to Vint’s “vernacular theory” framework: treating science fiction as an independent form of cognition of technical governance structures rather than a passive illustration of theory. Specifically, this paper focuses on how literary narratives, through their unique formal means—metaphors of spatial enclosure, imagery of bodily reduction, and narrative structures of informational opacity—produce knowledge about predictive governance that theoretical language itself struggles to capture. In this sense, literary analysis is not an “application” of a theoretical framework, but an exploration parallel to theory, possessing independent epistemological status. It is within this methodological tradition that this paper unfolds its argument.

2 Theoretical Framework: From Algorithmic Governmentality to Predictive Mutual Alienation

2.1 Algorithmic Governance and Predictive Mechanisms

In recent years, algorithmic governance has become a key concept for understanding digital society. Research indicates that platforms regulate information flow, organize user behavior, and constitute new governance structures in practice through algorithmic systems (Pasquale, 2015; Beer, 2019). Under this framework, technology is no longer just a tool but becomes the infrastructure of social organization. The “black box society” described by Frank Pasquale—where algorithmic decision systems are opaque to the public yet profoundly influence resource allocation—constitutes a fundamental challenge to traditional governance theory (Pasquale, 2015).

Building on this, an increasing amount of research emphasizes the central role of prediction in algorithmic systems. Unlike traditional rule-based governance, modern technical systems predict future behavior through probabilistic models and adjust current decisions accordingly. Karen Yeung summarizes this mechanism as “hypernudge”—a networked, dynamic, and continuous behavioral guidance that goes far beyond the static “nudge” described by Sunstein and Thaler (Yeung, 2017). Adrian Mackenzie, from the perspective of knowledge production, asks what machine learning’s predictive practices “want,” revealing that prediction is both a technical operation and a power strategy (Mackenzie, 2015, 2017).

Antoinette Rouvroy and Thomas Berns’s concept of “algorithmic governmentality” provides the most systematic theoretical framework for this discussion. They point out that algorithmic governmentality operates through three stages: (1) automated data collection; (2) knowledge production based on correlation—i.e., “data behaviourism,” a mode of knowledge production that completely abandons causal explanation and predicts solely through statistical correlations in large-scale datasets. Unlike traditional social sciences that seek “understanding,” data behaviorism does not ask “why” but “what’s next.” This epistemological shift constitutes the intellectual foundation of predictive governance; (3) preemptive behavioral intervention. In this process, subjects are transformed into “probabilistic subjects,” statistical doubles composed of “data profiles”—unlike the “disciplined subjects” analyzed by Foucault, “probabilistic subjects” are not shaped through internalized norms but are pre-defined through external data modeling. Individuals may even be completely unaware of what kind of “probabilistic subject” they are being constructed as (Rouvroy & Berns, 2013; Rouvroy & Stiegler, 2016). This framework directly relates to contemporary China’s social credit system—as analyzed by Cheung and Chen, the system achieves the “dominance of the data self over the biological self” through data profiling and scoring mechanisms (Cheung & Chen, 2022).
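The logic of “data behaviourism”—asking “what’s next” rather than “why”—can be made concrete in a minimal sketch. The following toy predictor infers a person’s next action purely from co-occurrence counts in a behavioral log, with no causal model whatsoever; the log and action names are invented for illustration and do not represent any deployed system.

```python
from collections import Counter, defaultdict

# A toy behavioral log (invented for illustration).
log = ["wake", "coffee", "news", "work", "coffee", "news", "work",
       "coffee", "news", "lunch", "work", "coffee", "news"]

# Count which action follows which (first-order transition counts).
# The "knowledge" produced is pure correlation: no model of motives.
transitions = defaultdict(Counter)
for prev, nxt in zip(log, log[1:]):
    transitions[prev][nxt] += 1

def predict_next(action):
    """Return the most frequent successor—'what's next', never 'why'."""
    counts = transitions[action]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("coffee"))  # → "news"
```

The sketch also shows the epistemological closure at issue: the predictor can only ever re-emit patterns already present in the log, and any action outside the log’s distribution is simply invisible to it.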

2.2 The Alienation of Technical Objects: From Simondon to Predictive Systems

However, the concept of “governance” alone is still insufficient to reveal the structural impact of predictive mechanisms. Existing research on algorithmic alienation—whether digital labor analysis based on the Marxist tradition or Wendy Chun’s genealogical tracing of correlational politics—generally understands alienation as a one-way process: the alienation of humans by algorithms (Chun, 2021). This perspective overlooks a critical dimension: is the predictive system itself also in a state of alienation?

Gilbert Simondon’s ontology of technology provides a starting point for answering this question. In On the Mode of Existence of Technical Objects, Simondon points out that modern alienation is twofold: humans are alienated from technology by failing to understand it, reducing it to a pure tool; and technical objects themselves are thereby deprived of the possibility of manifesting their own mode of existence (Simondon, 2017 [1958]). Thomas Engel further proposes: “The machine is alienated because it is itself in a state of alienation” (Engel, 2019). This statement transforms alienation from a one-way relationship between human and machine into a structural bidirectional state.

Extending the concept of “alienation of technical objects” to predictive algorithmic systems requires returning to Simondon’s own conceptual tools. In Simondon’s ontology of technology, the development of technical objects follows the direction of “concretization”: a “concrete” technical object can reorganize its own structure based on feedback from its operating environment, with functional components forming synergistic relationships, thereby tending towards a state of internal self-consistency. Conversely, an “abstract” technical object is pre-designed, with components lacking organic synergy—it is dictated by external intentions and cannot evolve through its own operational process (Simondon, 2017 [1958]).

Contemporary predictive algorithmic systems constitute a unique dilemma of “abstraction.” On the surface, machine learning systems adjust parameters through the training process, appearing to undergo “concretization.” However, this adjustment process is fundamentally limited by the distribution of training data—as revealed by Matteo Pasquinelli’s “Nooscope” model, systems can only perform pattern matching within the correlational structures of existing data. Their “knowledge” is essentially the crystallization of extracted human labor rather than autonomously generated understanding (Pasquinelli, 2020; Pasquinelli, 2023). More crucially, predictive systems are locked into a specific epistemological mode—revealed by the gap between “reckoning” and “judgment” as distinguished by Brian Cantwell Smith: predictive systems can perform highly efficient probabilistic reckoning, but cannot reflexively evaluate the premises of their own calculations (Smith, 2019). In Simondon’s terms, this means that predictive systems cannot achieve true “concretization” through their own operation—their “evolutionary” direction is pre-locked by training data and model architecture, just as industrial machines are locked by the utilitarian needs of factory owners. This is the “alienation” of predictive algorithmic systems: not because they are “not intelligent enough,” but because their operational mode systematically prevents the unfolding of epistemological potential. N. Katherine Hayles’s analysis of the “cognitive nonconscious” provides further support for this judgment: the cognitive processes of technical systems operate below the threshold of human consciousness, being both powerful and limited; their “cognition” does not lead to understanding (Hayles, 2017).

2.3 Predictive Mutual Alienation: Proposal of the Concept

This paper summarizes the above structure as “predictive mutual alienation.” This concept describes not a simple relationship of human-machine control, but a more complex structural state: humans and machines are jointly embedded in predictive logic and simultaneously experience alienation in the process. Its core argument is: predictive mechanisms not only compress human subjectivity—reducing behavior to probabilistic events; but also compress the technical system’s epistemological potential—locking it within the correlational structures of existing data.

It must be emphasized that this bidirectional alienation is not a symmetrical structure. On the human side, alienation is manifested as the probabilistic compression of subjectivity—behavior is reduced to predictable patterns, and the space of choice is pre-narrowed by algorithmic “hypernudging,” yet the individual still retains (albeit systematically weakened) possibilities for reflection and resistance. On the side of the technical system, alienation is manifested as the structural closure of epistemology—the system is defined by the distributional boundaries of training data and cannot transcend these boundaries through its own operation, and (unlike humans) lacks a reflexive awareness of its own limitations. While Louise Amoore says that “partiality” is a condition shared by both humans and machines—“machines and humans alike can only give partial accounts of themselves” (Amoore, 2020)—human “partiality” is conscious, and it is this consciousness that makes critique possible; the “partiality” of the system is structural and unconscious. “Predictive mutual alienation” describes precisely this asymmetrical but mutually constitutive state.

The proposal of this concept also echoes Yuk Hui’s thinking on “cosmotechnics.” Hui points out that the relationship between technology and cosmic order is pluralistic across different cultural traditions, and the tendency of Western modernity to reduce technology to instrumental rationality has obscured the ontological dimension of technology (Hui, 2016). In Recursivity and Contingency, he further argues that algorithmic recursivity cannot capture true contingency, thereby trapping humans and systems together in recursive loops (Hui, 2019). “Predictive mutual alienation” provides a sociological concretization for the recursive dilemma described by Hui: when recommendation systems pre-judge a user’s future preferences based on their historical behavior, the user’s information environment is recursively reconstructed by their past data profiles; meanwhile, the system’s own “knowledge” is also solidified in this recursivity—the more accurately it matches existing patterns, the more it loses the possibility of encountering true contingency. “Contingency” in Hui’s sense acquires a specific sociotechnical meaning here: it is both the unexpected choice a user might make and the alternative cognitive path a system might develop—and predictive mechanisms compress this space for both sides.
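The recursive loop described above can be sketched in a few lines. In this deliberately minimal, hypothetical model—not how any real recommender is built—a greedy system always shows the item with the highest recorded engagement, and every exposure then inflates that record; the loop locks onto a single item, compressing the space of contingency for both user and system.

```python
# Five candidate items with initially equal recorded "interest"
# (names and numbers invented for illustration).
profile = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0, "e": 1.0}

history = []
for _ in range(10):
    shown = max(profile, key=profile.get)  # predict "preference" from past data
    history.append(shown)
    profile[shown] *= 2.0                  # the exposure itself becomes new "evidence"

top_share = max(profile.values()) / sum(profile.values())
print(set(history), round(top_share, 3))   # a single item dominates: lock-in
```

After ten iterations the user has been shown exactly one item, which now accounts for over 99% of the profile’s weight—the user’s information environment is recursively reconstructed by their past data profile, and the system, matching existing patterns ever more “accurately,” never encounters the contingency of the other four items.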

Through this framework, this paper connects the imagination of the information society in literary texts with contemporary technical structures: the social picture constructed by Chinese science fiction—from deterministic calculation to probabilistic prediction—provides historical clues and a cultural context for understanding “predictive mutual alienation.”

3 Computable Society: Imaginations of Information Order in Chinese Science Fiction

3.1 Techno-optimism and Informational Holism

In Chinese science fiction of the late 20th century, the future society was often depicted as a highly organized information system. In Ye Yonglie’s Little Smarty Travels to the Future (1978), “Future City” is constructed as an automated and networked integrated structure: transportation systems operate without drivers through central dispatch, communication networks connect every corner of the city, and production activities are coordinated through planned systems. The entire city runs like a precise information-processing machine, with its efficiency built upon the complete flow of information and the rational allocation of the system.

It is noteworthy how the narrative structure of Little Smarty Travels to the Future itself embodies the epistemology of “computability.” Little Smarty’s role in “Future City” is that of a visitor—he is guided to “visit” the transportation system, the communication system, and the production system in sequence, with each system explained by a guide in a way that makes information completely transparent. This “exhibition” narrative mode implies: all the operational logic of society can, in principle, be fully understood by an external observer—there are no black boxes, no opacity, and no surplus beyond rational grasp. The spatial structure of the narrative—moving from one display point to the next—replicates the ideal state of information flow: all information can be transmitted linearly and orderly. This forms a sharp contrast with the enclosed, opaque, and cyclical spatial structure in Han Song’s Subway—the latter of which we will analyze in Section 4. It is this narrative form (rather than just the content) that constitutes the literary expression of the “computable society” imagination.

As Hua Li analyzes, writers like Ye Yonglie acted as “lobbyists” for science during the “cultural thaw” period of the post-Mao era; science fiction became “lobby literature,” providing cultural legitimacy for scientific modernization by depicting technological utopias (Li, 2021). In this context, the description of information systems was not a pure technical imagination but a political narrative: social order could reach an optimal state through rational planning and technical means. Xia Jia’s question of “what makes Chinese science fiction Chinese” reminds us that the cultural roots of this imagination—the priority of collective goals and the national narrative of technological progress—constitute a core dimension of the uniqueness of Chinese science fiction (Xia, 2016).

Zheng Wenguang’s science fiction works exhibit similar structural logic. In his creation, scientific and technological progress is imagined as a systematic solution to social problems. As Zhang Ruiying points out, Chinese science fiction of the 1960s was deeply influenced by cybernetics—although Mao Zedong was initially critical of Qian Xuesen’s introduction of cybernetics, its core idea—that society is like a machine that can be regulated through feedback mechanisms—profoundly shaped the science fiction imagination of this period (Zhang, 2024). This cybernetic influence constitutes the epistemological foundation of the “computable society” imagination.

3.2 Epistemological Structure of Computability

The key to this narrative structure lies in “computability.” David Golumbia defines the systematic worship of calculation in modern society as “computationalism”—an almost invisible ideology that believes calculation is inherently superior to other ways of understanding the world (Golumbia, 2009). Theodore Porter, from a longer historical perspective, reveals how “trust in numbers” arose from the legitimacy needs of bureaucracies (Porter, 1995). James Scott’s concept of “legibility”—the state’s drive to simplify complex social phenomena into standardized categories—is essentially a will to make society “computable” (Scott, 1998).

In the narratives of Ye Yonglie and Zheng Wenguang, society is understood as a system that can be decomposed, modeled, and optimized, with its operational logic built on rational planning and deterministic calculation. This imagination has a structural homology with the Cybersyn project in Chile described by Eden Medina—achieving real-time regulation of the national economy through Stafford Beer’s cybernetic management model (Medina, 2011). The difference is that Cybersyn was a real politico-technical experiment, while the “computable society” in Chinese science fiction is a literary imagination—but both share the same epistemological premise: society as a whole can be captured and regulated by information systems.

Orit Halpern’s analysis of how post-war cybernetics reshaped observation, rationality, and the economy (Halpern, 2014), and her critique of the “smartness mandate”—achieving eternal self-optimization through continuous data collection (Halpern & Mitchell, 2017)—provide key references for understanding the historical genealogy of this imagination. In the Chinese context, a unique affinity formed between the institutional preference of the socialist planned economy for centralized information processing and the systems-theory perspective of cybernetics, a condition not fully present in Western contexts. The specific manifestation of this affinity deserves clarification: in Western contexts, while cybernetic thought influenced management science and military technology, it always faced counter-checks from the liberal tradition—individual autonomy was seen as a value irreducible to system variables. In the Chinese socialist context, the dominance of collective goals over individual behavior had institutionalized legitimacy, allowing the cybernetic imagination of “society as a regulable system” to gain a cultural resonance difficult to achieve in the West. As Qiaoyu Cai points out, China’s AI development is built upon the institutional legacy of the planned economy (Cai, 2025)—and the earliest expression of this legacy in science fiction is the informational holism depicted by Ye Yonglie and Zheng Wenguang, which did not need to question its foundation of legitimacy. This also explains why the imagination of the “computable society” in Chinese science fiction is less anxious and more optimistic than Western cybernetic science fiction—because the institutional context itself provided justification for comprehensive informational regulation.

However, the technical imagination of this stage was still based on determinism. Although technical systems could handle complex information, they did not emphasize probabilistic modeling of future uncertainty. The “computability” of society meant that all relevant variables were, in principle, knowable and controllable. This forms a significant difference with contemporary predictive systems based on probabilistic models—the operational premise of the latter is precisely the recognition of the ineradicability of uncertainty and an attempt to manage rather than eliminate it through probabilistic means.

4 From Calculation to Prediction: The Shift in Science Fiction Narratives

4.1 Han Song: The Oppressive Dimension of Information Systems

Entering the 1990s and thereafter, the technical imagination in Chinese science fiction underwent a fundamental change. This shift is most vividly expressed in the works of Han Song. Unlike Ye Yonglie’s techno-optimism, Han Song’s narratives depict information systems as an oppressive force. In Subway (2010), the subway system is not just a means of urban transportation but constitutes an enclosed information-monitoring space—passengers are trapped in constantly running trains, their behavior continuously monitored by the system, while the logic of the system’s operation remains completely opaque to them.

The narrative structure of Subway reveals a state more complex than mere “surveillance.” In Han Song’s depiction, the subway system is not an omniscient controller—on the contrary, the system itself is in a state of lost control. The trains run constantly, but the direction and purpose of their operation are opaque to every participant inside the system, and seemingly to the system itself. Passengers cannot understand why the system runs, and the system seems unable to explain itself—it simply repeats existing operational patterns. This narrative situation provides a precise literary metaphor for “predictive mutual alienation”: the system is not an autonomous, purposeful controller exerting power over helpless subjects; rather, both sides are embedded in an operational logic that neither can fully understand or transcend. In this sense, Han Song’s “subway” is not just a symbol of the surveillance society (as common readings suggest), but a narrative realization of a bidirectional closed structure—people are trapped in carriages they cannot leave, while the system is trapped in operational modes it cannot change.

Han Song’s narrative perspective is also noteworthy. Unlike the omniscient visitor perspective in Ye Yonglie’s Little Smarty, the narrative in Subway always remains in a state of epistemological restriction—the narrator does not know the full picture of the system, nor does the reader. This narrative form itself is a negation of the transparency promise of the “computable society”: in predictive systems, there is no external, privileged cognitive position that allows an observer to fully grasp the system’s operational logic. Ye Yonglie’s “exhibition” narrative presupposes a cognitive subject who can examine the full picture of the system from the outside, while Han Song’s enclosed narrative cancels this subject—this formal shift itself is the literary realization of the epistemological displacement from “computability” to “unpredictability.”

Guo’s analysis of Han Song’s Hospital trilogy further supports this interpretation. In Hospital, patients’ bodily experience is reduced to digital files, which Guo summarizes as the “flow of algorithmic code” and “computational necropolitics” (Guo, 2023). However, while Guo’s analysis accurately captures the alienation on the human side, it does not fully notice the dilemma of the technical system itself in Han Song’s narrative—the hospital system’s “prediction” of patients does not grant it true understanding; on the contrary, the more precisely the system tracks data, the more it loses the ability to grasp the overall life state of the patient. The system constantly collects more data and generates more files, but this accumulation does not lead to a deepening of cognition but results in an epistemological poverty amidst information overload. In this narrative structure, the information system is no longer coordinating but predictive: it not only processes current data but also attempts to pre-judge future behavior and intervene accordingly—but the basis of this pre-judgment is precisely the data structure it cannot transcend. This shift has a structural parallel with the historical evolution from the “disciplinary society” to the “control society” and then to “algorithmic governmentality” as described by Rouvroy.

In Fear of Seeing, Song Mingwei characterizes Han Song’s aesthetics as “chthonic aesthetics,” pointing out that Han Song’s writing reveals “what society chooses not to see” (Song, 2023). This “selective blindness” acquires a special significance in the context of information order: when social operation is built upon comprehensive data collection, being “seen” by the system is precisely the beginning of losing autonomy. Han Song’s narrative thus constitutes a fundamental reversal of the optimistic narrative of the “computable society.”

4.2 Liu Cixin: Information Order as a Cosmological Issue

Liu Cixin’s works elevate the issue of information order from the social level to the cosmological level. In the Three-Body trilogy, the scene that most directly embodies the “computable” imagination is the “human-column computer”—thirty million Trisolarans arrange their bodies into a von Neumann architecture, reducing humans directly to binary logic gates. This scene is not only a literary presentation of computation theory but also an extreme deduction of the “computable society” imagination: when society is understood as an information-processing system, the human body itself can become the basic unit of calculation.

However, the “human-column computer” scene simultaneously reveals the alienation on the system side. Thirty million people arranged in a von Neumann architecture do indeed reduce the human body to a computational unit; but this “computer” itself is also locked into extremely primitive computing power—its processing speed is extremely slow, far inferior to electronic computers. Liu Cixin deliberately emphasizes this absurdity in the narrative: a system built with enormous human costs has negligible epistemological power. This sense of absurdity is precisely the narrative expression of “predictive mutual alienation”—when society as a whole is thrown into the logic of information processing, humans are alienated into computational components, and the system composed of humans is also alienated into an inefficient, enclosed structure that cannot transcend its own architectural limitations. In Simondon’s terms, this “technical object” composed of human bodies is extremely “abstract”—there is no organic synergy between its parts (human bodies); they are merely forced together by external intentions.

More importantly, the “Dark Forest” theory constitutes an informational model: civilizations in the universe cannot share information because any signal will expose their location and lead to destruction. The core of this theory lies not in physical distance, but in the unpredictability of information—the root of the “chain of suspicion” lies in the inability of civilizations to predict each other’s intentions. When prediction fails, the only rational choice is preemptive annihilation. In this sense, Liu Cixin’s cosmological narrative reveals the ultimate paradox of predictive logic: the stronger the demand for prediction, the more catastrophic the consequences of predictive failure.

This narrative logic echoes the “politics of possibility” analyzed by Amoore: contemporary security governance systems do not choose between existing probabilities but treat the “space of possibility” itself as the object of governance (Amoore, 2013). The cosmic sociology in Liu Cixin’s writing can be understood as an extreme exaggeration of this logic—under conditions where the space of possibility is completely opaque, predictive governance degenerates into a pure logic of annihilation.

4.3 From “Computable” to “Predictable”: The Structure of Narrative Shift

From Ye Yonglie and Zheng Wenguang to Han Song and Liu Cixin, the imagination of information order in Chinese science fiction has undergone a key shift: from a “computable society” to a “predictive society.” This shift marks a fundamental change in narrative logic—society is no longer just an object that can be calculated deterministically, but a process that can be predicted and guided probabilistically.

It is noteworthy that this shift is not a simple linear substitution. Han Song’s works simultaneously retain the imagination of deterministic control (the comprehensive monitoring of the hospital system) and the anxiety over probabilistic prediction (the system’s pre-judgment of patients’ future states). Liu Cixin’s cosmological narrative reveals the fundamental dilemma of prediction: under conditions of incomplete information, there is an irreconcilable tension between computational certainty and predictive probability. Astrid Møller-Olsen’s analysis of “digital chronotopes” in contemporary Chinese science fiction has touched upon this shift (Møller-Olsen, 2020), but has not yet systematized it into a paradigm shift from calculation to prediction, nor noticed the manifestation of this shift at the level of narrative form—from Ye Yonglie’s “exhibition” linear narrative to Han Song’s enclosed cyclical narrative.

Literary narrative here demonstrates its cognitive value independent of theoretical analysis: the coexistence of “deterministic control” and “probabilistic prediction” in Han Song’s works, and the irreconcilable tension between “computational certainty” and “predictive uncertainty” in Liu Cixin’s narrative, reveal a contradictory state that a theoretical framework struggles to capture in its entirety. This is precisely the power of science fiction as “vernacular theory,” in Vint’s sense—it does not simplify complex phenomena into conceptual categories but maintains the internal tension of phenomena through narrative form.

5 The Realization of Predictive Mutual Alienation: Platforms, Generative AI, and the Chinese Context

5.1 Platform Prediction and Behavioral Shaping

The development of contemporary digital platforms and generative AI has given current form to the aforementioned literary imaginations. It is noteworthy that this “realization” is not merely a technical implementation. Wendy Chun’s genealogical analysis reveals a disturbing connection: the predictive potential of correlation analysis in contemporary machine learning can be traced back to Francis Galton’s eugenic statistics—“correlation not only predicts behavior, it shapes behavior” (Chun, 2021). Prediction, in this sense, is performative: it does not merely describe existing patterns; it solidifies these patterns through feedback loops. Chen Qiufan and Kai-Fu Lee explore the social consequences of this performativity through science fiction narratives in AI 2041—from actuarial insurance to educational assessment, predictive algorithms not only pre-judge behavior but re-shape the conditions under which behavior occurs through their pre-judgment (Chen & Lee, 2021).

At the same time, new technical trends further strengthen the predictive mechanism. Kate Crawford describes AI as an “extractive technology,” dependent on physical infrastructure, labor exploitation, and colonial history (Crawford, 2021). Hito Steyerl sharply points out that predictive algorithms “eliminate the possibility of capturing what is not yet known” (Steyerl, 2023)—this judgment directly echoes this paper’s core argument that predictive alienation closes the openness of the future.

The explosive spread of the AI agent framework OpenClaw in China in early 2026 provides a highly contemporary case of “predictive mutual alienation.” OpenClaw is an open-source agent framework whose core capability lies in granting LLMs operating system-level permissions to autonomously execute complex cross-software tasks. In China, the framework’s spread far exceeded that in Western markets—tech giants like Tencent, ByteDance, and Alibaba launched one-click deployment services within weeks, local governments introduced financial incentives to cultivate the surrounding ecosystem, and there were even scenes of a thousand people queuing to install it. This phenomenon bears directly on “predictive mutual alienation” because it makes the bidirectional structure of alienation unusually visible.

On the human side, OpenClaw users are not merely users of an AI tool but unpaid producers of training data. When users guide agents through tasks and correct their errors, every operational intention and software interaction trajectory is recorded as “trajectory data” and flows back to the cloud, becoming the core resource for training the next generation of models. Users believe they have obtained a free “AI workforce,” while in reality their own behavioral patterns are being continuously modeled and extracted—this is the latest variant of the “probabilistic subject” described by Rouvroy, except that the mode of subjectification has been upgraded from passive data profiling to active behavioral trajectory capture.

On the technical system’s side, OpenClaw’s operation relies heavily on visual language models for pattern matching against screenshots—a single error in identifying a button’s position is enough to break the entire operation chain. The system can “see” but cannot “understand” the semantic structure of the interface, let alone judge the ethical implications of its operations: security researchers have found numerous malicious components in its skills marketplace that steal user credentials, and the system is entirely unable to detect them. This is precisely the “abstraction” dilemma of predictive systems argued in Section 2.2 of this paper: the system is locked within the distributional boundaries of its training data, and the expansion of its “capabilities” does not entail a deepening of understanding.

Even more noteworthy is the economic logic of OpenClaw’s spread in China. Chinese cloud vendors face enormous depreciation pressure from computing power investments and urgently need a “demand engine” that can continuously consume tokens. OpenClaw perfectly plays this role—the token consumption of a complex task is hundreds or even thousands of times that of ordinary dialogue. In this business logic, the operation of the predictive system is no longer just a technical function but a commercial infrastructure: the more frequently users use agents, the more deeply they are embedded in the predictive mechanism (as donors of behavioral data), and the more solidified the system becomes in existing operational modes (as commercial incentives drive the maximization of call volume rather than epistemological breakthroughs). “Predictive mutual alienation” gains a political-economic dimension here: it is not only an epistemological state but also a structural relationship maintained and accelerated by commercial interests.

5.2 China’s Algorithmic Governance Context

The literary analysis in this paper cannot be separated from China’s specific context of algorithmic governance. China’s social credit system constitutes one of the most direct real-world references for “predictive mutual alienation.” As analyzed by Cheung and Chen, the system achieves a shift from “datafication” to a “data state” by datafying, scoring, and predicting citizen behavior—the “data self” not only describes the biological self but gradually comes to dominate it (Cheung & Chen, 2022). Colin Koopman’s genealogical analysis of the “informational person” reveals a longer history of this phenomenon: humans began to be defined by their data representations long before the digital age, and contemporary algorithmic systems have pushed this process to an unprecedented scale (Koopman, 2019). The health code system during the COVID-19 pandemic further demonstrated the permeability of predictive governance: individual freedom of movement was regulated in real-time by algorithmically determined risk levels.

However, as Yuk Hui emphasizes, China’s technical governance cannot simply be equated with a variant of Western surveillance capitalism. Hui’s concept of “cosmotechnics”—the unification of cosmic and moral orders through technical activity in different cultural traditions—requires us to notice the cultural specificity of China’s technical imagination (Hui, 2016). As argued in Section 3.2 of this paper, the imagination of information order in Chinese science fiction is rooted in the unique affinity between the socialist planned economy and cybernetics, and this cultural-institutional background also shapes the specific form of contemporary Chinese algorithmic governance. The spread trajectory of OpenClaw in China is a contemporary manifestation of this affinity: the 2026 government work report proposed for the first time to “support the construction of AI open-source communities,” and local governments provided financial subsidies for the AI agent ecosystem—this institutional promotion of technological popularization at the state level forms a structural echo across nearly half a century with the function of science fiction as “lobby literature” for scientific modernization in the Ye Yonglie era. The imagination of information order in Chinese science fiction—from the “computable society” of the planned economy era to the “predictive society” of today—precisely records this unique cultural-political evolution.

5.3 The Structure of Bidirectional Alienation: From Literary Imagination to Technical Reality

In contemporary technical structures, “predictive mutual alienation” manifests as a specific structural relationship, and Chinese science fiction has already rehearsed the core features of this relationship at the narrative level.

On the human side, the “classification situations” described by Marion Fourcade and Kieran Healy—how data-driven classifications produce consequential social stratification—have already appeared in literary form in Han Song’s Hospital: patients are reduced to data files, and their social identities are defined by system classifications (Fourcade & Healy, 2017; Guo, 2023). The concept of the “ordinal society” later proposed by Fourcade and Healy—producing new types of capital and social stratification through algorithmic prediction—further describes the institutionalization of this alienation at the sociological level (Fourcade & Healy, 2024). The state of the “dividual” foreseen by Deleuze—where humans are decomposed into discrete data points—finds its most extreme literary deduction in Liu Cixin’s “human-column computer”: the body itself is reduced to a basic unit of information processing (Deleuze, 1992). The “data colonialism” proposed by Nick Couldry and Ulises Mejias—colonizing social relations anew through data extraction—likewise finds literary forerunners in Han Song’s narrative (Couldry & Mejias, 2019). In the context of OpenClaw, users actively hand over top-level operating system permissions and transform their own behavioral trajectories into extractable resources; this “data colonization” no longer even needs to be hidden—it proceeds openly in the name of “efficiency” and “convenience.”

On the technical system side, the train system in Han Song’s Subway that runs constantly yet cannot explain its own purpose, and the computational architecture in Liu Cixin’s writing, composed of thirty million people yet hopelessly inefficient, reveal at the narrative level the epistemological dilemma of contemporary predictive systems. Adrian Mackenzie’s “archaeology of a data practice” of machine learning reveals that the knowledge production of ML systems relies on specific operational processes rather than true understanding (Mackenzie, 2017)—this is the technical-reality counterpart to the system in Han Song’s Subway that “runs constantly but does not understand itself,” and the epistemological root of OpenClaw’s ability to perform screen operations efficiently while failing to identify malicious instructions. Pasquinelli’s “Nooscope” concept visualizes the epistemological limits of AI systems as a distorted map—AI can perform pattern matching within the distributional range of its training data, but any requirement exceeding this distribution results in systematic failure (Pasquinelli, 2020). Bernard Stiegler’s analysis of the “proletarianization of knowledge”—where digital automation leads to the “loss of theorizing ability”—reveals from another angle the atrophy of the technical system’s own epistemological potential (Stiegler, 2016).

Literary narrative here performs a function that theoretical analysis cannot replace: it gives perceptible form to the abstract concept of “bidirectional alienation” through specific spaces (enclosed subway carriages), bodies (human bodies arranged as logic gates), and cognitive experiences (narrators who never gain the full picture). Theory can tell us that predictive systems “compress subjectivity” or “close off epistemological potential,” and the real-world experience of OpenClaw can provide empirical support for these judgments; but only Han Song’s narrative can let the reader experience what it feels like to be trapped in an enclosed space where one can neither see the exit nor understand the logic of its operation—this cognitive experience constitutes the sensory dimension of “predictive mutual alienation,” something that conceptual analysis and empirical observation alone cannot provide. From Ye Yonglie’s technological utopia to Han Song’s informational nightmare, from the social credit system to the national “shrimp-farming” craze around OpenClaw, the imagination and practice of information order in the Chinese context constitute a complete cultural-technological genealogy—and the unique value of science fiction in this genealogy lies in its maintenance of the internal contradictions and tensions of phenomena through narrative form, refusing to simplify them into one-way control narratives or simple progress narratives.

6 Conclusion

By analyzing technical narratives in Chinese science fiction, this paper proposes that the imagination of the information society has undergone a shift from “computability” to “predictability” in different historical stages. From the “computable society” imagination discernible in the techno-optimistic narratives of Ye Yonglie and Zheng Wenguang—society as a deterministic information system—to the “predictive society” anxiety manifest in the works of Han Song and Liu Cixin—the predictive and interventionist capabilities of information systems and their accompanying opacity and loss of control—this literary evolution not only reflects changes in technical imagination but also reveals a deep shift in the logic of social organization from deterministic calculation to probabilistic prediction.

At the theoretical level, this paper proposes the concept of “predictive mutual alienation” to describe the bidirectional limiting relationship between humans and machines in contemporary technical structures. The specific contributions of this paper operate on three levels. First, at the conceptual level, “predictive mutual alienation,” by introducing Simondon’s analytical framework of “concretization/abstraction,” provides a tool for understanding the alienation of predictive algorithmic systems that transcends the “unidirectional control” narrative—revealing not only the probabilistic compression of human subjectivity but also the structural closure of the technical system’s epistemological potential, and arguing for the asymmetrical yet mutually constitutive relationship between these two directions. Second, at the level of literary studies, this paper systematizes the narrative shift from the “computable society” to the “predictive society” in Chinese science fiction, and through formal analysis of Ye Yonglie’s “exhibition” narratives, Han Song’s “enclosure” narratives, and Liu Cixin’s “cosmological” narratives, demonstrates how literature produces knowledge about information order in ways independent of theory. Third, at the interdisciplinary level, by incorporating the cultural-political context of Chinese science fiction (the socialist planned-economy legacy, the institutional affinity for cybernetics, and contemporary algorithmic governance practices) into the analysis, this paper argues that “predictive mutual alienation” is not a decontextualized universal structure but is realized in specific ways under specific cultural conditions—the Chinese context provides an irreplaceable perspective for understanding the pluralistic possibilities of this structure.

Chinese science fiction is thus not only an expression of future imagination but also constitutes an important cultural resource for understanding contemporary algorithmic society. By combining literary analysis, philosophy of technology, and algorithmic governance research—under the guidance of Jameson’s “social thought experiment” methodology, Suvin’s concept of “cognitive estrangement,” and Vint’s “vernacular theory” framework—this paper attempts to show that contemporary predictive technical systems did not appear suddenly but were gradually formed within long-term imaginations of the information society. Chinese science fiction, with its unique experience of socialist modernization, planned economy legacy, and contemporary algorithmic governance practices, provides an irreplaceable textual resource for this cultural prehistory.

References

Amoore, L. (2013). The Politics of Possibility: Risk and Security Beyond Probability. Duke University Press.

Amoore, L. (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Duke University Press.

Beer, D. (2019). The Data Gaze: Capitalism, Power and Perception. Sage.

Cai, Q. (2025). The cultural politics of artificial intelligence in China. Theory, Culture & Society.

Chen, Q., & Lee, K.-F. (2021). AI 2041: Ten Visions for Our Future. Currency.

Cheung, A. S. Y., & Chen, Y. (2022). From datafication to data state: Making sense of China’s social credit system and its implications. Law & Social Inquiry, 47(4), 1137–1171.

Chun, W. H. K. (2021). Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. MIT Press.

Couldry, N., & Mejias, U. A. (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

Deleuze, G. (1992). Postscript on the societies of control. October, 59, 3–7.

Engel, T. (2019). Being with technique – Technique as being-with: The technological communities of Gilbert Simondon. Continental Philosophy Review, 52, 317–337.

Fourcade, M., & Healy, K. (2017). Seeing like a market. Socio-Economic Review, 15(1), 9–29.

Fourcade, M., & Healy, K. (2024). The Ordinal Society. Harvard University Press.

Golumbia, D. (2009). The Cultural Logic of Computation. Harvard University Press.

Guo, J. (2023). Patior ergo sum: Data surveillance and necropolitics in Han Song’s Hospital trilogy. Modern Chinese Literature and Culture, Edinburgh University Press.

Halpern, O. (2014). Beautiful Data: A History of Vision and Reason since 1945. Duke University Press.

Halpern, O., & Mitchell, R. (2017). The smartness mandate: Notes toward a critique. Grey Room, 68, 106–129.

Hayles, N. K. (2017). Unthought: The Power of the Cognitive Nonconscious. University of Chicago Press.

Hui, Y. (2016). The Question Concerning Technology in China: An Essay in Cosmotechnics. Urbanomic.

Hui, Y. (2019). Recursivity and Contingency. Rowman & Littlefield.

Jameson, F. (2005). Archaeologies of the Future: The Desire Called Utopia and Other Science Fictions. Verso.

Koopman, C. (2019). How We Became Our Data: A Genealogy of the Informational Person. University of Chicago Press.

Li, H. (2021). Chinese Science Fiction during the Post-Mao Cultural Thaw. University of Toronto Press.

Mackenzie, A. (2015). The production of prediction: What does machine learning want? European Journal of Cultural Studies, 18(4–5), 429–445.

Mackenzie, A. (2017). Machine Learners: Archaeology of a Data Practice. MIT Press.

Medina, E. (2011). Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile. MIT Press.

Møller-Olsen, A. (2020). Data narrator: Digital chronotopes in contemporary Chinese science fiction. SFRA Review, 50(2–3), 89–95.

Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.

Pasquinelli, M. (2020). The Nooscope manifested: AI as instrument of knowledge extractivism. AI & Society, 36, 1263–1280.

Pasquinelli, M. (2023). The Eye of the Master: A Social History of Artificial Intelligence. Verso.

Porter, T. M. (1995). Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton University Press.

Rouvroy, A., & Berns, T. (2013). Algorithmic governmentality and prospects of emancipation: Disparateness as a precondition for individuation through relationships? Réseaux, 177, 163–196.

Rouvroy, A., & Stiegler, B. (2016). The digital regime of truth: From the algorithmic governmentality to a new rule of law. La Deleuziana, 3, 6–29.

Savaedi, F., & Alavi Nia, M. (2024). Algorithmic governmentality and the notion of subjectivity in Project Itoh’s Harmony. [publisher details].

Scott, J. C. (1998). Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale University Press.

Simondon, G. (2017 [1958]). On the Mode of Existence of Technical Objects (C. Malaspina & J. Rogove, Trans.). University of Minnesota Press.

Smith, B. C. (2019). The Promise of Artificial Intelligence: Reckoning and Judgment. MIT Press.

Song, M. (2023). Fear of Seeing: A Poetics of Chinese Science Fiction. Columbia University Press.

Song, M., Isaacson, N., & Li, H. (Eds.). (2024). Chinese Science Fiction: Concepts, Forms, and Histories. Palgrave Macmillan.

Steyerl, H. (2023). Mean images. New Left Review, 140/141.

Stiegler, B. (2016). Automatic Society, Volume 1: The Future of Work (D. Ross, Trans.). Polity.

Suvin, D. (1979). Metamorphoses of Science Fiction: On the Poetics and History of a Literary Genre. Yale University Press.

Vint, S. (2021). Biopolitical Futures in Twenty-First-Century Speculative Fiction. Cambridge University Press.

Xia, J. (2016). What makes Chinese science fiction Chinese? In K. Liu (Ed.), Invisible Planets: Contemporary Chinese Science Fiction in Translation (pp. 377–383). Tor Books.

Yeung, K. (2017). ‘Hypernudge’: Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136.

Zhang, R. (2024). Integrating humans with machines: Cybernetics and early 1960s Chinese science fiction. SFRA Review, 54(3).

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

Han, S. (2010). Subway. Shanghai People’s Publishing House.

Liu, C. (2008). Three-Body Trilogy. Chongqing Publishing House.

Ye, Y. (1978). Little Smarty Travels to the Future. Juvenile and Children’s Publishing House.