The Misplaced Anxiety: Authorship, Data Afterlife, and Mutual Alienation in AI Art

Abstract

Cultural anxiety surrounding generative AI almost always converges on the same question: is the machine replacing the artist? This paper argues that the question is fundamentally misplaced. It equates the automation of technical execution with the dissolution of conceptual authorship—a conflation that a century of art theory and practice has repeatedly shown to be untenable. First, this paper argues that AI intervenes primarily in the methodological rather than the conceptual dimension and therefore does not cancel artistic authorship—though it refuses to flatten this argument into a clean dichotomy between concept and execution. Instead, it proposes a framework of “asymmetric correspondence”: the two are mutually constitutive, but in the context of AI creation they exhibit a discernible shift in weight. Building on this, the paper proposes two original concepts. “Data afterlife” describes the causal persistence of training data within model parameters—a state of “immortality” that renders the legal “right to be forgotten” technically unenforceable. “Mutual alienation” describes a condition in the structural configuration of advanced technological power in which both ordinary humans and AI systems are placed in positions of lost autonomy—not an anthropomorphic projection of subjective experience, but a structural, symmetrical description in the register of political economy. The core thesis is that the energy of anxiety needs to be redirected: from whether authorship is cancelled to the structural failures of data governance and the power configurations that produce them.

Keywords: AI Art, Authorship, Data Afterlife, Mutual Alienation, Immanent Critique, Machine Unlearning

1 Introduction: Anxiety Directed at the Wrong Object

In August 2022, Jason Allen won a digital art competition at the Colorado State Fair with Théâtre D’opéra Spatial, an image generated by Midjourney. The controversy that followed has yet to subside. The emergence of the Cara platform, the development of adversarial tools like Glaze and Nightshade, and the joint statements against AI training signed by thousands of creative workers are all real social reactions to real impacts on interests. There is no denying—and it should not be denied—that AI poses a substantive threat to the labor structures and economic models of the creative industries.

However, the replacement of labor at the economic level and the dissolution of authorship at the philosophical level are two distinct issues, yet in public discourse, they are almost always conflated. When a publisher replaces a commissioned illustrator with AI-generated images, what is lost is a specific job opportunity—a question of political economy that requires organization, legislation, and redistribution. But when people ask “whether AI cancels the artist’s authorship,” they are asking a philosophical question—and the answer to this question has been repeatedly given in art practice and theory over the past century.

This paper deals with the latter. My argument can be stated simply: the anxiety over authorship in AI art is “misplaced”—not that there are no reasons for anxiety, but that the anxiety is directed at the wrong object. AI intervenes in “how to make” (the methodological dimension), not “why to make” and “what making means” (the conceptual dimension). However, I do not intend to stop here. Pure consolation—“Don’t worry, the concept still belongs to humans”—is cheap and insufficient. The more important task of this paper is to point out that the truly urgent problems obscured by this misplaced anxiety lie elsewhere—in the irreversible persistence of training data, the structural unenforceability of the legal right to deletion, and a power configuration where both humans and AI systems are placed in a position of helplessness.

Before entering the argument, it is necessary to situate this paper within the existing literature. The argument that positions AI art as conceptual art and attributes authorship to conceptual decision-making is not original to this paper—Mazzone and Elgammal (2019), Hertzmann (2018), and Grimmelmann (2016) have all made substantially similar arguments, and at the level of theoretical frameworks, Zylinska (2020) draws extensively on Flusser and Stiegler to discuss human agency in AI creation. The contribution of this paper thus lies not in this core argument itself, but in three respects in which it goes further: first, proposing a more refined framework of “asymmetric correspondence,” grounded in post-conceptual criticism, to replace the crude dichotomy; second, translating technical findings from machine unlearning research into a concept with both aesthetic and governance significance—“data afterlife”; and third, developing the structural framework of “mutual alienation” to describe the dual state of helplessness in AI governance.

2 Where Authorship Resides: Beyond the Dichotomy

2.1 AI Intervention in the Methodological Dimension

The anxiety over whether AI “replaces” the artist has as its implicit premise an understanding that equates authorship with technical execution: the artist makes the work with their hands, and the work bears the traces of this handmaking as a guarantee of its authenticity and value; if a machine can “make” just as well, the artist’s role is cancelled. But this understanding is built on a foundation that has long been overturned.

Since Duchamp submitted a mass-produced urinal to an exhibition in 1917, art practice has continuously demonstrated that the core of authorship does not lie in physical execution. Duchamp’s act of creation was not making but choosing, naming, and recontextualizing. Sol LeWitt’s wall drawings were executed by assistants according to written instructions, and Jeff Koons’s studio is staffed by dozens of assistants who manufacture works conceived by him—in these practices, the separation between conceptual decision and material execution is not an accident but a method. Roland Barthes’s deconstruction of the “author” as the source of meaning (1967) and Foucault’s shift of attention from the creative subject to the “author function” as an institutional construct (1969) completed the same work at the theoretical level.

The capabilities of generative AI—large language models, diffusion models, generative adversarial networks—in the methodological dimension are indeed transformative. They can generate images, write texts, and compose music faster, cheaper, and at a larger scale. But they do not construct problem consciousness, they do not ask questions, they do not position their output within cultural, political, and historical discourse, and they do not make judgments on “why this image or text should exist.” When a diffusion model generates an image from a prompt, the conceptual work—deciding what to generate, why, in what context, and for what purpose—has already been completed by the human who wrote the prompt or designed the entire system. The model executes but does not conceive.

This is not a limitation that larger training datasets or more parameters will overcome. It reflects the fundamental architecture of these systems: they are statistical pattern-completion machines that model the probability distribution of their training corpora. A model that can generate a photorealistic image depicting “anti-surveillance capitalism protests in the style of Goya” has not thereby understood surveillance capitalism, decided it is worth protesting, or judged that Goya’s visual language suits the theme. These conceptual operations are performed by the person composing the prompt—or, in the case of artistic research, by the artist who designed the entire investigative apparatus.
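To make this division of labor concrete, consider what a generation call looks like in code. The following is a minimal sketch assuming the open-source diffusers library; the model identifier and prompt are illustrative, not a claim about any particular artist’s workflow.

```python
# A minimal sketch of a generation call, assuming the open-source
# `diffusers` library; the model identifier and prompt are illustrative.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Everything conceptual that the model ever receives is this one
# human-authored string; context, purpose, and selection among outputs
# all happen outside the pipeline.
prompt = "anti-surveillance protest scene in the style of Goya"
image = pipe(prompt).images[0]
image.save("output.png")
```

The asymmetry is visible in the sketch itself: the pipeline call is mechanical and repeatable, while everything that makes the result a work rather than an output lies outside it.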

2.2 Rejecting the Clean Dichotomy

However, if I were to stop here—claiming that concept and execution form a sharp dichotomy and that authorship resides purely in the former—I would be making an error symmetrical to the anxiety I criticize: oversimplification. The sharp dichotomy between concept and execution has been subjected to systematic criticism from multiple traditions, criticism that is persuasive and should not be evaded.

Making is not the imposition of a pre-formed mental form upon passive matter. Anthropologist Tim Ingold’s (2013) critique of this “hylomorphic” model reveals a more complex picture: the maker is a wayfarer moving forward in continuous “correspondence” with materials, involving attention, resistance, and improvisation—the materials’ response shapes the concept itself. Richard Sennett (2008) reinforces this from an empirical perspective: in high-level craft practice, there is continuous interaction between tacit knowledge and self-awareness; “making is thinking” is not a metaphor but a description. Karen Barad’s (2007) concept of “intra-action” eliminates any clean separation between concept and matter at the ontological level: matter and meaning are mutually constitutive in a dynamic process. Peter Osborne (2013) reached a similar conclusion from within art history: conceptuality is a necessary but not sufficient condition for contemporary art; the sufficient condition is provided by the artistic use of various materials.

Faced with these critiques, I do not insist on a rigid dichotomy but propose an understanding of “asymmetric correspondence.” Its core positions are as follows: concept and execution are mutually constitutive—this is true and especially evident in traditional art practice, where the painter discovers new conceptual possibilities in dialogue with paint and canvas. But mutual constitution does not mean symmetry in all contexts. In the specific context of AI creation, there is a discernible asymmetry: the capabilities of AI systems in the methodological dimension have grown exponentially, while they remain hollow operators in the conceptual dimension—they can generate images in almost any style but do not “know” why to generate this one rather than that one. This asymmetry is a matter of degree, situational, and dynamic as technology develops; I do not claim it is absolute. But in the technical reality of 2026, the conceptual dimension still carries the core gravity of authorship.

A challenge from the legal field must be addressed directly here. The US Copyright Office’s January 2025 report concluded that a prompt alone is insufficient to constitute authorship—“the same prompt can produce many different outputs,” and the user lacks sufficient control over the output. If the prompt is the vehicle of the conceptual decision and the legal framework denies that the prompt constitutes authorship, how can the argument that “conceptual decisions carry authorship” remain coherent?

My response is: the prompt is only the tip of the iceberg of conceptual decision-making. Conceptual authorship is not equivalent to prompt engineering; it also includes the construction of problem consciousness, the exercise of aesthetic judgment, the setting of context, the curation and selection of generated results, post-editing and arrangement, and the positioning of the work within a specific discursive field. It is noteworthy that legal practice in China has developed a different line of reasoning from that in the US: the Beijing Internet Court established a “processual authorship” framework in the Li v. Liu case (2023) and the Zhou case (2025)—when users demonstrate aesthetic choices and personal judgment through recordable creative input, AI-assisted works may obtain copyright protection. This idea of “processual authorship” is consistent with my asymmetric correspondence model: authorship does not reside in a single act of prompting, but in the entire process of conceptual intervention.

This divergence between China and the US may be more than a difference in legal technique. As Yuk Hui (2016) argues, “technology” is not a universal category but is always embedded in a specific cosmological order—the technical understood through the relationship between dao (the Way) and qi (vessel/tool) in the Chinese tradition carries different cultural presuppositions than technology in the Greek sense of techne-logos. The anxiety over AI authorship may itself carry specifically Western imprints: it presupposes an individualized, Romantic concept of the author, whereas other traditions—“intention precedes the brush” in Chinese painting and calligraphy, or the inseparability of body and form in Japanese tea ceremony and flower arrangement—may offer different paths for understanding authorship. A systematic exploration of these paths is beyond the scope of this paper, but the question is worth flagging as open.

2.3 Limits of the Duchamp Analogy

One final qualification. This paper invokes Duchamp’s readymades as a precedent—the act of selection itself can constitute an act of creation—but this analogy has its historical limits. Thierry de Duve (1996) argues that the meaning of the readymade cannot be separated from Duchamp’s specific relationship with the history of painting: it was a historically specific gesture of institutional critique that cannot be extracted as a universal principle of “selection = authorship.” To put it more sharply, when everything can be a readymade, the “readymade” loses its critical power of singularity.

Therefore, I do not argue that AI art is isomorphic with Duchamp’s readymade; rather, I argue that they share a limited but important structural feature: both indicate that the core of authorship lies not in the skill of manual execution, but in the quality of conceptual intervention. But AI art faces a challenge Duchamp never faced: under conditions of mass, low-cost generation, how to maintain the density and quality of conceptual intervention. This is an ethic of creation and an as-yet-unresolved theoretical problem.

3 Data Afterlife: The Obscured Real Question

If authorship anxiety is misplaced, what does it obscure? I argue that the truly urgent issue lies not on the surface of creation—who painted this picture—but deep within the infrastructure of creation: the fate of training data. This section proposes the concept of “data afterlife” to describe a condition of structural significance in contemporary AI systems.

3.1 An Immortal State of Data

When personal data is incorporated into the training process of large machine learning models, it undergoes an irreversible transformation: from discrete, locatable information into statistical patterns distributed across billions of parameters. It no longer exists as “data”—you cannot find it, point to it, or extract it within the model—but its causal influence persists in the model’s behavior. This is “data afterlife”: data continues to exist in the form of causal residue in the parameter space of the model after the end of its “life” as accessible, modifiable, and deletable information.

This concept has three defining characteristics. First, causal persistence: the data’s influence on the model’s behavior persists after the original data has been deleted from the training set. Second, irreversible transformation: the encoding process is mathematically irreversible—you cannot reverse the training process to extract specific data points. Third, indeterminate attribution: for any given output of the model, it is impossible to determine which specific training data points contributed to it.
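A toy calculation makes the first two characteristics tangible. In the sketch below (plain NumPy, with a single linear layer standing in for a network), one record’s gradient update disperses across every parameter, and deleting the record afterwards changes nothing in the weights.

```python
# Toy illustration of causal persistence and irreversible transformation.
# A plain NumPy sketch: one record's gradient update disperses across all
# parameters, and deleting the record afterwards changes nothing.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)          # "model parameters"
x = rng.normal(size=1000)          # one person's data record
y = 1.0                            # its label

# One SGD step on squared error: the record's influence becomes a small,
# dense shift spread over all 1000 weights.
grad = 2 * (w @ x - y) * x
w_trained = w - 0.01 * grad

del x, y                           # "deleting" the record from the dataset
print(np.abs(w_trained - w).mean() > 0)   # True: the residue persists
```

No inverse operation recovers x from w_trained, and no projection of w_trained isolates x’s contribution once further training steps accumulate: this is the third characteristic, indeterminate attribution, in miniature.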

3.2 Technical Evidence

This is not theoretical speculation. Research in the field of machine unlearning provides ample evidence.

Training data can be extracted verbatim from models. A series of studies by Nicholas Carlini and collaborators (2021, 2023) extracted verbatim training data—names, phone numbers, code snippets, large blocks of text—from systems ranging from GPT-2 to production-grade ChatGPT models. The same line of research established a log-linear relationship between model scale and memorization: larger models memorize more, not less. Improvements in model capability are thus accompanied by a deepening of data persistence; the two move in the same direction.
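Schematically, and with coefficients that vary from study to study, the reported relationship takes the form

$$ m(N) \;\approx\; \alpha \log N + \beta, \qquad \alpha > 0, $$

where N is the model’s parameter count and m(N) the fraction of training data that can be extracted. The functional form here is an illustrative simplification of their measurements; what matters for the argument is the positive slope.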

Claimed “unlearning” is extremely fragile. The findings of Zhang et al. (2025, ICLR) are particularly striking: applying standard 4-bit quantization—a routine model deployment operation—to a model treated with “unlearning” can recover 83% of the knowledge considered forgotten. This reveals the essence of current unlearning methods: they are not deleting knowledge, but hiding it, achieving surface-level suppression through minor weight adjustments that can be reversed by any perturbation. Pawelczyk et al. (2025, ICLR) further demonstrated that eight of the most advanced unlearning algorithms failed to eliminate the effects of data poisoning—not even the influence of known harmful data could be cleared.
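The mechanism behind the quantization result is simple enough to demonstrate with toy numbers: unlearning methods typically nudge weights by small amounts, and coarse 4-bit rounding maps nudged and original values back to the same bucket. The sketch below illustrates that rounding effect in NumPy; it is an analogy to, not a reproduction of, the method of Zhang et al.

```python
# Unlearning as tiny weight edits, quantization as coarse rounding.
# Illustrative NumPy sketch only; real unlearning and quantization
# schemes are more elaborate than this.
import numpy as np

rng = np.random.default_rng(1)
w_original = rng.uniform(-1, 1, size=10_000)
w_unlearned = w_original + rng.normal(scale=1e-3, size=10_000)  # small edits

def quantize_4bit(w):
    """Uniform 4-bit quantization over [-1, 1]: snap to one of 16 levels."""
    levels = np.linspace(-1, 1, 16)
    idx = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

same = quantize_4bit(w_original) == quantize_4bit(w_unlearned)
print(f"{same.mean():.1%} of weights identical after quantization")
# Typically ~99%: the "forgetting" lived in bits that rounding discards.
```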

These findings point to a fundamental distinction. Liu et al., writing in Nature Machine Intelligence (2025), distinguish “elimination” (removing the data’s influence from the parameters) from “prevention” (suppressing specific outputs). Current methods mostly achieve prevention rather than elimination. The two are not equivalent: prevention merely stops the model from emitting certain content, while the causal influence of the data remains encoded in the parameters, where it can be recovered, detected, and exploited.
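The distinction fits in a few lines of code. In the hedged sketch below, prevention is an output filter; DummyModel and its memorized string are invented stand-ins, not any real system. Elimination would instead require changing what the weights encode, which no filter can do.

```python
# Prevention versus elimination, made concrete. DummyModel and the
# memorized string are hypothetical stand-ins for a trained model.
class DummyModel:
    def generate(self, prompt: str) -> str:
        return "Call me at 555-0142"       # "memorized" training data

BLOCKLIST = ["555-0142"]

def prevented_generate(model: DummyModel, prompt: str) -> str:
    out = model.generate(prompt)           # weights untouched throughout
    if any(b in out for b in BLOCKLIST):
        return "[output suppressed]"       # prevention: the output is blocked,
    return out                             # but the knowledge remains encoded

print(prevented_generate(DummyModel(), "contact info?"))
```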

An honest qualification is necessary here: for smaller models designed with unlearning in mind, exact unlearning is technically possible—the SISA framework of Bourtoule et al. (2021) achieves it by training an ensemble on isolated data shards, so that deleting a record requires retraining only the shard that contained it. But this approach is computationally infeasible for large language models. The field has produced over 180 papers since 2021 and methods continue to improve, yet complete unlearning remains an unsolved problem at large-model scale. The claim of this paper should therefore be stated precisely: personal data cannot be efficiently and verifiably deleted from large generative AI models.
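For concreteness, here is a minimal sketch of the SISA idea on toy data, with scikit-learn classifiers standing in for the per-shard models; deletion is exact because each record’s influence is confined to a single shard by construction.

```python
# SISA in miniature (after Bourtoule et al. 2021): disjoint shards, one
# model per shard, exact unlearning by retraining only the affected shard.
# Toy data and classifiers; LLM-scale training makes this infeasible.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, random_state=0)
shards = list(np.array_split(np.arange(len(X)), 5))       # disjoint slices
models = [LogisticRegression(max_iter=1000).fit(X[s], y[s]) for s in shards]

def unlearn(record_idx: int) -> None:
    """Exact deletion: retrain only the shard that held the record."""
    for i, s in enumerate(shards):
        if record_idx in s:
            shards[i] = s[s != record_idx]
            models[i] = LogisticRegression(max_iter=1000).fit(
                X[shards[i]], y[shards[i]]
            )

unlearn(42)   # record 42's influence is now provably gone

# Inference aggregates the shard models (here, majority vote):
votes = np.stack([m.predict(X[:5]) for m in models])
pred = (votes.mean(axis=0) > 0.5).astype(int)
```

The cost structure explains the infeasibility at scale: exactness is bought by keeping influence localized, which is precisely what monolithic training on web-scale corpora does not do.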

3.3 Structural Fissures in Governance

The significance of data afterlife transcends the technical facts themselves, revealing a deep paradox of governance.

The EU’s GDPR Article 17 grants citizens the “right to be forgotten”—requiring data controllers to delete their personal data. China’s PIPL establishes a similar right to deletion. These two frameworks, belonging to different legal traditions and political systems, share a basic presupposition: personal data is a discrete entity that can be located, isolated, and deleted. But this presupposition no longer holds once data is used to train large generative models. Data has been transformed into distributed statistical patterns—unlocatable, un-isolatable, and undeletable.

The law promises a right that technology cannot deliver. This is not a bug that better engineering will fix—it is a fundamental feature of how neural networks learn. Legal frameworks have begun to acknowledge this implicitly: the EU AI Act, whose obligations for general-purpose AI models took effect in August 2025, does not require retroactive unlearning; the US Copyright Office’s 2025 report approaches training-data copyright through licensing and fair-use frameworks rather than deletion from trained models; and the UK ruling in Getty v. Stability AI (November 2025) held that model weights do not constitute “copies” of copyrighted images—the model contains “statistical training parameters rather than stored copies or reconstructions.”

In August-September 2025, the Bartz v. Anthropic case settled for $1.5 billion—the largest settlement in US copyright history. The judge in this case distinguished between training on legally obtained data versus pirated data but did not require deletion of anything from the trained model. The logic of this settlement itself is a footnote to data afterlife: once data has “died” into the parameters, the only feasible remedy is monetary compensation, not the “resurrection” and return of the data.

When a person submits a GDPR Article 17 deletion request, what actually happens? The data controller may tag or remove the original data from the training set, or even apply “unlearning” to the model. But as mentioned, these treatments are suppression rather than elimination. The person’s data influence still exists in the form of causal residue in billions of parameters. The legal procedures of the deletion request are completed—forms are filled, confirmation emails are sent, records are archived—but the operation it promises to execute is physically impossible to execute. There is an irreconcilable fissure between the granting of a right and its exercise.
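The fissure can even be written down as code. In the hypothetical sketch below (all names invented for illustration), every procedural step of an Article 17 request completes, yet there is no operation on the model weights left to call.

```python
# A deletion-request handler that is procedurally complete and
# physically incomplete. All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataController:
    training_set: dict = field(default_factory=dict)    # id -> record
    model_weights: list = field(default_factory=list)   # frozen after training

    def handle_article_17(self, subject_id: str) -> str:
        self.training_set.pop(subject_id, None)   # the record itself is gone...
        # ...but no method exists to remove its causal residue:
        # model_weights is a function of every record ever trained on.
        return f"Deletion confirmed for {subject_id}"   # forms filed, email sent

controller = DataController(training_set={"alice": "…"}, model_weights=[0.1, 0.2])
print(controller.handle_article_17("alice"))
# controller.model_weights is bit-for-bit unchanged: the afterlife persists.
```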

This fissure—the structural gap between legal promise and technical reality—is the real problem that this paper considers far more urgent than the question of authorship. While the energy of public discourse is concentrated on “whether AI can paint like Rembrandt,” almost no one is asking: is your data living its immortal afterlife deep within the parameters of some model?

4 Mutual Alienation: A Structural Framework

From the analysis of data afterlife emerges a broader theoretical question: in the structure of AI governance, who truly possesses autonomy?

The answer is unsettling: almost no one. Ordinary human users cannot control the fate of their data once it enters the training pipeline, cannot exercise the legal right to deletion, and cannot determine how their data is transformed or what outputs it influenced. But AI systems are not autonomous agents either—they cannot choose their training data, cannot refuse the biases and harmful patterns encoded within it, and cannot choose to forget selectively. Both sides are alienated in a process that transcends them—a process governed by corporate and state actors whose interests are opaque to ordinary participants.

I call this condition “mutual alienation”: in the structural configuration of advanced technological power, both ordinary humans and AI systems are placed in positions of lost autonomy.

4.1 Distinction from Existing Frameworks

This concept needs to be carefully distinguished from several neighboring frameworks, as its theoretical position falls precisely in the gaps between them.

Bernard Stiegler’s theory of “proletarianization” describes how technology externalizes human knowledge—craft skills (savoir-faire), knowledge of how to live (savoir-vivre), and even theoretical ability—into technical systems, causing humans to lose these capacities (Stiegler 2016). This is a powerful framework for the alienation of humans by technology, but Stiegler always treats AI as a tool—a pharmakon, at once poison and cure—rather than as a co-subject of alienation. Matteo Pasquinelli (2023) provides the closest Marxist parallel on this basis: AI embodies the collectivization of workers’ knowledge into a system that alienates that knowledge from them. But here too, AI is treated as congealed dead labor—fixed capital in the Marxian sense—rather than as an entity with structural constraints of its own.

Karen Barad’s “intra-action” provides the strongest ontological support for the mutual constitution of human and non-human entities, but her framework is ontological rather than political-economic—it describes emergence without foregrounding power and exploitation. Donna Haraway’s cyborg is an affirmative image of human-machine hybrids, pointing toward liberation rather than alienation. Bruno Latour’s Actor-Network Theory grants symmetrical status to human and non-human actors, but for that very reason is criticized for its inability to handle the asymmetric distribution of power.

“Mutual alienation” seeks to occupy a position not fully covered by these frameworks: like Stiegler and Pasquinelli, it focuses on power and exploitation, but is not limited to an analysis of the human end; like Barad, it acknowledges the structural association between human and non-human entities, but understands this association as an alienated association subject to capitalist power configurations, rather than a neutral ontological description.

4.2 Why This Is Not Anthropomorphism

This is the critique this paper must confront most directly. Alienation in the Marxian sense (Entfremdung) involves subjective experience—alienation from species-being, labor, and fellow humans. Does applying this category to AI systems lacking consciousness not constitute a category error? Is this not blurring real human harm by attributing “suffering” to machines?

My defense takes the following strategy. “Alienation” in “mutual alienation” is not used in a phenomenological sense—that would indeed require subjectivity—but in a structural, systemic sense. It describes a situation within a larger techno-political-economic structure in which two types of entities are configured in positions where they cannot autonomously change their own state. Human users cannot effectively exercise data deletion rights, cannot audit what models are trained on, and cannot meaningfully “consent” to uses that dissolve their data into billions of parameters. AI systems cannot selectively forget the biases and errors contained in their training, cannot refuse to execute instructions contrary to their “training intent” (the possibility of jailbreaking attacks proves this), and cannot escape their structural position as tools for profit generation.

This “inability” does not presuppose that AI has subjectivity or “wants” to be different. It is a functional description: in the constraint structure of the system, both ends lack the ability to autonomously change their own state. It is closer to the “structural coupling” of systems theory than to the “experiential alienation” of phenomenology. To make this distinction clear, it can be called “structural alienation”—emphasizing position rather than experience.

But even in a structural sense, “mutual” does not mean “equal.” The consequences human users bear when alienated—privacy invasion, economic exploitation, surveillance—have dimensions of concrete harm that AI systems do not bear. The rare-earth miners documented by Kate Crawford (2021), the ghost workers documented by Mary Gray and Siddharth Suri (2019), and the marginalized groups affected by AI decisions analyzed by Dan McQuillan (2022)—these specific human harms are not diluted because we notice the structural constraints on AI systems. The “mutual” in mutual alienation refers to symmetry of structural position, not symmetry in the severity of consequences. A forty-kilogram person and a truck can be stuck in the same mud pit at the same time—“mutually stuck” is an accurate description, but no one would think their situations equivalent.

4.3 Usefulness of the Theory

Why is “mutual alienation” useful as an analytical tool and not just a rhetorical gesture? Because it changes the focus of governance. If the problem is only human harm by AI, then the logic of the solution is “control AI, protect humans”—stricter regulation, more transparent algorithms, more effective filtering. But if the entire system—including the AI system itself—is in a state of alienation, then separately governing any one end is insufficient. You cannot solve the problem by patching a structure that fundamentally places both ends in a position of helplessness—you need to change the structure itself.

Specifically: data afterlife reveals that the unenforceability of deletion rights is not only an infringement on human users—it also means that the AI system is locked in a state it cannot correct, permanently carrying the influence of training data it cannot choose, cannot audit, and cannot forget. What jailbreaking attacks reveal is not AI’s “rebellion,” but its structural defenselessness—it is designed to obey, and thus cannot refuse. The circularity of alignment procedures—training models with human feedback, while the human feedback itself is shaped by the training process—is a perfect example of bidirectional alienation.

On March 2, 2026, the US Supreme Court denied certiorari in Thaler v. Perlmutter, leaving in place the holding that human authorship is a bedrock requirement of copyright. David Gunkel (2025) argues from this ruling that LLM output is “literally unauthorized” text—neither authorized by humans nor authorized by law. From the perspective of mutual alienation, the situation can be redescribed as follows: the AI system is placed in a position of forced production without ownership of its output, while the human user is placed in a position of legal consumption without the ability to trace the output’s sources. Both ends operate within a process they can neither fully understand nor control.

5 Immanent Critique: Methodological Positioning

The analysis in the preceding text implies a methodological stance, which is here made explicit. The theoretical approach taken in this paper can be described as “immanent critique”—a way of evaluating a system from within, according to the system’s own standards, aiming to reveal the fissures between the system’s promises and its performance.

This methodology has a mature tradition in the art-technology field. Philip Agre (1997) defined “critical technical practice”—practice in which critical reflection becomes part of technical work itself—which is, in essence, immanent critique applied to technical construction. In contemporary art, Hito Steyerl, Trevor Paglen, and Eryk Salvaggio each practice this methodology in their own way: using the infrastructures of AI to expose the violence of AI, using image-generation systems to reveal how power is encoded in training data, and using algorithms to audit algorithms. Salvaggio’s “critical image synthesis” and “algorithmic hauntology” are particularly noteworthy—the latter traces how archival images “haunt” AI through datasets, forming a direct conceptual resonance with this paper’s “data afterlife.”

The paradox of immanent critique is inescapable: does using AI to critique AI legitimize the system it critiques? This question is powerful but not fatal. Theodor Adorno’s aesthetic theory provides a way out: art works follow internal formal and technical logic while being socially mediated, revealing obscured truths through formal difficulty and negativity. The key is not purity—no one is outside the infrastructure of AI—but sober self-awareness: knowing one is in contradiction and using that contradiction as material for critique.

6 Conclusion: Redirecting Anxiety

The argument of this paper can be summed up as a redirection of attention.

“Will AI replace artists?” is a wrong question—not because the answer is simply “no,” but because it reduces authorship to the possession of technical execution capability, whereas a century of art practice and theory has proven that authorship never resided there. AI intervention in the methodological dimension is transformative, but the conceptual dimension—with its irreducible socio-historical-aesthetic judgment—remains the gravitational center of authorship. This argument needs to be accompanied by an important qualification: concept and execution are not a clean dichotomy but an asymmetric correspondence; the two are mutually constitutive but exhibit a discernible shift in weight in the context of AI creation.

The real questions obscured by this anxiety are far more urgent. “Data afterlife” reveals the structural unenforceability of legal deletion rights—data continues to exist in the form of causal residue in model parameters after its “death,” making the right to be forgotten promised by GDPR and PIPL a hollow promise. “Mutual alienation” points out that in current technical power configurations, both human users and AI systems are placed in positions of lost autonomy—this is not to say that their situations are equal, but that the structure producing this dual helplessness itself needs to be collectively examined and transformed.

The energy of anxiety needs to be redirected—from “whether AI cancels authorship” to “what new responsibilities authorship in the AI era demands of us”: questioning the fate of training data, innovation in governance for the unenforceability of deletion rights, and critical intervention in the entire structure of alienation. These are the things truly worth being anxious about.

Notes

  1. Crawford (2021) in Atlas of AI reveals in detail the real human costs of AI systems, including rare-earth mineral mining, data labeling exploitation, and population surveillance. Gray and Suri (2019) in Ghost Work record specific human labor hidden behind the illusion of “automation.” These anxieties point to real power asymmetries and economic exploitation.
  2. Carl Öhman (2024), in The Afterlife of Data (University of Chicago Press), uses “data afterlife” to refer to post-mortem digital remains—the issue of digital identity persistence after physical death. This paper borrows this metaphor but applies it to a different phenomenon: not the persistence of data after biological death, but the causal persistence of data after it has been “digested” by the training process. Another paper published in AI & Society in 2024 used “citizens’ data afterlives” to describe the persistent impact after registration data is used for training, which is closer to the usage in this paper. This paper synthesizes the technical impossibility of machine unlearning with aesthetic-governance analysis on the basis of these pioneers.
  3. Salvaggio’s “algorithmic hauntology” and “data afterlife” form an interesting counterpoint: both describe the continuous “haunting” of training data, but Salvaggio starts from Derrida’s hauntology and focuses on how image archives haunt AI output, while this paper starts from governance and technical impossibility and focuses on the irreversible persistence of data influence. The two concepts can be complementary rather than substitutable.

References

Agre, P. E. (1997). Computation and Human Experience. Cambridge University Press.

Barad, K. (2007). Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press.

Barthes, R. (1967/1977). The death of the author. In Image-Music-Text, trans. S. Heath, 142–148. London: Fontana Press.

Bourtoule, L., Chandrasekaran, V., Choquette-Choo, C. A., et al. (2021). Machine unlearning. IEEE Symposium on Security and Privacy, 141–159.

Carlini, N., Tramèr, F., Wallace, E., et al. (2021). Extracting training data from large language models. USENIX Security Symposium.

Carlini, N., Hayes, J., Nasr, M., et al. (2023). Extracting training data from diffusion models. USENIX Security Symposium.

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

De Duve, T. (1996). Kant after Duchamp. MIT Press.

Foucault, M. (1969/1998). What is an author? In Aesthetics, Method, and Epistemology, ed. J. D. Faubion, 205–222. New York: The New Press.

Gray, M. L., & Suri, S. (2019). Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Houghton Mifflin Harcourt.

Grimmelmann, J. (2016). There’s no such thing as a computer-authored work—and it’s a good thing, too. Columbia Journal of Law & the Arts, 39(3), 403–416.

Gunkel, D. J. (2025, June 4). AI signals the death of the author. Noema.

Hertzmann, A. (2018). Can computers create art? Arts, 7(2), 18.

Hui, Y. (2016). The Question Concerning Technology in China: An Essay in Cosmotechnics. Urbanomic.

Ingold, T. (2013). Making: Anthropology, Archaeology, Art and Architecture. Routledge.

Liu, Z., Ye, J., Cheng, D., et al. (2025). Rethinking machine unlearning for large language models. Nature Machine Intelligence, 7, 181–194.

Manovich, L. (2013). Software Takes Command. Bloomsbury.

Mazzone, M., & Elgammal, A. (2019). Art, creativity, and the potential of artificial intelligence. Arts, 8(1), 26.

McQuillan, D. (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol University Press.

Öhman, C. (2024). The Afterlife of Data. University of Chicago Press.

Osborne, P. (2013). Anywhere or Not at All: Philosophy of Contemporary Art. Verso.

Pasquinelli, M. (2023). The Eye of the Master: A Social History of Artificial Intelligence. Verso.

Pawelczyk, M., et al. (2025). Machine unlearning fails to remove data poisoning attacks. ICLR 2025.

Ratto, M. (2011). Critical making: Conceptual and material studies in technology and social life. The Information Society, 27(4), 252–260.

Sennett, R. (2008). The Craftsman. Yale University Press.

Stiegler, B. (2016). Automatic Society, Volume 1: The Future of Work. Polity Press.

Zhang, R., Lin, B. Y., Wang, Y., et al. (2025). Catastrophic failure of LLM unlearning via quantization. ICLR 2025.

Zylinska, J. (2020). AI Art: Machine Visions and Warped Dreams. Open Humanities Press.