Learning Friction as a Design Principle: Adversarial AI Learning Systems for Art and Design Education

Abstract

Generative Artificial Intelligence (GenAI) is systematically rewriting the conditions under which learning occurs in art and design education. Its technical architecture—centered on predictive logic, instantaneous feedback, and stylistic convergence—is eroding embodied experience, material resistance, and practical judgment, which together form the ontological foundation of artistic learning. However, existing research tends to favor efficiency assessments of AI in education or stays at the macro level of philosophical critique, with little work translating critical theory into actionable educational design principles. This study seeks to bridge this gap. Grounded in the philosophy of technology, learning sciences, and critical design theory, the research proposes “learning friction” as an integrated conceptual framework. This framework unifies cognitive “desirable difficulties” (Bjork, 1994), physical “material resistance” (Ingold, 2013; Sennett, 2008), and political “adversarial design” (DiSalvo, 2012) into a set of educational design principles. Building upon this, the study presents a modular “Adversarial AI Learning System” (AALS) framework that repositions commercial GenAI models from “generators” to “tools for regulating learning conditions.” By introducing structured friction at critical nodes of the creative process—including judgment deferral protocols, material return mechanisms, and deviation provocation modules—the system aims to restore the space for uncertainty and subjectivity in the learning process. Finally, this paper situates the concept of learning friction within the broader context of algorithmic governance critique, arguing that “de-frictioning” is not merely an educational issue but a concrete manifestation of the structural compression of subjectivity by predictive algorithmic systems. This study provides a theoretical framework and design scheme for maintaining diversity and complexity in art education in the age of automation.

Keywords: Generative AI; Art and Design Education; Learning Friction; Adversarial Learning Systems; Critical AI Literacy; Desirable Difficulties; Algorithmic Governance

1 Introduction

1.1 Statement of the Problem

The rapid proliferation of Generative AI (GenAI) in art education is triggering profound structural issues. This impact is not limited to tool-level replacement but involves a systematic rewriting of perceptual structures, learning paths, and the position of the subject (Stiegler, 2016). Creation traditionally stems from the continuous interaction between body, material, and environment (Dewey, 1934). However, under a technical logic characterized by “zero friction” and platform acceleration, this continuous interaction is being replaced by symbolic operation systems dominated by statistical models—experiential learning giving way to probabilistic generation (Manovich, 2023).

This replacement is not a neutral efficiency gain. From the perspective of learning science, the “desirable difficulties” defined by Bjork (1994)—cognitive conditions that increase the learner’s burden in the short term but enhance memory retention and transferability in the long term—are being systematically erased by the instantaneous feedback and high-completion output of GenAI. Learners thus bypass deep encoding processes and fall into the “illusion of competence” described by Bjork and Bjork (2011), feeling as though they have mastered the material without actually forming a stable knowledge structure.

From the perspective of anthropology and practical philosophy, craftsmanship and tacit knowledge are built upon the processual structures of repetition, failure, and material negotiation (Sennett, 2008; Polanyi, 1966). Ingold (2013) describes creation as a “correspondence” between the creator and the material, a relationship that requires the creator to continuously respond to the physical resistance and unpredictability of the material. However, GenAI operations in “latent space”—the mathematical space within models that compresses high-dimensional data into low-dimensional representations—sever these critical experiential links, replacing direct interaction between body and material with statistical compression.

Simultaneously, this cognitive and aesthetic crisis is deeply coupled with a broader logic of technological governance. Rouvroy (2013) points out that predictive models reshape the space of action through preemptive intervention at the pre-conscious level, inducing subjects’ behavior before intentions are even formed. In educational settings, this means AI platforms not only replace the “difficult” parts of learning but also constitute a hidden political structure through classification biases in training datasets, with learners unconsciously forced to accept specific visual taxonomies and worldviews (Crawford & Paglen, 2019; Crawford, 2021; Noble, 2018). Learners lacking “critical AI literacy” are prone to mistaking these algorithmic biases for neutral knowledge (Long & Magerko, 2020; Raley & Rhee, 2023).

1.2 Inadequacies of Existing Responses

In the face of these challenges, existing research has formed two main response paths, but both have obvious limitations.

The first path focuses on assessing the efficacy of GenAI in education. A series of empirical studies have examined the impact of AI-assisted teaching on students’ learning outcomes, finding that GenAI can indeed enhance divergent thinking and lower emotional barriers during the early exploratory stages of creativity (Fang, 2026), but that it also produces an “ideation-execution gap”—GenAI accelerates concept generation yet falls short during the execution phase, which requires technical precision, embodied practice, and situational judgment (Fang, 2026). Empirical research by Doshi and Hauser (2023) further indicates that while GenAI enhances individual creativity, it significantly reduces the collective diversity of output, suggesting a systematic risk of aesthetic homogenization in AI-assisted creation. However, research along this path primarily focuses on outcomes rather than conditions—i.e., whether AI is “effective” rather than “what it has changed.”

The second path starts from the philosophy of technology, conducting macro-level critiques of AI’s impact on subjectivity. Stiegler’s (2016) discourse on the automatic society, Hui’s (2016, 2019) reflections on technodiversity and cosmotechnics, and Pasquinelli and Joler’s (2021) analysis of AI as a tool for knowledge extractivism provide important frameworks for understanding the systematic impact of GenAI. However, these theoretical critiques mostly remain at the diagnostic level and have not been systematically translated into actionable educational design schemes.

This study seeks to bridge the gap between these two paths by translating critical theory into educational design principles whose effectiveness can subsequently be tested through Design-Based Research (DBR).

1.3 Research Questions

Based on the above problem framework, this study proposes two interconnected theoretical questions:

RQ1: How does the default technical logic of GenAI specifically reshape the structure of judgment and subjectivity in art learning? That is, how can “de-frictioning” as a technical force be conceptualized in terms of its impact on learning conditions?

RQ2: Under the learning conditions where GenAI intervenes, what design principles and system architectures can theoretically respond to the challenge of de-frictioning and restore the space for uncertainty, difference, and judgment in the learning process?

2 Literature Review

2.1 GenAI and Art Education: Current State of Empirical Research

Research on the application of GenAI in art education has experienced significant growth in recent years. Fang (2026), in a systematic review, analyzed tool usage, pedagogical frameworks, and practice modes of GenAI in higher art education, identifying text-to-image generation models (such as DALL-E, Midjourney, Stable Diffusion) and conversational AI (such as ChatGPT) as the most frequently implemented tool categories. The review found that GenAI is used for three main functions: creative production, pedagogical scaffolding, and instructional design. However, the review also pointed out a recurring structural issue—the “ideation-execution gap”: while GenAI has significant advantages in the concept generation stage, it struggles to provide effective support in the execution phase, which requires technical nuance, embodied practice, and situational judgment.

At the empirical level, multiple studies have reported positive impacts of GenAI on students’ creative self-efficacy and classroom engagement. However, these positive effects need to be examined within a broader framework. The experimental study by Doshi and Hauser (2023) found that while individuals using GenAI to assist in creation showed an increase in creativity scores, the overall diversity of the group’s output decreased significantly. This finding is of great significance for art education: if AI-assisted creation leads to aesthetic convergence at a system level, then “creativity enhancement” at the individual level may be a statistical illusion.

The study by Abdulmajid et al. (2025) provides a more critical perspective. Observing digital art education in the Gulf region, the study noted three types of systematic errors when students used Stable Diffusion XL to represent Gulf cultural heritage: sociocultural dissonance (e.g., incorrect representation of gendered dress), temporal misalignment (anachronistic historical details), and morphological hallucinations (structural or biological inaccuracies). More importantly, the study observed the spontaneous formation of “Algorithmic Resistance” behavior among students—counteracting AI’s stereotypical output through exclusionary syntax (negative prompts) and fine-tuning generation constraints. The researchers thus proposed the concept of “curatorial verification,” arguing that GenAI education should go beyond skill training in prompt engineering to cultivate students’ ability to critically evaluate and finely adjust AI outputs.

Together, these studies point to one conclusion: the impact of GenAI on art education cannot be judged simply as “effective” or “ineffective”; the key question is how it has changed the conditions under which learning occurs—that is, the structural relationship between the learner, materials, tools, and their own judgment.

2.2 Adversarial Design and Critical Educational Theory

In Adversarial Design (2012), Carl DiSalvo proposes that design can serve as a form of political engagement, challenging established beliefs, values, and factual assumptions by creating “agonistic spaces.” This concept draws on Mouffe’s theory of “agonistic pluralism,” emphasizing that democratic societies require legitimate spaces for conflict and dissent rather than false consensus. In his subsequent work, Design as Democratic Inquiry (2022), DiSalvo further develops this idea, advocating for “doing design otherwise” to maintain the vitality of local democracy.

Introducing adversarial design into an educational context requires a critical conceptual translation. DiSalvo’s original focus was on agonistic engagement in the public political sphere, whereas “adversariality” in an educational context points to a critical consultative relationship between the learner and the technical system. Biesta’s (2013) philosophy of education provides a bridge for this translation. Biesta argues that the core value of education lies not in eliminating risk and uncertainty, but precisely in maintaining the “beautiful risk of education”—those unpredictable, open moments that may lead to failure but may also give birth to true learning. In the context of GenAI systematically eliminating these “moments of risk” with its high-completion output, adversarial design provides a strategy for consciously reintroducing uncertainty into the learning process.

Dunne and Raby’s (2013) “Speculative Design” provides another complementary path. Speculative design advocates that design should not merely serve problem-solving but should act as a medium for exploring alternative possibilities and questioning established assumptions. In the context of AI education, this means curriculum design should not merely ask “how to better use AI,” but also “what creative possibilities exist outside of, alongside, or after AI.”

2.3 Desirable Difficulties, Embodied Learning, and Material Resistance

The concept of “desirable difficulties” proposed by Bjork (1994) provides the theoretical foundation for this study from the perspective of learning science. “Desirable difficulties” refer to learning conditions that increase the difficulty of encoding in the short term but enhance memory retention and knowledge transferability in the long term, specifically including spaced practice, interleaved practice, the testing effect, and the generation effect. Bjork and Bjork (2011) emphasize that there is often a disconnect between the subjective fluency of learning and the actual learning effect—learning methods that feel “easy” do not necessarily lead to lasting learning outcomes, while those that feel “difficult” may produce deeper cognitive encoding.

This cognitive science discovery echoes interestingly with the traditions of embodied cognition and practical philosophy. Sennett (2008), in The Craftsman, discusses how craftsmanship is formed through repetition, failure, and continuous negotiation with physical materials, emphasizing that “resistance” is not an obstacle to learning but a condition for it. Ingold (2013), from an anthropological perspective, proposes that “making” is essentially a “correspondence” between the creator and the material—the creator must continuously respond to the physical properties, accidental changes, and unexpected effects of the material, rather than unilaterally imposing a preset intention on inert matter. Polanyi’s (1966) discussion of “tacit knowledge” further indicates that many critical pieces of practical knowledge cannot be explicitly stated or encoded; they can only be internalized through repeated practice involving bodily engagement.

The intervention of GenAI applies smoothing pressure precisely at these three levels: the difficulty of cognitive encoding, the physical interaction between body and material, and the practical accumulation of tacit knowledge. When AI provides high-completion visual output at millisecond speeds, learners skip both the deep encoding process at the cognitive level and the material negotiation process at the bodily level, further losing the opportunity to accumulate tacit knowledge through repeated trials and failures.

2.4 Research Gap and Positioning of This Study

Synthesizing these three groups of literature, a clear research gap can be identified: existing work is either empirical research on AI educational efficacy that lacks a critical philosophy of technology perspective; or theoretical technical critique that lacks a translation path to educational design; or general frameworks for critical AI literacy (Long & Magerko, 2020) that lack concretization for the specificities of art education.

The positioning of this study lies at this intersection: integrating critical theory (Stiegler, DiSalvo, Biesta) with learning science (Bjork) and embodied practice theory (Ingold, Sennett) into a set of educational design principles, and proposing an actionable design framework based on this integration. The core question of this research is not “is GenAI good or bad for education,” but “how to restore the structures of resistance essential for learning through design when the frictional conditions of learning are systematically removed.”

3 Theoretical Framework: “Learning Friction” as an Integrative Concept

3.1 Proposal of the Concept

This study proposes “learning friction” as the core concept of its theoretical framework. Learning friction refers to structural conditions in the learning process that slow down, obstruct, or deflect the learner’s existing path. While these conditions increase the difficulty of learning in the short term, they play an irreplaceable constructive role in the formation of judgment, the differentiation of creativity, and the establishment of subjectivity in the long term.

Learning friction is not a brand-new invention but an integrative restatement of three existing but mutually independent theoretical traditions:

Cognitive Friction: Derived from Bjork’s (1994) concept of “desirable difficulties.” It refers to cognitive conditions that increase encoding difficulty but enhance long-term memory retention and transferability. In art learning, cognitive friction manifests as the judgment process of choosing between multiple possible visual schemes, the repeated clarification and revision of one’s own creative intentions, and the continuous evaluation of the relationship between the work and the frame of reference.

Material Friction: Derived from the practice theories of Ingold (2013) and Sennett (2008). It refers to the resistance encountered by the creator in the process of interacting with physical materials—the unpredictability of materials, the limitations of tool use, and current boundaries of bodily skills. Material friction forces the creator to respond rather than merely execute, creating a space for continuous negotiation between “intention” and “result.”

Political Friction: Derived from DiSalvo’s (2012) adversarial design theory and Biesta’s (2013) educational risk theory. It refers to critical conditions that expose the biases and default logic of technical systems, prompting learners to question rather than comply with established frameworks. Political friction is not about creating meaningless conflict but about maintaining the learner’s right to negotiate within the technical environment—understanding what forces are shaping them and how to consciously respond to that shaping.

3.2 GenAI as a Technical Force for “De-frictioning”

Under this framework, the educational impact of GenAI can be re-understood as a triple “de-frictioning” process:

At the cognitive level, AI’s instantaneous feedback and high-completion output eliminate “desirable difficulties”—learners no longer need to make difficult choices between multiple schemes because AI can generate dozens of schemes in seconds; learners no longer need to repeatedly revise their own intentions because AI’s output is already “good enough.” This cognitive smoothing leads to a hollowing out of judgment—learners retreat from “making judgments” to “selecting AI outputs.”

At the material level, AI’s symbolic operation severs the embodied interaction between the creator and physical materials. When creation shifts from canvas, clay, or screen-printing to text prompts, bodily participation is extremely simplified, and the “dialogue between the hand and the head” described by Sennett loses its physical foundation. The “correspondence” mentioned by Ingold is replaced by a unidirectional “command-output” relationship.

At the political level, the default logic of commercial GenAI platforms—converging towards the mean of data distribution and optimizing for click-through rates and user satisfaction—constitutes a form of hidden aesthetic governance. Manovich (2023) points out that AI-generated visual output tends towards high-frequency areas in data distribution, leading to systematic stylistic homogenization. Crawford and Paglen (2019) reveal how the classification biases embedded in training datasets constitute a hidden political structure of visual production. In the absence of critical awareness, learners easily internalize these algorithmic biases as “natural” aesthetic standards.
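The convergence dynamic Manovich describes can be illustrated with a deliberately simplified toy model—this is a schematic sketch of distributional homogenization under iterative, mode-seeking generation, not a claim about any specific model’s training. If each “generation” samples from a distribution fitted to the previous round’s output while biased toward its high-density center, diversity collapses across rounds. All function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mode_seeking_generation(samples, temperature=0.7, rng=rng):
    """Toy 'generator': fit a Gaussian to the data, then sample from it
    with temperature < 1, i.e. biased toward the high-density center."""
    mu, sigma = samples.mean(), samples.std()
    return rng.normal(mu, sigma * temperature, size=len(samples))

# A diverse starting population of 1-D "styles".
styles = rng.normal(0.0, 1.0, size=1000)
spread = [styles.std()]
for _ in range(5):  # each round generates from a model fit to the last round
    styles = mode_seeking_generation(styles)
    spread.append(styles.std())
# spread shrinks round over round: diversity collapses toward the mode
```

Under these toy assumptions, the standard deviation of the “style” population decays geometrically—an abstract analogue of the system-level convergence Doshi and Hauser (2023) observed empirically.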

3.3 Design Principles of Learning Friction

Based on the above analysis, this study derives three design principles from the three dimensions of “learning friction,” serving as the theoretical foundation for the subsequent “Adversarial AI Learning System”:

Principle I: Judgment Deferral. Derived from the cognitive friction dimension. Consciously delaying the timing of AI intervention in the learning workflow, requiring learners to complete an independent judgment process—including scheme conception, standard setting, and self-evaluation—before obtaining AI output. This principle aims to restore the deep encoding process covered over by AI’s instantaneous feedback.

Principle II: Material Return. Derived from the material friction dimension. Embedding mandatory steps for physical material operation within AI-assisted digital creative workflows, requiring learners to undergo at least one round-trip between digital generation and physical making. This principle aims to restore the embodied experience of body-material interaction.

Principle III: Deviation Provocation. Derived from the political friction dimension. By exposing AI’s default tendencies (such as high-frequency areas in generation space and cultural biases in training data) and actively introducing creative constraints that deviate from these defaults, learners are provoked to produce visual schemes that deviate from the “average” of the algorithm. This principle aims to restore aesthetic difference and critical awareness.

4 Adversarial AI Learning System: Design Framework

4.1 System Overview

Based on the aforementioned theoretical framework, this study proposes a modular “Adversarial AI Learning System” (AALS). This system is not a technical platform or a software application but a pedagogical workflow architecture designed with learning friction as its principle, which can be embedded in different types of art and design courses.

The core idea of AALS is to reposition commercial GenAI models from “generators” to “tools for regulating learning conditions”—i.e., AI is not a tool used to replace creation but a means used to create specific learning conditions (including contrast, conflict, and bias exposure). The goal of the system is not to exclude AI but to place its use within a critical pedagogical scaffold.

4.2 System Architecture

AALS consists of four functional modules, each corresponding to a specific type of learning friction:

Module I: Judgment Anchoring Module

Function: To establish an independent judgment baseline for the learner before AI intervention.

Process: After receiving the creative brief, learners enter a 24-48 hour “AI-free period.” During this time, they complete three tasks: (a) hand-draw sketches for at least three directions; (b) write a justification of no more than 100 words for each direction; (c) choose one as the “main direction” and record the selection criteria. These records constitute the learner’s “judgment baseline,” used for subsequent comparison with changes in judgment after AI intervention.
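To make the record this protocol produces concrete, it can be sketched as a simple data schema. This is an illustrative sketch, not part of the AALS specification; all class and field names (`JudgmentBaseline`, `DirectionEntry`, etc.) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DirectionEntry:
    """One hand-drawn direction produced during the AI-free period."""
    sketch_ref: str     # filename or scan ID of the hand-drawn sketch
    justification: str  # learner's written rationale (<= 100 words)

@dataclass
class JudgmentBaseline:
    """Judgment record captured before any AI intervention (Module I)."""
    learner_id: str
    brief_id: str
    directions: list          # at least three DirectionEntry items
    main_direction_index: int # which direction was chosen as "main"
    selection_criteria: str   # the learner's stated reasons for the choice
    recorded_at: datetime = field(default_factory=datetime.now)

    def is_complete(self) -> bool:
        # The protocol requires >= 3 directions, a valid main direction,
        # and justifications within the 100-word limit.
        return (len(self.directions) >= 3
                and 0 <= self.main_direction_index < len(self.directions)
                and all(len(d.justification.split()) <= 100
                        for d in self.directions))
```

A baseline captured this way can later be diffed against the learner’s post-AI choices to surface how, and how much, AI output shifted their judgment.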

Design Rationale: Bjork’s (1994) “generation effect” indicates that information generated independently by the learner has stronger memory retention than information received passively. This module activates deep encoding at the cognitive level by forcing independent judgment.

Module II: Material Shuttle Module

Function: To create a mandatory round-trip cycle between digital generation and physical making.

Process: After using AI tools to generate visual schemes, learners must choose at least one scheme for physical material translation. The form of translation is determined by the nature of the course—it could be manual collage, screen-printing, ceramics, hand-drawing, 3D models, etc. After completing the physical translation, learners (a) photograph the result, (b) describe in the judgment log the “accidents in the physical process” and the “new understandings brought by those accidents,” and (c) revise the digital scheme based on the physical making experience—constituting a complete “digital → physical → digital” cycle.
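The mandatory round trip can be modeled as a minimal state machine that only permits the digital → physical → digital sequence. This is a sketch of the workflow logic, not a prescribed implementation; the names (`Phase`, `ShuttleCycle`) are hypothetical.

```python
from enum import Enum

class Phase(Enum):
    DIGITAL_GENERATION = "digital_generation"
    PHYSICAL_TRANSLATION = "physical_translation"
    DIGITAL_REVISION = "digital_revision"

# Allowed transitions for one complete shuttle cycle.
ALLOWED = {
    Phase.DIGITAL_GENERATION: {Phase.PHYSICAL_TRANSLATION},
    Phase.PHYSICAL_TRANSLATION: {Phase.DIGITAL_REVISION},
    Phase.DIGITAL_REVISION: set(),  # cycle complete
}

class ShuttleCycle:
    """Tracks one mandatory digital -> physical -> digital round trip."""
    def __init__(self):
        self.phase = Phase.DIGITAL_GENERATION
        self.log = []  # free-text notes: accidents, insights, revisions

    def advance(self, note: str):
        """Move to the next phase, recording a note about the phase just left."""
        nxt = ALLOWED[self.phase]
        if not nxt:
            raise RuntimeError("cycle already complete")
        self.log.append((self.phase, note))
        self.phase = next(iter(nxt))

    def complete(self) -> bool:
        return self.phase == Phase.DIGITAL_REVISION
```

The point of the encoding is that no path skips the physical phase: a learner cannot reach `DIGITAL_REVISION` without first passing through `PHYSICAL_TRANSLATION`.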

Design Rationale: Ingold’s (2013) “correspondence” theory and Sennett’s (2008) theory of craftsmanship. The physical resistance of materials is completely eliminated in digital creation; this module restores the experiential dimension of body-material interaction through mandatory physical intervention.

Module III: Bias Exposure Module

Function: To make AI’s default tendencies visible and debatable.

Process: Teachers guide learners in an “AI Generation Space Map” activity. Specifically: the whole class uses the same prompt to generate a large batch of images with the AI (e.g., 100), then collectively clusters and arranges these images to identify the AI’s default style areas (high-frequency regions) and empty areas (low-frequency or missing regions). On this basis, a classroom discussion is launched: why does the AI tend to generate these styles? What do the missing styles mean? What biases in the training dataset lead to these tendencies?
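The clustering step of the mapping activity could also be supported computationally. The sketch below is one possible approach under stated assumptions: each image is first converted to an embedding vector (replaced here by synthetic data), and a minimal k-means groups the vectors so that cluster sizes approximate high- versus low-frequency style regions. Both function names are illustrative, not part of AALS.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every point to every centroid, then reassign.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def generation_space_map(embeddings, k=5):
    """Cluster image embeddings and rank clusters by size: large clusters
    approximate the model's default (high-frequency) styles, small ones
    its sparse or missing regions."""
    _, labels = kmeans(embeddings, k)
    counts = np.bincount(labels, minlength=k)
    order = counts.argsort()[::-1]
    return [(int(c), int(counts[c])) for c in order]

# Synthetic stand-in for 100 image embeddings: one dense "default style"
# mode and a rarer region, mimicking a skewed generation distribution.
rng = np.random.default_rng(1)
dense = rng.normal(0, 0.3, size=(80, 8))
sparse = rng.normal(5, 0.3, size=(20, 8))
ranking = generation_space_map(np.vstack([dense, sparse]), k=2)
```

The resulting ranking gives the class a quantitative starting point; the pedagogically important work—naming the clusters and debating why some regions are empty—remains a human discussion.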

Design Rationale: Research by Crawford and Paglen (2019) on the politics of AI training datasets, and the “social impact of AI” competency dimension in Long and Magerko’s (2020) critical AI literacy framework.

Module IV: Tension Collaboration Module

Function: To establish constructive tension relationships between learners and between learners and AI.

Process: Learners pair up in “tension pairs.” Each learner exchanges their AI-generated results with their partner and asks the partner to re-interpret those results for “maximum deviation”—whether through re-creation with physical materials or deliberate “reverse” generation by modifying prompts. Afterwards, both sides discuss: what new possibilities did the deviation produce? Where is deviation valuable? And where is deviation merely for its own sake?

Design Rationale: DiSalvo’s (2012) adversarial design theory. This module extends “adversariality” from a binary relationship between the learner and the AI system to social interaction among learners, making critique a collaborative practice rather than an isolated act.

4.3 The Role of the Teacher

In AALS, the role of the teacher is redefined into a triple identity:

Builder of Judgment Frameworks: Responsible for designing the specific parameters of each module (such as the length of the “AI-free period,” medium selection for material translation, and discussion framework for bias exposure), and dynamically adjusting these parameters according to the students’ specific circumstances.

Interpreter of Bias: In the bias exposure module, teachers need to possess sufficient technical understanding to explain the technical and social reasons behind AI generation tendencies—for example, why do AI-generated “Chinese style” images always present a specific visual pattern? This involves a comprehensive understanding of cultural biases in training datasets, mathematical characteristics of model architectures, and optimization goals of commercial platforms.

Participant in Meaning Negotiation: The teacher is not the final arbiter of judgment but a conversational partner jointly participating in meaning negotiation with the learner. The teacher’s core task is not to tell students “what is good,” but to help students understand “how standards of what is good are constructed”—and how AI’s default logic implicitly influences the construction of these standards.

4.4 System Boundaries and Limitations

It should be clearly pointed out that AALS does not advocate an “anti-AI” or “AI-free” educational stance. Its core goal is not to exclude AI from art education but to transform the use of AI from “unconditional acceptance” to “critical negotiation.” The design assumption of the system is: when learning conditions are redesigned with friction, uncertainty, and visible technical bias, learners’ judgment, creative difference, and subjectivity awareness can gain greater generative space. This assumption needs to be tested through empirical research, providing a testable theoretical proposition for subsequent empirical studies.

Furthermore, AALS currently focuses on undergraduate-level art and design education; its applicability at different educational levels (such as graduate and vocational education) and in different cultural contexts requires further research. The system also places high demands on teachers’ critical AI literacy—if teachers themselves lack an understanding of AI’s technical characteristics and social impact, Module III and the second identity of the teacher’s role will be difficult to implement effectively.

5 Learning Friction and Algorithmic Governance: Theoretical Expansion

5.1 De-frictioning as an Educational Manifestation of Algorithmic Governance

The conceptual framework of “learning friction” not only has analytical value for pedagogy but also forms a deep theoretical resonance with contemporary algorithmic governance critique. As described by Rouvroy and Berns (2013), “algorithmic governmentality” operates through three stages: automated data collection, knowledge production based on correlation, and preemptive behavioral intervention. The operational logic of GenAI in educational scenarios is strikingly isomorphic to this: learners’ behavioral data is collected as material for training and optimization, probabilistic models replace causal knowledge construction, and AI’s instantaneous output pre-structures the learner’s next step.

From this perspective, “de-frictioning” is not merely an educational issue—it is a concrete manifestation of the structural compression of subjectivity by predictive algorithmic systems. When AI systems take “zero friction” as their design goal, they are in effect executing a “preemptive intervention” in Rouvroy’s sense: before the learner forms an independent judgment, they pre-occupy the space of judgment with high-completion output. Yeung (2017) summarizes this mechanism as “hypernudge”—a networked, continuous form of behavioral guidance. In an educational context, AI’s instantaneous feedback is precisely a hypernudge at the cognitive level.

5.2 Learning Friction and Mutual Alienation

Situating the concept of learning friction within the theoretical framework of “mutual alienation” can reveal deeper structures. Mutual alienation describes an asymmetrical bidirectional state: human behavior is reduced to probability distributions, compressing subjectivity; while the technical system itself is locked within the correlational structures of training data, unable to achieve true epistemological breakthroughs.

In educational settings, the specific manifestation of this bidirectional alienation is: the learner’s aesthetic judgment is “predicted” and homogenized by AI’s default distribution—the “decline in collective diversity” found by Doshi and Hauser (2023) is empirical evidence of this process; while the AI system itself is limited by its training data, unable to generate truly novel schemes beyond existing aesthetic distributions—its “creativity” is essentially a reorganization rather than a breakthrough of high-frequency patterns in the training data. The design significance of learning friction gains new theoretical depth here: it is not only the restoration of learning conditions but also an active intervention in the structure of mutual alienation—by creating cracks on the smooth surface of the predictive system, it reserves space for the contingency of judgment and the unpredictability of materials.

5.3 From Immanent Critique to Educational Design

The methodological stance of this study can be understood as an extension of “immanent critique” in the field of educational design. Immanent critique—simultaneously using a system as a tool and as an object of critique—already has a deep genealogy in contemporary computational art practice (Agre, 1997). The core strategy of AALS is precisely an immanent critique in the field of education: not excluding AI, but creating visible ruptures in the process of using AI, so that learners can develop critical awareness of the AI system within the AI system.

This methodological choice is complementary to Dunne and Raby’s (2013) “speculative design”: speculative design asks what the alternative possibilities are, while immanent critique asks how critique is possible within the existing system. For learners situated in an educational reality where commercial GenAI platforms are inescapable, the latter question may carry more operational significance than the former.

6 Discussion

6.1 Theoretical Contributions

The core theoretical contribution of this study lies in the proposal of “learning friction” as an integrative conceptual framework. The value of this framework lies in:

First, it unifies three previously independent theoretical traditions—“desirable difficulties” in the learning sciences, “material resistance” in practical philosophy, and “adversarial design” in critical design—under a common conceptual umbrella, giving them new, integrated explanatory power in the context of GenAI education. These three types of “friction” can be integrated because GenAI applies “de-frictioning” pressure at all three levels simultaneously—this synchronicity makes the response of any single theoretical tradition inadequate and necessitates an integrated framework.

Second, the concept of “learning friction” shifts the discussion of GenAI’s educational impact from an “effect evaluation” paradigm to a “condition analysis” paradigm. Where most existing studies ask whether AI effectively assists learning, the learning friction framework asks how AI changes the very structure of the conditions under which learning occurs. This paradigm shift allows some seemingly positive AI effects (such as enhanced learner self-efficacy and accelerated creative output) to be re-examined—they may in fact be manifestations of the “illusion of competence” (Bjork & Bjork, 2011) that arises once cognitive friction is eliminated.

Third, by linking learning friction with broader theoretical frameworks such as algorithmic governmentality and mutual alienation, this study demonstrates how educational issues are embedded within larger technopolitical structures. “De-frictioning” is not an isolated educational phenomenon but a concrete projection of the universal operational logic of predictive technical systems in the field of education.

6.2 Implications for Educational Practice

For art and design educators, the practical implications of this study are concentrated in the following aspects:

First, the introduction of AI tools should not take “efficiency” as the sole criterion. Many art institutions currently position GenAI as “a tool to enhance creative efficiency”; this positioning overlooks the educational value of the “inefficient” steps in art learning (such as repeated trials, material exploration, and slow judgment). AALS offers an alternative positioning—treating AI as a means to “create valuable learning difficulties.”

Second, the role of the teacher needs to shift from “technical trainer” to “critical mediator.” In the AI era, the core task of art and design teachers is not to teach students how to write prompts or how to choose AI tools, but to help students understand the technical logic and cultural biases behind AI output and make grounded judgments based on this understanding.

Third, curriculum design should consciously protect “non-AI spaces.” This does not mean prohibiting the use of AI, but rather reserving dedicated time and space in the curriculum workflow for independent judgment, physical material operation, and critical reflection, giving learners the opportunity to establish comparative self-awareness between “with AI” and “without AI” conditions.
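To make the “judgment deferral” principle more concrete, the following Python fragment sketches one possible classroom-tool mechanism: the AI’s output is generated but withheld until the learner records their own expectation first, with an optional slow-feedback delay. This is a minimal, hypothetical illustration under stated assumptions, not an implementation specified by AALS; the class name `JudgmentDeferralSession` and the generic `generator` callable (standing in for any commercial GenAI API call) are the author of this sketch’s own inventions.

```python
import time
from dataclasses import dataclass, field
from itertools import count
from typing import Callable, Dict, Iterator


@dataclass
class JudgmentDeferralSession:
    """Hypothetical sketch of a judgment-deferral protocol.

    `generator` is any callable mapping a prompt to a result; here it
    stands in for a commercial GenAI call. Output is generated at
    request time but only revealed after the learner articulates an
    expectation, inserting a friction point before the AI's answer.
    """

    generator: Callable[[str], str]
    delay_seconds: float = 0.0  # optional slow-feedback interval
    _pending: Dict[str, str] = field(default_factory=dict)
    _ids: Iterator[int] = field(default_factory=count)

    def request(self, prompt: str) -> str:
        # Generate immediately, but do not reveal the result yet;
        # the learner receives only an opaque ticket.
        ticket = f"ticket-{next(self._ids)}"
        self._pending[ticket] = self.generator(prompt)
        return ticket

    def reveal(self, ticket: str, learner_intention: str) -> str:
        # The learner must record a non-empty expectation before
        # seeing the output: the "judgment deferral" friction point.
        if not learner_intention.strip():
            raise ValueError("Record your own judgment before viewing the output.")
        time.sleep(self.delay_seconds)
        return self._pending.pop(ticket)
```

In use, a learner would call `request`, write down what they expect the output to look like, and only then call `reveal` with that written expectation, allowing a later comparison between their own judgment and the AI’s default distribution.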

6.3 Limitations and Future Directions

As a theoretical work, the primary limitation of this study is that the AALS design framework has not yet undergone systematic empirical validation. Whether the three design principles of learning friction—judgment deferral, material return, and deviation provocation—can produce the expected effects in real teaching environments requires testing through subsequent Design-Based Research (DBR). Furthermore, AALS currently focuses on undergraduate-level art and design education; its applicability at different educational levels and in different cultural contexts requires further research. The system also places high demands on teachers’ critical AI literacy—if teachers themselves lack an understanding of AI’s technical characteristics and social impact, the bias exposure module will be difficult to implement effectively.

Priority directions for future research include: implementing AALS in real teaching environments and collecting empirical data; developing quantitative assessment tools for measuring learning friction; and exploring the localized adaptation of AALS in different cultural contexts (particularly under China’s AI governance framework).

7 Conclusion

This study starts from a core contradiction facing art and design education: while GenAI provides unprecedented creative convenience, it is systematically eliminating the structural conditions upon which art learning depends—the “frictional” steps involving judgment difficulty, material resistance, and critical negotiation.

In response to this contradiction, the research makes contributions at two levels. At the theoretical level, it proposes “learning friction” as an integrative conceptual framework, synthesizing learning science, practical philosophy, and critical design theory in the context of GenAI education, providing a new analytical paradigm for understanding AI’s impact on learning conditions and linking it to broader theoretical contexts of algorithmic governmentality and mutual alienation. At the design level, it proposes a modular “Adversarial AI Learning System” (AALS) design framework, translating theoretical critique into actionable educational design principles—judgment deferral, material return, and deviation provocation—and implementing them through specific pedagogical modules.

This study argues that in the GenAI era, the core task of art education is not to teach how to “generate,” but to train how to “judge”—how to maintain the openness of aesthetic judgment in automated systems, how to uphold critical awareness in the face of algorithmic bias, and how to establish meaningful creative round-trips between the physical and the digital. Learning friction is not an obstacle to learning; it is a condition for it. When the technical system makes everything smooth, consciously designing friction becomes an educational ethic.

References

Abdulmajid, M., Alali, N., & Alsharrah, A. (2025). Critical agency and hybrid cognition in digital art education: Human-AI co-creation using Stable Diffusion XL. SSRN. https://doi.org/10.2139/ssrn.5983388

Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. Journal of the Learning Sciences, 13(1), 1-14.

Biesta, G. (2013). The beautiful risk of education. Paradigm Publishers.

Bjork, R. A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185-205). MIT Press.

Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. Psychology and the Real World: Essays Illustrating Fundamental Contributions to Society, 2, 59-68.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101.

Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.

Crawford, K., & Paglen, T. (2019). Excavating AI: The politics of images in machine learning training sets. https://excavating.ai

The Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5-8.

Dewey, J. (1934). Art as experience. Minton, Balch & Company.

DiSalvo, C. (2012). Adversarial design. MIT Press.

DiSalvo, C. (2022). Design as democratic inquiry: Putting experimental civics into practice. MIT Press.

Doshi, A. R., & Hauser, O. P. (2023). Generative artificial intelligence enhances creativity but reduces the diversity of novel content. arXiv. https://doi.org/10.48550/ arXiv.2312.00506

Dunne, A., & Raby, F. (2013). Speculative everything: Design, fiction, and social dreaming. MIT Press.

Fang, Z. (2026). Integrating generative AI in higher art education: A systematic review of tools, pedagogies, and practices. SN Computer Science, 7, 215.

Hui, Y. (2016). On the existence of digital objects. University of Minnesota Press.

Hui, Y. (2019). Recursivity and contingency. Rowman & Littlefield.

Ingold, T. (2013). Making: Anthropology, archaeology, art and architecture. Routledge.

Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-16). ACM.

Manovich, L. (2023). Artificial aesthetics: A critical guide to AI, media and design. Manovich.net.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

Pasquinelli, M., & Joler, V. (2021). The Nooscope manifested: AI as instrument of knowledge extractivism. AI & Society, 36(4), 1263-1280.

Polanyi, M. (1966). The tacit dimension. Doubleday.

Raley, R., & Rhee, J. (2023). Critical AI: A field in formation. American Literature, 95(4), 693-709.

Rouvroy, A. (2013). The end(s) of critique: Data behaviourism versus due process. In Privacy, due process and the computational turn (pp. 143-168).

Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press.

Sennett, R. (2008). The craftsman. Yale University Press.

Stiegler, B. (2016). Automatic society, volume 1: The future of work. Polity Press.

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.

Agre, P. E. (1997). Toward a critical technical practice: Lessons learned in trying to reform AI. In G. Bowker et al. (Eds.), Social science, technical systems, and cooperative work (pp. 131-157). Lawrence Erlbaum.

Rouvroy, A., & Berns, T. (2013). Gouvernementalité algorithmique et perspectives d’émancipation. Réseaux, 177(1), 163-196.

Yeung, K. (2017). ‘Hypernudge’: Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136.