AI and Learner Agency: A Framework for Preserving Student Autonomy in Educational Technology
Abstract
This paper examines the complex relationship between artificial intelligence (AI) in education and learner agency, building on established frameworks for understanding student autonomy in technology-enhanced learning environments. Drawing from Suárez et al.'s multidimensional model of learner agency in mobile learning contexts, we analyze how AI systems can both enhance and undermine six key dimensions of student agency: control over goals, content, actions, strategies, reflection opportunities, and self-monitoring capabilities. Through systematic analysis of mechanisms, concrete examples, and evidence-based countermeasures, we propose a comprehensive framework for designing and implementing AI educational tools that preserve and enhance rather than diminish learner autonomy. Our findings suggest that while AI has significant potential to support student agency through intelligent scaffolding and personalized learning pathways, poorly designed systems risk creating automation dependency, reducing critical thinking opportunities, and homogenizing learning experiences. We conclude with practical design principles, policy recommendations, and evaluation metrics for ensuring AI serves as a tool for empowerment rather than replacement of human decision-making in educational contexts.
Keywords: artificial intelligence, learner agency, educational technology, student autonomy, learning analytics, human-AI collaboration
1. Introduction
The rapid integration of artificial intelligence into educational environments presents both unprecedented opportunities and significant risks for learner agency—the capacity of students to exercise meaningful control over their learning processes (Bandura, 2006; Reeve & Tseng, 2011). While AI systems promise personalized learning experiences, intelligent tutoring, and adaptive support, they also introduce new forms of technological mediation that may fundamentally alter the relationship between learners and their educational environments.
Building on the seminal work of Suárez et al. (2018) on mobile technologies and learner agency, this paper extends their multidimensional framework to examine how AI specifically impacts student autonomy across six critical dimensions: control over learning goals, content selection, learning actions, strategic approaches, metacognitive reflection, and self-monitoring processes. Unlike previous mobile technologies that primarily served as information access tools, AI systems actively participate in educational decision-making, potentially assuming roles traditionally reserved for human judgment.
Recent UNESCO guidelines on AI in education (UNESCO, 2021) and emerging research on AI's educational impact (Holmes et al., 2019; Zawacki-Richter et al., 2019) highlight the urgent need for frameworks that ensure technology enhances rather than replaces human agency. This paper responds to that need by providing a systematic analysis of AI's impact on learner autonomy, supported by practical interventions and evaluation approaches.
2. Theoretical Framework: Learner Agency in AI-Enhanced Environments
2.1 Defining Learner Agency
Learner agency encompasses students' capacity to intentionally influence their learning experiences through goal-directed actions, metacognitive awareness, and strategic decision-making (Bandura, 2006). Following Suárez et al.'s (2018) framework, we conceptualize agency as operating across six interconnected dimensions:
1. Control over goals: Students' ability to participate in defining and negotiating learning objectives
2. Control over content: Capacity to select, evaluate, and work with learning materials and information sources
3. Control over actions: Freedom to choose specific learning activities, their timing, and intensity
4. Control over strategies: Autonomy in selecting metacognitive approaches and learning methods
5. Opportunities for reflection: Access to spaces and tools that promote metacognitive thinking
6. Opportunities for monitoring: Ability to track, interpret, and act upon learning progress data
2.2 AI as a Mediating Technology
Unlike traditional educational technologies that primarily serve instrumental functions, AI systems exhibit three characteristics that fundamentally alter their relationship to learner agency:
- Proactive recommendation: AI systems actively suggest goals, content, and actions rather than simply responding to user queries
- Opacity: Machine learning algorithms often operate through processes that are not transparent to users
- Adaptive behavior: AI systems modify their responses based on data patterns, potentially creating feedback loops that shape user behavior
These characteristics position AI as an active participant in educational decision-making, requiring careful consideration of how human agency is preserved within human-AI collaborative learning environments.
3. Analysis: How AI Can Undermine Learner Agency
3.1 Control over Goals
Mechanisms of Agency Reduction:
AI systems can undermine goal-directed learning through several mechanisms. Adaptive learning platforms often generate "optimal pathways" based on algorithmic analysis of student performance data, prioritizing system-defined objectives over personally meaningful goals. These systems frequently optimize for easily measurable outcomes such as completion rates or test scores, potentially overlooking deeper formative objectives that students might value.
Concrete Example:
A learning management system with AI capabilities analyzes a student's activity patterns and suggests completing six specific modules based on aggregate performance data. The student accepts this recommendation to "save time," despite having a genuine interest in exploring one topic more deeply. The system's efficiency-focused recommendation effectively redirects the student away from personally meaningful learning goals toward algorithmically optimized objectives.
Evidence Base:
Research on recommendation systems in education demonstrates a tendency toward over-dependence when students lack training in critical evaluation of AI suggestions (Baker & Inventado, 2014). Studies show that students frequently accept AI recommendations without questioning their alignment with personal learning objectives, particularly when presented with apparently authoritative algorithmic guidance.
3.2 Control over Content
Mechanisms of Agency Reduction:
AI content recommendation systems can limit learner autonomy through biased filtering algorithms that prioritize popular or commercially aligned sources. Generative AI tools present particular challenges by producing polished content without clear provenance, potentially obscuring the diversity of perspectives and sources that inform learning. Automated curation systems may inadvertently reduce exposure to alternative or critical materials by optimizing for engagement metrics rather than intellectual breadth.
Concrete Example:
A student asks an AI assistant to summarize research on climate change. The system generates a seemingly comprehensive summary without indicating that it drew primarily from recent publications in high-impact journals, potentially missing important alternative perspectives or critical analyses from other disciplinary approaches. The student accepts this as "the definitive source," losing awareness of the broader scholarly conversation.
Evidence Base:
UNESCO's AI in education guidelines emphasize the critical importance of transparency and source attribution in AI-generated content (UNESCO, 2021). Learning analytics literature demonstrates that when content curation is administrator-focused rather than student-facing, learners experience reduced control over their information resources (Gašević et al., 2015).
3.3 Control over Actions
Mechanisms of Agency Reduction:
Perhaps most concerning is AI's potential to automate tasks that traditionally require human decision-making and strategic thinking. When AI systems complete assignments, generate responses, or execute learning activities at the click of a button, students lose opportunities to practice deliberate choice-making and develop strategic approaches to problem-solving. This creates what researchers term "automation bias"—the tendency to over-rely on automated systems even when human judgment would be superior.
Concrete Example:
An AI-powered writing assistant generates a complete essay response to a prompt. The student makes minor edits and submits the work, believing they have engaged meaningfully with the assignment. However, the essential cognitive work of organizing ideas, developing arguments, and making rhetorical choices has been outsourced to the AI system, eliminating the very thinking processes the assignment was designed to develop.
Evidence Base:
Research on AI dialogue and tutoring systems reveals that excessive assistance can lead to learned helplessness and reduced problem-solving initiative (Aleven et al., 2003). Studies demonstrate that "help" becomes counterproductive when it eliminates the productive struggle necessary for deep learning.
3.4 Control over Strategies
Mechanisms of Agency Reduction:
AI systems may inadvertently homogenize learning strategies by recommending "optimal" approaches based on aggregate performance data. This algorithmic standardization can reduce the diversity of learning approaches and discourage experimentation with alternative methodologies. When systems optimize for short-term measurable outcomes, they may prioritize strategies that yield quick results over approaches that develop deeper understanding or creative thinking.
Evidence Base:
Research on adaptive learning systems suggests that algorithmic optimization often converges on strategies that work for average learners while potentially disadvantaging students with alternative learning preferences or cultural approaches to knowledge construction (Baker & Inventado, 2014).
3.5 Opportunities for Reflection
Mechanisms of Agency Reduction:
AI systems can reduce metacognitive reflection opportunities through several pathways. Immediate feedback and correction can eliminate the productive uncertainty that motivates deeper thinking. When AI provides solutions without requiring justification or explanation of reasoning processes, students miss opportunities to develop metacognitive awareness. The speed and efficiency of AI interactions may reduce the natural pauses that allow for contemplation and self-assessment.
Evidence Base:
Learning analytics research demonstrates that reflection-promoting features must be explicitly designed into educational technologies; they do not emerge naturally from data collection and feedback systems (Winne, 2017).
3.6 Opportunities for Monitoring
Mechanisms of Agency Reduction:
Many AI educational systems collect extensive data about student behavior but present this information primarily to administrators or teachers rather than students themselves. When monitoring data is opaque or incomprehensible to learners, they lose the capacity for informed self-regulation. Systems that evaluate performance without explaining the basis for judgments prevent students from understanding and improving their learning processes.
Evidence Base:
Research consistently shows that student-facing learning analytics dashboards can enhance self-regulation and agency, while administrator-oriented systems may actually reduce student autonomy (Jivet et al., 2017).
4. Cross-Cutting Issues: Systemic Threats to Agency
4.1 Automation Bias and Dependency
The tendency to uncritically accept AI recommendations represents a fundamental threat to learner agency. Research from cognitive science demonstrates that humans exhibit systematic biases toward trusting automated systems, even when human judgment would be superior (Parasuraman & Riley, 1997).
4.2 Opacity and Insufficient Explainability
When AI systems cannot explain their recommendations or decision-making processes, students lose the ability to evaluate, critique, or learn from these interactions. This opacity prevents the development of AI literacy skills necessary for maintaining agency in increasingly AI-mediated environments.
4.3 Bias and Inequality
AI systems may perpetuate or amplify existing educational inequalities by encoding historical biases in their training data or optimization functions. When algorithms penalize certain cultural communication styles or learning approaches, they effectively limit authentic choices for affected student populations.
4.4 Privacy and Surveillance
Extensive data collection by AI systems may create surveillance effects that inhibit authentic expression and experimentation. Students may self-censor or conform to perceived algorithmic preferences, reducing the intellectual risk-taking essential for genuine learning.
4.5 Skill Erosion (Deskilling)
Repeated reliance on AI for tasks like writing, calculation, or information synthesis may lead to atrophy of these capabilities. This "deskilling" effect reduces students' capacity for independent intellectual work, creating long-term dependence on technological support.
5. Framework for Preserving and Enhancing Learner Agency
5.1 Design Principles
Based on our analysis and existing research, we propose ten core principles for AI educational design that preserves learner agency:
1. Human-in-the-Loop: Ensure final decisions remain with human learners, particularly in formative learning contexts
2. Minimum Explainability: Provide at least a brief explanation and a confidence estimate for every AI recommendation (see the sketch after this list)
3. Suggest, Don't Impose: Offer multiple alternatives with clear trade-offs rather than single "optimal" paths
4. Provenance and Authorship: Clearly indicate sources and estimated AI contribution in all generated content
5. Student-Facing Analytics: Design dashboards that are comprehensible and actionable for learners themselves
6. Data Control and Consent: Enable students to view, export, and delete their data where technically feasible
7. AI Literacy Integration: Include critical AI evaluation skills in curriculum design
8. Reversibility Mechanisms: Allow students to undo and review AI-mediated decisions
9. Equity Auditing: Regularly assess differential impacts across demographic groups
10. Authentic Assessment: Emphasize demonstrations of skills that cannot be easily automated
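To make principles 2 through 4 more concrete, the following is a minimal sketch, assuming a Python implementation with hypothetical names, of a recommendation payload that carries a brief explanation, a confidence estimate, and several alternative pathways with explicit trade-offs, leaving the final choice with the learner rather than the system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PathwayOption:
    """One of several suggested learning pathways; never a single 'optimal' answer."""
    label: str             # e.g., "breadth-first review of all modules"
    rationale: str         # brief, learner-readable explanation (principle 2)
    trade_offs: str        # what the learner gives up by choosing this option (principle 3)
    estimated_hours: float

@dataclass
class Recommendation:
    """A suggestion the learner can inspect, modify, or reject (principle 1)."""
    learner_goal: str                  # the goal as stated by the learner, not the system
    options: List[PathwayOption]       # multiple reasoned pathways with trade-offs
    confidence: float                  # rough 0-1 estimate of how well the data supports the suggestion
    data_basis: str                    # plain-language note on what data informed the suggestion
    sources: List[str] = field(default_factory=list)  # provenance of any generated content (principle 4)

    def requires_learner_choice(self) -> bool:
        # The system never auto-selects; the learner must accept, edit, or decline an option.
        return True
```

The point of such a schema is that the explanation, the confidence estimate, and the alternatives travel with every suggestion, so the interface cannot present a single unexplained "optimal path."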
5.2 Practical Countermeasures by Agency Dimension
For Goal Control:
- Implement interface features that allow students to modify AI-suggested objectives
- Provide transparency about the criteria underlying goal recommendations
- Design systems that propose multiple reasoned pathways rather than single solutions
- Include goal negotiation as an explicit curricular skill
For Content Control:
- Mandate source attribution for all AI-generated materials (a provenance sketch follows this list)
- Implement "edit, don't replace" modes that require human revision of AI outputs
- Maintain human editorial oversight to ensure resource diversity
- Teach critical evaluation of AI-produced content as a core literacy skill
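As one way of operationalizing source attribution, the following is a minimal sketch with hypothetical field names of a provenance record attached to AI-generated material; attribution is stored in a student-facing form rather than as administrator-only metadata.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class SourceAttribution:
    """One source that informed a piece of AI-generated content."""
    reference: str   # citation or URL
    note: str = ""   # e.g., "recent high-impact journals only", making the filtering visible

@dataclass
class GeneratedContent:
    """AI output packaged with provenance the student can inspect."""
    text: str
    generated_by: str                            # model or tool identifier
    generated_at: datetime
    attributions: List[SourceAttribution] = field(default_factory=list)
    coverage_note: str = ""                      # plain-language statement of what is not covered

    def attribution_summary(self) -> str:
        """Student-facing summary of where the content came from."""
        lines = [f"Generated by {self.generated_by} on {self.generated_at:%Y-%m-%d}"]
        if self.attributions:
            lines += [f"- {a.reference} ({a.note})" if a.note else f"- {a.reference}"
                      for a in self.attributions]
        else:
            lines.append("- no sources recorded")
        if self.coverage_note:
            lines.append(f"Coverage note: {self.coverage_note}")
        return "\n".join(lines)
```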
For Action Control:
- Design interaction flows that require meaningful human intervention
- Position AI as a prototyping tool rather than a final executor
- Include assessment criteria that value revision processes and reasoning
- Create assignments that inherently require human judgment and cannot be automated
For Strategy Control:
- Present multiple learning pathways with explicit trade-offs
- Value strategic choice and process documentation in evaluation
- Create institutional incentives for methodological experimentation
- Design "safe failure" spaces where alternative approaches can be tested
For Reflection Opportunities:
- Integrate reflective prompts into every AI interaction
- Require authorship statements explaining AI use in assignments
- Design dashboards that display learning processes, not just outcomes
- Build in deliberate pauses that resist the immediacy bias of AI systems
For Monitoring Opportunities:
- Create student-facing analytics with actionable explanations (a data sketch follows this list)
- Enable data export and ownership features
- Teach data literacy skills for interpreting learning metrics
- Design transparent evaluation criteria that students can understand and challenge
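The following is a minimal sketch, with hypothetical names, of a student-facing progress record and a simple export function; each entry pairs a metric with a plain-language explanation and a suggested action, and the export supports the data-ownership countermeasure above.

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class ProgressEntry:
    """One student-facing progress record."""
    metric: str            # e.g., "goal modifications this term"
    value: float
    explanation: str       # how the metric was computed and why it matters
    suggested_action: str  # actionable, learner-oriented next step

def export_learner_data(entries: List[ProgressEntry], path: str) -> None:
    """Write the learner's own records to a file they control.

    A bare-bones JSON dump; consent, deletion, and access-control workflows
    would need to sit around this in a real system.
    """
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(e) for e in entries], f, indent=2)
```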
6. Evaluation Framework: Measuring Impact on Agency
6.1 Quantitative Indicators
Systematic evaluation of AI's impact on learner agency should include measurable indicators such as:
- Interaction Patterns: Frequency of AI use, time spent on human review of AI outputs, number of edits made to AI-generated content (one such indicator is sketched after this list)
- Performance Metrics: Comparative performance on tasks requiring non-automatable skills
- Behavioral Analytics: Analysis of decision-making patterns and strategic choices over time
- Engagement Measures: Time spent on reflective activities, frequency of goal modification, diversity of learning strategies employed
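As an illustration of the "edits made to AI-generated content" indicator, the following is a rough sketch (hypothetical function names, illustrative threshold) that estimates how much of a submission differs from the AI draft it started from.

```python
import difflib
from typing import List

def human_edit_ratio(ai_draft: str, submitted_text: str) -> float:
    """Fraction of the submission that differs from the AI draft.

    0.0 means the draft was submitted unchanged, 1.0 means it was fully
    rewritten. A crude proxy; real use would need better text alignment
    and clear consent about what is being measured.
    """
    similarity = difflib.SequenceMatcher(None, ai_draft, submitted_text).ratio()
    return 1.0 - similarity

def flag_low_editing(recent_ratios: List[float], threshold: float = 0.1) -> bool:
    """Warning sign from Section 6.4: a run of submissions with almost no human modification."""
    return bool(recent_ratios) and sum(recent_ratios) / len(recent_ratios) < threshold
```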
6.2 Qualitative Indicators
Complementary qualitative measures should assess:
- Self-Reported Agency: Student perceptions of control and autonomy in learning processes
- Decision-Making Processes: Interview data about how students evaluate and use AI recommendations
- Portfolio Analysis: Examination of learning artifacts for evidence of human reasoning and creativity
- Metacognitive Awareness: Assessment of students' understanding of their own learning processes
6.3 Research Design Considerations
Robust evaluation requires:
- Mixed-Methods Approaches: Combining quantitative metrics with qualitative insights
- Longitudinal Studies: Tracking changes in agency over extended periods
- Control Group Comparisons: Studying learning with and without AI support
- Contextual Sensitivity: Recognizing that agency impacts may vary across disciplines and learning contexts
6.4 Warning Signs
Educators and researchers should monitor for indicators that suggest AI may be undermining agency:
- Reduced Human Editing: Increasing submission of AI-generated content without meaningful human modification
- Declining Reflection: Decreased engagement with metacognitive activities and self-assessment
- Homogenization: Convergence of student work styles and approaches (a rough measure is sketched after this list)
- Authorship Confusion: Uncertainty about the relative contributions of human and AI input
- Strategic Atrophy: Reduced experimentation with alternative learning approaches
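Homogenization can likewise be approximated in rough terms. The sketch below (hypothetical name, illustrative method) computes the average pairwise similarity across a set of submissions; a rising value over successive assignments is one possible signal of converging work styles, not a judgment in itself.

```python
import difflib
from itertools import combinations
from typing import List

def mean_pairwise_similarity(submissions: List[str]) -> float:
    """Average textual similarity across a set of student submissions.

    Returns a value in [0, 1]; higher values mean the texts are more alike.
    This says nothing about causes and always needs human interpretation.
    """
    pairs = list(combinations(submissions, 2))
    if not pairs:
        return 0.0
    sims = [difflib.SequenceMatcher(None, a, b).ratio() for a, b in pairs]
    return sum(sims) / len(sims)
```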
7. Discussion and Implications
7.1 The Paradox of AI and Agency
Our analysis reveals a fundamental paradox in AI educational applications: while these systems have the potential to enhance learner agency by providing personalized support and freeing cognitive resources for higher-order thinking, they simultaneously risk undermining the very decision-making processes that constitute agency. This paradox suggests that the relationship between AI and learner autonomy is not predetermined but depends critically on design choices, implementation strategies, and pedagogical integration.
7.2 Beyond Individual Agency: Systemic Considerations
The preservation of learner agency in AI-enhanced environments requires attention to systemic factors beyond individual tool design. Institutional cultures that emphasize efficiency over deep learning, assessment systems that reward compliance over creativity, and economic pressures that favor scalable solutions may all work against agency-preserving AI implementation. Addressing these challenges requires coordinated effort across multiple levels of educational systems.
7.3 The Evolution of Human-AI Collaboration
Rather than viewing AI as a replacement for human capabilities, our framework positions AI as a collaborator that should enhance rather than substitute for human judgment. This requires a fundamental shift in how we conceptualize the role of technology in education—from automation tool to thinking partner. Such partnerships require new forms of literacy that enable learners to effectively collaborate with AI while maintaining their autonomous decision-making capacity.
7.4 Cultural and Contextual Considerations
The impact of AI on learner agency may vary significantly across cultural contexts, disciplinary domains, and individual learning preferences. What constitutes appropriate autonomy and decision-making authority varies across cultures, suggesting that one-size-fits-all approaches to AI integration may inadvertently privilege certain cultural perspectives while marginalizing others. Future research should examine how agency-preserving AI design principles may need adaptation across different educational contexts.
8. Limitations and Future Research
This paper's analysis is based primarily on emerging research and theoretical frameworks, as the rapid pace of AI development means that long-term empirical studies are not yet available. Future research should address several key areas:
- Longitudinal Impact Studies: Extended research tracking the long-term effects of AI use on student learning and agency development
- Cross-Cultural Validation: Testing of agency frameworks across different cultural and educational contexts
- Discipline-Specific Analysis: Examination of how AI impacts agency differently across various academic domains
- Developmental Considerations: Investigation of how AI effects vary across different age groups and developmental stages
- Intervention Effectiveness: Empirical testing of proposed countermeasures and design principles
9. Conclusion
The integration of artificial intelligence into educational environments represents both a significant opportunity and a substantial risk for learner agency. Our analysis demonstrates that AI systems can enhance student autonomy when intentionally designed to preserve human decision-making and promote metacognitive reflection. However, poorly implemented AI risks creating dependency, reducing critical thinking opportunities, and homogenizing learning experiences.
The framework presented in this paper offers practical guidance for educators, designers, and policymakers seeking to harness AI's benefits while preserving the essential human elements of learning. By attending to the six dimensions of agency—control over goals, content, actions, strategies, reflection, and monitoring—we can ensure that AI serves as a tool for empowerment rather than replacement of human judgment.
The key question for the future of AI in education is not whether these technologies will transform learning environments—they already are. Instead, the critical question is whether these transformations will enhance or diminish human agency. The answer to that question depends on the choices we make today about how to design, implement, and evaluate AI educational systems.
As we move forward, it is essential to remember that the goal of education is not efficiency or optimization, but the development of thoughtful, autonomous individuals capable of navigating an increasingly complex world. AI can support this goal, but only if we remain vigilant about preserving the human agency that lies at the heart of meaningful learning.
References
UNESCO. (2021). AI and education: Guidance for policy-makers. UNESCO Publishing.