Issues in Artificial Intelligence in Education: Ethical, Equity, Pedagogical, and Policy Challenges in the Generative AI Era
February 2026
Abstract
The integration of artificial intelligence (AI), particularly generative AI (GenAI), into educational systems has accelerated dramatically since 2022, offering unprecedented opportunities for personalization, automation, and accessibility while simultaneously surfacing profound challenges. As of the 2024–2025 academic year, usage rates reached 85% among teachers and 86% among students in K–12 settings in the United States. This article provides a comprehensive, rigorous review of the core issues confronting AI in education (AIED), drawing on systematic reviews, policy reports, and empirical studies published through early 2026. Key domains examined include ethical dilemmas (privacy, bias, transparency), equity and access disparities, pedagogical impacts on human agency and relationships, threats to academic integrity, and systemic implementation barriers. Recent trends highlight a shift toward human–AI collaboration and multimodal systems, yet persistent gaps in training, regulation, and oversight remain. Anchored in UNESCO’s human-centered frameworks, the analysis underscores the urgent need for rights-based, inclusive policies that preserve learner autonomy and educational equity. Future directions emphasize global dialogue, AI literacy, and hybrid pedagogies to harness AI’s potential without compromising core educational values.
Keywords: AI in education, generative AI, ethical challenges, algorithmic bias, digital equity, academic integrity, human-centered AI, policy frameworks
Introduction
Artificial intelligence has transitioned from a peripheral innovation to a pervasive presence in education. Intelligent tutoring systems, adaptive learning platforms, and GenAI tools such as large language models now support curriculum design, personalized feedback, assessment, and administrative tasks across K–12, higher education, and vocational contexts. The release of accessible GenAI applications in late 2022 catalyzed exponential adoption; by 2025, 86% of education organizations reported using generative AI—the highest rate across industries.
Yet this rapid integration has not been without friction. Systematic reviews identify recurring clusters of challenges: technological limitations, pedagogical misalignment, ethical risks, and systemic inequities. UNESCO’s 2025 anthology AI and the Future of Education: Disruptions, Dilemmas and Directions frames AI as a disruptive force that reshapes not only practices but also underlying assumptions about knowledge, agency, and inclusion. This article synthesizes the most recent evidence (2024–early 2026) to provide scholars, policymakers, and practitioners with a rigorous update on these issues, emphasizing empirical findings and normative implications.
Recent Trends in AI Adoption
Adoption surged in 2024–2025. Teachers primarily use AI for content creation (69%), student engagement (50%), and grading (45%), while students employ it for tutoring (64%) and career advice (49%). Microsoft’s 2025 AI in Education Report documents year-over-year increases in frequent use among educators (up 21 points in the US) and students (up 26 points for school-related tasks). Emerging applications include multimodal learning analytics, LLM-based feedback, and AI-supported special education tools such as individualized education program (IEP) drafting.
Research frontiers have shifted toward human–AI collaboration, generative content co-creation, and ethical governance. Bibliometric analyses reveal a post-2022 pivot in keywords from “machine learning” to “ChatGPT,” “ethics,” and “bias.” International bodies have responded with competency frameworks for teachers and students, guidance on GenAI, and calls for rights-based approaches.
Ethical Challenges
Ethical concerns dominate recent literature. A 2025 systematic review of 53 peer-reviewed articles on GenAI identified data privacy, algorithmic bias, misinformation, loss of cognitive autonomy, and academic plagiarism as primary risks.
Privacy and Data Security
AI systems require vast amounts of student data—behavioral patterns, performance metrics, and personal information—raising compliance challenges under laws such as the U.S. Family Educational Rights and Privacy Act (FERPA) and GDPR-style data-protection regimes. Education ranks among the most targeted sectors for cyberattacks, amplifying breach risks. Institutional misuse of data, such as training models on student work without consent, compounds these vulnerabilities.
Conclusion
As AI permeates education, its issues transcend technical hurdles to touch the essence of teaching and learning: human relationships, effortful cognition, fairness, and agency. The evidence through early 2026 reveals a technology of immense promise shadowed by risks of dehumanization, inequity, and ethical erosion. Realizing its benefits demands deliberate, evidence-based stewardship grounded in human rights and pedagogical wisdom. Policymakers, educators, and developers must collaborate to ensure that AI amplifies human potential rather than diminishing it. Only through sustained critical engagement, robust governance, and a commitment to inclusion can education navigate the disruptions and realize the directions charted by leading international frameworks.
References
(Selected)
- Center for Democracy & Technology (reported via Education Week). (2025). Schools' Embrace of AI Connected to Increased Risks.
- Garzón, J., et al. (2025). Systematic reviews in Computers & Education: Artificial Intelligence and in Multimodal Technologies and Interaction.
- Microsoft Education. (2025). 2025 AI in Education Report.
- Renta-Davids, et al. (2025). Scoping review in Review of Education.
- UNESCO. (2025). AI and the Future of Education: Disruptions, Dilemmas and Directions.
- UNESCO. (2025). AI and Education: Protecting the Rights of Learners.
- Systematic review in Frontiers in Education. (2025).