Why knowing things is still important in the age of AI
Alan Harrison
The inner cloud
On a recent visit to Stonehenge I stood facing the ancient stones, listening to the audio guide:
One of the last prehistoric activities at Stonehenge was the digging around the stone settings of two rings of concentric pits, the so-called Y and Z holes, radiocarbon dated by antlers within them to between 1800 and 1500 BC.
That sentence is rich with information, but makes sense only if you already understand terms like “prehistoric”, “concentric”, “radiocarbon dating”, “antlers” and “BC”. Without that prior knowledge, the words blur into abstraction. You could Google each term, of course, or ask your AI assistant to “summarise” or “explain” this sentence, with variable results. But if you’ve outsourced your knowledge “to the cloud”, you’re left stitching fragments of meaning together without context, indeed without understanding. You will always be playing catch-up with someone who has this knowledge safely stashed in their long-term memory – their own “inner cloud”. To stand inside a story, not just beside it, we need more than access, we need understanding.
Powerful knowledge
This is why knowledge matters. Michael Young describes powerful knowledge as the structured, subject-specific understanding that gives learners access to ideas and ways of thinking beyond their everyday experience (Young, 2008). It’s not just information – it’s a gateway to intellectual liberation. E.D. Hirsch’s concept of cultural literacy echoes this: familiarity with a shared body of knowledge that enables full participation in civic and cultural life (Hirsch, 1987). Without it, learners are excluded not just from comprehension, but from conversation. They lose the ability to interpret, connect, and contribute.
And prior knowledge in a domain is key to the efficient acquisition of further knowledge in the same domain, surpassing even reading proficiency in value. Recht and Leslie’s “baseball study” (1988) showed that pupils with domain-specific knowledge – in baseball – outperformed stronger readers when interpreting a baseball text. Smith et al. (2021) found that prior knowledge consistently improved comprehension across 23 studies. Schema theory (Bartlett, 1932) explains why: we understand new information by linking it to what we already know. Sweller’s Cognitive Load Theory (1998) adds that learning is most efficient when new content connects to existing schemas. Andrew Kemp of training provider TeachHQ (Kemp, 2024) reinforces this: prior knowledge isn’t just recall, it’s the scaffold for thinking, or as David Didau puts it, knowledge is what we think with (Didau, 2014). Tom Sherrington explores this concept further in the context of the classroom in his recent article, Everyone must be thinking to be learning – but what are they thinking *with*? (Sherrington, 2025)
Limiting friction
But what if we only need knowledge in our brains until the interface between humans and AI becomes completely frictionless? Soon we will just ask an assistant to explain, translate, or summarise anything instantly. Looking only slightly further ahead, AI will anticipate our needs and present us with relevant content we can understand, at the point of need, in the mode we prefer (text, audio or video: on a screen, in our ears or on our AR glasses) – so why bother remembering it ourselves?
Because frictionless isn’t seamless. The interface may be smooth, but the thinking still falls to us. We must interpret the AI’s response, judge its accuracy, and connect it to what we already know – assuming we know enough to do so. This is where internal knowledge matters. Having concepts stored in long-term memory allows us to spot nonsense, follow complex arguments, and make creative leaps. It gives us fluency, not just access. And fluency is what makes us persuasive, adaptable, and resilient.
And even the best AI tools have limits. AI strategist Michelle Jamesina (2025) notes that AI lacks persistent memory unless externally scaffolded. Claudius Mbemba, Principal Software Engineer at Seattle tech consultants CapitalTG, explains that even with long context windows, AI struggles with coherence (Mbemba, 2025). And the NSW Department of Education (2024) reminds us that prompting pupils to retrieve knowledge from memory improves encoding and reduces misconceptions – something no AI can do for us. So yes, the interface may become very slick. But if we outsource too much to the cloud, we risk becoming fluent in search but illiterate in comprehension.
The human in the loop
This is the crux of why the human in the loop remains vital. AI can store, retrieve, and recombine information with astonishing fluency. But it lacks intentionality. It doesn’t know which ideas you’re trying to connect, which contradictions you’re wrestling with, or which half-formed insight you’re chasing. It can’t prioritise, contextualise, or emotionally weight information unless you guide it.
Knowledge isn’t just data, it’s direction. When we internalise concepts, we build schema: mental frameworks that help us interpret new information, spot relevance, and discard noise. Without these frameworks, AI’s output becomes a stream of plausible fragments, but someone’s still got to do the thinking, and that someone is still us!
Mbemba explains that even with extended context windows – allowing AI to process thousands of tokens at once – coherence remains a challenge. The model may retain surface-level continuity, but it often loses track of deeper argument structure, shifts in tone, or the evolving intent behind a user’s query. In other words, it can remember what was said, but not always why it mattered.
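The truncation problem can be made concrete with a toy sketch – this is not how any production model manages context, just a minimal illustration (all names and the conversation are hypothetical) of how a fixed-size window lets the most recent request survive while the original intent silently falls out:

```python
# Toy illustration of a fixed-size context window: only the most recent
# words remain "visible", so the opening statement of intent is lost.

def visible_context(conversation, window_size):
    """Return the trailing words a window-limited model can still see."""
    words = " ".join(conversation).split()
    return words[-window_size:]

conversation = [
    "I am planning a visit to Stonehenge and want to understand the Y and Z holes.",
    "Tell me about radiocarbon dating.",
    "And what are antlers used for in dating?",
    "Summarise what we discussed.",
]

seen = " ".join(visible_context(conversation, window_size=15))
# The final request survives, but the first sentence - the *why* behind
# the whole exchange - has been truncated away.
print("Summarise" in seen)   # the latest turn is visible
print("Stonehenge" in seen)  # the original intent is not
```

Real systems mitigate this with summarisation or external memory stores, but the underlying point stands: what scrolls out of the window is gone unless something deliberately scaffolds it back in.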
This is why frictionless access doesn’t equal frictionless understanding. You can ask an AI to explain “radiocarbon dating” or “concentric pits”, but unless you know what you’re trying to understand – and why – the explanation floats untethered. You need to be the one steering the inquiry, evaluating the response, and deciding what to do next. The human in the loop isn’t just a safety feature; it’s the source of meaning. It’s not enough to retrieve facts or follow arguments, we must be able to interrogate them, challenge them, and choose how to respond. This is the terrain of critical thinking.
Critical thinking is critical
Critical thinking has long been championed as a core skill, necessary for growth and competent participation in society. “The unexamined life is not worth living”, claimed Socrates, urging his followers to question everything – even the man himself and his teachings. Kant’s motto “Sapere aude” – “Dare to know” – is a rallying cry for independent reasoning. He argued that maturity comes from using one’s own understanding without reliance on external authority.
In an AI-rich world, this advice warns against outsourcing judgment to algorithms and urges us instead to cultivate the capacity to interrogate, evaluate, and decide. And there is evidence that critical thinking is crucial to the learning process, not just to the life that comes after. Problem-solving – the application of knowledge to new challenges – quite self-evidently requires critical thinking, but research dating back to 1994 shows that it also improves the acquisition of knowledge, not just its application: “The experimental students [who received additional critical thinking instruction] scored significantly higher than the control on a knowledge test, suggesting that ‘knowledge of facts’ as one educational goal and ‘learning to think’ as another, need not conflict, but rather can interact with each other.” (Zohar et al., 1994)
Critical thinking also features strongly in recent predictions of enduring “21st Century Skills”, including in the OECD’s “Future of Education and Skills 2030” report (2021). In this analysis, creativity and critical thinking are not just vital; jobs that require them may also withstand the rise of AI:
Although computers are making inroads into many domains, they are unlikely to replace workers whose jobs involve the creation of new ideas. – OECD 2021
But critical thinking doesn’t operate in a vacuum. It draws on the tools and structures of disciplines, on Foucault’s “gaze”: particular ways of looking, unique to specialisms, “which confer a power on the looker to notice and experience certain things” (Foucault, 1977). The discipline allows us not just to evaluate existing ideas, but to generate new ones. And this is where the limits of outsourcing knowledge become even clearer. An AI-rich world may offer instant access to facts, but it cannot confer disciplinary fluency. It can retrieve knowledge, but it cannot participate in knowledge-making. It has no outward gaze; its 20/20 vision is directed inward at its corpus of training data, drawn from everything already known. To make new knowledge, an agent – whether human or not – would need more than information: it would need initiation into the discipline itself, and this is a truly human endeavour.
Meaning-making
To properly examine this point, let’s step back and explore where knowledge comes from. Substantive knowledge is that knowledge already put into the world by the discipline, “the content that teachers teach as established fact” (Counsell, 2018). It’s the “stuff” which is knowable about the subject, both declarative and procedural. In maths this could be “the internal angles of a triangle sum to 180 degrees” (substantive and declarative) or “to calculate the mean, take the sum and divide by the count” (substantive and procedural).
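The procedural example above can even be written down as code – a minimal sketch, not part of any curriculum document, simply the “take the sum and divide by the count” rule made executable:

```python
def mean(values):
    """Procedural knowledge as code: take the sum and divide by the count."""
    if not values:
        raise ValueError("mean of an empty sequence is undefined")
    return sum(values) / len(values)

print(mean([4, 8, 15, 16, 23, 42]))  # → 18.0
```

The declarative fact (what the mean *is*) and the procedure (how to *compute* it) are both substantive knowledge: established content that can be taught, and indeed encoded.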
Disciplinary knowledge, in contrast, can be understood as knowledge of how participation in a discipline allows us to create meaning, or put new knowledge into the world. “How ideas are presented and developed, as figures, statements, pieces, products and essays, all form part of the disciplinary code” (Ashbee, 2021). In maths this could be constructing a formal proof, modelling a real-world phenomenon with equations, or devising a new algorithm to solve a previously intractable problem. It’s not just knowing that the angles of a triangle sum to 180 degrees, it’s proving why that’s true, exploring when it isn’t (on curved surfaces), or applying that principle to design a navigation system. Disciplinary knowledge is what allows experts to extend the field, challenge assumptions, and generate new insights.
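The “proving why” in the paragraph above can be sketched as the classical Euclidean argument, drawing a line through one vertex parallel to the opposite side (a standard textbook proof, offered here only as an illustration of disciplinary practice):

```latex
% Sketch of the classical proof that a triangle's angles sum to 180 degrees.
% Valid in Euclidean geometry only - it fails on curved surfaces, as noted above.
\begin{proof}
Let $ABC$ be a triangle, and draw the line $\ell$ through $A$ parallel to $BC$.
By alternate interior angles, the angle between $\ell$ and $AB$ equals
$\angle ABC$, and the angle between $\ell$ and $AC$ equals $\angle ACB$.
These two angles, together with $\angle BAC$, make up a straight angle at $A$, so
\[
  \angle ABC + \angle BAC + \angle ACB = 180^\circ .
\]
\end{proof}
```

Note how the proof leans on a prior result (alternate interior angles) and on the parallel postulate: exactly the kind of structured, cumulative participation in a discipline that retrieval alone cannot supply.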
Consider knowledge not just as the known but as the means of knowing: ways of establishing new knowledge within the subject domain. To get philosophical again, this creation of meaning enables Heidegger’s dasein: being-in-the-world, the actualisation of self through interactions with things and people (Heidegger, 1927/1953). This is why knowledge creation is not just technical, but human. As the British Library’s Knowledge is Human paper (2025) argues, the creation and curation of knowledge are fundamentally human acts: rooted in interpretation, context, and care. The report warns that over-reliance on AI-generated content risks eroding trust, distorting context, and weakening the public knowledge ecosystem.
If we outsource our brains to a machine – even an extremely fluent one – we risk denying ourselves this actualised, rich life: the experience of being a knower, not just a consumer of knowledge, but also a contributor to the public noosphere – that collective space of human thought, interpretation, and meaning-making. Over-reliance on AI doesn’t just impoverish individual understanding; it dilutes the shared cognitive commons we all depend on.
Thinking the unthought
This is the difference between retrieval and creation: between having access to knowledge and being able to use it to generate new insights. Disciplinary knowledge is not just about what is known, but about how new knowledge is made. It’s the engine of innovation, interpretation, and critique. And this is precisely where generative AI cannot go. It can remix, rephrase, and reassemble existing text with astonishing fluency, but it cannot perform the discipline. It cannot theorise, hypothesise, or interrogate.
According to Wheelahan (2010), knowledge enables humans to “theorise possibility and think the un-thought”. But LLMs create plausible-sounding output by regurgitating parts of existing text in new ways. They are not in any meaningful sense performing the discipline of science, of historiography, of linguistics, mathematics or computer science. They are not putting new knowledge into the world. Generative AI cannot think the unthought.
Thinking is what makes us human
Taken together, these arguments point to a deeper concern. When we outsource our cognitive labour to machines – even highly fluent ones – we risk more than just forgetting facts. We risk weakening the very capacities that make us human: the ability to think critically, to participate in disciplinary meaning-making, to generate new knowledge rather than merely retrieve old. The human in the loop is not just a technical necessity but an epistemic one. Without internalised knowledge, we cannot evaluate what we’re told. Without disciplinary fluency, we cannot contribute to the ongoing project of knowledge itself. And without the capacity to think the unthought, we cannot actualise ourselves as subjects – as beings who interpret, decide, and create. Over-reliance on AI doesn’t just dull our skills; it diminishes our humanity.
This is why internalised knowledge matters. It’s not just about passing tests – it’s about building the conceptual frameworks that allow us to think critically, creatively, and independently. As Hirsch argued, cultural literacy equips learners with the shared references and background knowledge that make comprehension possible. Without it, even the most fluent AI explanations remain untethered – fragments without a scaffold. And as Young reminds us, powerful knowledge is not simply what we know, but what we can do with it: the ability to transcend our immediate experience and engage with the world through disciplinary lenses. These lenses aren’t just academic tools – they’re gateways to participation, agency, and meaning-making.
Warnings from education research
We’ve been focusing on the practical impact of AI in our everyday lives: the human cost of offloading knowing and remembering onto machines. I enjoyed Stonehenge because I knew stuff that helped me understand the audio guide without the friction of an AI mediator, and with the depth of understanding afforded to me by the schema in my long-term memory. But what if I had acquired that knowledge with the help of AI in the first place? Using AI in education to speed the learning process – to shortcut around the boring stuff, leaving more time for high-value activities, onboarding my schema with the help of AI, if you will – that’s OK, right? The answer is: it depends. Judicious use of AI by teachers, or limited use by pupils, may have value – but over-reliance on AI is not just harmful to our personhood; it threatens educational goals too.
In a systematic review of 14 studies examining how generative AI tools affect students’ cognitive performance, Zhai et al. (2024) showed that students who relied on AI in their studies displayed reduced motivation to engage in independent analysis, increased use of cognitive shortcuts (heuristics) over deep reasoning, diminished ability to evaluate sources and construct arguments, and a higher risk of academic misconduct due to uncritical use of AI outputs. Likewise, Melisa et al. (2025) recognised that ChatGPT had some positive effects – such as facilitating quick access to diverse perspectives and helping students construct arguments – but warned that “over-reliance on AI can hinder students’ motivation for self-reflection and critical evaluation”. It seems the research is telling us that overuse of AI in the learning process can harm development of critical thinking skills.
This is the deeper risk of over-reliance on generative AI in education. If we allow students to bypass the slow, sometimes difficult process of acquiring knowledge, we don’t just weaken their recall – we limit their reach. We deny them access to the very tools that make education emancipatory. Young’s vision of education as a means of social justice depends on giving all learners access to the specialised knowledge that disciplines offer. AI may offer shortcuts, but it cannot substitute for the long journey of becoming a knower.
This is where Gert Biesta’s work offers a vital lens. He argues that education serves not just to qualify learners with knowledge and skills, or to socialise them into existing norms, but to subjectify – to support the emergence of the person as an autonomous, responsible being (Biesta, 2010). Over-reliance on AI risks narrowing education to qualification alone: efficient, fluent, but hollow. If we want learners to become more than users of information – if we want them to become thinkers, citizens, and creators – then we must preserve space for uncertainty, struggle, and self-formation. AI can assist, but it cannot subjectify. That remains human work.
Opportunities for the brave
That said, when used well, the evidence suggests that AI can support – rather than supplant – the learning process. A systematic review by Garzón et al. (2025) analysed 155 empirical studies and found that AI tools can enhance learning outcomes, personalise instruction, and increase student motivation when thoughtfully integrated into teaching practice. Adaptive systems can offer timely feedback, scaffold complex tasks, and help learners visualise abstract concepts. For students with diverse needs, AI can provide differentiated pathways through content, enabling access and engagement that might otherwise be out of reach. These are not trivial gains – they represent real opportunities to close gaps, accelerate progress, and support inclusion.
Bauer et al. (2025) go further, proposing the ISAR model to describe how AI can augment or even redefine learning tasks. Rather than simply substituting existing instruction, AI can open up new forms of inquiry – simulating historical debates, giving voice to literary characters (Harrison, 2025), modelling scientific systems, or generating multiple perspectives on a text – that deepen understanding and promote metacognition. The key, they argue, lies in implementation. AI’s benefits are not automatic, but contingent on how it is used. When paired with strong pedagogy and critical framing, AI can become a powerful ally in cultivating the very capacities – curiosity, creativity, criticality – that this article defends.
Knowledge is agency
So where does this leave us, as educators? AI is here, and when used thoughtfully, it can be a powerful ally – accelerating access, supporting differentiation, and opening new forms of inquiry. But its presence demands more than enthusiasm. It demands pedagogical adaptation. Simply layering AI onto traditional tasks – like setting essay homework without rethinking purpose or process, all but ensuring student use of LLMs as an “essay mill” – risks undermining the very capacities we aim to cultivate.
The research is clear: over-reliance on generative AI can erode critical thinking, reduce motivation for independent analysis, and weaken students’ ability to evaluate, synthesise, and create. If we want learners to become thoughtful, creative, and discerning participants in the world, we must design learning that invites them to think – with and without AI.
And at the heart of it all, one truth remains: knowing things still matters. Not just for passing tests, but for making sense of the world. For seeing patterns, asking better questions, and standing not just at a monument, but inside a story. Knowledge is not a relic – it’s a lens, a scaffold, a source of agency. In an age of human-like machines, it is what makes us fully human.
References
- Ashbee, R. (2021). Curriculum: Theory, Culture and the Subject Specialisms. Routledge. https://doi.org/10.4324/9781003039594
- Bartlett, F. C. (1932). Remembering: A Study in Experimental and Social Psychology. Cambridge University Press.
- Bauer, E., Greiff, S., Graesser, A. C., Scheiter, K., & Sailer, M. (2025). Looking beyond the hype: Understanding the effects of AI on learning. Educational Psychology Review, 37, Article 45. https://doi.org/10.1007/s10648-025-10020-8
- Biesta, G. (2010). Good Education in an Age of Measurement: Ethics, Politics, Democracy. https://doi.org/10.4324/9780203852767
- British Library. (2025). Knowledge is Human: The Information Ecosystem in the Age of AI. https://www.bl.uk/stories/blogs/posts/knowledge-is-human
- Counsell, C. (2018). Taking curriculum seriously. Impact: Journal of the Chartered College of Teaching. https://my.chartered.college/impact_article/taking-curriculum-seriously/
- Didau, D. (2014). Thinking with and about. Learning Spy blog. https://learningspy.co.uk/featured/thinking/
- Foucault, M. (1977). Discipline and Punish: The Birth of the Prison (A. Sheridan, Trans.). Pantheon Books.
- Garzón, J., Patiño, E., & Marulanda, C. (2025). Systematic review of artificial intelligence in education: Trends, benefits, and challenges. Multimodal Technologies and Interaction, 9(8), 84. https://doi.org/10.3390/mti9080084
- Harrison, A. (2025). Time-saving conversations with LLMs. HWRK Magazine. https://hwrkmagazine.co.uk/time-saving-conversations-with-llms/
- Heidegger, M. (1927/1953). Being and Time (J. Macquarrie & E. Robinson, Trans.). Harper & Row.
- Hirsch, E. D. (1987). Cultural Literacy: What Every American Needs to Know. Houghton Mifflin.
- Jamesina, M. (2025). AI memory limitations: Why ChatGPT and Claude APIs forget (+ solutions). Michelle Jamesina Blog. https://michellejamesina.com/ai-memory-limitations-solutions/
- Kemp, A. (2024). The importance of prior knowledge in learning: A guide for educators. TeachHQ blog. https://teachhq.com/article/show/the-importance-of-prior-knowledge-in-learning-a-guide-for-educators
- Mbemba, C. (2025). Overcoming memory limitations in generative AI: Managing context windows effectively. CapitalTG Blog. https://blog.capitaltg.com/overcoming-memory-limitations-in-generative-ai-managing-context-windows-effectively/
- Melisa, R., Ashadi, A., Triastuti, A., Hidayati, S., Salido, A., Ero, P. E. L., Marlini, C., Zefrin., & Fuad, Z. A. (2025). Critical thinking in the age of AI: A systematic review of AI’s effects on higher education. Educational Process: International Journal, 14, e2025031. https://doi.org/10.22521/edupij.2025.14.31
- NSW Department of Education. (2024). Artificial intelligence in education. https://education.nsw.gov.au/teaching-and-learning/education-for-a-changing-world/artificial-intelligence-in-education
- OECD. (2021). Future of Education and Skills 2030: Conceptual Learning Framework. https://www.oecd.org/en/about/projects/future-of-education-and-skills-2030.html
- Recht, D. R., & Leslie, L. (1988). Effect of prior knowledge on good and poor readers’ memory of text. Journal of Educational Psychology, 80(1), 16–20. https://doi.org/10.1037/0022-0663.80.1.16
- Sherrington, T. (2025). Everyone must be thinking to be learning – but what are they thinking *with*? Teacherhead blog. https://teacherhead.com/2025/10/29/everyone-must-be-thinking-to-be-learning-but-what-are-they-thinking-with/
- Smith, R., Snow, P., Serry, T., & Hammond, L. (2021). The role of background knowledge in reading comprehension: A critical review. Reading Psychology, 42(3), 214–240. https://doi.org/10.1080/02702711.2021.1888348
- Sweller, J., van Merrienboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251–296. https://doi.org/10.1023/A:1022193728205
- Wheelahan, L. (2010). Why Knowledge Matters in Curriculum: A Social Realist Argument. Routledge. https://doi.org/10.4324/9780203860236
- Young, M. (2008). Bringing Knowledge Back In: From Social Constructivism to Social Realism in the Sociology of Education. https://doi.org/10.4324/9780203073667
- Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: A systematic review. Smart Learning Environments, 11, 28. https://doi.org/10.1186/s40561-024-00316-7
- Zohar, A., Weinberger, Y., & Tamir, P. (1994). The effect of the biology critical thinking project on the development of critical thinking. Journal of Research in Science Teaching, 31(2), 183–196. https://doi.org/10.1002/tea.3660310208
