Generative AI as Pedagogical Machine
What is AI trying to teach us about teaching, learning, and writing?
Artificial Intelligence (AI) is always trying to teach us something. This is most obvious when we use it in classrooms, workshops, or self-paced learning environments, but it’s true pretty much all the time. The big hitters (ChatGPT, Gemini, Claude, Pi), their research siblings (Elicit, Research Rabbit), and their ed tech cousins (Grammarly Go, Packback) are pedagogical machines: digital tools that teach us certain ways of understanding and interacting with the world.
Here on Prose and Processors, I plan to start a new series on AI’s pedagogies. In each post, I will examine a specific platform and tease out how a pedagogical mindset can help us understand what it is trying to teach us. In this post, I want to lay a foundation that explains what I am getting at when I call AI pedagogical machines.
Image created using Adobe Express.
What is pedagogy?
Pedagogy is the theory and practice of teaching: a systematic explanation of how best to teach a particular subject, but one that is often tentative because it must be responsive to the reality of on-the-ground experiences and material circumstances.
Pedagogical theory will often include both a “how”—a description of practice—and a “why”—a justification or motivation for that practice. For instance, in a writing classroom, I will probably assign drafts (“how”) because I understand writing as a process (“why”). However, because my pedagogy is to some extent tentative, its theory remains open to refinement based on practice. If I notice that my students don’t often revise their drafts, then I may decide to break the process down into smaller write-feedback-revise cycles (“how”) to help them gain a more refined grasp of writing processes as social and iterative (“why”).
Other on-the-ground realities, such as curricular outcomes, accreditation standards, and assessment requirements, can shape how an educator understands and implements a pedagogical theory. In some instances, such elements might even become the pedagogy—think, “teaching to the test.”
Pedagogy is also material and embodied. It doesn’t just live in our heads. We and our students are whole people, and our pedagogies are bound up with what we say and do as much as what we think and feel. In a commonplace example, the teacher who lectures at a podium in the front of the room enacts a different pedagogy from the one who arranges the desks in a circle, their own included. Available technologies, class size, textbook quality, LMSs, and a host of other material factors also impact pedagogy.
Pedagogy is also ethical: it entails value judgments about right and wrong, better and worse, effective and ineffective, harmful and beneficial. For example, many of my educator friends and colleagues regularly contend with the ethics of attendance policies or late work. They feel keenly the relationship, and often conflict, between responsibility and accountability, on the one hand, and compassion and understanding, on the other. Each decision is a vote for one set of ethical principles or another.
This ethical dimension also points to pedagogy’s ideological dimension. To paraphrase James Berlin, ideology involves what exists, what is good, and what is possible. To take just one example, our pedagogy might imply that systemic injustice exists or does not (existence); incorporate historically marginalized perspectives or uphold a canon (goodness); and seek to right injustice or preserve the status quo (possibility).
Pedagogy certainly has other dimensions, but its theoretical, practical, material, embodied, ethical, and ideological dimensions offer plentiful fruit for thinking through teaching and technology alike.
How does pedagogy apply to AI?
My assumption in setting up this theory is that all generative AI platforms can be understood as having an implicit or explicit pedagogy. General-purpose tools like ChatGPT may not be purpose-built with teaching and learning in mind, but their interface and output may still lead teachers and learners to interact with them in some ways and not others; therefore, they have an implicit pedagogy. Others, like Packback, are built specifically for classrooms, so their pedagogy is much more explicit.
Regardless of where a platform sits on the implicit-explicit spectrum, it is designed by people to be used by people, meaning the code, algorithms, interface, and output do not wholly determine how the pedagogy plays out. Nevertheless, the platform will suggest certain preferred pedagogies. My goal will be to explore some of those preferred pedagogies through my own interactions with each platform. I will ask:
What does this platform ask me to do, pedagogically speaking?
Why does it ask me to do it?
What does this platform suggest about my material circumstances?
What does this platform ask me to do physically with my eyes, ears, hands, etc.?
What value judgments does this platform suggest or ignore? What value judgments does it suggest I make?
What do the above questions suggest, pedagogically speaking, about the real, the good, and the possible?
My hope is that these explorations will help readers decide which generative AI platforms to reject (if their pedagogies are at odds with their own), which ones to use (if their pedagogies are complementary), and how to augment those tools with other teaching techniques (to better account for the platforms’ limitations).
I hope you’ll stay tuned for the first post of the series, which will focus on Packback’s Writing Lab.