The Most Important Question Educators Can Ask about AI
Before all else, we need to center our values
In my workshops and speaking events at universities and schools, audiences regularly ask good questions about generative AI: What are some ethical considerations for teaching with AI? If it isn’t possible to identify AI-generated text with any real confidence, what do I do? How do we help students develop a sense of when it is worthwhile to use AI, and when it isn’t? How can I provide personalized learning experiences for my students using AI? I’m already so busy; how can I even start to address AI without totally overhauling my courses? These questions are born out of curiosity about the technology and how it might impact our students and our assignments. However, I think there is a prior, much more fundamental, question we need to ask ourselves as educators:
How can I use AI to teach what is most important to me and my field?
This question is fundamental because it focuses on core values, a sense of purpose, and crucial processes and concepts that drive inquiry in our disciplines and professions. To direct the future of teaching and learning, we educators of all stripes need to get crystal clear about our purpose. This sense of purpose should drive how we think about and engage with AI (or not). Otherwise, the technology tail will wag the pedagogical dog.
When I ask, “How can I use AI?” I don’t necessarily mean that all educators should bring ChatGPT into the classroom explicitly, although that may be part of the answer for many of us much of the time. Rather, I am pointing to AI as a technological and cultural phenomenon that looks poised to catalyze change across every educational sector. Our students are already asking, What will I be expected to know about AI when I get a job? Or, Why do I need to know this, and why do I need to learn it from you, when I can just ask ChatGPT? Answers like “AI hallucinates” or “it’s bad for the environment” are true as far as they go, but for most of us, and our students, these answers don’t get at the fundamental question unless our work is devoted to, say, the nature of truth or ecological justice.
The way we answer the fundamental question may well sidestep AI: I want you to learn a particular mode of inquiry that is essential for doing science, or This field requires slow, careful, deliberate attention to the power of language, or It’s crucial that you know how to make stuff before you let a tool do it for you.
The way we answer the fundamental question may also engage AI: We’ll use this tool to learn how to ask really good questions about social organizations, or AI will speed up our technical process so we can focus more energy on problem-solving, or AI will let you practice in a low-stakes environment so you can be more present with real people when the time comes.
Let me use my own area as an example. I run a writing across the curriculum (WAC) program and writing center at a large research university in the Southeastern United States. I also conduct research into effective writing pedagogy and WAC program administration. When I ask myself about what is most important to me and my field, I instantly think of my core identity as a teacher of teachers. My teaching, administration, and writing—including this newsletter—are ultimately undergirded by my deep and abiding desire to help colleagues across disciplines become better teachers. Of course, my specific focus is on helping them teach about and with writing. But the interpersonal dialogue, problem-solving, and intellectually rich discussions of teaching are what drive me.
Therefore, when I think about the second part of the fundamental question, I ask myself, how can AI further my goal of teaching teachers how to teach (with) writing, without sacrificing meaningful conversations about teaching? In response, I have decided to use this moment as an opportunity to:
Encourage faculty to refocus their attention on being as transparent as possible with students about why their subjects and assignments matter
Remind faculty that they already have conceptual tools they can use to understand generative AI and its impact on their fields
Turn generative AI from a tool for producing text, images, and sound into an object of inquiry itself (which will be the subject of a future post)
All these techniques turn our attention to the intellectual core and underlying values of our work.
Now, faculty could go to ChatGPT to learn about writing and writing pedagogy. My previous post demonstrated as much. AI-driven professional development could put me out of a job. In a worst-case AI future, there may not even be an audience for that professional development because there would be no faculty. From where I sit, the only way to stave off such a future is to render my “why” as clear and direct as possible, convince others that that “why” ought to be of value to them, and make sure they understand why I (or other people like me) should be central to that work, with or without AI.
The same goes for all of us and our respective fields: if we cannot articulate our field’s value in the age of generative AI and convince others of it as well, then we’re much more likely to succumb to a logic of efficiency. This is why the question that leads this post is so fundamental to me. What is most important to us and our fields? How can we reaffirm those driving principles within the context of a world with prevalent AI?
Hi Chris! I'm excited to find your substack and reengage with you here. My (small, overworked) English dept. has really been spinning its wheels to get a handle on AI, and your thoughts have been really helpful to me. I especially loved this post about centering and articulating the values of our field.
One thought that occurred to me as a value proposition to students had to do with the power of examples as a learning tool. LLMs are trained on, essentially, millions of examples, which is why they can reproduce the form of genre-specific writing, even if the content is a hodge-podge of clichés, platitudes, and generalizations. The reason GPT can write a better memo than some of my professional writing students is simply that it has digested many more examples of a memo than the average 18- or 19-year-old college student. But humans also learn much more quickly, requiring far fewer examples than an LLM to learn a genre and internalize its expectations. One of the changes I'm going to make to my teaching, especially in a genre-focused class like professional writing, is simply to provide more good examples than I have in the past, to make students feel more empowered and less likely to turn to an automated text generator.
Also, a quick book recommendation for everybody in the comments: I just started reading Literary Theory for Robots: How Computers Learned to Write, which was released earlier this year, and it's quite good and accessible.
This is an inspiring (albeit occasionally scary) read. I am inspired, Chris, because your post gave me a better handle on how to engage my students (and fellow faculty) on the issue of AI (non) integration and use. I've been trying to get to the why question, and you helped me think through that better. I'm going to take it a step further and let students ask and answer that question in relation to why they're in college and why they want to learn to write (and all the skills and knowledge it embodies). And I was a bit scared because I think there is a significant chance that academia and society will adopt this technology (like it did social media) without thinking it through in the ways you've been advocating for and showing us. But I guess all we can do is the best we can. Thank you for another thought-provoking post.