I’ve been writing, researching, and teaching about generative AI for nearly three years now, and I fear I have been misunderstood, or perhaps just unclear. Let me explain.
Over the last several months, I have been working on an article with Lydia Wilkes (due out this fall) in which we try to carve out an “unsettled middle” between uncritical AI integration (into courses, curricula, etc.) and AI refusal. I won’t get into the details of that piece—I’ll share it once it’s out—but basically we think this unsettled middle is a great place to engage (ourselves and others) in ethical discernment.
I had the opportunity to summarize this idea of the unsettled middle and ethical discernment during a Q&A session at last week’s International Writing Across the Curriculum Conference. The next day, a colleague told me folks were surprised that I had “changed my position” on AI. I was taken aback. I don’t think I have changed my position, just refined it.
You see, I have always been ambivalent about AI.
I didn’t ask what they thought my position had been, but I suspect they thought of me as somehow “in favor” of AI. And it’s true, I tend to advocate for AI engagement in my work. I think AI can do some good for some writers, researchers, and knowledge workers in some situations. I also think its increasing presence across personal, educational, workplace, and civic domains is a strong reason for teaching our students what AI is and how it works.
At the same time, I have also urged a healthy skepticism and strong critique along the way, given the well-documented biases, environmental damage, political dangers, and cognitive risks it poses. I have always been unsettled by these negative aspects of AI. But for me, being unsettled or being ambivalent does not necessitate disengagement or refusal. One can be troubled by a technology and still engage with it.
What I have been trying to do is trouble what feels increasingly like a binary: either you’re an AI “booster” singing its praises, or you’re an AI critic who will never touch it. I am neither of these.
I am in favor of situated engagements and situated refusals, which depend on rhetorical situations, ethical frameworks and quandaries, classes and curricula, and so on.
I cannot say that every use or non-use of the technology is (un)ethical, (un)critical, or (un)necessary without understanding something about the context.
I suspect I will remain ambivalent (but curious), unsettled (but engaged) for a long time. So long as I do, I will seek to understand the aims, exigencies, circumstances, and ways of thinking that motivate the people I work with to use or refuse AI, and then respond by recommending ways and reasons for using, minimizing, or rejecting the technology as appropriate.
Well said! I'm in the same place as a new WPA, wanting to support discussions on GenAI with students and colleagues, even if those discussions are all about when and why to refuse. Highlighting those points of refusal also offers a chance to underscore the value of the human (touch).
Thank you. The field needs to hear more of these kinds of nuanced approaches. My concern is that thumbnail arguments for refusal get interpreted as a kind of shaming, which does not help us grapple with the numerous problems we face right now as compositionists and, more generally, as members of higher ed.