Usually, when I attend or deliver faculty development workshops about artificial intelligence (AI), participants seem to fall into one of three camps: seekers, embracers, and rejectors.
The seekers typically want to learn more about AI and how it might impact their teaching. These individuals may have opinions, but they also seem to recognize that their opinions are only partially informed, usually by popular discourse and hall talk. This is not to say that they are ignorant or uncritical. In fact, just the opposite: they seek out additional information, ideas, and experiences before settling on a perspective.
The embracers rightly foresee a world in which AI is ubiquitous and a general AI literacy is expected as a matter of course for college graduates. They spend their time imagining innovative ways AI can bolster productivity, analysis, design, creativity, and so on. At their most visionary, they foresee AI amplifying human capabilities to such a degree that many of our global challenges, such as climate change, will be resolved. For the embracers, any time we might spend teaching students how to use AI contributes to those larger goals.
The rejectors are the inverse of the embracers. While they may be willing to admit that AI will be ubiquitous, they want neither to use it nor to teach about it. Their reasons are many. One colleague sees generative AI as theft of the intellectual property of millions of writers and artists whose work is on the internet. Another is deeply concerned about the water and energy needed to run these systems in a world facing rising temperatures and water shortages. Still others think that AI will render most intellectual work obsolete. Why abet any of these current or future realities?
I try to thread the needle among these perspectives with a fourth stance: the engager. As an engager, I think we can examine and even use AI while remaining cognizant of the problems it poses and the power imbalances it might maintain. Engagement borrows from all three of the other perspectives. From the seekers, I take an openness to learning and a forestalling of final opinions about AI. From the embracers, I accept the argument that truly understanding AI requires at least some time experimenting with it and learning how it works. From the rejectors, I recognize the actual and potential harms the technology poses.
Image courtesy of Adobe.
Partly, I have to occupy this middle ground because my faculty development and research bring me into contact with people of many different backgrounds and perspectives on AI, and I need to be able to speak to the concerns and questions of as many of them as possible. But I also find that engagement offers a critically pragmatic approach to AI. Let me give an example of what this looks like in practice.
I am not an artist or graphic designer, so visual tools like DALL-E have the potential to amplify my work and give it some visual flair, or so the embracers would say. However, the rejectors would likely retort that most of these tools rely on artists’ unpaid labor and should not be used. I largely agree, so in most cases, when I want to produce an image for this Substack, I either rely on stock images (no AI, as in the above image) or turn to Adobe’s AI image generator, which was trained only on Adobe stock images, meaning the artists have already been compensated for their work.
The exception to this personal rule is if I am trying to make a point about, say, DALL-E itself, turning it into an object of inquiry. In those cases, actual engagement with the tool helps me learn something critical about how it works for a user. If my use helps someone else develop an appropriately informed and skeptical opinion about the technology, I consider that worthwhile.
To me, the answer is never “use AI in every case,” nor is it “never use AI.” Instead, it’s “use AI in moderation,” which means asking: “What will I gain and lose by using AI in this case? What might others gain or lose? Is it truly necessary?” If we can ask these questions of ourselves and our students, we might take one step towards a world in which AI is shaped and constrained toward genuinely beneficial uses and its harms are minimized.