If you spend time reading about generative AI on LinkedIn, you have probably noticed that much of the discussion focuses on use cases and user advice. People regularly showcase specific prompts and outputs they have used to get a desired or surprising result, or they share one of many frameworks for effective prompting. Others have developed taxonomies of the many generative AI tools on the market. Personally, when I see these taxonomies, I get overwhelmed. How can any one person learn so much?
Rather than succumb to that overwhelm and either default to a big hitter or ignore AI altogether, we can develop what I have come to believe is an essential capacity for approaching any AI technology: critical AI literacy.
I define critical AI literacy as the capacity to navigate generative AI platforms and contexts with attention to the technology’s benefits and risks for individuals, organizations, communities, and cultures, and for their knowledge-making and communication capacities. I plan to flesh out and apply this idea over time in “Prose and Processors.”
This week, I want to focus first on why I chose the term literacy to characterize the perspective we need to take on generative AI. (I’ll focus on the critical component next week.) On the most basic level, literacy means the ability to read and write, and the term typically points to the written word as the primary mode for conveying meaning.
Generative AI, too, hinges on users’ reading and writing. (Actually, I would argue that all AI does, but I will save that argument for another post.) At the simplest level, we must be able to write prompts and read output to achieve whatever goals brought us to AI in the first place. At a more complex level, we must be able to “read” contexts of use to decide whether, when, and how to use generative AI. We must be able to read data and privacy policies (and we should be reading them!). And we need to be able to rewrite output to better match our goals, correct misinformation, and reduce bias.
Moreover, we need not limit our definition of literacy to the written word. For decades, researchers in rhetoric, composition, writing, and literacy studies have pointed to the blurry lines between writing and other modes of communication, such as images, sounds, and code. We “read” and “write” these other modes just as we do seemingly “text-only” genres (which are themselves visual, relying on font size, paragraph layout, and so on). The term “multimodal” has recently entered the public lexicon now that LLMs can read and generate images, text, voices, code, and music, but scholars have been using it for decades to describe genres like websites, graphic novels, memes, and infographics.
In short, generative AI is a literacy technology. The spread of previous literacy technologies, from the printing press to word processing to social media, has contributed to profound social changes, both positive and negative. Generative AI is no different. I plan to explore some of those changes through the lens of critical AI literacy in this newsletter.
Do you agree that AI is principally a literacy technology? What implications does that spark for you?
Great post, Chris. I can think of a couple of additional elements of critical AI literacy that would fit well within your definition of the term:
1) statistical literacy - understanding statistics, and even simply understanding that genAI's output is based on statistics (see the sketch after this list), is critical to understanding what AI tools are and how they operate
2) related to this, bullshit literacy - it's absolutely essential to understand that genAI tools are always, and only, bullshitting, in the technical sense defined by Harry Frankfurt. (That doesn't mean they never produce useful output. Just that even the useful output is bullshit, and it requires humans to give it meaning.)
Understanding that genAI tools only ever parrot, and never think or communicate, is essential to critical AI literacy, imo.
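To make the statistical point concrete, here is a minimal, purely illustrative sketch; the vocabulary and probabilities are made up, and nothing here reflects any real model's API. The idea it demonstrates is just this: an LLM produces text by repeatedly sampling the next word from a probability distribution conditioned on the text so far.

```python
import random

# Toy illustration (hypothetical words and numbers, not a real model):
# text generation as repeated sampling from a next-word distribution.

def next_word_distribution(context: str) -> dict[str, float]:
    # A real LLM computes these probabilities with a neural network
    # trained on huge text corpora; here they are hard-coded.
    if context.endswith("the"):
        return {"bank": 0.4, "river": 0.35, "market": 0.25}
    return {"closed": 0.5, "flooded": 0.3, "rallied": 0.2}

def generate(prompt: str, n_words: int = 2) -> str:
    words = prompt.split()
    for _ in range(n_words):
        dist = next_word_distribution(" ".join(words))
        # Sample in proportion to probability: the "choice" is
        # statistical, with no model of truth or intent behind it.
        next_word = random.choices(list(dist), weights=list(dist.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("I walked along the"))  # e.g. "I walked along the bank closed"
```

Run it a few times and you get fluent-looking but occasionally senseless continuations ("the river rallied"), which is the Frankfurtian point above: the sampling process optimizes plausibility, not truth, and it falls to humans to judge the result.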
Looking forward to future posts!