Defining the "Critical" in Critical AI Literacy
How can we amplify benefits and mitigate harms?
Literacy and AI are human endeavors. No matter the technological advancement (from the codex to the Kindle; from rudimentary chatbots to fine-tuned algorithmic assistants), people ultimately imagine, design, create, test, revise, edit, refine, apply, distribute, and use them. And as we know, humans are flawed, subject to all kinds of psychological biases, political orientations, and sociocultural positionalities. We can attempt to put checks on our skewed realities via mechanisms like evaluation, accountability, and peer review, but pure objectivity is an ever-receding horizon.
Kate Vieira and her many co-authors have defined literacy as “a sociohistoric phenomenon with the potential to liberate and oppress.” Literacy can be liberatory when readers and writers gain agency over their individual wellbeing and their communities’ social and economic futures. It can be oppressive when it is systematically withheld or used to regulate—that is, limit—people’s ability to communicate, create, consume, identify, learn, vote, move, and love as they please.
When we understand AI as another literacy technology, then we can also understand that it, too, has the potential to liberate or oppress. This potential stems from the “full stack” of human decisions, from data classification to algorithm creation to interface to use case.
Some of the potentially or actually oppressive effects of AI have received a lot of media play, including white-collar job loss, environmental degradation, and algorithmic bias. Each of these effects can be understood through the lens of literacy. Job loss, for example, is due in part to the fact that generative AI can produce literacy products faster, at scale, and at lower cost. Also, the going phrase, “AI won’t take your job; someone who knows how to use AI will,” posits a set of AI-literate haves and have-nots. Environmental degradation may not seem immediately related to questions of literacy, but land and water rights (to take just one example) hinge on legal documents, including deeds and treaties, that may be inaccessible to some readers, or else those readers may not have access to legal counsel to help them negotiate equitable distribution of resources. And algorithmic bias, of course, is a matter not only of the stereotypes that are apparent in generative outputs, but also of the sometimes-bizarre classification schemes and training decisions made by humans, all of which have linguistic, and thus literacy, components. (Kate Crawford’s Atlas of AI has some startling examples on this topic.)
Perhaps even more unsettling, and less well known, generative AI output has the potential to shape our general dispositions to the world. This potential both includes and surpasses nefarious practices of mis- and disinformation. Last year, Cornell University reported on research conducted by Maurice Jakesch, a graduate student in information science who
asked more than 1,500 participants to write a paragraph answering the question, “Is social media good for society?” People who used an AI writing assistant that was biased for or against social media were twice as likely to write a paragraph agreeing with the assistant, and significantly more likely to say they held the same opinion, compared with people who wrote without AI’s help.
I don’t have access to the complete methodology here, but the implications are startling: built-in algorithmic biases can influence our thoughts and beliefs. Of course, this goes for just about any medium, but the potential effects are amplified with generative AI. Think of it this way (a toy sketch of the loop follows the list):
A platform is intentionally biased towards certain kinds of results.
Users’ results are published on the internet.
Said results are used as training data for other platforms.
Users of those other platforms get similarly skewed results, perhaps subtly enough that human trainers do not see it.
More skewed results continue to circulate and shape our dispositions.
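To make the compounding concrete, here is a deliberately crude simulation sketch of that loop in Python. It is not based on any real platform or on the methodology of the Jakesch study; the size of the built-in lean, the share of assisted users, and the way readers’ dispositions drift are all invented assumptions, chosen only to illustrate how a small designed bias can snowball once skewed output is republished, retrained on, and read.

```python
# A deliberately crude toy model of the loop above. Nothing here is drawn
# from a real platform or study; every number is an invented assumption.

import random

random.seed(0)

def simulate(generations=6, users=1000, design_bias=0.05, assisted_share=0.5):
    """Print how often a population ends up agreeing with a biased assistant.

    design_bias: the platform's built-in lean toward "agree" (step 1 above).
    assisted_share: fraction of users who write with the assistant's help.
    """
    disposition = 0.50   # the population starts evenly split on the question
    data_skew = 0.0      # skew already present in circulating, published text
    for gen in range(1, generations + 1):
        lean = design_bias + data_skew   # an assistant trained on skewed text leans further
        agree = 0
        for _ in range(users):
            p_agree = disposition
            if random.random() < assisted_share:    # this user writes with AI help
                p_agree = min(1.0, p_agree + lean)  # and is nudged toward "agree"
            agree += random.random() < p_agree
        rate = agree / users
        data_skew = rate - 0.50    # steps 2-3: published results become training data
        disposition = rate         # step 5: readers' dispositions drift with the feed
        print(f"generation {gen}: {rate:.0%} of published paragraphs agree")

simulate()
```

Run with these made-up numbers, the agreement rate creeps upward generation after generation even though the designed bias itself never changes—which is exactly the worry the list above describes.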
The geopolitical implications here are certainly unsettling.
Thus, we need to be aware of how AI applications are built and used because the ways they use language can facilitate potential abuses of power.
But what about the liberatory end of the literacy spectrum? Can AI in general, or generative AI specifically, help people live better, freer lives? The general hope seems to be that AI will free people’s time for more meaningful work—that is, more creative, intellectually engaged, or spiritually enriching work. Kate Crawford thinks that prospect is dubious. At the end of Atlas of AI, she argues that, even with a critical mass of ethically trained computer scientists, CEOs, and everyday users, the oppressive effects of AI will not disappear without a corresponding reorganization of the social and economic order. Current AI models and theories, she contends, are infused with militaristic and capitalistic logics that are at odds with democratic uses.
Maybe so. But in the absence of large-scale systemic disruption, what might we do in the meantime? Might we seek out examples of (at least potentially) liberatory approaches to AI? Take, for example, AI for the People, a MacArthur Foundation-funded “public interest Responsible AI Team that focuses on helping to integrate ethical principles into product policy,” with a special focus on racial justice. Their goal is to introduce policy proposals and practices that reduce or eliminate technological harm to underrepresented or minoritized groups, especially Black people. To be sure, eliminating harm does not necessarily equate to the democratization of benefits. And it may be that we won’t be able to “policy” our way out of worst-case economic and environmental scenarios. Still, the work of AI for the People and similar organizations seems like a pretty good start to me.
To go a step further, we might want to start thinking about critical AI literacy as not only a practice of identifying risks and harms, but also one of seeking, naming, and spreading benefits equitably.
One way of doing so might be for AI professionals and ethicists to listen to, and work alongside, communities to identify and build AI applications that serve their needs, whether educational, environmental, or economic. Such applications would require a new, community-engaged design paradigm. The products may still generate a profit for developers, but, ideally, not at the expense of a given community. This approach will not necessarily revolutionize society, but if it increases benefits and reduces harms, that seems like a net positive to me.
All this may strike you as focused primarily on the macrosocial and -economic levels, and that would be true. In future posts, I plan to focus more on critical approaches to functional and rhetorical uses of generative AI. Stay tuned for more!