In keeping with my recent writing about metaphors for AI, I want to introduce a new one: AI as theater.
I have much love for the theater. I credit my time on stage in high school as the single biggest factor in overcoming shyness and becoming more fully the person I am today. In part, theater allowed me to take on identities that were not me via the characters I played. But acting also taught me a lot about myself: that I had a voice, that I could be funny, that some people might find me attractive, that I could be a leader.
What might we learn about AI by thinking of it as a theater? First, let’s think visually, taking ChatGPT as our example. To me, the ChatGPT interface has a clear theater-like quality. Its prompting box rests at the bottom, much like a stage, a space of possibility. The suggested prompts and the central question, “How can I help you today?” are like set pieces, designed to invite you into the world being staged by OpenAI. The white space and the edge of the browser window (or monitor) are like the proscenium, which separates the audience from the stage and frames the action there.
I once had a kissing scene in high school. My girlfriend ran the curtain for that show. She was not happy. I don’t foresee that happening with ChatGPT, though.
As an audience, we see the actors and hear the musicians, but we are usually expected to ignore the other people who played a hand in the production. Rarely do we see the director, stage manager, or the playwright. The technical crew is usually hidden by the dark when they move set pieces, or else they are costumed to be part of the illusion. By the same token, when we are drawn into the stage of ChatGPT, we are supposed to believe that the only players are us (the user) and the bot. We are not supposed to think about the developers, designers, scientists, or business people who make and sell it—let alone the miners or factory workers who contributed to the hardware necessary for sufficient computing power. And yet, all of these “players” had a role in staging our experiences with the platform. Much like a play, ChatGPT is a created thing, designed to have a particular effect on the audience, and infused with so many choices, decisions, errors, oversights, and even creativity.
In short, the interface acts like a proscenium that separates us from everything that happens “behind the scenes” of AI. If, as part of critical AI literacy, we experiment with such metaphors, they might just draw our attention to the interface itself. Exercises in metaphor can help us better understand what we are and are not supposed to experience as users of AI.
For some of us, the experience of using ChatGPT for the first time was alienating. Its speed and its fluency can be unsettling, as can its ability to complete complex tasks. We might go so far as to say that it has an alienation effect (German: Verfremdungseffekt), a theatrical technique popularized by Bertolt Brecht. Basically, Brecht did not want his audiences to over-identify emotionally with his characters. He wanted us to view them and their situations dispassionately so we could analyze them and perhaps learn something, often political, but sometimes existential, from their situation.
Brecht achieved this effect through many techniques, including having his characters address the audience directly, thus breaking what is typically called the “fourth wall,” which is often framed by the proscenium. Perhaps one of the reasons ChatGPT can be so alienating initially is that it breaks the fourth wall of the screen, addressing us directly in ways most other software does not. In keeping with Brecht’s hope for the alienation effect, ChatGPT’s alienation effect has led many of us to treat AI as an object of inquiry, to not get sucked into the reality it stages for us. We may also be confronted with this effect when the platform does not behave reliably, as happened with its major malfunction earlier this week.
However, unlike Brecht, OpenAI, Google, Anthropic, and the other AI players big and small do not want us to rest in that alienated mindset. They don’t really want us to be there at all. To the extent that they want us to pay attention to the interface itself, they want us to be wowed. Otherwise, they want us to have a seamless experience. They want the interface to be transparent. They want the players to be invisible.
Now, I get it. No one wants a hard-to-use platform that malfunctions all the time. It would be silly to sell such a thing. But for that very reason, we (users generally, but educators specifically) need to develop our own AI Verfremdungseffekts so that we can step back from the staged action and view what we’re seeing a bit more dispassionately.
For kicks, I decided to ask ChatGPT to stage a play. Here is the prompt I used:
Write a one-act play. The first character is Sam. Sam is a 30-something senior developer for an AI software company. He is from the Midwestern U.S. His primary motivation is to create a seamless, nearly transparent experience for AI users so they can accomplish their goals quickly and efficiently. He also wants to make a profit. The second character is Bertolt. Bertolt is a 50-something programmer from Germany. He thinks the AI platform should have some interface elements that will alienate users and force them to think critically about AI. He is less interested in money than in the freedom it might represent. Write their dialogue with one another. At least once, have one of them break the fourth wall and address the audience directly. In addition to the dialogue, include a description of the setting, costuming, and stage directions.
Here is what ChatGPT wrote:
**Title: The Interface Dilemma**
**Setting:** The stage is split into two distinct areas representing the offices of Sam and Bertolt within the same AI software company. Sam’s office is sleek, with modern furniture, a large monitor displaying code, and a whiteboard filled with user flow diagrams. Bertolt’s office contrasts sharply, filled with books, papers, and multiple screens showing various AI interface designs. A large digital clock is visible on the back wall, counting down, adding a sense of urgency to the play.
**Costuming:**
- **Sam** wears casual business attire: a crisp, button-down shirt, jeans, and glasses. His outfit reflects a blend of professional and approachable.
- **Bertolt** is in a more relaxed ensemble, with a turtleneck, blazer, and round spectacles, symbolizing his intellectual approach and European flair.
---
**Act I**
*Lights up, revealing Sam typing away at his computer. Bertolt is seen through a virtual window on a screen, suggesting a video call between their offices.*
**Sam:** (Focused, typing) We’re on the brink, Bertolt. This update could redefine how users interact with AI. It’s sleek, intuitive... (He pauses, looking directly at Bertolt) It’s everything the market demands.
**Bertolt:** (Leaning back, thoughtful) Ah, Sam, my friend, but is it what the users truly need? Efficiency, yes, but at what cost? Should we not encourage them to ponder the tool they wield?
**Sam:** (Sighs, leans back) I get it, but we’re running a business, Bertolt. Our goal is to make this as accessible as possible. Profits allow us to innovate, to push boundaries.
**Bertolt:** (Chuckles) And yet, by removing all barriers, we risk creating a tool devoid of soul, a servant without a face. (He picks up a small AI device) This, my friend, is more than code and profit. It's the future of human thought, interaction...
**Sam:** (Interrupts, slightly frustrated) But a tool nobody uses because it’s too complex or intimidating benefits no one, Bertolt. We need balance.
*Bertolt walks to a large sketchpad, begins drawing an interface concept.*
**Bertolt:** Balance, yes. But consider this - (He gestures to the sketch) an interface that dares the user to think, to question. Isn't that the ultimate freedom?
*Sam stands, approaches the digital window, looking intently at the sketch.*
**Sam:** It’s... different. But won’t it push users away? We need to meet them where they are, not where we think they should be.
**Bertolt:** (Pausing, then softly) Sometimes, Sam, to find new lands, one must be willing to lose sight of the shore.
*Sam looks conflicted but intrigued. Bertolt smiles slightly, sensing the shift.*
**Sam:** (Thinking aloud) Maybe there’s a middle ground. A way to introduce your concept without full alienation... A nudge towards critical thinking without sacrificing accessibility.
**Bertolt:** (Nods) A compromise then. Let us find a way to blend our visions, to guide without dictating. (He extends his hand towards the camera) Together?
**Sam:** (Smiles, reaches towards his screen) Together.
*Sam turns to the audience, breaking the fourth wall.*
**Sam:** In the quest for innovation, we often forget that the true path lies in understanding, not just efficiency. Our journey reflects that - a balance between leading and learning.
*Bertolt starts sketching again, this time both are discussing enthusiastically, blending their ideas on the digital canvas between them.*
**Bertolt:** (Looking up, directly at the audience) And perhaps, in questioning, we find our true freedom. Not just in the interfaces we create, but in every choice we make.
*Lights dim as they continue their work, the digital clock stops, symbolizing a moment outside of time - a creative limbo where anything is possible.*
**End of Act I**
*Lights out.*
Well, it certainly won’t win any awards. The conflict isn’t much of one. The direct audience address at the end feels a bit more moralizing than alienating, although the staging and acting could change that. Indeed, the staging DALL-E created for the play certainly adds to the alienating effect.
I wonder if Sam Altman’s office looks like this?
First, the overall architecture looks more brutalist than sleek and modern. The circular diagrams are unsettling—the sort of thing I would imagine in a military installation rather than an AI company. The map on the back monitor is not quite right. The stool in the center blends into the table. What do the nonsensical numbers at the top count? Sam (stage left?) and Bertolt (stage right?) do not actually look at one another. Imagine if the whole play were staged with them in those positions. Their commitment to work together, to find a compromise, would be steeped in irony.
And then, of course, there is the man near center stage. He could be Sam or Bertolt. I prefer to think of him as Claude. His role is to stand stock still, unblinking, staring at the audience. Alienating indeed.
What do you think? Would you come to see my play?
No doubt, I’m having some fun at ChatGPT’s expense. My point is this: we can take up these different metaphors for AI and follow the logic they entail. In doing so, we can reduce our enchantment with the technology, at least a little bit. And the less enchanted we are—and perhaps the more we have fun at its expense—the more likely we are to remember that the technology has benefits and limitations, risks and rewards. Such disenchantment is essential for critical AI literacy.