6 Comments

I love the simple and clever premise here. Great post!

This is great. I love your deep dive into the pedagogies embedded within these programs!

Thanks so much! I'm enjoying writing them!

AI translates natural language into mathematical language, the language of computer science, and uses it to predict what a reasonable human might say. That’s it. Its functionality is grounded in statistical linguistics, and its output is artificial. It can’t write in the sense humans write because it can’t think as humans do. While humans are capable of mathematical thinking, they can also think in many other ways: imaginatively, scientifically, ethically, empathetically, discursively, through sensory experience.

While natural language is predictable in that it is patterned, it is generated in an in-the-moment ontological event: spontaneous, contingent, skin in the game. AI has no emotion, no motivation, and no intention, and therefore no pedagogy as the term is used, a specialized expertise peculiar to humans. I’m uneasy with any pedagogy that micromanages writing activities, whether human or bot. For my money, teachers can help kids by designing activities over time that give them space to create AI-and-writing experiments, followed by structured, collaborative, reflective analysis.

I mostly agree with you. I would add one caveat: while AI does not "write" in that sense, because it has no emotion or motivation, humans do. We're sense-making creatures, which means we will try to make sense of AI output regardless of whatever lack of sense went into the input. We can't help it! This means it's highly likely that users will infer a pedagogy (infer meaning about how to write well, for instance) whether AI can think pedagogically or not. That's why I write these things: not to attribute agency or thinking to the platforms, but to help readers think critically about the messages they may be receiving from AI about teaching, learning, and writing.

The problem is that the AI’s model of K-12 writing is a mirror image of classic current-traditional rhetorical practice merged with “the” writing process. It models, and therefore reinforces, the “real” model of instruction kids experience. I call it hamburger instruction and suggest that the bot makes hamburgers. They taste good to kids who have trained on a writer’s welfare plan, being scaffolded to death, never reaching the potential of their own voice. Once you’ve tasted the sweetness of saying what YOU mean, you can’t look at bot output as human writing. Instruction has to change or we will produce bot-dependent children for whom it’s normal to let a bot write for you. Natural language models operate on the 19th-century concept of the paragraph, as does current-traditional rhetoric.

We are in danger of building a brainwashing tool if we don’t teach students what human verbal thought is and what artificial verbal thought isn’t. They need hands-off guided exercises in designing, implementing, and reflecting on writing issues. I published an example not many days ago. Two questions for dry writing (no bot): What does “What do you think?” mean? What does “What do you mean?” mean? Then do it collaboratively with a bot, followed by whole-class discussion, then a written dry reflection: What did I learn about using a language bot?
