Your conclusion rings true to me. Students will need really high-quality writing instruction, with rich reflective discussions about their experiments and experiences, before they will see that the bot is suited to specialized, niche tasks that amplify their own power as writers. They need to practice their own voices under strong teachers to grasp that AI simply cannot write. It's a Swiss Army knife, not a great orator, poet, novelist, or academic. I very much appreciated your steering clear of evoking fear while pointing out the deep problems ahead.
"AI simply cannot write." Exactly, Terry! It produces text, but that's not the same as the richly complex task of actually writing, and actually understanding what writing is and can mean.
I'm absolutely nerding out over this. I love your approach to analyzing ChatGPT's own understanding of writing.
1. Have you tried this with Claude 3? It is a much better writer than ChatGPT out of the box. It might be interesting to compare.
2. Have you played with cross-genre writing with AI? I wonder how well that would work.
3. I imagine that you could (if you wanted to) create a GPT that overrides the formalist understanding of writing. But we'd have to be careful about whether, and when, it falls back to its default, formalist definition.
Thanks, Jason. I'll have to try this with Claude 3 and see how it does. I'm not sure what you mean by cross-genre writing. Do you mean repurposing material from one genre into another? I imagine many AI tools would produce some pretty good first-pass drafts, so long as they didn't introduce new hallucinations.
I've toyed with the idea of a "rhetorically-aware" GPT that would do better than the default, but haven't gotten around to it yet. You have me thinking I need to go ahead and do it and see what it can do!
I think, after rereading what you wrote here, you're already ahead of the game on doing these kinds of "cross-genre" experiments. Things along the lines of "Write a story that is part Western, part sci-fi, and part bildungsroman."
As in most other cases, LLMs pose what I call the learner's paradox: they help you learn only to the extent that you already know what they are telling you. With humans, you can rely on their cultural, disciplinary, contextual, and other knowledge and value systems. With AI tools, you never know. If you were not a writing/rhetoric professor, you would be impressed by the thinly formalistic stuff it would give you. And you would be the one to hallucinate. But here you took the tool to task, and gained only as much as you were able to judge. That is to say NOTHING about the utter, absolute, and miserable lack of cross-cultural knowledge in the knowledge base of today's LLMs.
Now, regarding the presentation you mention, which I didn't go to (I'm glad LOL), you were luckier than me last week: right before I facilitated a workshop at a teaching symposium, a nursing professor walked the audience through a demonstration of how they've been relying (completely, happily, in pleasantly surprised ways) on ChatGPT to create every aspect of their teaching. They said: "Can you guess what I did next?" Of course, the answer was: "I used ChatGPT." When they went so far as to show a patient profile that they used for teaching/learning, I was horrified. I had heard at an earlier discussion that asking AI tools to generate cases "saves time and effort," especially when it comes to healthcare in minority communities. I also recently heard a faculty training presenter say: AI tools can draft essays, granted -- and I thought, what, granted? AI is exposing how shallow and irresponsible educators across the disciplines have been (much like MOOCs exposed terrible teachers and teaching ideas). Unfortunately, the bad ideas and practices might become far more entrenched this time around than ever before.
Anyway, this post is a very thought-provoking discussion of genre in relation to LLMs. Thank you for sharing.
Yes, the learner's paradox is exactly what I was getting at here! What ChatGPT can tell me is no match for time, practice, exposure, habituation, and feedback.
Re: your patient profile example, bias could be such a huge problem there. We already know that Black American women suffer huge health disparities in our country because their symptoms are regularly ignored or dismissed. Do we really think ChatGPT would do better at representing patients from such backgrounds, trained as it is on our already-biased data?
instructive & intriguing
Very cool. These are great ideas!
And I love that idea of a rhetorically-aware GPT.