4 Comments

This was fascinating to read, Chris. While reading it, I also thought about how humans don't just seek to produce text (or even produce meaning); they try to get things done, and those actions are relative to other humans who have emotions, values, and relations to the writer. Writers and readers share context that goes far beyond the semantic or even the linguistic: cultural, political, social, and so on. AI tools can "process" and "produce" language with stunning capacity, but those superior abilities can fool us, because they fail to meaningfully engage all those other domains when they engage us merely linguistically. Communicating with humans versus AI feels to me like living life versus watching it on video ... or something like that. It's limited in so many ways, even though I feel that it is powerful (and helpful) in "some" ways.


Thanks Shyam. I couldn't agree more. We need to be careful not to be charmed by the reductively linguistic, because it's a poor cipher for all those other social, cultural, and political aspects that make communication so difficult.

Comment deleted (Feb 6, edited)

Thanks for the thoughts. I certainly agree that AI-generated text can be better or worse depending on prompting strategies and model sophistication. However, I still think that, at least for now, humans have intentions, whereas generative AI does not. Intentions are social and experiential as much as they are linguistic. And that's to say nothing of their embodied nature!

But thanks also for the nudge about lists. I'll have to think about that. I'm not sure I would agree that the model "knows" about more sophisticated content that it can't fit. I would say instead that it hasn't been prompted in a way that produces a different set of words, ones we would then interpret as meaningful and more complicated. (And that means the onus is probably on me to keep working on prompting differently.)

Comment deleted (Feb 9)

Agreed. It's pretty difficult to envision the future of the technology from a user's perspective, especially when thinking about the ways it puts words together so that they appear, at least to me as a user/reader, like ideas. The experience of reading ChatGPT's words is usually pretty mundane and disappointing for me, but occasionally interesting and thought-provoking. I'm not sure it's become *more* thought-provoking on average, but I do still see some valuable sparks in what it says.
