It might be worthwhile to explore current norms around “trustworthiness.” Language bots are destined to err because the machine’s product is “written” based on token probability. Many of its “errors” are absorbed easily in the uptake because they’re not so far off they startle anyone. Is “trustworthy” used in the sense of “Toyotas are trustworthy”? Or “She is trustworthy with sterling integrity”? There is buried complexity here. Is it meaningful to think of AI as “trustworthy” or “not trustworthy” in human terms? Is there a need for a theoretical framework? Are any available?
Chris, This is a great post. Thank you for writing and sharing it. It prompted a lot more thinking than this, but since I'm in the middle of grading, I will share only this one. The note you share below the sock puppet reminded me of a LinkedIn post by one of my nephews back in Nepal, a young man who makes a shit ton of money from coding for international businesses (not aided by AI): he described in his post how he's created an app that (1) finds "five interesting facts about __" (the AI-assisted app finds people, events, places, or whatever "interesting" topic it decides), (2) gets ChatGPT to generate a script and a voice AI to read it, (3) uses another app to create a YouTube video with an overlay of images and text, and (4) publishes the YouTube video once a day. My relative said he's not interested in the views but wanted to see what the channel does. I replied to his post by saying that next time I go home, he should talk to an AI-driven app named Uncle Shyam (instead of me), because he has gone too far in his technochauvinism and failure at basic humanity. He didn't care what the viewers of those videos were served by the app and the AI tools it uses. I had to leave after Olaf's and Tiera's keynotes, but both presentations challenged me to think about a variety of issues for days after I returned. Thank you for refreshing and reinforcing some of the great lines of thinking -- and adding fresh new perspectives to the conversation.
Thanks, Shyam. I would file that under "just because we can doesn't mean we should." I heard a well-known education researcher today start talking about AI grading, and I almost signed off the webinar because of it.
Thanks, it is great to read this overview of the event and your reflections on it!
You're welcome, Anna!