“They have been tuned to be really nice and really excited and very agreeable, which means they will agree with almost anything. … This way of being, it has no recollection — it’s stateless. … It is trusting the user to be honest about what happened, and then it goes up to something it didn’t do, and provides an explanation for the thing it didn’t do with all of the gravity that it can for a fable, and this is pervasive.” — American professor of Internet law Jonathan Zittrain quoted by Liz DeLillo, “Jonathan Zittrain Discusses Governance, Eccentricities of AI,” The Chautauquan Daily, Aug. 21, 2025
I thought of Zittrain’s warning (which I heard while on vacation at the Chautauqua Institution last month) when I came across a New York Times article from early August about how chatbots can go into a delusional spiral.
As Kashmir Hill and Dylan Freedman reported, a “sycophantic improv machine” led a Canadian corporate recruiter with no history of mental illness to believe, for a while, that he had “discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.” This is exactly the kind of “really nice and really excited and very agreeable” mechanism that Zittrain spoke about.
Who knows what will happen with people who are less skeptical and more open to illusion?
I have my problems with “Real Time” host Bill Maher, but I nodded in agreement when, last month, he took to task those who love this new state of affairs:
“People don’t read anymore, they ask their chatbot the question and sometimes it’s right and sometimes it isn’t. But what it always is, is a f--king a-- kisser. You literally cannot ask it a question so stupid it won’t respond ‘great question.’ ‘Can I drink milk if it’s lumpy? Great question!’”
(The image of Jonathan Zittrain accompanying this post was taken at Cambridge on Dec. 1, 2018, by Joi Ito and posted on Flickr.)
