Turkle’s “Eliza effect” predicts that we will assign more intelligence to computers that talk to us conversationally. But ultimately that sets us up for failure: we attribute a level of intelligence to the bot or other agent that it can’t match, leading to frustration.
Almost certainly not. The arguments against a chatbot future are legion, compelling, and backed by both common sense and history. Chatbots are as old as modern computing. The first, Eliza, written at MIT in the 1960s, simulated psychotherapy so effectively that some interlocutors mistook it for human. Its software was very simple: it mostly asked open-ended questions, matching a few words and phrases from the user’s previous answer and echoing them back so that its replies seemed logically connected. The result could be construed as eerily searching and sympathetic.
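The mechanism described above can be sketched in a few lines of code. This is an illustrative toy, not Weizenbaum’s actual DOCTOR script; the rules, wording, and function names here are invented for the example, and a real Eliza implementation had a much larger rule set with ranked keywords.

```python
import re

# Pronoun swaps so echoed fragments sound directed back at the user.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# A handful of (pattern, response template) rules. Eliza's power came
# from exactly this kind of shallow keyword matching, not understanding.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# When nothing matches, fall back to an open-ended prompt.
FALLBACK = "Please, go on."

def reflect(fragment: str) -> str:
    """Swap first person for second person in an echoed fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's response, else the fallback."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK
```

Feeding in “I need a holiday” yields “Why do you need a holiday?”, which feels attentive even though the program has merely echoed the user’s own words with pronouns flipped.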
This is what Sherry Turkle christened the “Eliza effect”: humans unconsciously assume that software which communicates conversationally has far more intelligence and sophistication than is actually present. Inevitably, the software fails to live up to that assumption, disappointing and frustrating the user who unconsciously expected more. Think of it as a linguistic Uncanny Valley, the notorious effect wherein simulated human faces that look almost, but not quite, real provoke a deep unease that less accurate representations do not.
Microsoft’s Tay seemed to have emotions, but could not comprehend them. See “Eliza From Hell.”