Xiaoice vs. Tay

Your rudeness invites their underperformance. Abuse creates a hostile work environment that undermines how bots learn. Microsoft’s Tay offers a superb, and superbly painful, real-world case study of how networked abuse shapes UX. Less than a day after Microsoft Research released its unsupervised machine-learning Twitter bot, Tay had become a “chatbot from hell,” tweeting a stream of increasingly nasty, racist, and homophobic comments until Microsoft pulled the plug.

“The problem was Microsoft didn’t leave on any training wheels, and didn’t make the bot self-reflective,” Brandon Wirtz wrote in a LinkedIn article about the incident. “[Tay] didn’t know that she should just ignore the people who act like Nazis, and so she became one herself.”
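The “training wheels” Wirtz describes could be as simple as a gate that screens incoming messages before the bot learns from them. As a rough illustration only (this is not Tay’s actual architecture; the `BLOCKLIST`, `toxicity_score` helper, and threshold below are all hypothetical stand-ins for a real toxicity classifier), a guarded learning loop might look like this:

```python
# Hypothetical sketch of a "training wheels" guard for a learning chatbot.
# Nothing here reflects Tay's real implementation; the blocklist and
# scoring heuristic are placeholders for a trained toxicity classifier.

BLOCKLIST = {"nazi", "hitler"}  # illustrative only

def toxicity_score(message: str) -> float:
    """Crude stand-in for a real classifier: fraction of blocked words."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in BLOCKLIST for w in words) / len(words)

class GuardedChatbot:
    def __init__(self, threshold: float = 0.1):
        self.threshold = threshold
        self.training_data: list[str] = []

    def observe(self, message: str) -> None:
        # The "training wheels": refuse to learn from abusive input
        # instead of absorbing it, which is what an unfiltered
        # learner ends up doing.
        if toxicity_score(message) > self.threshold:
            return  # ignore users who "act like Nazis"
        self.training_data.append(message)

bot = GuardedChatbot()
bot.observe("Tell me about your day!")      # learned
bot.observe("Hitler was right, nazi nazi")  # ignored
print(len(bot.training_data))  # -> 1
```

A keyword list is far too blunt for production use; the point is only that the self-reflection Wirtz mentions means putting some filter, however imperfect, between raw user input and the model’s training data.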

By contrast, look at how Microsoft’s chatbot thrived in China’s more regulated and inhibited digital environment. Xiaoice avoided Tay’s fate in part because Chinese digital culture actively polices certain forms of expression. Where Tay is no more, Xiaoice enjoys over 40 million users in China and Japan.
