But because humans don’t (yet) attach agency or intelligence to their devices, they’re remarkably uninhibited about abusing them. Both academic research and anecdotal observation of human/machine interfaces suggest that raised voices and vulgar comments are commonplace: an estimated 10% to 50% of interactions are abusive, according to Dr. Sheryl Brahnam in a TechEmergence interview late last year.
These behaviors are simply not sustainable. If adaptive bots learn from every meaningful human interaction they have, then mistreatment and abuse become technological toxins. Bad behavior can poison bot behavior. That undermines enterprise efficiency, productivity, and culture.
That’s why being bad to bots will become professionally and socially taboo in tomorrow’s workplace. When “deep learning” devices emotionally resonate with their users, mistreating them feels less like breaking one’s mobile phone and more like kicking a kitten. The former earns a reprimand; the latter gets you fired.