“We must build AI for people; not to be a person.” Mustafa Suleyman’s recent essay on the importance of avoiding what he terms ‘seemingly conscious AI’ is a welcome intervention, and much better argued than his recent book, The Coming Wave.
It could be argued that it’s also a convenient way to put some clear blue water between OpenAI, Anthropic, Google and Meta, all of which are hell-bent on giving us token-based pals who are fun to be with, and his current employer Microsoft, whose trusty but personality-free Copilot is a helpful sidekick but in no way a likely boyfriend. That doesn’t invalidate the argument, though.
Suleyman’s argument is that current chatbots are designed to exploit vulnerabilities in human psychology in order to lock themselves into people’s lives, and that doing this is a bad idea and should stop. Just as certain visual illusions, like those of M.C. Escher or the Land Effect, are impossible not to see because they are hard-coded into our perceptual apparatus, certain emotional illusions arising from interactions with non-sentient actors are impossible not to feel.
We failed to address these issues when we allowed social media platforms to use the same intermittent-reinforcement mechanisms that make gambling so problematic, and Suleyman argues that there is still a window within which we can shape the design of interactive LLM-based services to ensure that they don’t exploit a parallel set of unpatched vulnerabilities.
